\section{Introduction} Termination of higher-order term rewriting systems~\cite[Chapter~11]{Terese2003} has been an active area of research for several decades. One powerful method, introduced by v.d. Pol \cite{Pol1993,pol:96}, interprets terms into \emph{weakly monotonic algebras}. In later work \cite{FuhsKop2012,Kop2012}, these algebra interpretations are specialised into \emph{higher-order polynomial interpretations}, a generalisation of the popular -- and highly automatable -- technique of polynomial interpretations for first-order term rewriting. The methods of weakly monotonic algebras and polynomial interpretation are both limited to \emph{monomorphic} systems. In this paper, we will further generalise polynomial interpretations to a higher-order formalism with full impredicative polymorphism. This goes beyond shallow (rank-1, weak) polymorphism, where type quantifiers are effectively allowed only at the top of a type: it would be relatively easy to extend the methods to a system with shallow polymorphism since shallowly polymorphic rules can be seen as defining an infinite set of monomorphic rules. While shallow polymorphism often suffices in functional programming practice, there do exist interesting examples of rewrite systems which require higher-rank impredicative polymorphism. For instance, in recent extensions of Haskell one may define a type of heterogeneous lists. \[ \begin{array}{ll} \mathtt{List} : * & \mathtt{foldl}_\sigma(f,a,\mathtt{nil}) \longrightarrow a \\ \mathtt{nil} : \mathtt{List} & \mathtt{foldl}_\sigma(f,a,\mathtt{cons}_\tau(x,l)) \longrightarrow \mathtt{foldl}_\sigma(f,f \tau a x,l) \\ \mathtt{cons} : \forall \alpha . \alpha \rightarrow \mathtt{List} \rightarrow \mathtt{List} \quad\quad \\ \multicolumn{2}{l}{\mathtt{foldl} : \forall \beta . (\forall \alpha . 
\beta \rightarrow \alpha \rightarrow \beta) \rightarrow \beta \rightarrow \mathtt{List} \rightarrow \beta} \end{array} \] The above states that $\mathtt{List}$ is a type ($*$), gives the types of its two constructors $\mathtt{nil}$ and $\mathtt{cons}$, and defines the corresponding fold-left function~$\mathtt{foldl}$. Each element of a heterogeneous list may have a different type. In practice, one would constrain the type variable~$\alpha$ with a type class to guarantee the existence of some operations on list elements. The function argument of~$\mathtt{foldl}$ receives the element together with its type. The $\forall$-quantifier binds type variables: a term of type $\forall \alpha . \tau$ takes a type~$\rho$ as an argument and the result is a term of type~$\tau[\subst{\alpha}{\rho}]$. Impredicativity of polymorphism means that the type itself may be substituted for its own type variable, e.g., if $\mathtt{f} : \forall \alpha . \tau$ then $\mathtt{f} (\forall \alpha . \tau) : \tau[\subst{\alpha}{\forall\alpha.\tau}]$. Negative occurrences of impredicative type quantifiers prevent a translation into an infinite set of simply typed rules by instantiating the type variables. The above example is not directly reducible to shallow polymorphism as used in the~ML programming language. \medskip\noindent\textbf{Related work.} The term rewriting literature has various examples of higher-order term rewriting systems with some forms of polymorphism. To start, there are several studies that consider shallow polymorphic rewriting (e.g., \cite{ham:18,jou:rub:07,wah:04}), where (as in ML-like languages) systems like $\mathtt{foldl}$ above cannot be handled. Other works consider extensions of the $\lambda\Pi$-calculus \cite{cou:dow:07,dow:17} or the calculus of constructions \cite{bla:05,wal:03} with rewriting rules; only the latter includes full impredicative polymorphism. 
The termination techniques presented for these systems are mostly syntactic (e.g., a recursive path ordering \cite{jou:rub:07,wal:03}, or general schema \cite{bla:05}), as opposed to our more semantic method based on interpretations. An exception is \cite{dow:17}, which defines interpretations into $\Pi$-algebras; this technique bears some similarity to ours, although the methodologies are quite different. A categorical definition for a general polymorphic rewriting framework is presented in \cite{fio:ham:13}, but no termination methods are considered for it. \medskip\noindent\textbf{Our approach.} The technique we develop in this paper operates on \emph{Polymorphic Functional Systems (PFSs)}, a form of higher-order term rewriting systems with full impredicative polymorphism (Section \ref{sec_systems}), into which various systems of interest can be encoded (including the example of heterogeneous fold above). Then, our methodology follows a standard procedure: \begin{itemize} \item we define a well-ordered set $(\mathcal{I},\succ,\succeq)$ (Section \ref{sec:World}); \item we provide a general methodology to map each PFS term $s$ to an interpretation term $\interpret{s}$, parameterised by a core interpretation for each function symbol (Section \ref{sec_reduction_pairs}); \item we present a number of lemmas to make it easy to prove that $s \succ t$ or $s \succeq t$ whenever $s$ reduces to $t$ (Section \ref{sec_rule_removal}). \end{itemize} Due to the additional complications of full polymorphism, we have elected to only generalise higher-order polynomial interpretations, and not v.d. Pol's weakly monotonic algebras. That is, terms of base type are always interpreted to natural numbers and all functions are interpreted to combinations of addition and multiplication. We will use the system of heterogeneous fold above as a running example to demonstrate our method. However, termination of this system can be shown in other ways (e.g., an encoding in System~$\mathtt{F}$). 
Hence, we will also study a more complex example in Section~\ref{sec:examples}: termination of a substantial fragment of~IPC2, i.e., full intuitionistic second-order propositional logic with permutative conversions. Permutative conversions~\cite[Chapter~6]{TroelstraSchwichtenberg1996} are used in proof theory to obtain ``good'' normal forms of natural deduction proofs, which satisfy e.g.~the subformula property. Termination proofs for systems with permutative conversions are notoriously tedious and difficult, with some incorrect claims in the literature and no uniform methodology. It is our goal to make such termination proofs substantially easier in the future. \onlypaper{Complete proofs for the results in this paper are available in an online appendix~\cite{versionwithappendix}.}% \onlyarxiv{This is a pre-publication copy of a paper at FSCD 2019. In particular, it contains an appendix with complete proofs for the results in this paper.} \section{Preliminaries}\label{sec_preliminaries} In this section we recall the definition of System~$\mathtt{F}_\omega$ (see e.g.~\cite[Section~11.7]{SorensenUrzyczyn2006}), which will form a basis both of our interpretations and of a general syntactic framework for the investigated systems. In comparison to System~$\mathrm{F}$, System~$\mathtt{F}_\omega$ includes type constructors, which results in a more uniform treatment. We assume familiarity with core notions of lambda calculi such as substitution and $\alpha$-conversion. \begin{defn}\label{def_types} \emph{Kinds} are defined inductively: $*$ is a kind, and if $\kappa_1,\kappa_2$ are kinds then so is $\kappa_1 \Rightarrow \kappa_2$. We assume an infinite set~$\mathcal{V}_\kappa$ of \emph{type constructor variables} of each kind~$\kappa$. Variables of kind~$*$ are \emph{type variables}. We assume a fixed set~$\Sigma^T_\kappa$ of \emph{type constructor symbols} paired with a kind~$\kappa$, denoted $c : \kappa$. 
% We define the set~$\mathcal{T}_\kappa$ of \emph{type constructors} of kind~$\kappa$ by the following grammar. Type constructors of kind~$*$ are \emph{types}. \[ \begin{array}{rcl} \mathcal{T}_{*} &::=& \mathcal{V}_{*} \mid \Sigma^T_{*} \mid \mathcal{T}_{\kappa\Rightarrow *}\mathcal{T}_{\kappa} \mid \forall\mathcal{V}_\kappa\mathcal{T}_* \mid \mathcal{T}_*\rightarrow\mathcal{T}_* \\ \mathcal{T}_{\kappa_1\Rightarrow\kappa_2} &::=& \mathcal{V}_{\kappa_1\Rightarrow\kappa_2} \mid \Sigma^T_{\kappa_1\Rightarrow\kappa_2} \mid \mathcal{T}_{\kappa\Rightarrow(\kappa_1\Rightarrow\kappa_2)}\mathcal{T}_{\kappa} \mid \lambda \mathcal{V}_{\kappa_1} \mathcal{T}_{\kappa_2} \end{array} \] We use the standard notations $\forall \alpha . \tau$ and $\lambda \alpha . \tau$. When $\alpha$ is of kind $\kappa$ then we use the notation $\forall \alpha : \kappa . \tau$. If not indicated otherwise, we assume~$\alpha$ to be a type variable. We treat type constructors up to $\alpha$-conversion. \begin{example} If $\Sigma^T_{*} = \{ \mathtt{List} \}$ and $\Sigma^T_{* \Rightarrow * \Rightarrow *} = \{ \mathtt{Pair} \}$, types are for instance $\mathtt{List}$ and $\forall \alpha.\mathtt{Pair}\,\alpha\,\mathtt{List}$. The expression $\mathtt{Pair}\,\mathtt{List}$ is a type constructor, but not a type. If $\Sigma^T_{(* \Rightarrow *) \Rightarrow *} = \{ \exists \}$ and $\sigma \in \mathcal{T}_{* \Rightarrow *}$, then both $\exists(\sigma)$ and $\exists (\lambda \alpha.\sigma\alpha)$ are types. \end{example} The compatible closure of the rule $(\lambda\alpha.\varphi)\psi \to \varphi[\alpha := \psi]$ defines $\beta$-reduction on type constructors. As type constructors are (essentially) simply-typed lambda-terms, their $\beta$-reduction terminates and is confluent; hence every type constructor~$\tau$ has a unique $\beta$-normal form~$\mathrm{nf}_\beta(\tau)$. 
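As a small worked instance of $\beta$-normalisation on type constructors (using the symbols $\mathtt{Pair}$ and $\mathtt{List}$ from the example above; the particular application is an arbitrary illustration, not taken from the original text):

```latex
\[
  (\lambda \alpha . \mathtt{Pair}\,\alpha\,\alpha)\,\mathtt{List}
  \;\to_\beta\; \mathtt{Pair}\,\mathtt{List}\,\mathtt{List}
  \;=\; \mathrm{nf}_\beta\bigl((\lambda \alpha . \mathtt{Pair}\,\alpha\,\alpha)\,\mathtt{List}\bigr).
\]
```

Here $\lambda \alpha . \mathtt{Pair}\,\alpha\,\alpha$ has kind $* \Rightarrow *$, so the application is a type, and its $\beta$-normal form is again a type.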
A \emph{type atom} is a type in $\beta$-normal form which is neither an arrow $\tau_1\rightarrow\tau_2$ nor a quantification $\forall\alpha.\tau$. We define $\mathrm{FTV}(\varphi)$ -- the set of free type constructor variables of the type constructor~$\varphi$ -- in an obvious way by induction on~$\varphi$. A type constructor~$\varphi$ is \emph{closed} if $\mathrm{FTV}(\varphi) = \emptyset$. We assume a fixed type symbol~$\chi_* \in \Sigma^T_*$. For $\kappa=\kappa_1\Rightarrow\kappa_2$ we define $\chi_\kappa = \lambda \alpha:\kappa_1 . \chi_{\kappa_2}$. \end{defn} \begin{defn}\label{def_preterms} We assume given an infinite set $\mathcal{V}$ of variables, each paired with a type, denoted $x : \tau$. We assume given a fixed set $\Sigma$ of \emph{function symbols}, each paired with a closed type, denoted $\mathtt{f} : \tau$. Every variable~$x$ and every function symbol $\mathtt{f}$ occurs only with one type declaration. The set of \emph{preterms} consists of all expressions~$s$ such that $s : \sigma$ can be inferred for some type $\sigma$ by the following clauses: \begin{itemize} \item $x : \sigma$ for $(x : \sigma) \in \mathcal{V}$. \item $\mathtt{f} : \sigma$ for all $(\mathtt{f} : \sigma) \in \Sigma$. \item $\abs{x:\sigma}{s} : \sigma \rightarrow \tau$ if $(x : \sigma) \in \mathcal{V}$ and $s : \tau$. \item $(\tabs{\alpha:\kappa}{s}) : (\quant{\alpha:\kappa}{\sigma})$ if $s : \sigma$ and $\alpha$ does not occur free in the type of a free variable of~$s$. \item $\app{s}{t} : \tau$ if $s : \sigma \rightarrow \tau$ and $t : \sigma$. \item $\tapp{s}{\tau} : \sigma[\subst{\alpha}{\tau}]$ if $s : \quant{\alpha:\kappa}{\sigma}$ and~$\tau$ is a type constructor of kind~$\kappa$. \item $s : \tau$ if $s : \tau'$ and $\tau =_\beta \tau'$. \end{itemize} The set of free variables of a preterm~$t$, denoted $\mathrm{FV}(t)$, is defined in the expected way. Analogously, we define the set~$\mathrm{FTV}(t)$ of type constructor variables occurring free in~$t$. 
If $\alpha$ is a type then we use the notation $\tabs{\alpha}{t}$. We denote an occurrence of a variable~$x$ of type~$\tau$ by~$x^\tau$, e.g.~$\lambda x : \tau\rightarrow\sigma . x^{\tau\rightarrow\sigma}y^\tau$. When clear or irrelevant, we omit the type annotations, denoting the above term by~$\lambda x . x y$. Type substitution is defined in the expected way except that it needs to change the types of variables. Formally, a type substitution changes the types associated to variables in~$\mathcal{V}$. We define the equivalence relation~$\equiv$ by: $s \equiv t$ iff $s$ and $t$ are identical modulo $\beta$-conversion in types. \end{defn} Note that we present terms in orthodox Church-style, i.e., instead of using contexts each variable has a globally fixed type associated to it. \begin{lemma} If $s : \tau$ and $s \equiv t$ then $t : \tau$. \end{lemma} \begin{proof} Induction on~$s$. \end{proof} \begin{defn}\label{def_terms} The set of \emph{terms} is the set of the equivalence classes of~$\equiv$. \end{defn} Because $\beta$-reduction on types is confluent and terminating, every term has a canonical preterm representative -- the one with all types occurring in it $\beta$-normalised. We define $\mathrm{FTV}(t)$ as the value of~$\mathrm{FTV}$ on the canonical representative of~$t$. We say that $t$ is \emph{closed} if both $\mathrm{FTV}(t) = \emptyset$ and $\mathrm{FV}(t) = \emptyset$. Because typing and term formation operations (abstraction, application, \ldots) are invariant under~$\equiv$, we may denote terms by their (canonical) representatives and informally treat them interchangeably. We will often abuse notation to omit $\cdot$ and $*$. Thus, $s t$ can refer to both $\app{s}{t}$ and $\tapp{s}{t}$. This is not ambiguous due to typing. When writing $\sigma[\subst{\alpha}{\tau}]$ we implicitly assume that $\alpha$ and $\tau$ have the same kind. Analogously with $t[\subst{x}{s}]$. 
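The fact that type substitution changes the types of variables can be illustrated as follows (a small hypothetical instance, with $\sigma$ an arbitrary type):

```latex
\[
  (\abs{x:\alpha}{x^{\alpha}})[\subst{\alpha}{\sigma}]
  \;=\; \abs{x:\sigma}{x^{\sigma}}
  \;:\; \sigma \rightarrow \sigma .
\]
```

That is, substituting $\sigma$ for $\alpha$ in the identity on~$\alpha$ yields the identity on~$\sigma$; the bound variable's type annotation is updated along with the type of the whole term.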
\begin{lemma}[Substitution lemma]\label{lem:substitution} \begin{enumerate} \item If $s : \tau$ and $x : \sigma$ and $t : \sigma$ then $s[\subst{x}{t}] : \tau$. \item If $t : \sigma$ then $t[\subst{\alpha}{\tau}] : \sigma[\subst{\alpha}{\tau}]$. \end{enumerate} \end{lemma} \begin{proof} Induction on the typing derivation. \end{proof} \begin{lemma}[Generation lemma]\label{lem:generation} If $t : \sigma$ then there is a type~$\sigma'$ such that $\sigma' =_\beta \sigma$ and $\mathrm{FTV}(\sigma') \subseteq \mathrm{FTV}(t)$ and one of the following holds. \begin{itemize} \item $t \equiv x$ is a variable with $(x : \sigma') \in \mathcal{V}$. \item $t \equiv \mathtt{f}$ is a function symbol with $\mathtt{f} : \sigma'$ in $\Sigma$. \item $t \equiv \abs{x:\tau_1}{s}$ and $\sigma'=\tau_1\rightarrow\tau_2$ and $s : \tau_2$. \item $t \equiv \tabs{\alpha:\kappa}{s}$ and $\sigma' = \quant{\alpha:\kappa}{\tau}$ and $s : \tau$ and $\alpha$ does not occur free in the type of a free variable of~$s$. \item $t \equiv \app{t_1}{t_2}$ and $t_1 : \tau \rightarrow \sigma'$ and $t_2 : \tau$ and $\mathrm{FTV}(\tau) \subseteq \mathrm{FTV}(t)$. \item $t \equiv \tapp{s}{\tau}$ and $\sigma' = \rho[\subst{\alpha}{\tau}]$ and $s : \quant{(\alpha:\kappa)}{\rho}$ and~$\tau$ is a type constructor of kind~$\kappa$. \end{itemize} \end{lemma} \begin{proof} By analysing the derivation $t : \sigma$. To ensure $\mathrm{FTV}(\sigma') \subseteq \mathrm{FTV}(t)$, note that if $\alpha \notin \mathrm{FTV}(t)$ is of kind~$\kappa$ and~$t : \sigma'$, then $t : \sigma'[\subst{\alpha}{\chi_\kappa}]$ by the substitution lemma (thus we can eliminate~$\alpha$). \end{proof} \section{Polymorphic Functional Systems}\label{sec_systems} In this section, we present a form of higher-order term rewriting systems based on $\mathtt{F}_\omega$: \emph{Polymorphic Functional Systems (PFSs)}. 
Systems of interest, such as logic systems like~IPC2 and higher-order TRSs with shallow or full polymorphism, can be encoded into PFSs, and then proved terminating with the technique we will develop in Sections \ref{sec:World}--\ref{sec_rule_removal}. \begin{defn}\label{def_pafs_types_terms} \emph{Kinds}, \emph{type constructors} and \emph{types} are defined as in Definition~\ref{def_types}, parameterised by a fixed set~$\Sigma^T = \bigcup_{\kappa}\Sigma^T_\kappa$ of type constructor symbols. Let~$\Sigma$ be a set of function symbols such that for $\mathtt{f} : \sigma \in \Sigma$: \[ \sigma = \forall (\alpha_1 : \kappa_1) \ldots \forall (\alpha_n : \kappa_n) . \sigma_1 \rightarrow \ldots \rightarrow \sigma_k \rightarrow \tau \quad\quad (\text{with}\ \tau\ \text{a type atom}) \] We define \emph{PFS terms} as in Definition~\ref{def_terms} (based on Definition~\ref{def_preterms}), parameterised by~$\Sigma$, with the restriction that for any subterm $\app{s}{u}$ of a term~$t$, we have $s = \mathtt{f} \rho_1 \ldots \rho_n u_1 \ldots u_m$ where: \[ \mathtt{f} : \forall (\alpha_1 : \kappa_1) \ldots \forall (\alpha_n : \kappa_n) . \sigma_1 \rightarrow \ldots \rightarrow \sigma_k \rightarrow \tau \quad\quad (\text{with}\ \tau\ \text{a type atom and}\ k > m) \] \end{defn} This definition does not allow for a variable or abstraction to occur at the head of an application, nor can we have terms of the form $s \cdot t * \tau \cdot q$ (although terms of the form $s \cdot t * \tau$, or $x * \tau$ with $x$ a variable, \emph{are} allowed to occur). To stress this restriction, we will use the notation $\mathtt{f}_{\rho_1,\ldots,\rho_n}(s_1,\ldots,s_m)$ as an alternative way to denote $\mathtt{f} \rho_1 \ldots \rho_n s_1 \ldots s_m$ when $ \mathtt{f} : \forall (\alpha_1 : \kappa_1) \ldots \forall (\alpha_n : \kappa_n) . \sigma_1 \rightarrow \ldots \rightarrow \sigma_k \rightarrow \tau $ is a function symbol in~$\Sigma$ with~$\tau$ a type atom and $m \leq k$. 
This allows us to represent terms in a ``functional'' way, where application does not explicitly occur (only implicitly in the construction of $\mathtt{f}_{\rho_1,\ldots,\rho_n}(s_1,\ldots,s_m)$). The following result follows easily by induction on term structure: \begin{lemma} If $t,s$ are PFS terms then so is $t[\subst{x}{s}]$. \end{lemma} PFS terms will be rewritten through a reduction relation $\arr{\mathcal{R}}$ based on a (usually infinite) set of rewrite rules. To define this relation, we need two additional notions. \begin{defn}\label{def_replacement} A \emph{replacement} is a function $\delta = \gamma \circ \omega$ satisfying: \begin{enumerate} \item $\omega$ is a type constructor substitution, \item $\gamma$ is a term substitution such that $\gamma(\omega(x)) : \omega(\tau)$ for every $(x : \tau) \in \mathcal{V}$. \end{enumerate} For~$\tau$ a type constructor, we use $\delta(\tau)$ to denote $\omega(\tau)$. We use the notation $\delta[\subst{x}{t}] = \gamma[\subst{x}{t}] \circ \omega$. Note that if $t : \tau$ then $\delta(t) : \delta(\tau)$. \end{defn} \begin{defn}\label{def:context} A \emph{$\sigma$-context}~$C_\sigma$ is a PFS term with a fresh function symbol $\Box_\sigma \notin \Sigma$ of type~$\sigma$ occurring exactly once. By~$C_\sigma[t]$ we denote a PFS term obtained from~$C_\sigma$ by substituting~$t$ for~$\Box_\sigma$. We drop the $\sigma$ subscripts when clear or irrelevant. \end{defn} Now, the rewrite rules are simply a set of term pairs, whose monotonic closure generates the rewrite relation. \begin{defn}\label{def_rules} A set $\mathcal{R}$ of term pairs $(\ell,r)$ is a set of \emph{rewrite rules} if: (a) $\mathrm{FV}(r) \subseteq \mathrm{FV}(\ell)$; (b) $\ell$ and $r$ have the same type; and (c) if $(\ell,r) \in \mathcal{R}$ then $(\delta(\ell),\delta(r)) \in \mathcal{R}$ for any replacement~$\delta$. 
The reduction relation $\arr{\mathcal{R}}$ on PFS terms is defined by: \begin{center} $t \arr{\mathcal{R}} s$ iff $t = C[\ell]$ and $s = C[r]$ for some $(\ell,r)\in\mathcal{R}$ and context~$C$. \end{center} \end{defn} \begin{defn}\label{def_pafs} A \emph{Polymorphic Functional System (PFS)} is a triple $(\Sigma^T,\Sigma,\mathcal{R})$ where~$\Sigma^T$ is a set of type constructor symbols, $\Sigma$ a set of function symbols (restricted as in Def.~\ref{def_pafs_types_terms}), and $\mathcal{R}$ is a set of rules as in Definition~\ref{def_rules}. A term of a PFS~$A$ is referred to as an $A$-term. \end{defn} While PFS-terms are a restriction from the general terms of system $\mathtt{F}_\omega$, the reduction relation allows us to actually encode, e.g., system~$\mathtt{F}$ as a PFS: we can do so by including the symbol ${@} : \forall\alpha\forall\beta . (\alpha \rightarrow \beta) \rightarrow \alpha \rightarrow \beta$ in $\Sigma$ and adding all rules of the form $@_{\sigma,\tau}(\abs{x}{s},t) \longrightarrow s[x:=t]$. Similarly, $\beta$-reduction of type abstraction can be modelled by including a symbol $\mathtt{A} : \forall \alpha : * \Rightarrow * . \forall \beta . (\forall \gamma.\alpha \gamma) \rightarrow \alpha \beta$ and rules $\mathtt{A}_{\abs{\gamma}{\sigma},\tau}(\tabs{\gamma}{s}) \longrightarrow s[\gamma:=\tau]$.% \footnote{The use of a type constructor variable $\alpha$ of kind $* \Rightarrow *$ makes it possible to do type substitution as part of a rule. An application $s * \tau$ with $s : \quant{\gamma}{\sigma}$ is encoded as $\mathtt{A}_{\abs{\gamma}{\sigma},\tau}(s)$, so $\alpha$ is substituted with $\abs{\gamma}{\sigma}$. This is well-typed because $(\abs{\gamma}{\sigma})\gamma =_\beta \sigma$ and $(\abs{\gamma}{\sigma})\tau =_\beta \sigma[\gamma:=\tau]$. 
} We can also use rules $(\tabs{\alpha}{s})*\tau \longrightarrow s[\alpha:=\tau]$ without the extra symbol, but to apply our method it may be convenient to use the extra symbol, as it creates more liberty in choosing an interpretation. \begin{example}[Fold on heterogeneous lists]\label{ex_fold_pafs} The example from the introduction may be represented as a PFS with one type symbol $\mathtt{List} : *$, the following function symbols: \[ \begin{array}{rcl} @ & : & \forall \alpha \forall \beta . (\alpha \rightarrow \beta) \rightarrow \alpha \rightarrow \beta \\ \mathtt{A} & : & \forall \alpha : * \Rightarrow * . \forall \beta . (\forall \gamma .\alpha \gamma) \rightarrow \alpha \beta \\ \mathtt{nil} & : & \mathtt{List} \\ \mathtt{cons} & : & \forall \alpha . \alpha \rightarrow \mathtt{List} \rightarrow \mathtt{List} \\ \mathtt{foldl} & : & \forall \beta . (\forall \alpha . \beta \rightarrow \alpha \rightarrow \beta) \rightarrow \beta \rightarrow \mathtt{List} \rightarrow \beta \end{array} \] and the following rules (which formally represent an infinite set of rules: one rule for each choice of types $\sigma, \tau$ and PFS terms $s$, $t$, etc.): \[ \begin{array}{rcl} @_{\sigma,\tau}(\abs{x:\sigma}{s},t) & \longrightarrow & s[x:=t] \\ \mathtt{A}_{\abs{\alpha}{\sigma},\tau}(\tabs{\alpha}{s}) & \longrightarrow & s[\alpha:=\tau] \\ \mathtt{foldl}_\sigma(f,s,\mathtt{nil}) & \longrightarrow & s \\ \mathtt{foldl}_\sigma(f,s,\mathtt{cons}_\tau(h,t)) & \longrightarrow & \mathtt{foldl}_\sigma(f,@_{\tau,\sigma}(@_{\sigma,\tau \rightarrow\sigma}(\mathtt{A}_{\abs{\alpha}{\sigma\rightarrow\alpha\rightarrow\sigma},\tau}(f),s),h),t) \end{array} \] \end{example} \section{A well-ordered set of interpretation terms}\label{sec:World} In polynomial interpretations of first-order term rewriting~\cite[Chapter 6.2]{Terese2003}, each term $s$ is mapped to a natural number $\interpret{s}$, such that $\interpret{s} > \interpret{t}$ whenever $s \arr{\mathcal{R}} t$. 
In higher-order rewriting, this is not practical; instead, following \cite{pol:96}, terms are mapped to weakly monotonic functionals according to their type (i.e., terms with a $0$-order type are mapped to natural numbers, terms with a $1$-order type to weakly monotonic functions over natural numbers, terms with a $2$-order type to weakly monotonic functionals taking weakly monotonic functions as arguments, and so on). In this paper, to account for full polymorphism, we will interpret PFS terms to a set $\mathcal{I}$ of \emph{interpretation terms} in a specific extension of System~$\mathtt{F}_\omega$. This set is defined in Section \ref{subsec:I}; we provide a well-founded partial ordering $\succ$ on $\mathcal{I}$ in Section \ref{subsec:succ}. Although our world of interpretation terms is quite different from the weakly monotonic functionals of \cite{pol:96}, there are many similarities. Most pertinently, every interpretation term $\abs{x}{s}$ essentially defines a weakly monotonic function from $\mathcal{I}$ to $\mathcal{I}$. This, and the use of both addition and multiplication in the definition of $\mathcal{I}$, makes it possible to lift higher-order polynomial interpretations \cite{FuhsKop2012} to our setting. We prove weak monotonicity in Section \ref{subsec:weakmono}. \subsection{Interpretation terms}\label{subsec:I} \begin{defn}\label{def_iterms} The set~$\mathcal{Y}$ of \emph{interpretation types} is the set of types as in Definition~\ref{def_types} with $\Sigma^T = \{ \mathtt{nat} : * \}$, i.e., there is a single type constant~$\mathtt{nat}$. Then $\chi_* = \mathtt{nat}$. The set~$\mathcal{I}$ of \emph{interpretation terms} is the set of terms from Definition~\ref{def_terms} (see also Definition~\ref{def_preterms}) where as types we take the interpretation types and for the set~$\Sigma$ of function symbols we take $\Sigma = \{ n : \mathtt{nat} \mid n \in \mathbb{N} \} \cup \Sigma_f$, where $ \Sigma_f = \{ \oplus : \forall \alpha . 
\alpha \rightarrow \alpha \rightarrow \alpha, \otimes : \forall \alpha . \alpha \rightarrow \alpha \rightarrow \alpha, \mathtt{flatten} : \forall \alpha . \alpha \rightarrow \mathtt{nat}, \mathtt{lift} : \forall \alpha . \mathtt{nat} \rightarrow \alpha \} $. \end{defn} For easier presentation, we write $\oplus_\tau$, $\otimes_\tau$, etc., instead of $\tapp{\oplus}{\tau}$, $\tapp{\otimes}{\tau}$, etc. We will also use $\oplus$ and $\otimes$ in \emph{infix, left-associative} notation, and omit the type annotation where it is clear from context. Thus, $s \oplus t \oplus u$ should be read as $\oplus_\sigma\,(\oplus_\sigma\,s\,t)\,u$ if $s$ has type $\sigma$. Our interpretation terms thus include natural numbers with the operations of addition and multiplication. It would not cause any fundamental problems to add more monotonic operations, e.g., exponentiation, but we refrain from doing so for the sake of simplicity. \paragraph*{Normalising interpretation terms} The set $\mathcal{I}$ of interpretation terms can be reduced through a relation $\leadsto$, which we will define below. This relation will be a powerful aid in defining the partial ordering $\succ$ in Section \ref{subsec:succ}. 
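For concreteness, here are two interpretation terms built from the operations above (the particular terms are arbitrary illustrations, not taken from the original text):

```latex
\[
  \abs{x:\mathtt{nat}}{(x \otimes x) \oplus 3} \;\in\; \mathcal{I}_{\mathtt{nat}\rightarrow\mathtt{nat}}
  \qquad\quad
  \tabs{\alpha}{\abs{x:\alpha}{x \oplus \mathtt{lift}_\alpha(1)}} \;\in\; \mathcal{I}_{\forall\alpha.\alpha\rightarrow\alpha}.
\]
```

The first is an ordinary polynomial over the naturals; the second uses $\mathtt{lift}$ to add the constant $1$ at an arbitrary type $\alpha$, which is how polynomial interpretations will be expressed at polymorphic types.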
\begin{defn} We define the relation $\leadsto$ on interpretation terms as the smallest relation on~$\mathcal{I}$ for which the following properties are satisfied: \begin{enumerate} \item\label{arrW:mono:abs} if $s \leadsto t$ then both $\abs{x}{s} \leadsto \abs{x}{t}$ and $\tabs{\alpha}{s} \leadsto \tabs{\alpha}{t}$ \item\label{arrW:mono:right} if $s \leadsto t$ then $\app{u}{s} \leadsto \app{u}{t}$ \item\label{arrW:mono:left} if $s \leadsto t$ then both $\app{s}{u} \leadsto \app{t}{u}$ and $\tapp{s}{\sigma} \leadsto \tapp{t}{\sigma}$ \item\label{arrW:beta:abs} $\app{(\abs{x:\sigma}{s})}{t} \leadsto s[\subst{x}{t}]$ and $\tapp{(\tabs{\alpha}{s})}{\sigma} \leadsto s[\subst{\alpha}{\sigma}]$ ($\beta$-reduction) \item\label{arrW:plus:base} $\app{\app{\oplus_{\mathtt{nat}}}{n}}{m} \leadsto n+m$ and $\app{\app{\otimes_{\mathtt{nat}}}{n}}{m} \leadsto n \times m$ \item\label{arrW:circ:arrow} $\app{\app{\circ_{\sigma \rightarrow \tau}}{s}}{t} \leadsto \abs{x:\sigma}{\app{\app{\circ_\tau}{(\app{s}{x})}}{(\app{t}{x})}}$ for $\circ \in \{ \oplus, \otimes \}$ \item\label{arrW:circ:forall} $\app{\app{\circ_{\quant{\alpha}{\sigma}}}{s}}{t} \leadsto \tabs{\alpha}{\app{\app{\circ_\sigma}{(\tapp{s}{\alpha})}}{( \tapp{t}{\alpha})}}$ for $\circ \in \{ \oplus, \otimes \}$ \item $\app{\mathtt{flatten}_\mathtt{nat}}{s} \leadsto s$ \item $\app{\mathtt{flatten}_{\sigma \rightarrow \tau}}{s} \leadsto \app{\mathtt{flatten}_\tau}{(\app{s}{(\app{\mathtt{lift}_\sigma}{0})})}$ \item $\app{\mathtt{flatten}_{\quant{\alpha:\kappa}{\sigma}}}{s} \leadsto \app{\mathtt{flatten}_{\sigma[\subst{\alpha}{\chi_\kappa}]}}{(\tapp{s}{\chi_\kappa})}$ \item $\app{\mathtt{lift}_\mathtt{nat}}{s} \leadsto s$ \item $\app{\mathtt{lift}_{\sigma \rightarrow \tau}}{s} \leadsto \abs{x:\sigma}{\app{\mathtt{lift}_{\tau}}{s}}$ \item $\app{\mathtt{lift}_{\quant{\alpha}{\sigma}}}{s} \leadsto \tabs{\alpha}{\app{\mathtt{lift}_{\sigma}}{s}}$ \end{enumerate} Recall Definition~\ref{def_terms} and Definition~\ref{def_iterms} of 
the set of interpretation terms~$\mathcal{I}$ as the set of the equivalence classes of~$\equiv$. So, for instance, $\mathtt{lift}_\mathtt{nat}$ above denotes the equivalence class of all preterms $\mathtt{lift}_\sigma$ with $\sigma =_\beta \mathtt{nat}$. Hence, the above rules are invariant under~$\equiv$ (by confluence of $\beta$-reduction on types), and they correctly define a relation on interpretation terms. We say that $s$ is a \emph{redex} if $s$ reduces by one of the rules 4--13. A \emph{final interpretation term} is an interpretation term $s \in \mathcal{I}$ such that (a) $s$ is closed, and (b) $s$ is in normal form with respect to $\leadsto$. We let $\mathcal{I}^f$ be the set of all final interpretation terms. By~$\mathcal{I}_\tau$ ($\mathcal{I}^f_\tau$) we denote the set of all (final) interpretation terms of interpretation type~$\tau$. \end{defn} An important difference with System~$\mathtt{F}_\omega$ and related ones is that the rules for $\oplus_\tau$, $\otimes_\tau$, $\mathtt{flatten}_\tau$ and $\mathtt{lift}_\tau$ depend on the type~$\tau$. In particular, type substitution in terms may create redexes. For instance, if $\alpha$ is a type variable then $\oplus_\alpha t_1 t_2$ is not a redex, but $\oplus_{\sigma\rightarrow\tau} t_1 t_2$ is. This makes the question of termination subtle. Indeed, System~$\mathtt{F}_\omega$ is extremely sensitive to modifications which are not of a logical nature. For instance, adding a constant $\mathtt{J} : \forall \alpha \beta . \alpha \rightarrow \beta$ with a reduction rule $\mathtt{J} \tau \tau \leadsto \lambda x : \tau . x$ makes the system non-terminating~\cite{Girard1971}. This rule breaks parametricity by making it possible to compare two arbitrary types. Our rules do not allow such a definition. Moreover, the natural number constants cannot be distinguished ``inside'' the system. In other words, we could replace all natural number constants with 0 and this would not change the reduction behaviour of terms. 
So for the purposes of termination, the type $\mathtt{nat}$ is essentially a singleton. This implies that, while we have polymorphic functions between an arbitrary type $\alpha$ and $\mathtt{nat}$ which are not constant when seen ``from outside'' the system, they are constant for the purposes of reduction ``inside'' the system (as they would have to be in a parametric $\mathtt{F}_\omega$-like system). Intuitively, these properties of our system ensure that it stays ``close enough'' to $\mathtt{F}_\omega$ so that the standard termination proof still generalises. Now we state some properties of~$\leadsto$, including strong normalisation. Because of space limitations, most (complete) proofs are relegated to \onlypaper{\cite[Appendix~A.1]{versionwithappendix}}% \onlyarxiv{Appendix~\ref{app_proofs_SN}}. \begin{lemma}[Subject reduction] If $t : \tau$ and $t \leadsto t'$ then $t' : \tau$. \end{lemma} \begin{proof} By induction on the definition of $t \leadsto t'$, using Lemmas \ref{lem:substitution} and \ref{lem:generation}. \end{proof} \begin{theorem}\label{thm_sn} If $t : \sigma$ then $t$ is terminating with respect to $\leadsto$. \end{theorem} \begin{proof} By an adaptation of the Tait-Girard computability method, following chapters~6 and~14 of the book~\cite{Girard1989} and chapters~10 and~11 of the book~\cite{SorensenUrzyczyn2006}. Details are available in \onlypaper{\cite[Appendix A.1]{versionwithappendix}}% \onlyarxiv{Appendix~\ref{app_proofs_SN}}. \end{proof} \begin{lemma}\label{lem_unique_final} Every term $s \in \mathcal{I}$ has a unique normal form~$s\mathord{\downarrow}$. If~$s$ is closed then so is~$s\mathord{\downarrow}$. \end{lemma} \begin{proof} One easily checks that~$\leadsto$ is locally confluent. Since the relation is terminating by Theorem~\ref{thm_sn}, it is confluent by Newman's lemma. \end{proof} \begin{lemma}\label{lem_final_nat} The only final interpretation terms of type $\mathtt{nat}$ are the natural numbers. 
\end{lemma} \begin{example}\label{ex:arrWreduce} Let $s \in \mathcal{I}_{\mathtt{nat} \rightarrow \mathtt{nat}}$ and $t \in \mathcal{I}_\mathtt{nat}$. Then we can reduce $(s \oplus \mathtt{lift}_{\mathtt{nat} \rightarrow \mathtt{nat}}(1)) \cdot t \leadsto (\abs{x}{s x \oplus \mathtt{lift}_{\mathtt{nat} \rightarrow \mathtt{nat}}(1)x}) \cdot t \leadsto s t \oplus \mathtt{lift}_{\mathtt{nat} \rightarrow \mathtt{nat}}(1)t \leadsto s t \oplus (\abs{y}{\mathtt{lift}_{\mathtt{nat}}(1)})t \leadsto s t \oplus \mathtt{lift}_\mathtt{nat}(1) \leadsto s t \oplus 1$. If $s$ and $t$ are variables, this term is in normal form. \end{example} \subsection{The ordering pair $(\succeq,\succ)$}\label{subsec:succ} With these ingredients, we are ready to define the well-founded partial ordering $\succ$ on $\mathcal{I}$. In fact, we will do more: rather than a single partial ordering, we will define an \emph{ordering pair}: a pair of a quasi-ordering $\succeq$ and a compatible well-founded ordering $\succ$. The quasi-ordering $\succeq$ often makes it easier to prove $s \succ t$, since it suffices to show that $s \succeq s' \succ t' \succeq t$ for some interpretation terms $s',t'$. Having $\succeq$ will also allow us to use rule removal (Theorem \ref{thm:ruleremove}). \begin{defn}\label{def:succ} Let $R \in \{ \succ^0,\succeq^0 \}$. For \emph{closed}~$s,t\in\mathcal{I}_\sigma$ and closed~$\sigma$ in $\beta$-normal form, the relation $s\ R_{\sigma}\ t$ is defined coinductively by the following rules. 
\[ \begin{array}{ccc} \infer={s\ R_\mathtt{nat}\ t}{s\mathord{\downarrow}\ R\ t\mathord{\downarrow} \text{ in }\mathbb{N}} \quad&\quad \infer={s\ R_{\sigma\rightarrow\tau}\ t}{\app{s}{q}\ R_{\tau}\ \app{t}{q} \text{ for all } q \in \mathcal{I}^f_\sigma} & \infer={s\ R_{\forall(\alpha:\kappa).\sigma}\ t}{\tapp{s}{\tau}\ R_{\mathrm{nf}_\beta(\sigma[\subst{\alpha}{\tau}])}\ \tapp{t}{\tau} \text{ for all closed } \tau \in \mathcal{T}_{\kappa}} \end{array} \] We define $s \approx_\sigma^0 t$ if both $s \succeq_\sigma^0 t$ and $t \succeq_\sigma^0 s$. We drop the type subscripts when clear or irrelevant. \end{defn} Note that in the case for~$\mathtt{nat}$ the terms~$s\mathord{\downarrow}$, $t\mathord{\downarrow}$ are natural numbers by Lemma~\ref{lem_final_nat} ($s\mathord{\downarrow},t\mathord{\downarrow}$ are closed and in normal form, so they are final interpretation terms). Intuitively, the above definition means that e.g. $s \succ^0 t$ iff there exists a possibly infinite derivation tree using the above rules. In such a derivation tree every leaf must witness a strict inequality $s'\mathord{\downarrow} > t'\mathord{\downarrow}$ in the natural numbers, for the ($\mathtt{nat}$-typed) terms $s',t'$ compared at that leaf. However, the definition also allows for infinite branches, which solves the problem of repeating types due to impredicative polymorphism. If e.g.~$s \succ_{\forall \alpha . \alpha}^0 t$ then $\tapp{s}{\forall\alpha.\alpha} \succ_{\forall \alpha . \alpha}^0 \tapp{t}{\forall\alpha.\alpha}$, which forces an infinite branch in the derivation tree. According to our definition, any infinite branch may essentially be ignored.
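To make this concrete, consider the infinite branch for $\rho = \forall\alpha.\alpha$. The following sketch unfolds only the premise obtained by instantiating the rule for $\forall$ with $\tau = \rho$; the premises for the other closed types are omitted.

```latex
% One branch of the derivation tree for $\rho = \forall\alpha.\alpha$:
% each step instantiates the rule for $\forall$ with $\tau = \rho$,
% using that $\mathrm{nf}_\beta(\alpha[\subst{\alpha}{\rho}]) = \rho$.
\[
s \;\succ^0_{\rho}\; t
\quad\Longleftarrow\quad
\tapp{s}{\rho} \;\succ^0_{\rho}\; \tapp{t}{\rho}
\quad\Longleftarrow\quad
\tapp{(\tapp{s}{\rho})}{\rho} \;\succ^0_{\rho}\; \tapp{(\tapp{t}{\rho})}{\rho}
\quad\Longleftarrow\quad \cdots
\]
```

This branch never reaches the case for $\mathtt{nat}$, so it imposes no leaf condition and is simply accepted coinductively; an inductive reading of the same rules would never terminate on it.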
Formally, the above coinductive definition of e.g.~$\succ_\sigma^0$ may be interpreted as defining the largest relation such that if $s \succ_\sigma^0 t$ then: \begin{itemize} \item $\sigma = \mathtt{nat}$ and $s\mathord{\downarrow} > t\mathord{\downarrow}$ in $\mathbb{N}$, or \item $\sigma = \tau_1\rightarrow\tau_2$ and $\app{s}{q} \succ_{\tau_2}^0 \app{t}{q}$ for all $q \in \mathcal{I}^f_{\tau_1}$, or \item $\sigma = \forall(\alpha:\kappa).\rho$ and $\tapp{s}{\tau} \succ_{\mathrm{nf}_\beta(\rho[\subst{\alpha}{\tau}])}^0 \tapp{t}{\tau}$ for all closed $\tau \in \mathcal{T}_{\kappa}$. \end{itemize} For more background on coinduction see e.g.~\cite{KozenSilva2017,Sangiorgi2012,JacobsRutten2011}. In this paper we use a few simple coinductive proofs to establish the basic properties of~$\succ$ and~$\succeq$. Afterwards we only use these properties; the details of the definition no longer matter. \begin{defn}\label{def_closure} A \emph{closure}~$\mathcal{C} = \gamma \circ \omega$ is a replacement such that $\omega(\alpha)$ is closed for each type constructor variable~$\alpha$, and $\gamma(x)$ is closed for each term variable~$x$. % For arbitrary types~$\sigma$ and arbitrary terms $s,t \in \mathcal{I}$ we define $s \succ_\sigma t$ if for every closure~$\mathcal{C}$ we can obtain $\mathcal{C}(s) \succ_{\mathrm{nf}_\beta(\mathcal{C}(\sigma))}^0 \mathcal{C}(t)$ coinductively with the above rules. The relations $\succeq_\sigma$ and $\approx_\sigma$ are defined analogously. \end{defn} Note that for closed $s,t$ and closed~$\sigma$ in $\beta$-normal form, $s \succ_\sigma t$ iff $s \succ_\sigma^0 t$ (and analogously for~$\succeq,\approx$). In this case we shall often omit the superscript~$0$. The definition of~$\succ$ and~$\succeq$ may be reformulated as follows.
\begin{lemma}\label{lem_succ_explicit} $t \succeq s$ if and only if for every closure~$\mathcal{C}$ and every sequence $u_1,\ldots,u_n$ of closed terms and closed type constructors such that $\mathcal{C}(t) u_1 \ldots u_n : \mathtt{nat}$ we have $(\mathcal{C}(t) u_1 \ldots u_n)\mathord{\downarrow} \ge (\mathcal{C}(s) u_1 \ldots u_n)\mathord{\downarrow}$ in natural numbers. An analogous result holds with $\succ$ or $\approx$ instead of~$\succeq$. \end{lemma} \begin{proof} The direction from left to right follows by induction on~$n$; the other by coinduction. \end{proof} In what follows, all proofs by coinduction could be reformulated to instead use the lemma above. However, this would arguably make the proofs less perspicuous. Moreover, a coinductive definition is better suited for a formalisation -- the coinductive proofs here could be written in Coq almost verbatim. Our next task is to show that $\succeq$ and $\succ$ have the desired properties of an ordering pair; e.g., transitivity and compatibility. We first state a simple lemma that will be used implicitly. \begin{lemma} If $\tau \in \mathcal{Y}$ is closed and $\beta$-normal, then $\tau = \mathtt{nat}$ or $\tau = \tau_1\rightarrow\tau_2$ or $\tau = \forall\alpha\sigma$. \end{lemma} \begin{lemma}\label{lem_well_founded} $\succ$ is well-founded. \end{lemma} \begin{proof} It suffices to show this for closed terms and closed types in $\beta$-normal form, because any infinite sequence $t_1 \succ_\tau t_2 \succ_\tau t_3 \succ_\tau \ldots$ induces an infinite sequence $\mathcal{C}(t_1) \succ_{\mathrm{nf}_\beta(\mathcal{C}(\tau))} \mathcal{C}(t_2) \succ_{\mathrm{nf}_\beta(\mathcal{C}(\tau))} \mathcal{C}(t_3) \succ_{\mathrm{nf}_\beta(\mathcal{C}(\tau))} \ldots$ for any closure~$\mathcal{C}$. 
By induction on the size of a $\beta$-normal type~$\tau$ (with size measured as the number of occurrences of~$\forall$ and~$\rightarrow$) one proves that there does not exist an infinite sequence $t_1 \succ_\tau t_2 \succ_\tau t_3 \succ_\tau \ldots$ For instance, if $\alpha$ has kind~$\kappa$ and $t_1 \succ_{\forall\alpha\tau} t_2 \succ_{\forall\alpha\tau} t_3 \succ_{\forall\alpha\tau} \ldots$ then $\tapp{t_1}{\chi_\kappa} \succ_{\tau'} \tapp{t_2}{\chi_\kappa} \succ_{\tau'} \tapp{t_3}{\chi_\kappa} \succ_{\tau'} \ldots$, where $\tau'=\mathrm{nf}_\beta(\tau[\subst{\alpha}{\chi_\kappa}])$. Because $\tau$ is in $\beta$-normal form, all redexes in $\tau[\subst{\alpha}{\chi_\kappa}]$ are created by the substitution and must have the form $\chi_\kappa u$. Hence, by the definition of~$\chi_\kappa$ (see Definition~\ref{def_types}) the type~$\tau'$ is smaller than~$\tau$. This contradicts the inductive hypothesis. \end{proof} \begin{lemma}\label{lem_transitive} Both $\succ$ and $\succeq$ are transitive. \end{lemma} \begin{proof} We show this for~$\succ$, the proof for~$\succeq$ being analogous. Again, it suffices to prove this for closed terms and closed types in $\beta$-normal form. We proceed by coinduction. If $t_1 \succ_\mathtt{nat} t_2 \succ_\mathtt{nat} t_3$ then $t_1\mathord{\downarrow} > t_2\mathord{\downarrow} > t_3\mathord{\downarrow}$, so $t_1\mathord{\downarrow} > t_3\mathord{\downarrow}$. Thus $t_1 \succ_\mathtt{nat} t_3$. If $t_1 \succ_{\sigma\rightarrow\tau}t_2\succ_{\sigma\rightarrow\tau}t_3$ then $\app{t_1}{q}\succ_{\tau}\app{t_2}{q}\succ_\tau\app{t_3}{q}$ for $q \in \mathcal{I}^f_\sigma$. Hence $\app{t_1}{q}\succ_\tau\app{t_3}{q}$ for $q \in \mathcal{I}^f_\sigma$ by the coinductive hypothesis. Thus $t_1\succ_{\sigma\rightarrow\tau} t_3$. 
If $t_1 \succ_{\forall(\alpha:\kappa)\sigma}t_2\succ_{\forall(\alpha:\kappa)\sigma}t_3$ then $\tapp{t_1}{\tau}\succ_{\sigma'}\tapp{t_2}{\tau}\succ_{\sigma'}\tapp{t_3}{\tau}$ for any closed~$\tau$ of kind~$\kappa$, where $\sigma' = \mathrm{nf}_\beta(\sigma[\subst{\alpha}{\tau}])$. By the coinductive hypothesis $\tapp{t_1}{\tau}\succ_{\sigma'}\tapp{t_3}{\tau}$; thus $t_1\succ_{\forall\alpha\sigma}t_3$. \end{proof} \begin{lemma}\label{lem_reflexive} $\succeq$ is reflexive. \end{lemma} \begin{proof} By coinduction one shows that $\succeq_\sigma$ is reflexive on closed terms for closed $\beta$-normal~$\sigma$. The case of~$\succeq$ is then immediate from definitions. \end{proof} \begin{lemma}\label{lem:compatibility} The relations~$\succeq$ and~$\succ$ are compatible, i.e., $\succ \cdot \succeq\ \subseteq\ \succ$ and $\succeq \cdot \succ\ \subseteq\ \succ$. \end{lemma} \begin{proof} By coinduction, analogous to the transitivity proof. \end{proof} \begin{lemma}\label{lem_succ_to_succeq} If $t \succ s$ then $t \succeq s$. \end{lemma} \begin{proof} By coinduction. \end{proof} \begin{lemma}\label{lem_leadsto_to_approx} If $t \leadsto s$ then $t \approx s$. \end{lemma} \begin{proof} Follows from Lemma~\ref{lem_succ_explicit}, noting that $t \leadsto s$ implies $\mathcal{C}(t) \leadsto \mathcal{C}(s)$ for all closures~$\mathcal{C}$. \end{proof} \begin{lemma}\label{lem_succ_red} Assume $t \succ s$ (resp.~$t \succeq s$). If $t \leadsto t'$ or $t' \leadsto t$ then $t' \succ s$ (resp.~$t' \succeq s$). If $s \leadsto s'$ or $s' \leadsto s$ then $t \succ s'$ (resp.~$t \succeq s'$). \end{lemma} \begin{proof} Follows from Lemma~\ref{lem_leadsto_to_approx}, transitivity and compatibility. \end{proof} \begin{corollary}\label{cor_succ_da} For $R \in \{\succ,\succeq,\approx\}$: $s\ R\ t$ if and only if $s\downarrow\ R\ t\downarrow$. 
\end{corollary} \begin{example}\label{ex:plus1} We can prove that $x \oplus \mathtt{lift}_{\mathtt{nat} \rightarrow \mathtt{nat}}(1) \succ x$: by definition, this holds if $s \oplus \mathtt{lift}_{\mathtt{nat} \rightarrow \mathtt{nat}}(1) \succ s$ for all closed $s$, so if $(s \oplus \mathtt{lift}_{\mathtt{nat} \rightarrow \mathtt{nat}}(1))u \succ s u$ for all closed $s,u$. Following Example \ref{ex:arrWreduce} and Lemma \ref{lem_succ_red}, this holds if $s u \oplus 1 \succ s u$. By definition, this is the case if $(s u \oplus 1)\downarrow > (s u)\downarrow$ in the natural numbers, which clearly holds for any $s,u$. \end{example} \subsection{Weak monotonicity}\label{subsec:weakmono} We will now show that $s \succeq s'$ implies $t[\subst{x}{s}] \succeq t[\subst{x}{s'}]$ (weak monotonicity). For this purpose, we prove a few lemmas, many of which also apply to~$\succ$, stating the preservation of~$\succeq$ under term formation operations. We will need these results in the next section. \begin{lemma}\label{lem_app_succ} For $R \in \{\succeq,\succ\}$: if $t\:R\:s$ then $t u\:R\:s u$ with $u$ a term or type constructor. \end{lemma} \begin{proof} Follows from definitions. \end{proof} \begin{lemma}\label{lem:liftgreater} For $R \in \{\succeq,\succ\}$: if $n\:R\:m$ then $\mathtt{lift}_\sigma n\:R\:\mathtt{lift}_\sigma m$ for all types $\sigma$. \end{lemma} \begin{proof} Without loss of generality we may assume $\sigma$ closed and in $\beta$-normal form. By coinduction we show $\mathtt{lift}(n) u_1 \ldots u_k \succeq \mathtt{lift}(m) u_1 \ldots u_k$ for closed $u_1,\ldots,u_k$. First note that $(\mathtt{lift}\,t) u_1 \ldots u_k \leadsto^* \mathtt{lift}(t)$ (with a different type subscript in~$\mathtt{lift}$ on the right side, omitted for conciseness). If $\sigma = \mathtt{nat}$ then $(\mathtt{lift}(n) u_1 \ldots u_k)\mathord{\downarrow} = n \ge m = (\mathtt{lift}(m) u_1 \ldots u_k)\mathord{\downarrow}$. 
If $\sigma = \tau_1\rightarrow\tau_2$ then by the coinductive hypothesis $\mathtt{lift}(n) u_1 \ldots u_k q \succeq_{\tau_2} \mathtt{lift}(m) u_1 \ldots u_k q$ for any $q \in \mathcal{I}^f_{\tau_1}$, so $\mathtt{lift}(n) u_1 \ldots u_k \succeq_{\sigma} \mathtt{lift}(m) u_1 \ldots u_k$ by definition. If $\sigma = \forall(\alpha:\kappa)\tau$ then by the coinductive hypothesis $\mathtt{lift}(n) u_1 \ldots u_k \xi \succeq_{\sigma'} \mathtt{lift}(m) u_1 \ldots u_k \xi$ for any closed $\xi \in \mathcal{T}_\kappa$, where $\sigma' = \mathrm{nf}_\beta(\tau[\subst{\alpha}{\xi}])$. Hence $\mathtt{lift}(n) u_1 \ldots u_k \succeq_{\sigma} \mathtt{lift}(m) u_1 \ldots u_k$ by definition. \end{proof} \begin{lemma}\label{lem_flatten_succ} For $R \in \{\succeq,\succ\}$: if $t\:R_\sigma\:s$ then $\mathtt{flatten}_\sigma t\:R_\mathtt{nat}\: \mathtt{flatten}_\sigma s$ for all types $\sigma$. \end{lemma} \begin{proof} Without loss of generality we may assume~$\sigma$ is closed and in $\beta$-normal form. Using Lemma~\ref{lem_succ_red}, the lemma follows by induction on~$\sigma$. \end{proof} \begin{lemma}\label{lem_abs_succ} For $R \in \{\succeq,\succ\}$: if $t\:R\:s$ then $\abs{x}{t}\:R\:\abs{x}{s}$ and $\tabs{\alpha}{t}\:R\:\tabs{\alpha}{s}$. \end{lemma} \begin{proof} Assume $t \succeq_\tau s$ and $x : \sigma$. Let~$\mathcal{C}$ be a closure. We need to show $\mathcal{C}(\abs{x}{t}) \succeq_{\mathcal{C}(\sigma\rightarrow\tau)} \mathcal{C}(\abs{x}{s})$. Let $u \in \mathcal{I}^f_{\mathcal{C}(\sigma)}$. Then $\mathcal{C}' = \mathcal{C}[\subst{x}{u}]$ is a closure and $\mathcal{C}'(t) \succeq_{\mathcal{C}(\tau)} \mathcal{C}'(s)$. Hence $\mathcal{C}(t)[\subst{x}{u}] \succeq_{\mathcal{C}(\tau)} \mathcal{C}(s)[\subst{x}{u}]$. By Lemma~\ref{lem_succ_red} this implies $\mathcal{C}(\abs{x}{t}) u \succeq_{\mathcal{C}(\tau)} \mathcal{C}(\abs{x}{s}) u$. Therefore $\mathcal{C}(\abs{x}{t}) \succeq_{\mathcal{C}(\sigma\rightarrow\tau)} \mathcal{C}(\abs{x}{s})$. The proof for $\succ$ is analogous.
\end{proof} \begin{lemma}\label{lem:plustimesmonotonic} Let $s,t,u$ be terms of type $\sigma$. \begin{enumerate} \item If $s \succeq t$ then $s \oplus_\sigma u \succeq t \oplus_\sigma u$, $u \oplus_\sigma s \succeq u \oplus_\sigma t$, $s \otimes_\sigma u \succeq t \otimes_\sigma u$, and $u \otimes_\sigma s \succeq u \otimes_\sigma t$. \item If $s \succ t$ then $s \oplus_\sigma u \succ t \oplus_\sigma u$ and $u \oplus_\sigma s \succ u \oplus_\sigma t$. Moreover, if additionally $u \succeq \mathtt{lift}_\sigma(1)$ then also $s \otimes_\sigma u \succ t \otimes_\sigma u$ and $u \otimes_\sigma s \succ u \otimes_\sigma t$. \end{enumerate} \end{lemma} \begin{proof} It suffices to prove this for closed $s,t,u$ and closed $\sigma$ in $\beta$-normal form. The proof is similar to the proof of Lemma~\ref{lem:liftgreater}. For instance, we show by coinduction that for closed $w_1,\ldots,w_n$ (denoted $\vec{w}$): if $s \vec{w} \succ t \vec{w}$ and $u \vec{w} \succeq \mathtt{lift}(1) \vec{w}$ then $(s \otimes u) \vec{w} \succ (t \otimes u) \vec{w}$. \end{proof} The following lemma depends on the lemmas above. The full proof may be found in \onlypaper{\cite[Appendix~A.2]{versionwithappendix}}% \onlyarxiv{Appendix~\ref{sec_weakly_monotone_proof}}. The proof is actually quite complex, and uses a method similar to Girard's method of candidates for the termination proof. \begin{lemma}[Weak monotonicity]\label{lem_succeq_subst} If $s \succeq s'$ then $t[\subst{x}{s}] \succeq t[\subst{x}{s'}]$. \end{lemma} \begin{corollary}\label{cor_app_wm} If $s \succeq s'$ then $t s \succeq t s'$. \end{corollary} \section{A reduction pair for PFS terms}\label{sec_reduction_pairs} Recall that our goal is to prove termination of reduction in a PFS. To do so, in this section we will define a systematic way to generate \emph{reduction pairs}. 
We fix a~PFS~$A$, and define: \begin{defn} A binary relation~$R$ on $A$-terms is \emph{monotonic} if $R(s, t)$ implies $R(C[s], C[t])$ for every context~$C$ (we assume $s,t$ have the same type~$\sigma$). A \emph{reduction pair} is a pair~$(\succeq^A,\succ^A)$ of a quasi-order~$\succeq^A$ on $A$-terms and a well-founded ordering~$\succ^A$ on $A$-terms such that: (a) $\succeq^A$ and~$\succ^A$ are compatible, i.e., ${\succ^A} \cdot {\succeq^A} \subseteq {\succ^A}$ and ${\succeq^A} \cdot {\succ^A} \subseteq {\succ^A}$, and (b) $\succeq^A$ and~$\succ^A$ are both monotonic. \end{defn} If we can generate such a pair with $\ell \succ^A r$ for each rule $(\ell,r) \in \mathcal{R}$, then we easily see that the PFS $A$ is terminating. (If we merely have $\ell \succ^A r$ for \emph{some} rules and $\ell \succeq^A r$ for the rest, we can still progress with the termination proof, as we will discuss in Section \ref{sec_rule_removal}.) To generate this pair, we will define the notion of an \emph{interpretation} from the set of $A$-terms to the set $\mathcal{I}$ of interpretation terms, and thus lift the ordering pair $(\succeq,\succ)$ to $A$. In the next section, we will show how this reduction pair can be used in practice to prove termination of PFSs. One of the core ingredients of our interpretation function is a mapping to translate types: \begin{defn} A \emph{type constructor mapping} is a function $\mathcal{T\!M}$ which maps each type constructor symbol to a closed interpretation type constructor of the same kind. A fixed type constructor mapping $\mathcal{T\!M}$ is extended inductively to a function from type constructors to closed interpretation type constructors in the expected way. We denote the extended \emph{interpretation (type) mapping} by~$\typeinterpret{\sigma}$. Thus, e.g.~$\typeinterpret{\quant{\alpha}{\sigma}} = \quant{\alpha}{\typeinterpret{\sigma}}$ and $\typeinterpret{\sigma \rightarrow \tau} = \typeinterpret{\sigma} \rightarrow \typeinterpret{\tau}$. 
\end{defn} \begin{lemma}\label{lem:substitutioninterpret:types} $\typeinterpret{\sigma}[\alpha:=\typeinterpret{\tau}] = \typeinterpret{\sigma[\alpha:=\tau]}$ \end{lemma} \begin{proof} Induction on~$\sigma$. \end{proof} Similarly, we employ a \emph{symbol mapping} as the key ingredient to interpret PFS terms. \begin{defn} Given a fixed type constructor mapping~$\mathcal{T\!M}$, a \emph{symbol mapping} is a function $\mathcal{J}$ which assigns to each function symbol $\mathtt{f} : \rho$ a closed interpretation term $\mathcal{J}(\mathtt{f})$ of type~$\typeinterpret{\rho}$. For a fixed symbol mapping $\mathcal{J}$, we define the \emph{interpretation mapping} $\interpret{s}$ inductively: \[ \begin{array}{rclcrclcrcl} \interpret{x} & = & x &\quad& \interpret{\tabs{\alpha}{s}} & = & \tabs{\alpha}{\interpret{s}} &\quad& \interpret{\app{t_1}{t_2}} &=& \app{\interpret{t_1}}{\interpret{t_2}} \\ \interpret{\mathtt{f}} &=& \mathcal{J}(\mathtt{f}) & \quad & \interpret{\abs{x:\sigma}{s}} & = & \abs{x:\typeinterpret{\sigma}}{ \interpret{s}} & \quad & \interpret{\tapp{t}{\tau}} &=& \tapp{\interpret{t}}{\typeinterpret{\tau}} \\ \end{array} \] \end{defn} Note that $\typeinterpret{\sigma},\typeinterpret{\tau}$ above depend on~$\mathcal{T\!M}$. Essentially, $\interpret{\cdot}$ substitutes $\mathcal{T\!M}(\mathtt{c})$ for type constructor symbols $\mathtt{c}$, and $\mathcal{J}(\mathtt{f})$ for function symbols $\mathtt{f}$, thus mapping $A$-terms to interpretation terms. This translation preserves typing: \begin{lemma} If $s : \sigma$ then $\interpret{s} : \typeinterpret{\sigma}$. \end{lemma} \begin{proof} By induction on the form of $s$, using Lemma~\ref{lem:substitutioninterpret:types}. \end{proof} \begin{lemma}\label{lem:substitutioninterpret} For all $s,t,x,\alpha,\tau$: $\interpret{s}[\alpha:=\typeinterpret{\tau}] = \interpret{s[\alpha:=\tau]}$ and $\interpret{s}[x:=\interpret{t}] = \interpret{s[x:=t]}$. \end{lemma} \begin{proof} Induction on~$s$. 
\end{proof} \begin{defn} For a fixed type constructor mapping $\mathcal{T\!M}$ and symbol mapping $\mathcal{J}$, the \emph{interpretation pair} $(\succeq^{\Termmap},\succ^{\Termmap})$ is defined as follows: $s \succeq^{\Termmap} t$ if $\interpret{s} \succeq \interpret{t}$, and $s \succ^{\Termmap} t$ if $\interpret{s} \succ \interpret{t}$. \end{defn} \begin{remark} The polymorphic lambda-calculus has a much greater expressive power than the simply-typed lambda-calculus. Inductive data types may be encoded, along with their constructors and recursors with appropriate derived reduction rules. This makes our interpretation method easier to apply, even in the non-polymorphic setting, thanks to more sophisticated ``programming'' in the interpretations. The reader is advised to consult e.g.~\cite[Chapter~11]{Girard1989} for more background and explanations. We demonstrate the idea by presenting an encoding for the recursive type $\mathtt{List}$ and its fold-left function (see also Ex.~\ref{ex_fold_interpretation}). \end{remark} \begin{example}\label{ex:notyetmono} Towards a termination proof of Example~\ref{ex_fold_pafs}, we set $\mathcal{T\!M}(\mathtt{List}) = \forall \beta. (\forall \alpha. \beta \rightarrow \alpha \rightarrow \beta) \rightarrow \beta \rightarrow \beta$ and $\mathcal{J}(\mathtt{nil}) = \tabs{\beta}{\abs{f:\quant{\alpha}{\beta \rightarrow \alpha \rightarrow \beta}}{\abs{x:\beta}{x}}}$. If we additionally choose $\mathcal{J}(\mathtt{foldl}) = \tabs{\beta}{\abs{f}{\abs{x}{\abs{l}{l \beta f x}}} \oplus \mathtt{lift}_\beta(1)}$, we have $\interpret{\mathtt{foldl}_{\sigma}(f,s,\mathtt{nil})} = (\tabs{\beta}{ \abs{f}{\abs{x}{\abs{l}{l \beta f x}}} \oplus \mathtt{lift}_\beta(1)}) \typeinterpret{\sigma} f s (\tabs{\beta}{\abs{f}{\abs{x}{x}}}) \leadsto^* s \oplus \mathtt{lift}_{\interpret{\sigma}}(1)$ by $\beta$-reduction steps. An extension of the proof from Example~\ref{ex:plus1} shows that this term $\succ \interpret{s}$. 
\end{example} It is easy to see that $\succeq^{\Termmap}$ and $\succ^{\Termmap}$ have desirable properties such as transitivity, reflexivity (for $\succeq^{\Termmap}$) and well-foundedness (for $\succ^{\Termmap}$). However, $\succ^{\Termmap}$ is not necessarily monotonic. Using the interpretation from Example~\ref{ex:notyetmono}, $\interpret{\mathtt{foldl}_{\sigma}(\abs{x}{s},t,\mathtt{nil})} = \interpret{\mathtt{foldl}_{\sigma}(\abs{x}{w},t,\mathtt{nil})}$ regardless of $s$ and $w$, so a reduction in $s$ would not cause a decrease in $\succ^{\Termmap}$. To obtain a reduction pair, we must impose certain conditions on $\mathcal{J}$; in particular, we will require that $\mathcal{J}$ is \emph{safe}. \begin{defn}\label{def_safe} If $s_1 \succ s_2$ implies $t[\subst{x}{s_1}] \succ t[\subst{x}{s_2}]$, then the interpretation term~$t$ is \emph{safe for~$x$}. A symbol mapping~$\mathcal{J}$ is \emph{safe} if for all $ \mathtt{f} : \forall (\alpha_1 : \kappa_1) \ldots \forall (\alpha_n : \kappa_n) . \sigma_1 \rightarrow \ldots \rightarrow \sigma_k \rightarrow \tau $ with~$\tau$ a type atom we have: $\mathcal{J}(\mathtt{f}) = \tabs{\alpha_1 \dots \alpha_n}{\abs{x_1 \dots x_k}{t}}$ with $t$ safe for each~$x_i$. \end{defn} \begin{lemma}\label{lem_safe} \begin{enumerate} \item $x u_1 \ldots u_m$ is safe for~$x$. \item If $t$ is safe for~$x$ then so are~$\mathtt{lift}(t)$ and~$\mathtt{flatten}(t)$. \item If $s_1$ is safe for~$x$ or $s_2$ is safe for~$x$ then $s_1 \oplus s_2$ is safe for~$x$. \item If either (a) $s_1$ is safe for~$x$ and $s_2 \succeq \mathtt{lift}(1)$, or (b) $s_2$ is safe for~$x$ and $s_1 \succeq \mathtt{lift}(1)$, then $s_1 \otimes s_2$ is safe for~$x$. \item If~$t$ is safe for~$x$ then so are~$\tabs{\alpha}{t}$ and~$\abs{y}{t}$ ($y \ne x$). \end{enumerate} \end{lemma} \begin{proof} Each point follows from one of the lemmas proven before, Lemma~\ref{lem_succ_to_succeq}, Lemma~\ref{lem_succeq_subst}, Lemma~\ref{lem:compatibility} and the transitivity of~$\succeq$.
For instance, for the first, assume $s_1 \succ s_2$ and let $u_i^j=u_i[\subst{x}{s_j}]$. Then $(x u_1 \ldots u_m)[\subst{x}{s_1}] = s_1 u_1^1 \ldots u_m^1$. By Lemma~\ref{lem_app_succ} we have $s_1 u_1^1 \ldots u_m^1 \succ s_2 u_1^1 \ldots u_m^1$. By Lemma~\ref{lem_succ_to_succeq} and Lemma~\ref{lem_succeq_subst} we have $u_i^1 \succeq u_i^2$. By Corollary~\ref{cor_app_wm} and the transitivity of~$\succeq$ we obtain $s_2 u_1^1 \ldots u_m^1 \succeq s_2 u_1^2 \ldots u_m^2$. By Lemma~\ref{lem:compatibility} finally $(x u_1 \ldots u_m)[\subst{x}{s_1}] = s_1 u_1^1 \ldots u_m^1 \succ s_2 u_1^2 \ldots u_m^2 = (x u_1 \ldots u_m)[\subst{x}{s_2}]$. \end{proof} \begin{lemma}\label{lem_succinterpret_monotonic} If~$\mathcal{J}$ is safe then~$\succ^{\Termmap}$ is monotonic. \end{lemma} \begin{proof} Assume $s_1 \succ^{\Termmap} s_2$. By induction on a context~$C$ we show $C[s_1] \succ^{\Termmap} C[s_2]$. If $C=\Box$ then this is obvious. If $C = \abs{x}{C'}$ or $C = \tabs{\alpha}{C'}$ then $C'[s_1] \succ^{\Termmap} C'[s_2]$ by the inductive hypothesis, and thus $C[s_1] \succ^{\Termmap} C[s_2]$ follows from Lemma~\ref{lem_abs_succ} and definitions. If $C = C' t$ then $C'[s_1] \succ^{\Termmap} C'[s_2]$ by the inductive hypothesis, so $C[s_1] \succ^{\Termmap} C[s_2]$ follows from definitions. Finally, assume $C = \app{t}{C'}$. Then $t = \mathtt{f} \rho_1 \ldots \rho_n t_1 \ldots t_m$ where $ \mathtt{f} : \forall (\alpha_1 : \kappa_1) \ldots \forall (\alpha_n : \kappa_n) . \sigma_1 \rightarrow \ldots \rightarrow \sigma_k \rightarrow \tau $ with~$\tau$ a type atom, $m < k$, and $\mathcal{J}(\mathtt{f}) = \tabs{\alpha_1 \dots \alpha_n}{\abs{x_1 \dots x_k}{u}}$ with $u$ safe for each~$x_i$. Without loss of generality assume $m=k-1$. Then $\interpret{C[s_i]} \leadsto u'[\subst{x_k}{\interpret{C'[s_i]}}]$ where $u'=u[\subst{\alpha_1}{\typeinterpret{\rho_1}}]\ldots[\subst{\alpha_n}{\typeinterpret{\rho_n}}][\subst{x_1}{\interpret{t_1}}]\ldots[\subst{x_{k-1}}{\interpret{t_{k-1}}}]$. 
By the inductive hypothesis $\interpret{C'[s_1]} \succ \interpret{C'[s_2]}$. Hence $u'[\subst{x_k}{\interpret{C'[s_1]}}] \succ u'[\subst{x_k}{\interpret{C'[s_2]}}]$, because~$u$ is safe for~$x_k$. Thus $\interpret{C[s_1]} \succ \interpret{C[s_2]}$ by Lemma~\ref{lem_succ_red}. \end{proof} \begin{theorem}\label{thm_reduction_pair} If~$\mathcal{J}$ is safe then $(\succeq^{\Termmap},\succ^{\Termmap})$ is a reduction pair. \end{theorem} \begin{proof} By Lemmas~\ref{lem_transitive} and~\ref{lem_reflexive}, $\succeq^{\Termmap}$ is a quasi-order. Lemmas~\ref{lem_well_founded} and~\ref{lem_transitive} imply that~$\succ^{\Termmap}$ is a well-founded ordering. Compatibility follows from Lemma~\ref{lem:compatibility}. Monotonicity of~$\succeq^{\Termmap}$ follows from Lemma~\ref{lem_succeq_subst}. Monotonicity of~$\succ^{\Termmap}$ follows from Lemma~\ref{lem_succinterpret_monotonic}. \end{proof} \begin{example}\label{ex_fold_interpretation} The following is a safe interpretation for the PFS from Example~\ref{ex_fold_pafs}: \[ \begin{array}{rcll} \mathcal{T\!M}(\mathtt{List}) & = & \multicolumn{2}{l}{ \quant{\beta}{(\quant{\alpha}{\beta \rightarrow \alpha \rightarrow \beta}) \rightarrow \beta \rightarrow \beta}}\\ \mathcal{J}(\mathtt{@}) & = & \Lambda \alpha.\Lambda\beta. \lambda f.\lambda x. & f \cdot x \oplus \mathtt{lift}_\beta(\mathtt{flatten}_\alpha(x)) \\ \mathcal{J}(\mathtt{A}) & = & \Lambda \alpha.\Lambda \beta.\lambda x. & x * \beta \\ \mathcal{J}(\mathtt{nil}) & = & & \Lambda \beta.\lambda f.\abs{x}{x} \\ \mathcal{J}(\mathtt{cons}) & = & \Lambda \alpha.\lambda h.\lambda t. & \Lambda \beta.\lambda f.\lambda x. t \beta f (f \alpha x h \oplus \mathtt{lift}_\beta(\mathtt{flatten}_\beta(x)\ \oplus \\ & & & \phantom{ABCDEFGHIJKLMNOP,} \mathtt{flatten}_\alpha(h)))\ \oplus\ \\ & & & \phantom{ABCDE\ } \mathtt{lift}_\beta(\mathtt{flatten}_\beta(f\alpha x h) \oplus \mathtt{flatten}_\alpha(h) \oplus 1) \\ \mathcal{J}(\mathtt{foldl}) & = & \Lambda \beta.\lambda f. \lambda x. 
\lambda l. & l \beta f x \oplus \mathtt{lift}_\beta(\mathtt{flatten}_{\forall \alpha. \beta \rightarrow \alpha \rightarrow \beta}(f)\ \oplus \\ & & & \phantom{ABCDEFG\ \ } \mathtt{flatten}_\beta(x) \oplus 1) \\ \end{array} \] Note that $\mathcal{J}(\mathtt{cons})$ is \emph{not} required to be safe for $x$, since $x$ is not an argument of $\mathtt{cons}$: following its declaration, $\mathtt{cons}$ takes one type and two terms as arguments. The variable $x$ is only part of the \emph{interpretation}. Note also that the current interpretation is a mostly straightforward extension of Example~\ref{ex:notyetmono}: we retain the same \emph{core} interpretations (which, intuitively, encode $\mathtt{@}$ and $\mathtt{A}$ as forms of application and encode a list as the function that executes a fold over the list's contents), but we add a clause $\oplus \mathtt{lift}(\mathtt{flatten}(x))$ for each argument $x$ that the initial interpretation is not safe for. The only further change is that, in $\mathcal{J}(\mathtt{cons})$, the part between brackets has to be extended. This was necessitated by the change to $\mathcal{J}(\mathtt{foldl})$, in order for the rules to still be oriented (as we will do in Example \ref{ex_fold_final}). \end{example} \section{Proving termination with rule removal}\label{sec_rule_removal} A PFS $A$ is certainly terminating if its reduction relation $\arr{\mathcal{R}}$ is contained in a well-founded relation, which holds if $\ell \succ^{\Termmap} r$ for all its rules $(\ell,r)$. However, sometimes it is cumbersome to find an interpretation that orients all rules strictly. To illustrate, the interpretation of Example \ref{ex_fold_interpretation} gives $\ell \succ^{\Termmap} r$ for two of the rules and $\ell \succeq^{\Termmap} r$ for the others (as we will see in Example \ref{ex_fold_final}). In such cases, proof progress is still achieved through \emph{rule removal}. 
\begin{theorem}\label{thm:ruleremove} Let $\mathcal{R} = \mathcal{R}_1 \cup \mathcal{R}_2$, and suppose that $\mathcal{R}_1\subseteq{\succ^\mathcal{R}}$ and $\mathcal{R}_2\subseteq{\succeq^\mathcal{R}}$ for a reduction pair $(\succeq^\mathcal{R},\succ^\mathcal{R})$. Then $\arr{\mathcal{R}}$ is terminating if and only if $\arr{\mathcal{R}_2}$ is (so certainly if $\mathcal{R}_2 = \emptyset$). \end{theorem} \begin{proof} Monotonicity of~$\succeq^\mathcal{R}$ and~$\succ^\mathcal{R}$ implies that ${\arr{\mathcal{R}_1}}\subseteq{\succ^\mathcal{R}}$ and ${\arr{\mathcal{R}_2}}\subseteq{\succeq^\mathcal{R}}$. By well-foundedness of $\succ^\mathcal{R}$, compatibility of~$\succeq^\mathcal{R}$ and~$\succ^\mathcal{R}$, and transitivity of~$\succeq^\mathcal{R}$, every infinite $\arr{\mathcal{R}}$ sequence can contain only finitely many $\arr{\mathcal{R}_1}$ steps. \end{proof} The above theorem gives rise to the following \emph{rule removal} algorithm: \begin{enumerate} \item While $\mathcal{R}$ is non-empty: \begin{enumerate} \item Construct a reduction pair $(\succeq^\mathcal{R},\succ^\mathcal{R})$ such that all rules in $\mathcal{R}$ are oriented by $\succeq^\mathcal{R}$ or $\succ^\mathcal{R}$, and at least one of them is oriented using $\succ^\mathcal{R}$. \item Remove all rules ordered by $\succ^\mathcal{R}$ from $\mathcal{R}$. \end{enumerate} \end{enumerate} If this algorithm succeeds, we have proven termination. \medskip To use this algorithm with the pair $(\succeq^{\Termmap},\succ^{\Termmap})$ from Section~\ref{sec_reduction_pairs}, we should identify an interpretation $(\mathcal{T\!M},\mathcal{J})$ such that (a) $\mathcal{J}$ is safe, (b) all rules can be oriented with $\succeq^{\Termmap}$ or $\succ^{\Termmap}$, and (c) at least one rule is oriented with $\succ^{\Termmap}$. The first requirement guarantees that $(\succeq^{\Termmap},\succ^{\Termmap})$ is a reduction pair (by Theorem~\ref{thm_reduction_pair}). Lemma~\ref{lem_safe} provides some sufficient safety criteria. 
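The rule-removal loop above is straightforward to mechanise. The following Haskell fragment is an illustrative sketch only, not part of the formal development: the genuinely hard step, constructing a suitable reduction pair for the current rule set, is abstracted as a hypothetical parameter \texttt{findPair} that either fails or classifies every rule as strictly or weakly oriented.

```haskell
-- Sketch of the rule-removal loop (hypothetical names; finding a
-- reduction pair for the current rules is abstracted as `findPair`).

data Orientation = Strict | Weak deriving (Eq, Show)

-- `findPair rules` models step (a): it either fails (Nothing), or
-- classifies every rule in `rules`. `proveTermination` returns True
-- iff repeatedly removing the strictly oriented rules empties the
-- rule set; a round that orients no rule strictly makes no progress.
proveTermination :: ([r] -> Maybe (r -> Orientation)) -> [r] -> Bool
proveTermination _ [] = True
proveTermination findPair rules =
  case findPair rules of
    Nothing -> False
    Just orient ->
      let weak = [ r | r <- rules, orient r == Weak ]
      in  length weak < length rules   -- at least one rule was strict
          && proveTermination findPair weak
```

Termination of the loop itself is immediate, since each successful round removes at least one rule.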
The second and third requirements have to be verified for each individual rule. \begin{example}\label{ex_fold_intermediate} We continue with our example of fold on heterogeneous lists. We prove termination by rule removal, using the symbol mapping from Example~\ref{ex_fold_interpretation}. We will show: \[ \begin{array}{rcl} @_{\sigma,\tau}(\abs{x:\sigma}{s},t) & \succeq^{\Termmap} & s[x:=t] \\ \mathtt{A}_{\abs{\alpha}{\sigma},\tau}(\tabs{\alpha}{s}) & \succeq^{\Termmap} & s[\alpha:=\tau] \\ \mathtt{foldl}_\sigma(f,s,\mathtt{nil}) & \succ^{\Termmap} & s \\ \mathtt{foldl}_\sigma(f,s,\mathtt{cons}_\tau(h,t)) & \succ^{\Termmap} & \mathtt{foldl}_\sigma(f,@_{\tau,\sigma}(@_{\sigma,\tau \rightarrow\sigma}( \mathtt{A}_{\abs{\alpha}{\sigma\rightarrow\alpha\rightarrow\sigma}, \tau}(f),s),h),t) \\ \end{array} \] Consider the first inequality; by definition it holds if $\interpret{@_{\sigma,\tau}(\abs{x:\sigma}{s},t)} \succeq \interpret{s[x:=t]}$. Since $\interpret{@_{\sigma,\tau}(\abs{x: \sigma}{s},t)} \leadsto^* \interpret{s}[x:=\interpret{t}] \oplus \mathtt{lift}_{\typeinterpret{\tau}}(\mathtt{flatten}_{\typeinterpret{\sigma}}( \interpret{t}))$, and $\interpret{s}[x:=\interpret{t}] = \interpret{s[x:=t]}$ (by Lemma~\ref{lem:substitutioninterpret}), it suffices, by Lemma~\ref{lem_leadsto_to_approx}, to show that $\interpret{s[x:=t]} \oplus \mathtt{lift}_{\typeinterpret{\tau}}(\mathtt{flatten}_{\typeinterpret{\sigma}}( \interpret{t})) \succeq \interpret{s[x:=t]}$. This is an instance of the general rule $u \oplus w \succeq u$ that we will obtain below.
\end{example} To prove inequalities $s \succ t$ and $s \succeq t$, we will often use that $\succ$ and $\succeq$ are transitive and compatible with each other (Lem.~\ref{lem_transitive} and~\ref{lem:compatibility}), that $\leadsto\:\subseteq\:\approx$ (Lem.~\ref{lem_leadsto_to_approx}), that $\succeq$ is monotonic (Lem.~\ref{lem_succeq_subst}), that both $\succ$ and $\succeq$ are monotonic over $\mathtt{lift}$ and $\mathtt{flatten}$ (Lem.~\ref{lem:liftgreater} and \ref{lem_flatten_succ}) and that interpretations respect substitution (Lem.~\ref{lem:substitutioninterpret}). We will also use Lemma \ref{lem:plustimesmonotonic} which states (among other things) that $s \succ t$ implies $s \oplus u \succ t \oplus u$. In addition, we can use the calculation rules below. The proofs may be found in \onlypaper{\cite[Appendix~A.3]{versionwithappendix}}% \onlyarxiv{Appendix~\ref{app_rule_removal}}. \begin{lemma}\label{lem:approxproperties} For all types $\sigma$ and all terms $s,t,u$ of type $\sigma$, we have: \begin{enumerate} \item\label{lem:approx:symmetry} $s \oplus_\sigma t \approx t \oplus_\sigma s$ and $s \otimes_\sigma t \approx t \otimes_\sigma s$; \item\label{lem:approx:assoc} $s \oplus_\sigma (t \oplus_\sigma u) \approx (s \oplus_\sigma t) \oplus_\sigma u$ and $s \otimes_\sigma (t \otimes_\sigma u) \approx (s \otimes_\sigma t) \otimes_\sigma u$; \item\label{lem:approx:distribution} $s \otimes_\sigma (t \oplus_\sigma u) \approx (s \otimes_\sigma t) \oplus_\sigma (s \otimes_\sigma u)$; \item\label{lem:approx:neutral} $(\mathtt{lift}_\sigma 0) \oplus_\sigma s \approx s$ and $(\mathtt{lift}_\sigma 1) \otimes_\sigma s \approx s$. 
\end{enumerate} \end{lemma} \begin{lemma}\label{lem_lift_approx} \begin{enumerate} \item\label{lem_lift_approx:plussplit} $\mathtt{lift}_\sigma(n+m) \approx_\sigma (\mathtt{lift}_\sigma n) \oplus_\sigma (\mathtt{lift}_\sigma m)$; \item $\mathtt{lift}_\sigma(n m) \approx_\sigma (\mathtt{lift}_\sigma n) \otimes_\sigma (\mathtt{lift}_\sigma m)$; \item $\mathtt{flatten}_\sigma(\mathtt{lift}_\sigma(n)) \approx n$. \end{enumerate} \end{lemma} \begin{lemma}\label{lem:plusparts} For all types $\sigma$, terms $s,t$ of type $\sigma$ and natural numbers $n > 0$: \begin{enumerate} \item\label{lem:plusparts:removefromsucceq} $s \oplus_{\sigma} t \succeq s$ and $s \oplus_{\sigma} t \succeq t$; \item $s \oplus_{\sigma} (\mathtt{lift}_{\sigma} n) \succ s$ and $(\mathtt{lift}_{\sigma} n) \oplus_{\sigma} t \succ t$. \end{enumerate} \end{lemma} Note that these calculation rules immediately give the inequality $x \oplus \mathtt{lift}_{\mathtt{nat} \rightarrow \mathtt{nat}}(1) \succ x$ from Example~\ref{ex:plus1}, and also that $\mathtt{lift}_\sigma(n) \succ \mathtt{lift}_\sigma(m)$ whenever $n > m$. By Lemmas~\ref{lem:plustimesmonotonic} and~\ref{lem:plusparts} we can use \emph{absolute positiveness}: the property that (a) $s \succeq t$ if we can write $s \approx s_1 \oplus \dots \oplus s_n$ and $t \approx t_1 \oplus \dots \oplus t_k$ with $k \leq n$ and $s_i \succeq t_i$ for all $i \leq k$, and (b) if moreover $s_1 \succ t_1$ then $s \succ t$. This property is typically very useful to discharge the proof obligations arising in a termination proof with polynomial interpretations. \begin{example}\label{ex_fold_final} We now have the tools to finish the example of heterogeneous lists (still using the interpretation from Example~\ref{ex_fold_interpretation}). The proof obligation from Example \ref{ex_fold_intermediate}, that $\interpret{@_{\sigma,\tau}(\abs{x:\sigma}{s},t)} \succeq \interpret{s[x:=t]}$, is discharged by Lemma \ref{lem:plusparts}(\ref{lem:plusparts:removefromsucceq}).
We have $\interpret{\mathtt{A}_{\abs{\alpha}{\sigma}, \tau}(\tabs{\alpha}{s})} \approx \interpret{\tabs{\alpha}{s}} * \typeinterpret{\tau} \approx \interpret{s[\alpha:=\tau]}$ by Lemma \ref{lem:substitutioninterpret}, and $\interpret{\mathtt{foldl}_\sigma(f,s,\mathtt{nil})} = \interpret{\mathtt{nil}}*\typeinterpret{\sigma} \cdot \interpret{f} \cdot \interpret{s} \oplus \mathtt{lift}_{\typeinterpret{\sigma}}(\langle \text{something}\rangle\oplus 1) \approx \interpret{s} \oplus \mathtt{lift}_{\typeinterpret{\sigma}}(\langle\text{something}\rangle\oplus 1) \succ \interpret{s}$ by Lemmas \ref{lem_lift_approx}(\ref{lem_lift_approx:plussplit}) and \ref{lem:plusparts}(\ref{lem:plusparts:removefromsucceq}). For the last rule note that (using only Lemmas \ref{lem_leadsto_to_approx} and \ref{lem_lift_approx}(\ref{lem_lift_approx:plussplit})): \[ \begin{array}{l} \interpret{\mathtt{foldl}_\sigma(f,s,\mathtt{cons}_\tau(h,t))} \approx \\ \interpret{\mathtt{cons}_\tau(h,t)} * \typeinterpret{\sigma} \cdot \interpret{f} \cdot \interpret{s} \oplus \mathtt{lift}_{\typeinterpret{\sigma}}( \mathtt{flatten}(\interpret{f}) \oplus \mathtt{flatten}(\interpret{s}) \oplus 1) \approx \\ (\ \interpret{t} * \typeinterpret{\sigma} \cdot \interpret{f} \cdot (\interpret{f} * \typeinterpret{\tau} \cdot \interpret{s} \cdot \interpret{h} \oplus \mathtt{lift}_{\typeinterpret{\sigma}}(\mathtt{flatten}(\interpret{s}) \oplus \mathtt{flatten}(\interpret{h})))\ \oplus \\ \phantom{AB} \mathtt{lift}_{\typeinterpret{\sigma}}(\mathtt{flatten}(\interpret{f} * \typeinterpret{\tau} \cdot \interpret{s} \cdot \interpret{h}) \oplus \mathtt{flatten}(\interpret{h}) \oplus 1)\ )\ \oplus \\ \phantom{A} \mathtt{lift}_{\typeinterpret{\sigma}}(\mathtt{flatten}(\interpret{f}) \oplus \mathtt{flatten}(\interpret{s}) \oplus 1) \approx \\ \interpret{t} * \typeinterpret{\sigma} \cdot \interpret{f} \cdot (\ \interpret{f} * \typeinterpret{\tau} \cdot \interpret{s} \cdot \interpret{h} \oplus
\mathtt{lift}_{\typeinterpret{\sigma}}(\mathtt{flatten}(\interpret{s}) \oplus \mathtt{flatten}(\interpret{h}))\ )\ \oplus \\ \phantom{A} \mathtt{lift}_{\typeinterpret{\sigma}}(\mathtt{flatten}(\interpret{f} * \typeinterpret{ \tau} \cdot \interpret{s} \cdot \interpret{h}) \oplus \mathtt{flatten}(\interpret{h}) \oplus \mathtt{flatten}(\interpret{f}) \oplus\mathtt{flatten}(\interpret{s}) \oplus 2) \\ \end{array} \] On the right-hand side of the inequality, noting that $\mathtt{lift}_{\sigma \rightarrow \tau}(u) \cdot w \leadsto^* \mathtt{lift}_{\tau}(u)$, we have: \[ \begin{array}{l} \interpret{\mathtt{foldl}_\sigma(f,@_{\tau,\sigma}(@_{\sigma,\tau \rightarrow\sigma}( \mathtt{A}_{\abs{\alpha}{\sigma\rightarrow\alpha\rightarrow\sigma}, \tau}(f),s),h),t)} \approx \\ \mathcal{J}(\mathtt{foldl})_\sigma(\interpret{f},\ \interpret{f} * \typeinterpret{\tau} \cdot \interpret{s} \cdot \interpret{h} \oplus \mathtt{lift}_{\typeinterpret{\sigma}}(\mathtt{flatten}(\interpret{s}) \oplus \mathtt{flatten}(\interpret{h})),\ \interpret{t}) \approx \\ \interpret{t} * \typeinterpret{\sigma} \cdot \interpret{f} \cdot (\ \interpret{f} * \typeinterpret{\tau} \cdot \interpret{s} \cdot \interpret{h} \oplus \mathtt{lift}_{\typeinterpret{\sigma}}(\mathtt{flatten}(\interpret{s}) \oplus \mathtt{flatten}(\interpret{h}))\ )\ \oplus \\ \phantom{A} \mathtt{lift}_{\typeinterpret{\sigma}}(\mathtt{flatten}(\interpret{f}) \oplus \mathtt{flatten}(\interpret{f} * \typeinterpret{\tau} \cdot \interpret{s} \cdot \interpret{h}\ \oplus \\ \phantom{ABCDEFGHIJKLMNOPQRSt} \mathtt{lift}_{\typeinterpret{\sigma}}(\mathtt{flatten}(\interpret{s}) \oplus \mathtt{flatten}(\interpret{h}))) \oplus 1) \approx \\ \interpret{t} * \typeinterpret{\sigma} \cdot \interpret{f} \cdot (\ \interpret{f} * \typeinterpret{\tau} \cdot \interpret{s} \cdot \interpret{h} \oplus \mathtt{lift}_{\typeinterpret{\sigma}}(\mathtt{flatten}(\interpret{s}) \oplus \mathtt{flatten}(\interpret{h}))\ )\ \oplus \\ \phantom{A} 
\mathtt{lift}_{\typeinterpret{\sigma}}(\mathtt{flatten}(\interpret{f}) \oplus \mathtt{flatten}(\interpret{f} * \typeinterpret{\tau} \cdot \interpret{s} \cdot \interpret{h}) \oplus \mathtt{flatten}(\interpret{s}) \oplus \mathtt{flatten}(\interpret{h}) \oplus 1) \end{array} \] Now the right-hand side is the left-hand side $\oplus\ \mathtt{lift}(1)$. Clearly, the rule is oriented with $\succ$. Thus, we may remove the last two rules, and continue the rule removal algorithm with only the first two, which together define $\beta$-reduction. This is trivial, for instance with an interpretation $\mathcal{J}(@) = \Lambda \alpha.\Lambda \beta.\lambda f.\lambda x. (f \cdot x) \oplus \mathtt{lift}_\beta(\mathtt{flatten}_\alpha(x) \oplus 1)$ and $\mathcal{J}(\mathtt{A}) = \Lambda \alpha.\Lambda \beta.\lambda x. x * \beta \oplus \mathtt{lift}_{\alpha\beta}(1)$. \end{example} \section{A larger example}\label{sec:examples} System~$\mathtt{F}$ is System~$\mathtt{F}_\omega$ where no higher kinds are allowed, i.e., there are no type constructors except types. By the Curry-Howard isomorphism~$\mathtt{F}$ corresponds to the universal-implicational fragment of intuitionistic second-order propositional logic, with the types corresponding to formulas and terms to natural deduction proofs. The remaining connectives may be encoded in~$\mathtt{F}$, but the permutative conversion rules do not hold~\cite{Girard1989}. In this section we show termination of the system IPC2 (see~\cite{SorensenUrzyczyn2010}) of intuitionistic second-order propositional logic with all connectives and permutative conversions, minus a few of the permutative conversion rules for the existential quantifier. The paper~\cite{SorensenUrzyczyn2010} depends on termination of~IPC2, citing a proof from~\cite{Wojdyga2008}, which, however, later turned out to be incorrect. Termination of Curry-style~IPC2 without~$\bot$ as primitive was shown in~\cite{Tatsuta2007}. 
To our knowledge, termination of the full system~IPC2 remains an open problem, strictly speaking. \begin{remark} Our method builds on the work of van de Pol and Schwichtenberg, who used higher-order polynomial interpretations to prove termination of a fragment of intuitionistic first-order logic with permutative conversions~\cite{PolSchwichtenberg1995}, in the hope of providing a more perspicuous proof of this well-known result. Notably, they did not treat disjunction, as we will do. More fundamentally, their method cannot handle the impredicative polymorphism necessary for second-order logic. \end{remark} The system IPC2 can be seen as a PFS with type constructors: \[ \begin{array}{c} \Sigma^T_\kappa = \{\quad \bot : *,\quad \mathtt{or} : * \Rightarrow * \Rightarrow *,\quad \mathtt{and} : * \Rightarrow * \Rightarrow *,\quad \exists : (* \Rightarrow *) \Rightarrow * \} \end{array} \] We have the following function symbols: \[ \begin{array}{rclcrcl} @ & : & \forall \alpha \forall \beta . (\alpha \rightarrow \beta) \rightarrow \alpha \rightarrow \beta & \quad & \epsilon & : & \forall \alpha . \bot \rightarrow \alpha \\ \mathtt{tapp} & : & \forall \alpha : * \Rightarrow * . \forall \beta . (\forall \gamma . \alpha \gamma) \rightarrow \alpha \beta & \quad & \mathtt{pr}^1 & : & \forall \alpha \forall \beta . \mathtt{and}\, \alpha\, \beta \rightarrow \alpha \\ \mathtt{pair} & : & \forall \alpha \forall \beta . \alpha \rightarrow \beta \rightarrow \mathtt{and}\, \alpha\, \beta & \quad & \mathtt{pr}^2 & : & \forall \alpha \forall \beta . \mathtt{and}\, \alpha\, \beta \rightarrow \beta \\ \mathtt{case} & : & \forall \alpha \forall \beta \forall \gamma . \mathtt{or}\, \alpha\, \beta \rightarrow (\alpha \rightarrow \gamma) \rightarrow (\beta \rightarrow \gamma) \rightarrow \gamma & \quad & \mathtt{in}^1 & : & \forall \alpha \forall \beta . \alpha \rightarrow \mathtt{or}\, \alpha\, \beta \\ \mathtt{let} & : & \forall \alpha : * \Rightarrow * . \forall \beta .
(\exists (\alpha)) \rightarrow (\forall \gamma . \alpha \gamma \rightarrow \beta) \rightarrow \beta & \quad & \mathtt{in}^2 & : & \forall \alpha \forall \beta . \beta \rightarrow \mathtt{or}\, \alpha\, \beta \\ \mathtt{ext} & : & \forall \alpha : * \Rightarrow * . \forall \beta . \alpha \beta \rightarrow \exists (\alpha) \end{array} \] The types represent formulas in intuitionistic second-order propositional logic, and the terms represent proofs. For example, a term $\mathtt{case}_{\sigma,\tau,\rho}\ s\ u\ v$ is a proof term of the formula $\rho$, built from a proof $s$ of $\mathtt{or}\ \sigma\ \tau$, a proof $u$ that $\sigma$ implies $\rho$ and a proof $v$ that $\tau$ implies $\rho$. Proof terms can be simplified using 28 reduction rules, including the following (the full set of rules is available in \onlypaper{\cite[Appendix~B]{versionwithappendix}}% \onlyarxiv{Appendix~\ref{app_ineqs}}): \[ \begin{array}{rclrcl} @_{\sigma,\tau}(\abs{x}{s},t) & \longrightarrow & s[x:=t] & \mathtt{case}_{\sigma,\tau,\rho}(\mathtt{in}^1_{\sigma,\tau}(u), \abs{x}{s},\abs{y}{t}) & \longrightarrow & s[x:=u] \\ \mathtt{tapp}_{\abs{\alpha}{\sigma},\tau}(\tabs{\alpha}{s}) & \longrightarrow & s[\alpha:=\tau] & \mathtt{case}_{\sigma,\tau,\rho}(\mathtt{in}^2_{\sigma,\tau}(u), \abs{x}{s},\abs{y}{t}) & \longrightarrow & t[x:=u] \\ \mathtt{pr}^1_{\sigma,\tau}(\mathtt{pair}_{\sigma,\tau}(s,t)) & \longrightarrow & s & \mathtt{let}_{\varphi,\rho}(\mathtt{ext}_{\varphi,\tau}(s),\tabs{\alpha}{\abs{x}{t}}) & \longrightarrow & t[\alpha:=\tau][x:=s] \\ \mathtt{pr}^2_{\sigma,\tau}(\mathtt{pair}_{\sigma,\tau}(s,t)) & \longrightarrow & t \\ \end{array} \] \[ \begin{array}{ll} & @_{\sigma,\tau}(\epsilon_{\sigma \rightarrow \tau}(s),t) \longrightarrow \epsilon_\tau(s) \\ & \mathtt{case}_{\sigma,\tau,\rho}(\epsilon_{\mathtt{or}\,\sigma\,\tau}( u),\abs{x}{s},\abs{y}{t}) \longrightarrow \epsilon_\rho(u) \\ & \epsilon_\rho(\mathtt{case}_{\sigma,\tau,\bot}(u,\abs{x}{s}, \abs{y}{t})) \longrightarrow 
\mathtt{case}_{\sigma,\tau,\rho}(u,\abs{x}{\epsilon_\rho(s)}, \abs{y}{\epsilon_\rho(t)}) \\ & \mathtt{pr}^2_{\rho,\pi}(\mathtt{case}_{\sigma,\tau,\mathtt{and}\,\rho\,\pi}(u, \abs{x:\sigma}{s},\abs{y:\tau}{t}))\longrightarrow \mathtt{case}_{\sigma,\tau,\pi}(u,\abs{x:\sigma}{\mathtt{pr}^2_{\rho,\pi}(s)}, \abs{y:\tau}{\mathtt{pr}^2_{\rho,\pi}(t)}) \\ & \mathtt{case}_{\rho,\pi,\xi}(\mathtt{case}_{\sigma,\tau,\mathtt{or}\, \rho\,\pi}(u,\abs{x}{s},\abs{y}{t}),\abs{z}{v}, \abs{a}{w}) \longrightarrow \\ & \phantom{AB} \mathtt{case}_{\sigma,\tau,\xi}(u,\abs{x}{\mathtt{case}_{ \rho,\pi,\xi}(s,\abs{z}{v},\abs{a}{w})}, \abs{y}{\mathtt{case}_{\rho,\pi,\xi}(t,\abs{z}{v}, \abs{a}{w})}) \\ & \mathtt{let}_{\varphi,\rho}( \mathtt{case}_{\sigma,\tau,\exists\varphi}( u,\abs{x}{s},\abs{y}{t}),v) \longrightarrow \mathtt{case}_{\sigma,\tau,\rho}(u, \abs{x}{\mathtt{let}_{\varphi,\rho}(s,v)}, \abs{y}{\mathtt{let}_{\varphi,\rho}(t,v)}) \\ \hspace{-10pt} (*) & \mathtt{let}_{\psi,\rho}(\mathtt{let}_{\varphi,\exists\psi}(s, \tabs{\alpha}{\abs{x:\varphi\alpha}{t}}),u) \longrightarrow \mathtt{let}_{\varphi,\rho}(s,\tabs{\alpha}{\abs{x: \varphi\alpha}{\mathtt{let}_{\psi,\rho}(t,u)}}) \\ \end{array} \] To define an interpretation for~IPC2, we will use the standard encoding of product and existential types (see~\cite[Chapter~11]{Girard1989} for more details). \[ \begin{array}{rclcrcl} \sigma \times \tau &=& \forall p . (\sigma \rightarrow \tau \rightarrow p) \rightarrow p & \quad & \pi^1_{\sigma,\tau}(t) &=& t \sigma (\abs{x:\sigma}{\abs{y:\tau}{x}}) \\ \pair{t_1}{t_2}_{\sigma,\tau} &=& \tabs{p}{\abs{x:\sigma\rightarrow\tau\rightarrow p}{x t_1 t_2}} & & \pi^2_{\sigma,\tau}(t) &=& t \tau (\abs{x:\sigma}{\abs{y:\tau}{y}}) \\ \Sigma \alpha . \sigma &=& \forall p . (\forall \alpha .
\sigma \rightarrow p) \rightarrow p & \quad & \phantom{ABCD} \expair{\tau}{t}_{\Sigma\alpha.\sigma} &=& \tabs{p}{\abs{x:\forall\alpha.\sigma\rightarrow p}{x \tau t}} \\ & & \multicolumn{3}{r}{ \xlet{\rho}{t}{\alpha,x:\sigma}{s}} &=& t \rho (\tabs{\alpha}{\abs{x:\sigma}{s}}) \\ \end{array} \] We do not currently have an algorithmic method to find a suitable interpretation. Instead, we used the following manual process. We start by noting the minimal requirements given by the first set of rules (e.g., that $\mathtt{pr}^1_{\sigma,\tau}(\mathtt{pair}_{\sigma,\tau}(s, t)) \succeq s$); to orient these inequalities, it would, for instance, be good to have $\interpret{\mathtt{pair}_{\sigma,\tau}(s,t)} \succeq \pair{\interpret{s}}{\interpret{t}}_{\typeinterpret{\sigma}, \typeinterpret{\tau}}$ and $\interpret{\mathtt{pr}^i_{\sigma,\tau}(s)} = \pi^i_{\typeinterpret{\sigma},\typeinterpret{\tau}}(\interpret{s})$. To make the interpretation safe, we additionally include clauses $\mathtt{lift}(\mathtt{flatten}(x))$ for any unsafe arguments $x$; to make the rules \emph{strictly} oriented, we include clauses $\mathtt{lift}(1)$. Unfortunately, this approach does not suffice to orient the rules where some terms are duplicated, such as the second- and third-last rules. To handle these rules, we \emph{multiply} the first argument of several symbols with the second (and possibly third). Some further tweaking gives the following safe interpretation, which orients most of the rules: \[ \begin{array}{rclcrcl} \mathcal{T\!M}(\bot) & = & \mathtt{nat} & \quad & \mathcal{T\!M}(\mathtt{and}) & = & \lambda\alpha_1\lambda\alpha_2 . \alpha_1\times\alpha_2 \\ \mathcal{T\!M}(\exists) & = & \lambda(\alpha : * \Rightarrow *) . \Sigma \gamma . \alpha \gamma & \quad & \mathcal{T\!M}(\mathtt{or}) & = & \lambda\alpha_1\lambda\alpha_2 . \alpha_1\times\alpha_2 \\ \end{array} \] \[ \begin{array}{rcll} \mathcal{J}(\epsilon) & = & \Lambda \alpha:* . \lambda x:\mathtt{nat}.
& \mathtt{lift}_\alpha(2 \otimes x \oplus 1) \\ \mathcal{J}(@) & = & \Lambda\alpha\Lambda\beta\lambda x: \alpha \rightarrow \beta . \lambda y : \alpha . \quad & \mathtt{lift}_\beta(2) \otimes (x \cdot y) \oplus \mathtt{lift}_\beta(\mathtt{flatten}_\alpha(y)\ \oplus \\ & & & \phantom{AB}\mathtt{flatten}_{\alpha \rightarrow \beta}(x) \otimes \mathtt{flatten}_\alpha(y) \oplus 1) \\ \mathcal{J}(\mathtt{tapp}) & = & \Lambda \alpha : * \Rightarrow * . \Lambda \beta . \lambda x : \quant{\gamma}{\alpha\gamma} . \quad & \mathtt{lift}_{\alpha\beta}(2) \otimes (x * \beta) \oplus \mathtt{lift}_{\alpha\beta}(1) \\ \mathcal{J}(\mathtt{ext}) & = & \Lambda \alpha : * \Rightarrow * . \Lambda \beta : * . \lambda x:\alpha\beta . & \expair{\beta}{x} \oplus \mathtt{lift}_{\Sigma\gamma.\alpha\gamma}( \mathtt{flatten}_{\alpha\beta}(x)) \\ \mathcal{J}(\mathtt{pair}) & = & \Lambda \alpha \Lambda \beta \lambda x : \alpha, y : \beta.\quad & \pair{x}{y} \oplus \mathtt{lift}_{ \alpha \times \beta}(\mathtt{flatten}_\alpha(x) \oplus \mathtt{flatten}_{\beta}(y)) \\ \mathcal{J}(\mathtt{pr}^1) & = & \Lambda \alpha \Lambda \beta \lambda x : \alpha \times \beta . \quad & \mathtt{lift}_\alpha(2) \otimes \pi^1(x) \oplus \mathtt{lift}_{\alpha}(1) \\ \mathcal{J}(\mathtt{pr}^2) & = & \Lambda \alpha \Lambda \beta \lambda x : \alpha\times\beta.\quad & \mathtt{lift}_\beta(2) \otimes \pi^2(x) \oplus \mathtt{lift}_{\beta}(1) \\ \mathcal{J}(\mathtt{in}^1) & = & \Lambda \alpha \Lambda \beta \lambda x : \alpha.\quad & \pair{x}{\mathtt{lift}_\beta(1)} \oplus \mathtt{lift}_{\alpha \times \beta}(\mathtt{flatten}_{\alpha}(x)) \\ \mathcal{J}(\mathtt{in}^2) & = & \Lambda \alpha \Lambda \beta \lambda x : \beta.\quad & \pair{\mathtt{lift}_\alpha(1)}{x} \oplus \mathtt{lift}_{\alpha \times \beta}(\mathtt{flatten}_{\beta}(x)) \\ \end{array} \] \[ \begin{array}{rcl} \mathcal{J}(\mathtt{let}) & = & \Lambda \alpha : * \Rightarrow * . \Lambda \beta : * . \lambda x : \Sigma \xi .
\alpha\xi, y : \quant{\xi}{\alpha\xi \rightarrow \beta}. \\ & & \mathtt{lift}_\beta(1) \oplus \mathtt{lift}_\beta(2) \otimes (\xlet{\beta}{x}{\xi,z}{y\xi z})\ \oplus \\ & & \mathtt{lift}_\beta(\mathtt{flatten}_{\Sigma\gamma.\alpha\gamma}(x) \oplus 1) \otimes (y * \mathtt{nat} \cdot \mathtt{lift}_{\alpha\mathtt{nat}}(0)) \\ \mathcal{J}(\mathtt{case}) & = & \Lambda \alpha,\beta,\xi . \lambda x : \alpha \times \beta, y : (\alpha \rightarrow \xi), z : (\beta \rightarrow \xi). \\ & & \quad \mathtt{lift}_\xi(2) \oplus \mathtt{lift}_\xi(3 \otimes \mathtt{flatten}_{\alpha \times \beta}(x)) \oplus \\ & & \quad\phantom{ABCDE} \mathtt{lift}_\xi(\mathtt{flatten}_{\alpha \times \beta}(x) \oplus 1) \otimes (y \cdot \pi^1(x) \oplus z \cdot \pi^2(x)) \\ \end{array} \] Above, $\otimes$ binds stronger than~$\oplus$. The derivations to orient rules with these interpretations are also given in \onlypaper{\cite[Appendix~B]{versionwithappendix}}% \onlyarxiv{Appendix~\ref{app_ineqs}}. The only rules that are not oriented with this interpretation -- not with~$\succeq$ either -- are the ones of the form $f(\mathtt{let}(s,t), \dots) \longrightarrow \mathtt{let}(s,f(t,\dots))$, like the rule marked (*) above. Nonetheless, this is already a significant step towards a systematic, extensible methodology of termination proofs for IPC2 and similar systems of higher-order logic. Verifying the orientations is still tedious, but our method raises hope for at least partial automation, as was done with polynomial interpretations for non-polymorphic higher-order rewriting~\cite{FuhsKop2012}. \section{Conclusions and future work} We introduced a powerful and systematic methodology to prove termination of higher-order rewriting with full impredicative polymorphism. To use the method one just needs to invent safe interpretations and verify the orientation of the rules with the calculation rules. 
As the method is tedious to apply manually for larger systems, a natural direction for future work is to look into automation: both for automatic verification that a given interpretation suffices and -- building on existing termination provers for first- and higher-order term rewriting -- for automatically finding a suitable interpretation. In addition, it would be worth exploring improvements of the method that would allow us to handle the remaining rules of IPC2, or extending other techniques for higher-order termination such as orderings (see, e.g., \cite{jou:rub:07}) or dependency pairs (e.g.,~\cite{kop:raa:12,suz:kus:bla:11}). \addcontentsline{toc}{section}{References}
\section{Introduction}\label{introduction} Text in the wild comes in a variety of shapes. However, linear text arrangements, be they horizontal or rotated (as defined by multi-oriented text datasets like ICDAR2015\cite{karatzas2015icdar} and MSRA-TD500\cite{yao2012detecting}), dominate existing popular datasets such as ICDAR2013 \cite{karatzas2013icdar}, ICDAR2015 \cite{karatzas2015icdar}, and COCO-Text \cite{veit2016cocotext}. Text instances in curved or other irregular arrangements, as pointed out in Total-Text \cite{Chng2017TotalTextAC} and SCUT-CTW1500 \cite{yuliang2017detecting}, are rarely seen in the mentioned datasets despite being common in real-world scenes. As a result, text detection models that properly consider arbitrary-shaped text are relatively uncommon. In addition, recent studies \cite{Chng2017TotalTextAC,yuliang2017detecting, long2018textsnake, lyu2018mask, long2018scene} point out that existing state-of-the-art scene text detection models perform poorly on such data. These studies suggest that a major design change is needed to handle the wild nature of arbitrary-shaped text instances.\blfootnote{*Authors contributed equally} Motivated by \cite{Chng2017TotalTextAC} and \cite{yuliang2017detecting}, numerous research works \cite{long2018textsnake, lyu2018mask} have tackled the curved text reading problem. These works suggest that principled design changes are necessary to produce a tight polygonal detection result capable of binding arbitrary-shaped text. One example is increasing the number of regression variables to cater for the higher vertex count of a curved text region \cite{yuliang2017detecting}. Meanwhile, \cite{lyu2018mask} took advantage of a segmentation-based approach to address this problem.
However, since the testing sets of \cite{Chng2017TotalTextAC} and \cite{yuliang2017detecting} consist of only 300 and 500 images, respectively, it is hard to draw conclusive claims based on them due to their relatively small sample sizes. Hence, we combined all the released images and ground truth of both of the mentioned datasets as the training set for this competition, and at the same time collected new images with similar attributes (i.e. a high occurrence of arbitrary-shaped text alongside horizontal and multi-oriented text) to increase the size of both the training and testing sets. This competition is a natural extension of all the previous RRC competitions, and consists of three main tasks: i) scene text detection, ii) scene text recognition, and iii) scene text spotting. It stands out by demanding higher robustness from scene text understanding models against text of arbitrary shapes. Details about this competition and the \textit{ArT} dataset can be found on the RRC competition website\footnote{\url{http://rrc.cvc.uab.es/?ch=14}}. The structure of this paper is as follows. Related work is presented in Sec. \ref{related} and details of the \textit{ArT} dataset are described in Sec. \ref{art_general}. The tasks involved in this competition are presented in Sec. \ref{task1}, \ref{task2}, and \ref{task3} respectively, each with a task description, the evaluation metric, and a brief discussion of the participants' results. The paper ends with our conclusions in Sec. \ref{conclusions}. \section{Related Work}\label{related} Scene text reading methods have achieved significant progress alongside the evolution of scene text benchmarks. The continuously emerging datasets follow several noticeable patterns: i) the sizes grow larger, ii) the data becomes harder, and iii) the annotations become more flexible. In 2013, ICDAR2013 \cite{karatzas2013icdar} comprised 462 images with only well-focused, rectangular-shaped text.
In the ICDAR2015 \cite{karatzas2015icdar} dataset, the number increased to 1,500 and all the images were captured incidentally. Moreover, the dataset introduced quadrilateral annotations to accommodate the variety of text shapes. In 2017, IC17-MLT \cite{nayef2017icdar2017} was introduced to challenge the community with the multi-script scene text reading problem in 9 different languages. The dataset size increased further to 18,000 images, with quadrilaterals used as the ground-truth format. Recently, \cite{Chng2017TotalTextAC,yuliang2017detecting} pointed out that although curved text instances are commonly found in the real world, they are rarely seen in the existing benchmarks. Moreover, the few curved text instances that do appear are loosely annotated with axis-aligned or quadrilateral bounding regions. Therefore, Total-Text\cite{Chng2017TotalTextAC} and SCUT-CTW1500\cite{yuliang2017detecting} were collected with a great emphasis on curved text instances, and both datasets employ polygons as the ground-truth format for their annotations. These two benchmarks have quickly attracted the interest of the research community, motivating many promising text reading methods. Following the principles of both of these datasets, the \textit{ArT} dataset aims to provide the community with a much larger dataset to work with and a more comprehensive benchmark for future evaluations. \section{The \textit{`ArT'} Dataset}\label{art_general} The dataset intended for this competition, \textit{ArT}, is a combination of Total-Text \cite{Chng2017TotalTextAC}\footnote{\url{https://github.com/cs-chan/Total-Text-Dataset}}, SCUT-CTW1500\cite{yuliang2017detecting}\footnote{\url{https://github.com/Yuliang-Liu/Curve-Text-Detector}}, the Baidu Curved Text Dataset\footnote{A subset of LSVT} and a large sample of newly collected images.
The new images were collected following the same principles as \cite{Chng2017TotalTextAC,yuliang2017detecting}: i) at least one arbitrary-shaped text instance per image; ii) high diversity in terms of text orientations (i.e. large amounts of horizontal, multi-oriented, and curved text instances); iii) text instances annotated with a tight polygonal ground-truth format. \subsubsection{Type/source of images} Images in the \textit{ArT} dataset were collected via digital cameras, mobile phone cameras, the Internet, Flickr, image libraries, and Google Open-Image \cite{krasin2016openimages}. In addition, part of the new images that contain Chinese text were collected from Baidu Street View. Similar to most public scene text datasets, the images in \textit{ArT} contain scenes from both indoor and outdoor settings, with digitally-born images included. Apart from the usual vision-related challenges (illumination, background complexity, perspective distortion, etc.), \textit{ArT} stands out in challenging scene text understanding models with the combination of different text orientations within one image. \subsubsection{Homogeneity of the dataset} The images from Total-Text \cite{Chng2017TotalTextAC}, SCUT-CTW1500 \cite{yuliang2017detecting} and the Baidu Curved Text Dataset are similar in nature: i) they depict real-world scenes, and ii) they are mostly well focused. Hence, the combination is smooth in this aspect. However, since SCUT-CTW1500 considers Chinese script in its annotations while Total-Text does not, the ground truth of Total-Text was refined to annotate all the Chinese characters in it. In addition, the line-level annotations of the Latin scripts in SCUT-CTW1500 were re-annotated at word level.
\subsubsection{Number of images} On top of the existing 3,055 images from Total-Text \cite{Chng2017TotalTextAC} and SCUT-CTW1500 \cite{yuliang2017detecting}, 7,111 newly collected images were added, making the \textit{ArT} dataset one of the largest scene text datasets for arbitrary-shaped text. In total, the \textit{ArT} dataset contains 10,166 images, split into a training set of 5,603 images and a testing set of 4,563 newly collected images. We acknowledge the Baidu team for annotating all the newly collected images via the Baidu crowd-sourcing platform. \subsubsection{Ground truth} It is worth pointing out that the polygonal ground-truth format employed in \textit{ArT} differs from that of all the previous RRCs, which adopted axis-aligned bounding boxes \cite{karatzas2013icdar,karatzas2015icdar} or quadrilaterals \cite{nayef2017icdar2017} as the ground-truth format. These annotation styles have two and four vertices respectively, which is inappropriate for the arbitrarily-oriented Chinese and Latin text instances in \textit{ArT}, especially the curved ones. Following the practice of the MLT dataset \cite{nayef2017icdar2017}, we annotated Chinese and Latin scripts at line-level and word-level granularity respectively. The transcription and the language type of annotated text instances are provided. Also, note that the polygon bounding boxes are labelled with either 4, 8, 10, or 12 vertices depending on their shape. All illegible text instances and symbols were labelled as ``Do Not Care'' and do not contribute to the evaluation result. \section{Organization} This competition is jointly organized by the University of Malaya, Malaysia; South China University of Technology, China; Baidu Inc., China; and the Computer Vision Centre (Autonomous University of Barcelona), Spain. Monetary rewards, sponsored by Baidu Inc., are offered to the winner of this challenge.
\section{Task 1: Scene Text Detection}\label{task1} \subsection{Description} The main objective of this task is to detect the location of every text instance in the input image. Given an input image, participants are expected to provide the spatial location and confidence score of each prediction. \subsection{Evaluation metrics} An IoU-based evaluation protocol is adopted for this task, following \cite{yuliang2017detecting}. IoU (Intersection over Union) is a threshold-based evaluation protocol, with a default threshold of $0.5$. Results are reported at both the $0.5$ and $0.7$ thresholds, but only the H-mean at the former threshold is used to determine the official ranking. To ensure fairness, the participants are required to submit a confidence score for each detection, and all confidence thresholds are iterated over to find the best H-mean score. It is also worth mentioning that \textit{ArT} is the first RRC to handle a variable number of detection output coordinates in Task 1 (Sec. \ref{task1}) and Task 3 (Sec. \ref{task3}). \subsection{Results and Discussion} For Task 1, we received 48 submissions from 35 unique participants. The average H-mean score for Task 1 is 67.46\%. The first place in this task goes to \textbf{\textit{Pil-Mask-RCNN}} by Wang \textit{et al}. from the Institute of Computing Technology, CAS, China, with a winning H-mean score of 82.65\%. The proposed method is built on the Mask R-CNN pipeline with two different backbone networks: SENet-152 and ShuffleNet v2. Figure \ref{fig:pil_success} illustrates some successful examples. The visualization of its results shows that the detection regions are of high quality: smooth and tight. Moreover, it appears to be robust across the languages of the text instances (i.e. Chinese and Latin scripts).
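As an informal illustration of the IoU computation underlying this protocol (a simplified stand-in for the official evaluation script, restricted to convex polygons via Sutherland--Hodgman clipping, whereas the actual annotations may be non-convex; all helper names are ours):

```python
def polygon_area(pts):
    # Shoelace formula; vertices must be given in order.
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def clip(subject, clipper):
    # Sutherland-Hodgman polygon clipping; `clipper` must be convex
    # and given in counter-clockwise order.
    def inside(p, a, b):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersection(p, q, a, b):
        # Intersection of segment pq with the infinite line through a, b.
        x1, y1, x2, y2 = *p, *q
        x3, y3, x4, y4 = *a, *b
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        current, output = output, []
        for j in range(len(current)):
            p, q = current[j], current[(j + 1) % len(current)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersection(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersection(p, q, a, b))
        if not output:
            break
    return output

def polygon_iou(poly_a, poly_b):
    # IoU of two convex polygons given as ordered lists of (x, y) vertices.
    inter = clip(poly_a, poly_b)
    inter_area = polygon_area(inter) if len(inter) >= 3 else 0.0
    union_area = polygon_area(poly_a) + polygon_area(poly_b) - inter_area
    return inter_area / union_area if union_area > 0 else 0.0
```

A detection then counts as correct when `polygon_iou` against an unmatched ground-truth polygon reaches the chosen threshold ($0.5$ or $0.7$); in practice a geometry library such as Shapely handles non-convex polygons as well.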
We also investigated the failure cases of the winning method (see Figure \ref{fig:pil_failure}); the common problems are: i) under-segmentation (combining multiple text instances into one), ii) mistaken grouping of crowded text instances in a small area (especially Chinese characters), and iii) small text instances. We notice that most of the top-performing methods (both runners-up included) are based on the Mask R-CNN pipeline. Also, most of the participants (except 4 submissions) design their models to produce a polygon bounding region as the detection output, which aligns with the emphasis of this competition: tightness of detection outputs. The ranking of Task 1 is tabulated in Table \ref{tab:Task1}. Note that the top 3 teams at 0.5 IoU and 0.7 IoU differ: the original runner-up, \textit{\textbf{NJU-ImagineLab}}, is overtaken by \textit{\textbf{ArtDet-v2}} and drops to fourth place. Meanwhile, Figure \ref{subfig:t1} is the histogram of the average H-mean scores of each image in the testing set. As we can see, most images have average H-mean scores between 0.8 and 0.9, followed by 0.7--0.8, and so forth. The challenging images with 0--0.1 H-mean scores can be seen in Figure \ref{fig:T1_hmean_low}. \begin{figure*}[htbp] \centering \begin{minipage}[t]{0.3\textwidth} \includegraphics[keepaspectratio=true, scale=0.22]{T1_hist.png} \subcaption{Task 1}\label{subfig:t1} \end{minipage} \hfill \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[keepaspectratio=true, scale=0.22]{T2_hist.png} \subcaption{Task 2.1 (blue) and Task 2.2 (red)}\label{subfig:t2} \end{minipage} \hfill \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[keepaspectratio=true, scale=0.22]{T3_hist.png} \subcaption{Task 3.1 (blue) and Task 3.2 (red)} \label{subfig:t3} \end{minipage} \caption{Histogram of the average score by all submissions of each test set image.} \label{fig:T1_hist} \end{figure*} \begin{figure*}[!]
\begin{minipage}[t]{0.24\textwidth} \includegraphics[keepaspectratio=true, height=1.0\textwidth, width=1.0\textwidth]{T1.PNG} \subcaption{Task 1}\label{subfig:lh_t1} \end{minipage} \begin{minipage}[b]{0.24\textwidth} \begin{minipage}[t]{1.0\textwidth} \includegraphics[keepaspectratio=true, width=1.0\textwidth]{T2_1.png} \subcaption{Task 2.1}\label{subfig:lh_t2_1} \end{minipage} \begin{minipage}[t]{1.0\textwidth} \includegraphics[keepaspectratio=true, width=1.0\textwidth]{T2_2.png} \subcaption{Task 2.2}\label{subfig:lh_t2_2} \end{minipage} \end{minipage} \begin{minipage}[t]{0.24\textwidth} \includegraphics[keepaspectratio=true, height=1.0\textwidth, width=1.0\textwidth]{T3_1.png} \subcaption{Task 3.1}\label{subfig:lh_t3_1} \end{minipage} \begin{minipage}[t]{0.24\textwidth} \includegraphics[keepaspectratio=true, height=1.0\textwidth, width=1.0\textwidth]{T3_2.png} \subcaption{Task 3.2}\label{subfig:lh_t3_2} \end{minipage} \caption{Example images with low average H-mean score (i.e. 0-0.1). \textbf{Red}: Misdetections, \textbf{Blue}: False recognitions. } \vspace{-.1in} \label{fig:T1_hmean_low} \end{figure*} \begin{figure*} \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[keepaspectratio=true, scale=0.175]{pil_success.png} \subcaption{ }\label{fig:pil_success} \end{minipage} \hfill \begin{minipage}[t]{0.5\textwidth} \includegraphics[keepaspectratio=true, scale=0.2]{pil_failure.png} \subcaption{ }\label{fig:pil_failure} \end{minipage} \caption{Successful (a) and failure (b) detection examples of \textbf{\textit{Pil-Mask-RCNN}}. 
\textbf{Green}: True Positive, \textbf{Red}: False Positive and False Negative.}\vspace{-.1in} \label{fig:pil_suc_and_fail} \end{figure*} \begin{table} \centering \begin{adjustbox}{width=0.98\linewidth, height=2.3in} \begin{tabular}{lccccc} \toprule Task & Rank &Method Name & Affiliation & H-mean (IoU $>$ 0.5) & H-mean (IoU $>$ 0.7) \\ \midrule 1 & 1 &pil\_maskrcnn & Institute of Computing Technology, Chinese Academy of Sciences & 82.65 & 76.06 \\ & 2 &NJU-ImagineLab & Nanjing University & 80.24 & 70.33 \\ & 3 &ArtDet-v2 & Sogou-OCR team & 79.48 & 72.01 \\ & 4 &baseline\_polygon& Beihang University & 78.79 & 68.36 \\ & 5 &CUTeOCR & CUHK, HIT & 78.36 & 71.31 \\ & 6 &Sg\_ptd& Sogou Tech & 77.42 & 65.04 \\ & 7 &Alibaba-PAI& Alibaba Group & 76.1 & 64.41 \\ & 8 &Fudan-Supremind Detection v3 & Fudan University & 75.24 & 64.76 \\ & 9 &SRCB\_Art& SRCB & 75.02 & 65.25 \\ & 10 &A scene text detection method based on maskrcnn & Fudan University & 74.72 & 65.24 \\ & 11 &DMText\_art& Tencent & 74.43 & 65.94 \\ & 12 &TEXT\_SNIPER& IIE, CAS & 73.74 & 59.36 \\ & 13 &CLTDR& Chinese Academy of Sciences & 73.32 & 64.66 \\ & 14 &CRAFT& Clova AI OCR Team, NAVER/LINE Corp & 72.85 & 56.16 \\ & 15 &Sogou\_MM& Sogou Inc Sogou\_MM team & 72.69 & 60.61 \\ & 16 &QAQ& Institute of Automation, Chinese Academy of Sciences & 72.21 & 55.60 \\ & 17 &MaskDet & MetaSota.ai & 71.44 & 59.07 \\ & 18 &fdu\_ai & Fairleigh Dickinson University & 70.4 & 61.11 \\ & 19 &CCISTD & Peking University & 69.47 & 61.09 \\ & 20 &Mask RCNN& - & 68.95 & 59.07 \\%QiaoZhi & 21 &TextMask\_V1 & - & 68.92 & 60.63 \\%LiLin & 22 &MFTD: Mask Filters for Text Detection & - & 67.27 & 55.92 \\%Zhang Wenqing & 23 &Art detect by vivo & VIVO AI Lab & 66.92 & 55.55 \\ & 24 &PAT-S.Y& - & 66.72 & 54.22 \\%563392487@qq.com & 25 &DMCA & IIE,CAS & 66.45 & 52.25 \\ & 26 &TMIS & USTC-iFLYTEK & 66.01 & 56.53 \\ & 27 &mask rcnn& - & 63.81 & 50.11 \\ & 28 &Unicamp-SRBR-PN-1& SRBR, Unicamp & 62.37 & 46.46 \\ & 29 &TP & Shanghai Jiao Tong 
University & 62.18 & 50.86 \\ & 30 &Improved Progressive scale expansion Net& - &61.88 & 49.50 \\ & 31 &1& - & 58.2 & 41.66 \\%384745354@qq.com & 32 &TextCohesion\_1 & Zhengzhou University & 53.2 & 42.40 \\ & 33 &EM-DATA& - & 51.99 & 32.22 \\%yuyang & 34 &RAST: Robust Arbitrary Shape Text Detector& - & 47.3 & 36.51 \\ & 35 &MSR & - & 0.50 & 0.07 \\ \midrule Task & Rank & Method Name & Affiliation & Accuracy & 1-N.E.D\\ \midrule 2.1 & 1 &PKU\_Team\_Zero & MEGVII (Face++), Peking University & 74.30 & - \\ & 2 &CUTeOCR & CUHK, HIT & 73.91 & - \\ & 3 &CRAFT (Preprocessing) + TPS-ResNet & Clova AI OCR Team, NAVER/LINE Corp & 73.87 & - \\ & 4 &NPU-ASGO & Northwestern Polytechnical University & 71.82 & - \\ & 5 &CIGIT and XJTLU & CIGIT, XJTLU & 70.73 & - \\ & 6 &Attention based method for scene text recognition & SenseTime Group & 70.39 & - \\ & 7 &Ensemble and post processes & - & 69.15 & - \\%Qiuyang baiy@hust.edu.cn & 8 &CSN-ED & USTC-iFLYTEK & 67.32 & - \\ & 9 &Alchera AI & Alchera AI & 66.81 & - \\ & 10 &Irregular Text Recognizer with Attention Mechanism & Pennsylvania State University & 64.45 & - \\ & 11 &class\_5435\_rotate & Beihang University & 63.86 & - \\ & 12 &MatchCRNN & MetaSota.ai & 58.03 & - \\ & 13 &Arbitrary shape scene text recognition based on CNN and Attention Enhanced Bi-directional LSTM & - & 56.09 & - \\%Vector_xu@163.com & 14 &Fudan-Supremind Recognition & Fudan University & 50.56 & - \\ & 15 &LCT\_OCR & IIE, CAS & 47.31 & - \\ & 16 &So Cold 2.0 & - & 45.30 & - \\%QiaoZhi Zhou yu. Chen yudi. QIn xugong. Yang dongbao. Qiao zhi. 
Li xiaoni & 17 &task2x & - & 38.08 & - \\%1531660@tongji.edu.cn \midrule 2.2 & 1 &CRAFT (Preprocessing) + TPS-ResNet & Clova AI OCR Team, NAVER/LINE Corp & - & 85.32 \\ & 2 &Attention based method for arbitrary-shaped scene text recognition & SenseTime Group & - & 85.20 \\ & 3 &CSN-ED & USTC-iFLYTEK & - & 81.23 \\ & 4 &class\_5435\_rotate & Beihang University & - & 80.60 \\ & 5 &MatchCRNN & MetaSota.ai & - & 72.61 \\ & 6 &Ensemble and post processes & - & - & 71.27 \\ & 7 &So Cold 2.0 & - & - & 69.76 \\ & 8 &Fudan-Supremind Recognition & Fudan University & - & 66.15 \\ & 9 &CUTeOCR & CUHK, HIT & - & 65.38 \\ & 10 &PKU\_Team\_Zero & MEGVII (Face++), Peking University & - & 65.06 \\ & 11 &NPU-ASGO & Northwestern Polytechnical University & - & 63.82 \\ & 12 &CIGIT and XJTLU & CIGIT and XJTLU & - & 63.15 \\ & 13 &Alchera AI & Alchera AI & - & 61.61 \\ & 14 &Irregular Text Recognizer with Attention Mechanism & Pennsylvania State University & - & 61.42 \\ & 15 &LCT\_OCR & IIE, CAS & - & 59.77 \\ & 16 &task2x & - & - & 56.53 \\ & 17 &Arbitrary shape scene text recognition based on CNN and Attention Enhanced Bi-directional LSTM & - & - & 54.49 \\ \midrule Task & Rank & Method Name & Affiliation & Accuracy H-mean & 1-N.E.D\\ \midrule 3.1 & 1 &baseline\_0.5\_class\_5435 & Beihang University & 52.45 & 53.86 \\ & 2 &Alibaba-PAI & Alibaba Group & 57.32 & 53.36\\ & 3 &QAQ3 & Institute of Automation, Chinese Academy of Sciences & 45.57 & 46.01\\ & 4 &Detection-Recognition & USTC-iFLYTEK & 48.64& 45.84\\ & 5 &CLTDR & Chinese Academy of Sciences & 44.71 & 44.49\\ & 6 &So Cold 2.0 & - & 37.09& 39.71\\ & 7 &task3 & - & 37.48& 34.03\\ & 8 &CRAFT + TPS-ResNet v1& Clova AI OCR Team, NAVER/LINE Corp & 31.68 & 27.21 \\ \midrule 3.2 & 1 &baseline\_0.5\_class\_5435 & Beihang University & 50.17 & 54.91 \\ & 2 &Alibaba-PAI & Alibaba Group & 53.48 & 51.68 \\ & 3 &QAQ3 & Institute of Automation, Chinese Academy of Sciences & 47.48 & 49.10 \\ & 4 &CLTDR & Chinese Academy of Sciences & 45.65 & 
48.78 \\ & 5 &Detection-Recognition & USTC-iFLYTEK & 46.13 & 48.03 \\ & 6 &So Cold 2.0 & - & 34.14 & 39.58 \\ & 7 &task3 & - & 38.58 & 37.65 \\ & 8 &CRAFT + TPS-ResNet v1& Clova AI OCR Team, NAVER/LINE Corp & 32.26 & 29.58 \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Official ranking of all the tasks in the RRC-ArT competition. Zoom in for a better view.} \vspace{-10pt} \label{tab:Task1} \end{table} \section{Task 2: Scene Text Recognition}\label{task2} \subsection{Description} The main objective of this task is to recognize every character in a cropped image patch. The inputs of this task are cropped image patches containing text instances, together with the relative polygon spatial coordinates. Participants are asked to provide the recognized character string as output; the polygon coordinates are optional information that participants may choose to utilise. Furthermore, we decided to break Task 2 down into two subcategories: i) Task 2.1 - Latin script only recognition, and ii) Task 2.2 - Latin and Chinese scripts recognition. We hope that such a split could make this task friendlier for non-Chinese participants, as the aim of this competition is to detect and recognize arbitrary-shaped text. Participants are required to make only a single submission regardless of script. We evaluated all submissions under two categories, Latin and mixed (Latin and Chinese) scripts. When evaluating the recognition performance for Latin script, all non-Latin transcriptions are treated as ``Do Not Care'' regions. \subsection{Evaluation metrics} For Task 2.1, case-insensitive word accuracy is used as the primary challenge metric. Apart from this, all the standard practices for text recognition evaluation are followed.
For example, symbols in the middle of ground truth text instances are considered, but symbols such as ( !?.:,*"()·[]/'\_ ) at the beginning and the end of both the ground truth and the submissions are removed. For Task 2.2, the Normalized Edit Distance metric (1-N.E.D specifically, which was also used in the ICDAR 2017 competition RCTW-17 \cite{rctw}) is treated as the ranking metric. The reason for utilizing 1-N.E.D as the official ranking metric for Task 2.2 is that Chinese scripts usually contain more characters than Latin scripts, which makes the word accuracy metric too harsh to evaluate Task 2.2 fairly. In the 1-N.E.D evaluation protocol, all characters (Latin and Chinese) are treated in a consistent manner. To avoid ambiguities in the annotations, we performed several pre-processing steps before the evaluation: 1) English letters are treated as case-insensitive; 2) traditional and simplified Chinese characters are treated as the same label; 3) blank spaces and symbols are removed; 4) all illegible images do not contribute to the evaluation result. \subsection{Results and Discussion} For Task 2, there are 22 unique submissions from 17 unique teams. Starting with Task 2.1, the average accuracy score of this task is 62.47\%. The winner of this task is \textbf{\textit{PKU\_Team\_Zero}} by Shangbang \textit{et al}. from MEGVII (Face++) and Peking University, China, with a winning score of 74.30\%. It comprises three major modules: 1) a detection module that provides the spatial coordinates of the text (as polygon vertices) within the cropped image; 2) a spatial transformer that straightens the image based on the coordinates; and 3) an attention RNN model for recognizing words. We notice that all three winning models have similar pipelines: all of them rectify the cropped image patches (i.e. straighten the text region, in turn removing background) before recognizing the word in it.
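The 1-N.E.D score described above can be sketched compactly. The snippet below assumes the RCTW-17 definition of the normalized edit distance, $1 - \mathrm{ED}(\mathrm{pred},\mathrm{gt})/\max(|\mathrm{pred}|,|\mathrm{gt}|)$, averaged over text instances, and reproduces only part of the official pre-processing (case folding); it is an illustration, not the evaluation script.

```python
# Minimal sketch of the 1-N.E.D score, assuming the RCTW-17 definition.
# Only case folding is reproduced here; the official pre-processing also
# maps traditional to simplified Chinese and strips spaces and symbols.

def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def one_minus_ned(pred, gt):
    """Per-instance 1-N.E.D; case-insensitive as in the protocol."""
    pred, gt = pred.lower(), gt.lower()
    denom = max(len(pred), len(gt))
    if denom == 0:
        return 1.0
    return 1.0 - edit_distance(pred, gt) / denom

def score(pairs):
    """Average 1-N.E.D over (prediction, ground-truth) pairs."""
    return sum(one_minus_ned(p, g) for p, g in pairs) / len(pairs)
```

Unlike word accuracy, this score gives partial credit for near-misses, which is what makes it workable for long Chinese text lines.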
The success of these rectification-based pipelines shows that the polygon ground truth format, rather than the usual bounding box, is indeed crucial for recognizing curved or otherwise arbitrary-shaped text instances. Another similarity is that all three of them employ an attention mechanism in their RNN word recognition module. Qualitative results of the \textbf{\textit{PKU\_Team\_Zero}} method can be seen in Figure \ref{fig:T2_1_PKU_success}. The method has demonstrated an outstanding ability to recognize curved text instances with challenging attributes in real-world scenes. On the other hand, Figure \ref{fig:T2_1_PKU_failure} illustrates some of the failure examples. The failure cases are mainly caused by unusual font types and severely blurred patches. The top three methods for Task 2.2 are quite different from those of Task 2.1. The average 1-N.E.D score of this sub-task is 68.43\%, and the winner is \textbf{\textit{CRAFT (Preprocessing) + TPS-ResNet}} by Baek \textit{et al}. from Naver Corporation, which scores 85.32\%. This method also has three major modules: detection, rectification, and recognition. Specifically, it adopts CRAFT \cite{baek2019character} as its text detector, a Thin-Plate-Spline (TPS) based Spatial Transformer Network as its image normalizer, and a BiLSTM with attention as its text recognizer. Figure \ref{fig:T2_2_craft_success} shows some successful examples of this method; it appears to be robust against curved text instances in both Chinese and Latin scripts. Failure cases can be seen in Figure \ref{fig:T2_2_craft_failure}, where it fails on 1) Chinese characters with similar appearances, 2) vertically oriented text, 3) blurred patches, and 4) interestingly, a Chinese character that looks like `K' under perspective distortion and illumination. The global performance of Task 2 is summarized in Figure \ref{subfig:t2}. From this figure, we notice two obvious spikes in the 0--0.1 and 0.9--1.0 bars for Task 2.1 (blue).
This phenomenon is due to the all-or-nothing nature of the accuracy metric (i.e. 1 when every character is recognized and 0 otherwise). Meanwhile, in Task 2.2 (red), we see a smoother distribution between 0 and 1. As we can see, most of the patches have a high average 1-N.E.D score (between 0.9 and 1). \begin{figure} \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[keepaspectratio=true, scale=0.26]{success.png} \subcaption{ }\label{fig:T2_1_PKU_success} \end{minipage} \vspace{-.1in} \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[keepaspectratio=true, scale=0.25]{T2_1_failure.png} \subcaption{ }\label{fig:T2_1_PKU_failure} \end{minipage} \caption{Successful (a) and failure (b) recognition examples of \textbf{\textit{PKU\_Team\_Zero}}.} \vspace{-.1in} \label{fig:pku_suc_and_fail} \end{figure} \section{Task 3: Scene Text Spotting}\label{task3} \subsection{Description} The main objective of this task is to detect and recognize every text instance in the provided image in an end-to-end manner. Given an input image, the output must be the spatial location of every text instance, at word level for Latin script and line level for Chinese script, together with the predicted word for each detection. Similar to RRC 2017 \cite{nayef2017icdar2017}, a generic vocabulary list (90K common English words) is provided as a reference for this task. Identical to Task 2, we break Task 3 down into two subcategories: i) Task 3.1 Latin script only text spotting, and ii) Task 3.2 Latin and Chinese scripts text spotting. \subsection{Evaluation metrics} For Task 3, we first evaluate the detection result by calculating its IoU with the corresponding ground truth. Detection regions with an IoU value higher than 0.5 are then matched with the recognition ground truth (i.e. the transcript of that particular text region).
In the case of multiple matches, only the detection region with the highest IoU is considered; the rest of the matches are counted as False Positives. The pre-processing steps for the recognition part are the same as in Task 2, and all Chinese text regions are ignored in Task 3.1. Also, it is worth mentioning that although both the case-insensitive word accuracy H-mean and 1-N.E.D results are reported, the official ranking metric for both sub-tasks is 1-N.E.D. \begin{figure}[t] \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[keepaspectratio=true, scale=0.34]{success1.png} \subcaption{ }\label{fig:T2_2_craft_success} \end{minipage} \vspace{-.1in} \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[keepaspectratio=true, scale=0.28]{T2_2_failure.png} \subcaption{ }\label{fig:T2_2_craft_failure} \end{minipage} \caption{Successful (a) and failure (b) recognition examples of \textbf{\textit{CRAFT (Preprocessing) + TPS-ResNet}}.} \label{fig:craft_suc_and_fail} \end{figure} \subsection{Results and Discussion} Task 3 received 8 submissions from 8 individual teams. It is also the hardest of all the tasks: the average accuracy H-mean score for Task 3.1 is only 44.37\%. The first-place method is \textbf{\textit{baseline\_0.5\_class\_5435}} by Jinjin Zhang from Beihang University, China, with an accuracy H-mean score of 52.45\%. The winning method combines a segmentation-based detector and an attention-based recognizer. Zhang mentioned that the method is modelled with 5,435 classes for the recognition task. Besides, extra training data from LSVT, ICDAR2017, COCO-Text, ReCTS, and augmented data were used to train the recognition network. The top three winners of Task 3.2 are the same as in Task 3.1, with a slightly higher average 1-N.E.D score of 44.91\%. Figure \ref{fig:T3_baseline_0.5} depicts several successful and failure examples.
As observed, in the high-contrast setting (left figure), every text instance is well detected and recognized by the model, while the challenging example on the right confuses the method with multiple possible combinations of the text instances. To be specific, the four vertical red regions are evaluated as false positives; the actual ground truths are two text regions arranged from left to right (top and bottom), making up two Chinese words. Such an example could potentially be solved by instilling semantic information (e.g. language-specific knowledge) into the text spotting model. \begin{figure}[!] \begin{center} \includegraphics[keepaspectratio=true, height=0.4\linewidth, width=\linewidth]{T3.png} \end{center} \caption{Successful \textbf{(left)} and failure \textbf{(right)} examples of \textbf{\textit{baseline\_0.5\_class\_5435}}.} \vspace{-.1in} \label{fig:T3_baseline_0.5} \end{figure} In contrast to Tasks 1 and 2, the histogram of Task 3 (in Figure \ref{subfig:t3}) shows the most spread-out distribution across the score range. Note that for Task 3.1 (blue), the spike at 0.9--1.0 (in contrast to the low count for Task 3.2 (red)) is due to the fact that Chinese scripts are counted as ``Do Not Care'' regions, which makes it easier to score full marks on images dominated by Chinese text. In general, most of the images have an average score of 0.4 to 0.6, which reflects the challenging nature of this task. Again, Figures \ref{subfig:lh_t3_1} and \ref{subfig:lh_t3_2} show some of the most challenging images in the test set, with 0 to 0.1 average scores. \section{Conclusions}\label{conclusions} The ICDAR2019 Robust Reading Challenge on \textit{ArT} received an overwhelming number of submissions, which is a delightful outcome considering that scene text understanding works addressing curved text were rarely seen before the recent introduction of the Total-Text and SCUT-CTW1500 datasets.
Although the scene text understanding community has seen tremendous improvements in recent years, a gap between research and application still exists. The main motivation behind the \textit{ArT} dataset and this challenge is to encourage both academia and industry to look into the arbitrary orientation and shape of text instances in the wild. The scores of the top three winners in all tasks are close to each other, which is a good indication of where the state of the art resides at the moment. Taking a deeper look into the submitted models, segmentation-based methods seem to dominate arbitrary-shaped text detection. Besides, we also find that the current IoU metric has drawbacks; for example, some detections that miss several characters are still rewarded with 100\% recall. Therefore, a better and more reasonable metric such as the recent TIoU metric \cite{liu2019tightness} may be worth adopting in the future. In the recognition tasks, popular and high-performing models share similar pipelines, which include rectifying the text patches before recognizing them with an attention RNN/LSTM module. To this end, text spotting remains the most challenging task, with the lowest winning H-mean score. \bibliographystyle{IEEEtran}
\section{Introduction} Spins in semiconductor quantum dots are envisioned as essential building blocks of future quantum processors and other quantum technologies\cite{Vandersypen2017, Loss1998, DiVincenzo2000, Hanson2007, Zwanenburg2013}. Their distinctive features include the possibility to be assembled in dense arrays of qubits, good coherence properties, and the ability to operate at relatively high temperatures \cite{Vandersypen2017, Yang2020, Petit2020}. Spin qubits in heterostructures made of silicon and germanium are particularly relevant because the materials can be isotopically purified. In such environments with negligible amount of nuclear spins, the coherence time of the qubits is greatly enhanced, and is ultimately limited by the quasi-stationary charge noise\cite{Yoneda2018}. Single and two-qubit operations of electronic spins in silicon have actually been demonstrated\cite{Pla2012, Veldhorst2014, Veldhorst2015, Kawakami2016, Yoneda2018, Watson2018, Zajac2018, Huang2019, Xue2019} with fidelities approaching the values compatible with fault-tolerant quantum computation. Realizing qubits with holes instead of electrons can be attractive since in semiconductors such as silicon and germanium the spin-orbit interaction is much stronger in the valence than in the conduction band. This makes possible the all electrical manipulation of hole pseudo-spins\cite{comment_spin} without the need for micromagnets, and also the coupling of the effective spin with other degrees of freedom such as microwave photon modes in resonators\cite{Kloeffel2013}. Electrical manipulation of hole spin qubits has been shown experimentally in silicon metal-oxide-semiconductor (MOS) structures\cite{Maurand2016,Crippa2018} and in germanium\cite{Watzinger2018,Hendrickx2019}. Also in germanium arrays of hole quantum dots have been designed\cite{Lawrie2019,Scappucci2020,vanRiggelen2020} and multiple qubit logic has been demonstrated\cite{Hendrickx2020,Hendrickx2020_four_qubit}. 
These advances motivate theoretical descriptions of the hole spin manipulation in cubic diamond materials such as silicon and germanium, which have an inversion symmetry center in bulk. The Rashba spin-orbit interaction has already been analyzed in nanowires\cite{Kloeffel2011,Kloeffel2018} and in planar (quasi-2D) geometries\cite{Bulaev2005, Bulaev2007, Marcellina2017, Terrazos2020}. The hole $g$-tensor modulation resonance ($g$-TMR) effect has also been described in connection with one-dimensional MOS channels on silicon-on-insulator (SOI)\cite{Venitucci2018, Venitucci2019}. In these structures, the time-dependent ($ac$) electric field that drags the hole is parallel to the static electric field that breaks the inversion symmetry of the dot. It modulates the $g$-factors of the dot and drives spin rotations. Ref. \citenum{Venitucci2019} does, in particular, include analytical and numerical calculations with the Luttinger-Kohn (LK) model in an idealized setup (almost identical to the one studied here but with a different confinement along the channel). The $g$-TMR Rabi frequency was derived in a minimal basis set and compared with the results of an exact diagonalization of the LK Hamiltonian in an extended basis. In Refs. \citenum{Venitucci2018} and \citenum{Venitucci2019}, the $g$-matrix formalism\cite{Kato2003} has proven useful in the numerical calculations of the Rabi frequency under electrical driving. Following these works, we investigate here an alternative way of manipulating the hole qubit with an $ac$ electric field perpendicular to the static electric field (and parallel to the channel). This $ac$ field drives the dot as a whole so that the Rashba spin-orbit interaction gives rise to an effective time-dependent magnetic field. This effect has been analyzed theoretically in Refs. 
\citenum{Rashba2003,Golovach2006} and we refer to it as iso-Zeeman electric-dipole spin resonance (IZ-EDSR \cite{Crippa2018}) because the Zeeman splitting of the qubit remains unchanged during the motion. We compare IZ-EDSR and $g$-TMR and we identify the regimes of operation where the Rabi frequencies are the largest. The spin-electric coupling can indeed be tuned by the static electric field and we show that the two effects are maximized in different conditions. Furthermore, we study the influence of biaxial strain that can strongly change the interplay between the two mechanisms with a hole that transitions from a mostly heavy to a mostly light type at large enough tensile strain. With the perspective of optimizing the design we then discuss the material dependence and the influence of the device orientation. The structure of the paper is as follows. In Section II we calculate the effective pseudo-spin Hamiltonian and the effective $g$-factors based on perturbation of the four-band LK Hamiltonian. In Section III, we recompute the $g$-TMR Rabi frequency with the $g$-matrix formalism, as an alternative derivation to Ref.\ \citenum{Venitucci2019}. The latter, which is based on a power series expansion in a minimal basis set, includes some higher order contributions than the present work, but misses corrections on the effective $g$-factors due to the vector potential that are addressed here. We then discuss the conditions that optimize the $g$-TMR. In Section IV we analyze the IZ-EDSR starting from an effective Rashba spin-orbit coupling model and we also derive the conditions that maximize the Rabi frequency. In Section V we study the effect of strain and show how the situation changes when the qubit has a dominant light-hole character. In Section VI we discuss the results and compare the efficiency of $g$-TMR and IZ-EDSR. We also compare the analytical and semi-analytical results with numerical calculations based on the four-band LK model.
Then we discuss the material dependence and the impact of the crystallographic orientation of the structure. We conclude in Section VII. In the Appendices we give details about the effective Hamiltonians (Appendix \ref{ap:EH}), the corrections to the $g$-factors that arise from the electromagnetic vector potential (Appendix \ref{ap:g_corr}), the derivation of the Rashba spin-orbit coupling model and the calculation of IZ-EDSR in one dimension (Appendices \ref{ap:Hso_para}, \ref{ap:EDSR} and \ref{ap:Hso_perp}), and we discuss additional figures in Appendix \ref{ap:add_figures}. \section{Effective Zeeman Hamiltonian and $g$-tensor}\label{sec:HZ} Motivated by spin qubit realizations in CMOS devices\cite{Maurand2016,Crippa2018} we consider a hole strongly confined along $z$ and weakly confined in the $(xy)$ plane. An idealization of the setup is shown in Fig. \ref{fig:setup}. We assume a rectangular channel along $x=[110]$ with hard wall boundary conditions and dimensions $L_y$ along $y=[1\overline{1}0]$ and $L_z\ll L_y$ along $z=[001]$ (infinite square well potentials along $y$ and $z$). A hole is confined along the channel in a parabolic potential $V_x(x)=-\frac{1}{2}K x^2$ (this setup slightly differs from Ref. \citenum{Venitucci2019} in order to allow for efficient spin manipulation with an $ac$ electric field along $x$). A static electric field $\ef_y$ (or equivalently a potential $V_y(y)=e\ef_y y$, $e>0$ being the elementary charge) is applied along $y$ that breaks the inversion symmetry of the channel and confines the hole towards the left or right facets. Additionally, a time-dependent ($ac$) electric field modulation is applied along $y$ or $x$. Note that we assume valence bands with negative dispersions, hence the signs of $V_x$ and $V_y$. \begin{figure}[h!] 
\centering \includegraphics[width=0.4\textwidth]{setup.pdf} \caption{The setup studied here: a) Three-dimensional perspective with system of coordinates and alignment with respect to the crystallographic axes. The structure has the smallest length $L_z$ along the direction of strong confinement $z=[001]$. b) Energy potential profiles along the $x$, $y$ and $z$ directions (see main text). A static electric field $\ef_y$ is applied along $y$. $ac$ electric fields are applied along $y$ and $x$ and lead to the $g$-TMR and IZ-EDSR effects respectively. c) Color maps of the probability densities in the $(xy)$ and the $(yz)$ planes that follow from the fundamental envelope functions numerically computed with the four-band LK model.\cite{Comment_numerics} In this calculation, $L_z=10\unit{nm}$, $L_y=30\unit{nm}$, the electric field is $\ef_y=0.5\unit{mV/nm}$, and $x_0\equiv(\pi\hbar)^{1/2}/(m_0K)^{1/4}=10\unit{nm}$ (note that $x_0$ is introduced here for convenience as a mass-independent characteristic length; the actual extent of the ground-state wave function of heavy- and light-holes along $x$ being given by Eq. (\ref{eq:ell_x})).} \label{fig:setup} \end{figure} In this quasi two-dimensional configuration and in the absence of strain the ground state is expected to have a dominant heavy-hole character with small mixing with light-hole envelopes (see for instance Ref. \citenum{Katsaros2011} for a summary of the properties of heavy-hole and light-hole states in a quasi-2D setup). We first analyze heavy-hole-like ground-states and we discuss the effects of strains and light-hole-like ground-states in Sec. \ref{sec:strain}. Because the mixing between the heavy-hole and the light-hole states is relatively small in the thin-film regime we derive effective quasi two-dimensional (quasi-2D) Hamiltonians by perturbation of the four-band LK model (defined in Ref. \citenum{Venitucci2019}). We describe the effective Hamiltonian method in Appendix \ref{ap:EH}. 
This approach is different from Ref. \citenum{Venitucci2019}, which solved the equations in a minimal basis set (and at higher order in perturbation) but missed some corrections on the hole masses and $g$-factors discussed in this work. As shown in Ref. \citenum{Ares2013}, at leading order in the perturbation theory the heavy-hole/light-hole coupling leads to renormalization of the in-plane heavy-hole effective mass to \begin{equation}\label{m_para_h} m_\parallel^{h}=\frac{m_0}{\gamma_1+\gamma_2-\gamma_{h,1}}, \end{equation} where the correction $\gamma_{h,1}$ reads\cite{Ares2013}: \begin{equation}\label{gammah} \gamma_{h,1}=\frac{6\gamma_3^2\hbar^2}{m_0}\sum_{n}\frac{|\langle\psi_1^h|k_z|\psi_n^l\rangle|^2}{E_1^h-E_n^l}. \end{equation} Here $m_0$ is the bare electron mass, $\gamma_1$, $\gamma_2$, $\gamma_3$ are the Luttinger parameters characterizing the dispersion of the valence bands\cite{Comment_gammah}, $\psi_n^{h/l}$ are the envelopes of the heavy/light-hole states in the thin film, and $k_z=-i\partial/\partial_z$. For a heavy-hole confined in an unstrained silicon quantum well the correction evaluates to $\gamma_{h,1}\approx 1.16$ while in germanium $\gamma_{h,1}\approx 3.56$. As a consequence the in-plane envelope wavefunction of the heavy hole satisfies the 2D Schr{\"o}dinger equation with the effective mass $m_\parallel^h$. We show in Fig. \ref{fig:setup} b) the sketches of the envelope functions and in Fig. \ref{fig:setup} c) the color maps of the envelopes numerically computed with the four-band $k\cdot p$ (LK) model. The heavy-hole/light-hole coupling furthermore affects the $g$-tensor components. Without mixing between the heavy-hole and light-hole states the $g$-tensor of the heavy hole is diagonal\cite{Venitucci2018,Venitucci2019,Comment_g0} in the $\{|J=3/2,J_z=3/2\rangle$, $|J=3/2,J_z=-3/2\rangle\}$ basis in use\cite{Comment_Heff}: \begin{equation}\label{g0} g_0^h=\textrm{diag}(0, 0, -6\kappa). \end{equation} It was shown in Refs. 
\citenum{Ares2013} and \citenum{Watzinger2016} that at leading order in the perturbation theory the heavy-hole/light-hole coupling also leads to a renormalization of the $g$-factor in the direction of strong confinement: \begin{equation}\label{gzh} g_z^{h}=-6\kappa+2\gamma_{h,1}. \end{equation} Moreover, in the perturbation theory the effective Zeeman Hamiltonian acquires transverse components and reads (see Appendix \ref{ap:EH}) \begin{equation}\label{Heff} H_{Z}^{h}= \begin{pmatrix} \frac{1}{2}g_z^{h}\mu_B B_z & \frac{2\sqrt{3}\langle R\rangle}{\Delta}\kappa\mu_B(B_x-iB_y) \\ \frac{2\sqrt{3}\langle R\rangle^*}{\Delta}\kappa\mu_B(B_x+iB_y) & -\frac{1}{2}g_z^{h}\mu_B B_z \end{pmatrix}, \end{equation} with\cite{Venitucci2018, Venitucci2019, Comment_A} \begin{equation}\label{R_Lutt} R=\frac{\hbar^2}{2m_0}\sqrt{3}[-\gamma_3(k_x^2-k_y^2)+2i\gamma_2k_xk_y], \end{equation} which we average over the heavy-hole envelope function in the $(xy)$ plane, $k_a=-i\partial/\partial a$ ($a=x,y$), and $\Delta=E_1^h-E_1^l$ is the energy gap between the topmost heavy-hole and light-hole states. For a hole strongly confined within the infinite well of width $L_z$ of the thin film, this energy gap is, neglecting strain and the influence of the split-off band: \begin{equation}\label{Delta} \Delta=\frac{2\pi^2\gamma_2\hbar^2}{m_0L_z^2}. \end{equation} Thus Eq. (\ref{Heff}) gives the dependence of the effective $g$-factors on in-plane confinement: \begin{subequations} \begin{align} &g_{x}^h=g_{y}^h=-\frac{6\gamma_3\kappa\hbar^2}{m_0\Delta}\langle k_x^2-k_y^2\rangle,\label{geff_para}\\ &g_{xy}^h=-g_{yx}^h=\frac{12\gamma_2\kappa\hbar^2}{m_0\Delta}\langle k_xk_y\rangle. \end{align} \end{subequations} With the confinement considered here, the off-diagonal element $g_{xy}$ vanishes \cite{Comment_A}.
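The quoted values of $\gamma_{h,1}$ and $\Delta$ can be cross-checked numerically. Assuming infinite-well envelopes along $z$, the matrix elements in Eq. (\ref{gammah}) are $|\langle\psi_1^h|k_z|\psi_n^l\rangle|=4n/[L_z(n^2-1)]$ for even $n$ (zero otherwise), and the $L_z$ dependence cancels against the energy denominators, so $\gamma_{h,1}$ depends only on the Luttinger parameters. A minimal sketch (the helper names are ours, not from any established package):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m0 = 9.1093837015e-31    # kg
eV = 1.602176634e-19     # J

def gamma_h1(gamma1, gamma2, gamma3, nmax=200):
    """Heavy-hole mass correction gamma_{h,1} of Eq. (gammah) for
    infinite-well envelopes: only even n couple to the ground state, and
    E_1^h - E_n^l = (pi^2 hbar^2 / 2 m0 Lz^2) [n^2(g1+2g2) - (g1-2g2)]."""
    n = np.arange(2, nmax, 2)
    num = n**2 / (n**2 - 1)**2
    den = n**2 * (gamma1 + 2*gamma2) - (gamma1 - 2*gamma2)
    return (192 * gamma3**2 / np.pi**2) * np.sum(num / den)

def gap_meV(gamma2, Lz_nm):
    """Heavy-/light-hole gap Delta = 2 pi^2 gamma2 hbar^2/(m0 Lz^2), meV."""
    Lz = Lz_nm * 1e-9
    return 2 * np.pi**2 * gamma2 * hbar**2 / (m0 * Lz**2) / eV * 1e3

print(gamma_h1(4.29, 0.34, 1.45))   # Si: ~1.16
print(gamma_h1(13.38, 4.24, 5.69))  # Ge: ~3.56
print(gap_meV(0.34, 10.0))          # Si, Lz = 10 nm: ~5 meV
```

The sum converges rapidly (the $n=2$ term dominates), which is why truncating the basis barely affects these numbers.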
\begin{figure} \centering \includegraphics[width=0.4\textwidth]{alpha_F.eps}\\ \includegraphics[width=0.4\textwidth]{alpha_Fp.eps}\\ \caption{a) The function $\f(\alpha_\parallel)=\langle k_y^2\rangle L_y^2/\pi^2$ and b) its derivative $\f'(\alpha_\parallel)$, where $\alpha_\parallel$ is the dimensionless parameter defined by Eq. (\ref{alpha_pl}). The blue curve corresponds to the numerical calculation; the orange and the green curves correspond to the weak and to the strong electric field asymptotics [Eq. (\ref{asymp})], respectively.} \label{fig_ffp} \end{figure} The hole motion is separable in the ($xy$) plane; the eigensolutions along $x$ are those of the 1D harmonic oscillator, and we numerically solve the Schr\"odinger equation along $y$ with the method of Fourier series\cite{Venitucci2019}. We introduce the scaling function \begin{equation}\label{sf} \f(\alpha_\parallel)=\frac{\langle k_y^2\rangle L_y^2}{\pi^2}, \end{equation} where we take the average over the ground state envelope function. The parameter $\alpha_\parallel$ quantifies the relative importance of structural and electric confinements: \begin{equation}\label{alpha_pl} \alpha_\parallel=\frac{2m_\parallel e\ef_yL_y^3}{\pi^3\hbar^2}=\Big(\frac{L_y}{\pi\ell_{\ef_y}}\Big)^3. \end{equation} In the above equation $m_\parallel$ can be the effective mass of a heavy hole or a light hole (the latter case will be discussed in Sec. \ref{sec:strain}). Also, $\ell_{\ef_y}=(\hbar^2/(2m_\parallel e\ef_y))^{1/3}$ is the characteristic length of confinement by the electric field $\ef_y$. The asymptotics of the scaling function (\ref{sf}) are \begin{equation}\label{asymp} \f(\alpha_\parallel)\approx\left\{ \begin{array}{rl} 1+c_1\alpha_\parallel^2\text{, } \alpha_\parallel\ll 1,\\ \frac{|a_1|}{3}\alpha_\parallel^{2/3}\text{, } \alpha_\parallel\gg 1, \end{array} \right. \end{equation} where $c_1\approx0.14$, and $a_1\approx-2.34$ is the first zero of the Airy function ${\rm Ai}$.
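The scaling function and both asymptotics can be reproduced with a simple finite-difference diagonalization. In dimensionless form the tilted hard-wall well obeys $-\psi''/\pi^2+\pi\alpha_\parallel u\,\psi=\tilde{E}\psi$ on $u\in[0,1]$, so that $\f(\alpha_\parallel)=\tilde{E}-\pi\alpha_\parallel\langle u\rangle$. A sketch (grid size and function name are our own choices):

```python
import numpy as np

def scaling_f(alpha, N=800):
    """f(alpha) = <k_y^2> L_y^2 / pi^2 for the ground state of a hard-wall
    well [0, 1] tilted by an electric field, in units where the zero-field
    ground-state energy is 1: H = -(1/pi^2) d^2/du^2 + pi*alpha*u."""
    u = np.linspace(0.0, 1.0, N + 2)[1:-1]   # interior points (Dirichlet)
    h = u[1] - u[0]
    kin = 1.0 / (np.pi * h) ** 2
    H = (np.diag(2.0 * kin + np.pi * alpha * u)
         + np.diag(-kin * np.ones(N - 1), 1)
         + np.diag(-kin * np.ones(N - 1), -1))
    E, V = np.linalg.eigh(H)
    psi2 = V[:, 0] ** 2                      # grid-normalized density
    return E[0] - np.pi * alpha * np.sum(psi2 * u)

a1 = -2.33810741                             # first zero of Airy Ai
print(scaling_f(0.0))                        # field-free box: -> 1
print(scaling_f(0.5), 1 + 0.14 * 0.5**2)     # weak-field asymptote
print(scaling_f(100.0), abs(a1) / 3 * 100**(2 / 3))  # strong-field asymptote
```

The strong-field limit follows from the virial theorem for the triangular (Airy) well, $\langle T\rangle=E/3$, which is how the $|a_1|/3$ prefactor arises.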
Plots of $\f$ and its derivative, and their comparison with the asymptotics of Eq. (\ref{asymp}), are shown in Fig. \ref{fig_ffp}. We denote $g_\parallel^h=g_{x}^h=g_{y}^h$, and introduce the extent of the wave function along $x$: \begin{equation}\label{eq:ell_x} \ell_x^2=2\langle x^2\rangle=\frac{\hbar}{(m_\parallel K)^{1/2}}=\frac{1}{2\langle k_x^2\rangle}, \end{equation} to get: \begin{equation}\label{eq:g_para} g_\parallel^h=\frac{6\gamma_3\kappa\hbar^2}{m_0\Delta}\Big(\frac{\pi^2\f(\alpha_\parallel^h)}{L_y^2}-\frac{1}{2\lx^2}\Big). \end{equation} The effective in-plane $g$-factor hence depends on the in-plane electric field. This can lead to coherent spin oscillations under $ac$ electrical driving\cite{Kato2003,Venitucci2018,Venitucci2019}, as we show in the next section. \section{$g$-tensor magnetic resonance}\label{sec:gtmr} The $g$-TMR mechanism has recently been analyzed numerically and analytically in Refs. \citenum{Venitucci2018} and \citenum{Venitucci2019}. The present set-up, with the $ac$ electric field applied along $y$, is a paradigmatic realization of this mechanism. It is practically realized when the same gate partly overlapping the channel is used to apply the static electric field $\ef_y$ and the $ac$ modulation $\ef_y^{ac}$. The Rabi oscillations then result from the electrical modulation of the principal $g$-factors $g_\parallel^h$ and $g_z^h$ in the anharmonic confinement potential $V_y(y)$ shaped by the structural confinement and transverse electric field $\ef_y$ \cite{Venitucci2018}. Here we give an alternative analytical derivation of the Rabi frequency based on the $g$-matrix formalism\cite{Kato2003,Venitucci2018,Venitucci2019}, and we discuss additional corrections that come from the vector potential terms derived in Appendix \ref{ap:g_corr}. In this formalism the Rabi frequency is computed at linear order in the applied magnetic field and in the $ac$ gate voltage.
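One can check numerically that, for a diagonal $g$-matrix $g=\textrm{diag}(g_\parallel,g_\parallel,g_z)$ with only $g_\parallel$ modulated by the gate, the angular factor of the $g$-matrix formula is maximal at $\theta_{\max}=\pi/2-\arctan\sqrt{|g_\parallel/g_z|}$ with maximum $1/(|g_\parallel/g_z|+1)$, as used below. A short sketch with illustrative (not material-specific) $g$-values:

```python
import numpy as np

def angular_factor(g_par, g_z, theta):
    """||(g B) x (dg B)|| / ||g B|| for g = diag(g_par, g_par, g_z),
    dg = diag(1, 1, 0), and unit field B = (sin theta, 0, cos theta)."""
    B = np.array([np.sin(theta), 0.0, np.cos(theta)])
    g, dg = np.diag([g_par, g_par, g_z]), np.diag([1.0, 1.0, 0.0])
    return np.linalg.norm(np.cross(g @ B, dg @ B)) / np.linalg.norm(g @ B)

g_par, g_z = 0.5, -3.0                       # illustrative values only
theta = np.linspace(1e-4, np.pi / 2 - 1e-4, 20001)
vals = np.array([angular_factor(g_par, g_z, t) for t in theta])
r = abs(g_par / g_z)
print(theta[vals.argmax()], np.pi / 2 - np.arctan(np.sqrt(r)))  # optimum angle
print(vals.max(), 1.0 / (r + 1.0))                              # maximum value
```

Setting $t=\tan\theta$ and maximizing $t/[\sqrt{1+t^2}\sqrt{1+r^2t^2}]$ gives $t^2=1/r$, which is the analytic optimum checked here.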
Because the box that contains the hole behaves as a parallel-plate capacitor, we can express the Rabi frequency in terms of the electric fields instead of gate voltages: \begin{equation}\label{fRg} f_{Rg} = \frac{\mu_B\ef_y^{ac}\|(g{\bf B})\times(\frac{\partial g}{\partial\ef_y}{\bf B})\|}{2h\| g{\bf B}\|}, \end{equation} with $h$ the Planck constant and $\ef_y^{ac}$ the amplitude of the $ac$ electric field along $y$: $\ef_y^{ac}(t)=\ef_y^{ac}\sin(\omega t)$. The qubit is resonantly driven at the average Larmor angular frequency $\omega_L = \mu_B\|g{\bf B}\|/\hbar$. With the $g$-matrix $g = \textrm{diag}(g_\parallel^h, g_\parallel^h, g_z^h)$ the Rabi frequency becomes \begin{equation} f_{Rg}^h= \frac{\mu_B \ef_y^{ac}|\frac{\partial g_\parallel^h}{\partial\ef_y}g_z^hB_\parallel B_z|}{2h\sqrt{(g_\parallel^h B_\parallel)^2+(g_z^h B_z)^2}}, \end{equation} with $B_\parallel=\sqrt{B_x^2+B_y^2}$. In the thin film regime the Rabi frequency has an approximate rotational symmetry with respect to the magnetic field orientation in the $(xy)$ plane\cite{Venitucci2019}. On the other hand it strongly depends on the angle $\theta$ between the magnetic field and the $z$ axis\cite{AresAPL,Venitucci2019} and reaches a maximum \begin{equation}\label{fRgmax} f_{Rg\max}^h= \frac{\mu_B B\ef_y^{ac}|\frac{\partial g_\parallel^h}{\partial\ef_y}|}{2h(|g_\parallel^h/g_z^h|+1)} \end{equation} at the angles $ \theta_{\max}=\pi/2\pm\arctan\Big(\sqrt{|g_\parallel^h/g_z^h|}\Big). $ Holes with a dominant heavy character fulfill $|g_\parallel^h|\ll |g_z^h|$, and the optimal angles can be approximated as \begin{equation}\label{thetamax} \theta_\textrm{max}\approx \pi/2\pm\sqrt{|g_\parallel^h/g_z^h|}. \end{equation} With Eqs.
(\ref{sf}) and (\ref{alpha_pl}) the Rabi frequency of a heavy hole becomes \begin{equation}\label{fRgmaxh} f_{Rg\max}^h\approx\frac{6\gamma_3|\kappa|\mu_B Be\ef_y^{ac}L_y\f'(\alpha_\parallel^h)}{\pi h(m_0/m_\parallel^h)\Delta}, \end{equation} together with the asymptotics of the derivative: \begin{equation} \f'(\alpha_\parallel^h) \approx \begin{dcases} \frac{4c_1m_\parallel^h e\ef_yL_y^3}{\pi^3\hbar^2},&L_y/\pi\ll\ell_{\ef_y},\\ \frac{2\pi|a_1|\ell_{\ef_y}}{9L_y},&L_y/\pi\gg\ell_{\ef_y}. \end{dcases} \end{equation} As a function of $\ef_y$ it reaches a maximum \begin{equation}\label{fRgmaxs} f_{Rg\max*}^h\approx\frac{0.6\gamma_3|\kappa|\mu_B Be\ef_y^{ac}L_y}{h(m_0/m_\parallel^h)\Delta} \end{equation} at $e\ef_{y*}\approx1.25\pi^3\hbar^2/(m_\parallel^h L_y^3)$, which is consistent with Eqs. (42) and (43) of Ref. \citenum{Venitucci2019}, given the different approximations made here and in Ref. \citenum{Venitucci2019}. For a heavy-hole spin qubit in silicon with $B=1\unit{T}$, $\ef_y^{ac}L_y\sim1\unit{mV}$, and $\Delta\sim5\unit{meV}$ ($L_z=10\unit{nm}$), this evaluates to $f_{Rg\max*}^h\sim 250\unit{MHz}$. There are corrections beyond Eq. (\ref{eq:g_para}) that break the rotational symmetry of the $g$-tensor in the $(xy)$ plane ($g_x^h\neq g_y^h$). In Ref. \citenum{Venitucci2019} such corrections arose in the perturbation series at higher orders in the parameter $L_z/L_y\ll 1$. We derive in Appendix \ref{ap:g_corr} other anisotropic corrections due to the electromagnetic vector potential (whose action was neglected in Ref. \citenum{Venitucci2019}). With the corrected $g$-factors we compute the maximal Rabi frequencies semi-analytically with Eq. (\ref{fRg}). In Fig. \ref{fig:comparison} we compare the semi-analytical results with the fully numerical calculations based on the exact solution of the four-band $k\cdot p$ model that follows from previously developed methods\cite{Venitucci2019,Comment_numerics}. We comment further on Fig.
\ref{fig:comparison} in Section \ref{sec:discussion}, where we will also compare with the IZ-EDSR effect that we describe in the next section. \section{Iso-Zeeman EDSR}\label{sec:iso_para} The $ac$ electric field may be aligned with the channel (along $x$) rather than perpendicular to it (along $y$). As the confinement is parabolic along $x$, such a modulation drives a real-space oscillation of the dot as a whole, with amplitude \begin{equation}\label{dx} \delta x=\frac{e\ef_x^{ac}}{K}=\frac{e\ef_x^{ac}m_\parallel\lx^4}{\hbar^2}, \end{equation} where $\ef_x^{ac}$ is the amplitude of the oscillating electric field $\ef_x^{ac}(t)=\ef_x^{ac}\sin(\omega t)$. The qubit is resonantly driven so that the angular frequency $\omega$ is set to the Larmor frequency of the effective two-level system. Then the spin-orbit interaction leads to an effective magnetic field that is position-dependent \cite{Aleiner2001,Levitov2003}, and the oscillating hole experiences an effective time-dependent magnetic field that can lead to coherent oscillations of the (pseudo) spin \cite{Rashba2003,Golovach2006,Nowack2007}. In the typical gate configuration of Refs. \citenum{Maurand2016,Crippa2018,Venitucci2018}, the effective Rashba spin-orbit coupling is mostly governed by the in-plane static electric field $\ef_y$ because the electrical polarizability is much weaker along $z$ due to the strong confinement (the effect of a static electric field $\ef_z$ will be briefly discussed in Section \ref{sec:discussion}). With the method presented in Appendix \ref{ap:EH} we derive the Rashba Hamiltonian (see details of the calculation in Appendix \ref{ap:Hso_para}): \begin{equation}\label{HR1D_para} H_{so\parallel}=\frac{\hbar^2}{m_\parallel\ell_{so\parallel}}k_x\sigma_z. \end{equation} The basis is the same as before, and we use the Pauli matrix notation to write the Hamiltonian in a compact form.
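The two forms of Eq. (\ref{dx}) (spring constant versus confinement length) can be checked against each other, and the amplitude estimated for parameters of the order of those used in the figures ($\ef_x^{ac}=(1/30)\unit{mV/nm}$, $x_0=10\unit{nm}$, the silicon heavy-hole in-plane mass). A hedged numerical sketch:

```python
import numpy as np

e = 1.602176634e-19       # C
hbar = 1.054571817e-34    # J s
m0 = 9.1093837015e-31     # kg

Ex_ac = (1.0 / 30.0) * 1e6              # V/m, i.e. (1/30) mV/nm
m_par = m0 / 3.47                       # Si: m0/(gamma1+gamma2-gamma_h1)
x0 = 10e-9                              # (pi hbar)^(1/2)/(m0 K)^(1/4)
K = np.pi**2 * hbar**2 / (m0 * x0**4)   # harmonic constant encoded by x0
lx2 = hbar / np.sqrt(m_par * K)         # l_x^2 = hbar / sqrt(m K)

dx_spring = e * Ex_ac / K                         # delta x = e E / K
dx_length = e * Ex_ac * m_par * lx2**2 / hbar**2  # = e E m l_x^4 / hbar^2
print(dx_spring * 1e9, dx_length * 1e9)           # ~0.44 nm each
```

The exact agreement of the two expressions simply restates $\lx^4=\hbar^2/(m_\parallel K)$; the sub-nanometer amplitude shows how small a displacement suffices for the driving discussed below.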
In the thin film limit and for $L_y/\pi<\ell_{\ef_y}$ the inverse effective spin-orbit length given by Eq. (\ref{l_so_inv}) of Appendix \ref{ap:Hso_para} is well approximated by: \begin{equation}\label{l_so_inv_lin} \ell_{so\parallel}^{-1}\approx\frac{3\gamma_2\gamma_3e\ef_y}{(m_0/m_\parallel)^2\Delta}. \end{equation} In Fig. \ref{fig:lso} we plot the inverse spin-orbit length computed with Eq. (\ref{l_so_inv}) and we compare it with the approximation of Eq. (\ref{l_so_inv_lin}). \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{Ey_lso_inv_simple.eps} \caption{Inverse spin-orbit length as a function of the static electric field along $y$. We compare the semi-analytical formula Eq. (\ref{l_so_inv}) of Appendix \ref{ap:Hso_para} (solid line) with the linear approximation Eq. (\ref{l_so_inv_lin}) (dashed line) for a silicon channel with dimensions $L_z=4\unit{nm}$ and $L_y=30\unit{nm}$.} \label{fig:lso} \end{figure} The Rashba spin-orbit interaction Eq. (\ref{HR1D_para}) implies the time-dependent effective Zeeman interaction \cite{Golovach2006} (see Appendix \ref{ap:EDSR}) \begin{equation}\label{HZt} \delta H_Z(t)=\frac{\delta x}{\ell_{so\parallel}}\sin(\omega t)\mu_B(g_x B_x\sigma_y-g_y B_y\sigma_x), \end{equation} where $\delta x$ is given by Eq. (\ref{dx}). The time-dependent effective magnetic field associated with Eq. (\ref{HZt}) is perpendicular to the external magnetic field\cite{Golovach2006} and immediately yields the Rabi frequency \begin{equation}\label{fRipara} f_{Ri\parallel}=\frac{\delta x}{h\ell_{so\parallel}}\mu_B\sqrt{g_x^2 B_x^2+g_y^2 B_y^2}. \end{equation} The result does not depend on the character of the hole and holds for a mostly heavy as well as a mostly light hole. The light-hole case will be addressed in Sec. \ref{sec:strain}. For the heavy hole the Rabi frequency Eq.
(\ref{fRipara}) is approximately symmetric with respect to the magnetic field orientation in the $(xy)$ plane and reaches a maximum when the magnetic field is in the equatorial plane ($B_z=0$): \begin{equation}\label{fRiparamax} f_{Ri\parallel\max}^h=\frac{\delta x}{h\ell_{so\parallel}}\max(|g_x^h|,|g_y^h|)\mu_B B, \end{equation} where $g_x^h$ and $g_y^h$ are given by Eq. (\ref{g_h_corr}). When the electric field is so strong that $\ell_{\ef_y}<L_z/\pi$, but still $\Delta<\Delta_{so}$ ($\Delta_{so}$ being the spin-orbit energy gap between the split-off bands and the heavy- and light-hole bands), then the Rashba spin-orbit coupling can be addressed in the quasi-two-dimensional regime with strong confinement in the direction of the electric field. We numerically find (see Fig. \ref{fig:loglog}a of Appendix \ref{ap:add_figures}) that for a strong electric field the Rabi frequency decreases as $\ef_y^{-1/3}$. Qualitatively, the energy separation between the confined states is now dominated by the electric field ($\propto\ell_{\ef_y}^{-2}$) and the matrix elements responsible for the spin-orbit coupling are linear in the momentum in the direction of the strong confinement ($\propto\ell_{\ef_y}^{-1}$). Since the $g$-factors saturate to constants for strong electric fields (see Fig. \ref{fig:loglog}b of Appendix \ref{ap:add_figures}), the Rabi frequency must be asymptotically proportional to $\ell_{\ef_y}\propto\ef_y^{-1/3}$ according to Eq. (\ref{fRipara}). The Rabi frequency is therefore maximum in the range where the energies of confinement in the $y$ and $z$ directions are comparable. This intermediate regime is in fact similar to the nanowire (quasi-1D) configuration\cite{Kloeffel2018} where the Rashba spin-orbit coupling remains of the form of Eq. 
(\ref{HR1D_para}) with an inverse spin-orbit length that reads: \begin{equation} \ell_{so\parallel}^{-1}=C\frac{e\ef_y}{\Delta}, \end{equation} $\Delta$ being the energy splitting between the two relevant Kramers pairs of the four-band LK model, and $C$ a dimensionless factor that depends on the LK parameters. We have numerically computed the maximum EDSR Rabi frequencies for silicon and germanium. They are reached for a magnetic field oriented in the $y$-direction since the component of the $g$-tensor with the largest magnitude is $g_y$ in this regime (see Fig. \ref{fig:loglog}b of Appendix \ref{ap:add_figures}). For silicon with $L_z=10\unit{nm}$ and in the absence of strain, the Rabi frequency tends to saturate when $\ef_y\gtrsim 10\unit{mV/nm}$ and reaches a maximum $f_{Ri\max*}\sim270\unit{MHz}$ at $\ef_{y*}\sim40\unit{mV/nm}$. This large field is, however, practically beyond the operating range of CMOS qubits (and actually above the breakdown field of bulk silicon). In the present and in the previous sections we have analyzed the IZ-EDSR and the $g$-TMR as two distinct mechanisms. However, we recall that IZ-EDSR can be accompanied by a $g$-TMR-like contribution if the confinement potential along $x$ is not strictly parabolic\cite{Crippa2018}. \begin{figure*} \centering \includegraphics[width=0.4\textwidth]{map_fRgTMR_A_Lz4nm_labels.eps} \includegraphics[width=0.4\textwidth]{map_fRgTMR_A_Lz10nm_labels.eps} \includegraphics[width=0.4\textwidth]{map_fRIZ_A_Lz4nm_labels.eps} \includegraphics[width=0.4\textwidth]{map_fRIZ_A_Lz10nm_labels.eps} \caption{Maps of the Rabi frequency for the $g$-TMR effect (maps a) and b)) and the IZ-EDSR effect (maps c) and d)), as a function of the lateral electric field $\ef_y$ and biaxial strain $\varepsilon_\parallel$. The maps are obtained from a numerical solution of the four-band $k\cdot p$ model\cite{Comment_numerics} in silicon.
The lengths that characterize the lateral confinement are $x_0\equiv(\pi\hbar)^{1/2}/(m_0K)^{1/4}=10\unit{nm}$ and $L_y=30\unit{nm}$. The height of the semiconductor channel is $L_z=4\unit{nm}$ for maps a) and c), and it is $L_z=10\unit{nm}$ for maps b) and d). The dashed black lines outline the constant-strain and constant-electric-field cuts shown in Figs. \ref{fig:comparison} and \ref{fig:strain}. The red dashed lines mark the critical strain $\varepsilon_{\parallel}^{\ast}$ that separates the mostly heavy-hole from the mostly light-hole ground state.} \label{fig:map} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.4\textwidth]{Ey_fR_A_Lz4nm_eps0.eps} \includegraphics[width=0.4\textwidth]{Ey_fR_A_Lz10nm_eps0.eps}\\ \includegraphics[width=0.4\textwidth]{Ey_fR_A_Lz4nm_eps0p7.eps} \includegraphics[width=0.4\textwidth]{Ey_fR_A_Lz10nm_eps0p7.eps} \caption{Comparison of the maximal $g$-TMR and IZ-EDSR Rabi frequencies as given by Eq. (\ref{fRg}) with the $g$-factors of Eqs. (\ref{g_h_corr}) and (\ref{g_l_corr}), and by Eq. (\ref{fRipara}) (dashed lines), with numerical calculations based on the four-band $k\cdot p$ model \cite{Comment_numerics} (full lines). The material is silicon and the parameters are $\varepsilon_\parallel=0\%$ for Figures a) and b), $\varepsilon_\parallel=0.7\%$ for Figures c) and d), $L_z=4\unit{nm}$ for Figures a) and c), $L_z=10\unit{nm}$ for Figures b) and d), $B=1\unit{T}$, $\ef_{x/y}^{ac}=(1/30)\unit{mV/nm}$, $L_y=30\unit{nm}$, and $x_0=10\unit{nm}$.} \label{fig:comparison} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.4\textwidth]{eps_fRgTMR_A_Lz4nm_Ey0p5.eps} \includegraphics[width=0.4\textwidth]{eps_fRgTMR_A_Lz10nm_Ey0p5.eps}\\ \includegraphics[width=0.4\textwidth]{eps_fRIZ_A_Lz4nm_Ey0p5.eps} \includegraphics[width=0.4\textwidth]{eps_fRIZ_A_Lz10nm_Ey0p5.eps} \caption{Dependence of the $g$-TMR [a) and b)] and IZ-EDSR [c) and d)] Rabi frequencies on biaxial strain.
We compare the numerics\cite{Comment_numerics} (full line with symbols) with the semi-analytical formulas (dashed lines). The dashed vertical lines in magenta mark the critical strain $\varepsilon_\parallel^{\ast}$ that separates the heavy- and light-hole ground states. The material is silicon and the parameters are $L_z=4\unit{nm}$ for Figures a) and c), $L_z=10\unit{nm}$ for Figures b) and d), $B=1\unit{T}$, $\ef_{x/y}^{ac}=(1/30)\unit{mV/nm}$, $L_y=30\unit{nm}$, and $x_0=10\unit{nm}$.} \label{fig:strain} \end{figure*} \section{Effect of biaxial strain, hole with mainly light character}\label{sec:strain} The $g$-TMR and IZ-EDSR Rabi frequencies both depend on the gap $\Delta$ that governs the mixing between heavy- and light-hole states by lateral confinement and electric fields. Rabi oscillations are indeed forbidden in the absence of such mixing, as discussed in Refs. \citenum{Kloeffel2018} and \citenum{Venitucci2019}. Biaxial strain in the ($xy$) plane controls the magnitude of this gap, and can even switch the character of the ground state \cite{Venitucci2019}. With strain the gap Eq. (\ref{Delta}) indeed becomes: \begin{equation}\label{Delta_strain} \Delta\rightarrow\Big|\frac{2\pi^2\gamma_2\hbar^2}{m_0L_z^2}-\Delta_{BP}\Big|. \end{equation} Here $\Delta_{BP}=-2(\nu+1)b_v\varepsilon_\parallel$ is the Bir-Pikus energy shift due to biaxial strain \cite{Venitucci2019,BirPikusBook}, with $\nu=2c_{12}/c_{11}$ the biaxial Poisson ratio, $c_{11}$, $c_{12}$ the elastic constants of the semiconductor, $b_v$ the uniaxial valence band deformation potential, and $\varepsilon_\parallel$ the in-plane strain. In the regime $\Delta_{BP}\gg 2\pi^2\gamma_2\hbar^2/(m_0L_z^2)$ the ground state has a dominant light-hole character. The transition from a mostly heavy- to a mostly light-hole ground state actually takes place at the strain: \begin{equation} \varepsilon_{\parallel}^{\ast}=\frac{\pi^2\gamma_2\hbar^2}{(\nu+1)|b_v|m_0L_z^2}.
\end{equation} Note that $\varepsilon_{\parallel}^{\ast}$ can be very small in silicon ($\varepsilon_{\parallel}^{\ast}=0.069\%$ at $L_z=10$ nm), so that the transition to a light-hole ground state may result from unintentional process and cooldown strains \cite{Venitucci2018}. With the methods of Section \ref{sec:HZ} we calculate the $g$-tensor corrections due to the coupling between the light-hole and the heavy-hole states. The $g$-tensor of a light-hole state is also diagonal and, by including the dominant perturbative corrections, its elements read \begin{subequations}\label{g_l} \begin{align} &g_x^{l}=-4\kappa+\delta g_{\parallel}^l+\delta g_{x}^l,\\ &g_y^{l}=-4\kappa-\delta g_{\parallel}^l+\delta g_{y}^l,\\ &g_z^{l}=-2\kappa-2\gamma_{l,1}, \end{align} \end{subequations} where $\delta g_{\parallel}^l$ is defined as \begin{equation}\label{eq:g_para_l} \delta g_\parallel^l=\frac{6\gamma_3\kappa\hbar^2}{m_0\Delta}\Big(\frac{\pi^2\f(\alpha_\parallel^l)}{L_y^2}-\frac{1}{2\lx^2}\Big), \end{equation} which is analogous to Eq. (\ref{eq:g_para}) but computed with the gap Eq. (\ref{Delta_strain}) and the in-plane light-hole effective mass \begin{equation}\label{m_para_l} m_\parallel^{l}=\frac{m_0}{\gamma_1-\gamma_2-\gamma_{l,1}}. \end{equation} The parameter $\gamma_{l,1}$ is \begin{equation}\label{gammal} \gamma_{l,1}=\frac{6\gamma_3^2\hbar^2}{m_0}\sum_{n}\frac{|\langle\psi_1^l|k_z|\psi_n^h\rangle|^2}{E_1^l-E_n^h}, \end{equation} where the energy denominator includes the Bir-Pikus energy shift. In an infinite square well potential along $z$ the matrix elements of the numerator of Eq. (\ref{gammal}) are the same as for heavy holes [Eq. (\ref{gammah})]. We compute the $g$-TMR Rabi frequency with the $g$-matrix formalism used in Sec. \ref{sec:gtmr}. We first neglect the corrections $\delta g_x^l$ and $\delta g_y^l$ that are due to the vector potential and are calculated in Appendix \ref{ap:g_corr}.
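As a numerical cross-check of the critical strain introduced above: with the silicon parameters collected in Table \ref{tab:comparison} and $L_z=10\unit{nm}$, the formula indeed gives $\varepsilon_\parallel^\ast\approx0.069\%$. A short sketch (the helper name is ours):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m0 = 9.1093837015e-31    # kg
eV = 1.602176634e-19     # J

def eps_star_percent(gamma2, bv_eV, nu, Lz_nm):
    """Critical biaxial strain (in %) at which the Bir-Pikus shift cancels
    the confinement-induced heavy-/light-hole gap:
    eps* = pi^2 gamma2 hbar^2 / ((nu+1) |b_v| m0 Lz^2)."""
    conf = np.pi**2 * gamma2 * hbar**2 / (m0 * (Lz_nm * 1e-9)**2)  # J
    return 100.0 * conf / ((nu + 1.0) * abs(bv_eV) * eV)

print(eps_star_percent(0.34, -2.10, 0.77, 10.0))  # Si, Lz = 10 nm: ~0.069
```

The same helper shows the $1/L_z^2$ scaling: thinner films require proportionally more strain to switch the ground-state character.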
With the $g$-tensor elements given above we find that in strongly strained silicon and germanium the Rabi frequency is maximized for magnetic field components $(B_x,B_y,B_z)=(\pm\frac{B}{\sqrt{2}},\pm\frac{B}{\sqrt{2}},0)$ and reaches: \begin{equation}\label{fRgmaxl} f_{Rg\max}^l\approx\frac{6\gamma_3|\kappa|\mu_B Be\ef_y^{ac}L_y\f'(\alpha_\parallel^l)}{\pi h(m_0/m_\parallel^l)\Delta}. \end{equation} We also note that the IZ-EDSR frequency for the light holes remains of the form of Eq. (\ref{fRipara}) with the corresponding effective mass and effective $g$-factors [Eqs. (\ref{m_para_l}) and (\ref{g_l})]. Then the maximal IZ-EDSR frequency is: \begin{equation}\label{fRiparamax_l} f_{Ri\parallel\max}^l=\frac{\delta x}{h\ell_{so\parallel}}\max(|g_x^l|,|g_y^l|)\mu_B B. \end{equation} In Fig. \ref{fig:map} we plot the color maps of the $g$-TMR and IZ-EDSR Rabi frequencies as functions of the parallel electric field $\ef_y$ and the in-plane strain. In Figs. \ref{fig:comparison} and \ref{fig:strain} we show the dependence of the Rabi frequencies on the electric field and on strain, respectively, and we compare the semi-analytical formulas (which include the anisotropic corrections $\delta g_x$ and $\delta g_y$) with the numerical calculations. We discuss these figures in the next section. \section{Discussion}\label{sec:discussion} \subsection{Comparison between $g$-TMR and IZ-EDSR} The similarities between the static electric field configurations of Sections \ref{sec:gtmr} and \ref{sec:iso_para} allow for a comparison between the $g$-tensor magnetic resonance and iso-Zeeman electric dipole spin resonance effects. In the regime $\ell_{\ef_y}\gg L_y/\pi$ the ratio of the two Rabi frequencies (both linear in the static electric field) is (neglecting the corrections of Appendix \ref{ap:g_corr}) \begin{equation}\label{eq:ratio} \frac{f_{Rg\max}}{f_{Ri\parallel\max}}\approx\frac{|\kappa|}{\gamma_2|g_\parallel|}\frac{m_0}{m_\parallel}\frac{\ef_y^{ac}}{\ef_x^{ac}}\Big(\frac{L_y}{\pi\lx}\Big)^4.
\end{equation} It is then clear that a key factor in the relative efficiency of $g$-TMR and IZ-EDSR manipulations is the ratio between the characteristic lengths of confinement along $x$ and $y$. If $\lx\gg L_y/\pi$ (with $\ef_x^{ac}$ and $\ef_y^{ac}$ of comparable magnitudes) then the IZ-EDSR oscillations can be faster than the $g$-TMR ones, as illustrated in Fig. \ref{fig:Ey_fR_x015nm} of Appendix \ref{ap:add_figures}. For heavy holes, however, it turns out that $g_\parallel^h$ is quite small, which limits the IZ-EDSR Rabi frequencies ($g_\parallel^h\propto L_z^2$ so that $f_{Ri\parallel\max}^h\propto L_z^4$ when $L_z\to0$). In contrast, we expect much more efficient IZ-EDSR manipulation for light holes (see Figs. \ref{fig:comparison} and \ref{fig:strain}). Alternatively, the static electric field may be applied along $z$ in order to lift this limitation on the $g$-factor of heavy holes; yet polarizing the hole envelope in this direction is more challenging because of the strong confinement. In fact, the IZ-EDSR Rabi frequency $f_{Ri\perp\max}^h$ of heavy holes for a static electric field $\ef_z$ perpendicular to the thin film remains of the same (fourth) order with respect to $L_z/\min(L_y, \pi\ell_{\ef_y})\ll 1$ (see Appendix \ref{ap:Hso_perp}). For $L_y/\pi<\ell_{\ef_y}$ we find\cite{Bulaev2007, Marcellina2017, Terrazos2020} that $f_{Ri\parallel\max}^h/f_{Ri\perp\max}^h\sim(\gamma_1/\gamma_2)(\ef_y/\ef_z)$ when $\gamma_2\ll\gamma_1$. In silicon, the IZ-EDSR should therefore be much more efficient when the static electric field is parallel to the thin film (along $y$) than when it is perpendicular (along $z$). We have numerically verified (see Fig. \ref{fig:Eyz_fRi} of Appendix \ref{ap:add_figures}) that this is indeed the case when $\ef_y\sim\ef_z$. In germanium, however, the two configurations show Rabi frequencies with comparable magnitudes for electric fields in the few $\unit{mV/nm}$ range.
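For reference, the ratio of Eq. (\ref{eq:ratio}) follows from dividing Eq. (\ref{fRgmaxh}), with the weak-field asymptote of $\f'$, by Eq. (\ref{fRipara}) with $\ell_{so\parallel}^{-1}$ from Eq. (\ref{l_so_inv_lin}) and $\delta x$ from Eq. (\ref{dx}) (writing $|g_\parallel|$ for $\max(|g_x|,|g_y|)$). This is our reconstruction of the algebra, which is not spelled out in the text; the residual numerical factor $8c_1\approx1.1$ is absorbed into the $\approx$ sign:

```latex
\frac{f_{Rg\max}}{f_{Ri\parallel\max}}
  = \frac{6\gamma_3|\kappa|\mu_B B\, e\ef_y^{ac}L_y}{\pi h(m_0/m_\parallel)\Delta}
    \cdot\frac{4c_1 m_\parallel e\ef_y L_y^3}{\pi^3\hbar^2}
    \Big/\left[\frac{e\ef_x^{ac} m_\parallel\lx^4}{\hbar^2}
    \cdot\frac{3\gamma_2\gamma_3\, e\ef_y}{(m_0/m_\parallel)^2\Delta}
    \cdot\frac{|g_\parallel|\mu_B B}{h}\right]
  = 8c_1\,\frac{|\kappa|}{\gamma_2|g_\parallel|}\,\frac{m_0}{m_\parallel}\,
    \frac{\ef_y^{ac}}{\ef_x^{ac}}\Big(\frac{L_y}{\pi\lx}\Big)^4 .
```

Note that $\ef_y$, $\Delta$, $\gamma_3$, and one power of $m_\parallel$ cancel between the two mechanisms, which is why the ratio depends on the static field only through the weak-field condition.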
For light holes on the other hand $f_{Ri\parallel\max}^l/f_{Ri\perp\max}^l\sim(\gamma_1/\gamma_3)(\ef_y/\ef_z)(\langle k_y^2\rangle L_z^2)^{-1}$, when the gap $\Delta$ including strain is of the order of the confinement gap at zero strain ($\varepsilon_\parallel\simeq 2\varepsilon_\parallel^\ast$). This ratio is typically large since $\langle k_y^2\rangle L_z^2\ll 1$. Applying the static electric field along $y$, as done in this study, is therefore always much more efficient. Fig. \ref{fig:map} shows the color maps of the numerically computed Rabi frequencies as a function of the static electric field and biaxial strain. Figures \ref{fig:map}a and \ref{fig:map}b show the $g$-TMR Rabi frequency for $L_z=4\unit{nm}$ and $L_z=10\unit{nm}$, while Figures \ref{fig:map}c and \ref{fig:map}d show the IZ-EDSR Rabi frequency for $L_z=4\unit{nm}$ and $L_z=10\unit{nm}$. The $g$-TMR and IZ-EDSR Rabi frequencies vanish along the line $\ef_y=0$. Breaking the inversion symmetry of the channel with a static electric field is indeed a prerequisite for both mechanisms\cite{Kloeffel2018,Venitucci2018,Venitucci2019}. There is also a quasi-horizontal dip visible in Fig. \ref{fig:map}, near (but not at) the strain $\varepsilon_\parallel^\ast$ where the heavy- and light-hole ground states anti-cross ($\Delta=0$). This feature is also clearly visible in Fig. \ref{fig:strain}, and has already been identified in Ref. \citenum{Venitucci2019}. As it takes place near $\Delta = 0$, it is not captured by the present semi-analytical models. It actually arises when the qubit states and the excited states that are coupled by the static and $ac$ electric fields share very similar Bloch functions. Indeed, the Zeeman Hamiltonian cannot couple such states (because their envelopes are, by design, orthogonal), so that the real space motion induced by the $ac$ electric field does not come along with pseudo-spin rotations.
In general, this condition is met near $\Delta = 0$, because the qubit states rapidly switch from almost pure heavy- to almost pure light-hole states, and therefore cross the composition of the relevant excited states. In Figure \ref{fig:comparison} we show the Rabi frequencies computed semi-analytically with Eqs. (\ref{fRg}), (\ref{g_h_corr}), (\ref{g_l}), and (\ref{fRiparamax}) as a function of the static electric field $\ef_y$, and we compare them with the numerical calculations based on the four-band LK model. Figures \ref{fig:comparison}a and \ref{fig:comparison}b are computed at zero strain whereas Figures \ref{fig:comparison}c and \ref{fig:comparison}d are computed at $\varepsilon_\parallel = 0.7\%$, where the ground state is mostly light-hole. In the thin film regime $L_z\ll L_y$ and for small electric fields such that $\ell_{\ef_y}>L_y/\pi$ we find good agreement between the analytical and the numerical calculations for both $g$-TMR and IZ-EDSR. We also correctly predict the electric field optimum for the $g$-TMR Rabi frequency at $\ell_{\ef_y}\sim L_y/\pi$. For a stronger electric field such that $\ell_{\ef_y}<L_y/\pi$ the analytical expressions can significantly differ from the numerics. We attribute these discrepancies to the breakdown of lowest-order perturbation theory, because the thin film condition may not be strictly fulfilled. We note that the $g$-TMR Rabi frequency reaches a maximum at rather weak electric field while the IZ-EDSR Rabi frequency increases continuously over a wide range of electric field (see Fig. \ref{fig:lso}). The $g$-TMR Rabi frequency indeed decreases rapidly once the hole is squeezed by the static electric field $\ef_y$ and cannot be dragged efficiently anymore by the $ac$ electric field along $y$. On the contrary, the motion along $x$ is little hampered, and the direct Rashba spin-orbit coupling responsible for the IZ-EDSR oscillations is enhanced over a wide range of $\ef_y$.
For heavy holes in silicon, however, $g$-TMR remains more efficient than IZ-EDSR over the practical range of fields reached at low inversion density in CMOS devices typical of Refs. \citenum{Maurand2016} and \citenum{Crippa2018}. $g$-TMR shows, nonetheless, a more complex dependence on the magnetic field orientation (the optimal orientation showing, in particular, dot-to-dot variability, as suggested by Eq. (\ref{thetamax})). For light holes in silicon, IZ-EDSR can be more efficient than $g$-TMR at moderate electric fields. IZ-EDSR requires, on the other hand, at least two gates (for confinement and manipulation), whereas $g$-TMR can be achieved with a single gate both confining the hole and shaking the dot\cite{Venitucci2018}. As discussed above, another way to promote IZ-EDSR over $g$-TMR (even for heavy holes) is to make the potential softer along $x$ ($\lx\gg L_y/\pi$) in order to enhance the motion of the dot, at the possible expense of an increased sensitivity to disorder along the channel. Fig. \ref{fig:strain} shows the Rabi frequencies as a function of strain at a fixed electric field. The $g$-TMR and IZ-EDSR Rabi frequencies show a complex dependence on strain near $\varepsilon_\parallel=\varepsilon_\parallel^\ast$, characterized by a broad peak (due to the enhanced heavy- and light-hole mixing) split by the dip discussed above. The Rabi frequencies decrease at large positive (tensile) and negative (compressive) strains as the heavy- and light-hole mixing gets inhibited by the increasing $|\Delta|$. The spin-orbit coupling strength in the devices can, therefore, be tuned by strain, then further modulated by the static electric field. This governs not only the Rabi frequency, but also the relaxation and coherence times of the qubits\cite{Li2020}.
Compressive strains, for example (as encountered in epitaxial germanium layers \cite{Lawrie2019,Scappucci2020,vanRiggelen2020,Hendrickx2020,Hendrickx2020_four_qubit}), stabilize an almost pure heavy-hole state, which can increase $T_1$ and $T_2$ even faster than it decreases the Rabi frequency ($f_R$ being proportional to a dipole matrix element, and the electrical contributions to $1/T_1$ and $1/T_2^*$ to a dipole matrix element squared, except for quasi-static $1/f$ noise \cite{Paladino2014}). This might also ease the management of exchange interactions and reduce their variability. Strains and electric fields must, therefore, be carefully engineered in order to optimize the overall performance of the devices. \subsection{Dependence on material and channel orientation} The Rabi frequencies depend on the host material through the Luttinger parameters $\gamma_1$, $\gamma_2$, $\gamma_3$, and through the Zeeman parameter $\kappa$. In order to compare channel materials and orientations, we have extracted the material-dependent prefactors of the $g$-TMR and IZ-EDSR Rabi frequencies of heavy ($h$) and light ($l$) holes. In the small electric field regime $\ell_{\ef_y}>L_y/\pi$, the maximal Rabi frequencies are proportional to: \begin{subequations}\label{zeta_110} \begin{align} &\zeta_{[110]}^{g{\rm -TMR},h}=\frac{\gamma_3\max(|\kappa-2\gamma_3\eta_{h,1}|,|\kappa-2\gamma_2\eta_{h,1}|)}{\gamma_2(\gamma_1+\gamma_2-\gamma_{h,1})^2},\\ &\zeta_{[110]}^{{\rm IZ-EDSR},h}=\frac{\gamma_3^2\max(|\kappa-2\gamma_3\eta_{h,1}|,|\kappa-2\gamma_2\eta_{h,1}|)}{\gamma_2(\gamma_1+\gamma_2-\gamma_{h,1})^2},\\ &\zeta_{[110]}^{g{\rm -TMR},l}=\frac{\gamma_3|\kappa+(\gamma_3-\gamma_2)\eta_{l,1}|}{(\gamma_1-\gamma_2-\gamma_{l,1})^2},\\ &\zeta_{[110]}^{{\rm IZ-EDSR},l}=\frac{\gamma_2\gamma_3|\kappa|}{(\gamma_1-\gamma_2-\gamma_{l,1})^2}. \end{align} \end{subequations} The $g$-factor corrections calculated in Appendix \ref{ap:g_corr} ($\eta$ terms) are included in these prefactors. 
The $\zeta_{[110]}$'s of heavy holes are computed at zero strain, where the gap $\Delta$ is set by vertical confinement. Those of light holes are computed at the same $\Delta$, which we assume is controlled by strain. The values of the $\zeta_{[110]}$'s in silicon and germanium are collected in Table \ref{tab:comparison}. We emphasize that the $\zeta_{[110]}$'s are intended for a comparison between different materials for a given mechanism, but not for a comparison between different mechanisms. \begin{table} \begin{tabular}{ c r r } \hline \hline \noalign{\smallskip} Material parameters & Si & Ge \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\gamma_1$ & 4.29 & 13.38 \\ $\gamma_2$ & 0.34 & 4.24 \\ $\gamma_3$ & 1.45 & 5.69 \\ $\kappa$ & $-0.42$ & 3.41 \\ $\gamma_{h,1}$ & 1.16 & 3.56 \\ $\eta_{h,1}$ & 0.08 & 0.20\\ $b_{v}\,[\unit{eV}]$ & $-2.10$ & $-2.86$\\ $\nu=2c_{12}/c_{11}$ & 0.77 & 0.73\\ \noalign{\smallskip} \hline \noalign{\smallskip} $\zeta_{[110]}^{g{\rm -TMR},h}$ & 0.23 & 0.012\\ \noalign{\smallskip} \hline \noalign{\smallskip} $\zeta_{[110]}^{{\rm IZ-EDSR},h}$ & 0.34 & 0.067 \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\zeta_{[110]}^{g{\rm -TMR},l}$ & 0.064 & 0.33\\ \noalign{\smallskip} \hline \noalign{\smallskip} $\zeta_{[110]}^{{\rm IZ-EDSR},l}$& 0.013 & 0.98\\ \noalign{\smallskip} \hline \hline \end{tabular} \caption{Rabi frequency dependence on the materials and comparison between silicon and germanium. The scaling factors $\zeta_{[110]}$ are those of Eqs. (\ref{zeta_110}). We evaluate the heavy-hole parameter $\gamma_{h,1}$ as well as $\zeta_{[110]}^{g{\rm -TMR},h}$ and $\zeta_{[110]}^{{\rm IZ-EDSR},h}$ at zero strain. For the light-hole case we take the large strain limit such that the energy gap between the light-hole and the heavy-hole states is dominated by the strain instead of the structural confinement, so that $\gamma_{l,1}\ll\gamma_1-\gamma_2$ and $\eta_{l,1}\approx 1$. 
We emphasize that the numbers here illustrate the differences between silicon and germanium through their material dependent parameters; however, they are not meant for a comparison between $g$-TMR and IZ-EDSR, nor for a comparison between the heavy-hole and light-hole cases. The Luttinger and strain parameters are borrowed from Ref. \citenum{Venitucci2019}.} \label{tab:comparison} \end{table} In Table \ref{tab:comparison} we note clear differences between silicon and germanium. As a main trend, electrically driving a heavy hole is expected to be more efficient in silicon than in germanium (for a given dot size). Indeed, the Rabi frequency at given static electric and magnetic fields is inversely proportional to a Luttinger parameter (IZ-EDSR) or to a Luttinger parameter squared ($g$-TMR), because heavier particles respond more strongly to the static electric field $\ef_y$.\cite{Venitucci2019} Also, heavy holes benefit from the strong anisotropy of the valence band of silicon (large $\gamma_3/\gamma_2$ ratio). As a consequence of this anisotropy, the coupling between heavy and light holes by lateral confinement (driven by $\gamma_3$) is strong with respect to their splitting $\Delta$ ($\propto\gamma_2$), which enhances the heavy- and light-hole mixing in the qubit states and low-lying excitations, a prerequisite for Rabi oscillations\cite{Kloeffel2018,Venitucci2019}. The advantage of silicon is even greater if the Rabi frequencies are compared at the same Zeeman splitting rather than the same magnetic field, as, in a first approximation, the $\zeta_{[110]}$'s must be rescaled by a factor $\simeq 1/\kappa$. On the contrary, driving a light hole is expected to be more efficient in germanium, especially for IZ-EDSR, which is almost two orders of magnitude stronger in germanium than in silicon. 
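As a numerical sanity check (our own illustration, not part of the original analysis), the prefactors of Eqs. (\ref{zeta_110}) can be evaluated with the parameters listed in Table \ref{tab:comparison}:

```python
# Luttinger/Zeeman parameters from Table (gh1, eh1: heavy-hole corrections at zero strain)
si = dict(g1=4.29, g2=0.34, g3=1.45, k=-0.42, gh1=1.16, eh1=0.08)
ge = dict(g1=13.38, g2=4.24, g3=5.69, k=3.41, gh1=3.56, eh1=0.20)

def zetas_110(p, gl1=0.0, el1=1.0):
    # Eqs. (zeta_110); light holes in the large-strain limit gl1 << g1-g2, el1 ~ 1
    g1, g2, g3, k = p['g1'], p['g2'], p['g3'], p['k']
    num_h = max(abs(k - 2*g3*p['eh1']), abs(k - 2*g2*p['eh1']))
    den_h = g2 * (g1 + g2 - p['gh1'])**2
    den_l = (g1 - g2 - gl1)**2
    return {'gTMR,h': g3*num_h/den_h, 'IZ-EDSR,h': g3**2*num_h/den_h,
            'gTMR,l': g3*abs(k + (g3 - g2)*el1)/den_l, 'IZ-EDSR,l': g2*g3*abs(k)/den_l}

for name, p in [('Si', si), ('Ge', ge)]:
    print(name, {m: float(f"{v:.2g}") for m, v in zetas_110(p).items()})
# agrees with Table to within the rounding of the tabulated input parameters
```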
As a matter of fact, the gap $\Delta$ of light-hole qubits is primarily controlled by strains, so that silicon loses the benefits of its valence band anisotropy ($\gamma_2$ disappears from the denominator of the $\zeta_{[110]}$'s). Therefore, dealing with light holes in germanium may be interesting, but will require complex strain engineering. We would like, finally, to emphasize that the dots may be made larger in germanium than in silicon thanks to the lighter hole masses (reduced sensitivity to disorder), which can enhance the Rabi frequency of both heavy- and light-hole qubits. In particular, germanium hole qubits systematically perform better than silicon qubits if compared at the same vertical and lateral confinement energies (same $\gamma_2/L_z^2$, $(\gamma_1\pm\gamma_2-\gamma_{h/l,1})/L_y^2$ and $(\gamma_1\pm\gamma_2-\gamma_{h/l,1})/\lx^2$, which suppresses the denominators of Eqs. (\ref{zeta_110})). We also highlight the importance of the choice of the device orientation, which enters through the two parameters $\gamma_2$ and $\gamma_3$. Indeed, if the orientation of the channel ($x$ axis) is changed from $[110]$ to $[100]$ (and the $y$ axis from $[-110]$ to $[010]$), then $\gamma_2$ and $\gamma_3$ must be exchanged in the term $R$ [Eq. 
(\ref{R_Lutt})]: \begin{equation}\label{R_Lutt_100} R\rightarrow\frac{\hbar^2}{2m_0}\sqrt{3}[-\gamma_2(k_x^2-k_y^2)+2i\gamma_3k_xk_y]. \end{equation} With this transformation, the material-dependent prefactors become: \begin{subequations} \begin{align} &\zeta_{[100]}^{g{\rm -TMR},h}=\frac{\max(|\gamma_2(\kappa-2\gamma_3\eta_{h,1})|,|\gamma_2\kappa-2\gamma_3^2\eta_{h,1}|)}{\gamma_2(\gamma_1+\gamma_2-\gamma_{h,1})^2},\\ &\zeta_{[100]}^{{\rm IZ-EDSR},h}=\frac{\gamma_3\max(|\gamma_2(\kappa-2\gamma_3\eta_{h,1})|,|\gamma_2\kappa-2\gamma_3^2\eta_{h,1}|)}{\gamma_2(\gamma_1+\gamma_2-\gamma_{h,1})^2},\\ &\zeta_{[100]}^{g{\rm -TMR},l}=\frac{|\gamma_2\kappa+\gamma_3(\gamma_2-\gamma_3)\eta_{l,1}|}{(\gamma_1-\gamma_2-\gamma_{l,1})^2},\\ &\zeta_{[100]}^{{\rm IZ-EDSR},l}=\frac{\gamma_2\gamma_3|\kappa|}{(\gamma_1-\gamma_2-\gamma_{l,1})^2}. \end{align} \end{subequations} We give in Table \ref{tab:comparison_100} the values of the $\zeta_{[100]}$'s for silicon and germanium. For a heavy hole the $\zeta_{[100]}$'s are smaller than the $\zeta_{[110]}$'s for both $g$-TMR and IZ-EDSR, so that the $[110]$ orientation is optimal in this case. For silicon, this largely results from the loss of the $\sim\gamma_3/\gamma_2$ enhancement factor related to the valence band anisotropy\cite{Venitucci2019} (both the gap $\Delta$ and the coupling between heavy and light holes being ruled by $\gamma_2$ in the $[100]$ orientation). For the $g$-TMR of a light hole, $\zeta_{[100]}^{g{\rm -TMR},l}>\zeta_{[110]}^{g{\rm -TMR},l}$ for silicon and $\zeta_{[100]}^{g{\rm -TMR},l}<\zeta_{[110]}^{g{\rm -TMR},l}$ for germanium. For the IZ-EDSR of a light hole the prefactors are essentially the same for the two orientations. Therefore, in the present configuration, the $[110]$ orientation is the optimal choice for silicon, at least for a heavy hole. 
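As a sanity check (our own illustration, not part of the original analysis), these $[100]$ prefactors can likewise be evaluated with the parameters of Table \ref{tab:comparison} and compared against Table \ref{tab:comparison_100}:

```python
# Luttinger/Zeeman parameters from Table (gh1, eh1: heavy-hole corrections at zero strain)
si = dict(g1=4.29, g2=0.34, g3=1.45, k=-0.42, gh1=1.16, eh1=0.08)
ge = dict(g1=13.38, g2=4.24, g3=5.69, k=3.41, gh1=3.56, eh1=0.20)

def zetas_100(p, gl1=0.0, el1=1.0):
    # [100]-channel prefactors; light holes in the large-strain limit gl1 << g1-g2
    g1, g2, g3, k = p['g1'], p['g2'], p['g3'], p['k']
    num_h = max(abs(g2*(k - 2*g3*p['eh1'])), abs(g2*k - 2*g3**2*p['eh1']))
    den_h = g2 * (g1 + g2 - p['gh1'])**2
    den_l = (g1 - g2 - gl1)**2
    return {'gTMR,h': num_h/den_h, 'IZ-EDSR,h': g3*num_h/den_h,
            'gTMR,l': abs(g2*k + g3*(g2 - g3)*el1)/den_l, 'IZ-EDSR,l': g2*g3*abs(k)/den_l}

for name, p in [('Si', si), ('Ge', ge)]:
    print(name, {m: float(f"{v:.2g}") for m, v in zetas_100(p).items()})
# agrees with Table to within the rounding of the tabulated input parameters
```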
\begin{table} \begin{tabular}{ c r r } \hline \hline \noalign{\smallskip} Material parameters & Si & Ge \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\zeta_{[100]}^{g{\rm -TMR},h}$ & 0.12 & 0.0058\\ \noalign{\smallskip} \hline \noalign{\smallskip} $\zeta_{[100]}^{{\rm IZ-EDSR},h}$ & 0.17 & 0.033 \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\zeta_{[100]}^{g{\rm -TMR},l}$ & 0.11 & 0.074\\ \noalign{\smallskip} \hline \noalign{\smallskip} $\zeta_{[100]}^{{\rm IZ-EDSR},l}$& 0.013 & 0.98\\ \noalign{\smallskip} \hline \hline \end{tabular} \caption{Rabi frequency material-dependent prefactors and comparison between silicon and germanium for a channel oriented along $[100]$.} \label{tab:comparison_100} \end{table} \section{Conclusion} We have examined the electrical manipulation of hole qubits in one-dimensional channels that resemble the MOS setup of Refs. \citenum{Maurand2016} and \citenum{Crippa2018} where the structural confinement is strong in one direction ($z$) and the most relevant static electric field is perpendicular to that direction ($y$). This configuration allows for a stronger electrical polarizability than a static electric field along $z$. We have compared two mechanisms of electrical manipulation, the $g$-tensor magnetic resonance ($g$-TMR, $ac$ electric field also parallel to $y$), and the iso-Zeeman electric dipole spin resonance (IZ-EDSR, $ac$ electric field along the channel direction $x$), and we have evaluated their efficiencies as given by the magnitudes of the Rabi frequencies. In the regime of weak mixing between the heavy-hole and the light-hole states we have thoroughly analyzed the spin-orbit interactions responsible for the two effects. In particular, we have derived the effective Rashba Hamiltonian, Eq. (\ref{HR1D_para}), that leads to the IZ-EDSR effect with Rabi frequency given by Eq. (\ref{fRipara}). The two mechanisms can be controlled by the electric bias and by the strains, as highlighted in Fig. \ref{fig:map}. 
The $g$-TMR Rabi frequency is maximal at only moderate electric fields [Eq. (\ref{fRgmaxs})] while IZ-EDSR is optimal at stronger electric fields such that the energy of electrical confinement (along $y$) is comparable to the energy of the strong structural confinement (along $z$). For such strong electric fields IZ-EDSR can be the most efficient mechanism as shown in Fig. \ref{fig:comparison}. In addition, the IZ-EDSR Rabi frequency strongly depends on the extent of the envelope function along the driving $ac$ field ($x$), as shown by Eq. (\ref{eq:ratio}). Furthermore, we have discussed the effect of strains, which can notably switch the dominant character of the hole (Sec. \ref{sec:strain}). Moving from a heavy-hole to a light-hole qubit actually strengthens the IZ-EDSR owing to the dependence of the Rabi frequency on the in-plane $g$-factors in Eq. (\ref{fRiparamax}). The behavior of the Rabi frequencies with biaxial strain is illustrated in Fig. \ref{fig:strain}, which highlights particular values of $\varepsilon_\parallel$ near the heavy- to light-hole transition where the frequencies essentially vanish. The Rabi frequencies do also decrease at large compressive and tensile strains because of the reduced heavy- and light-hole mixing; this may however strongly increase lifetimes and reduce variability. Strains and electric fields must, therefore, be carefully engineered in order to optimize the overall performance of the qubits. Then we have discussed the choice of the host material and we have compared, in particular, electrical manipulation in silicon and in germanium. According to Table \ref{tab:comparison}, both $g$-TMR and IZ-EDSR are more efficient in silicon than in germanium quantum dots with the same size, in the weak electric field regime and in the absence of strains, due to the larger hole effective masses. 
However, when the qubit acquires a dominant light-hole character under tensile strain, germanium can be more efficient than silicon, especially in the IZ-EDSR configuration. Moreover, germanium systematically outperforms silicon if the dots are compared at different sizes but the same vertical and lateral confinement energies. Tables \ref{tab:comparison} and \ref{tab:comparison_100} also show the influence of the crystallographic orientation of the channel. We find that for heavy holes the $[110]$ orientation is optimal as it takes best advantage of the anisotropy of the valence band of silicon. These conclusions provide guidelines for the design and optimization of hole spin-orbit qubits embedded in one-dimensional channels. \section*{Acknowledgements} This work was supported by the European Union Horizon 2020 research and innovation program under grant agreement 810504-QUCUBE-ERC-2018-SyG, and by the French national research agency (ANR project MAQSi).
\section{Tensor-based MRNSD derivation} \label{sec:mrnsdderivation} Suppose $\T{C} = e^{\T{Z}}$, meaning $\T{C}_{ij}^{(k)} = e^{\T{Z}_{ij}^{(k)}}$. We then compute the search direction $\T{S}$ as the gradient of \cref{eqn:mrnsd formulation} as follows: \begin{eqnarray} \T{S} &=& \nabla_{\T{Z}} \left(\frac{1}{2} \| \T{B} - \T{D} * e^{\T{Z}} \|_F^2 \right) \\ & = & e^{\T{Z}}\odot (-{\T{D}}^T*(\T{B} - \T{D} * e^{\T{Z}} )). \nonumber \end{eqnarray} The search direction $\T{S}$ is exactly the gradient of \cref{eqn:mrnsd formulation} with the addition of a Hadamard product $\odot$ with $\T{C}$. To determine the optimal step size, we solve for $\alpha$ as follows. For notational simplicity, we define the residual tensor $\T{R}$, the gradient tensor ${\T{G}}$, and ${\T{U}}$ as follows: \begin{align*} \T{R} &= \T{B} - \T{D} * \T{C}\\ {\T{G}} &= -\T{D}^T * \T{R}\\ {\T{U}} &= \T{D} * \T{S} \end{align*} Note that ${\T{U}}^T * \T{R} = -\T{S}^T * {\T{G}}$. We reformulate \cref{eqn:mrnsd formulation} using \cref{defn:fronorm} in terms of $\T{R}$, ${\T{G}}$, and ${\T{U}}$ as follows: \begin{align*} \frac{1}{2}||\T{B} - \T{D}*(\T{C} - \alpha\cdot \T{S})||_F^2 &= \frac{1}{2} ||\T{R} + \alpha \cdot {\T{U}} ||_F^2\\ &= \frac{1}{2} \texttt{trace}[((\T{R} + \alpha \cdot {\T{U}})^T*(\T{R} + \alpha \cdot {\T{U}}))^{(1)}]\\ &= \frac{1}{2}\texttt{trace}[(\T{R}^T*\T{R} + 2\alpha\cdot {\T{U}}^T *\T{R} + \alpha^2\cdot {\T{U}}^T*{\T{U}})^{(1)}]. \end{align*} Note that typically ${\T{U}}^T*\T{R} \not= \T{R}^T*{\T{U}}$; however, the traces of their first frontal slices are always equal. We made use of this fact in the last line above. 
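These identities can be verified numerically. The following sketch (our own illustration, assuming a numpy implementation of the $t$-product via the FFT and of the tensor transpose) checks that $\texttt{trace}[({\T{U}}^T*\T{R})^{(1)}] = -\texttt{trace}[(\T{S}^T*{\T{G}})^{(1)}]$:

```python
import numpy as np

def tprod(A, B):
    # t-product computed slice-wise in the Fourier domain along the third axis
    Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.real(np.fft.ifft(np.einsum('ipk,pjk->ijk', Ah, Bh), axis=2))

def ttrans(A):
    # tensor transpose: transpose each frontal slice and reverse slices 2..n
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

rng = np.random.default_rng(0)
D = rng.random((4, 3, 5)); B = rng.random((4, 6, 5)); Z = rng.random((3, 6, 5))
C = np.exp(Z)
R = B - tprod(D, C)          # residual tensor
G = -tprod(ttrans(D), R)     # gradient tensor
S = C * G                    # search direction (Hadamard product with e^Z)
U = tprod(D, S)

lhs = np.trace(tprod(ttrans(U), R)[:, :, 0])
rhs = -np.trace(tprod(ttrans(S), G)[:, :, 0])
assert np.isclose(lhs, rhs)  # trace[(U^T * R)^(1)] = -trace[(S^T * G)^(1)]
```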
Now, we solve for $\alpha$ as follows: \begin{align*} \nabla_\alpha \frac{1}{2}\texttt{trace}[(\T{R}^T*\T{R} + 2\alpha\cdot {\T{U}}^T *\T{R} + \alpha^2\cdot {\T{U}}^T*{\T{U}})^{(1)}] &= 0\\ \texttt{trace}[({\T{U}}^T *\T{R} + \alpha \cdot {\T{U}}^T*{\T{U}})^{(1)}] &= 0 \end{align*} Solving for $\alpha$ and rewriting in terms of $\T{D}$, $\T{S}$, and ${\T{G}}$, we get the optimal step size: \begin{align*} \alpha &= -\texttt{trace}[({\T{U}}^T *\T{R})^{(1)}]/\texttt{trace}[({\T{U}}^T *{\T{U}})^{(1)}]\\ &=\texttt{trace}[(\T{S}^T *{\T{G}})^{(1)}]/||\T{D} * \T{S} ||_F^2. \end{align*} We add an additional constraint on $\alpha$ to ensure that we never move so far along the search direction that some coefficients of $\T{C}$ become negative. \begin{align*} \theta &= \texttt{trace}[(\T{S}^T *{\T{G}})^{(1)}]/||\T{D} * \T{S} ||_F^2\\ \alpha &= \min\{\theta, \min_{\T{S}_{ij}^{(k)} > 0}(\T{C}_{ij}^{(k)}/\T{S}_{ij}^{(k)}) \}. \end{align*} \section{Quasiconvexity of MRNSD} \label{sec:quasiconvex mrnsd} From Boyd and Vandenberghe's \href{https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf}{\color{cyan}Convex Optimization}, we have the following definition: \begin{definition}[Quasiconvex] A function $f: \mathbb{R}^n \to \mathbb{R}$ is \emph{quasiconvex} if all of its sublevel sets $S_\alpha = \{\V{x} \in \mathbb{R}^n \mid f(\V{x}) \le \alpha\}$ for $\alpha\in \mathbb{R}$ are convex. \end{definition} We first expand $\Phi$ as follows: \begin{eqnarray*} \Phi(\V{z}) & = & \tfrac{1}{2} \| \M{D} e^{\V{z}} - \V{b} \|_F^2 \\ & = & \tfrac{1}{2} \| \M{D} e^{\V{z}} \|_F^2 + \tfrac{1}{2} \| \V{b} \|_F^2 - \V{b}^T \M{D}e^{\V{z}} \end{eqnarray*} Suppose for some $\V{x}, \V{y} \in \mathbb{R}^n$ and $\alpha \in \mathbb{R}$, $\V{x}, \V{y} \in S_\alpha$; that is, $\Phi(\V{x}), \Phi(\V{y}) \le \alpha$. We show that $\theta \V{x} + (1-\theta) \V{y} \in S_\alpha$ for all $\theta \in (0,1)$, hence that $S_\alpha$ is convex and $\Phi$ is quasiconvex. 
\begin{align*} \Phi(\theta \V{x} + (1-\theta)\V{y} ) &= \frac{1}{2} \| \M{D} e^{\theta \V{x} + (1-\theta)\V{y} }\|_F^2 + \frac{1}{2}\| \V{b} \|_F^2- \V{b}^T \M{D}e^{\theta \V{x} + (1-\theta)\V{y}} \\ &\le \frac{\theta}{2} \| \M{D} e^{\V{x}} \|_F^2 + \frac{1-\theta}{2}\| \M{D} e^{\V{y}}\|_F^2 + \frac{1}{2}\| \V{b} \|_F^2 - \V{b}^T \M{D}e^{\theta \V{x} + (1-\theta)\V{y}} & \text{by convexity}\\ &\le \theta \alpha + (1-\theta) \alpha- \V{b}^T \M{D} e^{\theta \V{x} + (1-\theta)\V{y}} & \text{by assumption}\\ &=\alpha- \V{b}^T \M{D}e^{\theta \V{x} + (1-\theta)\V{y} } \end{align*} For our dictionary-learning problem, we assume $\M{D}$ and $\V{b}$ are non-negative because they are composed of images. Furthermore, $e^{\V{z}}$ has non-negative components. Therefore, \begin{align*} \Phi(\theta \V{x} + (1-\theta)\V{y} ) \le \alpha- \V{b}^T \M{D}e^{\theta\V{x} + (1-\theta)\V{y}} \le \alpha. \end{align*} Therefore, $\theta \V{x} + (1-\theta)\V{y} \in S_\alpha$ and $S_\alpha$ is convex. Because $\Phi$ is quasiconvex, gradient descent will make progress towards a minimum (i.e., we will not be stuck at a saddle point). \section{Introduction} \label{sec:introduction} Many applications in imaging science, such as image deblurring and image reconstruction, typically model the object to be recovered as a vector of unknowns, and the forward operator as a matrix. Treating image and video data processing problems with tensor approaches is still a relatively new, but increasingly popular and promising, approach, as suggested by recent literature \cite{HaoEtAl2013,Martin:2013kx, liu2013tensor, hunyadi2017tensor, vasilescu2002multilinear, semerci2013tensor,NewKilHor17,Soltani2016,ZZhangetal}. However, a close look at the increasing body of literature in which tensor decompositions are used in practice shows that no one specific tensor decomposition has fit all these image and video applications equally well. 
Indeed, here, as in many other multiway data processing and representation problems, the type of decomposition to be employed may be quite specific to the application. Decompositions based on a tensor-tensor product called the $t$-product \cite{KilmerMartin2011} have proven to be particularly useful in applications where there is a natural orientation dependence to be preserved, such as pixel or voxel position, relative to some other variable such as number of images or time (see \cite{HaoEtAl2013,ZZhangetal} for example). The $t$-product is advantageous over other tensor decompositions because of the algebraic framework induced by its definition, which enables the definition and computation of factorizations reminiscent of their matrix counterparts (e.g., SVD, QR), and because the products can be computed in a straightforward way in parallel (see also \cite{KilmerEtAl2013}). Non-negative tensor factorizations have been introduced in the literature recently as well, and just like their unconstrained counterparts, the type of decomposition used varies \cite{Cickbook}. In \cite{Soltani2016, HaoEtAl2013}, the authors consider non-negative tensor decompositions based specifically on the $t$-product, the approaches in the two papers differing by the additional constraints on the optimization as well as the algorithms proposed to compute the factorization. In \cite{Soltani2016}, the authors develop an Alternating Direction Method of Multipliers (ADMM) \cite{Boyd2010} method for producing a non-negative patch tensor dictionary from a single, high resolution training image with the end goal of using the dictionary in the context of X-ray CT image reconstruction. The authors showed the method was good at producing reconstructions even in missing-data situations, and that it gave improvements over matrix-based patch dictionary learning since the reconstructions were sparser and less sensitive to regularization parameters. 
In this paper, we consider two classical problems -- (lossy) image compression and image deblurring. In both applications, the first stage is to learn a tensor patch dictionary from multiple images of the same class using the approach in \cite{Soltani2016}, and hence we review that problem briefly. Our first new contribution deals with finding a non-negative representation under the tensor $t$-product \cite{KilmerMartin2011} of any image given a tensor dictionary. We give theoretical results and concrete illustrations that demonstrate the superiority of a tensor-patch dictionary over the corresponding matrix case. Then, we show how the modified residual norm steepest descent (MRNSD) method \cite{Nagy2000,Kaufman1993} can be utilized for non-negative tensor coefficient recovery under the $t$-product. Additionally, we introduce sparsity-inducing regularization to the algorithm that can lead to compressed representations for images. Furthermore, we show that constraining our image to the non-negative patch dictionary representation can lead to a new, effective deblurring approach that is robust to certain model mismatches. This paper is organized as follows. \Cref{sec:back} is devoted to the introduction of background and notation. In \Cref{sec:patches}, we describe the process of patchification of images to make a tensor representation, and review the dictionary learning approach presented in \cite{Soltani2016} which we will use to generate our tensor dictionaries. In \Cref{sec:superior}, we investigate the power of the tensor-tensor product based representation of images vs. the traditional matrix-based approach. The choices of parameters such as patch and dictionary sizes, and their impact on quality, storage, and computation time, are also considered here. Following that discussion is the MRNSD algorithm for tensors in \Cref{sec:mrnsd}. Here, we also discuss the incorporation of coefficient sparsity constraints into MRNSD to allow for the compressed representation of images. 
In \Cref{sec:deblur} we give a short introduction to the image deblurring problem, explain how to represent the unknown image in terms of the tensor dictionary, and discuss the restoration problem that needs to be solved for the tensor coefficients. Numerical results are contained in \Cref{sec:num} and a discussion and list of future work is given in \Cref{sec:future}. Detailed derivations for some of the claims are left to the appendices. \section{Notation and preliminaries} \label{sec:back} A \emph{tensor} is a multidimensional array of data; a first-order tensor is a vector and a second-order tensor is a matrix. This paper focuses on third-order tensors (i.e., three-dimensional data), though much of the theory can be extended to higher-order tensors. We denote tensors with script letters. Suppose $\T{A}$ is an $\ell \times m\times n$ tensor. As depicted in \Cref{fig:tensor notation}, we can divide the tensor in several directions. \emph{Frontal slices}, denoted $\M{A}^{(k)}$ for $k = 1,\dots, n$, are $\ell\times m$ matrices which fix the third dimension of $\T{A}$. \emph{Lateral slices}, denoted $\vec{\T{A}}_j$ for $j = 1,\dots, m$, are $\ell\times 1\times n$ tensors which fix the second dimension of $\T{A}$; we consider lateral slices to be $\ell\times n$ matrices oriented along the third dimension. \emph{Tube fibers or tubes}, denoted $\V{a}_{ij}$ for $i = 1,\dots, \ell$ and $j = 1,\dots, m$, are the $1\times 1\times n$ mode-3 fibers of $\T{A}$ or $n \times 1$ column vectors oriented along the third dimension. 
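In numpy-style indexing (our own illustration, not notation from the paper), these slices are obtained as:

```python
import numpy as np

A = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # an ell x m x n tensor with ell=2, m=3, n=4
frontal = A[:, :, 0]   # frontal slice A^(1): a 2 x 3 matrix
lateral = A[:, 1, :]   # lateral slice (as a 2 x 4 matrix oriented along the third dimension)
tube = A[0, 2, :]      # tube fiber a_{13}: a length-4 mode-3 fiber
assert frontal.shape == (2, 3) and lateral.shape == (2, 4) and tube.shape == (4,)
```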
\begin{figure}[H] \centering \subfigure[Tensor $\T{A}$]{\includegraphics[scale=0.25]{tensor.jpg}} \subfigure[Frontal slices $\M{A}^{(k)}$]{\includegraphics[scale=0.25]{frontal.jpg}} \subfigure[Lateral slices $\vec{\T{A}}_j$]{\includegraphics[scale=0.25]{lateral.jpg}} \subfigure[Tubes $\V{a}_{ij}$]{\includegraphics[scale=0.25]{tubes.jpg}} \caption{Tensor notation.} \label{fig:tensor notation} \end{figure} If we consider tensors as linear operators analogous to matrices, lateral slices are analogous to column vectors, hence the notation $\vec{\T{A}}$. In particular, tensors act on lateral slices just as matrices act on column vectors. Furthermore, lateral slices form the range and null space of a tensor \cite{KilmerEtAl2013}. For more detailed analysis on tensor linear algebra, we refer the reader to \cite{KilmerEtAl2013}. Many of the following definitions are taken directly from \cite{KilmerMartin2011}. Using the $\ell\times m\times n$ tensor $\T{A}$ illustrated in \Cref{fig:tensor notation}, we define the \texttt{unfold}\ and \texttt{fold}\ operations as follows: \begin{equation}\label{eqn:unfold} \texttt{unfold}(\T{A}) = \underset{\begin{array}{c}\\[-1em] \ell n\times m\end{array}}{\begin{pmatrix}\M{A}^{(1)} \\ \M{A}^{(2)}\\ \vdots \\ \M{A}^{(n)}\end{pmatrix}}, \qquad \texttt{fold}(\texttt{unfold}(\T{A})) = \T{A}. \end{equation} The \texttt{unfold}\ function reshapes a tensor $\T{A}$ into a block-column vector where each block is a frontal slice. The \texttt{fold}\ function reshapes an unfolded tensor into its original structure. Notice that the number of elements of $\T{A}$ and $\texttt{unfold}(\T{A})$ is the same. We now define the function {\tt circ} which transforms a tensor $\T{A}$ into a block-circulant matrix whose blocks are the frontal slices of $\T{A}$. 
\begin{equation}\label{eqn:bcirc} \Circ{\T{A}} = \underset{\begin{array}{c}\\[-1em] \ell n\times mn\end{array}} {\begin{pmatrix} \M{A}^{(1)} & \M{A}^{(n)} & \dots & \M{A}^{(2)}\\ \M{A}^{(2)} & \M{A}^{(1)} & \dots & \M{A}^{(3)}\\ \vdots & \vdots & \ddots & \vdots\\ \M{A}^{(n)} & \M{A}^{(n-1)} & \dots & \M{A}^{(1)}\\ \end{pmatrix}}. \end{equation} Notice that the first column of $\Circ{\T{A}}$ is the unfolded tensor from \Cref{eqn:unfold}. Furthermore, notice that $\Circ{\T{A}}$ has $n$ times the number of elements of the original tensor $\T{A}$. Fortunately, we need not form $\Circ{\T{A}}$ explicitly. Using \cref{eqn:unfold} and \cref{eqn:bcirc}, the $t$-product of two tensors is defined in \cite{KilmerMartin2011} as follows: \begin{definition}[$t$-product]\label{defn:tprod} Given $\T{A}$ is an $\ell\times p \times n$ tensor and $\T{B}$ is $p\times m\times n$, we define the \emph{$t$-product} as \begin{align*} \T{A} * \T{B} = \Fold{\Circ{\T{A}} \cdot \Unfold{\T{B}}}, \end{align*} where $\T{A} * \T{B}$ is an $\ell\times m\times n$ tensor and $``*"$ denotes the $t$-product. \end{definition} Note that tubes commute under the $t$-product, and thus act analogously to scalars. For the algorithms we describe in \Cref{sec:mrnsd}, we require the following two tensor norms \cite{KoldaBader,Soltani2016}. \begin{definition}[Frobenius norm]\label{defn:fronorm} Suppose $\T{A}$ is an $\ell\times m\times n$ tensor. Then: \begin{align*} \|\T{A} \|_F^2 = \Trace{(\T{A}^T*\T{A})^{(1)}} = \sum_{k=1}^n\sum_{j=1}^m\sum_{i=1}^\ell (\T{A}_{ij}^{(k)})^2. \end{align*} \end{definition} \begin{definition}[Sum norm]\label{defn:sum norm} Suppose $\T{A}$ is an $\ell\times m\times n$ tensor. The sum norm is \begin{align*} \|\T{A} \|_{\textnormal{sum}} = \sum_{k=1}^n\sum_{j=1}^m\sum_{i=1}^\ell |\T{A}_{ij}^{(k)}|. 
\end{align*} \end{definition} \subsection{Properties of the $t$-product} \label{ssec:props} As we alluded to above and given in \cite{KilmerMartin2011}, we can compute the $t$-product (see \Cref{defn:tprod}) more efficiently using the Fourier transform: \begin{definition}[$t$-product with Fourier transform]\label{defn:tprod fourier} Given $\T{A}$ is an $\ell\times p \times n$ tensor and $\T{B}$ is $p\times m\times n$, the $t$-product $\T{C} = \T{A}*\T{B}$ can be computed as follows: \begin{align*} \widehat{\M{C}}^{(i)} = \widehat{\M{A}}^{(i)} \cdot \widehat{\M{B}}^{(i)} \quad \text{for }i = 1,\dots n, \end{align*} where $\widehat{\T{A}} = \mbox{\tt fft}(\T{A},[\,],3)$, $\T{C} = \mbox{\tt ifft}(\widehat{\T{C}},[\,],3)$, and \mbox{\tt fft}, \mbox{\tt ifft} are the one-dimensional fast Fourier and inverse Fourier transforms, respectively, applied along the third-dimension. \end{definition} \Cref{defn:tprod fourier} can be implemented perfectly in parallel, and hence provides an efficient algorithm for computing the $t$-product. An alternative perspective on the $t$-product will be essential to our understanding of tensor dictionary learning in \Cref{subsec:localglobal}. (See also \cite{HaoEtAl2013,Soltani2016}.) Suppose $\vec{\T{A}}$ is an $\ell\times 1\times n$ lateral slice. Then, $\mbox{\tt squeeze}(\vec{\T{A}})$ rotates the lateral slice into an $\ell\times n$ matrix; the \texttt{twist} transformation reverses this process (see \Cref{fig:squeeze}). \begin{figure}[H] \centering \includegraphics[scale = 0.35]{squeeze_twist.jpg} \caption{Illustration of \mbox{\tt squeeze} and \mbox{\tt twist} transformations.} \label{fig:squeeze} \end{figure} Next we show how the structure imposed by the $t$-product impacts lateral slices. 
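Before doing so, the agreement between the block-circulant formulation (\Cref{defn:tprod}) and the Fourier-domain formulation (\Cref{defn:tprod fourier}) can be checked directly; a minimal numpy sketch (our own illustration, not code from the cited works):

```python
import numpy as np

def tprod_fft(A, B):
    # t-product via slice-wise products in the Fourier domain along the third axis
    Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.real(np.fft.ifft(np.einsum('ipk,pjk->ijk', Ah, Bh), axis=2))

def tprod_circ(A, B):
    # t-product via fold(circ(A) @ unfold(B)), forming circ(A) explicitly
    l, p, n = A.shape
    m = B.shape[1]
    # circ(A): block-circulant matrix whose (i, j) block is A^((i - j) mod n + 1)
    circA = np.block([[A[:, :, (i - j) % n] for j in range(n)] for i in range(n)])
    unfB = B.transpose(2, 0, 1).reshape(n * p, m)   # stack frontal slices (unfold)
    C = circA @ unfB
    return C.reshape(n, l, m).transpose(1, 2, 0)    # fold back into an l x m x n tensor

rng = np.random.default_rng(1)
A, B = rng.random((4, 3, 5)), rng.random((3, 2, 5))
assert np.allclose(tprod_fft(A, B), tprod_circ(A, B))
```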
\begin{definition}[$t$-product with lateral slices] \label{defn:tprodlat} Given $\T{A}$ is an $\ell\times p \times n$ tensor and $\T{B}$ is $p\times m\times n$, we can write $\T{C} = \T{A}*\T{B}$ as follows: \begin{align*} \Squeeze{\vec{\T{C}}_j} = \sum_{i=1}^p \Squeeze{\vec{\T{A}}_i} \cdot \mbox{\tt circ}(\V{b}_{ij}^T) \quad \text{for } j = 1,\dots, m. \end{align*} \end{definition} Originally, we viewed the $t$-product as $\T{A}$ acting on the lateral slices of $\T{B}$ (see \Cref{defn:tprod}). The significance of \Cref{defn:tprodlat}, originally noted in \cite{HaoEtAl2013}, is that we can consider tubes of $\T{B}$ to be ``coefficients" of lateral slices of $\T{A}$. We say more about this in Section \ref{subsec:localglobal}. \section{Patch Tensor Representation and Learning} \label{sec:patches} We briefly discuss the general idea of dictionary learning with tensors. For more background on matrix-based dictionary learning, one can see \cite{SoltAndHans16} and the references therein. As this paper focuses on tensor formulations, we keep our overview of the literature to describing the tools from \cite{Soltani2016} that we use here. \subsection{Image to Tensor Mapping} \label{ssec:patch} First, let us describe the transformation of a single two dimensional image into a third-order tensor. Let us suppose we have one image $\M{B}$ of size $N_r \times N_c$, and we desire to consider this image in terms of $p \times q$ patches, where $N_r = p n_r$ and $N_c = q n_c$ for some integers $n_r, n_c$, respectively. Then our $N_r \times N_c$ image can be mapped to a $p \times n_r n_c \times q$ third order tensor $\T{B}$ by putting each image patch into a lateral slice of our tensor. We choose to use a lexicographical ordering by patch columns. As seen in \Cref{fig:patchifying}, the (1,1) patch in $\M{B}$ is mapped to the first lateral slice in $\T{B}$ (i.e., $\T{B}_{:,1,:}$), the (2,1) patch in $\M{B}$ becomes the 2nd lateral slice of $\T{B}$, etc. 
Clearly, the process is completely reversible: given the patch tensor representation of an image, we can map back to its matrix representation. \begin{figure}[htb] \centering \includegraphics[scale=.35]{baby_lines.png} \includegraphics[scale=.35]{tensor_patcheslite2.png} \caption{Illustration of tensor patchification of image (left) to construct its tensor representation (right).} \label{fig:patchifying} \end{figure} In sum, $\V{b}, \M{B}, \T{B}$ all represent the same image, but in different formats. Relative dimensions are summarized in \Cref{tab:sizes} for easy reference. \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{$N_r = n_r p; \mbox{ } N_c = n_c q; \mbox{ } M = n_r n_c$} \\ \hline $\M{B}$ & $\T{B}$ & $\V{b}$ & $\T{D}$ & $\T{C}$ \\ \hline $N_r \times N_c$ & $p \times M \times q$ & $N_r N_c \times 1 $ & $p \times s \times q$ & $s \times M \times q$ \\ \hline \end{tabular} \caption{\label{tab:sizes} Left three columns give the dimensions of the various representations of the same image. In the right two columns, the image approximation in tensor form is assumed $\T{X} = \T{D} * \T{C}$, and the corresponding sizes of $\T{D}$ and $\T{C}$ under this assumption are given. } \end{table} \subsection{Tensor-based dictionary learning} \label{ssec:tendict} Now suppose we have a sample space of $N_I$ images, each image of size $N_r \times N_c$. Following \cite{Soltani2016}, we divide each image into patches of size $p\times q$ and let $M$ be the number of patches per image. Unlike matrix-based dictionary learning, we do not vectorize each patch. Instead, we store all patches as lateral slices of a sample space tensor $\T{Y}$ of size $p\times t\times q$ where $t = N_I\cdot M$ is the total number of patches (see \Cref{fig:tensor dictionary}). 
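A minimal numpy sketch of this patchification (our own illustration; the helper names \texttt{patchify} and \texttt{unpatchify} are ours, not from the cited works):

```python
import numpy as np

def patchify(Bmat, p, q):
    # map an Nr x Nc image to a p x (nr*nc) x q tensor, ordering patches by columns:
    # the (1,1) patch becomes lateral slice 1, the (2,1) patch lateral slice 2, etc.
    Nr, Nc = Bmat.shape
    nr, nc = Nr // p, Nc // q
    T = np.empty((p, nr * nc, q))
    for jc in range(nc):
        for ir in range(nr):
            T[:, jc * nr + ir, :] = Bmat[ir*p:(ir+1)*p, jc*q:(jc+1)*q]
    return T

def unpatchify(T, Nr, Nc):
    # invert the mapping: reassemble the Nr x Nc image from lateral slices
    p, M, q = T.shape
    nr, nc = Nr // p, Nc // q
    Bmat = np.empty((Nr, Nc))
    for jc in range(nc):
        for ir in range(nr):
            Bmat[ir*p:(ir+1)*p, jc*q:(jc+1)*q] = T[:, jc * nr + ir, :]
    return Bmat

img = np.arange(6 * 8, dtype=float).reshape(6, 8)
T = patchify(img, p=3, q=4)                    # nr=2, nc=2, so M=4 lateral slices
assert T.shape == (3, 4, 4)
assert np.allclose(T[:, 0, :], img[0:3, 0:4])  # (1,1) patch is the first lateral slice
assert np.allclose(unpatchify(T, 6, 8), img)   # the mapping is completely reversible
```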
\begin{figure}[H] \centering \includegraphics[scale = 0.4]{tensor_dictionary1.jpg} \caption{Illustration of tensor dictionary learning decomposition.} \label{fig:tensor dictionary} \end{figure} To `learn' the dictionary representation is to minimize $\|\T{Y} - \T{D} * \T{H}\|_F^2$ where $\T{D}\in \mathbb{R}_+^{p\times s\times q}$, $\T{H}\in \mathbb{R}_+^{s\times t\times q}$, and $s \ll t$. Here, $\T{H}$ contains the tensor coefficients for the tensor dictionary $\T{D}$. From \cite{Soltani2016}, the problem to solve is \begin{align}\label{eq:admm tensor} &\min_{\T{D}, \T{H}, \T{U}, \T{V}} \quad \frac{1}{2} \|\T{Y} - \T{U} * \T{V} \|_F^2 + \lambda \|\T{H} \|_{\text{sum}} + I_{\mathbb{R}_+^{s\times t\times q}}(\T{H}) + I_{\texttt{D}}(\T{D})\\ &\text{subject to}\quad \T{D} = \T{U} \quad \text{and} \quad \T{H} = \T{V}, \nonumber \end{align} where $\T{D}, \T{U} \in \mathbb{R}_+^{p\times s \times q}$ and $\T{H}, \T{V} \in \mathbb{R}_+^{s\times t\times q}$. In \Cref{eq:admm tensor}, $\lambda$ is a regularization parameter and the sum norm (see \Cref{defn:sum norm}) promotes sparsity of the coefficient tensor $\T{H}$. We denote the indicator function of a set $Z$ as $I_Z$. Thus, $I_{\mathbb{R}_+^{s\times t\times q}}(\T{H})$ ensures the coefficients $\T{H}$ are non-negative and $I_{\texttt{D}}$ ensures the dictionary $\T{D}$ belongs to the compact and convex set \begin{equation}\label{eqn:dictionary set} \texttt{D} \equiv \left\{\T{D} \in \mathbb{R}_+^{p\times s\times q} \mid \|\vec{\T{D}}_i \|_F \le \sqrt{pq},\, i = 1,\dots,s\right\}. \end{equation} As described in \cite{Soltani2016}, we impose the extra constraint that $\T{D} \in \texttt{D}$ (as opposed to $\T{D}\in \mathbb{R}_+^{p\times s\times q}$) to avoid scaling ambiguity; that is, for any $\beta > 0$, $\|\T{Y} - (\beta\cdot \T{D}) * (\frac{1}{\beta}\T{H}) \|_F^2 = \|\T{Y} - \T{D}* \T{H} \|_F^2$.
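The $t$-products appearing in this formulation are computed facewise in the Fourier domain. A minimal sketch (our own helper, for illustration) which also confirms the scaling invariance just noted:

```python
import numpy as np

def tprod(A, B):
    """t-product of A (l x p x n) with B (p x m x n): FFT along the
    third mode, facewise matrix products, then inverse FFT."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ipk,pjk->ijk', Ah, Bh)
    return np.real(np.fft.ifft(Ch, axis=2))

rng = np.random.default_rng(0)
Y = rng.random((4, 7, 3))
D = rng.random((4, 5, 3))
H = rng.random((5, 7, 3))

# for any beta > 0, (beta*D) * (H/beta) leaves the residual unchanged
beta = 3.7
r1 = np.linalg.norm(Y - tprod(D, H))
r2 = np.linalg.norm(Y - tprod(beta * D, H / beta))
assert np.isclose(r1, r2)
```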
We will not discuss the specifics of the tensor-based ADMM algorithm in this paper; we refer the reader to \cite{Soltani2016} for a full analysis. When optimizing \cref{eq:admm tensor}, we project $\T{D}$ and $\T{H}$ into $\texttt{D}$ and $\mathbb{R}_+^{s\times t\times q}$, respectively. We choose to project $\T{D}$ into $\texttt{D}$ using the infinity norm, that is: \begin{align}\label{eqn:projection} P_{\texttt{D}}(\T{D})_{ij}^{(k)} = \min(\max(\T{D}_{ij}^{(k)},0),1), \end{align} where $P_{\texttt{D}}$ is the projection operator, an option included in the publicly available code \cite{Githubref}. \subsection{Representation/Recovery Formulation} Now suppose we have an image $\M{B} \in \mathbb{R}_+^{N_r \times N_c}$ which we would like to represent in terms of a dictionary we learn by solving \cref{eq:admm tensor}. Let $\T{B}$ be the $p\times M\times q$ patchified tensor representation, and $\T{D}$ the $p \times s \times q$ non-negative patch dictionary. To represent the non-negative image via our patch dictionary, we solve \begin{equation}\label{eqn:mrnsd formulation} \min_{\T{C}} \frac{1}{2} \|\T{B} - \T{D} * \T{C} \|_F^2 \quad \text{subject to } \T{C} \in \mathbb{R}_+^{s\times M\times q}. \end{equation} In other words, if we can determine $\T{C} \ge 0$ such that $\T{B} \approx \T{D} * \T{C}$, then the image approximation is obtained by computing $\T{D}* \T{C}$ and mapping the resulting tensor back to a 2D image by inverting the patchification process. But several issues warrant discussion before presenting the algorithm used to solve for $\T{C}$. First, we need to give intuition as to why the tensor-based approach to the image model can provide significantly different results than a matrix-based analogue, independent of the method used to generate the dictionary. Then, we need to consider choices of patch and dictionary sizes required to maximize the representation power and harness the computational efficiencies of the tensor-based approach.
These issues are covered in the next section. \section{The Tensor Formulation: Advantages and Parameter Choices} \label{sec:superior} We first explain the power behind the tensor-based approach. Then, we discuss the choice of parameters such as dictionary size and patch size to maximize the potential of our new method. \subsection{Tensor Superiority} In this subsection, we will assume that a patch-dictionary has already been determined. It does not matter for the moment how that dictionary was derived: our goal is to show the differences in the solution sets to the two problems of image approximation, one based on a matrix formulation, and one based on the tensor formulation. Let $\T{D} \in \mathbb{R}_{+}^{p \times s \times q} $ denote the dictionary in tensor form, and define $\underbar{\M{D}} = \Unfold{\T{D}} \in \mathbb{R}_{+}^{pq \times s}$. Likewise, let $\T{B}$ denote the patchified tensor image, and let $\underbar{\M{B}} = \Unfold{\T{B}} \in \mathbb{R}_{+}^{pq \times n_r n_c}$. We have the following theorem: \begin{theorem}\label{thm:superior} Consider the set of solutions to within a tolerance $\epsilon$: \[ \mathcal{X}_{mat} := \{ \M{C} \in \mathbb{R}_{+}^{s \times n_r n_c} \mid \| \underbar{\M{B}} - \underbar{\M{D}} \cdot \M{C} \|_F \le \epsilon \} \] \[ \mathcal{X}_{ten} := \{ \T{C} \in \mathbb{R}_{+}^{s \times n_r n_c \times q} \mid \| \T{B} - \T{D} * \T{C} \|_F \le \epsilon \}. \] Let $\mathcal{X}_{mat,e}$ denote the set of tensors of size $s \times n_r n_c \times q$ whose first frontal slice is from $\mathcal{X}_{mat}$ and whose remaining $q-1$ frontal slices are zero. Then $\mathcal{X}_{mat,e} \subset \mathcal{X}_{ten}$. That is, the set of solutions of the tensor problem effectively contains the set of solutions to the matrix problem. \end{theorem} \begin{proof} See \cite{LizThesis}.
\end{proof} This suggests that in solving the tensor problem, the solutions to the matrix problem are achievable, and we can recover them when they are optimal in the tensor framework, as we demonstrate in \Cref{ex:one} below. However, the tensor case may provide better solutions by virtue of working in the tensor algebra, which we see in \Cref{ex:two}. \begin{example} \label{ex:one} Let $\M{B} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$ be a single patch ($p=q=2$, $n_r=n_c=1$) which we interpret as the entire image, meaning $\underbar{\M{B}} = \mbox{\tt vec}(\M{B})$ is $4 \times 1$. Suppose \[ \underbar{\M{D}} = \begin{bmatrix} 1 & 0 & 0 & 0 & \frac{1}{4} \\[0.25em] 0 & 1 & 0 & 0 & \frac{1}{2} \\[0.25em] 0 & 0 & 1 & 0 & \frac{3}{4} \\[0.25em] 0 & 0 & 0 & 1 & 1 \end{bmatrix}. \] Set $\epsilon = 0$; that is, find exact solutions $\V{c} \in \mathbb{R}_+^{5\times 1}$ such that $\| \underbar{\M{B}} - \underbar{\M{D}} \cdot \V{c} \| = 0$. It is easily checked that $\V{c}_a = [1,1,1,1,0]^T$ and $\V{c}_b = [3/4,1/2,1/4,0,1]^T$ with $\| \V{c}_a \|_{1} = 4$ and $\| \V{c}_b \|_{1} = 5/2$ are both exact solutions of the matrix optimization problem. It is also easy to see a non-negative solution cannot be obtained with fewer than four non-zero coefficients. We can exactly capture these matrix solutions in the tensor framework. Let $\T{B} = {\tt twist}(\M{B})$ be the patch stored as a $2\times 1\times 2$ lateral slice and let $\T{D} = \mbox{\tt fold}(\underbar{\M{D}})$ be the equivalent tensor dictionary of size $2\times 5\times 2$. We are trying to find exact solutions $\T{C}\in \mathbb{R}_+^{5\times 1\times 2}$ such that $\| \T{B} - \T{D} * \T{C} \| = 0$. If we let $\T{C}_{:,:,1} = \V{c}_a$ and $\T{C}_{:,:,2} = \V{0}$, then it is easily seen that $\T{C}$ is a solution of the tensor version of the problem -- this solution is effectively the matrix solution, in tensor form.
However, the coefficient tensor \[ \T{C}_{:,:,1} = [1/3,0,0,0,2/3]^T \text{ and } \T{C}_{:,:,2} = [1/3,0,0,0,2/3]^T, \] is a solution to the tensor version of the problem with no matrix-based analogue, and we also observe $\| \T{C} \|_{\textnormal{sum}} = 2$. Thus, the set of tensor solutions is bigger, and for the same number of non-zeros in the coefficients, we can get tensor solutions of smaller sum norm. \end{example} \begin{example} \label{ex:two} Here, we let $\M{B} = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}$, still assuming a single patch, and we assume \[ \underbar{\M{D}} = \begin{bmatrix} 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 \end{bmatrix}, \] with $\T{D} = \mbox{\tt fold}(\underbar{\M{D}})$. It is easy to verify that there is no non-negative $\V{c}$ that can exactly recover $\mbox{\tt vec}(\M{B})$ in the matrix case. If we set $\epsilon = 1/2$, then one element of $\mathcal{X}_{mat}$ is $\V{c} = [1/2,7/2,2,0,1/2]^T$ with $\| \V{c} \|_1 = 13/2$. However, with only four non-zero entries in our tensor coefficients, we can resolve $\T{B}$ exactly: e.g., $\T{C}_{:,:,1} = [0,1,1,0,0]^T$ and $\T{C}_{:,:,2} = [0,1,3,0,0]^T$ satisfies $\T{B} = \T{D}*\T{C}$, has four non-zeros, and has $\| \T{C} \|_{\text {sum}} = 6$. This demonstrates that we can get a richer and possibly more accurate set of solutions for the tensor representation of the problem than for the matrix version. \end{example} \subsection{Parameters} \label{subsec:localglobal} In \cite{Soltani2016}, the authors use a value for $s$ that is consistent with a matrix-based patch dictionary learning algorithm, and illustrate on some CT image examples that when keeping $s$ fixed, the tensor patch dictionary allows for better approximation; otherwise, the choices of $s$, $p$ and $q$ are not further discussed. Here, we explain why $s \ge p$ is necessary to get good representations.
We then discuss why $s \gg p$ is not advantageous from a storage perspective, and explain why $s=2p$ is sufficient from a qualitative point of view. \paragraph{\bf Dictionary Dimension $s$} In \cite{Soltani2016}, $s$ was chosen to be a small multiple of the product $pq$. The reasoning for this was that in matrix patch dictionary learning, each patch is expressed as a vector and then approximated as a linear combination of the columns of the dictionary matrix. Since the dictionary matrix would have $pq$ rows, the choice of $s \ge pq$ would be required to try to ensure a spanning set. However, taking $s$ this large for the tensor dictionary case is in fact not necessary for reasonably sized patches, as we now explain. Further, large $s$ is not a good choice in terms of computational efficiency, as we show later. From \Cref{defn:tprodlat}, each of the $M$ image patches is approximated as \begin{equation} \label{eq:bi} \M{B}_j \approx \sum_{i=1}^s \M{D}_i \mbox{\tt circ}({\V{c}}_{ij}) , \qquad j = 1,\ldots,M, \end{equation} where $\M{B}_j= \Squeeze{\vec \T{B}_j} $ and $\M{D}_i := \Squeeze{\vec \T{D}_i}$ are both in $\mathbb{R}_{+}^{p \times q}$ and ${\V{c}}_{ij} = \T{C}_{i,j,:}^T$. Postmultiplication of the $p \times q$ matrix $\M{D}_i$ by the $q \times q$ circulant generated by the tube $\V{c}_{ij}$ can be written \[ \M{D}_i \Circ{\V{c}_{ij}} = \V{c}_{ij}^{(1)} \M{D}_i + \V{c}_{ij}^{(2)} \M{D}_i \M{Z} + \cdots + \V{c}_{ij}^{(q)} \M{D}_i \M{Z}^{q-1} , \] where $\M{Z}$ denotes the $q \times q$ circulant downshift matrix (i.e. $\M{Z}^q = \M{I}$).
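This circulant expansion is easy to sanity-check numerically. In the sketch below (illustrative names, NumPy), `Z` is the downshift matrix and the circulant has first column equal to the tube:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 4
D = rng.random((p, q))   # squeeze of one lateral dictionary slice
c = rng.random(q)        # one tube of coefficients

# q x q circulant generated by c: column k is c cyclically shifted down by k
C = np.column_stack([np.roll(c, k) for k in range(q)])

# circulant downshift matrix Z, with Z^q = I
Z = np.roll(np.eye(q), 1, axis=0)

# D @ circ(c) equals sum_k c[k] * D @ Z^(k-1)
expansion = sum(c[k] * D @ np.linalg.matrix_power(Z, k) for k in range(q))
assert np.allclose(D @ C, expansion)
```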
Since each term in the sum (\ref{eq:bi}) admits such an expansion, after regrouping we obtain \begin{equation} \label{eq:expand} \M{B}_j \approx \sum_{i=1}^s \V{c}_{ij}^{(1)} \M{D}_i + \sum_{i=1}^s \V{c}_{ij}^{(2)} \M{D}_i \M{Z} + \cdots + \sum_{i=1}^s \V{c}_{ij}^{(q)} \M{D}_i \M{Z}^{q-1} ,\end{equation} meaning the $p \times q$ non-negative patch $\M{B}_j$ is described by a linear combination of $sq$ non-negative $p \times q$ matrices, although subsets of those matrices in the expansion are related via column shifts. We know that a spanning set for all $p \times q$ matrices would need to be of dimension $pq$. We do not know if the matrices in the above expression are all independent, so we do not know if $s=p$ is sufficient, but certainly we do need $s \ge p$. We found in practice that it was sufficient to take $s$ a small multiple of $p$ as long as the patch sizes were not too large. Typically $s = 2p$ was all that was needed in our experiments to get reasonable representations. Though one might argue a larger value of $s$ may result in sparser coefficients, there is a trade-off with respect to the computational cost. \paragraph{\bf Storage of Tensor Coefficients} Storage of the original image requires storage of $N_r N_c$ pixel values. Storage of $\T{D}$ (assuming it is dense, which it may not be) requires $pqs$ numbers, while storage of $\T{C}$ requires $sq \frac{N_r}{p} \frac{N_c}{q} = s \frac{N_r}{p} N_c$ numbers. Thus, if $s = 2p$, storage of $\T{C}$ {\it assuming that $\T{C}$ is dense} requires $2 N_r N_c $ numbers, twice the amount of storage of the image itself. For the deblurring application, we will not be concerned with this additional storage -- the coefficients are a means to an end (namely, producing a high quality restoration). For the compression application, however, our goal will be to produce a $\T{C}$ that is sparse, so that only non-zeros need to be stored.
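For concreteness, the storage counts above can be exercised on an illustrative configuration (the specific sizes are hypothetical, chosen only to check the formulas):

```python
# storage counts for a (hypothetical) 512 x 512 image with 16 x 16 patches
Nr, Nc, p, q = 512, 512, 16, 16
s = 2 * p                                   # the choice s = 2p discussed above

image_vals = Nr * Nc                        # pixels in the image
dict_vals = p * q * s                       # dense dictionary D (p x s x q)
coeff_vals = s * q * (Nr // p) * (Nc // q)  # dense coefficients C

assert coeff_vals == s * (Nr // p) * Nc     # the simplification in the text
assert coeff_vals == 2 * image_vals         # dense C costs twice the image
```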
\paragraph{\bf Patch Sizes} There clearly must be a lower bound on the patch size: in the extreme with $p=q=1$, $s = 1$, the dictionary is only one non-zero constant and we cannot have compression because then $\T{C}$ is the image itself. Choosing patch sizes too small undermines the power of the representation in (\ref{eq:expand}), and since the implementation of the algorithms utilizes FFTs of length $q$, there will be too much inefficiency if $q$ is very small (see further discussion in \Cref{subsec:patch size ratio}). If $p$ is too large, since we have shown we need $s \ge p$, we would have a high storage cost for the dictionary. After we discuss the algorithm, we will see that we are further constrained by the computational impact of the choice of patch sizes. \subsection{Global Interpretation: Image Resolution vs. Patch Size} To gain intuition, we observe that by placing all the image patches into position in the image, our tensor approximation is equivalent to the matrix representation \[ \M{B} \approx \sum_{j=1}^s (\M{I}_{n_r} \otimes \M{D}_j) \begin{bmatrix} \Circ{ \V{c}_{j,1} } & \Circ{\V{c}_{j,n_r+1}} & \cdots & \Circ{\V{c}_{j,n_r(n_c-1)+1}} \\ \Circ{ \V{c}_{j,2}} & \Circ{\V{c}_{j,n_r+2}} & \cdots & \Circ{\V{c}_{j,n_r(n_c-1)+2}} \\ \vdots & \vdots & \ddots & \vdots \\ \Circ{ \V{c}_{j,n_r}} & \cdots & \cdots & \Circ{\V{c}_{j,n_r n_c} } \end{bmatrix}, \] where each circulant block in the block matrix is of size $q \times q$, and there are $n_r$ block rows and $n_c$ block columns. Thus, the image has an expansion in terms of a structured global dictionary $(\M{I} \otimes \M{D}_j), j=1,\ldots s$, although such an expansion is never computed explicitly. From this we see that the same dictionary can be used to reconstruct the same image at different resolutions. We illustrate this in the numerical results.
\section{MRNSD for tensors} \label{sec:mrnsd} Since \[ \| \T{B} - \T{D}*\T{C} \|_F^2 = \| \mbox{\tt unfold}(\T{B}) - \mbox{\tt circ}(\T{D})\mbox{\tt unfold}(\T{C}) \|_F^2 , \] let us (implicitly) define \[ \V{v} = \mbox{\tt vec}(\mbox{\tt unfold}(\T{B})), \qquad \V{c} = \mbox{\tt vec}(\mbox{\tt unfold}(\T{C})), \qquad \mbox{ and } \M{D} = \M{I} \otimes \mbox{\tt circ}(\T{D}). \] The MRNSD algorithm (\cite{Nagy2000,Kaufman1993}) was developed to solve $\min_{\V{c} \ge 0} \| \V{v} - \M{D} \V{c} \|_2 ,$ so it can clearly be applied to our formulation. Of course it would be foolish to form $\M{D}$ explicitly. In fact, we can use an equivalent and elegant formulation of each MRNSD step that uses the tensor mechanics throughout, and therefore requires only a routine that performs the $t$-product. The algorithm is given in \Cref{alg:mrnsd}, and the details of the equivalence to this approach are given in \Cref{sec:mrnsdderivation}. \begin{algorithm}[H] \caption{MRNSD with $t$-product} \label{alg:mrnsd} \begin{algorithmic}[1] \STATE{\textbf{Input:} image $\T{B}$, dictionary $\T{D}$, initial estimate $\T{C}_0$} \STATE{Form gradient $\T{G}_0 = -\T{D}^T * (\T{B} - \T{D} * \T{C}_0)$} \FOR{$k = 0,1,2,\dots$} \STATE{$\T{S}_k = \T{C}_k \odot \T{G}_k$ \hfill \COMMENT{Form search direction (\Cref{sec:mrnsdderivation})}} \STATE{$\theta_k = \texttt{trace}[(\T{S}_k^T * \T{G}_k)^{(1)}]/ \| \underbrace{\T{D} * \T{S}_k}_{\T{W}_k} \|_F^2$ \hfill \COMMENT{Determine optimal step size (\Cref{sec:mrnsdderivation})}} \STATE{$\alpha_k = \min\{\theta_k, \min_{(\T{S}_k)_{ij}^{(\ell)} > 0} (\T{C}_k)_{ij}^{(\ell)} / (\T{S}_k)_{ij}^{(\ell)}\}$ \hfill \COMMENT{Ensure step size preserves non-negativity}} \STATE{$\T{C}_{k+1} = \T{C}_k - \alpha_k \cdot \T{S}_k$ \hfill \COMMENT{Update coefficients} \label{alg:update coefficients}} \STATE{$\T{G}_{k+1} = \T{G}_k - \alpha_k \cdot \T{D}^T * \T{W}_k$ \hfill \COMMENT{Update gradient}} \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Implementation Details}
\label{subsec:patch size ratio} Per iteration in \Cref{alg:mrnsd}, the dominant costs are the two products $\T{W}_k := \T{D}*\T{S}_k$ and $\T{D}^T *\T{W}_k$. Recall the $t$-product is computed by moving into the Fourier domain (i.e. computing $\widehat{\T{D}}, \widehat{\T{S}}_k$ and their facewise matrix-matrix products). Some computations can be reused. We note that $\widehat{\T{W}_k}$ need not be recomputed, since those entries are already known from computing $\T{W}_k$ in the step size computation. Also, entries of $\widehat{\T{D}^T}$ are known from the entries of $\widehat{\T{D}}$. So we need only to assess the costs of computing $\widehat{\T{D}}, \widehat{\T{S}}_k$, and the costs of the $q$ matrix-matrix products $\widehat{\T{D}}^{(\ell)} \widehat{\T{S}}_k^{(\ell)}$ and $\widehat{\T{D}^T}^{(\ell)} \widehat{\T{W}}^{(\ell)}$. The computational cost for the FFTs is $O(s\cdot\log_2(q)\cdot(pq + \tfrac{N_r N_c}{p}))$, and the computational cost for the matrix-matrix products is $O( s N_r N_c )$. Note that the cost of the matrix multiplications is independent of the patch size if we assume serial implementation. At the other extreme, for $q$ processors, each processor would compute a single matrix-matrix product at $s \tfrac{N_r N_c}{q}$ flops, so a larger value of $q$ is beneficial. Either way, $q$ should not be too small or the constant in front of the cost to perform length-$q$ FFTs will not be suitably amortized. We already observed that we need $s \ge p$. If $s = k p$ for a small integer $k$, the total flop count in serial is $O( k p^2 \log_2(q) q + N_r N_c k (p+\log_2(q)) )$. Note the cost grows more slowly in $q$ than in $p$, suggesting $p \le q$ may be desirable. For sufficiently small fixed $k,p,q$, the cost grows as the number of unknowns in the image. \subsection{Compression} \label{subsec:compression} When we reconstruct images via tensor MRNSD, we tend to generate coefficients $\T{C}$ which contain many small values.
This is because of the efficient encoding of information inherent in the $t$-product discussed previously. We introduce a sparsity regularization to our MRNSD minimization motivated by a proximal-operator framework~\cite{Boyd2013}. We briefly outline the proximal gradient method; specific details can be found in~\cite{Boyd2013}. Traditionally, proximal algorithms are a class of convex optimization techniques for solving problems of the following form: \begin{equation}\label{eqn:proximal framework} \min_{\V{c}} h(\V{c}) \equiv f(\V{c}) + g(\V{c}), \end{equation} where $f$ is smooth and convex and $g$ is simple and convex. For example, $f$ could be a squared $\ell_2$-norm (i.e., quadratic) and $g$ could be an $\ell_1$-regularization term (i.e., piecewise-linear). We cannot use a conventional gradient-descent algorithm on \cref{eqn:proximal framework} because $g$ need not be differentiable. Instead, we define the \emph{proximal operator} of $g$ as follows: \begin{equation}\label{eqn:prox operator} \text{prox}_g(\V{y}) = \argmin_{\V{c}} \left\{g(\V{c}) + \frac{1}{2} \| \V{c} - \V{y}\|_2^2\right\}. \end{equation} The intuition behind \cref{eqn:prox operator} is to find a point that balances minimizing $g$ against remaining close to the point $\V{y}$. This interpretation gives rise to a two-step procedure to solve \cref{eqn:proximal framework}: \medskip \begin{enumerate} \item Minimize $f$ using gradient descent: $\V{y} = \V{c}_k - \alpha_k \nabla f(\V{c}_k).$ \item Find a nearby point which minimizes $g$: $\V{c}_{k+1} = \text{prox}_g(\V{y})$. \end{enumerate} \medskip We can apply this proximal operator framework to MRNSD with regularization. For simplicity, we derive our method for matrix-vector products, with the understanding that we can translate this to tensor notation in our case, as we show at the end of this section.
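A minimal sketch of this two-step iteration for the common special case $f(\V{c}) = \frac{1}{2}\|\V{b} - \M{D}\V{c}\|_2^2$ and $g = \lambda\|\cdot\|_1$, whose proximal operator is the familiar soft-thresholding map (all names below are illustrative):

```python
import numpy as np

def soft_threshold(x, mu):
    """Proximal operator of mu * ||.||_1: shrink each entry toward 0 by mu."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def prox_grad(D, b, lam, alpha, iters=100):
    """Proximal gradient iteration for 0.5 * ||b - D c||^2 + lam * ||c||_1."""
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        y = c - alpha * (D.T @ (D @ c - b))   # 1. gradient step on f
        c = soft_threshold(y, alpha * lam)    # 2. prox step on g
    return c
```

For $\M{D} = \M{I}$ and $\alpha = 1$, a single iteration already lands on the soft-thresholded data.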
Ideally, we use $\ell_1$-regularization to promote sparsity in our original problem: \begin{equation}\label{eqn:mrnsd with regularization1} \min_{\V{c} \ge 0} \frac{1}{2} \|\V{b} - \M{D} \V{c} \|_2^2 + \lambda \| \V{c} \|_1. \end{equation} Using MRNSD, we incorporate the non-negativity constraint into the optimization using the mapping $\V{c} = e^{\V{z}}$. However, because $e^{\V{z}}$ is strictly positive, simply regularizing $e^{\V{z}}$ will not promote sparsity. We incorporate the constraint into our function using the following mapping: \begin{align*} \V{c} = e^{\V{z}} - \epsilon {\V{1}}, \end{align*} where $\V{1}$ denotes the vector of all ones. This means $\V{c}_i > -\epsilon$ and we will take $\epsilon \to 0$. We now minimize the unconstrained problem: \begin{align*} \min_{\V{z}} \underbrace{\tfrac{1}{2} \|\V{b} - \M{D} (e^{\V{z}} - \epsilon) \|_2^2}_{f} + \underbrace{\lambda \|e^{\V{z}} - \epsilon \|_1}_{g}. \end{align*} We compute the gradient of $f$ and the proximal operator of $g$ as follows: \begin{align*} \nabla f &= e^{\V{z}} \odot [-\M{D}^T(\V{b} - \M{D} ( e^{\V{z}} - \epsilon))] \\ &= (\V{c} + \epsilon) \odot [-\M{D}^T(\V{b} - \M{D} \V{c})] \\ &\to \V{c} \odot [-\M{D}^T(\V{b} - \M{D} \V{c})], \quad \epsilon \to 0. \end{align*} This is the same gradient we had before. Next we consider the proximal operator: \begin{align*} \texttt{prox}_g(\V{y}) &= \text{arg}\min_{\V{c}} \left\{ \tfrac{1}{2}\|\V{c} - \V{y}\|_2^2 + \lambda \|\V{c} \|_1\right\}\\ &= \text{arg}\min_{\V{z}} \left\{ \tfrac{1}{2}\|e^{\V{z}} - \epsilon - \V{y}\|_2^2 + \lambda \|e^{\V{z}} - \epsilon \|_1\right\}. \end{align*} We solve this by computing the (sub)gradient and setting it equal to zero as follows: \begin{align*} e^{\V{z}}\odot (e^{\V{z}} - \epsilon- \V{y}) + \lambda e^{\V{z}}\odot \texttt{sign}(e^{\V{z}} - \epsilon) &= \V{0},\\ e^{\V{z}} - \epsilon- \V{y} + \lambda \cdot \texttt{sign}(e^{\V{z}} - \epsilon) &=\V{0}.
\end{align*} We can map this back to $\V{c}$ as follows: \begin{align*} \V{c} - \V{y} + \lambda \cdot \texttt{sign}(\V{c}) &= \V{0}. \end{align*} The solution to the above equation is exactly the soft-thresholding operator. Therefore, our MRNSD iteration with incorporated $\ell_1$-regularization is the following: \begin{align*} \V{c}_{k+1} = G_{\alpha_k \cdot \lambda}[\V{c}_k - \alpha_k \cdot \V{c}_k \odot (-\M{D}^T(\V{b} - \M{D} \V{c}_k)) ], \end{align*} where $G_\mu$ is the soft-thresholding operator: \begin{align*} G_{\mu}[c] &= \left\{ \begin{array}{ll} c - \mu, & c > \mu \\ 0, & |c| \le \mu \\ c + \mu, & c < -\mu. \end{array}\right. \end{align*} Using the same observations as in the start of this section that allowed us to move from the matrix formulation to the tensor formulation, we arrive at the sparsity-constrained tensor-MRNSD formulation by changing \Cref{alg:mrnsd}, Line \ref{alg:update coefficients} to the following: \begin{align}\label{eq:update coefficients with sparsity} \T{C}_{k+1} = G_{\alpha_k \cdot \lambda}[\T{C}_k - \alpha_k \cdot \T{S}_k]. \end{align} We conclude this section by noting that we are not the first to consider augmentation of MRNSD iterates in order to encourage sparsity. In \cite{Cickbook}, the authors suggest applying a sparsity-type constraint to an MRNSD step, but their suggestion comes without mathematical justification, and we found in our examples that incorporating it did little to promote sparsity. \section{Deblurring} \label{sec:deblur} We briefly review the standard model for image blurring/deblurring to set the stage for our tensor-based deblurring approach. For more background see \cite{OlearyEtAl}.
The basic blurring model, given a known image $\M{X}_{true} \in \mathbb{R}^{N_r \times N_c}_{+}$ with $\V{x}_{true} = \mbox{\tt vec}(\M{X}_{true})$, is $$\M{A} \V{x}_{true} + \V{n} = \V{b},$$ where $\M{A} \in \mathbb{R}^{M_r M_c \times N_r N_c}$ is a blurring operator whose singular values decay rapidly to 0, $\V{n}$ is the unknown white noise vector and $\V{b}$ is the blurred noisy image in vector form; that is, $\V{b} = \mbox{\tt vec}(\M{B})$ where $\M{B}$ is $M_r \times M_c$. Since $\M{A}$ and $\V{b}$ are known but the noise is not, one might be tempted to ignore the noise, and compute the minimum-norm, least squares solution to $\M{A} \V{x} = \V{b}$. However, the ill-conditioning of the operator renders the least squares solution worthless, since small singular values magnify the noise present in the data. Algorithms for computing estimates $\V{x} \approx \V{x}_{true}$ in the presence of noise are called regularization methods. Iterative solvers, such as MRNSD, can be used as regularization methods. Consider applying MRNSD to \[ \min_{\V{x} \ge 0} \| \V{b} - \M{A} \V{x}\|_2. \] It will produce a sequence of iterates $\V{x}_k$. Those iterates tend to exhibit semi-convergent behavior: they approximate the noise-free solution $\V{x}_{true}$ increasingly well with increasing $k$, but only up to a point. After a particular iteration, the method starts to fit the noise in $\V{b}$ in order to solve the optimization problem, and the solution begins to resemble the noise-contaminated solution. If the stopping iteration is picked before the contamination happens, the method acts as a regularization method. In our approach, we require non-negativity of the image estimate, and in addition we require that the image be composed from a learned, non-negative patch dictionary. In other words, we want our image estimate (expressed as a tensor, $\T{X}$) to be given by $\T{X} \approx \T{D} * \T{C}_k $ for $\T{C}_k \ge 0$. MRNSD can be used to treat this problem.
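The noise amplification caused by small singular values is easy to see on a toy problem (entirely synthetic, not the blurring operator used in our experiments): a naive solve is dominated by amplified noise, while discarding the small singular values (a crude regularization) gives a far better estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sv = 10.0 ** np.linspace(0, -12, n)             # rapidly decaying singular values
A = U @ np.diag(sv) @ V.T

x_true = rng.random(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)  # small additive noise

naive = np.linalg.solve(A, b)                   # inverts the tiny singular values
k = 20                                          # keep only the large singular values
trunc = V[:, :k] @ ((U[:, :k].T @ b) / sv[:k])

err_naive = np.linalg.norm(naive - x_true)
err_trunc = np.linalg.norm(trunc - x_true)
assert err_trunc < err_naive                    # regularization wins
```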
First, however, we need to show that it is possible to express the term $\T{D} * \T{C}$ via matrix-vector products (while still employing the tensor format for computational efficiency in the actual implementation). Consider the relationship between the two formats of the same image: $\V{x}$ and $\T{X}$. We see that \begin{equation} \label{eq:Pdef} \V{x} = \M{P} \mbox{\tt vec} (\Unfold{ \T{X}} ) \approx \M{P} \mbox{\tt vec}(\Unfold{ \T{D}*\T{C} }), \end{equation} for a permutation matrix $\M{P}$. Now $\Unfold{\T{D} * \T{C}} = \Circ{\T{D}} \, \Unfold{\T{C}}$, so $\mbox{\tt vec}(\Unfold{\T{D}*\T{C}}) = (\M{I} \otimes \Circ{\T{D}}) \mbox{\tt vec}(\Unfold{\T{C}})$. Thus, we can apply MRNSD to solve \[ \min_{\T{C} \ge 0} \| \V{b} - \left(\M{A} \M{P} (\M{I} \otimes \Circ{\T{D}}) \right) \mbox{\tt vec}(\Unfold{\T{C}}) \|_2, \] though in practice, we construct neither $\M{P}$ nor $\M{I} \otimes \Circ{\T{D}}$ explicitly, since all the necessary computations can be done with permutation indices and $t$-products with $\T{D}$ and $\T{D}^T$. The computational cost of one iteration is dominated by matrix-vector products with $\M{A}, \M{A}^T$ and products with $\T{D}, \T{D}^T$, which, as we saw previously, for sufficiently small values of $p, q$, is a small multiple of the number of unknowns in the image (and can be efficiently parallelized). \section{Numerical Experiments} \label{sec:num} We illustrate the power of the tensor dictionaries to represent images, both qualitatively and quantitatively. In all examples, we represent square images and the dimensions of the image and patches are powers of two. \subsection{Power of Tensor Representations} As discussed in \Cref{sec:superior} and \Cref{thm:superior}, the tensor representations can exactly capture the matrix representations and there are a greater number of possible tensor representations.
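The containment in \Cref{thm:superior} can be checked numerically on the small instance of \Cref{ex:one}; the sketch below (NumPy, with an FFT-based $t$-product helper of our own) verifies both the embedded matrix solution and the genuinely tensor solution:

```python
import numpy as np

def tprod(A, B):
    """t-product via FFTs along the third mode."""
    Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.real(np.fft.ifft(np.einsum('ipk,pjk->ijk', Ah, Bh), axis=2))

# data of the earlier small example: one 2x2 all-ones patch, dictionary s = 5
D = np.zeros((2, 5, 2))
D[:, :, 0] = [[1, 0, 0, 0, 0.25], [0, 1, 0, 0, 0.5]]   # frontal slice 1
D[:, :, 1] = [[0, 0, 1, 0, 0.75], [0, 0, 0, 1, 1.0]]   # frontal slice 2
B = np.ones((2, 1, 2))                                 # twist of the patch

C_mat = np.zeros((5, 1, 2))
C_mat[:, 0, 0] = [1, 1, 1, 1, 0]                       # embedded matrix solution
C_ten = np.zeros((5, 1, 2))
C_ten[:, 0, 0] = C_ten[:, 0, 1] = [1/3, 0, 0, 0, 2/3]  # tensor-only solution

assert np.allclose(tprod(D, C_mat), B)
assert np.allclose(tprod(D, C_ten), B)
assert abs(np.abs(C_ten).sum() - 2) < 1e-12            # sum norm 2 versus 4
```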
To illustrate the advantages of using a tensor representation, we compare representations formed with a learned tensor dictionary $\T{D} \in \mathbb{R}_+^{16\times 32\times 16}$ against representations formed with a learned matrix dictionary $\M{D}\in \mathbb{R}_+^{256\times 512}$. In both cases, we use patches of size $16\times 16$, either stored as lateral slices of $\T{D}$ or as columns of $\M{D}$. The number of dictionary elements in each case is twice the size of the first dimension (i.e., the dictionaries are equally over-complete). Both dictionaries were formed by solving the ADMM formulation from images of faces in the CalTech101 database \cite{Caltech101}. We form our representations in \Cref{fig:matrix vs tensor} using $200$ MRNSD iterations (\Cref{alg:mrnsd}) and we start with a random, normalized initial guess. \begin{figure}[ht] \centering \def\n{4} \subfigure[Original: $\M{B} \in \mathbb{R}_+^{512\times 512}$. \label{subfig:dog original}]{\includegraphics[height=\n cm]{dalmatian_orig.jpg}} % \subfigure[Ten: $\frac{\| \T{B} - \T{D} * \T{C} \|}{\| \T{B}\|} \approx 0.03$ \label{subfig:dog tensor}] {\includegraphics[height=\n cm]{dalmatian_recon.jpg}} % \subfigure[Mat: $\frac{\| \underline{\M{B}} - \M{D} \cdot \M{C} \|}{\| \underline{\M{B}}\|} \approx 0.10$ \label{subfig:dog matrix}] {\includegraphics[height=\n cm]{matrix_dalmatian_recon.jpg}} \caption{Comparison of tensor representation \ref{subfig:dog tensor} with learned $\T{D}\in \mathbb{R}_+^{16\times 32\times 16}$ vs. learned matrix representation \ref{subfig:dog matrix} with $\M{D} \in \mathbb{R}_+^{256\times 512}$.} \label{fig:matrix vs tensor} \end{figure} In \Cref{fig:matrix vs tensor}, we see the tensor representation in \ref{subfig:dog tensor} is better than the matrix representation in \ref{subfig:dog matrix}, both numerically and qualitatively. Thus, not only are there more possible tensor representations (see \Cref{thm:superior}), the representation we form is better.
This is somewhat surprising as the number of coefficients (i.e., the representation ability) for both the tensor and matrix cases is the same. More specifically, the sizes are the following: the tensor coefficients $\T{C} \in \mathbb{R}_+^{32\times 1024\times 16}$ and the matrix coefficients $\M{C} \in \mathbb{R}^{512 \times 1024}$ where $1024$ is the number of patches in our original image $\M{B}$. A potential reason for this improved representation is that the patches stored in the tensor dictionary $\T{D}$ maintain some spatial relationships typical in natural images (e.g., smooth curves) whereas the patches stored in the matrix dictionary $\M{D}$ are more binary (e.g., sharp edges). \subsection{Fixed Dictionary, Changing Resolution} As noted previously, independent of how the dictionary was learned, we can employ that dictionary (assuming appropriate dimensions) on multiple resolutions of the same image, as illustrated in \Cref{fig:scale}. Importantly, the fact that the dictionary was trained on images of a different resolution (in this case, the training data were all $128 \times 128$ images) does not matter. \begin{figure}[htb] \centering \def\n{2.75} \begin{tikzpicture} \foreach \i/\size in {0/2048, 1/1024, 2/512, 3/256, 4/128}{ \node at (\i*2.85, 0) {\includegraphics[height=\n cm]{soccer_recon_\size.jpg}}; \node at (\i*2.85, -2.85) {\includegraphics[height=\n cm]{soccer_diff_\size.jpg}}; \node at (\i*2.85, -1.5*2.85-0.5) {$\size \times \size$}; } \end{tikzpicture} \caption{Effects of representations with $\T{D}\in \mathbb{R}_+^{16\times 32\times 16}$ as the image resolution changes. The dictionary was formed from $128 \times 128$ images. The top row shows the representations $\T{D}* \T{C}$ and the bottom row shows the absolute difference $|\T{B} - \T{D} * \T{C}|$.
The resolution decreases from left to right.} \label{fig:scale} \end{figure} In \Cref{fig:scale}, we notice that as the image resolution decreases, the quality of our representations decreases as well (this is borne out by our reconstruction relative error). This is because the size of our patch relative to the image increases; i.e., each patch is representing a larger portion of the image, and hence is less likely to match exactly. From another perspective, when we represent an image with a higher resolution, the dictionary patches act more like individual pixels in the image and hence provide a more accurate representation. \subsection{Color} We can also represent color images using the same dictionary generated from grayscale images. Suppose we have an RGB image of size $N_r\times N_c \times 3$ where the third dimension is the number of color channels. To patchify an RGB image, we treat each channel as a separate grayscale image from which we form patches and store them as lateral slices of a tensor. This means we have three tensors of size $p\times M \times q$. We then concatenate the lateral slices of the patchified tensors for each color channel to obtain our RGB patchified tensor of size $p\times 3M\times q$. In \Cref{fig:color}, we depict a representation of a color image using the same tensor dictionary $\T{D}\in \mathbb{R}_+^{16\times 32\times 16}$. \begin{figure}[htb] \centering \def\n{4} \subfigure[Original $512\times 512$, $\M{B}$.]{\includegraphics[height=\n cm]{sunflower_orig.jpg}} \subfigure[Representation, $\T{D} * \T{C}$.]{\includegraphics[height=\n cm]{sunflower_recon.jpg}} \subfigure[Difference, $|\T{B} - \T{D} * \T{C}|$.]{\includegraphics[height=\n cm]{sunflower_diff.jpg}} \caption{Representing color images using $\T{D}\in \mathbb{R}_+^{16\times 32\times 16}$.
The relative error of our representation is $\| \T{B} - \T{D} * \T{C} \| / \| \T{B} \| \approx 0.02$.} \label{fig:color} \end{figure} \subsection{Compression} In the examples above, the image $\M{B}$ is $N \times N$. To quantify the compression of a single image via the approximation $\T{B} \approx \T{D}*\T{C}$, we compute the compression ratio \[ \frac{ \mbox{nnz}(\T{D}) + \mbox{nnz}(\T{C})}{ N^2 } .\] However, if we are storing compressed representations of multiple images that have all been compressed using the same dictionary, the cost of storing the dictionary becomes amortized over the multiple test images, so we approximate compression via $\mbox{nnz}(\T{C}) / N^2$. For a fixed patch size, we know we want $s \ge p$. We can make $s$ larger (perhaps a bit larger than $2p$) and increase sparsity up to a point, but if $s$ is too large, the representation becomes highly non-unique and the optimization problem becomes harder to solve. We can also change the patch size. Increasing the patch size for a fixed resolution beyond a certain point is not a good idea -- we lose representability. But for larger images, we may well want to increase the patch size if we expect the representation to be more sparse without losing much representability. If we do that, $s$ must increase as a small multiple of $p$ and the cost of producing $\T{C}$ increases. To examine the effects of patch size on compressibility, we compare the relative error to the approximate compression $\mbox{nnz}(\T{C}) / N^2$ where $N = 512$ in \Cref{fig:compression with dictionary size}. We use $200$ MRNSD iterations (\Cref{alg:mrnsd}) with the soft-thresholding step \eqref{eq:update coefficients with sparsity}. \begin{figure}[htb] \centering \includegraphics[width=\textwidth]{compression_graph_v2.pdf} \caption{Comparison of approximate compression for various dictionary sizes using sparsity-promoting MRNSD (\Cref{alg:mrnsd} with \eqref{eq:update coefficients with sparsity}) and a sparsity parameter of $\lambda=10^{-10}$. 
The cyan line represents dictionaries of the same patch size, but varying the number of dictionary elements (i.e., width). The magenta line represents dictionaries of various patch sizes, but the same level of over-completeness (i.e., twice as many lateral slices as the first patch dimension $p$).} \label{fig:compression with dictionary size} \end{figure} There are a few key trends to notice in \Cref{fig:compression with dictionary size}. The first is that the more over-complete a dictionary is, the more compressed the representation without significant loss of accuracy (the cyan dictionaries). This behavior occurs because with a wider selection of dictionary patches to choose from, we likely need to select fewer patches to represent an image well. However, if we include the cost of storing these wider dictionaries, the compression ratio greatly increases due to the width of the dictionary. The second trend is that the larger the patch size, the more compressed the representation, albeit with a significant loss of accuracy (the magenta dictionaries). This behavior occurs because larger patches are able to capture larger sections of an image, hence fewer patches are required to form the representation. However, the larger patches are less likely to reproduce the original image exactly, which decreases the representation quality. Interestingly, if we include the cost of storing the dictionaries with large patches, it does not significantly impact the overall storage cost -- the width of the dictionary relative to the first patch size $p$ is a substantially more significant factor. \subsection{Deblurring Results} We used Matlab and features in the RestoreTools Matlab toolbox \cite{restoretools} as indicated. \subsubsection{Example 1} Our true image was the $512 \times 512$ image of the orca in \Cref{fig:aa1}. 
We used the grain blur in the RestoreTools set of example files\footnote{The grain blur point-spread-function is for a $256 \times 256$ image, so we padded the blur by zeros to get a PSF suitable for a $512 \times 512$ image.} to create a blurring operator $\M{A}$ corresponding to reflexive boundary conditions. We computed $\M{A} \V{x}_{true}$, and added Gaussian noise at a noise level of 1 percent to the image. The blurred, noisy image is shown in \Cref{fig:bb1}. Convergence to regularized solutions is known to be slow with MRNSD \cite{Nagy2000}, so preconditioning is often used. Thus, in both the non-dictionary and dictionary reconstructions, we used the built-in preconditioner option and used MRNSD on the preconditioned problems \[ \min_{\V{x} \ge 0} \| \M{M} \V{b} - \M{M} \M{A} \V{x} \|_F \mbox{ or } \min_{\T{C} \ge 0} \| \M{M} \V{b} - \M{M} \left(\M{A} \M{P} (\M{I} \otimes \underbar{\M{D}}) \right) \mbox{\tt vec}(\Unfold{\T{C}}) \|_F \] where $\M{M}$ denotes the preconditioner determined from the PSF and $\V{b}$, using the default settings. The matrix $\M{P}$ is a permutation matrix (see \eqref{eq:Pdef}). The algorithm needs a non-zero starting guess. In the matrix case, we used a vector of all ones as the initial guess for $\V{x}$. In the tensor case, to make an equivalent comparison, we first formed a patchified version of an image of all ones. We then multiplied that by the tensor-pseudoinverse of $\T{D}$ (see \cite{KilmerEtAl2013} for details) and used this as the starting guess for $\T{C}$. We wanted to compare the quality of MRNSD with and without dictionaries. We do not discuss choosing optimal truncation parameters, though we note that the semi-convergence behavior is very much damped when using the dictionaries. We used two dictionaries derived from different data sets at different patch sizes. The first dictionary was obtained from the CalTech face database. We took $p=q=16$ and $s=32$. The second dictionary was obtained from a collection of 60 elephant photos \cite{Caltech101}. 
Here, we took $p=q=32$ and $s=64$. In the figure we compare the `optimal' solution (i.e., the iterate that gave the smallest relative error against ground truth) for preconditioned MRNSD with no dictionary against the other reconstructions. In \Cref{fig:cc1}, we give the optimal reconstruction for the smaller dictionary. In \Cref{fig:dd1}, we give the solution after 2000 iterations for the larger dictionary (the error is still decreasing at this point, so it may not be an optimal stopping point). In \Cref{fig:ee1}, we averaged the solutions\footnote{In fact, any convex combination of the reconstructions would have been an option.} to acknowledge the fact that this image has both fine scale features and components that are nearly uniform, so we expect that different resolution patches would be sensitive to this fact. We address the issue of multiresolution reconstructions in the Conclusions. All dictionary based solutions gave reconstructions with smaller relative error and larger structural similarity (SSIM), as shown in the table. It is worth noting that the dictionary-based reconstructions took longer to converge: while preconditioned MRNSD in the matrix-only case took 63 iterations to reach the optimal solution, it took the small dictionary 1,211 iterations, and as mentioned, we let the large dictionary case run 2000 iterations. On the other hand, this is not an entirely fair comparison, either, since the preconditioner was constructed relative to $\M{A}$, whereas in the matrix formulation of our tensor approach, the structure of the matrix operator is quite a bit different. \begin{figure}[H] \centering \subfigure[Original.]{\label{fig:aa1} \includegraphics[scale=.2]{orca_true.png}} \subfigure[Blurred, noisy.]{\label{fig:bb1} \includegraphics[scale=.2]{orca_blurrednoisy.png}} \subfigure[Ten. rcn $p,q=16$, opt]{\label{fig:cc1} \includegraphics[scale=.2]{smallpatchorca_1em2.png}} \subfigure[Ten. rcn. 
$p,q=32$]{\label{fig:dd1} \includegraphics[scale=.2] {lrgpatchorca_1em2.png}} \subfigure[Combined tensor]{\label{fig:ee1} \includegraphics[scale=.2]{combinedorca_1em2.png}} \subfigure[Matrix Recon, opt]{\label{fig:ff1} \includegraphics[scale=.2]{orca_pmrnsd1em2.png}} \caption{Example 1: Orca original, blurred and noisy images, and various reconstruction results. } \label{fig:orcarecon} \end{figure} \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|} \hline & Small Dictionary & Large Dictionary & Combined & PMRNSD \\ \hline Rel Err & 0.119 & 0.123 & 0.116 & 0.144\\ \hline SSIM & 0.518 & 0.522 & 0.541 & 0.376 \\ \hline \end{tabular} \caption{Relative error and SSIM results for Example 1. The combined-dictionary image had slightly better relative error and SSIM results than any other. The matrix-based, non-dictionary reconstruction has notably worse relative error and SSIM than all the dictionary-based reconstructions.} \end{table} \subsubsection{Examples 2 and 3: Underdetermined Problems} In the first illustration, our true image was $256 \times 256$. We wanted to simulate a situation in which the boundary conditions of the blur were taken to be unknown. We took a symmetric Gaussian blur of discrete bandwidth 8 and $\sigma = 3$, applied it to the true image, trimmed the blurred image by 8 pixels on all sides, reshaped it, and added 1 percent random Gaussian noise to the data\footnote{To implement this process in {\sc Matlab}: Let $\V{v} = \mbox{exp}(-\frac{1}{\sqrt{2\sigma}}[0\!\!:\!\!7]\mbox{.}^2)$, and $\M{A}_1 = {\tt toeplitz}(\V{v})$. Define $\M{T} = \M{A}_1 * \M{X}_{true} * \M{A}_1$, and $\V{b}_{true} = \M{T}(8\!:\!247,8\!:\!247)$, $\V{b}_{true} = \V{b}_{true}(:);$ and $\V{b} = \V{b}_{true} + c \cdot \mbox{\tt randn}({\tt length}(\V{b}_{true}),1)$ where $c$ is such that the noise level is 0.01.}. This meant the data vector was only length $240^2$ while the true image was $256^2$, indicating there are fewer equations than unknowns. 
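The trimmed-blur construction described above can be sketched in a few lines of code. The following Python sketch is illustrative only (the function names are ours, and the Gaussian weights are a standard choice that need not match the exact Matlab weights in the footnote); it shows why trimming the blurred image yields fewer data entries than unknowns.

```python
# Illustrative sketch (not the paper's Matlab code): apply a symmetric
# banded Toeplitz blur on both sides of an image, then trim the borders
# so the data is smaller than the unknown image (underdetermined setup).
import math

def gaussian_toeplitz(n, bandwidth, sigma):
    """Symmetric n x n Toeplitz matrix from a banded Gaussian-like kernel
    (assumed weights; the footnote's exact Matlab weights may differ)."""
    v = [math.exp(-(k ** 2) / (2.0 * sigma ** 2)) if k < bandwidth else 0.0
         for k in range(n)]
    return [[v[abs(i - j)] for j in range(n)] for i in range(n)]

def blur_and_trim(X, bandwidth, sigma, trim):
    """Compute T = A1 * X * A1 and trim `trim` pixels on every side."""
    n = len(X)
    A1 = gaussian_toeplitz(n, bandwidth, sigma)
    # blur along columns: AX = A1 * X
    AX = [[sum(A1[i][k] * X[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    # blur along rows: T = AX * A1
    T = [[sum(AX[i][k] * A1[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    # trimming leaves (n - 2*trim)^2 data values for n^2 unknowns
    return [row[trim:n - trim] for row in T[trim:n - trim]]
```

For a $256 \times 256$ image trimmed by 8 pixels, this yields $240^2$ data values for $256^2$ unknowns, as in the text.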
Since the problem is underdetermined, it may be desirable to add regularization to enforce smooth transitions between patches. For an $N \times N$ image and $p \times p$ patches, we consider \[ \min_{\T{C} \ge 0} \left\| \bea{c} \V{b} \\ 0 \end{array} \right] - \left( \bea{c} \M{A} \\ \lambda \M{L} \end{array} \right] \M{P} (\M{I} \otimes \underbar{\M{D}}) \right) \mbox{\tt vec}(\Unfold{\T{C}}) \right\|_2, \qquad \M{L} = \bea{c} \M{I} \otimes \M{Q} \\ \M{Q} \otimes \M{I} \end{array} \right] , \] where $\M{Q}$ could either be an $(N-1) \times N$ first order discrete derivative operator, or, in order to minimize computation, an $(\frac{N}{p} -1) \times N$ matrix approximating discrete derivatives only across patch jumps. We expect, for a suitable value of $\lambda$, some smoothing across patch boundaries. As our results below show, there is some modest gain to be had from including extra regularization. However, our method is relatively insensitive to the choice of $\lambda$, whereas MRNSD without dictionaries is not. The dictionary used in the reconstruction was constructed from the CalTech face database (the same as in the previous example). Patch sizes were $16 \times 16$ and we took $s=32$ and used 2000 iterations to obtain each reconstruction (all convergence curves were nearly flat at this point). The original image in \Cref{fig:a1} was obtained by cropping the {\sc Matlab} image {\tt clutteredDesk.jpg}. The tensor-dictionary-based reconstruction with $\M{L}$ a discrete gradient operator and $\lambda=10$ is in \Cref{fig:c1}; the tensor-based reconstruction with no regularization is in \Cref{fig:d1}. The SSIM values of these were $.815$ and $.776$, respectively, showing the insensitivity to $\lambda$ and to additional regularization in general. The corresponding matrix MRNSD reconstructions, with additional regularization ($\lambda=10$) and without, are shown in \Cref{fig:e1} and \Cref{fig:f1}. Without regularization, the borders are white. 
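The two choices of $\M{Q}$ above can be sketched concretely. The following Python sketch is our own illustration (the paper only states the operator dimensions, so the exact stencils here are an assumption): the full operator differences every adjacent pair of pixels, while the cheaper operator differences only across patch boundaries.

```python
# Illustrative sketch of the two regularization operators Q described
# above (assumed stencils; the paper specifies only the dimensions).
def full_derivative(n):
    """(n-1) x n first-order forward-difference operator."""
    return [[(-1.0 if j == i else 1.0 if j == i + 1 else 0.0)
             for j in range(n)] for i in range(n - 1)]

def patch_jump_derivative(n, p):
    """(n/p - 1) x n operator differencing only across patch boundaries:
    row m compares the last pixel of patch m with the first of patch m+1."""
    rows = []
    for m in range(n // p - 1):
        row = [0.0] * n
        row[(m + 1) * p - 1] = -1.0   # last pixel of patch m
        row[(m + 1) * p] = 1.0        # first pixel of patch m+1
        rows.append(row)
    return rows
```

The second form has only $N/p - 1$ rows instead of $N - 1$, which is what makes it cheaper to apply inside the iteration.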
The quality depends closely on the value of the regularization parameter, which is problematic. Moreover, the same or better quality reconstruction can be obtained using our tensor-dictionary-based approach without the need to choose a $\lambda$ -- for example, the SSIM of the tensor-based reconstruction in \Cref{fig:d1} was higher than for the matrix case for any value of $\lambda$ that we tried. \begin{figure} \centering \subfigure[Original.]{\label{fig:a1} \includegraphics[height=1.5in,width=1.6in]{ele256_orig.png}} \subfigure[Blurred, noisy.]{\label{fig:b1} \includegraphics[height=1.5in,width=1.6in]{ele256_blurredN.png}} \subfigure[Ten rcn $\lambda=10$]{\label{fig:c1} \includegraphics[height=1.5in,width=1.6in]{ele256_ten_L10_2000its.png}} \subfigure[Ten rcn $\lambda = 0$]{\label{fig:d1} \includegraphics[height=1.5in,width=1.6in] {ele256_ten_L0_2000its.png}} \subfigure[Matrix rcn, $\lambda=10$]{\label{fig:e1} \includegraphics[height=1.5in,width=1.6in]{ele256_matL10_2000its.png}} \subfigure[Matrix rcn, $\lambda=0$]{\label{fig:f1} \includegraphics[height=1.5in,width=1.6in]{ele256_matL0_2000its.png}} \caption{Example 2: Original, blurred noisy image, and various reconstructions, with and without discrete derivative regularization for the tensor-dictionary-based and matrix reconstructions.} \label{fig:dancerrecon} \end{figure} In the second illustration, the blurred and noisy image of size $512 \times 512$ is given in \Cref{fig:dogrecon}. We use the same dictionary (i.e., learned from human faces) as in the previous illustration, but this time we used a discrete bandwidth of 12, $\sigma=4$, and 5 percent Gaussian noise. In \Cref{fig:a}, we show the relative errors for our tensor approach with $\lambda = 100$ and the patch-smoothing regularizer, the tensor approach with $\lambda = 0$ (i.e., no smoothing across patches), and the matrix-based MRNSD. 
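For readers unfamiliar with MRNSD, one iteration of the basic (unpreconditioned, dense) method for $\min_{\V{x} \ge 0} \|\M{A}\V{x} - \V{b}\|$ can be sketched as follows. This is a textbook-style stand-in, not the paper's \Cref{alg:mrnsd}; the optional soft-threshold at the end is a plain shrinkage step standing in for the sparsity update \eqref{eq:update coefficients with sparsity}, and all names are illustrative.

```python
# Illustrative sketch of one MRNSD step (dense, unpreconditioned):
# steepest descent scaled by the current iterate, with a step length
# capped so that x stays nonnegative, plus an optional soft-threshold.
def matvec(A, x):
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def mrnsd_step(A, b, x, lam=0.0):
    r = [ri - bi for ri, bi in zip(matvec(A, x), b)]        # A x - b
    At = list(map(list, zip(*A)))
    g = matvec(At, r)                                       # gradient A^T r
    d = [-xi * gi for xi, gi in zip(x, g)]                  # scaled descent dir
    w = matvec(A, d)
    theta1 = (sum(xi * gi * gi for xi, gi in zip(x, g))
              / sum(wi * wi for wi in w))                   # unconstrained step
    neg = [-xi / di for xi, di in zip(x, d) if di < 0]      # steps to boundary
    alpha = min([theta1] + neg)                             # keep x >= 0
    x = [xi + alpha * di for xi, di in zip(x, d)]
    if lam > 0:                                             # soft-threshold
        x = [max(xi - lam, 0.0) for xi in x]
    return x
```

Because the direction is scaled by the iterate itself, components of $\V{x}$ that hit zero stay at zero, which is how the nonnegativity constraint is maintained without projection.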
We observe that the behavior is similar for the two tensor cases, with a slight improvement in the error observed when using the patch-regularization term. We note that semi-convergence behavior is observed in the matrix-based case whereas it is not observed in the tensor cases over the first 2000 iterations. We show the matrix-based reconstruction at the `optimal' iteration count (198) in subfigure \ref{fig:g}, and the reconstruction after 2000 iterations in subfigure \ref{fig:h}. Even the optimal reconstruction is qualitatively not as good: clearly, there is a white boundary where it could not be reconstructed, and there are also ringing and fine scale noise artifacts in other areas of the matrix-based image. However, details are recovered using the tensor patch-based dictionary. In \Cref{fig:f} we show a reconstruction using a different dictionary, also constructed from face data, of size $32 \times 64 \times 32$ for $\lambda = 100$ and 2000 iterations. We see that the quality is very close to that of the $16 \times 16$ patch dictionary, with improvements in some areas of the image but subtle degradation in others. As we note in the conclusions, this suggests that a multilevel dictionary approach may improve the situation further. \begin{figure}[h] \centering \includegraphics[height=1.75in,width=3.25in]{conv_sage512_16} \caption{\label{fig:a} Example 3: Convergence behavior. 
Horizontal axis is iteration number, vertical is relative error of the iterate against the true image.} \end{figure} \begin{figure}[h] \centering \subfigure[Blurred, Noisy.]{\label{fig:c} \includegraphics[height=1.25in,width=1.5in]{blurred_sage512_Noi5.png}} \subfigure[Ten $\lambda=100, p,q=16$]{\label{fig:d} \includegraphics[height=1.25in,width=1.5in]{sage512_ten_L100No5_2000its_16.png}} \subfigure[Ten $\lambda = 0, p,q=16$]{\label{fig:e} \includegraphics[height=1.25in,width=1.5in] {sage512_ten_L0No5_2000its_16.png}} \subfigure[Ten $\lambda = 100,p,q=32$]{\label{fig:f} \includegraphics[height=1.25in,width=1.5in]{sage512_ten_L100No5_2000its_32.png}} \subfigure[Opt. Matrix Recon]{\label{fig:g} \includegraphics[height=1.25in,width=1.5in]{sage512_mat_No5_198itsopt.png}} \subfigure[Matrix Recon, 2000 its]{\label{fig:h} \includegraphics[height=1.25in,width=1.5in]{sage512_mat_No5_2000its.png}} \caption{Example 3: Blurred and noisy image, and several reconstructions. In the matrix cases, note the white border in the reconstructions due to the lack of information near the boundary. } \label{fig:dogrecon} \end{figure} \section{Summary and Future Work} \label{sec:future} In this work, we have shown the utility of learned tensor-patch dictionaries in the context of non-negative image representation, compression, and deblurring applications. In all cases, once a non-negative tensor patch-dictionary is available, we showed that the problems of compression and deblurring could be formulated in terms of recovering the corresponding non-negative tensor coefficient object. We gave an MRNSD tensor algorithm for finding the coefficient tensor, and described a modification that encourages sparsity in the coefficient tensor. Notably, this sparsity constraint is applicable whether one uses matrices or tensors in the formulation, indicating that the proposed approach has broader utility than for the purpose described here. 
In the case of deblurring, we showed the tensor representation is particularly effective in mitigating the effects of noise on the solution, especially in the case of underdetermined problems and boundary effects. Importantly, we demonstrated that the class of data on which the dictionary is trained is surprisingly irrelevant in the context of image representation under the tensor-dictionary formulation, as is the resolution of the training data, both in the context of image compression and image deblurring. We also discussed issues related to patch size, and the trade-offs between sparsity, representability, and computation time. We showed that a fixed dictionary can do remarkably well at representing images at various resolutions, and even across color channels. In our deblurring examples, we saw that the tensor dictionaries could mitigate semi-convergence behavior. We also observed that better representations could be obtained by convex combination of deblurred images constructed using dictionaries at different resolutions, which suggests further work is needed to design a multi-level dictionary representation that allows for better local feature description. Finally, as noted in \cite{Martin:2013kx, Soltani2016}, the t-product generalizes to tensors of order higher than three, so our ideas generalize to higher orders. Recently, in \cite{LAA}, new tensor-tensor products have been defined which, like the t-product, permit a linear algebraic-type framework. The tensor-patch dictionary learning and representation approach can therefore be extended to these tensor-tensor products. Some of the preliminary details are offered in \cite{LizThesis}. Further investigation into which class of tensor-tensor products provides the best non-negative dictionaries for use in image compression and representation remains the subject of future research. 
\section{Introduction} \label{sec:intro} This file is documentation for the SIAM \LaTeX\ style, including how to typeset the main document, the {\scshape Bib}\TeX\xspace\ file, and any supplementary material. More information about SIAM's editorial style can be found in the style manual, available at \url{https://www.siam.org/journals/pdf/stylemanual.pdf}. The major changes in the SIAM online-only class are summarized in \cref{sec:changes}. The SIAM \LaTeX\@ files can be found at \url{https://www.siam.org/journals/auth-info.php}. The files that are distributed are given below. \begin{itemize} \item \texttt{siamonline1116.cls} (required): Main SIAM online-only \LaTeX\ class file. \item \texttt{siamplain.bst} (required): Bibliographic style file for {\scshape Bib}\TeX\xspace. \item \texttt{docsiamonline.tex}: Produces this documentation. \item \texttt{references.bib}: {\scshape Bib}\TeX\xspace\ database for this documentation and examples. \item \texttt{ex\_article.tex}: Template for article. \item \texttt{ex\_supplement.tex}: Template for supplement. \item \texttt{ex\_shared.tex}: Template for shared information for article and supplement. \end{itemize} To use these files, put \texttt{siamonline1116.cls} and \texttt{siamplain.bst} in the directory with your paper or, alternatively, into your \LaTeX\@ and {\scshape Bib}\TeX\xspace\@ paths, respectively. The outline of a SIAM \LaTeX\ article is shown in \cref{ex:outline}. Templates are provided and discussed in more detail in \cref{sec:template}. \section{Class options} \label{sec:class-options} Class options can be included in the bracketed argument of the command, separated by commas. The possible class options are: \begin{itemize} \item \code{review} --- Recommended for submitting your manuscript to a SIAM journal. Adds line numbers as well as the statement ``This manuscript is for review purposes only'' to the bottom of each page. 
\item \code{final} --- Turns off the black boxes that help authors identify lines that are too long. The final published version will have this option on. \item \code{supplement} --- Specifies that the file is a supplement and not the main document, causing changes in the appearance of the title and numbering; see \cref{sec:supplement} for details. \end{itemize} \begin{example}[label={ex:outline},listing only,% listing options={style=siamlatex,{morekeywords=[1]{maketitle}, morekeywords=[2]{siamonline1116}},}]% {Document outline} \documentclass{siamonline1116} \begin{document} \maketitle \end{document} \end{example} \section{Front matter} \label{sec:front} The title and author parts are formatted using the standard \code{\title}, \code{\author}, and \code{\maketitle} commands as described in Lamport \cite{La86}. The title and author should be declared in the preamble. The title and author names are automatically converted to uppercase in the document. If there is more than one author, each additional author should be preceded by the \code{\and} command. The addresses and support acknowledgments are added via \code{\thanks}. Each author's thanks should specify their address. The support acknowledgment should be put in the title thanks, unless specific support needs to be specified for individual authors, in which case it should follow the author address. The header for this file was produced by the code in \cref{ex:header}, including an example of a shared footnote. Each thanks produces a footnote, so the footnote of the second author is \#3. The command \code{\headers{title}{authors}} command, with the title (possibly shortened to fit) and the authors' names, creates the page headers, automatically converted to uppercase. 
~ \examplefile[label={ex:header},listing only,% listing options={style=siamlatex,% deletetexcs={and,thanks,title,author},% {moretexcs=[2]{and,thanks,title,author,maketitle,headers,email}}} ]{Title and authors in preamble}{tmp_\jobname_header.tex} Following the author and title is the abstract, key words listing, and AMS subject classifications, designated using the \code{abstract}, \code{keywords}, and \code{AMS} environments. Authors are responsible for providing AMS numbers which can be found on the AMS web site \cite{AMSMSC2010}. The abstract, keywords, and AMS subject classifications for this document are specified in \cref{ex:abstract}. \examplefile[label={ex:abstract},% before upper={\preamble{\boldsymbol newcommand\{\boldsymbol BibTeX\}\{\{\boldsymbol scshape Bib\}\boldsymbol TeX\boldsymbol xspace\}}}, listing only,% listing options={style=siamlatex,% {morekeywords=[2]{abstract,keywords,AMS}}} ]{Abstract, keywords, and AMS classifications}{tmp_\jobname_abstract.tex} A more complete example, including a PDF supplement, that uses the included files \texttt{ex\_article.tex}, \texttt{ex\_supplement.tex}, and \texttt{ex\_shared.tex} is discussed in \cref{sec:template}. The example files can be used as a starting point for producing a document. \section{Cross references and hyperlinks} \label{sec:cr+hyp} SIAM now supports cross references and hyperlinks via the \texttt{cleveref} and \texttt{hyperef} packages, which are loaded by the class file. \subsection{Cleveref} \label{sec:cleveref} SIAM strongly recommends using the commands provided by the \texttt{cleveref} package for cross referencing. The package is automatically loaded and already customized to adhere to SIAM's style guidelines. To create a cross reference, use the command \code{\cref} (inside sentence) or \code{\Cref} (beginning of a sentence) in place of the object name and \code{\ref}. 
The \texttt{cleveref} package enhances \LaTeX's cross-referencing features, allowing the format of cross references to be determined automatically according to the ``type" of cross reference (equation, section, etc.) and the context in which the cross reference is used. So, the package \emph{automatically} inserts the object name as well as the appropriate hyperlink; see \cref{ex:cref}. It may require two \LaTeX\@ compilations for the references to show up correctly. Additional examples are shown in the sections below for equations, tables, figures, sections, etc. \begin{example}[label=ex:cref,bicolor,listing options={style=siamlatex,% {morekeywords=[2]{cref,ref}}}]{Advantage of using cleveref} The normal way to get a cross reference with a hyperlink requires a lot of typing: \hyperref[thm:mvt]{Theorem~\ref*{thm:mvt}}. The \texttt{cleveref} package gets both the name and hyperlink automatically using a single macro: \cref{thm:mvt}. It also handles multiple references with the same macro, such as \cref{thm:mvt,fig:pgfplots,fig:testfig}. \end{example} \subsection{Hyperef} \label{sec:hyperef} Hyperlinks are created with the \code{\href} and \code{\url} commands, as shown in \cref{ex:href}. SIAM has also defined the \code{\email} command, as shown in \cref{ex:header}. \begin{example}[label={ex:href},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{href,url}}}]{Creating hyperlinks} The \href{https://www.siam.org}{SIAM homepage} has general information. There are times when the author may want to specify the location explicitly instead by using \url{https://www.siam.org}. \end{example} Note that homepage links via \code{\url} in the \code{\thanks} environment require special formatting for the tilde (\string~) character. The formatting is used in the template and shown in \cref{ex:shared}. \section{Math and equations} \label{sec:math} Here we show some example equations, with numbering, and examples of referencing the equations. 
SIAM now includes the package \texttt{amsmath} by default, and we include some of its features as well, although the reader should consult the package user manual for further guidance \cite{amsmath,shortmath}. Several of the example are adapted from Mittlebach and Goossen's guide to \LaTeX~\cite{MiGo04}. \Cref{ex:textmath} is a straightforward example of inline mathematics equations that does not use any special packages or features. \begin{example}[label={ex:textmath},bicolor]{Inline math} The following shows an example of math in text: Let $S=[s_{ij}]$ ($1\leq i,j\leq n$) be a $(0,1,-1)$-matrix of order $n$. \end{example} \newpage In \cref{ex:bbm}, we show the recommended method for getting blackboard fonts using the \texttt{amsfonts} package. This is not loaded by default and must be included in the preamble. \begin{example}[label={ex:bbm},bicolor,before upper={\preamble{\boldsymbol usepackage\{amsfonts\}}},% listing options={style=siamlatex,% {morekeywords=[2]{mathbb}}}]{Blackboard math} Blackboard bold characters, such as $\mathbb{C}$ and $\mathbb{R}$, should be created with the \texttt{amsfonts} package, although this is not included by default. \end{example} \Cref{ex:smallmatrix} shows the \code{smallmatrix} environment for an inline matrix from the \texttt{amsmath} package, which is included by default. \begin{example}[label={ex:smallmatrix},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{smallmatrix}}}]{Inline matrix} Matrices of no more than two rows appearing in text can be created as shown in the next example: $B = \bigl[ \begin{smallmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{smallmatrix} \bigr]$. \end{example} Bigger matrices can be rendered with environments from the \texttt{amsmath} package, such as \code{bmatrix} and \code{pmatrix} used in \cref{ex:matrices}. 
\begin{example}[label={ex:matrices},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{bmatrix,pmatrix}}}]{Creating matrices} Display matrices can be rendered using environments from \texttt{amsmath}: \begin{equation}\label{eq:matrices} S=\begin{bmatrix}1&0\\0&0\end{bmatrix} \quad\text{and}\quad C=\begin{pmatrix}1&1&0\\1&1&0\\0&0&0\end{pmatrix}. \end{equation} \Cref{eq:matrices} shows some example matrices. \end{example} \newpage \Cref{ex:dmo} shows how to use the \code{\DeclareMathOperator} command from the \texttt{amsopn} package to declare the \code{\Range} macro. (This example also uses the \texttt{braket} package for the \code{\set} macro, but this is not necessarily recommended by SIAM.) \begin{example}[label={ex:dmo},% before upper={\preamble{\boldsymbol usepackage\{braket,amsfonts,amsopn\}}\\ \noindent\preamble{\boldsymbol DeclareMathOperator\{\boldsymbol Range\}\{Range\}}},% bicolor,% listing options={style=siamlatex,% {moretexcs=[2]{Range}}} ]{Declaring math operators} An example of a math operator: \begin{equation}\label{eq:range} \Range(A) = \set{ y \in \mathbb{R}^n | y = Ax }. \end{equation} \end{example} \Cref{ex:foo} shows how to use the \code{align} environment from \texttt{amsmath} to easily align multiple equations. \begin{example}[label={ex:foo},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{align}}}]{Aligned equations} \Cref{eq:a,eq:b,eq:c} show three aligned equations. \begin{align} f &= g, \label{eq:a} \\ f' &= g', \quad\text{and} \label{eq:b} \\ \mathcal{L}f &= \mathcal{L}g \label{eq:c}. \end{align} \end{example} Another way to number a set of equations is the \code{subequations} environment from \texttt{amsmath}, as shown in \cref{ex:aligned}. 
\begin{example}[label={ex:aligned},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{subequations}}}]{Subequations} We calculate the Fr\'{e}chet derivative of $F$ as follows: \begin{subequations} \begin{align} F'(U,V)(H,K) &= \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T} - P(H\Sigma V^{T} + U\Sigma K^{T})\rangle \label{eq:aa} \\ &= \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T}\rangle \nonumber \\ &= \langle R(U,V)V\Sigma^{T},H\rangle + \langle \Sigma^{T}U^{T}R(U,V),K^{T}\rangle. \label{eq:bb} \end{align} \end{subequations} \Cref{eq:aa} is the first line, and \cref{eq:bb} is the last line. \end{example} For an equation split over multiple lines, \cref{ex:ml} shows the usage of the \code{multline} environment provided by \texttt{amsmath}. \begin{example}[label={ex:ml},bicolor,% listing options={style=siamlatex,% {morekeywords=[2]{multline}}}]{Equation split across lines} We claim that the projection $g(U,V)$ is given by the pair of matrices: \begin{multline} \label{eq:ml} g(U,V) = \biggl( \frac{R(U,V)V\Sigma^{T}U^{T} - U\Sigma V^{T}R(U,V)^{T}}{2}U,\\ \frac{R(U,V)^{T}U\Sigma V^{T}-V \Sigma^{T}U^{T}R(U,V)}{2}V \biggr). \end{multline} \end{example} \section{Theorem-like environments} \label{sec:thm} SIAM loads \texttt{ntheorem} package and uses it to define the following theorem-like environments: \code{theorem}, \code{lemma}, \code{corollary}, \code{definition}, and \code{proposition}. SIAM also defines a \code{proof} environment that automatically inserts the symbol ``$\,\proofbox\,$'' at the end of any proof, even if it ends in an equation environment. \emph{Note that the document may need to be compiled twice for the mark to appear.} Some of the calculus examples were adapted from \cite{CalcI}. \Cref{ex:theorem} shows usage of the \code{theorem} environment. An optional argument can be used to name the theorem. \Cref{ex:cor} illustrates a corollary, without a name, and the proof environment. 
\begin{example}[label=ex:theorem,bicolor,parbox=false,% listing options={style=siamlatex,% {morekeywords=[2]{theorem}}}]{Theorem} \begin{theorem}[Mean Value Theorem]\label{thm:mvt} Suppose $f$ is a function that is continuous on the closed interval $[a,b]$. and differentiable on the open interval $(a,b)$. Then there exists a number $c$ such that $a < c < b$ and \begin{displaymath} f'(c) = \frac{f(b)-f(a)}{b-a}. \end{displaymath} In other words, $f(b)-f(a) = f'(c)(b-a)$. \end{theorem} \end{example} \begin{example}[label=ex:cor,bicolor,parbox=false,% listing options={style=siamlatex,% {morekeywords=[2]{corollary,proof}}}]% {Corollary and proof} \begin{corollary} Let $f(x)$ be continuous and differentiable everywhere. If $f(x)$ has at least two roots, then $f'(x)$ must have at least one root. \end{corollary} \begin{proof} Let $a$ and $b$ be two distinct roots of $f$. By \cref{thm:mvt}, there exists a number $c$ such that \begin{displaymath} f'(c) = \frac{f(b)-f(a)}{b-a} = \frac{0-0}{b-a} = 0. \end{displaymath} \end{proof} \end{example} \newpage SIAM also defines commands to create your own theorem- and remark-like environments: \begin{itemize} \item \code{newsiamthm} --- Small caps header, italized body. \item \code{newsiamremark} --- Italics header, roman body. \end{itemize} Each command takes two arguments. The first is the environment name, and the second is the name to show in the document. These commands should be used instead of \code{\newtheorem}. \Cref{ex:claim,ex:ref} shows how to use the commands above, including how to specify the plural version for \texttt{cleveref} if it is unusual. 
\begin{example}[label=ex:claim,bicolor,% before upper={\preamble{\boldsymbol newsiamthm\{claim\}\{Claim\}}\\ \noindent\preamble{\boldsymbol newsiamremark\{hypothesis\}\{Hypothesis\}}\\ \noindent\preamble{\boldsymbol crefname\{hypothesis\}\{Hypothesis\}\{Hypotheses\}}},% parbox=false,% listing options={style=siamlatex,% {morekeywords=[2]{claim,proof,hypothesis}}}]{New theorem-like environment} \begin{claim}\label{cl:constant} If $f'(x) = 0$ for all $x \in (a,b)$ then $f(x)$ is constant on $(a,b)$. \end{claim} \begin{hypothesis}\label{hyp1} The function $f$ is continuously differentiable. \end{hypothesis} \begin{hypothesis}\label{hyp2} The random variable is normally distributed. \end{hypothesis} \end{example} \begin{example}[label=ex:ref,bicolor,listing options={style=siamlatex,% {morekeywords=[2]{cref}}}]{References} We can reference multiple types of objects with a single reference: \cref{cl:constant,thm:mvt,hyp1,hyp2}. \end{example} \section{Tables} \label{sec:tab} Table captions should go above the tables. \Cref{ex:simpletable} shows the code to generate \cref{tab:simpletable}. A more complicated example is shown in \cref{ex:table}, which generates \cref{tab:KoMa14}. This example uses subfloats via the \texttt{subfig} package, as well as special column options from the \texttt{array} package. \begin{tcbverbatimwrite}{tmp_\jobname_simpletable.tex} \begin{table}[tbhp] \caption{Example table} \label{tab:simpletable} \centering \begin{tabular}{|c|c|c|} \hline Species & \bf Mean & \bf Std.~Dev. 
\\ \hline 1 & 3.4 & 1.2 \\ 2 & 5.4 & 0.6 \\ \hline \end{tabular} \end{table} \end{tcbverbatimwrite} \examplefile[label={ex:simpletable},% listing only, listing options={style=siamlatex}]% {Example table.}{tmp_\jobname_simpletable.tex} \input{tmp_\jobname_simpletable.tex} \begin{tcbverbatimwrite}{tmp_\jobname_table.tex} \newcolumntype{R}{>{$}r<{$}} % \newcolumntype{V}[1]{>{[\;}*{#1}{R@{\;\;}}R<{\;]}} % \begin{table}[tbhp] \captionsetup{position=top} \caption{Example table adapted from Kolda and Mayo \rm{\cite{KoMa14}}.} \label{tab:KoMa14} \centering \subfloat[$\beta=1$]{ \begin{tabular}{|r|R|V{3}|c|r@{\,$\pm$\,}l|} \hline occ. & \multicolumn{1}{c|}{$\lambda$} & \multicolumn{4}{c|}{$\mathbf{x}$} & fevals & \multicolumn{2}{c|}{time (sec.)}\\ \hline 718 & 11.3476 & 0.5544 & 0.3155 & 1.2018 & 0.0977 & 45 & 0.17 & 0.06 \\ \hline 134 & 3.7394 & 0.2642 & -1.1056 & 0.2657 & -0.3160 & 31 & 0.12 & 0.05 \\ \hline 4 & \multicolumn{6}{c|}{\emph{--- Failed to converge ---}} & 0.21 & 0.10 \\ \hline \end{tabular}} \subfloat[$\beta=-1$]{ \begin{tabular}{|r|R|V{3}|c|r@{\,$\pm$\,}l|} \hline occ. & \multicolumn{1}{c|}{$\lambda$} & \multicolumn{4}{c|}{$\mathbf{x}$} & fevals & \multicolumn{2}{c|}{time (sec.)}\\ \hline 72 & -1.1507 & 0.2291 & 0.6444 & 0.3540 & -0.8990 & 34 & 0.14 & 0.06 \\ \hline 624 & -6.3985 & 0.1003 & 0.1840 & 0.5305 & 1.2438 & 48 & 0.19 & 0.08 \\ \hline 2 & \multicolumn{6}{c|}{\emph{--- Failed to converge ---}} & 0.23 & 0.02 \\ \hline \end{tabular}} \end{table} \end{tcbverbatimwrite} \examplefile[label={ex:table},% before upper={\preamble[\scriptsize]{\boldsymbol usepackage\{array\}}\\[-0.4em] \noindent\preamble[\scriptsize]{\boldsymbol usepackage[caption=false]\{subfig\}}},% listing only, listing options={% style=siamlatex,basicstyle=\ttfamily\scriptsize}]% {Example table with subtables.}{tmp_\jobname_table.tex} \input{tmp_\jobname_table.tex} \newpage \section{Figures} \label{sec:fig} It is recommended that all figures be generated in high resolution. 
In the past, SIAM has required encapsulated postscript (EPS) format for final production. This is still an acceptable format, but SIAM also now allows high-resolution PDF, JPEG, and PNG figures. If working with EPS images and using \texttt{pdflatex}, we recommend the package \texttt{epstopdf} to automatically convert EPS images to PDF for inclusion in PDF documents created by \texttt{pdflatex}. \Cref{ex:fig} shows the code to generate \cref{fig:testfig}. This example uses the \texttt{graphicx} package for the \code{\includegraphics} command. \begin{tcbverbatimwrite}{tmp_\jobname_fig.tex} \begin{figure}[tbhp] \centering \subfloat[$\epsilon_{\max}=5$]{\label{fig:a}\includegraphics{lexample_fig1}} \subfloat[$\epsilon_{\max}=0.5$]{\label{fig:b}\includegraphics{lexample_fig2}} \caption{Example figure using external image files.} \label{fig:testfig} \end{figure} \end{tcbverbatimwrite} \examplefile[label={ex:fig},% before upper={\preamble[\scriptsize]{\boldsymbol usepackage\{graphicx,epstopdf\}}\\[-0.4em] \noindent\preamble[\scriptsize]{\boldsymbol usepackage[caption=false]\{subfig\}}},% listing only, listing options={% style=siamlatex,basicstyle=\ttfamily\scriptsize}]% {Example figure with subfigures and external files}{tmp_\jobname_fig.tex} \input{tmp_\jobname_fig.tex} Another option for figures is a graphics generator that is platform- and format-independent. PGF is a TeX macro package for generating such graphics and works together with the most important TeX backend drivers, including pdftex and dvips. Its user-friendly syntax layer is called TikZ. Here we show an example using \texttt{PGFPLOTS}, which is useful for drawing high-quality plots directly in \LaTeX. \Cref{ex:data} and \cref{ex:pgfplots} show the data and code, respectively, to generate \cref{fig:pgfplots}, adapted from \cite{pgfplots}. 
\examplefile[label={ex:data},listing only, listing options={style=siamlatex,basicstyle=\ttfamily\scriptsize}]% {Example data file (data.dat)}{data.dat} \begin{tcbverbatimwrite}{tmp_\jobname_tikz.tex} \begin{figure}[tbhp] \centering \begin{tikzpicture} \begin{loglogaxis}[height=2.75in, grid=major, xlabel={Degrees of Freedom}, ylabel={$L_2$ Error}, legend entries={$d=2$,$d=3$}] \addplot table [x=d2_dof,y=d2_l2_err] {data.dat}; \addplot table [x=d3_dof,y=d3_l2_err] {data.dat}; \end{loglogaxis} \end{tikzpicture} \caption{Example \texttt{PGFPLOTS} figure.} \label{fig:pgfplots} \end{figure} \end{tcbverbatimwrite} \examplefile[label={ex:pgfplots},% before upper={\preamble[\scriptsize]{\boldsymbol usepackage\{pgfplots\}}},% listing only, listing options={% style=siamlatex}]% {Example TikZ/PGF for platform-independent graphics.}{tmp_\jobname_tikz.tex} \input{tmp_\jobname_tikz.tex} \newpage \section{Algorithms} \label{sec:algs} SIAM automatically includes the \texttt{algorithm} package in the class definition. This provides the float environment. Users have the choice of \texttt{algpseudocode}, \texttt{algorithmic}, and other packages for actually formatting the algorithm. For example, \cref{alg:buildtree} is produced by the code in \cref{ex:alg}. In order to reference lines within the algorithm, we need to tell the \texttt{cleveref} package how to do the referencing, which is the second line of \cref{ex:alg}. Then we can use the code \code{\cref{line3}} to produce \cref{line3}. 
\begin{tcbverbatimwrite}{tmp_\jobname_alg.tex} \begin{algorithm} \caption{Build tree} \label{alg:buildtree} \begin{algorithmic}[1] \STATE{Define $P:=T:=\{ \{1\},\ldots,\{d\}\}$} \WHILE{$\#P > 1$} \STATE\label{line3}{Choose $C^\prime\in\mathcal{C}_p(P)$ with $C^\prime := \operatorname{argmin}_{C\in\mathcal{C}_p(P)} \varrho(C)$} \STATE{Find an optimal partition tree $T_{C^\prime}$ } \STATE{Update $P := (P{\setminus} C^\prime) \cup \{ \bigcup_{t\in C^\prime} t \}$} \STATE{Update $T := T \cup \{ \bigcup_{t\in\tau} t : \tau\in T_{C^\prime}{\setminus} \mathcal{L}(T_{C^\prime})\}$} \ENDWHILE \RETURN $T$ \end{algorithmic} \end{algorithm} \end{tcbverbatimwrite} \examplefile[float=htpb,label={ex:alg},% before upper={\preamble[\scriptsize]{\boldsymbol usepackage\{algorithmic\}}\\[-0.4em] \preamble[\scriptsize]{\boldsymbol Crefname\{ALC@unique\}\{Line\}\{Lines\}}},% listing only, listing options={% style=siamlatex,basicstyle=\ttfamily\scriptsize}]% {Example algorithm}{tmp_\jobname_alg.tex} \input{tmp_\jobname_alg.tex} \section{Sections} \label{sec:sec} Sections are denoted using standard \LaTeX\ section commands, i.e., \code{\section}, \code{\subsection}, etc. If you wish to end the section title with something other than a period (the default), you have to add the command \code{\nopunct} at the end of the title. Appendices are created with the normal sectioning commands, following the command \code{\appendix}. \section{A detailed example} Here we include some equations and theorem-like environments to show how these are labeled in a supplement and can be referenced from the main text. Consider the following equation: \begin{equation} \label{eq:suppa} a^2 + b^2 = c^2. \end{equation} You can also reference equations such as \cref{eq:matrices,eq:bb} from the main article in this supplement. \lipsum[100-101] \begin{theorem} An example theorem. \end{theorem} \lipsum[102] \begin{lemma} An example lemma. \end{lemma} \lipsum[103-105] Here is an example citation: \cite{KoMa14}. 
\section[Proof of Thm]{Proof of \cref{thm:bigthm}} \label{sec:proof} \lipsum[106-114] \section{Additional experimental results} \Cref{tab:foo} shows additional supporting evidence. \begin{table}[htbp] \caption{Example table} \label{tab:foo} \centering \begin{tabular}{|c|c|c|} \hline Species & \bf Mean & \bf Std.~Dev. \\ \hline 1 & 3.4 & 1.2 \\ 2 & 5.4 & 0.6 \\ \hline \end{tabular} \end{table} \bibliographystyle{siamplain}
\newcommand{\tikzcircle}[2][orange,fill=orange]{\tikz[baseline=-0.5ex]\draw[#1,radius=#2] (0,0) circle ;}% \begin{document} \title{Nonlinear magnetoelectric effect in atomic vapor} \author{Sushree S. Sahoo} \email{ssahoo@tifrh.res.in} \affiliation{National Institute of Science Education and Research Bhubaneswar, HBNI, Jatni-752050, India.} \affiliation{TIFR Centre for Interdisciplinary Sciences, Tata Institute of Fundamental Research, Hyderabad-500017, India.} \author{Soumya R. Mishra} \affiliation{National Institute of Science Education and Research Bhubaneswar, HBNI, Jatni-752050, India.} \author{G. Rajalakshmi} \affiliation{TIFR Centre for Interdisciplinary Sciences, Tata Institute of Fundamental Research, Hyderabad-500017, India.} \author{Ashok K. Mohapatra} \email{a.mohapatra@niser.ac.in} \affiliation{National Institute of Science Education and Research Bhubaneswar, HBNI, Jatni-752050, India.} \begin{abstract} {\bf The magnetoelectric (ME) effect refers to the coupling between electric and magnetic fields in a medium, resulting in electric polarization induced by magnetic fields and magnetization induced by electric fields~\cite{Dell70,Fieb05}. The linear ME effect in certain magnetoelectric materials such as multiferroics has been of great interest due to its application in the fabrication of spintronic devices, memories, and magnetic sensors~\cite{Toku07,Klee13,Scot07,Bibe08,Spal19}. Studies of the nonlinear ME effect, however, have mostly centered on second-harmonic generation in chiral materials~\cite{Maki95, Bote05, Fie05}. Here, we report the demonstration of nonlinear wave mixing of optical electric fields and radio-frequency (rf) magnetic fields in thermal atomic vapor, which is a consequence of the higher-order nonlinear ME effect in the medium. The experimental results are explained by comparison with density matrix calculations of the system. 
We also experimentally verify the expected dependence of the generated field amplitudes on the rf field magnitude as evidence of the magnetoelectric effect. This study can open up the possibility of precision rf magnetometry due to its advantages in terms of a larger dynamic range and arbitrary frequency resolution.} \end{abstract} \maketitle The electrical polarization due to the magnetoelectric (ME) effect induced in a medium in response to the applied electric field $E$ and magnetic field $B$ is defined by the general expression, $P_i({ E},{ B})=\chi^{ee}_{ij}E_j+\chi^{em}_{ij} B_{j}+\chi^{emm}_{ijk} B_j B_k+\chi^{eem}_{ijk}E_jB_k+\chi^{eemm}_{ijkl}E_jB_kB_l+...$, where the indices $ijk$ refer to the polarisation components of the fields, whereas the superscripts $e$ and $m$ denote the electric and magnetic fields respectively. $\chi^{ee}_{ij}$ signifies the linear electric susceptibility, $\chi^{em}_{ij}$ describes the linear ME effect, while the leading higher-order ME contributions are described by the tensors $\chi^{emm}_{ijk}$, $\chi^{eem}_{ijk}$ and $\chi^{eemm}_{ijkl}$. In this study, we explore the nonlinear polarization terms given by $P^{(2)}_i=\chi^{eem}_{ijk}E_jB_k$ and $P^{(3)}_i=\chi^{eemm}_{ijkl}E_jB_kB_l$. The polarisation $P^{(2)}_i$ results from the mixing of three fields, i.e., two input fields (one electric and one magnetic field) and one generated electric field, whereas $P^{(3)}_i$ results from the mixing of four fields, i.e., three input fields (one electric and two magnetic fields) and one generated electric field. The mixing between microwave and optical fields in atomic systems is an example of such mixing processes~\cite{Zibr02,Adwa19}. In this work, we demonstrate the nonlinear ME effects achieved through the parametric interaction of optical and rf fields via multi-wave mixing processes, resulting in the efficient generation of optical fields. 
Studies of the interaction between optical and rf fields have so far been based on the spin polarization induced in a system by an rf field coupling the Zeeman sublevels. This leads to a polarisation rotation of input linearly polarised light traversing the medium~\cite{Savu05,Lee06,Ledb07,Zigd10,Chal12,Kede14,Cohe19}. We couple one of the ground states of the atomic system to an excited state using an optical field, while the rf field couples the neighboring Zeeman sublevels of the ground state such that it induces ground-state coherence in the system, facilitating the mixing process. As a result, the system produces light at the optical frequencies dictated by energy conservation. We also study the characteristic features of the generated fields such as polarization, resonance width, and the variation of the generation amplitudes with input optical power. \begin{figure}[t] \centering \includegraphics[width=100mm,scale=1]{Fig1.pdf} \caption{Depiction of the (a) three-wave and (b) four-wave mixing processes in the schematic energy diagram for the $D_2$ line, $^{87}$Rb $F = 1\rightarrow F=0$ transition. Here, the input fields are the pump ($\omega_p$) and the rf field ($\omega_{rf}$), leading to optical generation at frequencies $\omega_p-\omega_{rf}$ via three-wave mixing and $\omega_p-2\omega_{rf}$ via four-wave mixing processes. $\Delta$ ($\Delta_{rf}$) is the detuning of the input optical (rf) field from the corresponding atomic transition. (c) Schematic of the experimental setup for the observation of the mixing process. PBS: Polarising beam splitter, M: Mirror, AOM: Acousto-Optic Modulator, WP: Wave plate, P: Polariser, PD: Photo-detector, SA: Spectrum analyzer.} \label{b1} \end{figure} The schematic of the atomic energy levels coupled by the input optical field and rf magnetic field is shown in Fig.~\ref{b1}(a) and (b). 
The pump field ($\omega_p$) of $\sigma^{+}$ polarisation, coupling the ground state with $m_F=-1$ to the excited state with $m^{'}_F=0$, drives the population from $m_F=-1$ to the $m_F=0$ and $m_F=1$ ground states via optical pumping~\cite{Happ72}. There are two possible parametric cycles in the system. An atom present in the $m_F=0$ ground state emits one $\sigma^+$ rf photon to come to the $m_F=-1$ state, then absorbs the $\sigma^+$ pump photon to be excited to the $m^{'}_F=0$ state and finally emits a $\pi$ optical photon to come back to the $m_F=0$ state. This parametric process is a three-wave mixing process, which can be described by $P_{\pi}^{(2)}(=\chi_{\pi\sigma^{+}\sigma^{+}}^{eem}E_{\sigma^{+}}B^*_{\sigma^{+}})$ as discussed before. Similarly, in the four-wave mixing process, the atom starting in the $m_F=1$ ground state emits two $\sigma^+$ rf photons to come to the $m_F=-1$ state, absorbs one $\sigma^+$ pump photon to be excited to the $m^{'}_F=0$ state and then comes back to the $m_F=1$ state by emitting a $\sigma^-$ optical photon. This four-wave mixing process is described by $P_{\sigma^{-}}^{(3)}(=\chi_{\sigma^{-}\sigma^{+}\sigma^{+}\sigma^{+}}^{eemm}E_{\sigma^{+}}B_{\sigma^{+}}^{*2})$. Energy conservation leads to optical field generation at the frequencies $\omega_{g1}(=\omega_p-\omega_{rf})$ and $\omega_{g2}(=\omega_p-2\omega_{rf})$ via the three-wave and four-wave mixing processes, respectively. The polarization states of the generated fields are determined by angular momentum conservation in both processes, i.e., for an input $\sigma^{+}$ polarised pump beam, the three-wave (four-wave) mixing process leads to a generated beam with $\pi$ ($\sigma^{-}$) polarisation. Furthermore, as the wave vector of the rf field is negligible compared to that of the optical field, the phase-matching conditions ensure that the direction of the generated beams is the same as that of the input pump beam. The schematic of the experimental set-up is depicted in Fig.~\ref{b1}(c). 
The pump field propagates through a magnetically shielded rubidium vapor cell and the generated fields are analyzed by heterodyne detection, combining them with a local oscillator (LO) (Refer: Methods). We use three pairs of Helmholtz coils to apply the magnetic field along the x-, y- and z-directions. In the first experiment, we apply a static magnetic field along the z-direction and an rf magnetic field along the y-direction. The experimental data for input circularly polarised light are presented in Fig.~\ref{b2} (a) and (b). As expected, when we use the pump with $\sigma^+$ ($\sigma^-$) circular polarization, the optical fields due to the mixing process are generated with lower (higher) optical frequencies than the pump field, and hence the interference peak with the LO appears at the left (right) side of the main peak. It is interesting to note that this frequency up/down conversion process is a direct method to determine the handedness of circular polarization of light interacting with the medium. \begin{figure}[t] \begin{center} \includegraphics[width=145mm,scale=1]{Fig2.pdf} \caption{Experimental data of the generated beat amplitudes when the input pump beam is a) $\sigma^+$ polarized, b) $\sigma^-$ polarized, c) linearly polarized with equal components of $\sigma^+$ and $\sigma^-$ polarization ($\frac{1}{\sqrt{2}}(|\sigma^+\rangle+|\sigma^-\rangle)$) and d) linearly polarized with equal components of all $\sigma^+$, $\sigma^-$ and $\pi$ polarizations. The larger peak at 40 MHz refers to the beat note corresponding to the interference of the LO and the input pump light. 
The other peaks, corresponding to the generated light fields, are indicated by the inset showing the respective wave mixing processes in the energy-level diagrams.} \label{b2} \end{center} \end{figure} \begin{figure*}[t] \begin{center} \includegraphics[width=165mm,scale=1]{Fig3.pdf} \caption{a) Resonance curves corresponding to the generated beat signals as a function of the input rf frequency ($\omega_{rf}$). Here, blue open circles (black open squares) depict the experimental data, and the magenta solid line (dashed red line) depicts the theoretical fitting corresponding to the generated field $\omega_{g1}$ ($\omega_{g2}$). The input parameters for the model are $\Omega_p=0.5$ MHz, $\Omega_{rf}=80$ kHz, $\Delta_p=150$ MHz, $\Gamma=6$ MHz and $\Delta_{0}=425$ kHz, whereas the fitting parameter is $\gamma=65$ kHz. b) Variation of the generated beat amplitudes at resonance as a function of the input pump Rabi frequency ($\Omega_p$). The blue open circles (black open squares) depict the experimental data and the magenta solid line (dashed red line) shows the theoretical fitting corresponding to $\omega_{g1}$ ($\omega_{g2}$). Here, the fitting function is of the form $a\Omega_p+b\Omega_p^3+c\Omega_p^5$ with the fitting parameters given as $a=31.72\pm1.88$ $(52.34\pm 3.54)$, $b=17.53\pm0.56$ $(20.86\pm1.05)$ and $c=-1.02\pm0.04$ $(-1.35\pm 0.07)$ for the $\omega_{g1}$ ($\omega_{g2}$) signal. c) Variation of the beat amplitudes of the generated fields with the amplitude of the rf magnetic field ($B_{rf}$), showing linear (black open squares) and quadratic (blue open circles) behavior for the $\omega_{g1}$ and $\omega_{g2}$ signals respectively. The experimental data are fitted with the corresponding functional forms as shown by the red dashed line (linear) and orange solid line (quadratic).} \label{b4} \end{center} \end{figure*} In the next experiment, the static and rf magnetic fields are applied along the y- and z-directions respectively. 
The linear polarisation of the input pump field along the x-direction is a linear combination of $\sigma^+$ and $\sigma^-$ polarisations, i.e., of the form $\frac{1}{\sqrt{2}}(\left|\sigma^+\right\rangle+\left|\sigma^-\right\rangle)$. Both generated light fields in this case are $\pi$-polarised with optical frequencies $\omega_p+\omega_{rf}$ and $\omega_p-\omega_{rf}$. The respective experimental data for the beat amplitudes are presented in Fig.~\ref{b2}(c). On the other hand, when the input pump is $\pi$-polarised, the generated beams are observed to be $\sigma^+$ ($\omega_p+\omega_{rf}$) and $\sigma^-$ ($\omega_p-\omega_{rf}$) polarized. Furthermore, when the input linear polarization is such that it has equal components of $\sigma^+$, $\sigma^-$, and $\pi$ polarizations, all the ground states become equally populated, leading to no generation due to mixing. In this case, the ground-state coherence, which is responsible for the efficient mixing process, is no longer induced in the system, as it requires a non-zero population difference between the Zeeman sublevels. This corresponds to the vanishing beat amplitudes for the generated fields as depicted in Fig.~\ref{b2} (d). The mixing process, and hence the light generation, is most efficient when the frequency of the rf field ($\omega_{rf}$) matches the Zeeman splitting ($\Delta_0$). In the experiment, the rf field frequency ($\omega_{rf}$) is scanned around the Zeeman splitting to observe the resonance curves for both $\omega_{g1}$ and $\omega_{g2}$ for the case of a circularly polarised input pump beam. Fig.~\ref{b4}(a) shows the experimental data for the resonance curves peaked at $425$ kHz. We use the expressions for $\chi_{(g1)}^{eem}$ and $\chi_{(g2)}^{eemm}$ from the theoretical model (refer: Methods) to fit the experimental data. 
From the fitting, we find the value of $\gamma$ to be $65$ kHz; this dephasing rate is mostly dominated by the transit time of the atoms through the laser beams and the magnetic inhomogeneity present in the medium. We also study the beat amplitude of the generated beams by varying the input pump Rabi frequency ($\Omega_p$), and the experimental data for both $\omega_{g1}$ and $\omega_{g2}$ are presented in Fig.~\ref{b4}(b). To model this experimental observation, we consider the propagation equation for the generated field with Rabi frequency $\Omega_{gi} (i=1,2)$, which can be written as $\frac{d\Omega_{gi}}{dz}= -\alpha_{gi}\Omega_{gi}+\kappa_{gi}$, where $\alpha_{gi}=\frac{k_{gi}}{2}\text{Im}(\chi_{eff}^{(1)})$ corresponds to the gain/absorption in the medium, whereas $\kappa_{g1}=i\frac{\hbar}{\mu}k_{g1}\chi^{eem}_{(g1)}$ and $\kappa_{g2}=i\frac{\hbar}{\mu}k_{g2}\chi^{eemm}_{(g2)}$ (Refer: Methods); here $k_{g1}$ and $k_{g2}$ are the magnitudes of the wave vectors corresponding to the generated fields $\Omega_{g1}$ and $\Omega_{g2}$, respectively. This equation can be solved using the initial condition $\Omega_{gi}=0$ at $z=0$ to get $\Omega_{gi}=\frac{\kappa_{gi}}{\alpha_{gi}}(1-e^{-\alpha_{gi} l})$, where $l$ is the length of the vapor cell. Using the linear dependence of $\kappa_{gi}$ on $\Omega_p$ and expanding $\alpha_{gi}=\alpha_0+\alpha_1\Omega_p^2+\alpha_2\Omega_p^4$, with $\alpha_0$, $\alpha_1$ and $\alpha_2$ being the absorption/gain coefficients corresponding to the linear and nonlinear processes, the expression for $\Omega_{gi}$ can be simplified under the assumption $\alpha_{gi} l \ll 1$ to a polynomial of odd orders of $\Omega_p$ of the form $\Omega_{gi}= a\Omega_p+b\Omega_p^3+c\Omega_p^5$. We use this functional form to fit the experimental data in Fig.~\ref{b4}(b), which shows good agreement between the model and the experiment. 
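For completeness, the step from the solved propagation equation to this odd polynomial can be sketched explicitly. Writing $\kappa_{gi}=\kappa_0\Omega_p$ with a constant $\kappa_0$ (a notational assumption consistent with the linear dependence of $\kappa_{gi}$ on $\Omega_p$) and expanding the exponential to second order in $\alpha_{gi}l$, one obtains
\begin{equation*}
\Omega_{gi}=\frac{\kappa_{gi}}{\alpha_{gi}}\left(1-e^{-\alpha_{gi} l}\right)\approx \kappa_{gi}\, l\left(1-\frac{\alpha_{gi} l}{2}\right)=\kappa_0 l\left(1-\frac{\alpha_0 l}{2}\right)\Omega_p-\frac{\kappa_0\alpha_1 l^2}{2}\,\Omega_p^3-\frac{\kappa_0\alpha_2 l^2}{2}\,\Omega_p^5,
\end{equation*}
which identifies the coefficients $a$, $b$ and $c$ up to terms of higher order in $\alpha_{gi}l$. 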
The wave-mixing process is found to be efficient even when the amplitude of the input rf field is very small. In this limit, the expressions for the susceptibilities of the generated fields can be simplified using the approximation $\Omega_{rf} \ll \gamma$ as (refer: Methods), $\chi^{eem}_{(g1)}= \frac{N\mu_{42}^2}{\epsilon_0\hbar\Omega_{g1}}\frac{\Omega_p \Omega_{rf}}{2\Delta_p (\Delta_{rf}+i\gamma)}$ and $\chi^{eemm}_{(g2)}= -\frac{N\mu_{43}^2}{\epsilon_0\hbar\Omega_{g2}}\frac{\Omega_p \Omega_{rf}^2}{2\Delta_p (\Delta_{rf}+i\gamma)(2\Delta_{rf}+i\gamma)}$. These analytical expressions clearly depict the linear dependence of $\chi^{eem}_{(g1)}$ as well as the quadratic dependence of $\chi^{eemm}_{(g2)}$ on $\Omega_{rf}$. To verify this dependence, we experimentally measured the generated beat amplitudes as a function of the input rf field amplitude ($B_{rf}$). The corresponding experimental data with linear and quadratic fittings are presented in Fig.~\ref{b4} (c). This experimental observation is a further confirmation of the nonlinear mixing between the optical field and the rf magnetic field occurring in the system. We have also observed that at low rf frequency, i.e., below 100 kHz, another nonlinear process known as forward four-wave mixing~\cite{Saho17} becomes dominant for an input linearly polarized pump beam. In this case, the generated fields due to the rf and optical mixing act as a seed for the all-optical forward four-wave mixing process and hence result in large amplification of the signal. These results can have profound applications in the field of rf magnetometry. Previous works on rf magnetometry are based on atomic magnetometers or SQUIDs. RF SQUIDs are less sensitive than their DC counterparts and give sensitivities of the order of 30 fT/Hz$^{1/2}$ at 77 K \cite{Brag95}. 
Atomic magnetometers measure the polarisation rotation of an input linearly polarised beam in the presence of an rf field coupling the Zeeman sublevels of the ground state; so far, the sensitivities reached are 0.3 fT/Hz$^{1/2}$ at 0.5 MHz in thermal vapor~\cite{Kede14} and 330 pT/Hz$^{1/2}$ in a cold atomic ensemble~\cite{Cohe19}. In our case, the rf magnetic field sensitivity is anticipated to be of a similar order, as we exploit the ground-state coherence induced by the rf field, and hence its effect should be comparable to that due to the ground-state population distribution induced by the rf field~\cite{Chal12, Kede14}. Moreover, the parametric nature of the basic process in our system promises sensitivity with a larger dynamic range. Another interesting feature of the proposed rf magnetometer using our system is its arbitrary resolution in frequency, which is limited only by the resolution of the measurement device employed. Therefore, the system has the potential to surpass the state-of-the-art frequency resolution of millihertz in a magnetometer~\cite{Mizu18}. The significance of this work is twofold: first, it opens up the possibility of an area of nonlinear magnetoelectric phenomena involving the nonlinear mixing between electric and magnetic fields using an atomic system; secondly, it can be used for precision rf magnetometry by utilizing the simultaneous effect of forward four-wave mixing present in the system. 
If the expected sensitivity in both magnitude and frequency resolution is reached, the system can be an ideal candidate for numerous applications such as sensing biological magnetic fields~\cite{Jens18}, detection of signals in magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR)~\cite{Xu06}, investigation of geomagnetic fields~\cite{Dang10}, measurement of magnetic fields in space as well as the search for dark matter~\cite{Chri16, Alex18, Chu19}, and in general for sensitive magnetometry in challenging environments~\cite{Kai20}.
\section{Introduction} We consider the Dirichlet problem for the generalized H\'{e}non equation \begin{equation}\label{1.4} \left \{ \begin{aligned} -\Delta u &=|x|^\alpha |u|^{p-2}u&&\qquad \text{in $\B$,}\\ u&=0&&\qquad \text{on $\partial \B$,} \end{aligned} \right. \end{equation} where $\B \subset \R^N$, $N\geq 3$, is the unit ball and $p>2$, $\alpha>0$. This equation originally arose through the study of stellar clusters in \cite{Henon}. One of the first results on (\ref{1.4}) is due to Ni \cite{Ni}, who proved the existence of a positive radial solution in the subcritical range of exponents $2<p<2_\alpha^*$, where $2_\alpha^*:= \frac{2N+2\alpha}{N-2}$. In another seminal paper, Smets, Willem and Su \cite{Smets-Willem-Su} observed that symmetry breaking occurs for fixed $p$ and large $\alpha$, i.e., there exists $\alpha^*>0$ depending on $p$ such that ground state solutions of \eqref{1.4} are nonradial for $\alpha>\alpha^*$. In the sequel, the existence and shape of radial and nonradial solutions of the H\'enon equation have received extensive attention, see e.g. \cite{Smets-Willem, Cao-Peng, Serra, Pistoia-Serra, B-W,B-W2, Amadori-Gladiali,Amadori-Gladiali2,Amadori-Gladiali3,lou-weth-zhang:2018}. In particular, bifurcation of nonradial positive solutions in the parameter $p$ is studied in \cite{Amadori-Gladiali} for fixed $\alpha>0$. Moreover, a related critical parameter-dependent equation on $\R^N$ is considered in \cite{Gladiali-Grossi-Neves}. The main motivation for the present paper is the investigation of bifurcation of nonradial nodal (i.e., sign changing) solutions -- in the parameter $\alpha>0$ -- from the set of radial nodal solutions. To explain this in more detail, let us fix $K \in \N$, an exponent $p>2$ and consider $$ \alpha > \alpha_p:= \max \left\{\frac{(N-2)p - 2N}{2},0 \right\}, $$ which amounts to the subcriticality condition $p<2_\alpha^*$. 
Under these assumptions, it has been proved by Nagasaki \cite{nagasaki} that (\ref{1.4}) admits a unique classical radial solution $u_\alpha \in C^2(\overline \B)$ with $u_\alpha(0)>0$ and with precisely $K$ nodal domains (i.e., $K-1$ zeros in the radial variable $r= |x| \in (0,1)$). In order to decide whether the branch $\alpha \mapsto u_\alpha$ admits bifurcation of nonradial solutions for large $\alpha$, we need to analyze its spectral asymptotics as $\alpha \to \infty$. More precisely, we wish to derive asymptotic expansions of the eigenvalues of the linearizations of (\ref{1.4}) at $u_\alpha$ as $\alpha \to \infty$. For this we consider the linearized operators \begin{equation} \label{linearized operator} \phi \mapsto L^\alpha \phi:= -\Delta \phi - (p-1) |x|^\alpha |u_\alpha|^{p-2} \phi, \qquad \alpha> \alpha_p, \end{equation} which are self-adjoint operators in $L^2(\B)$ with compact resolvent, domain $H^2(\B) \cap H^1_0(\B)$ and form domain $H^1_0(\B)$. In particular, they are Fredholm operators of index zero. As usual, $u_\alpha$ is called nondegenerate if $L^\alpha: H^2(\B) \cap H^1_0(\B) \to L^2(\B)$ is an isomorphism, which amounts to the property that the equation $L^\alpha \phi = 0$ only has the trivial solution $\phi=0$ in $H^2(\B) \cap H^1_0(\B)$. Otherwise, $u_\alpha$ is called degenerate. By a classical observation, only values $\alpha$ such that $u_\alpha$ is degenerate can give rise to bifurcation from the branch $\alpha \mapsto u_\alpha$. Moreover, properties of the kernel of $L^\alpha$ and the change of the Morse index are of key importance to establish bifurcation. Here we recall that the Morse index of $u_\alpha$ is defined as the number of negative eigenvalues of the operator $L^\alpha$. The first step in deriving asymptotic spectral information of the operator family $L^\alpha$, $\alpha>\alpha_p$, is to characterize the limit shape of the solutions $u_\alpha$ after suitable transformations. 
Inspired by Byeon and Wang \cite{B-W}, we transform the radial variable and derive a corresponding limit problem. Here, for simplicity, we also regard $u_\alpha = u_\alpha(r)$ as a function of the radial variable $r=|x| \in [0,1]$. Our first preliminary result is the following. \begin{proposition} \label{limit-shape} Let $p>2$, $K \in \N$. Moreover, for $\alpha>\alpha_p$, let $u_\alpha$ denote the unique radial solution of (\ref{1.4}) with $K$ nodal domains and $u_\alpha(0)>0$, and define \begin{equation} \label{eq:U-alpha-definition} U_\alpha: [0,\infty) \to \R, \qquad U_\alpha(t)= (N+\alpha)^{-\frac{2}{p-2}}\: u_\alpha(e^{-\frac{t}{N+\alpha}}). \end{equation} Then $U_\alpha \to (-1)^{K-1} U_\infty$ uniformly on $[0,\infty)$ as $\alpha \to \infty$, where $U_\infty \in C^2([0,\infty))$ is characterized as the unique bounded solution of the limit problem \begin{equation} \label{eq:limit-U} -U'' = e^{-t}|U|^{p-2}U \quad \text{in $[0,\infty)$,}\qquad U(0)=0 \end{equation} with $U'(0)>0$ and with precisely $K-1$ zeros in $(0,\infty)$. \end{proposition} The asymptotic description derived in Proposition~\ref{limit-shape} implies that the solutions $u_\alpha$ blow up everywhere in $\B$ as $\alpha \to \infty$, in contrast to the nonradial ground states considered in \cite{Smets-Willem-Su}. It is therefore reasonable to expect that the Morse index of $u_\alpha$ tends to infinity as $\alpha \to \infty$. This fact has been proved recently and independently for more general classes of problems in \cite{Amadori-Gladiali2,lou-weth-zhang:2018}, extending a result for the case $N=2$ given in \cite{Moreira-dos-Santos-Pacella}. To obtain a more precise description of the distribution of eigenvalues of $L^\alpha$ as $\alpha \to \infty$, we rely on complementary approaches of \cite{Amadori-Gladiali2,lou-weth-zhang:2018} and implement new tools. 
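To indicate where the limit problem (\ref{eq:limit-U}) comes from, we sketch the routine computation behind the transformation (\ref{eq:U-alpha-definition}). Writing the radial form of (\ref{1.4}) as $u_\alpha'' + \frac{N-1}{r}\,u_\alpha' + r^\alpha |u_\alpha|^{p-2}u_\alpha = 0$ and substituting $r = e^{-\frac{t}{N+\alpha}}$, one finds that $U_\alpha$ satisfies
\begin{equation*}
U_\alpha'' - \frac{N-2}{N+\alpha}\, U_\alpha' = - e^{-\frac{(\alpha+2)t}{N+\alpha}}\, |U_\alpha|^{p-2}U_\alpha \qquad \text{in $[0,\infty)$,}
\end{equation*}
so that, at least formally, letting $\alpha \to \infty$ yields the limit equation in (\ref{eq:limit-U}); the boundary condition $U(0)=0$ reflects the Dirichlet condition $u_\alpha(1)=0$. 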
We note here that \cite{lou-weth-zhang:2018} uses the transformation~(\ref{eq:U-alpha-definition}) in a more general context together with Liouville type theorems for limiting problems on the half line. In the present paper, we build on very useful results obtained recently by Amadori and Gladiali in \cite{Amadori-Gladiali2}. In particular, we use the fact that the Morse index of $u_\alpha$ equals the number of negative eigenvalues (counted with multiplicity) of the weighted eigenvalue problem \begin{equation} \label{eq:weighted-L-alpha-problem} L^\alpha \phi =\frac{\lambda}{|x|^2}\phi,\qquad \phi \in H^1_0(\B), \end{equation} see \cite[Prop. 5.1]{Amadori-Gladiali2}. In various special cases, this observation had already been used before, see e.g. \cite[Section 5]{Dancer-Gladiali-Grossi}. In order to avoid regularity issues related to the singularity of the weight $\frac{1}{|x|^2}$, it is convenient to consider (\ref{eq:weighted-L-alpha-problem}) in the weak sense via the quadratic form $q_\alpha$ associated with $L^\alpha$, see Section~\ref{sec:spectr-asympt-refeq} below. The problem (\ref{eq:weighted-L-alpha-problem}) is easier to analyze than the standard eigenvalue problem $L^\alpha \phi =\lambda \phi$ without weight. Indeed, every eigenfunction of (\ref{eq:weighted-L-alpha-problem}) is a sum of functions of the form \begin{equation} \label{eq:eigenfunction-form} x \mapsto \phi(x) = \psi(x)Y_{\ell}\left(\frac{x}{|x|}\right), \end{equation} where $\psi \in H^1_{0,rad}(\B)$ and $Y_\ell$ is a spherical harmonic of degree $\ell$, see \cite[Prop. 4.1]{Amadori-Gladiali2}. Here $H^1_{0,rad}(\B)$ denotes the space of radial functions in $H^1_0(\B)$.
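To indicate why the weighted problem decouples so neatly, consider $\phi$ of the form (\ref{eq:eigenfunction-form}) with $\psi$ radial and smooth away from the origin. Writing $r = |x|$ and using that $Y_\ell$ is an eigenfunction of the Laplace--Beltrami operator on $\mathbb{S}^{N-1}$ with eigenvalue $\lambda_\ell = \ell(\ell+N-2)$, we have
\[
-\Delta \phi = \Bigl(-\psi'' - \frac{N-1}{r}\,\psi' + \frac{\lambda_\ell}{r^2}\,\psi\Bigr) Y_\ell\Bigl(\frac{x}{|x|}\Bigr).
\]
Since the potential in $L^\alpha$ is radial, the equation $L^\alpha \phi = \frac{\lambda}{|x|^2}\phi$ is therefore equivalent to $L^\alpha \psi = \frac{\lambda - \lambda_\ell}{|x|^2}\psi$: the angular term scales like $r^{-2}$, so the weight $\frac{1}{|x|^2}$ turns the angular contribution into a mere shift of the eigenvalue.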
We recall that the space of spherical harmonics of degree $\ell \in \N \cup \{0\}$ has dimension $d_\ell := {N+ \ell -1 \choose N-1} - {N+ \ell -3 \choose N-1}$, and that every such spherical harmonic is an eigenfunction of the Laplace-Beltrami operator on the unit sphere $\mathbb{S}^{N-1}$ corresponding to the eigenvalue $\lambda_\ell:= \ell (\ell + N-2)$. For functions $\phi$ of the form (\ref{eq:eigenfunction-form}), the eigenvalue problem (\ref{eq:weighted-L-alpha-problem}) reduces to an eigenvalue problem for radial functions given by \begin{equation} \label{eq:weighted-eigenvalue-problem-reduced} L^\alpha \psi =\frac{\mu}{|x|^2}\psi,\qquad \psi \in H^1_{0,rad}(\B), \end{equation} where $\mu = \lambda - \lambda_{\ell}$. In \cite[p.19 and Prop. 3.7]{Amadori-Gladiali2}, it has been proved that (\ref{eq:weighted-eigenvalue-problem-reduced}) admits precisely $K$ negative eigenvalues \begin{equation} \label{eigenvalue-curves-first} \mu_1(\alpha) <\mu_2(\alpha)<\dots< \mu_K(\alpha)<0 \qquad \qquad \text{for $\alpha> \alpha_p$.} \end{equation} Combining this fact with the observations summarized above, one may then derive the following facts which we cite here in a slightly modified form from \cite{Amadori-Gladiali2}. \begin{proposition} (see \cite[Prop. 1.3 and 1.4]{Amadori-Gladiali2})\\ \label{spectral-curves-0} Let $p > 2$ and $\alpha >\alpha_p$. Then the Morse index of $u_\alpha$ is given by $$ m(u_\alpha)= \sum \limits_{(i,\ell) \in E^-} d_\ell, $$ where $E^-$ denotes the set of pairs $(i,\ell)$ with $i \in \N,\: \ell \in \N \cup \{0\}$ and $\mu_{i}(\alpha)+ \lambda_\ell <0$. Moreover, $u_\alpha$ is nondegenerate if and only if $$ \mu_i(\alpha) + \lambda_\ell \not = 0\qquad \text{for every $i \in \{1,\dots,K\}$, $\ell \in \N \cup \{0\}$.} $$ \end{proposition} In order to describe the asymptotic distribution of negative eigenvalues of $L^\alpha$, it is essential to study the asymptotics of the eigenvalues $\alpha \mapsto \mu_i(\alpha)$, $i=1,\dots,K$. 
With regard to this aspect, we mention the estimate \begin{equation} \label{gladiali-mu-i-est} \mu_i(\alpha) < - \frac{(\alpha+2)\bigl(\alpha+ 2(N-1)\bigr)}{4} \qquad \text{for $\alpha > \alpha_p$, $i = 1,\dots,K-1$,} \end{equation} which has been derived in \cite[Lemma 5.11 and Remark 5.12]{Amadori-Gladiali2}. In particular, it follows that $\mu_i(\alpha) \to -\infty$ as $\alpha \to \infty$ for $i = 1,\dots,K-1$. In our first main result, we complement this estimate by deriving asymptotics for $\mu_i(\alpha)$. \begin{theorem} \label{spectral-curves} Let $p > 2$ and $\alpha >\alpha_p$. Then the negative eigenvalues of (\ref{eq:weighted-eigenvalue-problem-reduced}) are given as $C^1$-functions $(\alpha_p,\infty) \to \R$, $\alpha \mapsto \mu_{i}(\alpha)$, $i = 1,\ldots,K$ satisfying the asymptotic expansions \begin{equation} \label{expansions} \mu_i(\alpha) = \nu^*_i \alpha^2 + c^*_i \alpha +o(\alpha) \quad \text{and}\quad \mu_i'(\alpha) = 2 \nu^*_i \alpha + c^*_i +o(1) \qquad \text{as $\alpha \to \infty$,} \end{equation} where $c^*_i$, $i=1,\dots,K$ are constants and the values $\nu^*_1 < \nu^*_2 < \dots< \nu^*_K < 0$ are precisely the negative eigenvalues of the eigenvalue problem \begin{equation} \label{eq:weighted-eigenvalue-translimit-preliminaries} \left\{ \begin{aligned} &-\Psi'' - (p-1)e^{-t}|U_\infty(t)|^{p-2}\Psi= \nu \Psi \quad \text{in $[0,\infty)$,}\\ &\qquad \Psi(0)=0,\quad \Psi \in L^\infty(0,\infty), \end{aligned} \right. \end{equation} with $U_\infty$ given in Proposition~\ref{limit-shape}. In particular, there exists $\alpha^*>0$ such that the curves $\mu_i$, $i=1,\dots,K$ are strictly decreasing on $[\alpha^*, \infty)$. \end{theorem} \begin{remark}\label{introduction-remark-1} {\rm The strict monotonicity of the curves $\mu_i$ on $[\alpha^*, \infty)$ will be of key importance for the derivation of bifurcation of nonradial solutions via variational bifurcation theory. 
For this we require the derivative expansion in (\ref{expansions}), but we do not need additional information on the constants $c_i^*$ since $\nu^*_i<0$ for $i=1,\dots,K$. Our proof of (\ref{expansions}) gives rise to the following characterization of the constants $c_i^*$: For fixed $i \in \{1,\dots,K\}$, we have $$ c_i^*= -(2N \nu_i^* + N-2)(p-1) \int_0^\infty \left( t e^{-t} |U_\infty|^{p-2}\Psi^2 + (p-2) e^{-t} |U_\infty|^{p-4} U_\infty V \Psi^2 \right) \, dt, $$ where $U_\infty$ is given in Proposition~\ref{limit-shape}, $V$ is the unique bounded solution of the problem $$ -V'' - (p-1)e^{-t}|U_\infty|^{p-2}V = U_\infty' - t e^{-t}|U_\infty|^{p-2}U_\infty \quad \text{in $[0,\infty)$,}\qquad V(0)=0 $$ and $\Psi$ is the (up to sign unique) eigenfunction of (\ref{eq:weighted-eigenvalue-translimit-preliminaries}) associated with the eigenvalue $\nu_i^*$ with $\int_0^\infty \Psi^2 \, dt =1$.} \end{remark} The strict monotonicity of the curves $\mu_i$ for large $\alpha$ asserted in Theorem \ref{spectral-curves} allows us to deduce the following useful properties related to nondegeneracy and a change of the Morse index of the functions $u_\alpha$. \begin{corollary} \label{corollary-on-eigenvalue-curves} Let $p > 2$. For every $i \in \{1,\dots,K\}$, there exist $\ell_i \in \N \cup \{0\}$ and sequences of numbers $\alpha_{i,\ell} \in (\alpha_p,\infty)$, $\eps_{i,\ell}>0$, $\ell \ge \ell_i$ with the following properties: \begin{itemize} \item[(i)] $\alpha_{i,\ell} \to \infty$ as $\ell \to \infty$. \item[(ii)] $\mu_i(\alpha_{i,\ell})+ \lambda_\ell = 0$. In particular, $u_{\alpha_{i,\ell}}$ is degenerate. \item[(iii)] $u_\alpha$ is nondegenerate for $\alpha \in (\alpha_{i,\ell}-\eps_{i,\ell},\alpha_{i,\ell}+\eps_{i,\ell})$, $\alpha \not = \alpha_{i,\ell}$. \item[(iv)] For $\eps \in (0,\eps_{i,\ell})$, the Morse index of $u_{\alpha_{i,\ell}+\eps}$ is strictly larger than the Morse index of $u_{\alpha_{i,\ell}-\eps}$.
\end{itemize} \end{corollary} With the help of Corollary~\ref{corollary-on-eigenvalue-curves} and an abstract bifurcation result in \cite{kielhoefer:1988}, we will derive our second main result on the bifurcation of nonradial solutions from the branch $\alpha \mapsto u_\alpha$. \begin{theorem} \label{thm-bifurcation} Let $2<p<\frac{2N}{N-2}$, and let $K \in \N$, $i \in \{1,\dots,K\}$ be fixed. Then the points $\alpha_{i,\ell}$, $\ell \ge \ell_i$ are bifurcation points for nonradial solutions of (\ref{1.4}). More precisely, for every $\ell \ge \ell_i$, there exists a sequence $(\alpha_n,u^n)_n$ in $(0,\infty) \times C^2(\ov\B)$ with the following properties: \begin{itemize} \item[(i)] $\alpha_n \to \alpha_{i,\ell}$, and $u^n \to u_{\alpha_{i,\ell}}$ in $C^2(\ov\B)$. \item[(ii)] For every $n \in \N$, $u^n$ is a nonradial solution of (\ref{1.4}) with $\alpha= \alpha_n$ having precisely $K$ nodal domains $\Omega_1,\dots,\Omega_K$ such that $0 \in \Omega_1$, $\Omega_1$ is homeomorphic to a ball and $\Omega_2,\dots,\Omega_K$ are homeomorphic to annuli. \end{itemize} Here, $\ell_i \in \N \cup \{0\}$ and the values $\alpha_{i,\ell}$ are given in Corollary~\ref{corollary-on-eigenvalue-curves}. \end{theorem} As mentioned above, Theorem~\ref{thm-bifurcation} will be derived from Corollary~\ref{corollary-on-eigenvalue-curves} and variational bifurcation theory. For this we reformulate (\ref{1.4}) as a bifurcation equation in the Hilbert space $H^1_0(\B)$ and show that, as a consequence of Corollary~\ref{corollary-on-eigenvalue-curves}, the crossing number of an associated operator family is nonzero at the points $\alpha_{i,\ell}$. Thus the main theorem in \cite{kielhoefer:1988} applies and yields that the points $\alpha_{i,\ell}$, $\ell \ge \ell_i$ are bifurcation points for solutions of (\ref{1.4}) along the branch $\alpha \mapsto u_\alpha$. 
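To illustrate the role of Corollary~\ref{corollary-on-eigenvalue-curves}(iv) in this argument, observe that, by Proposition~\ref{spectral-curves-0}, if $(i,\ell)$ is the only pair for which $\mu_i(\cdot)+\lambda_\ell$ changes sign at $\alpha_{i,\ell}$ (from positive to negative, by the strict monotonicity in Theorem~\ref{spectral-curves}), then
\[
m(u_{\alpha_{i,\ell}+\eps}) - m(u_{\alpha_{i,\ell}-\eps}) = d_\ell \ge 1 \qquad \text{for $\eps \in (0,\eps_{i,\ell})$,}
\]
so the Morse index jumps by the full dimension $d_\ell$ of the corresponding space of spherical harmonics. This jump is what makes the crossing number of the associated operator family nonzero.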
To see that bifurcation of {\em nonradial} solutions occurs, it suffices to note that the solutions $u_\alpha$ are radially nondegenerate for $\alpha>0$, i.e., the kernel of $L^\alpha$ does not contain radial functions. A proof of the latter fact can be found in \cite[Theorem 1.7]{Amadori-Gladiali2}, and it also follows from results in \cite{Yanagida}. Since Corollary~\ref{corollary-on-eigenvalue-curves} is a rather direct consequence of Theorem~\ref{spectral-curves}, the major part of this paper is concerned with the proofs of Proposition~\ref{limit-shape} and Theorem~\ref{spectral-curves}. It is not difficult to see that, via the transformation given in (\ref{eq:U-alpha-definition}), the H\'enon equation (\ref{1.4}) transforms into a family of problems depending on the new parameter $\gamma=\frac{N-2}{N+\alpha}$ which admits a well-defined limit problem as $\gamma \to 0^+$ given by (\ref{eq:limit-U}). It is then necessary to choose a proper function space which allows us to apply the implicit function theorem at $\gamma=0$, and this yields the convergence statement in Proposition~\ref{limit-shape}. The idea of the proof of Theorem~\ref{spectral-curves} is similar, as we use the same transformation (up to scaling) to rewrite the $\alpha$-dependent eigenvalue problem (\ref{eq:weighted-eigenvalue-problem-reduced}) as a $\gamma$-dependent eigenvalue problem on the interval $[0,\infty)$. We shall then see that (\ref{eq:weighted-eigenvalue-translimit-preliminaries}) arises as the limit of the transformed eigenvalue problems as $\gamma \to 0^+$. In order to obtain $C^1$-expansions of eigenvalue curves, we wish to apply the implicit function theorem again at the point $\gamma=0$. Here a major difficulty arises in the case where $p \in (2,3]$, as the map $U \mapsto |U|^{p-2}$ fails to be differentiable between standard function spaces.
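To see the difficulty in concrete terms, consider the formal derivative of the map $U \mapsto |U|^{p-2}$ in direction $h$, namely
\[
h \mapsto (p-2)\,|U|^{p-4}U\,h = (p-2)\,|U|^{p-3}\operatorname{sign}(U)\,h.
\]
If $t_*$ is a simple zero of $U$, then $|U(t)|^{p-3} \sim |U'(t_*)|^{p-3}\,|t-t_*|^{p-3}$ near $t_*$, which is unbounded for $p<3$; hence the map cannot be differentiable into a space carrying a uniform norm. On the other hand, $|t-t_*|^{p-3}$ is integrable near $t_*$ for every $p>2$, which indicates why an $L^1$-type norm is the natural remedy.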
We overcome this problem by restricting this map to the subset of $C^1$-functions on $[0,\infty)$ having only a finite number of simple zeros and by considering its differentiability with respect to a weighted uniform $L^1$-norm, see Sections~\ref{sec:spectr-asympt-refeq} and \ref{sec:differentiability-g}. This is certainly the hardest step in the proof of Theorem~\ref{spectral-curves}. It seems instructive to compare the transformations used in the present paper with the ones used in \cite{Moreira-dos-Santos-Pacella,Amadori-Gladiali2}. Transforming a radial solution $u$ of (\ref{1.4}) by setting $w(\tau)=(\frac{2}{2+\alpha})^{\frac{2}{p-2}}u(\tau^{\frac{2}{2+\alpha}})$ for $\tau \in (0,1)$ leads to the problem \begin{equation} \label{gladiali-transform} -(t^{M-1}w')'=t^{M-1}|w|^{p-2}w\quad \text{in $(0,1)$,}\qquad \quad w'(0)= w(1)=0 \end{equation} with $M=M(\alpha)= \frac{2(N+\alpha)}{2+\alpha}$. Via this transformation, the associated weighted singular eigenvalue problem (\ref{eq:weighted-eigenvalue-problem-reduced}) corresponds to the even more singular eigenvalue equation \begin{equation} \label{gladiali-transform-eigenvalue} -(t^{M-1}\psi')'-(p-1) t^{M-1}|w|^{p-2}\psi= t^{M-3}\hat \nu \psi \qquad \text{in $(0,1)$}, \end{equation} which is considered in $M$-dependent function spaces in \cite{Amadori-Gladiali2}. In principle, it should be possible to carry out our approach also via these transformations, but we found it easier to find appropriate parameter-independent function spaces in the framework we use here. We stress again that finding parameter-independent function spaces is essential for the application of the implicit function theorem. The paper is organized as follows. In Section~\ref{sec:limit-alpha-to}, we first recall some known results on radial solutions of \eqref{1.4} and properties of the associated linearized operators. 
We then study the asymptotic behavior of the functions $u_\alpha$ as $\alpha \to \infty$ and prove Proposition~\ref{limit-shape}. Section~\ref{sec:spectr-asympt-refeq} is devoted to the proofs of Theorem~\ref{spectral-curves} and Corollary~\ref{corollary-on-eigenvalue-curves}. In Section~\ref{sec:differentiability-g} we prove, in particular, the differentiability of the map $U \mapsto |U|^{p-2}$ for $p \in (2,3]$ in a suitable functional setting. In Section~\ref{sec:bifurc-almost-radi}, we finally prove the bifurcation result stated in Theorem~\ref{thm-bifurcation}. \subsection*{Acknowledgement} The authors wish to thank Francesca Gladiali for helpful discussions and for pointing out the paper \cite{Amadori-Gladiali2}. \section{The limit shape of sign changing radial solutions of (\ref{1.4}) as $\alpha \to \infty$} \label{sec:limit-alpha-to} This section is devoted to the asymptotics of branches of sign changing radial solutions of (\ref{1.4}) as $\alpha \to \infty$. In particular, we will prove Proposition~\ref{limit-shape}. As before, we let $K \in \N$ be fixed, and we first recall a result on the existence, uniqueness and radial Morse index of a radial solution $u_\alpha$ of (\ref{1.4}) with $K$ nodal domains. \begin{theorem} \label{sec:bifurc-nonr-solut-1} For every $p>2$ and $\alpha >\alpha_p$, equation~(\ref{1.4}) has a unique radial solution $u_\alpha \in C^2(\overline \B)$ with precisely $K$ nodal domains such that $u_\alpha(0)>0$. Furthermore, the linearized operator $$ L^\alpha : H^2(\B) \cap H^1_0(\B) \to L^2(\B),\qquad L^\alpha \phi:= -\Delta \phi - (p-1) |x|^\alpha |u_\alpha|^{p-2} \phi $$ is a Fredholm operator of index zero having the following properties for every $\alpha \ge 0$: \begin{enumerate} \item[(i)] $u_\alpha$ is radially nondegenerate in the sense that the kernel of $L^\alpha$ does not contain radial functions. 
\item[(ii)] $u_\alpha$ has radial Morse index $K$ in the sense that $L^\alpha$ has precisely $K$ negative eigenvalues corresponding to radial eigenfunctions in $H^2(\B) \cap H^1_0(\B)$. \end{enumerate} \end{theorem} Theorem~\ref{sec:bifurc-nonr-solut-1} is merely a combination of results in \cite{nagasaki} and \cite{Amadori-Gladiali2}. More precisely, the existence and uniqueness of $u_\alpha$ is proved in \cite{nagasaki}. Note that the operator $L^\alpha$ is a compact perturbation of the isomorphism $-\Delta: H^2(\B) \cap H^1_0(\B) \to L^2(\B)$, which implies that it is a Fredholm operator of index zero. A proof of the radial nondegeneracy and radial Morse index can be found in \cite[Theorem 1.7]{Amadori-Gladiali2}. We remark here that the radial nondegeneracy can also be deduced from results in \cite{Yanagida}. \begin{remark} (i) Since equation~(\ref{1.4}) remains invariant under a change of sign $u \mapsto -u$, it follows from Theorem~\ref{sec:bifurc-nonr-solut-1} that for every $p>2$ and $\alpha >\alpha_p$, equation~(\ref{1.4}) has precisely two radial solutions $\pm u_\alpha \in C^2(\overline \B)$ with precisely $K$ nodal domains.\\ (ii) In \cite{nagasaki} it is also shown that for $p \geq \frac{2N + 2 \alpha}{N-2}$, the trivial solution is the only radial solution of equation~\eqref{1.4}. \end{remark} Next we recall that, in the radial variable, $u_\alpha$ solves \begin{equation} \label{eq:u-alpha-radial-variable-1} -u_{rr} - \frac{N-1}{r}u_r = r^\alpha |u|^{p-2}u, \quad r \in (0,1), \qquad u'(0)=u(1)=0. \end{equation} Inspired by Byeon-Wang \cite{B-W}, we transform equation (\ref{eq:u-alpha-radial-variable-1}), considering $$ U_\alpha: [0,\infty) \to \R, \qquad U_\alpha(t)= (N+\alpha)^{-\frac{2}{p-2}}\: u_\alpha(e^{-\frac{t}{N+\alpha}}). $$ By direct computation, we see that $U_\alpha$ is a bounded solution of the problem \begin{equation} \label{eq:U-gamma} -(e^{-\gamma t}U')' = e^{-t}|U|^{p-2}U \quad \text{in $I:=[0,\infty)$,}\qquad U(0)=0,
\end{equation} with $\gamma= \gamma(\alpha)=\frac{N-2}{N+\alpha}$. Moreover, $U_\alpha$ has precisely $K-1$ zeros in $(0,\infty)$ and satisfies $\lim \limits_{t \to \infty}U_\alpha(t)>0$, which implies that $(-1)^{K-1}U_\alpha'(0)>0$. Considering the limit $\alpha \to \infty$ in (\ref{eq:u-alpha-radial-variable-1}) corresponds to sending $\gamma \to 0$ in (\ref{eq:U-gamma}), which leads to the limit problem \begin{equation} \label{eq:limit-U-0-1} -U'' = e^{-t}|U|^{p-2}U \quad \text{in $I$,}\qquad U(0)=0. \end{equation} We first note the following facts regarding (\ref{eq:limit-U-0-1}). \begin{proposition} \label{sec:preliminaries-limit-problem-1} Let $p>2$. The problem (\ref{eq:limit-U-0-1}) admits a unique bounded solution $U_\infty \in C^2(\overline I)$ with precisely $K-1$ zeros in $(0,\infty)$ and $U_\infty'(0)>0$. \end{proposition} \begin{proof} The existence of a bounded solution of \eqref{eq:limit-U-0-1} with precisely $K-1$ zeros in $(0,\infty)$ has been proved by Naito \cite[Theorem 1]{Naito}. To prove uniqueness, we first note that every solution $U$ of \eqref{eq:limit-U-0-1} is concave on intervals where $U>0$ and convex on intervals where $U<0$. From this we deduce that every bounded solution $U$ with finitely many zeros has a limit $$ \ell(U)= \lim_{t \to \infty}U(t) \not = 0. $$ Next, we let $U_1$, $U_2$ be bounded solutions of \eqref{eq:limit-U-0-1} with precisely $K-1$ zeros in $(0,\infty)$. Moreover, we let $\kappa= \frac{\ell(U_1)}{\ell(U_2)}$, $c_\kappa:= \ln |\kappa|^{p-2}$ and consider $$ \tilde U_2: [c_\kappa, \infty) \to \R, \qquad \tilde U_2(t)= \kappa U_2(t-c_\kappa). $$ Then $\tilde U_2$ solves the equation in \eqref{eq:limit-U-0-1} on $[c_\kappa, \infty)$ and satisfies $\tilde U_2(c_\kappa) = 0$.
By construction we have $$ \lim_{t \to \infty}U_1(t)= \lim_{t \to \infty}\tilde U_2(t), $$ and thus the local uniqueness result at infinity given in \cite[Proposition 3.1]{Naito} implies that $$ U_1(t)= \tilde U_2(t) \qquad \text{for $t \ge \max \{0,c_\kappa\}$.} $$ Since $U_1$ and $\tilde U_2$ have $K-1$ zeros in $(0,\infty)$ and $(c_\kappa,\infty)$, respectively, and $U_1(0)=\tilde U_2(c_\kappa)=0$, it follows that $c_\kappa=0$, hence $\kappa=1$ and therefore $U_1 \equiv U_2$. The uniqueness of $U_\infty$ thus follows. \end{proof} In the following, it is more convenient to work with the parameter $\gamma= \frac{N-2}{N+\alpha} \in (0,\frac{N-2}{N})$ in place of $\alpha$. Hence, from now on, we will write $U_\gamma$ in place of $U_\alpha$. We also set $U_0:=(-1)^{K-1}U_\infty$, so that \begin{equation} \label{eq:lim-infty-U-0} \lim_{t \to \infty}U_0(t)>0. \end{equation} We wish to consider \eqref{eq:limit-U} and \eqref{eq:U-gamma} in suitable spaces of continuous functions. For $\delta \ge 0$, we let $C_\delta(\overline I)$ denote the space of all functions $v \in C(\overline I)$ such that $$ \|v\|_{C_\delta} := \sup_{t \ge 0} e^{\delta t}|v(t)| < \infty. $$ More generally, for an integer $k \ge 0$, we let $C_\delta^k(\overline I)$ denote the space of all functions $v \in C^k(\overline I)$ such that $v^{(j)} \in C_\delta(\overline I)$ for $j=0,\dots,k$. Then $C_\delta^k(\overline I)$ is a Banach space with norm $$ \|v\|_{C_\delta^k} := \sum_{j=0}^k \|v^{(j)}\|_{C_\delta} . $$ We note the following. \begin{lemma} \label{compactness-C-delta-spaces} Let $k > \ell \ge 0$ and $\delta_1 > \delta_2 \ge 0$. Then the embedding $C^{k}_{\delta_1}(I) \hookrightarrow C^{\ell}_{\delta_2}(I)$ is compact. \end{lemma} \begin{proof} This is a straightforward consequence of the Arzel\`a-Ascoli theorem.
\end{proof} For the remainder of this section, we fix $\delta = \frac{2}{N}$ and consider the spaces $$ E := \{ v \in C^2(\ov I)\::\: v(0)=0,\: v' \in C^1_{\delta}(I) \} \qquad \text{and}\qquad F:= C_{\delta}(I). $$ As noted above, $F$ is a Banach space with norm $\|\cdot\|_F = \|\cdot\|_{C_{\delta}}$. Moreover, for every $v \in E$ we have $$ |v(t)| \le \Bigl| \int_0^t v'(s)\,ds \Bigr| \le \|v'\|_{C^1_{\delta}} \int_0^t e^{-\frac{2s}{N}}\,ds \le \frac{N}{2} \|v'\|_{C^1_{\delta}} \qquad \text{for all $t \ge 0$} $$ and therefore $\|v\|_{L^\infty(I)} \le \frac{N}{2} \|v'\|_{C^1_{\delta}}$. Hence we may endow $E$ with the norm $$ v \mapsto \|v\|_E := \|v\|_{L^\infty(I)} + \|v'\|_{C^1_{\delta}}. $$ Since $C^1_{\delta}$ is a Banach space, it easily follows that $E$ is a Banach space as well. We also note that \begin{equation} \label{eq:limit-v} \lim_{t \to \infty}v(t) = \int_0^\infty v'(s)\,ds \quad \text{exists for every $v \in E$.} \end{equation} \begin{lemma} \label{bounded-sol-E} Let $p>2$, $\gamma \in [0,\frac{N-2}{N}]$, and let $U \in C^2(\ov I)$ be a bounded nontrivial solution of \eqref{eq:U-gamma}. Then $U \in E$, and $\lim \limits_{t \to \infty}U(t) \not = 0$. \end{lemma} \begin{proof} Since $U$ is bounded, we have $$ |(e^{-\gamma t}U')'| \leq e^{-t} |U|^{p-1} \leq C e^{-t} \qquad \text{for $t \ge 0$} $$ with a constant $C>0$. Furthermore, there exists a sequence $t_n \to \infty$ with $U'(t_n) \to 0$ as $n \to \infty$. Consequently, $$ e^{-\gamma t} |U'(t)| = \lim_{n \to \infty} \left| \int_t^{t_n} (e^{-\gamma s}U'(s))' \, ds \right| \leq \lim_{n \to \infty} C \int_t^{t_n} e^{- s} \, ds = C e^{-t} $$ and therefore $|U'(t)| \leq C e^{(\gamma -1)t} \leq C e^{-\frac{2}{N} t}$ for $t \ge 0$.
Since we can write \eqref{eq:U-gamma} as \begin{equation} \label{eq:U-gamma-alternate} -U'' + \gamma U' = e^{(\gamma-1)t} |U|^{p-2} U , \end{equation} it follows that $|U''(t)| \leq |\gamma| |U'(t)| + e^{(\gamma-1)t} |U(t)|^{p-1} \leq C' e^{-\frac{2}{N} t}$ for $t \ge 0$ with a constant $C'>0$, hence $U \in E$. It remains to show that $\lim \limits_{t \to \infty}U(t) \not = 0$. For this we consider the nonincreasing function $m(t):= \sup \limits_{s \ge t}|U(s)|$. Using (\ref{eq:U-gamma}) and the fact that $U \in E$, we find that $$ e^{-\gamma t}|U'(t)| = \Bigl|\int_t^\infty e^{-s}|U(s)|^{p-2}U(s)\,ds\Bigr| \le e^{-t} m^{p-1}(t) \qquad \text{for $t \ge 0$} $$ and therefore $$ |U(t)| = \Bigl| \int_t^\infty U'(s)\,ds \Bigr| \le \int_{t}^\infty e^{(\gamma-1)s} m^{p-1}(s)\,ds \le \frac{m^{p-1}(t)}{1-\gamma}e^{(\gamma-1)t}\qquad \text{for $t \ge 0$.} $$ Consequently, $$ m(t) = \sup_{s \ge t}|U(s)| \le \sup_{s \ge t}\Bigl(\frac{m^{p-1}(s)}{1-\gamma}e^{(\gamma-1)s}\Bigr) = \frac{m^{p-1}(t)}{1-\gamma}e^{(\gamma-1)t} $$ and hence $m(t)=0$ or $m^{p-2}(t) \ge (1-\gamma)e^{(1-\gamma)t} \ge 1-\gamma$ for $t \ge 0$. Since $m(0)\not = 0$ as $U \not \equiv 0$, we conclude by continuity of $m$ that $m^{p-2}(t)\ge 1-\gamma$ for all $t \ge 0$. Together with \eqref{eq:limit-v}, this shows that $\lim \limits_{t \to \infty}U(t) \not = 0$. \end{proof} We intend to use the implicit function theorem to show that $U_\gamma \to U_0$ in $E$ as $\gamma \to 0$. This requires uniqueness and nondegeneracy properties as given in the following two lemmas. \begin{lemma} \label{uniqueness via zeros} Let $p>2$, $\gamma \in (0,\frac{N-2}{N+\alpha_p})$ and let $\tilde U \in E$ be a solution of \eqref{eq:U-gamma} with precisely $K-1$ zeros in $(0,\infty)$ and $\lim \limits_{t \to \infty}\tilde U(t)>0$. Then $\tilde U = U_\gamma$.
\end{lemma} \begin{proof} Let $\alpha>\alpha_p$ be the unique value such that $\gamma = \gamma(\alpha)= \frac{N-2}{N+\alpha}$, and consider the function $$ u:[0,1] \to \R, \qquad u(r)= \left \{ \begin{aligned} &(N+\alpha)^\frac{2}{p-2} \tilde U(-(N+\alpha)\ln r),&&\qquad r>0,\\ &(N+\alpha)^\frac{2}{p-2} \lim \limits_{t \to \infty}\tilde U(t),&&\qquad r=0. \end{aligned} \right. $$ Since $\tilde U \in E$, the latter limit exists. We then have $u \in C^2((0,1]) \cap C([0,1])$, and $u$ solves equation~\eqref{eq:u-alpha-radial-variable-1} on $(0,1)$. Moreover, we have $u'(r) = - (N+\alpha)^\frac{p}{p-2}\frac{\tilde U'(-(N+\alpha)\ln r)}{r}$ for $r \in (0,1]$ and therefore $$ \lim_{r \to 0}\frac{u'(r)}{r} = - (N+\alpha)^\frac{p}{p-2}\lim_{t \to \infty} e^{\frac{2t}{N+\alpha}} \tilde U'(t). $$ Since $\frac{2}{N+\alpha}< \frac{2}{N}$ and $\tilde U \in E$, we deduce that $\lim \limits_{r \to 0}\frac{u'(r)}{r}=0$. From equation~\eqref{eq:u-alpha-radial-variable-1} it then also follows that $\lim \limits_{r \to 0}u''(r)$ exists, and that $u$ also satisfies the boundary conditions in~\eqref{eq:u-alpha-radial-variable-1}. Moreover, we have $u(0)>0$ since $\lim \limits_{t \to \infty}\tilde U(t)>0$ by assumption. The uniqueness result in Theorem \ref{sec:bifurc-nonr-solut-1} then yields that $u$ is equal to $u_\alpha$. Transforming back, we conclude that $\tilde U= U_\gamma$. \end{proof} \begin{lemma} \label{sec:preliminaries-limit-problem-2} Let $p>2$ and $\gamma \in [0,\frac{N-2}{N+\alpha_p})$. Then the solution $U_\gamma$ of problem (\ref{eq:U-gamma}) is nondegenerate in the sense that the equation $$ -(e^{-\gamma t} v')' -(p-1)e^{-t}|U_\gamma|^{p-2}v = 0 \quad \text{in $[0,\infty)$,}\qquad v(0)=0 $$ has no bounded nontrivial solution.
\end{lemma} \begin{proof} We consider the auxiliary function $w:= U_\gamma' + \frac{\gamma-1}{p-2}U_\gamma$, which, by direct computation, solves the linearized equation \begin{equation} \label{eq:nondeg-1} -(e^{-\gamma t} w')' -(p-1)e^{-t}|U_\gamma|^{p-2}w = 0 \qquad \text{in $[0,\infty)$.} \end{equation} Moreover, we have $\lim \limits_{t \to \infty}w'(t)=0$ since $U_\gamma \in E$ by Lemma~\ref{bounded-sol-E}. Suppose by contradiction that there exists a bounded function $v \in C^2([0,\infty))$, $v \not \equiv 0$ satisfying \begin{equation} \label{eq:nondeg-2} -(e^{-\gamma t}v')' -(p-1)e^{-t}|U_\gamma|^{p-2}v = 0 \quad \text{in $[0,\infty)$,}\qquad v(0)=0. \end{equation} Sturm comparison with $w$ yields that $v$ can only have finitely many zeros in $I$. Let $t_0 \ge 0$ denote the largest zero of $v$ in $[0,\infty)$. Since $v$ is bounded, there exists a sequence $(t_n)_n \subset [t_0,\infty)$ such that $t_n \to \infty$ and $v'(t_n) \to 0$ as $n \to \infty$. From (\ref{eq:nondeg-1}) and (\ref{eq:nondeg-2}), we deduce that $$ -\int_{t_0}^\infty (e^{-\gamma t}v')' w = \int_{t_0}^\infty (p-1) e^{-t}|U_\gamma|^{p-2} vw = - \int_{t_0}^\infty (e^{-\gamma t} w')' v . $$ Since $\lim \limits_{n \to \infty} e^{-\gamma t_n} v'(t_n)=\lim \limits_{n \to \infty} e^{-\gamma t_n} w'(t_n)=0$ and $v(t_0)=0$, integration by parts yields \begin{align*} -e^{-\gamma t_0} v'(t_0)w(t_0) &= \lim_{n \to \infty} e^{-\gamma t_n} v'(t_n)w(t_n) - e^{-\gamma t_0} v'(t_0)w(t_0)\\ &= \lim_{n \to \infty} e^{-\gamma t_n} w'(t_n)v(t_n) - e^{-\gamma t_0} w'(t_0)v(t_0) = 0 , \end{align*} which implies that $v'(t_0)=0$ or $w(t_0)=0$. In the first case we have $v(t_0)=v'(t_0)=0$ and therefore $v \equiv 0$, which is a contradiction. In the second case, since $w \not \equiv 0$ and $v'(t_0) \neq 0$, there exists $c \neq 0$ such that $cw'(t_0) = v'(t_0)$, which implies $v \equiv cw$. This contradicts $v(0)=0 \neq U_\gamma'(0) = w(0)$. \end{proof} We may now state a continuation result for the map $\gamma \mapsto U_\gamma$ which in particular implies Proposition~\ref{limit-shape}.
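For later reference, we record the computation behind the auxiliary function $w$ used in the preceding proof. Differentiating the equation $-(e^{-\gamma t}U_\gamma')' = e^{-t}|U_\gamma|^{p-2}U_\gamma$ with respect to $t$ and using $(e^{-\gamma t}U_\gamma')'' = (e^{-\gamma t}U_\gamma'')' - \gamma(e^{-\gamma t}U_\gamma')'$, we obtain
\[
-(e^{-\gamma t}U_\gamma'')' = (\gamma-1)\,e^{-t}|U_\gamma|^{p-2}U_\gamma + (p-1)\,e^{-t}|U_\gamma|^{p-2}U_\gamma'.
\]
Adding $c$ times the equation itself, it follows that $w = U_\gamma' + cU_\gamma$ satisfies
\[
-(e^{-\gamma t}w')' = (p-1)\,e^{-t}|U_\gamma|^{p-2}w + \bigl(\gamma-1-(p-2)c\bigr)\,e^{-t}|U_\gamma|^{p-2}U_\gamma,
\]
so the choice $c = \frac{\gamma-1}{p-2}$ yields (\ref{eq:nondeg-1}).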
\begin{proposition} \label{implicit function for U-gamma} Let $p>2$. There exists $\eps_0 >0$ such that the map $(0,\frac{N-2}{N+\alpha_p}) \to E$, $\gamma \mapsto U_\gamma$ extends to a $C^1$-map $g: (-\eps_0,\frac{N-2}{N+\alpha_p}) \to E$ with $g(0)=U_0$. \end{proposition} \begin{proof} We consider the map $$ G: \left(-\infty,\frac{N-2}{N+\alpha_p}\right) \times E \to F,\qquad G(\gamma,U)= -U'' + \gamma U' - e^{(\gamma-1)t} |U|^{p-2} U. $$ Since $e^{(\gamma-1)t} \le e^{-\frac{2}{N}t}$ for $\gamma<\frac{N-2}{N+\alpha_p}$, $G$ is well-defined and of class $C^1$. Moreover, by definition of $U_\gamma$ we have \begin{equation} \label{eq:G-first-equality} G(\gamma,U_\gamma) =0 \qquad \text{for $\gamma \in \left[0,\frac{N-2}{N+\alpha_p}\right)$.} \end{equation} We first show that the linear map \begin{equation} \label{L-gamma-isomorphism} L_\gamma :=d_U G(\gamma,U_\gamma): E \to F,\qquad L_\gamma \phi= -\phi'' + \gamma \phi' - (p-1)e^{(\gamma-1)t}|U_\gamma|^{p-2} \phi \end{equation} is an isomorphism for $\gamma \in [0,\frac{N-2}{N+\alpha_p})$. For this, we first note that \begin{equation} \label{sec-3-isomorphism} \text{the map $E \to F$, $\phi \mapsto -\phi'' +\gamma \phi'$ is an isomorphism.} \end{equation} Indeed, if $\phi \in E$ satisfies $-\phi''+ \gamma \phi' =0$, then $-\phi'+ \gamma \phi$ is constant and $\phi(0)=0$, hence $\phi(t)= c(e^{\gamma t}-1)$ for $t \in I$ with a constant $c \in \R$. Since $\phi \in E \subset L^\infty(I)$, we conclude that $\phi \equiv 0$. Moreover, if $f \in F$ is given and $\phi: I \to \R$ is defined by $$ \phi(t):= \int_0^t \int_s^\infty e^{\gamma(s-\sigma)}f(\sigma) \, d\sigma ds , $$ we have $-\phi''+ \gamma \phi'=f$ and $\phi(0)=0$. Furthermore, $$ |\phi'(t)|= \Bigl|\int_t^\infty e^{\gamma(t-\sigma)}f(\sigma) \, d\sigma\Bigr| \leq \int_t^\infty |f(\sigma)| \, d\sigma \leq \|f\|_F \int_t^\infty e^{-\frac{2}{N} s} \, ds \leq \frac{N}{2} \|f\|_F\: e^{-\frac{2}{N} t} $$ for $t \ge 0$ and therefore $\phi \in E$.
We thus infer (\ref{sec-3-isomorphism}). Next, we note that the linear map $E \to F$, $\phi \mapsto e^{(\gamma-1)(\cdot)} |U_\gamma|^{p-2} \phi$ is compact, since the embedding $E \hookrightarrow C_0(I)$ is compact by Lemma~\ref{compactness-C-delta-spaces} and the map $C_0(I) \to F$, $\phi \mapsto e^{(\gamma-1)(\cdot)} |U_\gamma|^{p-2} \phi$ is continuous. By \eqref{sec-3-isomorphism}, we therefore deduce that $L_\gamma$ is Fredholm of index zero. Since the equation $L_\gamma v=0$ only has the trivial solution $v=0$ in $E$ by Lemma~\ref{sec:preliminaries-limit-problem-2}, we conclude that $L_\gamma$ is an isomorphism, as claimed. We now apply the implicit function theorem to the map $G$ at the point $(0,U_0)$. This yields $\eps_0>0$ and a $C^1$-map ${\tilde g}:(-\eps_0,\eps_0) \to E$ with ${\tilde g}(0)=U_0$ and $G(\gamma,{\tilde g}(\gamma)) = 0$ for $\gamma \in (-\eps_0,\eps_0)$.\\ Next we claim that \begin{equation} \label{convergence in E-prop} \text{$U_\gamma = {\tilde g}(\gamma)$ for $\gamma \in [0,\eps_0)$.} \end{equation} Indeed, let $v_\gamma: = {\tilde g}(\gamma) \in E$ for $\gamma \in (-\eps_0,\eps_0)$. By the continuity of ${\tilde g}: (-\eps_0,\eps_0) \to E$ and (\ref{eq:limit-v}), the function $$ (-\eps_0,\eps_0) \to \R, \qquad \gamma \mapsto m_\gamma:= \lim_{t \to \infty}v_\gamma(t) $$ is also continuous, and it is nonzero for $\gamma \in [0,\eps_0)$ by Lemma~\ref{bounded-sol-E}. Moreover, by construction we have $v_0 = U_0$ and therefore $m_0 > 0$. It then follows that \begin{equation} \label{eq:m-gamma-pos} m_\gamma >0 \qquad \text{for all $\gamma \in [0,\eps_0)$.} \end{equation} By Lemma~\ref{uniqueness via zeros}, we thus only need to prove that $v_\gamma$ has $K-1$ zeros in $(0,\infty)$ for $\gamma \in [0,\eps_0)$. This is true for $\gamma= 0$ since $v_0 = U_0$. Moreover, the number of zeros of $v_\gamma$ remains constant for $\gamma \in [0,\eps_0)$.
Indeed, as a solution of (\ref{eq:U-gamma}), $v_\gamma$ cannot have double zeros, and the largest zero $t_\gamma$ of $v_\gamma$ in $[0,\infty)$ remains locally bounded for $\gamma \in [0,\eps_0)$ since $$ m_\gamma = \int_{t_\gamma}^\infty v_\gamma'(s)\,ds \le \|v_\gamma\|_{E} \int_{t_\gamma}^\infty e^{-\frac{2}{N} s}\,ds \le \frac{N}{2}\|v_\gamma\|_E \,e^{-\frac{2}{N} t_\gamma} $$ and therefore $t_\gamma \le - \frac{N}{2} \ln \frac{2 m_\gamma}{N \|v_\gamma\|_E}$. This finishes the proof of (\ref{convergence in E-prop}). By a continuation argument based on (\ref{L-gamma-isomorphism}), an application of the implicit function theorem at the points $(\gamma,U_\gamma)$ for $\gamma>0$, and the same continuity considerations as above, we then see that the map $$ g: (-\eps_0,\frac{N-2}{N+\alpha_p}) \to E, \qquad g(\gamma) = \left\{ \begin{aligned} &\tilde g(\gamma),&&\qquad \gamma \in (-\eps_0,0),\\ &U_\gamma,&&\qquad \gamma \in \left[0,\frac{N-2}{N+\alpha_p} \right) \end{aligned} \right. $$ is of class $C^1$. The proof is thus finished. \end{proof} Since $U_0 = (-1)^{K-1} U_\infty$, we have now completed the proof of Proposition~\ref{limit-shape}. \begin{remark} \label{gamma-negative-definition} Using the function $g$ and $\eps_0>0$ from Proposition~\ref{implicit function for U-gamma}, it is convenient to define $$ U_\gamma := g(\gamma) \quad \text{for $\gamma \in (-\eps_0,0).$} $$ With this definition, it follows from Proposition~\ref{implicit function for U-gamma} that the map $(-\eps_0,\frac{N-2}{N+\alpha_p}) \to E$, $\gamma \mapsto U_\gamma$ is of class $C^1$. Moreover, implicit differentiation of (\ref{eq:U-gamma}) at $\gamma=0$ shows that $V= \partial_\gamma \big|_{\gamma=0} U_\gamma$ is given as the unique bounded solution of the problem \begin{equation} \label{implicit-differentiation-formula-1} -V'' - (p-1)e^{-t}|U_0|^{p-2}V = U_0' - t e^{-t}|U_0|^{p-2}U_0 \quad \text{in $[0,\infty)$,}\qquad V(0)=0.
\end{equation} \end{remark} \section{Spectral asymptotics} \label{sec:spectr-asympt-refeq} This section is devoted to the proofs of Theorem~\ref{spectral-curves} and Corollary~\ref{corollary-on-eigenvalue-curves}. We fix $p>2$, and we start by recalling some results from \cite{Amadori-Gladiali2} on the eigenvalue problem (\ref{eq:weighted-L-alpha-problem}) and its relationship to the Morse index of $u_\alpha$. Recall that we consider (\ref{eq:weighted-L-alpha-problem}) in the weak sense. More precisely, we say that $\phi \in H^1_0(\B)$ is an eigenfunction of (\ref{eq:weighted-L-alpha-problem}) corresponding to the eigenvalue $\lambda \in \R$ if \begin{equation} \label{eq:weak-eigenvalue-singular} q_\alpha(\phi,\psi) = \lambda \int_{\B} \frac{\phi(x) \psi(x)}{|x|^2}dx \qquad \text{for all $\psi \in H^1_0(\B)$,} \end{equation} where \begin{equation} \label{eq:def-q-alpha} q_\alpha: H^1_0(\B)\times H^1_0(\B) \to \R, \qquad q_\alpha(v,w) := \int_{\B} \Bigl(\nabla v \cdot \nabla w - (p-1)|x|^\alpha |u_\alpha|^{p-2} vw\Bigr) \, dx \end{equation} is the quadratic form associated with the operator $L^\alpha$. Note that the right-hand side of (\ref{eq:weak-eigenvalue-singular}) is well-defined for $\phi, \psi \in H^1_0(\B)$ by Hardy's inequality. \begin{lemma}(see \cite[Prop. 4.1 and 5.1]{Amadori-Gladiali2})\\ \label{sec:spectral-asymptotics-morse-index} Let $\alpha>\alpha_p$. Then we have: \begin{enumerate} \item[(i)] The Morse index of $u_\alpha$ is given as the number of negative eigenvalues of \eqref{eq:weighted-L-alpha-problem}, counted with multiplicity. Moreover, every eigenfunction $v \in H^1_0(\B)$ of (\ref{eq:weighted-L-alpha-problem}) corresponding to a nonpositive eigenvalue is contained in $L^\infty(\B) \cap C^2(\B \setminus \{0\})$. \item[(ii)] Let $\phi \in H^1_0(\B)$ be an eigenfunction of (\ref{eq:weighted-L-alpha-problem}) corresponding to the eigenvalue $\lambda \in \R$.
Then there exists a number $\ell_0 \in \N \cup \{0\}$, spherical harmonics $Y_\ell$ of degree $\ell$ and functions $\phi_\ell \in H^1_{0,rad}(\B)$, $\ell=0,\dots,\ell_0$, with the property that $$ \phi(x)= \sum_{\ell =0}^{\ell_0} \phi_\ell(x) Y_\ell\left(\frac{x}{|x|}\right) \qquad \text{for $x \in \B$.} $$ Moreover, for every $\ell \in \{0,\dots,\ell_0\}$, we either have $\phi_\ell \equiv 0$, or $\phi_\ell$ is an eigenfunction of (\ref{eq:weighted-eigenvalue-problem-reduced}) corresponding to the eigenvalue $\mu = \lambda-\lambda_\ell$. \end{enumerate} \end{lemma} Regarding the reduced weighted eigenvalue problem~(\ref{eq:weighted-eigenvalue-problem-reduced}), we also recall the following. \begin{lemma} (see \cite[p.19 and Prop. 3.7]{Amadori-Gladiali2})\\ \label{eigenvalue lemma} Let $\alpha>\alpha_p$. Then $0$ is not an eigenvalue of \eqref{eq:weighted-eigenvalue-problem-reduced}, and the negative eigenvalues of \eqref{eq:weighted-eigenvalue-problem-reduced} are simple and given by \begin{equation} \label{eq:var-char-mu-j} \mu_j(\alpha):= \inf_{\substack{W \subset H_{0,rad}^1(\B)\\ \dim W=j}} \max_{v \in W\setminus \{0\}} \frac{\int_{\B} |\nabla v|^2 - (p-1)|x|^\alpha |u_\alpha|^{p-2} |v|^2 \, dx }{\int_\B |x|^{-2}|v|^2 \, dx }, \qquad j= 1,\dots, K. \end{equation} \end{lemma} Here we point out that Theorem~\ref{sec:bifurc-nonr-solut-1}(i) already implies that zero is not an eigenvalue of \eqref{eq:weighted-eigenvalue-problem-reduced}. We also note that Proposition~\ref{spectral-curves-0} now follows by combining Lemma \ref{sec:spectral-asymptotics-morse-index} and Lemma~\ref{eigenvalue lemma}. We now turn to the proof of Theorem \ref{spectral-curves}. For this, we transform the radial eigenvalue problem (\ref{eq:weighted-eigenvalue-problem-reduced}).
Note that, if we write an eigenfunction $\psi \in H^1_{0,rad}(\B)$ of (\ref{eq:weighted-eigenvalue-problem-reduced}) corresponding to an eigenvalue $\mu$ as a function of the radial variable $r = |x|$, it solves $$ -\psi'' -\frac{N-1}{r}\psi' - (p-1)r^{\alpha}|u_\alpha(r)|^{p-2}\psi(r) = \frac{\mu}{r^2} \psi \qquad \text{in $(0,1)$,}\qquad \qquad \psi(1)=0. $$ We transform this problem by considering again $I:=(0,\infty)$ and setting \begin{equation} \label{eq:transformed-variables} \nu = \frac{1}{(N+\alpha)^2}\, \mu, \qquad \qquad \Psi(t)=(N+\alpha) \psi(e^{- \frac{t}{N+\alpha}})\quad \text{for $t \in \overline I$.} \end{equation} This gives rise to the eigenvalue problem \begin{equation} \label{eq:trans-weighted-eigenvalue-problem} \left\{ \begin{aligned} &-(e^{-\gamma t}\Psi')' - (p-1)e^{-t}|U_\gamma(t)|^{p-2}\Psi= \nu e^{-\gamma t} \Psi \quad \text{in $I$,}\\ &\qquad \Psi(0)=0, \quad \Psi \in L^\infty(I) \end{aligned} \right. \end{equation} with $\gamma= \gamma(\alpha)=\frac{N-2}{N+\alpha} \in (0,\frac{N-2}{N+\alpha_p})$ as before. Here, we have added the condition $\Psi \in L^\infty(I)$ since we focus on eigenfunctions corresponding to negative eigenvalues, and in this case eigenfunctions $\psi \in H^1_{0,rad}(\B)$ of (\ref{eq:weighted-eigenvalue-problem-reduced}) are bounded by Lemma~\ref{eigenvalue lemma}. In the following, we also consider the case $\gamma=0$ in (\ref{eq:trans-weighted-eigenvalue-problem}), which corresponds to the linearization of (\ref{eq:limit-U-0-1}) at $U_0$: \begin{equation} \label{eq:weighted-eigenvalue-translimit} \left\{ \begin{aligned} &-\Psi'' - (p-1)e^{-t}|U_0(t)|^{p-2}\Psi= \nu \Psi \quad \text{in $I$,}\\ &\qquad \Psi(0) = 0, \quad \Psi \in L^\infty(I). \end{aligned} \right.
\end{equation} We note that, for $\gamma \in [0,\frac{N-2}{N+\alpha_p})$ and every solution $\Psi$ of (\ref{eq:trans-weighted-eigenvalue-problem}), there exists a sequence $t_n \to \infty$ with $\Psi'(t_n) \to 0$, which implies that \begin{equation} \label{eq:int-formular-eigenvalue} e^{-\gamma t}\Psi'(t) = \int_{t}^\infty -(e^{-\gamma s}\Psi')'(s)\,ds = \int_{t}^\infty \bigl(\nu e^{-\gamma s} + (p-1)e^{-s}|U_\gamma(s)|^{p-2}\bigr)\Psi(s)\,ds \end{equation} for $t \ge 0$. We also note that problem (\ref{eq:trans-weighted-eigenvalue-problem}) can be rewritten as \begin{equation} \label{eq:trans-weighted-eigenvalue-problem-rewritten} \left\{ \begin{aligned} &-\Psi'' + \gamma \Psi'- (p-1)e^{(\gamma-1)t}|U_\gamma(t)|^{p-2}\Psi= \nu \Psi \quad \text{in $I$,}\\ &\qquad \Psi(0)=0, \quad \Psi \in L^\infty(I). \end{aligned} \right. \end{equation} We need the following estimate in terms of the space $C_\delta^2(I)$ defined in Section~\ref{sec:limit-alpha-to}. \begin{lemma} \label{bounded-sol-exp-decay} Let $\nu_\sdiamond<0$, $\gamma_\sdiamond \in (0,\frac{N-2}{N+\alpha_p})$, and let $\delta = \frac{1}{2}\bigl(\sqrt{1-2\nu_\sdiamond}-1\bigr)>0$. Then there exists a constant $C=C(\nu_\sdiamond,\gamma_\sdiamond)>0$ such that for every solution $\Psi \in L^\infty(I)$ of the equation \begin{equation} \label{eq:bounded-sol-decay-eqn} -\Psi'' + \gamma \Psi'- (p-1)e^{(\gamma-1)t}|U_\gamma(t)|^{p-2}\Psi= \nu \Psi \end{equation} with $\nu \le \nu_\sdiamond$ and $\gamma \in [0,\gamma_\sdiamond]$ we have $\Psi \in C_\delta^2(I)$ with $\|\Psi\|_{C^2_\delta} \le C \|\Psi\|_{L^\infty(I)}$.
\end{lemma} \begin{proof} Since $\|U_{\gamma}\|_{L^\infty(I)}$ remains uniformly bounded for $\gamma \in [0,\gamma_\sdiamond]$ by Proposition~\ref{implicit function for U-gamma}, there exists $t_0= t_0(\nu_\sdiamond,\gamma_\sdiamond)>0$ such that $$ (p-1) e^{(\gamma-1)t} |U_{\gamma}(t)|^{p-2} \leq -\frac{\nu_\sdiamond}{2} \quad \text{for $t \geq t_0$, $\gamma \in \left[0,\gamma_\sdiamond\right]$.} $$ Let $\Psi$ be a bounded solution of (\ref{eq:bounded-sol-decay-eqn}) on $I$. Then $\Psi$ solves the differential inequality \begin{equation} \label{eq:psi-diff-ineq} \Psi'' - \gamma \Psi' + \frac{\nu_\sdiamond}{2} \Psi \ge 0 \qquad \text{in the open set $U_\Psi:= \{t \in (t_0,\infty)\,:\, \Psi(t)>0\}.$} \end{equation} For fixed $\eps>0$, we consider the function $$ t \mapsto \phi_\eps(t):= C_\Psi e^{-\delta t} + \eps e^{\delta t} \qquad \text{with $C_\Psi:= e^{\delta t_0} \|\Psi\|_{L^\infty(I)}$.} $$ By (\ref{eq:psi-diff-ineq}) and the definition of $\delta$, the function $v_\eps:= \phi_\eps -\Psi$ satisfies \begin{align*} v_\eps'' - \gamma v_\eps' + \frac{\nu_\sdiamond}{2} v_\eps \le (\delta^2 + \frac{\nu_\sdiamond}{2})\phi_\eps + \gamma \delta C_\Psi e^{-\delta t} - \gamma \delta \eps e^{\delta t} & \le (\delta^2 +|\gamma| \delta + \frac{\nu_\sdiamond}{2})\phi_\eps\\ &\le (\delta^2 + \delta + \frac{\nu_\sdiamond}{2})\phi_\eps = 0 \qquad \text{in $U_\Psi$.} \end{align*} Here we have used that $0 \le \gamma \le \gamma_\sdiamond < 1$ and that $\delta^2 + \delta = \frac{(1-2\nu_\sdiamond)-1}{4} = -\frac{\nu_\sdiamond}{2}$ by the choice of $\delta$. This implies that $v_\eps$ cannot attain a negative minimum in the set $(t_0,\infty)$. Moreover, by definition of $v_\eps$ we have $$ v_\eps(t_0) \ge 0 \qquad \text{and}\qquad \lim_{t \to \infty}v_\eps(t)= \infty. $$ Consequently, we have $v_\eps \ge 0$ and therefore $\Psi \le \phi_\eps$ on $[t_0,\infty)$. Replacing $\Psi$ by $-\Psi$ in the argument above, we find that $|\Psi| \le \phi_\eps$ on $[t_0,\infty)$.
By considering the limit $\eps \to 0$, we deduce that $$ |\Psi(t)| \le C_\Psi e^{-\delta t} = C \|\Psi\|_{L^\infty(I)} e^{-\delta t} \qquad \text{for $t \ge t_0$ with $C:= e^{\delta t_0}$.} $$ Since the same inequality obviously holds for $t \in [0,t_0)$, we conclude that $$ |\Psi(t)| \le C\|\Psi\|_{L^\infty(I)} e^{-\delta t} \qquad \text{for $t \ge 0$.} $$ Finally, using (\ref{eq:int-formular-eigenvalue}) and (\ref{eq:bounded-sol-decay-eqn}), we also get that $$ |\Psi'(t)| \le C\|\Psi\|_{L^\infty(I)} e^{-\delta t} \quad \text{and} \quad |\Psi''(t)| \le C\|\Psi\|_{L^\infty(I)} e^{-\delta t} \qquad \text{for $t \ge 0$} $$ after making $C>0$ larger if necessary. The proof is thus finished. \end{proof} \begin{proposition} \label{sec:preliminaries-limit-problem-3} For $\gamma \in [0,\frac{N-2}{N+\alpha_p})$, the eigenvalue problem~(\ref{eq:trans-weighted-eigenvalue-problem}) admits precisely $K$ negative eigenvalues $\nu_1(\gamma) < \nu_2(\gamma) < \dots< \nu_K(\gamma) < 0$ characterized variationally by \begin{equation} \label{var-char-nu-j} \nu_j(\gamma) = \inf_{\substack{W \subset H^1_0(I)\\ \dim W=j}} \max_{\Psi \in W\setminus \{0\}} \frac{\int_0^\infty e^{-\gamma t}\Psi'^2 - (p-1)e^{-t}|U_\gamma|^{p-2}\Psi^2 \, dt}{\int_0^\infty e^{-\gamma t} \Psi^2 \, dt} \qquad \text{for $j=1,\dots,K$.} \end{equation} \end{proposition} \begin{proof} Let $\gamma \in [0,\frac{N-2}{N+\alpha_p})$. We first show that \begin{equation} \label{eq:gamma-K-less-zero} \nu_K(\gamma)<0. \end{equation} For $\gamma>0$, this follows by Lemma~\ref{eigenvalue lemma}. Indeed, in (\ref{eq:var-char-mu-j}) we may, by density, replace $H^1_{0,rad}(\B)$ by the space of radial functions in $C^\infty_c(\B \setminus \{0\})$, and this space corresponds to the dense subspace $C^\infty_c(I) \subset H^1_0(I)$ after the transformation (\ref{eq:transformed-variables}). 
To show (\ref{eq:gamma-K-less-zero}) in the case $\gamma=0$, we use the auxiliary function $w:= U_0' -\frac{1}{p-2}U_0$, which, by direct computation, solves the linearized equation $-w'' -(p-1)e^{-t}|U_0|^{p-2}w = 0$ in $(0,\infty)$. It is clear that $w$ has a zero between any two zeros of $U_0$ on $[0,\infty)$. Moreover, letting $t_*>0$ denote the largest zero of $U_0$, we find that the numbers $$ w(t_*)=U_0'(t_*)\qquad \text{and}\qquad \lim_{t \to \infty}w(t)=-\frac{1}{p-2}\lim_{t \to \infty}U_0(t) $$ have opposite sign, hence $w$ also has a zero in $(t_*,\infty)$. Since $U_0$ has $K-1$ zeros in $(0,\infty)$ and $U_0(0)=0$, we infer that $w$ has at least $K$ zeros in $(0,\infty)$. From this, it is standard to deduce that $\nu_K(0)<0$. We thus have proved (\ref{eq:gamma-K-less-zero}). Next we note that eigenfunctions $\Psi$ of (\ref{eq:trans-weighted-eigenvalue-problem}) corresponding to an eigenvalue $\nu_j(\gamma)<0$ have precisely $j-1$ zeros in $I$. Indeed, this follows from standard Sturm-Liouville theory since any such eigenfunction decays exponentially as $t \to \infty$ together with its first and second derivatives by Lemma~\ref{bounded-sol-exp-decay}. It also follows that $\nu_j(\gamma)$ is simple in this case, i.e., the corresponding eigenspace is one-dimensional. In the case $\gamma>0$, the claim now follows from Lemma~\ref{eigenvalue lemma}, which guarantees that $\nu_1(\gamma),\dots,\nu_K(\gamma)$ are precisely the negative eigenvalues of (\ref{eq:trans-weighted-eigenvalue-problem-rewritten}). It remains to show that (\ref{eq:weighted-eigenvalue-translimit}) has precisely $K$ negative eigenvalues given by (\ref{var-char-nu-j}) in the case $\gamma=0$.
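For the reader's convenience, we record the direct computation behind the function $w$, which will also be used below. Differentiating the equation $-U_0'' = e^{-t}|U_0|^{p-2}U_0$ satisfied by $U_0$ gives $-U_0''' = e^{-t}|U_0|^{p-2}\bigl((p-1)U_0' - U_0\bigr)$, and therefore
\begin{align*}
-w'' - (p-1)e^{-t}|U_0|^{p-2} w &= -U_0''' + \frac{1}{p-2}U_0'' - (p-1)e^{-t}|U_0|^{p-2}\Bigl(U_0' - \frac{1}{p-2}U_0\Bigr)\\
&= e^{-t}|U_0|^{p-2}\Bigl((p-1)U_0' - U_0 - \frac{1}{p-2}U_0 - (p-1)U_0' + \frac{p-1}{p-2}U_0\Bigr) = 0.
\end{align*}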
Since the essential spectrum of the linearized operator $L_0: H^2(I) \cap H^1_0(I) \to L^2(I)$, $L_0 \Psi = -\Psi'' - (p-1)e^{-t}|U_0(t)|^{p-2}\Psi$ is given by $[0,\infty)$, standard compactness arguments show that $\nu_j(0)$ is an eigenvalue of (\ref{eq:weighted-eigenvalue-translimit}) whenever $\nu_j(0)<0$. Suppose by contradiction that $\nu_{K+1}(0)<0$, and let $v$ be a corresponding eigenfunction. Then $v$ has $K$ zeros in $(0,\infty)$, and $\lim \limits_{t \to \infty}v(t)=\lim \limits_{t \to \infty}v'(t)= 0$ by Lemma~\ref{bounded-sol-exp-decay}. By Sturm comparison, it then follows that $w$ has at least $K+1$ zeros in $(0,\infty)$. On the other hand, since $$ \Bigl(e^{-t}|U_0|^{p-2}+\frac{1}{(p-2)^2}\Bigr)U_0 =-U_0''+\frac{1}{(p-2)^2}U_0 = -w' - \frac{1}{p-2}w, $$ $U_0$ has a zero between any two zeros of $w$. This contradicts the fact that $U_0$ has precisely $K-1$ zeros in $(0,\infty)$. We thus conclude that (\ref{eq:weighted-eigenvalue-translimit}) admits precisely $K$ negative eigenvalues given by (\ref{var-char-nu-j}) in the case $\gamma=0$. \end{proof} We may now deduce the continuous dependence of the negative eigenvalues of (\ref{eq:trans-weighted-eigenvalue-problem}). \begin{lemma} \label{eq:gamma-sufficient} For $j=1,\dots,K$, the function $\nu_j: [0,\frac{N-2}{N+\alpha_p}) \to (-\infty,0)$ is continuous. \end{lemma} \begin{proof} Let $\gamma_0 \in [0,\frac{N-2}{N+\alpha_p})$, and let $(\gamma_n)_n \subset [0,\frac{N-2}{N+\alpha_p})$ be a sequence with $\gamma_n \to \gamma_0$. Recall that $U_{\gamma_n} \to U_{\gamma_0}$ uniformly on $[0,\infty)$ as $n \to \infty$ by Proposition~\ref{implicit function for U-gamma}. We fix $j \in \{1,\ldots,K\}$ and consider the space $W \subset H^1_0(I)$ spanned by the first $j$ eigenfunctions of (\ref{eq:trans-weighted-eigenvalue-problem}) in the case $\gamma= \gamma_0$. Moreover, we let $\cM:= \{\Psi \in W\::\: \int_0^\infty \Psi^2dt =1\}$.
Since $\nu_j(\gamma_0)<0$, $\cM$ is a compact subset of $C^2_\delta(I)$ for some $\delta>0$ by Lemma~\ref{bounded-sol-exp-decay}. From this we deduce that \begin{align*} &\int_0^\infty \Bigl( e^{-\gamma_n t}{\Psi'}^2 - (p-1)e^{-t}|U_{\gamma_n}|^{p-2}\Psi^2 \Bigr)\, dt \;\to \; \int_0^\infty \Bigl(e^{-\gamma_0 t} {\Psi'}^2 - (p-1)e^{-t}|U_{\gamma_0}|^{p-2}\Psi^2\Bigr) \, dt \quad \text{and}\\ &\int_0^\infty e^{-\gamma_n t} \Psi^2 \, dt \;\to\; \int_0^\infty e^{-\gamma_0 t}\Psi^2 \, dt \qquad \text{as $n \to \infty$ uniformly in $\Psi \in \cM$,} \end{align*} and this implies that \begin{align*} \limsup_{n \to \infty} \nu_j(\gamma_n) &\leq \limsup_{n \to \infty} \max_{\Psi \in \cM} \frac{\int_0^\infty \Bigl( e^{-\gamma_n t}{\Psi'}^2 - (p-1)e^{-t}|U_{\gamma_n}|^{p-2}\Psi^2\Bigr) \, dt}{\int_0^\infty e^{-\gamma_n t} \Psi^2 \, dt}\\ &= \max_{\Psi \in \cM} \frac{\int_0^\infty \Bigl( e^{-\gamma_0 t} {\Psi'}^2 - (p-1)e^{-t}|U_{\gamma_0}|^{p-2}\Psi^2\Bigr) \, dt}{\int_0^\infty e^{-\gamma_0 t}\Psi^2 \, dt} = \nu_j(\gamma_0) . \end{align*} To show that $\liminf \limits_{n \to \infty} \nu_j(\gamma_n) \ge \nu_j(\gamma_0)$, we argue by contradiction and assume that, after passing to a subsequence, we have \begin{equation} \label{lower eigenvalue convergence} \nu_j(\gamma_n) \to \sigma_j < \nu_j(\gamma_0) . \end{equation} Passing again to a subsequence, we may then also assume that \begin{equation} \label{lower eigenvalue convergence-1} \nu_k(\gamma_n) \to \sigma_k \le \sigma_j <0 \qquad \text{for $k=1,\dots,j$.} \end{equation} Let, for $k=1,\dots,j$, the function $\Psi_{k,n}$ denote an eigenfunction of (\ref{eq:trans-weighted-eigenvalue-problem}) corresponding to the eigenvalue $\nu_k(\gamma_n)$ such that $\|\Psi_{k,n}\|_{L^\infty(I)}=1$.
Since eigenfunctions corresponding to different eigenvalues are orthogonal with respect to the weighted scalar product $(v,w) \mapsto \int_{0}^\infty e^{-\gamma_n t} vw\,dt$, we may assume that \begin{equation} \label{eq:proof-orthogonal} \int_{0}^\infty e^{-\gamma_n t}\Psi_{k,n} \Psi_{\ell,n}\,dt = 0 \qquad \text{for $k,\ell \in \{1,\dots,j\}$, $k \not = \ell.$} \end{equation} By Lemma~\ref{bounded-sol-exp-decay} and (\ref{lower eigenvalue convergence-1}), there exists $\delta>0$ such that $\|\Psi_{k,n}\|_{C^2_\delta} \le C$ for all $n \in \N$, $k \in \{1,\dots,j\}$. By Lemma~\ref{compactness-C-delta-spaces}, we may therefore pass to a subsequence again such that $$ \Psi_{k,n} \to \Psi_{k} \qquad \text{uniformly in $I$,} $$ where $\Psi_k \in C^2_\delta(I)$ is a solution of \begin{equation} \label{eq:limit-eq-proof} -(e^{-\gamma_0 t}\Psi_k')' - (p-1)e^{-t}|U_{\gamma_0}(t)|^{p-2}\Psi_k= \sigma_k e^{-\gamma_0 t}\Psi_k\qquad \text{in $I$},\qquad \Psi_k(0)=0 \end{equation} for $k = 1,\dots,j$. Moreover, since the sequences $(\Psi_{k,n})_n$, $k = 1,\dots,j$ are uniformly bounded in $C^2_\delta(I)$, we may pass to the limit in (\ref{eq:proof-orthogonal}) to get that \begin{equation} \label{eq:proof-orthogonal-1} \int_{0}^\infty e^{-\gamma_0 t} \Psi_{k} \Psi_{\ell}\,dt = 0 \qquad \text{for $k,\ell \in \{1,\dots,j\}$, $k \not = \ell.$} \end{equation} Consequently, for $\gamma=\gamma_0$, the problem (\ref{eq:trans-weighted-eigenvalue-problem}) has $j$ eigenvalues $\sigma_1,\dots,\sigma_j$ (counted with multiplicity) in $(-\infty,\nu_j(\gamma_0))$. This contradicts Proposition~\ref{sec:preliminaries-limit-problem-3}. The proof is finished. \end{proof} Next, we wish to derive some information on the derivative $\partial_\gamma \nu_j(\gamma)$ of the negative eigenvalues of (\ref{eq:trans-weighted-eigenvalue-problem}) as $\gamma \to 0^+$.
We intend to derive this information via the implicit function theorem applied to the map $G: \left(-\eps_0, \frac{N-2}{N+\alpha_p}\right) \times \tilde E \times \R \to \tilde F \times \R$ defined by \begin{equation} \label{def-G-implicit} G(\gamma, \Psi, \nu)= \begin{pmatrix} -\Psi''+\gamma \Psi' - (p-1)e^{(\gamma-1)t}|U_{\gamma}|^{p-2}\Psi - \nu \Psi \\ \int_0^\infty \Psi^2 \, dt -1 \end{pmatrix}. \end{equation} Here, $\eps_0$ is given in Proposition~\ref{implicit function for U-gamma}, so that $(-\eps_0,\frac{N-2}{N+\alpha_p}) \to C^1_0(I)$, $\gamma \mapsto U_\gamma$ is a well-defined $C^1$-map by Remark~\ref{gamma-negative-definition}. Moreover, $\tilde E$ and $\tilde F$ are suitable spaces of functions on $I$ chosen in such a way that eigenfunctions and eigenvalues of \eqref{eq:trans-weighted-eigenvalue-problem-rewritten} and \eqref{eq:weighted-eigenvalue-translimit} correspond to zeros of this map. However, in the case $p \in (2,3]$, the function $|\cdot|^{p-2}$ is not differentiable at zero and therefore it is not a priori clear how $\tilde E$ and $\tilde F$ need to be chosen to guarantee that $G$ is of class $C^1$. In particular, spaces of continuous functions will not work in this case, so we need to introduce different function spaces. For $\delta>0$ and $1 \le r < \infty$, we let $L^r_\delta(I)$ denote the space of all functions $f \in L^r_{loc}(I)$ such that $$ \|f\|_{r,\delta} := \sup_{t \ge 0}e^{\delta t}[f]_{t,r} < \infty,\qquad \text{where}\quad [f]_{t,r} : = \Bigl(\int_{t}^{t+1}|f(s)|^r\,ds\Bigr)^{\frac{1}{r}}= \|f\|_{L^r(t,t+1)}. $$ The completeness of $L^r$-spaces readily implies that the spaces $L^r_\delta(I)$ are also Banach spaces. We will need the following observation: \begin{lemma} \label{sec:case-2p3-1-lemma} Let $\delta>0$ and $f \in L^1_\delta(I)$.
Then we have \begin{equation} \int_{t}^\infty e^{\mu s} |f(s)|\,ds \le C_{\mu,\delta}\|f\|_{1,\delta}\: e^{(\mu-\delta) t} \qquad \text{for $\mu < \delta$, $t \ge 0$ with $C_{\mu,\delta}:= \frac{\max\{1,e^{\mu}\}}{1-e^{\mu-\delta}}$} \label{eq:L-1-delta-est} \end{equation} and \begin{equation} \int_{0}^t e^{\mu s} |f(s)|\,ds \le D_{\delta,\mu} \|f\|_{1,\delta}\: e^{(\mu-\delta)t}\qquad \text{for $\mu> \delta$, $t \ge 0$ with $D_{\delta,\mu}:= \frac{e^{2\mu-\delta}}{e^{\mu-\delta}-1}$.} \label{eq:L-1-delta-est-1} \end{equation} \end{lemma} \begin{proof} Let $f \in L^1_\delta(I)$ and $t \ge 0$. If $\mu<\delta$, we have \begin{align} &\int_{t}^\infty e^{\mu s} |f(s)|\,ds = \sum_{\ell= 0}^\infty \int_{t+\ell}^{t+\ell+ 1}e^{\mu s} |f(s)|\,ds \le \max\{1,e^\mu\} \sum_{\ell= 0}^\infty e^{\mu (t+\ell)}[f]_{t+\ell,1}\nonumber\\ &\le \max\{1,e^\mu\}\|f\|_{1,\delta} \sum_{\ell= 0}^\infty e^{(\mu-\delta)(t+ \ell)}\nonumber = C_{\mu,\delta}\|f\|_{1,\delta}\, e^{(\mu-\delta) t},\nonumber \end{align} and in the case $\mu>\delta$ we have \begin{align} &\int_{0}^t e^{\mu s} |f(s)|\,ds \le \sum_{\ell= 0}^{\lfloor t \rfloor} \int_{\ell}^{\ell+ 1}e^{\mu s} |f(s)|\,ds \le \sum_{\ell= 0}^{\lfloor t \rfloor} e^{\mu (\ell+1)}[f]_{\ell,1}\nonumber\\ &\le e^\mu \|f\|_{1,\delta} \sum_{\ell= 0}^{\lfloor t \rfloor} e^{(\mu-\delta)\ell}= e^\mu \|f\|_{1,\delta} \frac{e^{(\mu-\delta)(\lfloor t \rfloor+1)}-1}{e^{\mu-\delta}-1} \le D_{\delta,\mu} \|f\|_{1,\delta}\, e^{(\mu-\delta)t} \nonumber \end{align} with $C_{\mu,\delta}$ and $D_{\delta,\mu}$ given above. 
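We remark in passing that (\ref{eq:L-1-delta-est}) is sharp in the case $\mu = 0$: for $f(t)= e^{-\delta t}$ we have $[f]_{t,1} = \frac{1-e^{-\delta}}{\delta}\,e^{-\delta t}$ and hence $\|f\|_{1,\delta} = \frac{1-e^{-\delta}}{\delta}$, while
$$
\int_t^\infty |f(s)|\,ds = \frac{e^{-\delta t}}{\delta} = C_{0,\delta}\,\|f\|_{1,\delta}\, e^{-\delta t} \qquad \text{for $t \ge 0$,}
$$
so that equality holds in (\ref{eq:L-1-delta-est}).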
\end{proof} Next, for $\delta>0$, we define the function space $$ W^{2}_{\delta}(I) := \left\{ u \in C_\delta^{1}(\overline I) \cap W^{2,1}_{loc}(\ov I)\:: \: u(0)= 0,\: u'' \in L^1_\delta(I) \right\} $$ and endow this space with the norm $$ \|u\|_{W^{2}_\delta} := \|u\|_{C_\delta^1} + \|u''\|_{1,\delta}. $$ We first note that $$ u'(t)= -\int_{t}^\infty u''(s)\,ds \qquad \text{for $u \in W^{2}_\delta(I)$ and $t \ge 0$.} $$ \begin{lemma} \label{sec:case-2p3-2-W-banach} $W^{2}_{\delta}(I)$ is a Banach space. \end{lemma} \begin{proof} Consider a Cauchy sequence $(u_n)_n$ in $W^2_\delta(I)$. Then, by the completeness of $C_\delta^{1}(\overline I)$ and $L^1_\delta(I)$, there exist $u \in C_\delta^{1}(\overline I)$ and $v \in L^1_\delta(I)$ with \begin{equation} \label{eq:Banach-space-W-0} u_n \to u \quad \text{in $C_\delta^{1}(\overline I)$}\qquad \text{and}\qquad u_n'' \to v \quad \text{in $L^1_\delta(I)$.} \end{equation} Moreover, we have \begin{equation} \label{eq:Banach-space-W} u'(t)=\lim_{n \to \infty}u_n'(t) = -\lim_{n \to \infty}\int_{t}^\infty u_n''(s)ds=- \int_{t}^\infty v(s)\,ds \qquad \text{for all $t >0$,} \end{equation} since $$ \int_{t}^\infty |u_n''(s)-v(s)|\,ds \le C_{0,\delta}\|u_n'' - v\|_{1,\delta}\, e^{-\delta t} \to 0 \qquad \text{as $n \to \infty$} $$ by (\ref{eq:L-1-delta-est}). From (\ref{eq:Banach-space-W}) we deduce that $u'' = v \in L^1_\delta(I)$ in the weak sense. Then it follows from (\ref{eq:Banach-space-W-0}) that $u_n \to u$ in $W^2_\delta(I)$. \end{proof} The following simple lemma is essential. \begin{lemma} \label{isomorphism-p-2-3} Let $\delta, \gamma, \mu\ge 0$ satisfy $\delta <\sqrt{\frac{\gamma^2}{4} + \mu^2} -\frac{\gamma}{2}$. Then the map $T: W^{2}_\delta(I) \to L^1_\delta(I)$, $T\Psi = -\Psi'' + \gamma \Psi' + \mu^2 \Psi$ is an isomorphism. \end{lemma} \begin{proof} Let $\lambda:= \sqrt{\frac{\gamma^2}{4} + \mu^2}$. Any solution of the equation $-\Psi'' + \gamma \Psi' +\mu^2 \Psi=0$ is given by $\Psi(t)=Ae^{(\frac{\gamma}{2}-\lambda) t} + B e^{(\frac{\gamma}{2} + \lambda) t}$ with suitable $A,B \in \R$. If $\Psi \in W^2_\delta(I)$, then $\Psi$ is bounded and therefore $B=0$.
Moreover, $A=0$ since $\Psi(0)=0$, and therefore $\Psi \equiv 0$. Hence $T$ has zero kernel. For $g \in L^1_\delta(I)$, a solution of $-\Psi'' + \gamma \Psi' + \mu^2 \Psi = g$ is given, as direct differentiation shows, by $$ \Psi(t)=\frac{1}{2 \lambda}e^{(\frac{\gamma}{2}+\lambda) t}\int_t^\infty e^{-(\frac{\gamma}{2}+\lambda) s}g(s) \, ds + \frac{1}{2 \lambda}e^{ (\frac{\gamma}{2}-\lambda) t}\int_0^t e^{(-\frac{\gamma}{2}+\lambda) s}g(s) \, ds . $$ By (\ref{eq:L-1-delta-est}) and (\ref{eq:L-1-delta-est-1}), we have \begin{align*} \Bigl| e^{(\frac{\gamma}{2}+\lambda) t}\int_t^\infty e^{-(\frac{\gamma}{2}+\lambda) s}g(s) \, ds \Bigr| & \leq C_{-(\frac{\gamma}{2}+\lambda),\delta} \|g\|_{1,\delta}\: e^{-\delta t} , \\ \Bigl| e^{ (\frac{\gamma}{2}-\lambda) t}\int_0^t e^{(-\frac{\gamma}{2}+\lambda) s}g(s) \, ds \Bigr| & \leq D_{\delta,-\frac{\gamma}{2}+\lambda} \|g\|_{1,\delta}\: e^{-\delta t} \end{align*} for $t \ge 0$. Hence $\Psi \in C_\delta(I)$. Since \begin{equation} \label{eq:psi-prim-rep} \Psi'(t)= \frac{\frac{\gamma}{2}+\lambda}{2 \lambda}e^{(\frac{\gamma}{2}+\lambda) t}\int_t^\infty e^{-(\frac{\gamma}{2}+\lambda) s}g(s) \, ds + \frac{\frac{\gamma}{2}-\lambda}{2 \lambda}e^{ (\frac{\gamma}{2}-\lambda) t}\int_0^t e^{(-\frac{\gamma}{2}+\lambda) s}g(s) \, ds \end{equation} it also follows that $\Psi' \in C_\delta(I)$. Additionally, we have $\Psi''=\mu^2 \Psi + \gamma \Psi' -g \in L^1_\delta$. By adding a multiple of the function $t \mapsto e^{(\frac{\gamma}{2}-\lambda) t}$, we can ensure that $\Psi(0)=0$ and therefore $\Psi \in W^2_\delta(I)$. We conclude that $T$ is an isomorphism. \end{proof} From now on, we fix $\gamma_\sdiamond \in (0,\frac{N-2}{N+\alpha_p})$. By Proposition~\ref{sec:preliminaries-limit-problem-3} and Lemma~\ref{eq:gamma-sufficient}, we have \begin{equation} \label{eq:def-gamma-max} \nu_\sdiamond:= \sup_{0 \le \gamma \le \gamma_\sdiamond}\nu_K(\gamma)<0.
\end{equation} Moreover, we fix \begin{equation} \label{eq:fix-delta-spectral} \delta := \min \left\{\frac{\sqrt{1-2\nu_\sdiamond}-1}{2}\:,\:\frac{1}{2}\Bigl(\sqrt{\frac{\gamma_\sdiamond^2}{4} - \nu_\sdiamond} -\frac{\gamma_\sdiamond}{2}\Bigr)\:,\:\frac{2}{N}\right\} \end{equation} for the remainder of this section. By Lemma~\ref{bounded-sol-exp-decay} and since $\delta \le \frac{1}{2}\bigl(\sqrt{1-2\nu_\sdiamond}-1\bigr)$, there exists $C>0$ such that \begin{equation} \label{eq:fix-delta-spectral-consequence} \|\Psi\|_{C^2_\delta(I)} \le C \|\Psi\|_{L^\infty(I)} \end{equation} for every eigenfunction of (\ref{eq:trans-weighted-eigenvalue-problem-rewritten}) corresponding to $\gamma \in [0,\gamma_\sdiamond]$ and $\nu = \nu_j(\gamma)$, $j=1,\dots,K$. We consider the spaces $E_\delta:= W^{2}_\delta(I)$ and $F_\delta := L^1_\delta(I)$. The key observation of this section is the following. \begin{proposition} \label{G-differentiable-p-2-3} Let $\eps_0>0$ be given by Proposition~\ref{implicit function for U-gamma}, so that $(-\eps_0,\gamma_\sdiamond) \to C^1_0(I)$, $\gamma \mapsto U_\gamma$ is a well-defined $C^1$-map by Remark~\ref{gamma-negative-definition}. Moreover, let the map $$ G: \left(-\eps_0,\gamma_\sdiamond \right) \times {E_\delta} \times \R \to {F_\delta} \times \R $$ be defined by (\ref{def-G-implicit}). Then $G$ is of class $C^1$ with \begin{align*} &\partial_{\gamma}G(\gamma,\Psi,\nu) = \begin{pmatrix} \Psi' - (p-1)e^{(\gamma-1)t}|U_\gamma|^{p-2} \Bigl(t + (p-2)\frac{U_\gamma \partial_\gamma U_\gamma}{|U_\gamma|^2}\Bigr)\Psi\\ 0 \end{pmatrix}, \quad \partial_{\nu}G(\gamma,\Psi,\nu)= \begin{pmatrix} -\Psi \\ 0 \end{pmatrix}\\ &\text{and}\qquad d_{\Psi}G(\gamma,\Psi,\nu)\phi = \begin{pmatrix} -\phi'' +\gamma \phi' - (p-1)e^{(\gamma-1)t}|U_\gamma|^{p-2}\phi - \nu \phi \\ \int_0^\infty \Psi \phi \, dt \end{pmatrix} \quad \text{in $F_\delta \times \R$} \end{align*} for $\phi \in E_\delta$.
\end{proposition} We postpone the somewhat lengthy proof of this proposition to the next section and continue the main argument first. We fix $j \in \{1, \ldots, K\}$ and for $\gamma \ge 0$ we let $\Psi_{\gamma,j}$ denote an eigenfunction of the eigenvalue problem \eqref{eq:trans-weighted-eigenvalue-problem-rewritten} corresponding to the eigenvalue $\nu_j(\gamma)$. We thus have $$ -\Psi_{\gamma,j}'' + \gamma \Psi_{\gamma,j}' - (p-1)e^{(\gamma-1)t}|U_\gamma(t)|^{p-2}\Psi_{\gamma,j}= \nu_j(\gamma) \Psi_{\gamma,j} \; \text{in $[0,\infty)$,} \quad \Psi_{\gamma,j}(0)=0,\: \Psi_{\gamma,j} \in L^\infty(I). $$ By (\ref{eq:fix-delta-spectral-consequence}) we have $\Psi_{\gamma,j} \in {E_\delta}$. Moreover, we can assume $\int_0^\infty \Psi_{\gamma,j}^2 \, dt=1$ so that $$ G(\gamma,\Psi_{\gamma,j},\nu_j(\gamma))=0. $$ To apply the implicit function theorem to $G$ at the point $(\gamma,\Psi_{\gamma,j},\nu_j(\gamma))$, we need the following property. \begin{proposition} \label{deriv-isomorphism} Let $\gamma \in [0,\gamma_\sdiamond]$. Then the map \begin{align*} & L:=d_{\Psi,\nu}G(\gamma,\Psi_{\gamma,j},\nu_j(\gamma)): {E_\delta} \times \R \to {F_\delta} \times \R \\ & (\phi,\rho) \mapsto \begin{pmatrix} -\phi'' + \gamma \phi' - (p-1)e^{(\gamma-1)t}|U_\gamma|^{p-2}\phi - \nu_j(\gamma) \phi - \rho \Psi_{\gamma,j} \\ \int_0^\infty \Psi_{\gamma,j} \phi \, dt \end{pmatrix} \end{align*} is an isomorphism. \end{proposition} \begin{proof} Since, by definition, $$ \delta < \sqrt{\frac{\gamma_\sdiamond^2}{4} -\nu_\sdiamond} -\frac{\gamma_\sdiamond}{2} \le \sqrt{\frac{\gamma^2}{4} -\nu_j(\gamma)} -\frac{\gamma}{2}, $$ we may apply Lemma~\ref{isomorphism-p-2-3} with $\mu = \sqrt{-\nu_j(\gamma)}$. Hence the map ${E_\delta} \to {F_\delta}$, $\phi \mapsto -\phi'' + \gamma \phi' - \nu_j(\gamma) \phi$ is an isomorphism. 
Since the linear map ${E_\delta} \to {F_\delta}$, $\phi \mapsto (p-1)e^{(\gamma-1)t}|U_\gamma|^{p-2}\phi$ is compact, the map \begin{align*} T: {E_\delta} &\to {F_\delta} \\ \phi & \mapsto -\phi'' + \gamma \phi' - (p-1)e^{(\gamma-1)t}|U_\gamma|^{p-2}\phi - \nu_j(\gamma) \phi \end{align*} is a Fredholm operator of index zero. The kernel of this map is one-dimensional, since it consists of eigenfunctions corresponding to $\nu_j(\gamma)$. Hence the codimension of the image of $T$ is one, and we claim that $\Psi_{\gamma,j}$ is not contained in the image of $T$. Otherwise, there exists $\phi \in {E_\delta}$ such that $-\phi'' + \gamma \phi' - (p-1)e^{(\gamma-1)t}|U_\gamma|^{p-2}\phi - \nu_j(\gamma) \phi=\Psi_{\gamma,j}$. Multiplying with $\Psi_{\gamma,j}$ and integrating by parts then yields \begin{align*} 0 < \int_0^\infty e^{-\gamma t}\Psi_{\gamma,j}^2 \, dt &= \int_0^\infty (-(e^{-\gamma t}\phi')' - (p-1)e^{-t}|U_\gamma(t)|^{p-2}\phi- \nu_j(\gamma) e^{-\gamma t} \phi)\Psi_{\gamma,j} \, dt \\ &= \int_0^\infty(-(e^{-\gamma t}\Psi_{\gamma,j}')' - (p-1)e^{-t}|U_\gamma(t)|^{p-2}\Psi_{\gamma,j} - \nu_j(\gamma) e^{-\gamma t}\Psi_{\gamma,j}) \phi \, dt =0, \end{align*} a contradiction. It follows that \begin{equation} \label{decomposition} {F_\delta}=\textrm{range}\, T \oplus \text{span} \{\Psi_{\gamma,j}\} . \end{equation} We now show that $L$ is an isomorphism. First assume $L(\phi,\rho)=0$ for some $(\phi,\rho) \in {E_\delta} \times \R$, i.e., $$ -\phi'' + \gamma \phi' - (p-1)e^{(\gamma-1)t}|U_\gamma|^{p-2}\phi - \nu_j(\gamma) \phi = \rho\Psi_{\gamma,j} \quad \text{in ${F_\delta}$} \qquad \text{and}\qquad \int_0^\infty\Psi_{\gamma,j} \phi \, dt =0. $$ Since $\Psi_{\gamma,j} \not \in \textrm{range}\, T $, the first equality yields $\rho=0$. But then $\phi$ itself is an eigenfunction and therefore $\phi = c\Psi_{\gamma,j}$ for some $c \in \R$. The second equality then yields $c=0$, and thus $(\phi,\rho)=(0,0)$. Hence $L$ is injective. Now let $(g, \sigma) \in {F_\delta} \times \R$.
By \eqref{decomposition} there exist $g_0 \in \textrm{range}\, T$, $\kappa \in \R$ such that $g=g_0 + \kappa\Psi_{\gamma,j}$. Since $g_0 \in \textrm{range}\, T$, there exists a solution $\phi_0 \in {E_\delta}$ of $$ -\phi'' + \gamma \phi' - (p-1)e^{(\gamma-1)t}|U_\gamma|^{p-2}\phi - \nu_j(\gamma) \phi = g_0 \qquad \text{in $I$.} $$ Furthermore, for any $\eta \in \R$, $\phi_0 + \eta \Psi_{\gamma,j} \in {E_\delta}$ is also a solution. Taking $\eta=\sigma-\int_0^\infty\Psi_{\gamma,j} \phi_0 \,dt$ yields $$ \int_0^\infty\Psi_{\gamma,j} (\phi_0+\eta\Psi_{\gamma,j}) \, dt=\sigma . $$ Consequently, we have $$ L(\phi_0+\eta\Psi_{\gamma,j}, -\kappa) = {g \choose \sigma}. $$ Hence $L$ is surjective. \end{proof} With the help of Propositions \ref{G-differentiable-p-2-3} and \ref{deriv-isomorphism}, we may now apply the implicit function theorem to $G$ at $(\gamma,\Psi_{\gamma,j},\nu_j(\gamma))$. This yields the following result. \begin{corollary} \label{implicit function-spectral-asymptotics} There exist $\eps_1 \in (0,\eps_0)$ and, for $j=1,\dots,K$, $C^1$-maps $h_j: (-\eps_1,\gamma_\sdiamond) \to \R$ with the property that \begin{equation} \label{eq:g-j-2-equality} h_j(\gamma) = \nu_j(\gamma) \qquad \text{for $j = 1,\dots,K$, $\gamma \in [0,\gamma_\sdiamond)$} \end{equation} and \begin{equation} \label{eq:limit-der-expression-gamma} h_j'(0)= -(p-1) \int_0^\infty \left( t e^{-t} |U_0|^{p-2}\Psi_{0,j}^2 + (p-2) e^{-t} |U_0|^{p-4} U_0 (\del_\gamma \big|_{\gamma=0}\, U_\gamma)\Psi_{0,j}^2 \right) \, dt \end{equation} for $j=1,\dots,K$. \end{corollary} \begin{proof} By Propositions \ref{G-differentiable-p-2-3}, \ref{deriv-isomorphism} and the implicit function theorem applied to the map $G$ at $(0,\Psi_{0,j},\nu_j(0))$, there exist $\eps_1 \in (0,\eps_0)$ and $C^1$-maps $g_j: (-\eps_1,\eps_1) \to E_\delta \times \R$ with the property that $g_j(0)=(\Psi_{0,j},\nu_j(0))$ and $G(\gamma,g_j(\gamma))=0$ for $\gamma \in (-\eps_1,\eps_1)$.
Let $h_j$ denote the second component of $g_j$. Since $$ \nu_1(0) = h_1(0) < \nu_2(0) = h_2(0) < \dots < \nu_K(0) = h_K(0)< 0, $$ we may, after making $\eps_1$ smaller if necessary, assume that also $$ h_1(\gamma) < h_2(\gamma) < \dots < h_K(\gamma)<0 \qquad \text{for $\gamma \in (0,\eps_1)$.} $$ Since, by construction, the values $h_j(\gamma)$ are eigenvalues of (\ref{eq:trans-weighted-eigenvalue-problem}) and the negative eigenvalues of (\ref{eq:trans-weighted-eigenvalue-problem}) are precisely given by (\ref{var-char-nu-j}), the equality (\ref{eq:g-j-2-equality}) follows for $\gamma \in (0,\eps_1)$. Using Propositions \ref{G-differentiable-p-2-3}, \ref{deriv-isomorphism} and applying the implicit function theorem at $(\gamma,\Psi_{\gamma,j},\nu_j(\gamma))$, the functions $h_j$ may be extended as $C^1$-functions to $(-\eps_1,\gamma_\sdiamond)$ such that (\ref{eq:g-j-2-equality}) holds for $\gamma \in [0,\gamma_\sdiamond)$. Moreover, (\ref{eq:limit-der-expression-gamma}) is a consequence of implicit differentiation of the equation $G(\gamma,g_j(\gamma))=0$. \end{proof} We may now complete the \begin{proof}[Proof of Theorem~\ref{spectral-curves}] We first note that -- since $U_0:=(-1)^{K-1}U_\infty$ -- the eigenvalue problem (\ref{eq:weighted-eigenvalue-translimit-preliminaries}) coincides with (\ref{eq:weighted-eigenvalue-translimit}), and it has precisely $K$ negative eigenvalues $\nu_j^*:= \nu_j(0)$, $j=1,\dots,K$ by Proposition~\ref{sec:preliminaries-limit-problem-3}. To prove the expansions~(\ref{expansions}), we fix $j \in \{1,\dots,K\}$. 
By Remark \ref{introduction-remark-1} and Corollary \ref{implicit function-spectral-asymptotics}, the constant $c_j^*$ appearing in \eqref{expansions} is given by $c_j^* = 2 N \nu_j^* + (N-2) h_j'(0).$ Now Corollary \ref{implicit function-spectral-asymptotics} yields the expansions \begin{equation} \label{eigenvalue: Taylor expansion} \nu_j(\gamma) = \nu_j^* + \gamma h_j'(0) + o(\gamma)\qquad \text{and}\qquad \partial_\gamma \nu_j(\gamma) = h_j'(0) + o(1) \qquad \text{as $\gamma \to 0^+$.} \end{equation} Writing $\gamma= \gamma(\alpha)= \frac{N-2}{N+\alpha}$ as before and recalling \eqref{eq:transformed-variables}, we thus have \begin{align*} \mu_j(\alpha) & = (N+\alpha)^2 \nu_j(\gamma(\alpha)) = (N+\alpha)^2 \left( \nu_j^* + \frac{N-2}{N+\alpha} h_j'(0) + o\left(\frac{1}{\alpha}\right)\right)\\ & = \nu_j^* \,\alpha^2 + \bigl[2N \nu_j^* + (N-2) h_j'(0)\bigr]\alpha + o(\alpha) = \nu_j^* \,\alpha^2 + c_j^*\,\alpha + o(\alpha) \end{align*} and \begin{align*} \mu_j'(\alpha) & = 2(N+\alpha) \nu_j(\gamma(\alpha)) - (N-2) [\del_\gamma \nu_j](\gamma(\alpha)) \\ & = 2(N+\alpha) \left(\nu_j^* + \frac{N-2}{N+\alpha} h_j'(0) + o\left(\frac{1}{\alpha} \right) \right) - (N-2)(h_j'(0) + o(1)) \\ & = 2 \nu_j^* \,\alpha + 2 N \nu_j^* + (N-2) h_j'(0) + o(1) = 2 \nu_j^*\, \alpha + c_j^* + o(1)\quad \text{as $\alpha \to \infty$.} \end{align*} \end{proof} We may also complete the \begin{proof}[Proof of Theorem~\ref{corollary-on-eigenvalue-curves}] By Theorem~\ref{spectral-curves} we have $$ \mu_{i}'(\alpha) = 2 \alpha \nu^*_i + c^*_i +o(1) \qquad \text{as $\alpha \to \infty$} $$ for $i=1,\dots,K$. Since the values $\nu^*_i$ are negative, we may thus fix $\alpha_*> 0$ such that \begin{equation} \label{eq:strict-deriv-ineq} \mu_{i}'(\alpha)< 0 \qquad \text{for $\alpha \ge \alpha_*$, $i=1,\dots,K$.} \end{equation} We now fix $i \in \{1,\dots,K\}$. 
Then there exists a minimal positive integer $\ell_i$ such that $$ \mu_{i}(\alpha_*)+ \lambda_\ell > 0 \qquad \text{for $\ell \ge \ell_i$.} $$ Moreover, since $\mu_{i}(\alpha) \to -\infty$ as $\alpha \to \infty$ by Theorem~\ref{spectral-curves}, there exists, for every $\ell \ge \ell_i$, precisely one value $\alpha_{i,\ell} \in (\alpha_*,\infty)$ such that $$ \mu_{i}(\alpha_{i,\ell})+ \lambda_\ell = 0. $$ Fix such a value $\alpha_{i,\ell}$ and put $\delta_{i,\ell}= \alpha_{i,\ell}-\alpha_*$. Since the curves $\alpha \mapsto \mu_j(\alpha)$, $j=1,\dots,K$, are bounded on the interval $[\alpha_*,\alpha_{i,\ell}+\delta_{i,\ell}]$, it follows that the set $$ N_{i,\ell}:= \left \{ \begin{aligned} &(j,\ell') \in \{1,\dots,K\} \times (\N \cup \{0\}) \::\\ &\mu_{j}(\alpha)+ \lambda_{\ell'}= 0 \; \text{for some $\alpha \in [\alpha_*,\alpha_{i,\ell}+\delta_{i,\ell}]$} \end{aligned} \right \} $$ is finite. Combining this fact with (\ref{eq:strict-deriv-ineq}), we find $\eps_{i,\ell} \in (0,\delta_{i,\ell})$ such that $$ \mu_{j}(\alpha)+\lambda_{\ell'} \not = 0 \qquad \text{for $\alpha \in (\alpha_{i,\ell}-\eps_{i,\ell},\alpha_{i,\ell}+\eps_{i,\ell}) \setminus \{\alpha_{i,\ell}\}$, $j=1,\dots,K$ and $\ell' \in \N \cup \{0\}$.} $$ From Proposition~\ref{spectral-curves-0}, it then follows that $u_\alpha$ is nondegenerate for $\alpha \in (\alpha_{i,\ell}-\eps_{i,\ell},\alpha_{i,\ell}+\eps_{i,\ell})$, $\alpha \not = \alpha_{i,\ell}$. Finally, it also follows from Proposition~\ref{spectral-curves-0} and (\ref{eq:strict-deriv-ineq}) that $$ m(u_{\alpha_{i,\ell}+\eps})-m(u_{\alpha_{i,\ell}-\eps}) = \sum_{(j,\ell') \in M_{i,\ell}} d_{\ell'} >0 \qquad \text{for $\eps \in (0,\eps_{i,\ell})$,} $$ where $M_{i,\ell} \subset \{1,\dots,K\} \times (\N \cup \{0\})$ is the set of pairs $(j,\ell')$ with $\mu_{j}(\alpha_{i,\ell})+ \lambda_{\ell'}= 0$ and, as before, $d_{\ell'}$ is the dimension of the space of spherical harmonics of degree $\ell'$. 
Here we note that $M_{i,\ell} \not = \varnothing$ since it contains $(i,\ell)$. \end{proof} \section{Differentiability of the map $G$} \label{sec:differentiability-g} In this section, we give the proof of Proposition~\ref{G-differentiable-p-2-3}, which we restate here in a slightly more general form. As before, we fix $p>2$ and $\gamma_\sdiamond \in [0,\frac{N-2}{N+\alpha_p})$. \begin{proposition} \label{G-differentiable-p-2-3-restated} Let $\eps_0 \in (0,\frac{1}{2})$ be given by Proposition~\ref{implicit function for U-gamma}, so that the map $(-\eps_0,\gamma_\sdiamond) \to C^1_0(I)$, $\gamma \mapsto U_\gamma$ is well defined and differentiable by Remark~\ref{gamma-negative-definition}. Let, furthermore, $\delta \in (0,\frac{2}{N})$, and let the map $$ G: \left(-\eps_0,\gamma_\sdiamond \right) \times W^2_\delta(I) \times \R \to L^1_\delta(I) \times \R $$ be defined by (\ref{def-G-implicit}). Then $G$ is of class $C^1$ with \begin{align*} &d_{\gamma}G(\gamma,\Psi,\nu) = \begin{pmatrix} \Psi' - (p-1)e^{(\gamma-1)t}|U_\gamma|^{p-2} \Bigl(t + (p-2)\frac{U_\gamma \partial_\gamma U_\gamma}{|U_\gamma|^2}\Bigr)\Psi\\ 0 \end{pmatrix}, \quad &d_{\nu}G(\gamma,\Psi,\nu)= \begin{pmatrix} -\Psi \\ 0 \end{pmatrix}\\ &\text{and}\qquad d_{\Psi}G(\gamma,\Psi,\nu)\phi = \begin{pmatrix} -\phi'' +\gamma \phi' - (p-1)e^{(\gamma-1)t}|U_\gamma|^{p-2}\phi - \nu \phi \\ \int_0^\infty \Psi_0 \phi \, dt \end{pmatrix}. \end{align*} \end{proposition} The remainder of this section is devoted to the proof of this proposition. We first note that, by Lemma~\ref{bounded-sol-E}, $U_\gamma$ has a finite number of simple zeros and satisfies $\lim \limits_{t \to \infty}|U_\gamma(t)|>0$ for $\gamma \in \left(-\eps_0, \gamma_\sdiamond \right)$. The key step in the proof of Proposition~\ref{G-differentiable-p-2-3-restated} is the following lemma. 
\begin{lemma} \label{continuity-2-p-3} Let $q >0$, and let $\cU \subset C^1_0(I)$ be the open subset of functions $u \in C^1_0(I)$ which have a finite number of simple zeros and satisfy $\lim \limits_{t \to \infty}|u(t)|>0$. Then the nonlinear map $$ h_q: \cU \to L^1_0(I),\qquad u \mapsto |u|^q $$ is of class $C^1$ with $$ h_q'(u) w = q|u|^{q-2}u w \;\in\; L^1_0(I)\qquad \text{for $u \in \cU, w \in C^1_0(I)$.} $$ Here we identify $|u|^{q-2}u$ with $\sgn(u)$ in the case $q=1$. \end{lemma} \begin{proof} We only consider the case $q \in (0,1)$. The proof in the case $q=1$ is similar but simpler, and the proof in the case $q>1$ is standard. We first prove\\ {\em \underline{Claim 1:}} If $1 \le r < \frac{1}{1-q}$, then the map $\sigma_q : \cU \to L^r_0(I),\: \sigma_q(u)=|u|^{q-2}u$ is well defined and continuous.\\ To see this, we note that, by definition of $\cU$, for every $u \in \cU$ we have\begin{equation} \label{parini-w-est-1} \kappa_u:= \sup\left\{ \frac{|\{|u| \le \tau\} \cap (t,t+1)|}{\tau}: \tau>0, \ t \geq 0 \right\} <\infty. \end{equation} More generally, if $K \subset \cU$ is a compact subset (with respect to $\|\cdot\|_{C^1_0}$), we also have that $$ \kappa_{\text{\tiny $K$}}:= \sup_{u \in K}\kappa_u < \infty. $$ As a consequence of (\ref{parini-w-est-1}), we have \begin{align*} \int_{t}^{t+1}|\sigma_q(u)|^{r}\,dx &= \int_{t}^{t+1}|u|^{(q-1)r}\,dx =\int_0^\infty |(t,t+1) \cap \{|u|^{(q-1)r} \ge s\}|\,ds \\ &= \int_0^\infty |(t,t+1) \cap \{|u| \le s^{\frac{1}{(q-1)r}}\}|\,ds \le \int_0^{\infty} \min \{1, \kappa_u\, s^{\frac{1}{(q-1)r}}\}\,ds < \infty \end{align*} for every $u \in \cU$ and $t \ge 0$, since $\frac{1}{(q-1)r}<-1$ by assumption. Hence $\sigma_q(u) \in L^r_0(I)$ for every $u \in \cU$, so the map $\sigma_q$ is well defined. To see the continuity of $\sigma_q$, let $(u_n)_n \subset \cU$ be a sequence such that $u_n \to u \in \cU$ as $n \to \infty$ with respect to the $C^1_0$-norm. We then consider the compact set $K:= \{u_n,u \::\: n \in \N\}$. 
For given $\eps>0$, we fix $c \in (0,1)$ sufficiently small such that \begin{equation} \label{eq:parini-weth-9} c^{(q-1)r+1} < \frac{\eps}{2^{r} \kappa_{\text{\tiny $K$}} \Bigl(\frac{ 2^{1+(q-1)r}}{1+(q-1)r} \Bigr)} . \end{equation} Since $u_n \to u$ uniformly on $[0,\infty)$, it is easy to see that \begin{equation} \label{eq:11} \sup_{t \ge 0} \int_{t}^{t+1} 1_{\{|u| > c\}} \bigl|\sigma_q(u_n) -\sigma_q(u)\bigr|^r \,dx \to 0 \qquad \text{as $n \to \infty$.} \end{equation} Moreover, there exists $n_0 \in \N$ with the property that $$ \{|u| \le c \} \subset \{|u_n| \le 2c\} \qquad \text{for $n \ge n_0$.} $$ Consequently, setting $v_n:= |u_n|^{(q-1)r}$ for $n \ge n_0$ and $v:= |u|^{(q-1)r}$, we find that \begin{align*} &\sup_{t \ge 0} \int_{t}^{t+1} 1_{\{|u| \le c\} } \Bigl|\sigma_q(u_n)-\sigma_q(u)\Bigr|^r \,dx \le 2^{r-1} \int_{\{|u| \le c\} \cap (t,t+1)}\Bigl(|u_n|^{(q-1)r}+ |u|^{(q-1)r}\Bigr) \,dx\\ &\le 2^{r-1} \Bigl(\int_{\{|u_n| \le 2 c\} \cap (t,t+1)} |u_n|^{(q-1)r} \,dx + \int_{\{|u| \le c\} \cap (t,t+1)} |u|^{(q-1)r}\,dx \Bigr)\\ &= 2^{r-1} \Bigl(\int_{\{v_n \ge (2 c)^{(q-1)r}\} \cap (t,t+1)} v_n \,dx + \int_{\{v \ge c^{(q-1)r}\} \cap (t,t+1)}v \,dx \Bigr)\\ &= 2^{r-1} \Bigl( \int_{(2c)^{(q-1)r}}^\infty |\{v_n \ge s\} \cap (t,t+1)|\,ds + (2c)^{(q-1)r} |\{v_n \ge (2c)^{(q-1)r}\} \cap (t,t+1)|\\ &+ \int_{c^{(q-1)r}}^\infty |\{v \ge s\} \cap (t,t+1)|\,ds +c^{(q-1)r} |\{v_n \ge c^{(q-1)r}\} \cap (t,t+1)|\Bigr) \\ &= 2^{r-1} \Bigl( \int_{(2c)^{(q-1)r}}^\infty |\{|u_n| \le s^{\frac{1}{(q-1)r}}\} \cap (t,t+1)|\,ds + (2c)^{(q-1)r} |\{|u_n| \le 2c\} \cap (t,t+1)|\\ &+ \int_{c^{(q-1)r}}^\infty |\{|u| \le s^{\frac{1}{(q-1)r}}\} \cap (t,t+1)|\,ds + c^{(q-1)r} |\{|u| \le c\} \cap (t,t+1)|\Bigr)\\ &\le 2^{r} \kappa_{\text{\tiny $K$}} \Bigl( \int_{(2c)^{(q-1)r}}^\infty s^{\frac{1}{(q-1)r}}ds + (2c)^{1+(q-1)r}\Bigr)\\ &= 2^{r} \kappa_{\text{\tiny $K$}} \Bigl( - \frac{(2c)^{(q-1)r+1}}{\frac{1}{(q-1)r}+1} + (2c)^{1+(q-1)r}\Bigr)\\ &= 2^{r} \kappa_{\text{\tiny $K$}} 
\Bigl(\frac{ 2^{1+(q-1)r}}{1+(q-1)r} \Bigr)c^{(q-1)r+1} < \eps \qquad \text{for $n \ge n_0$} \end{align*} by (\ref{eq:parini-weth-9}). Combining this with (\ref{eq:11}) yields $$ \limsup_{n \to \infty} \|\sigma_q(u_n)-\sigma_q(u)\|_{r,0}^r = \limsup_{n \to \infty} \sup_{t \ge 0} [\sigma_q(u_n)- \sigma_q(u)]_{t,r}^r \le \eps. $$ Since $\eps>0$ was given arbitrarily, we conclude that $$ \|\sigma_q(u_n)-\sigma_q(u)\|_{r,0}^r \to 0 \qquad \text{as $n \to \infty$.} $$ Hence Claim 1 follows.\\ Next, we let $u \in \cU$ and $w \in C^1_0(I)$ with $\|w\|_{L^\infty(I)}<1$. For $\tau \in \R \setminus \{0\}$ we then have $$ \frac{1}{\tau}\Bigl(h_q(u+\tau w) -h_q(u)\Bigr)=I_{{\tau}}+J_{{\tau}}\quad \text{in $L^1_0(I)$} $$ with $$ I_{{\tau}}(x)= 1_{\{|u|> |{\tau}|\} }\frac{|u+{\tau}w|^{q} -|u(x)|^{q}}{{\tau}} ,\quad J_{{\tau}}= 1_{\{|u|\le |{\tau}|\} }\frac{|u+{\tau}w|^{q} -|u|^{q}}{{\tau}} $$ Note that $$ I_{{\tau}}(x)= q \int_0^1 1_{\{|u|> |{\tau}|\} }(x)\sigma_q(u(x)+\rho{\tau}w(x))w(x) \,d\rho. $$ Hence $$ \bigl[I_{\tau} - q \sigma_q(u) w\bigr](x) = q \int_{0}^1 \Bigl[\sigma_q(u+\rho {\tau} w)w - \sigma_q(u)w\Bigr](x)\,d\rho - q \int_0^1 \Bigl[1_{\{|u| \le |{\tau}|\} }\sigma_q(u+\rho {\tau} w)w\Bigr](x) d\rho $$ where $$ \int_{t}^{t+1} \Bigl| \int_{0}^1 \Bigl[\sigma_q(u+\rho {\tau} w)w - \sigma_q(u)w \Bigr](x)\,d\rho\Bigr|dx \le \|w\|_{L^\infty(I)} \sup_{0 \le \rho \le 1} \| \sigma_q(u+\rho\tau w)-\sigma_q(u)\|_{1,0} \quad \text{for $t \ge 0$} $$ and, by H\"older's and Jensen's inequality, \begin{align*} \int_{t}^{t+1} &\Bigl|\Bigl[ 1_{\{|u| \le |{\tau}|\} } \int_0^1 \sigma_q(u+\rho {\tau} w)wd\rho \Bigr](x)\Bigr| dx \\ &\le |\{|u| \le {\tau}\} \cap (t,t+1)|^{1/r'}\|w\|_{L^\infty(I)} \Bigl(\int_0^1 \int_{t}^{t+1} |\sigma_q(u+\rho\tau w)|^r dx d\rho\Bigr)^{1/r}\\ &\le |\{|u| \le {\tau}\}|^{1/r'}\|w\|_{L^\infty(I)} \sup_{0 \le \rho \le 1} \| \sigma_q(u+\rho\tau w)\|_{r,0} \qquad \text{for $t \ge 0$.} \end{align*} Combining these two estimates with Claim 1 and 
(\ref{parini-w-est-1}), we deduce that \begin{equation} \label{I-tau-est} \|I_{\tau} - q \sigma_q(u) w\|_{1,0} \to 0 \qquad \text{as $\tau \to 0$.} \end{equation} Next we estimate \begin{align*} \int_{t}^{t+1} |J_{{\tau}}| dx &\le \frac{1}{|{\tau}|} \int_{t}^{t+1} 1_{\{|u|\le |{\tau}|\}} \Bigl(|u+{\tau}w|^{q} +|u|^{q}\Bigr)\,dx\\ &=|{\tau}|^{q-1} \int_{t}^{t+1} 1_{\{|u|\le |{\tau}|\}} \Bigl(\Bigl|\frac{u}{{\tau}}+w\Bigr|^{q} +\Bigl|\frac{u}{{\tau}}\Bigr|^{q}\Bigr)\,dx\\ &\le |{\tau}|^{q-1}(2^{q}+1)|\{|u| \le |{\tau}|\}\cap(t,t+1)| \le \kappa_u |{\tau}|^{q}(2^{q}+1)\qquad \text{for $t \ge 0$} \end{align*} and therefore \begin{equation} \label{J-tau-est} \|J_{\tau}\|_{1,0} \to 0 \qquad \text{as $\tau \to 0$.} \end{equation} Combining (\ref{I-tau-est}) and (\ref{J-tau-est}), we deduce the existence of $$ h_q'(u)w = \lim_{{\tau} \to 0}\frac{1}{{\tau}}\Bigl(h_q(u+{\tau}w) -h_q(u)\Bigr)= q\sigma_q(u)w \qquad \text{in $L^1_0(I)$.} $$ Together with Claim 1, this yields that $h_q$ is of class $C^1$, as claimed. \end{proof} We may now complete the \begin{proof}[Proof of Proposition~\ref{G-differentiable-p-2-3-restated}] The $C^1$-regularity of $G$ follows easily once we have seen that the map $$ H: \left(-\eps_0, \gamma_\sdiamond \right) \times W^{2}_\delta(I) \to L^1_\delta(I), \qquad (\gamma, \Psi) \mapsto e^{(\gamma-1)t}|U_{\gamma}|^{p-2}\Psi $$ is of class $C^1$. 
Note that we can write $H= H_3 \circ H_2 \circ H_1$ with \begin{align*} H_1&: \left(-\eps_0, \gamma_\sdiamond \right) \times W^{2}_\delta(I) \to \left(-\eps_0, \gamma_\sdiamond \right) \times L^\infty(I) \times \cU, \qquad (\gamma, \Psi) \mapsto (\gamma,\Psi,U_{\gamma})\\ H_2&: \left(-\eps_0, \gamma_\sdiamond \right) \times L^\infty(I) \times \cU \to \left(-\eps_0, \gamma_\sdiamond \right) \times L^\infty(I) \times L^1_0(I), \qquad (\gamma, \Psi,v) \mapsto (\gamma,\Psi,|v|^{p-2})\\ H_3&: \left(-\eps_0, \gamma_\sdiamond \right) \times L^\infty(I) \times L^1_0(I) \to L^1_\delta(I), \qquad (\gamma,\psi,v) \mapsto e^{(\gamma-1)(\cdot)}v \psi. \end{align*} The $C^1$-regularity of $H_1$ is a consequence of Proposition~\ref{implicit function for U-gamma}, and the $C^1$-regularity of $H_2$ is a consequence of Lemma~\ref{continuity-2-p-3}. Finally, the $C^1$-regularity of $H_3$ is easy to check since $e^{(\gamma-1)t} \le e^{-\delta t}$ for $\gamma <\gamma_\sdiamond$. Hence we conclude that $H$ is of class $C^1$, and this finishes the proof. \end{proof} \section{Bifurcation of almost radial nodal solutions} \label{sec:bifurc-almost-radi} In this section, we prove the bifurcation result stated in Theorem~\ref{thm-bifurcation}. \begin{proof}[Proof of Theorem~\ref{thm-bifurcation}] The proof relies on Corollary~\ref{corollary-on-eigenvalue-curves} and a result by Kielh{\"o}fer \cite{kielhoefer:1988}. To adapt our problem to the setting of \cite{kielhoefer:1988}, we consider the Hilbert space $E:=L^2(\B)$, $D:=H^2(\B) \cap H_0^1(\B)$, fix $\alpha:=\alpha_{i,\ell}$ as in the assumption and consider the map $$ G: (-\alpha,\infty) \times D \to E, \quad G(\lambda,u) = -\Delta (u+u_{\alpha + \lambda}) - |x|^{\alpha + \lambda} |u+u_{\alpha + \lambda}|^{p-2} (u+u_{\alpha + \lambda}) . $$ Then $G$ is continuous with $G(\lambda,0)=0$ for $\lambda>-\alpha$. 
Moreover, the Fr\'echet derivative $A(\lambda):=G_u(\lambda,0)$, given by $$ A(\lambda) \phi = -\Delta \phi - (p-1) |x|^{\alpha + \lambda} |u_{\alpha + \lambda}|^{p-2} \phi , $$ exists for $\lambda>-\alpha$ and coincides with the linearized operator $L^{\alpha + \lambda}$ from \eqref{linearized operator}. Hence it is a Fredholm operator of index zero having an isolated eigenvalue 0. Furthermore, there is a differentiable potential $g:\R \times D \to \R$ such that $g_u(\lambda,u)h=(G(\lambda,u),h)_{L^2}$ for all $h \in D$ in a neighborhood of $(0,0)$, given by $$ g(\lambda,u)=\int_\B \Bigl( \frac{1}{2}|\nabla (u+ u_{\alpha + \lambda})|^2 - \frac{|x|^{\alpha + \lambda}}{p}|u+u_{\alpha + \lambda}|^{p}\Bigr) \, dx . $$ To apply the main theorem in \cite{kielhoefer:1988}, we need to ensure that the crossing number of the operator family $A({\lambda})$ through $\lambda =0$ is nonzero. This is a consequence of Corollary \ref{corollary-on-eigenvalue-curves}(iii), which implies that the number of negative eigenvalues of the linearized operator $L^{\alpha + \eps}=A(\eps)$ is strictly larger than that of $L^{\alpha - \eps}=A(-\eps)$ for small $\eps>0$. Therefore, \cite[Theorem, p.4]{kielhoefer:1988} implies that $(0,0)$ is a bifurcation point for the equation $G(\lambda,u)=0$, $(\lambda,u) \in \R \times D$, i.e., there exists a sequence $\left( (\lambda_n,v_n) \right)_n \subset \R \times (D \setminus\{0\})$ such that \begin{align*} G(\lambda_n,v_n)=0 \quad \text{for all } n, \qquad (\lambda_n,v_n) \to (0,0) \quad \text{in $\R \times D$ as } n \to \infty . \end{align*} Setting $\alpha_n:= \alpha + \lambda_n$, $u^n:=v_n + u_{\alpha_n}$ we conclude $$ -\Delta u^n - |x|^{\alpha_n}|u^n|^{p-2} u^n =G(\lambda_n,v_n)=0 , $$ i.e., $u^n$ is a solution of \eqref{1.4}. Moreover, $u^n \to u_\alpha$ in $D$. 
We may therefore deduce by elliptic regularity -- using the fact that the RHS of (\ref{1.4}) is H\"older continuous in $x$ and $u$ -- that the sequence $(u^n)_n$ is bounded in $C^{2,\rho}(\ov\B)$ for some $\rho>0$, and from this we deduce that $u^n \to u_\alpha$ in $C^2(\ov\B)$. Since $u_{\alpha}$ is radially symmetric with precisely $K$ nodal domains, there exist $r_0 := 0<r_1 < \cdots < r_K := 1$ such that, for $i=1, \ldots, K$, $$ u_\alpha(x) =0, \;(-1)^i \del_r u_\alpha (x) >0 \quad \text{for } |x|=r_i \qquad \text{and}\qquad (-1)^{i-1} u_\alpha(x) >0 \; \text{for } r_{i-1} < |x| < r_i, $$ where $\del_r$ denotes the derivative in the radial direction. Consequently, there exist $\eps,\delta>0$ such that, after passing to a subsequence, $$ (-1)^{i+1} u^n(x) > \eps \quad \text{for $r_{i-1} + \delta < |x| < r_i - \delta$, $n \in \N$} $$ and $$ (-1)^i \del_r u^n (x) >0 \quad \text{for $r_{i} - \delta < |x| < r_i + \delta$, $n \in \N$.} $$ We conclude that for $i=1,\dots,K-1$ and each direction $w \in \mathbb {S}^{N-1}$ the function $$ (r_i - \delta, r_i + \delta) \to \R, \quad t \mapsto u^n(tw) $$ has precisely one zero, which we denote by $r_{i,n}(w)$; we also set $r_{K,n}(w):=1$. In particular, the nodal domains of $u^n$ are given by $$ \Omega_1 :=\left\{ x \in \B : |x|< r_{1,n}\left(\frac{x}{|x|} \right) \right\} \quad \text{and}\quad \Omega_i := \left\{ x \in \B: r_{i-1,n}\left(\frac{x}{|x|}\right) <|x|< r_{i,n}\left(\frac{x}{|x|}\right) \right\} $$ for $i=2, \ldots, K$. Consequently, $0 \in \Omega_1$, $\Omega_1$ is homeomorphic to a ball, and $\Omega_2, \ldots, \Omega_K$ are homeomorphic to annuli. Finally, we note that $u^n=v_n + u_{\alpha_n}$ is nonradial, since $v_n \not \equiv 0$ and $u_{\alpha_n}$ is the \emph{unique} radial solution of \eqref{1.4} with $\alpha= \alpha_n$ and with $K$ nodal domains. \end{proof}
\section{Introduction}\label{secIntro} In this paper we address the following question, which will be explained in detail below. \begin{que} Which sparseness types of Hermitian (or real symmetric) matrices can be diagonalized by QR-type algorithms? \end{que} Let $M_\lambda$ denote the set of all Hermitian matrices of size $n$ with the given spectrum $\lambda=\{\lambda_1,\ldots,\lambda_n\}$. The classical QR-algorithm can be viewed as a cascade (a dynamical system with discrete time) generated by $\QR\colon M_\lambda\to M_\lambda$ with the property that \[ \lim_{k\to+\infty}\QR^k(A)=\diag(\lambda_{\sigma(1)},\ldots,\lambda_{\sigma(n)}) \] for any initial matrix $A\in M_\lambda$ and some permutation $\sigma\in\Sigma_n$. There also exists a continuous version of the QR-algorithm, namely the flow of the full symmetric Toda lattice: $\dot{A}=[A,P(A)]$, where $P(A)$ denotes the antisymmetrization of $A$, see~\cite{Chu}. It is known that, for a simple spectrum $\lambda$ (meaning that the eigenvalues $\lambda_i$ are pairwise distinct), the space $M_\lambda$ is diffeomorphic to the variety $\Fl(\mathbb C^n)$ of full flags in $\mathbb C^n$. It is also known (see~\cite{BBR,BG,dMP,ChShSo}) that there exist a Riemannian metric $g$ and a Morse function $f$ on $M_\lambda$ such that the Toda flow is the gradient flow of~$f$. The stationary points of the Toda flow (the critical points of $f$) are precisely the diagonal matrices with spectrum $\lambda$. We consider submanifolds of $M_\lambda$ consisting of matrices of prescribed sparse forms. By a sparse matrix we mean a matrix in which certain off-diagonal entries are required to vanish. It is convenient to encode the sparseness type by a simple graph $\Gamma$ with vertex set $[n]=\{1,2,\ldots,n\}$ and edge set $E_\Gamma$. A matrix $A=(a_{ij})$ is called \emph{$\Gamma$-shaped} if $a_{ij}=0$ for all $\{i,j\}\notin E_\Gamma$. Consider the space $M_{\Gamma,\lambda}$ of all $\Gamma$-shaped Hermitian matrices with spectrum $\lambda$. 
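As an illustrative aside (not part of the original text), the discrete cascade described above is easy to sketch numerically: one step factors $A=QR$ and passes to $RQ$, which is similar to $A$, so the spectrum is preserved while the iterates of a generic symmetric matrix approach a diagonal matrix. The Gram--Schmidt routine and the sample tridiagonal matrix below are our own illustrative choices.

```python
def qr_decompose(A):
    # classical Gram-Schmidt orthogonalization of the columns of A
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(n)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def qr_step(A):
    # one step of the cascade: factor A = QR, return RQ (a matrix similar to A)
    Q, R = qr_decompose(A)
    n = len(A)
    return [[sum(R[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# a symmetric matrix with distinct positive eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2)
A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
for _ in range(200):
    A = qr_step(A)

off = max(abs(A[i][j]) for i in range(3) for j in range(3) if i != j)
print(off < 1e-8)                     # the iterates become diagonal
print(sorted(A[i][i] for i in range(3)))  # the diagonal approaches the spectrum
```

For this sample matrix the off-diagonal entries decay geometrically, with ratio governed by quotients of consecutive eigenvalues, so a few hundred iterations suffice.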
By Sard's theorem, the space $M_{\Gamma,\lambda}$ is a smooth manifold for generic $\lambda$. Although the precise description of all $\lambda$ for which $M_{\Gamma,\lambda}$ is smooth is unknown in general, in what follows we will always assume that spectra are simple. A particular class of sparse matrices plays an important role in applications, namely Hessenberg (or staircase) matrices, in which nonzero entries are allowed only in a contiguous band adjacent to the main diagonal. It is convenient to encode the staircase form by a Hessenberg function. A function $h\colon [n]\to[n]$ is called \emph{a Hessenberg function} if $h(i)\geqslant i$ for each $i\in[n]$, and $h(1)\leqslant h(2)\leqslant \cdots\leqslant h(n)$. Let $\Gamma(h)$ be the graph on the set $[n]$ with the edge set $E_h=\{\{i,j\}\mid i< j\leqslant h(i)\}$. Then $\Gamma(h)$-shaped matrices are the matrices $A$ whose entries $a_{ij}$ vanish for $j>h(i)$ (or $i>h(j)$). These matrices have the staircase form determined by $h$: the value $h(i)$ is the lowest position in the $i$-th column where a nonzero entry is allowed. It is well known that both the QR-algorithm and the flow of the full symmetric Toda lattice can be restricted to the submanifold $M_{\Gamma(h),\lambda}\subset M_\lambda$ of staircase matrices. This motivates the following definition. \begin{defin}\label{definDiagonClass} A class of $\Gamma$-shaped matrices is called \emph{a diagonalizable class} if there exists a Morse--Smale flow on the manifold $M_{\Gamma,\lambda}$ whose set of periodic trajectories is the discrete set of all diagonal matrices. In this case the graph $\Gamma$ is said to have \emph{diagonalizable type}. \end{defin} Instead of a Morse--Smale flow, one can use a Morse--Smale cascade (a dynamical system with discrete time) in the definition, see Remark~\ref{remSmaleCascades}. 
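The encoding of a staircase pattern by a Hessenberg function can be sketched in a few lines (our own illustration, not from the paper; the helper names are hypothetical):

```python
def is_hessenberg(h):
    # h given as a list with h[i-1] = h(i) in 1-based notation:
    # require h(i) >= i and h nondecreasing
    n = len(h)
    return all(h[i] >= i + 1 for i in range(n)) and \
           all(h[i] <= h[i + 1] for i in range(n - 1))

def hessenberg_edges(h):
    # edge set of Gamma(h): pairs {i, j} with i < j <= h(i), 1-based labels
    n = len(h)
    return {(i + 1, j + 1) for i in range(n) for j in range(i + 1, h[i])}

h = [2, 4, 4, 4]                      # a Hessenberg function on [4]
print(is_hessenberg(h))               # True
print(sorted(hessenberg_edges(h)))    # [(1, 2), (2, 3), (2, 4), (3, 4)]
```

Here the value $h(1)=2$ forbids the entries $a_{13},a_{14}$, while $h(2)=4$ allows the whole second column down to the last row.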
The existence of the Toda flow on staircase matrices shows that $\Gamma(h)$ has diagonalizable type for each Hessenberg function $h$, see details in~\cite{AyzStaircase}. Notice that some matrices are not staircase; however, they become staircase after relabeling rows and columns by one and the same permutation of indices. For example, \begin{equation}\label{eqMatrixToAnother} \begin{pmatrix} \ast & \ast & \ast \\ \ast & \ast & 0 \\ \ast & 0 & \ast \end{pmatrix} \stackrel{(1,2)}{\longrightarrow} \begin{pmatrix} \ast & \ast & 0 \\ \ast & \ast & \ast \\ 0 & \ast & \ast \end{pmatrix}, \end{equation} the matrix becomes staircase (tridiagonal) after permuting the 1-st and 2-nd rows and columns. In general, if $\sigma\in\Sigma_n$ is a permutation of the set $[n]$, and $\sigma\Gamma$ is the graph obtained from $\Gamma$ by relabelling the vertices with $\sigma$, there is a natural diffeomorphism \begin{equation}\label{eqRelabelDiffeo} \diff_\sigma\colon M_{\Gamma,\lambda}\to M_{\sigma\Gamma,\lambda} \end{equation} given by $\diff_\sigma(A)=P_\sigma AP_{\sigma}^{-1}$ for the permutation matrix $P_\sigma$. Since relabelling induces a diffeomorphism, the diagonalizability property of a graph does not depend on a particular labelling of its vertices: it depends only on the isomorphism class of the graph. Consequently, all graphs isomorphic to $\Gamma(h)$ for a Hessenberg function $h$ have diagonalizable type. The purpose of this paper is to prove the converse. \begin{thm}\label{thmMainDtypeChar} A class of $\Gamma$-shaped matrices is diagonalizable if and only if $\Gamma$ is isomorphic to a Hessenberg graph $\Gamma(h)$ for some Hessenberg function $h$. \end{thm} There is a name for graphs isomorphic to Hessenberg graphs. \begin{defin}\label{definIndifGraph} A graph $\Gamma$ is called \emph{an indifference graph} (or \emph{a unit interval graph}, or \emph{a proper interval graph}) if it is the intersection graph of some collection of closed unit intervals on a line $\mathbb R$. 
\end{defin} In other words, $\Gamma=([n],E_\Gamma)$ is an indifference graph if and only if there exists a set of points $x_1,\ldots,x_n\in\mathbb R$ on a line such that $\{i,j\}\in E_\Gamma \Leftrightarrow |x_i-x_j|\leqslant 1$ for $i\neq j$. We refer to the work of Roberts~\cite{RobPsych}, who introduced the notion and the term. \begin{prop}[\cite{Mertz}]\label{propMertzios} A graph $\Gamma$ is isomorphic to $\Gamma(h)$ for some Hessenberg function $h$ if and only if $\Gamma$ is an indifference graph. \end{prop} If, in the definition of an indifference graph, we have $x_1<\cdots<x_n$, then $\Gamma=\Gamma(h)$, so the order of the points corresponds to the correct order of the vertices, inducing a Hessenberg function. It follows from Proposition~\ref{propMertzios} that all indifference graphs determine diagonalizable matrix classes, as explained above. The nontrivial part of Theorem~\ref{thmMainDtypeChar} is therefore the following statement. \begin{prop}\label{propMainOneSide} If $\Gamma$ is not an indifference graph, then $\Gamma$ is not of diagonalizable type. \end{prop} The proof of Proposition~\ref{propMainOneSide} breaks into the following steps. \begin{enumerate} \item The characterization of indifference graphs in terms of forbidden induced subgraphs obtained by Roberts~\cite{Roberts}. The forbidden subgraphs are the cycle graphs $\Cy_k$ with $k\geqslant 4$, the claw graph $\St_3$, the 3-sun graph $\Sunn$, and the net graph $\Net$, see Fig.~\ref{figRoberts} below. \item Morse inequalities. In order to prove that $M_{\Gamma,\lambda}$ does not support a Morse--Smale flow with $n!$ critical points, it is sufficient to prove that the total Betti number $\rk H_*(M_{\Gamma,\lambda};R)$ is greater than $n!$ for some coefficient ring $R$. \item Each manifold $M_{\Gamma,\lambda}$ carries a natural compact torus action. 
We observe that the inequality $\rk H_*(M_{\Gamma,\lambda};R)>n!$ holds if and only if the manifold $M_{\Gamma,\lambda}$ is not \emph{cohomologically equivariantly formal} over $R$ in the sense of Goresky--Kottwitz--MacPherson. The results of toric topology allow one to simplify the proof of non-equivariant formality. To prove the general result, it is sufficient to prove that the forbidden subgraphs from item 1 produce non-equivariantly formal manifolds. \item Non-equivariant formality in the cases of $\Cy_k$, $k\geqslant 4$, and $\St_3$ was proved in~\cite{AyzStaircase} and~\cite{AyzArrows} respectively, and received a uniform explanation in terms of graphicahedra and cluster-permutohedra in the recent paper~\cite{AyzBuchGraph}. The graphicahedron~\cite{Graphicahedron} is a certain finite poset associated with a graph, and the cluster-permutohedron is its core in the sense of finite topology. The cluster-permutohedron $\Cl_\Gamma$ is the face poset of the torus action on $M_{\Gamma,\lambda}$. To prove the non-formality of $M_{\Gamma,\lambda}$, we apply the result of~\cite{AyzMasSolo}, which states that the face posets of equivariantly formal actions satisfy certain acyclicity conditions. The main observation of~\cite{AyzBuchGraph} is that the graphicahedra (and cluster-permutohedra) of $\Cy_k$, $k\geqslant 4$, and $\St_3$ have nontrivial simplicial cohomology in degree $1$, and this contradicts the equivariant formality of the corresponding matrix manifolds. \item In the current paper we finalize the proof of Proposition~\ref{propMainOneSide} by proving the non-formality of the isospectral matrix manifolds corresponding to $\Sunn$ and $\Net$. It turns out that the idea used for $\Cy_k$ and $\St_3$ also works in these cases; however, one has to compute the third homology groups of the corresponding cluster-permutohedra. This can hardly be done by hand; however, the problem is solvable by a script in Sage~\cite{AyzCode}. 
\item We also developed an alternative and potentially more general computational approach to proving non-formality, which independently confirms our result. Conceptually, instead of looking at the ordinary cohomology of cluster-permutohedra, we look at the cohomology of certain sheaves over these posets, namely the GKM-sheaves. Originally, such sheaves were defined over equivariant 1-skeleta of torus actions~\cite{Baird}, but there exists a natural way to extend them to higher-dimensional structures: the face posets of torus actions. If a torus action is equivariantly formal, the GKM-sheaf satisfies certain homological properties. The most basic and general property is the Atiyah--Bredon--Franz--Puppe (ABFP) exact sequence~\cite{FP}: this is a strengthening of the Chang--Skjelbred theorem~\cite{ChSk} (which is, in turn, the principal tool used in GKM-theory~\cite{GKM}). Checking the exactness of the ABFP sequence is an algorithmic task. Although we could not treat the whole ABFP sequence of $M_{\Sunn,\lambda}$ and $M_{\Net,\lambda}$ due to the extremely high computational complexity of this problem, we were able to perform calculations which contradict the exactness of the ABFP sequence. This approach proves the non-formality of all the manifolds $M_{\St_3,\lambda}$, $M_{\Cy_k,\lambda}$, $M_{\Net,\lambda}$, $M_{\Sunn,\lambda}$ as well. \end{enumerate} It should be noted that studying the topology of isospectral matrix manifolds of various types using integrable dynamical systems is a well-established area of research, see~\cite{Tomei,dMP,Penskoi}. In this paper we solve a somewhat opposite task: we apply topology to prove that dynamical systems with certain properties do not exist. The paper has the following structure. In Section~\ref{secDefResults} we give all the required definitions and the details missing in the introduction. In Section~\ref{secKnownToricCySt} we review two proofs of the non-formality of the matrix manifolds corresponding to $\Cy_k$, $k\geqslant 4$, and $\St_3$. 
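Returning to the definition of indifference graphs, one can generate such a graph from a set of interval centers and, when the points are sorted, read off the induced Hessenberg function in the spirit of Proposition~\ref{propMertzios}. The following small sketch is our own illustration (the point set and function names are hypothetical):

```python
def indifference_graph(points):
    # vertices 1..n labelled along the sorted points;
    # edge {i, j} iff |x_i - x_j| <= 1
    xs = sorted(points)
    n = len(xs)
    return {(i + 1, j + 1) for i in range(n) for j in range(i + 1, n)
            if xs[j] - xs[i] <= 1}

def hessenberg_function(points):
    # with x_1 < ... < x_n, set h(i) = max{ j : x_j - x_i <= 1 }
    xs = sorted(points)
    n = len(xs)
    return [max(j + 1 for j in range(n) if xs[j] - xs[i] <= 1)
            for i in range(n)]

pts = [0.0, 0.9, 1.7, 3.0]                 # illustrative unit-interval centers
print(sorted(indifference_graph(pts)))     # [(1, 2), (2, 3)]
print(hessenberg_function(pts))            # [2, 3, 3, 4]
```

The edge set produced here is exactly the edge set of $\Gamma(h)$ for the printed Hessenberg function, which is the content of the one easy direction of Proposition~\ref{propMertzios}.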
In Section~\ref{secSunAndNet}, the basics of the ABFP sequence and GKM theory are recalled. They are used to prove the non-formality of the manifolds corresponding to $\Net$ and $\Sunn$ with computer-aided computations. In Section~\ref{secLast}, we observe that all the arguments of the paper remain valid for the real versions of $M_{\Gamma,\lambda}$: the manifolds of isospectral real symmetric matrices. In the real case, instead of torus actions we have discrete 2-torus actions, and several recent results of real toric topology can be applied. Finally, in the same section we introduce a new graph invariant motivated by the current study, which can be applied in the design of diagonalization algorithms. \section{Definitions, results, and basic steps of proof}\label{secDefResults} Let $M_n$ denote the vector space of all Hermitian matrices of size $n$. We have $\dim_\mathbb R M_n=n^2$. For a given set $\lambda = \{\lambda_1,\ldots,\lambda_{n}\}$ of pairwise distinct real numbers, consider the subset $M_{\lambda}\subset M_n$ of all matrices with eigenvalues $\{\lambda_1,\ldots,\lambda_{n}\}$, where it is assumed that $\lambda_1<\lambda_2<\cdots<\lambda_{n}$. Let $U(n)$ be the group of unitary matrices and $T^{n}\subseteq U(n)$ be the compact torus of diagonal unitary matrices \[ T^{n}=\left\{\diag(t_1,\ldots,t_{n}),\ t_i\in \mathbb C,\ |t_i|=1\right\}. \] The group $U(n)$ acts on $M_{n}$ by conjugation: this is essentially the coadjoint representation of $U(n)$. It easily follows that $M_\lambda$ is diffeomorphic to the manifold $\Fl(\mathbb C^n)=U(n)/T^{n}$ of full complex flags, since both are homogeneous spaces of $U(n)$ with the same stabilizer $T^n$. 
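In particular, conjugation by a permutation matrix $P_\sigma\in U(n)$ gives the relabelling diffeomorphism~\eqref{eqRelabelDiffeo} from the introduction. The following small sketch (ours, with illustrative numerical entries) shows how such a conjugation moves a zero pattern into tridiagonal (staircase) form:

```python
def relabel(A, sigma):
    # entrywise form of P_sigma A P_sigma^{-1}: the (i, j) entry of the
    # relabelled matrix equals A[sigma^{-1}(i)][sigma^{-1}(j)] (0-based here)
    n = len(A)
    inv = [0] * n
    for i, s in enumerate(sigma):
        inv[s] = i
    return [[A[inv[i]][inv[j]] for j in range(n)] for i in range(n)]

# hypothetical numerical filling of a zero pattern with a_{23} = a_{32} = 0
A = [[1, 2, 3],
     [2, 4, 0],
     [3, 0, 5]]
sigma = [1, 0, 2]      # the transposition swapping the first two indices
B = relabel(A, sigma)
print(B)               # tridiagonal: the zeros move to positions (1,3), (3,1)
```

Applying the same transposition again recovers the original matrix, reflecting the fact that $\diff_\sigma$ is a diffeomorphism with inverse $\diff_{\sigma^{-1}}$.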
Moreover, there is a trivial smooth fibration \begin{equation}\label{eqGmap} g\colon M_n\setminus\Sigma\to C, \end{equation} where $\Sigma$ is the set of Hermitian matrices with multiple eigenvalues, $C=\{(\lambda_1,\ldots,\lambda_n)\in\mathbb R^{n}\mid \lambda_1<\cdots<\lambda_n\}$ is an open Weyl chamber, and $g$ maps a matrix to its eigenvalues, listed in increasing order. The fiber of $g$ over a point $\lambda$ is the manifold $M_\lambda$. We have $\dim_\mathbb R M_{\lambda}=n(n-1)$. The group $T^{n}$ acts on $M_{n}$ by conjugation: $A\mapsto DAD^{-1}$. In coordinate notation we have \begin{equation}\label{eqActionExplicit} (a_{ij})_{\substack{i=1,\ldots,n\\j=1,\ldots,n}}\mapsto (t_it_j^{-1}a_{ij})_{\substack{i=1,\ldots,n\\j=1,\ldots,n}}. \end{equation} Scalar matrices commute with every matrix $A$, therefore the diagonal circle subgroup of the torus acts trivially, and the $T^n$-action is non-effective. The fixed points of the $T^n$-action on $M_{\lambda}$ are the diagonal matrices with spectrum $\lambda$. These are the diagonal matrices of the form $A_\sigma=\diag(\lambda_{\sigma(1)},\lambda_{\sigma(2)},\ldots,\lambda_{\sigma(n)})$ for all possible permutations $\sigma\in \Sigma_{n}$. Let $\Gamma$ be a simple graph, by which we mean a finite graph without multiple edges and loops, with vertex set $[n]$ and edge set $E_\Gamma$. The graph $\Gamma$ is assumed to be connected unless stated otherwise. \begin{defin}\label{definGammaShaped} Consider the vector subspace of Hermitian matrices \begin{equation}\label{eqMGamma} M_\Gamma=\{A\in M_{n}\mid a_{ij}=0, \mbox{ if } \{i,j\}\notin E_\Gamma\}. \end{equation} Matrices from $M_\Gamma$ are called \emph{$\Gamma$-shaped}. Consider \emph{the subspace of isospectral $\Gamma$-shaped matrices}: \begin{equation}\label{eqMGammaLambda} M_{\Gamma,\lambda}=M_{\Gamma}\cap M_{\lambda}\subset M_n.
\end{equation} \end{defin} \begin{rem}\label{remSard} According to Sard's theorem, the subset $M_{\Gamma,\lambda}$ is a smooth submanifold of the vector space $M_\Gamma\cong \mathbb R^{n+2|E_\Gamma|}$ for generic $\lambda$. Indeed, all regular values of the map $g|_{M_\Gamma}\colon M_\Gamma\to C$, see~\eqref{eqGmap}, determine smooth isospectral submanifolds $M_{\Gamma,\lambda}=(g|_{M_\Gamma})^{-1}(\lambda)$. Let $U$ be the set of regular values; it is open and dense. In the following, we always assume that $\lambda$ is simple and generic, i.e.\ lies in $U$, so that $M_{\Gamma,\lambda}$ is a smooth compact manifold. The precise description of $U$ depends on the graph $\Gamma$, and in some cases the structure of $U$ may be nontrivial, as shown in~\cite{AyzPeriodic} for periodic tridiagonal matrices. However, the precise description of the set of regular values $U$ is irrelevant to our current study. \end{rem} \begin{rem}\label{remOrientableParallel} For generic $\lambda$, the manifold $M_{\Gamma,\lambda}$ is given by a nondegenerate system of $n$ equations in $M_\Gamma\cong\mathbb R^{n+2|E_\Gamma|}$. Therefore this manifold is normally parallelizable. In particular, $M_{\Gamma,\lambda}$ is orientable and its tangent characteristic classes vanish. \end{rem} The $T^n$-action on $M_n$ preserves all the subsets $M_\Gamma$, $M_\lambda$, $M_{\Gamma,\lambda}$. In particular, we have a canonical smooth $T^n$-action on $M_{\Gamma,\lambda}$. A simple count of parameters implies \begin{equation}\label{eqDimM} \dim_{\mathbb R} M_{\Gamma,\lambda}=2|E_\Gamma|. \end{equation} We now review some basic facts from the theory of compact torus actions on manifolds. Let $R$ denote a coefficient ring (either $\mathbb Z$ or a field). Assume that a torus $T=T^k$ acts on a smooth closed manifold $X$. Let $ET\to BT$ be the classifying principal $T$-bundle, and let $X_T=X\times_TET$ be the Borel construction of $X$. We have a Serre fibration $p\colon X_T\stackrel{X}{\to}BT$.
The cohomology ring $H^*_T(X;R)=H^*(X_T;R)$ is called \emph{the equivariant cohomology ring} of~$X$. Via the induced map $p^*\colon H^*(BT;R)\to H^*(X_T;R)$, the equivariant cohomology attains a natural structure of a graded module over $H^*(BT;R)\cong R[k]$, the polynomial ring in $k=\dim T$ generators of degree~$2$. Since $BT$ is simply connected, the fibration $p$ induces the Serre spectral sequence: \begin{equation}\label{eqSerreSpSec} E_2^{p,q}\cong H^p(BT;R)\otimes H^q(X;R)\Rightarrow H_T^{p+q}(X;R). \end{equation} The $T$-action on $X$ is called \emph{cohomologically equivariantly formal} (over $R$) if~\eqref{eqSerreSpSec} collapses at $E_2$. The following characterization of equivariant formality is known. \begin{lem}\label{lemEquivFormFixedPoints} Consider a smooth $T$-action on $X$ such that $X^T$ is finite and nonempty, and let $R$ be either $\mathbb Z$ or a field. Then the following conditions are equivalent: \begin{enumerate} \item The $T$-action on $X$ is cohomologically equivariantly formal over $R$. \item $H^{\odd}(X;R)=0$. \item $H^*_T(X;R)$ is a free $H^*(BT;R)$-module. \end{enumerate} \end{lem} This lemma was proved in \cite[Lm.2.1]{MasPan} for $R=\mathbb Z$, and the proof for fields follows the same lines. It is known that the Euler characteristic $\chi(X)$ equals the number $\#X^T$ of fixed points for torus actions with isolated fixed points (see e.g.~\cite[Ch.III]{Bred}). Therefore, if $R$ is a field (so that torsion can be neglected), the following conditions are equivalent: \begin{enumerate} \item the $T$-action on $X$ is equivariantly formal; \item the total Betti number equals the number of fixed points: \begin{equation}\label{eqTotalBettiNumberFormal} \dim H_*(X;R) = \chi(X) = \#X^T. \end{equation} \end{enumerate} If the action is not equivariantly formal, we have \begin{equation}\label{eqIneqBetti} \dim H_*(X) = \dim H_{\even}(X)+\dim H_{\odd}(X)=\chi(X)+2\dim H_{\odd}(X)>\#X^T.
\end{equation} In~\cite{AyzStaircase} we proved that $M_{\Gamma(h),\lambda}$ admits a Morse function with all critical points of even index, which implies that there is a cell decomposition with only even-dimensional cells. Therefore $H^{\odd}(M_{\Gamma(h),\lambda};R)=0$ for any ring $R$, so these manifolds are equivariantly formal. Therefore, the diffeomorphism~\eqref{eqRelabelDiffeo} and Proposition~\ref{propIndiffCharact} imply \begin{prop} If $\Gamma$ is an indifference graph, then the canonical $T$-action on $M_{\Gamma,\lambda}$ is equivariantly formal. \end{prop} In this paper we prove the converse. \begin{thm}\label{thmNotEquivFormal} If $\Gamma$ is not an indifference graph, then $M_{\Gamma,\lambda}$ is not equivariantly formal over $\mathbb Z$, $\mathbb Q$, and $\mathbb Z_2$. \end{thm} The proof is based on the following sequence of statements. The first statement is a well-known result in intersection graph theory, proved by Roberts~\cite{Roberts} (see also~\cite[Exer.3.12]{McMc}). \begin{prop}[\cite{Roberts}]\label{propIndiffCharact} A graph $\Gamma$ is an indifference graph if and only if $\Gamma$ does not contain induced subgraphs of the types shown in Fig.~\ref{figRoberts}: (1) the cycle graphs $\Cy_k$ with $k\geqslant 4$ vertices; (2) the claw graph, also known as the $3$-star graph $\St_3$; (3) the net graph $\Net$; (4) the 3-sun graph $\Sunn$. \end{prop} \begin{figure}[h] \begin{center} \includegraphics[scale=0.35]{roberts.pdf} \end{center} \caption{Forbidden subgraphs for the class of indifference graphs}\label{figRoberts} \end{figure} We recall that the induced subgraph of $\Gamma$ on a vertex subset $B\subset[n]$ is the subgraph $\Gamma_B$ with vertex set $B$ whose edges are all the edges of $\Gamma$ with both endpoints in $B$. For example, the complete graph $K_4$ contains the claw $\St_3$ as a subgraph, but not as an induced subgraph.
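For small graphs, both containment notions can be tested by brute force over all injections of vertex sets. The helper below is hypothetical, written only to illustrate the example of $K_4$ and the claw.

```python
from itertools import permutations

def contains(G_edges, n, H_edges, k, induced):
    """Does a graph G on n vertices contain H on k vertices (as an induced
    subgraph if induced=True, as an ordinary subgraph otherwise)?"""
    G = {frozenset(e) for e in G_edges}
    H = {frozenset(e) for e in H_edges}
    for phi in permutations(range(n), k):  # injections [k] -> [n]
        ok = True
        for i in range(k):
            for j in range(i + 1, k):
                in_H = frozenset((i, j)) in H
                in_G = frozenset((phi[i], phi[j])) in G
                if induced and in_H != in_G:
                    ok = False
                elif not induced and in_H and not in_G:
                    ok = False
        if ok:
            return True
    return False

K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
claw = [(0, 1), (0, 2), (0, 3)]  # the 3-star St_3

print(contains(K4, 4, claw, 4, induced=False))  # True: claw is a subgraph
print(contains(K4, 4, claw, 4, induced=True))   # False: not an induced one
```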
It follows that whenever $\Gamma$ is not an indifference graph, it contains one of the graphs $\Cy_k$ ($k\geqslant 4$), $\St_3$, $\Net$, $\Sunn$ as an induced subgraph. The next three lemmata are proved in Section~\ref{secKnownToricCySt}. \begin{lem}\label{lemInducedGeneral} Assume that $\lambda$ is generic and $M_{\Gamma,\lambda}$ is equivariantly formal. Let $\Gamma_B$ be the induced subgraph of $\Gamma$ on a vertex subset $B\subset[n]$ and let $\lambda_B$ be any subset of $\{\lambda_1,\ldots,\lambda_n\}$ of cardinality $|B|$. Then $M_{\Gamma_B,\lambda_B}$ is also equivariantly formal. \end{lem} This lemma shows that, to prove Theorem~\ref{thmNotEquivFormal}, it is sufficient to prove non-formality of the matrix manifolds corresponding to the forbidden subgraphs in Fig.~\ref{figRoberts}. \begin{lem}\label{lemCycleGraphs} For generic $\lambda$ and $k\geqslant 4$, the space $M_{\Cy_k,\lambda}$ is not equivariantly formal over any $R$. \end{lem} \begin{lem}\label{lemStarGraph} For generic $\lambda$, the space $M_{\St_3,\lambda}$ is not equivariantly formal over any $R$. \end{lem} The proof of non-formality of $M_{\Gamma,\lambda}$ for the graphs $\Net$ and $\Sunn$ could be carried out similarly to the cases of $\Cy_k$ and $\St_3$. However, in these cases the proof is much harder computationally, so we also outline an independent approach as a check. The next two lemmata are proved in Section~\ref{secSunAndNet}. \begin{lem}\label{lemNetGraph} For generic $\lambda$, the space $M_{\Net,\lambda}$ is not equivariantly formal over $\mathbb Z$, $\mathbb Q$, and $\mathbb Z_2$. \end{lem} \begin{lem}\label{lemSunGraph} For generic $\lambda$, the space $M_{\Sunn,\lambda}$ is not equivariantly formal over $\mathbb Z$, $\mathbb Q$, and $\mathbb Z_2$. \end{lem} These lemmata prove Theorem~\ref{thmNotEquivFormal}. Proposition~\ref{propMainOneSide} and the main Theorem~\ref{thmMainDtypeChar} follow from Theorem~\ref{thmNotEquivFormal} by Morse-theoretic arguments as follows.
\begin{proof}[Proof of Proposition~\ref{propMainOneSide} and Theorem~\ref{thmMainDtypeChar}] If there exists a Morse--Smale system on a closed manifold in which every trajectory converges to a stationary point of hyperbolic type, then, as proved by Smale~\cite{Smale}, there exists a gradient Morse flow with the same indices of stationary points. Morse inequalities imply that any Morse flow on $M_{\Gamma,\lambda}$ has at least $\dim H_*(M_{\Gamma,\lambda})$ stationary points. Since $M_{\Gamma,\lambda}$ is not equivariantly formal (at least over some field $R$), we have \[ \dim H_*(M_{\Gamma,\lambda};R)>\#M_{\Gamma,\lambda}^T=n!, \] according to~\eqref{eqIneqBetti}. Hence a Morse flow on $M_{\Gamma,\lambda}$ cannot have only $n!$ stationary points, thus violating Definition~\ref{definDiagonClass}. \end{proof} \begin{rem}\label{remSmaleCascades} Instead of Morse--Smale flows, one can use Morse--Smale cascades (dynamical systems with discrete time) both in Definition~\ref{definDiagonClass} and in the proof above. Morse inequalities hold for such systems, see~\cite{Smale1} and~\cite{Smale2}. The discrete-time setting is more natural if one speaks about QR-type algorithms instead of Toda flows. Notice that the classical QR-algorithm is a Morse--Smale cascade on the manifold $M_\lambda$ of isospectral matrices. Indeed, the QR-algorithm can be treated as a modified version of the Toda flow sampled at integer times, see~\cite{Chu}, while the Toda flow itself is a Morse--Smale system on $M_\lambda$, as follows from the study of its center manifolds in the same paper. \end{rem} \section{The Cycles, the Claw, and the topology of graphicahedra}\label{secKnownToricCySt} \subsection{Review of known results} Let us recall the following result from~\cite{MasPan}. \begin{lem}[{\cite[Lem.2.2]{MasPan}}]\label{lemInvarIsFormal} Let $T$ act on $X$, and let $Y$ be a connected component of the fixed point set $X^H$ for some closed subgroup $H\subseteq T$.
Then the condition $H^{\odd}(X)=0$ implies $H^{\odd}(Y)=0$ and $Y^T\neq\varnothing$. Equivariant formality of $X$ implies equivariant formality of $Y$. \end{lem} We use it in a natural way to prove Lemma~\ref{lemInducedGeneral}. \begin{proof}[Proof of Lemma~\ref{lemInducedGeneral}] Consider the coordinate subtorus $H=T^{[n]\setminus B}$ of the torus $T^n$ acting on $M_{\Gamma,\lambda}$. According to the expression~\eqref{eqActionExplicit}, a matrix $A\in M_{\Gamma,\lambda}$ is fixed by $H$ if and only if its off-diagonal elements $a_{ij}$ vanish whenever either $i$ or $j$ belongs to $[n]\setminus B$. Therefore $A$ has a block form, with a big block $A_B$ corresponding to the index set $B$, and all other blocks of unit size. The fixed point submanifold $M_{\Gamma,\lambda}^H$ has connected components determined by the collections $\lambda_B$ of eigenvalues which occupy the block $A_B$. Each connected component of $M_{\Gamma,\lambda}^H$ is therefore diffeomorphic to $M_{\Gamma_B,\lambda_B}$ for some subset $\lambda_B\subset\lambda$. Lemma~\ref{lemInvarIsFormal} finishes the proof. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemCycleGraphs}] The topology of $M_{\Cy_k,\lambda}$ was described in detail in~\cite{AyzPeriodic}. Although that paper contains combinatorial formulae for the Betti numbers of $M_{\Cy_k,\lambda}$, it is quite complicated to extract their precise values in general, so we use another approach. In~\cite{AyzPeriodic}, it was proved that whenever $M_{\Cy_k,\lambda}$ is a smooth manifold, we have $\pi_1(M_{\Cy_k,\lambda})\cong\mathbb Z^{k-3}$. This implies $H_1(M_{\Cy_k,\lambda};R)\neq 0$ for $k\geqslant 4$ and any coefficient ring $R$. According to Lemma~\ref{lemEquivFormFixedPoints}, this fact proves the lemma. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemStarGraph}] In~\cite{AyzArrows} we studied a more general class of matrix spaces given by star graphs $\St_k$ with an arbitrary number $k$ of rays.
In particular, it was proved that $M_{\St_k,\lambda}$ is not equivariantly formal for $k\geqslant 3$. For the particular case $k=3$ we computed all the Betti numbers: \[ (\beta_0,\beta_1,\ldots,\beta_6)=(1,1,12,0,12,1,1), \] independently of the coefficient ring $R$. These results are based on the study of the orbit space $M_{\St_3,\lambda}/T^3$, which was proved to be homeomorphic to $D^2\times S^1$. \end{proof} \begin{rem} Notice that $M_{\St_3,\lambda}$ is a torus manifold, which means that the dimension of the acting torus equals half the real dimension of the manifold. Such actions are well studied. The orbit space criterion of equivariant formality of torus manifolds was proved in~\cite{MasPan}. This criterion implies, in particular, that the orbit space of any equivariantly formal torus manifold is a homology disk (see Remark~\ref{remOnMasPanovHomologyCells} below). Since $M_{\St_3,\lambda}/T^3\cong D^2\times S^1$ is not a homology disk, this fact already implies non-formality. \end{rem} \subsection{Face posets of torus actions} Let us formulate several other approaches to the study of general torus actions, as well as of the particular torus actions on the manifolds $M_{\Gamma,\lambda}$. \begin{con}\label{conInvarFacesEtc} Consider a smooth action of a compact torus $T$ on a closed manifold $X$ having isolated fixed points. The details of the following construction, the missing proofs, and references can be found in~\cite{AyzCherep} and~\cite{AyzMasSolo}. For any connected closed subgroup $H\subseteq T$ we consider the subset $X^H$ fixed by $H$; it is a closed smooth submanifold of $X$. Connected components of $X^H$ are called invariant submanifolds. An invariant submanifold $Y$ is called \emph{a face submanifold} if it contains a $T$-fixed point (that is, $Y\cap X^T\neq\varnothing$). Each face submanifold is $T$-stable. Its orbit space under the $T$-action is called \emph{a face}. We denote a face by the letter $F$, while the corresponding face submanifold is denoted $X_F$, so that $X_F/T=F$.
Let $T_F\subseteq T$ denote the noneffective kernel of the $T$-action on $X_F$ (it can be treated as the stabilizer of a generic point of $X_F$). Thus we have an effective action of $T/T_F$ on $X_F$. The dimension $\dim T/T_F$ is called the rank of $F$ (or the rank of $X_F$) and is denoted by $\rk F$. All face submanifolds (or all faces) are ordered by inclusion. They form a finite poset graded by the rank function. We denote this poset by $S(X)$. The poset has the greatest element $\hat{1}$, the manifold $X$ itself. The minimal elements are the fixed points of the action; they have rank 0. \end{con} \begin{con}\label{conWeights} If $x\in X^T$ is an isolated fixed point of a torus action, the tangent representation $T_xX$ decomposes into a sum of irreducible representations. All nontrivial irreducible representations of a torus have real dimension 2, so that we have \[ T_xX\cong V(\alpha_{x,1})\oplus\cdots\oplus V(\alpha_{x,n}), \] where the $\alpha_{x,i}\in\Hom(T;T^1)\cong\mathbb Z^{\dim T}$ are determined up to sign and are called \emph{the tangent weights} at $x$. Here $V(\alpha)\cong\mathbb C$ is the irreducible representation given by $t\cdot z=\alpha(t)z$ for $\alpha\in\Hom(T;T^1)$. We say that \emph{an action is $j$-independent} if, for any isolated fixed point $x\in X^T$, any $\leqslant j$ of its tangent weights are linearly independent over $\mathbb Q$ (that is, linearly independent in the vector space $\Hom(T;T^1)\otimes\mathbb Q\cong \mathbb Q^{\dim T}$). \end{con} In~\cite{AyzCherep}, we studied the properties of the posets $S(X)$ arising from general torus actions. In particular, it was proved that, for any element $s\in S(X)$, the upper ideal $S(X)_{\geqslant s}$ is a geometric lattice. If $x\in X^T$ is a fixed point, the ideal $S(X)_{\geqslant x}$ is isomorphic to the lattice of flats of the linear matroid corresponding to the collection of tangent weights at~$x$.
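As an illustration of the last statement, the lattice of flats of a small configuration of weights can be listed directly. For the three weights $e_1-e_2$, $e_2-e_3$, $e_1-e_3$ (the tangent weights arising, via~\eqref{eqActionExplicit}, from the edges of the triangle graph $\Cy_3$), the flats form a $5$-element geometric lattice, namely the lattice of flats of the graphical matroid of a triangle. The brute-force sketch below is illustrative only and is not related to the paper's code.

```python
from fractions import Fraction
from itertools import combinations

def rank(vectors):
    """Rank of a list of rational vectors, by Gaussian elimination over Q."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    rk = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((r for r in range(rk, len(rows)) if rows[r][c] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for r in range(len(rows)):
            if r != rk and rows[r][c] != 0:
                f = rows[r][c] / rows[rk][c]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rk])]
        rk += 1
    return rk

def flats(vectors):
    """All flats: subsets S such that adding any vector outside S raises the rank."""
    n = len(vectors)
    result = []
    for k in range(n + 1):
        for S in combinations(range(n), k):
            rS = rank([vectors[i] for i in S])
            if all(rank([vectors[i] for i in S] + [vectors[j]]) > rS
                   for j in range(n) if j not in S):
                result.append(S)
    return result

# Tangent weights e_i - e_j for the three edges of the triangle Cy_3.
w = [(1, -1, 0), (0, 1, -1), (1, 0, -1)]
print(flats(w))   # [(), (0,), (1,), (2,), (0, 1, 2)] -- five flats
```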
For equivariantly formal torus actions, the poset of faces exhibits nice acyclicity properties, similar to the Cohen--Macaulayness of simplicial complexes corresponding to toric varieties. The poset $S(X)$ itself is not interesting from the topological point of view, since it has the greatest element, so its geometric realization $|S(X)|$ is a cone, hence contractible. However, we can restrict to its ``skeleta'' $S(X)_r=\{s\in S(X)\mid \rk s\leqslant r\}$. In~\cite{AyzMasSolo} we proved the following. \begin{prop}[{\cite[Thm.1]{AyzMasSolo}}]\label{propAcyclicityOfPoset} Consider a $T$-action on a manifold $X$. Assume that one of the following two conditions is satisfied: \begin{itemize} \item all stabilizers of the action are connected; \item $R=\mathbb Q$. \end{itemize} Also assume that the action is equivariantly formal over $R$ and $j$-independent. Then, for any $r$, the poset $S(X)_r$ is $\min(j+1,r-1)$-acyclic, that is, \[ \widetilde{H}_i(|S(X)_r|;R)=0\mbox{ for } i\leqslant \min(j+1,r-1). \] \end{prop} \begin{ex}\label{exAcyclGKMactions} An important class of torus actions is given by GKM-actions, see Definition~\ref{definGKMmfd}. They are always $2$-independent and equivariantly formal, so we get $3$-acyclicity of their skeleta $S(X)_r$ (unless the dimension $r=\dim |S(X)_r|$ is $3$ or lower). \end{ex} \subsection{Cluster-permutohedra} A general review of cluster-permutohedra, as they appear in the study of isospectral matrix spaces, as well as of their relation to the (very similar) notion of graphicahedra, is given in~\cite{AyzBuchGraph}. Here we give the necessary definitions and recall the results needed for the calculations to follow. \begin{con}\label{conClusterPerm} As before, let $\Gamma$ be a connected graph with vertex set $V_\Gamma$, $|V_\Gamma|=n$, and edge set $E_\Gamma$.
We call an unordered partition $\ca{C}=\{V_1,\ldots,V_k\}$ of the set $V_\Gamma$ a \emph{clustering} if each induced subgraph $\Gamma_{V_i}$ is connected (the parts $V_i$ are ``the clusters'', which explains the name). The set $\ca{L}_\Gamma$ of all clusterings is partially ordered by refinement: $\ca{C}'\leq \ca{C}$ if each $V_i'\in \ca{C}'$ is a subset of some $V_j\in \ca{C}$. The poset $\ca{L}_\Gamma$ has the least element $\hat{0}=\{\{1\},\ldots,\{n\}\}$ and the greatest element $\{V_\Gamma\}$; it is graded by $\rk\ca{C}=n-|\ca{C}|$. It can be seen that the poset $\ca{L}_\Gamma$ is a geometric lattice: indeed, $\ca{L}_\Gamma$ is the lattice of flats of the graphical matroid corresponding to the graph $\Gamma$. Now we consider all possible bijections $p$ from $V_\Gamma$ to $[n]=\{1,\ldots,n\}$. We say that two bijections $p_1,p_2\colon V\to[n]$ are equivalent with respect to a clustering $\ca{C}$ (or simply $\ca{C}$-\emph{equivalent}), denoted $p_1\stackrel{\ca{C}}{\sim}p_2$, if $p_1$ and $p_2$ differ by permutations within the clusters $V_i$ of $\ca{C}$. In other words, \[ p_1^{-1}\circ p_2 \in \Sigma_{\ca{C}}=\Sigma_{V_1}\times\cdots\times\Sigma_{V_k}\subseteq\Sigma_{V_\Gamma}. \] An equivalence class of $\ca{C}$-equivalent bijections is called \emph{an assignment} for the clustering $\ca{C}$. Assignments for $\ca{C}$ are naturally identified with the cosets $\Sigma_V/\Sigma_{\ca{C}}$. Let $\Cl_\Gamma$ denote the set of all possible pairs $(\ca{C},A)$, where $\ca{C}\in \ca{L}_\Gamma$ is a clustering and $A\in \Sigma_V/\Sigma_{\ca{C}}$ is an assignment for this clustering. Notice that any refinement $\ca{C}'\leq \ca{C}$ induces an inclusion of subgroups $\Sigma_{\ca{C}'}\hookrightarrow \Sigma_{\ca{C}}$, hence a natural surjection of cosets \[ \pr_{\ca{C}'\leq \ca{C}}\colon \Sigma_V/\Sigma_{\ca{C}'}\to \Sigma_V/\Sigma_{\ca{C}}.
\] Define the partial order on the set $\Cl_\Gamma$ by setting $(\ca{C}',A')\leq (\ca{C},A)$ if and only if $\ca{C}'\leq \ca{C}$ and $\pr_{\ca{C}'\leq \ca{C}}(A')=A$. This poset is naturally graded by $\rk((\ca{C},A))=\rk\ca{C}$. \end{con} \begin{defin} The poset $\Cl_\Gamma$ is called \emph{the cluster-permutohedron} of the graph $\Gamma$. \end{defin} \begin{ex} If $\Gamma=\mathbb{I}_n$ is a simple path on $n$ vertices, the assignments for clusterings bijectively correspond to linearly ordered partitions of $[n]$. Hence $\Cl_{\mathbb{I}_n}$ is isomorphic to the poset of linearly ordered partitions of $[n]$. This poset is, in turn, isomorphic to the face poset of the classical \emph{permutohedron} $\Pe^{n-1}$, see~\cite[Ex.0.10]{Zieg}. This example explains the general name of cluster-permutohedra. A generalization of the permutohedron given by the poset of all cyclically ordered partitions of $[n]$ was introduced in the work of Panina~\cite{Panina} under the name \emph{cyclopermutohedron}. In our terms, this poset is $\Cl_{\Cy_n}$. \end{ex} \begin{rem} For any $\Gamma$, there exist $n!$ minimal elements (the elements of rank $0$) in $\Cl_{\Gamma}$; they correspond to the cosets of the trivial subgroup: $\Sigma_{V_\Gamma}/1\cong \Sigma_{V_\Gamma}$. For any $\sigma$ of rank $0$, the upper order ideal $(\Cl_{\Gamma})_{\geqslant \sigma}$ is isomorphic to the geometric lattice $\ca{L}_\Gamma$ of the graphical matroid corresponding to $\Gamma$. \end{rem} There exists another construction: \emph{the graphicahedron} of a graph. \begin{con} Let $\Gamma=(V_\Gamma,E_\Gamma)$ be a graph as before. The graphicahedron $\Gr_{\Gamma}$, as a set, consists of pairs $(D,A)$, where $D\subseteq E_\Gamma$ is any set of edges and $A$ is an assignment for the clustering given by the connected components of the subgraph $(V_\Gamma,D)$ of $\Gamma$. The order is induced from the natural inclusion order on $2^{E_\Gamma}$ similarly to Construction~\ref{conClusterPerm}.
\end{con} Graphicahedra were introduced in~\cite{Graphicahedron} and studied further in~\cite{SymGraph}. In~\cite{AyzBuchGraph} we described the precise relations between graphicahedra and cluster-permutohedra, and observed that cluster-permutohedra are better suited for the purposes of toric topology. \begin{rem} In the case of the graphicahedron, we still have $n!$ minimal elements corresponding to permutations of $V_\Gamma$. But in this case, for any minimal element $\sigma$, the upper order ideal $(\Gr_{\Gamma})_{\geqslant\sigma}$ is isomorphic to the Boolean lattice $2^{E_\Gamma}$. \end{rem} \begin{rem}\label{remTreeClusterGraphic} If $\Gamma$ is a tree, then the cluster-permutohedron $\Cl_\Gamma$ is isomorphic to the graphicahedron $\Gr_\Gamma$. On the other hand, if $\Gamma$ has cycles, the posets $\Cl_\Gamma$ and $\Gr_\Gamma$ are non-isomorphic; they even have different cardinalities. However, there exists a natural Galois insertion $\iota\colon \Cl_\Gamma\hookrightarrow\Gr_\Gamma$. In particular, for properly defined skeleta $(\Gr_\Gamma)_r$ and $(\Cl_\Gamma)_r$, this Galois insertion induces homotopy equivalences of the geometric realizations. On the level of $1$-skeleta, the maps $\iota\colon (\Cl_\Gamma)_1 \rightleftarrows (\Gr_\Gamma)_1\colon \rho$ are mutually inverse. Both $1$-skeleta $(\Cl_\Gamma)_1\cong (\Gr_\Gamma)_1$ are isomorphic to the Cayley graph of the permutation group $\Sigma_V$ with the transpositions $\{(i,j)\mid \{i,j\}\in E_\Gamma\}$ taken as the generating set. See~\cite[Thm.1]{AyzBuchGraph} for details. \end{rem} Our original motivation for introducing cluster-permutohedra in~\cite{AyzArrows} was the following statement, which provides the link to isospectral matrix spaces. It was proved in full generality in~\cite{AyzBuchGraph}. \begin{prop}[{\cite[Thm.1]{AyzBuchGraph}}]\label{propGirthAndGraphicahedron} Let $\Gamma$ be a graph on $n$ vertices. Assume that the isospectral space $M_{\Gamma,\lambda}$ is smooth.
Then the following hold for the torus action on this manifold. \begin{enumerate} \item The face poset $S(M_{\Gamma,\lambda})$ is isomorphic to the cluster-permutohedron $\Cl_\Gamma$. \item Let $g$ denote the girth of $\Gamma$. Then the torus action on $M_{\Gamma,\lambda}$ is $(g-1)$-independent. \item All stabilizers of the torus action on $M_{\Gamma,\lambda}$ are connected. \end{enumerate} \end{prop} Recall that the girth is the minimal length of a cycle in $\Gamma$ (set to $+\infty$ if $\Gamma$ is acyclic). \subsection{Obstructions to equivariant formality in the topology of graphicahedra} We have already proved that $M_{\St_3,\lambda}$ and $M_{\Cy_k,\lambda}$, $k\geqslant 4$, are not equivariantly formal. However, there is another way to observe these facts, coming from known results on graphicahedra. As mentioned in~\cite{SymGraph}, the graphicahedron $\Gr_{\St_3}$ corresponding to the claw graph $\St_3$ is isomorphic to the toroidal regular map $\{6,3\}_{(2,2)}$ described in~\cite[Sect.8.4]{CoxMoser}. Therefore, the $2$-skeleton $(\Gr_{\St_3})_2$ is homeomorphic to the $2$-torus $T^2$, and hence $H_1((\Gr_{\St_3})_2)\neq 0$. Since $\St_3$ is a tree, Remark~\ref{remTreeClusterGraphic} implies that $(\Gr_{\St_3})_2\cong (\Cl_{\St_3})_2$, so the $2$-skeleton of the cluster-permutohedron is a torus as well. The poset $\Cl_{\St_3}$ is isomorphic to the poset of faces of the isospectral manifold $M_{\St_3,\lambda}$. Since $H_1((\Cl_{\St_3})_2)\neq 0$, Proposition~\ref{propAcyclicityOfPoset} implies that $M_{\St_3,\lambda}$ is not equivariantly formal. The argument for the cycle graphs $\Cy_k$, $k\geqslant 4$, is quite similar. As shown in~\cite[Thm.8]{SymGraph} (and independently in~\cite{AyzPeriodic}), the poset $\Gr_{\Cy_k}$ is the face poset of a regular cell subdivision of the $(k-1)$-dimensional torus $T^{k-1}$. In particular, it follows that \[ H_1((\Gr_{\Cy_k})_2)\neq 0\mbox{ for }k\geqslant 4.
\] The homotopy equivalence between the skeleta of graphicahedra and cluster-permutohedra observed in Remark~\ref{remTreeClusterGraphic} implies that $H_1((\Cl_{\Cy_k})_2)\neq 0$ as well, since $|(\Gr_{\Cy_k})_2|\simeq|(\Cl_{\Cy_k})_2|$. Since the $2$-skeleton of the face poset of $M_{\Cy_k,\lambda}$ has nontrivial homology in degree $1$, Proposition~\ref{propAcyclicityOfPoset} again implies that $M_{\Cy_k,\lambda}$ is not equivariantly formal for $k\geqslant 4$. \section{The Net, the Sun, and computer algebra}\label{secSunAndNet} \subsection{Geometric realizations of graphicahedra} The arguments of the previous section suggest the following strategy to prove that $M_{\Net,\lambda}$ and $M_{\Sunn,\lambda}$ are not equivariantly formal. \begin{itemize} \item Construct the cluster-permutohedra $\Cl_{\Net}$ and $\Cl_{\Sunn}$ and their rank-selected skeleta. \item Compute the simplicial homology of the skeleta in low degrees. \item If at least one of the homology groups is nontrivial, then Proposition~\ref{propAcyclicityOfPoset} implies that the corresponding isospectral manifold is not equivariantly formal. \end{itemize} Notice that both graphs $\Net$ and $\Sunn$ have girth $3$; therefore the torus actions on the corresponding manifolds are $2$-independent by Proposition~\ref{propGirthAndGraphicahedron}. Proposition~\ref{propAcyclicityOfPoset} thus guarantees $3$-acyclicity of the $4$-skeleta and $2$-acyclicity of the $3$-skeleta, provided the action is equivariantly formal, see Example~\ref{exAcyclGKMactions}. To pursue the above strategy, we prepared a script in Sage, available at~\cite{AyzCode}\footnote{The file titled ``Homology\_Of\_Cluster-permutohedra''.}. The script was run on a local machine in a single thread; this required about 80\,Gb of RAM, since it involved linear-algebra computations with large matrices. The experiments revealed the following. \begin{prop} The following hold for the posets $\Cl_{\Net}$ and $\Cl_{\Sunn}$.
\begin{enumerate} \item $\widetilde{H}_i(|(\Cl_{\Net})_3|;\mathbb Z)=0$ for $i=0,1,2$; \item $\widetilde{H}_i(|(\Cl_{\Sunn})_3|;\mathbb Z)=0$ for $i=0,1$; \item over $\mathbb Z_2$ and $\mathbb Q$, the Betti numbers of $|(\Cl_{\Net})_4|$ are $\beta_1=\beta_2=0$, while $\beta_3=5$ and $\beta_4=7$; \item over $\mathbb Z_2$, the Betti numbers of $|(\Cl_{\Sunn})_4|$ are $\beta_1=\beta_2=0$, while $\beta_3=5$ and $\beta_4=310$. \end{enumerate} \end{prop} Items 3 and 4 of this proposition show that $|(\Cl_{\Net})_4|$ and $|(\Cl_{\Sunn})_4|$ are not $3$-acyclic over $\mathbb Z_2$, and therefore they are not $3$-acyclic over $\mathbb Z$. According to Proposition~\ref{propAcyclicityOfPoset}, $M_{\Net,\lambda}$ and $M_{\Sunn,\lambda}$ are not equivariantly formal, at least over $\mathbb Z_2$ and $\mathbb Z$. This already proves Lemmata~\ref{lemNetGraph} and~\ref{lemSunGraph} for these coefficient rings. However, in the next part of the paper we develop other approaches which independently confirm the result. \subsection{ABFP sequence} In this subsection we review another block of facts known in toric topology. A more detailed exposition of some of these facts can be found in~\cite{AyzMasEquiv} and~\cite{AyzMasSolo}. Let a $k$-dimensional compact torus act on a manifold $X$. Consider the equivariant filtration \begin{equation}\label{eqXfiltr} X_0\subset X_1\subset X_2\subset\cdots\subset X_k=X, \end{equation} where $X_j$ is the union of all orbits of dimension at most $j$. Notice that if the action is noneffective, the filtration stabilizes at $X$ earlier than at the $k$-th step. We have $X_0=X^T$, the fixed point set. \begin{rem}\label{remFiltrationIsUnionFaces} The filtration term $X_j$ is the union of all invariant submanifolds of rank $j$ in $X$, see Construction~\ref{conInvarFacesEtc}. If $X$ is equivariantly formal, then every invariant submanifold is a face submanifold, see Lemma~\ref{lemInvarIsFormal}. In this case we have $X_j=\bigcup_{\rk F=j}X_F$.
\end{rem} Filtration~\eqref{eqXfiltr} induces a filtration of the orbit space $Q=X/T$: \begin{equation}\label{eqQfiltr} Q_0\subset Q_1\subset Q_2\subset\cdots\subset Q_k=Q,\qquad Q_j=X_j/T. \end{equation} If $X$ is equivariantly formal, we have $Q_j=\bigcup_{\rk F=j}F$. The following result was proved by Franz and Puppe in~\cite{FP} in the most general form; they refer to Atiyah and Bredon, who proved it in equivariant $K$-theory and rational cohomology, respectively. \begin{prop}[Atiyah--Bredon--Franz--Puppe exact sequence] Let a $T$-action on $X$ be equivariantly formal and assume that \begin{itemize} \item either $R=\mathbb Q$, \item or all stabilizers of the action are connected, and $R=\mathbb Z$ or any field. \end{itemize} Then there exists a long exact sequence of $H^*(BT;R)$-modules \begin{multline}\label{eqABseqForX} 0\to H^*_T(X;R)\stackrel{i^*}{\to} H^*_T(X_0;R)\stackrel{\delta_0}{\to} H^{*+1}_T(X_1,X_0;R)\stackrel{\delta_1}{\to}\cdots\\\cdots \stackrel{\delta_{k-2}}{\to}H^{*+k-1}_T(X_{k-1},X_{k-2};R)\stackrel{\delta_{k-1}}{\to}H^{*+k}_T(X,X_{k-1};R)\to 0. \end{multline} Here the first map is induced by the inclusion $i\colon X_0\hookrightarrow X$, and all other maps $\delta_j$ are the connecting homomorphisms in the long exact sequences of equivariant cohomology of the triples $X_{j-1}\subset X_j\subset X_{j+1}$. \end{prop} \begin{rem} The exactness at the first terms \[ 0\to H^*_T(X;R)\stackrel{i^*}{\to} H^*_T(X_0;R)\stackrel{\delta_0}{\to} H^{*+1}_T(X_1,X_0;R) \] for equivariantly formal actions is a classical result, known as the Chang--Skjelbred theorem~\cite{ChSk}. \end{rem} One of the important consequences of the Chang--Skjelbred theorem is the possibility to compute the equivariant cohomology $H^*_T(X;R)$ as the kernel of the homomorphism $\delta_0$. GKM-theory is based on this observation.
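As a toy illustration of computing $H^*_T$ as $\ker\delta_0$ (a \texttt{sympy} sketch; it is not related to the computations in this paper): for the rotation action of $T^1$ on $S^2$ there are two fixed points, and $\ker\delta_0$ consists of the pairs $(f,g)\in\mathbb Q[t]^2$ with $f\equiv g\bmod t$, where $t$ is the weight of the action. This module is free over $\mathbb Q[t]$ with basis $(1,1)$ and $(t,0)$, in accordance with the equivariant formality of $S^2$.

```python
import sympy as sp

t = sp.symbols('t')

# A compatible pair (f, g): f == g mod t, i.e. equal constant terms.
f = 3*t**2 + 5*t + 7
g = t**3 - 2*t + 7
assert sp.rem(f - g, t, t) == 0

# Decomposition over the basis (1, 1) and (t, 0) of the free Q[t]-module:
# (f, g) = c0 * (1, 1) + c1 * (t, 0)  with  c0 = g,  c1 = (f - g) / t.
c0 = g
c1 = sp.cancel((f - g) / t)
assert sp.expand(c0 + c1 * t) == sp.expand(f)   # the first coordinate is f
assert c0 == g                                  # the second coordinate is g
print("c0 =", c0, "  c1 =", c1)
```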
We review the topological version of GKM-theory, which does not assume an algebraic torus action on a complex manifold, as in the original formulation of Goresky--Kottwitz--MacPherson~\cite{GKM}. Our exposition is compatible with the one given by Kuroki~\cite{Kur}. \subsection{Topological GKM-theory} In the following, we assume that all manifolds are orientable. \begin{defin}\label{definGKMmfd} A manifold $X$ with a $T$-action is called a GKM-manifold (over $R$) if the following holds. \begin{enumerate} \item The action is equivariantly formal (over $R$). \item The fixed point set $X^T$ is finite. \item The action is $2$-independent, that is, at any fixed point $x\in X^T$, any two tangent weights are non-collinear. \end{enumerate} \end{defin} The third condition implies that the equivariant $1$-skeleton $X_1$ is the union of finitely many invariant 2-spheres $S^2_{pq}$ connecting some pairs $\{p,q\}$ of fixed points. The torus $T$ acts on a sphere $S^2_{pq}$ with some weight $\alpha_{pq}\in \Hom(T,S^1)$. Therefore the structure of the action of $T$ on $X_1$ is encoded by the GKM-graph $G(X)$, which contains the following information: \begin{itemize} \item Vertices of $G(X)$ correspond to fixed points of the action. \item For any invariant 2-sphere $S^2_{pq}$ there is an edge $e_{pq}$ between $p$ and $q$. \item An edge $e_{pq}$ is labelled with the weight $\alpha_{pq}\in \Hom(T,S^1)\cong H^2(BT;\mathbb Z)$. \end{itemize} \begin{rem} Note that the construction of the GKM-graph does not use the fact that $X$ is equivariantly formal. Hence, with a slight abuse of terminology, we can construct GKM-graphs for manifolds satisfying properties 2--3 of Definition~\ref{definGKMmfd}, even if they are not equivariantly formal. In particular, each isospectral matrix manifold $M_{\Gamma,\lambda}$ satisfies items 2--3, see Proposition~\ref{propGirthAndGraphicahedron}. Therefore the GKM-graph $G(M_{\Gamma,\lambda})$ is well defined. 
Combinatorially, this graph coincides with the 1-skeleton $(\Cl_\Gamma)_1$ of the cluster-permutohedron. According to Remark~\ref{remTreeClusterGraphic}, the underlying graph of this GKM-graph is nothing but the Cayley graph of $\Sigma_n$ generated by the transpositions corresponding to edges of $\Gamma$. \end{rem} The Chang--Skjelbred theorem asserts that, in the equivariantly formal case, the information about equivariant cohomology is already contained in the equivariant 1-skeleton. For GKM-manifolds this implies the following principal result. \begin{prop}[GKM theorem~\cite{GKM}]\label{propGKMthm} Let $X$ be a GKM-manifold (over $R$) and $G(X)$ its GKM-graph with the vertex set $\ca{V}$, the edge set $\ca{E}$, and the weights $\alpha=\{\alpha_e\mid e\in \ca{E}\}$. Then there is an isomorphism of $H^*(BT;R)$-algebras: \[ H_T^*(X;R)\cong \{\phi\colon \ca{V}\to H^*(BT;R)\mid\phi(p)\equiv\phi(q)\mod (\alpha_{e})\mbox{ for any } e\in \ca{E}\}, \] where the weight $\alpha_{e}$ is considered as an element of $H^2(BT;R)\cong H^2(BT;\mathbb Z)\otimes R$. \end{prop} Here, as before, we assume that either all stabilizers of the torus action are connected and $R$ is arbitrary, or $R=\mathbb Q$. \begin{rem}\label{remEqToOrdinaryRel} For the ordinary cohomology ring we have the graded ring isomorphism $H^*(X;R)\cong H^*_T(X;R)\otimes_{H^*(BT;R)}R$, by the equivariant formality of $X$, see Lemma~\ref{lemEquivFormFixedPoints}. Since $H^*_T(X;R)$ is free over $H^*(BT;R)$, we also have an isomorphism of graded $H^*(BT;R)$-modules \begin{equation}\label{eqIsoModules} H^*_T(X;R)\cong H^*(X;R)\otimes_RH^*(BT;R). \end{equation} \end{rem} It is convenient to perform computations with the Hilbert--Poincare series of graded modules. 
In most cases, the graded modules under consideration are concentrated in even degrees, so we consider series of the form \[ \Hilb(H^*(X),\sqrt{t})=\sum_{i=0}^{n}\beta_{2i}(X)\cdot t^i, \mbox{ and } \Hilb(H^*_T(X),\sqrt{t})=\sum_{i=0}^{+\infty}\dim H_T^{2i}(X)\cdot t^i, \] neglecting odd degrees. In this notation, isomorphism~\eqref{eqIsoModules} implies \begin{equation}\label{eqHilbertsRelation} \Hilb(H^*_T(X),\sqrt{t})=\dfrac{\Hilb(H^*(X),\sqrt{t})}{(1-t)^k},\mbox{ where }k=\dim T. \end{equation} \begin{rem}\label{remGenSeries} If the first $r$ equivariant Betti numbers $\dim H^{2i}_T(X;R)$ are known, then, according to~\eqref{eqHilbertsRelation}, the first $r$ ordinary Betti numbers can be computed by expanding the polynomial \begin{equation}\label{eqPolynomialsExpansion} \left(\sum_{i=0}^{r}\dim H^{2i}_T(X;R)\cdot t^i\right)\cdot(1-t)^k=\beta_0+\beta_2t+\beta_4t^2+\cdots+\beta_{2r}t^r+o(t^r). \end{equation} \end{rem} \begin{algor}\label{algorGKM} Proposition~\ref{propGKMthm} and Remark~\ref{remGenSeries} provide an algorithm to compute the first $r$ Betti numbers of a GKM-manifold: \begin{enumerate} \item For each $i=0,1,\ldots,r$ initialize the linear map \[ L_i\colon \bigoplus_{v\in \ca{V}}H^{2i}(BT;R)\to \bigoplus_{e\in\ca{E}}(H^*(BT;R)/(\alpha_e))_{2i}, \] where $(H^*(BT;R)/(\alpha_e))_{2i}$ is the $2i$-th graded component of the quotient algebra. Any homogeneous polynomial $P_v$ of degree $i$ from the summand $H^{2i}(BT;R)$ attached to $v\in\ca{V}$ is mapped to the sum of $[e\colon v]\cdot P_v\mod \alpha_e$ over all edges $e$ incident to $v$. Here $[e\colon v]$ are the incidence signs, defined by arbitrarily chosen orientations of the edges. \item Compute $\dim H_T^{2i}(X;R)=\dim\Ker L_i$. \item Compute the ordinary Betti numbers $\beta_{2i}(X)$ for $i=0,\ldots,r$ using~\eqref{eqPolynomialsExpansion}. \end{enumerate} \end{algor} \begin{rem}\label{remBaird} Notice that step 1 in Algorithm~\ref{algorGKM} corresponds to the computation of the degree-0 cohomology of a certain sheaf on a GKM-graph. 
The stalks of this sheaf at the vertices are copies of the polynomial algebra, and the stalk at an edge $e$ is the quotient of the polynomial algebra by the ideal $(\alpha_e)$. This sheaf, called the GKM-sheaf, was introduced by Baird in~\cite{Baird}. Since the graded components of this sheaf are finite-dimensional, the problem of computing Betti numbers can, in principle, be solved algorithmically. \end{rem} \begin{rem}\label{remBettiMorse} The papers~\cite{GZ} and~\cite{BGH} provide an alternative way to compute Betti numbers of GKM-manifolds, and of more general abstract GKM-graphs. The technique is based on a combinatorial analogue of Morse theory, and this approach is computationally much faster. However, the result that the Morse-type Betti numbers coincide with the Betti numbers computed by Algorithm~\ref{algorGKM} is proved only for the class of ``inflection-free'' graphs. The 1-skeleta of the cluster-permutohedra considered in our paper do not satisfy this condition, so we cannot expect the combinatorial Morse approach to give meaningful results. \end{rem} \subsection{GKM-theory and the Net} Now we are ready to prove that $M_{\Net,\lambda}$ is not equivariantly formal over $\mathbb Z_2$, $\mathbb Q$, and $\mathbb Z$. \begin{proof}[Proof of Lemma~\ref{lemNetGraph}] Assume that $M_{\Net,\lambda}$ is equivariantly formal, so that its odd-degree Betti numbers vanish. Then $M_{\Net,\lambda}$ is a GKM-manifold, and its even-degree Betti numbers can be computed by Algorithm~\ref{algorGKM}. We ran a script~\cite{AyzCode}\footnote{the file titled ``GKM\_for\_Cluster-Permutohedra''} to find $\beta_0,\beta_2,\beta_4,\beta_6$ and obtained the result shown in Table~\ref{tableNetGKMbetti}. The Betti numbers are the same over $\mathbb Z_2$ and over $\mathbb Q$. 
\begin{table}[h] \centering \begin{tabular}{|c||c|c|c|c|c|c|c|} \hline $i$&$0$&$2$&$4$&$6$&$8$&$10$&$12$\\ \hline $\beta_i$ & $1$ & $20$ & $146$ & $396$ & $146$ & $20$ & $1$ \\ \hline \end{tabular} \caption{Betti numbers of $M_{\Net,\lambda}$ if the GKM theorem were applicable}\label{tableNetGKMbetti} \end{table} Since $M_{\Net,\lambda}$ is a closed orientable $12$-dimensional manifold, Poincare duality allows us to recover the remaining Betti numbers. It can be seen that $\sum_{i}\beta_i(M_{\Net,\lambda})=730$. This contradicts~\eqref{eqTotalBettiNumberFormal}, since $\chi(M_{\Net,\lambda})=6!\neq 730$. \end{proof} \begin{rem}\label{remSizesOfMatrices} For a general graph $\Gamma=([n],E_\Gamma)$, our script computes the equivariant Betti numbers of $M_{\Gamma,\lambda}$. We consider the noneffective action of $T^n$ on $M_{\Gamma,\lambda}$, since it is easier to generate the matrix $L_i$ in Algorithm~\ref{algorGKM} than the matrix corresponding to the effective action of $T^n/\Delta(T^1)$. The matrix $L_i$ has size $A_i\times B_i$, where $A_i={n+i-1\choose i}\cdot n!$ (the number of degree $i$ monomials in $n$ commuting variables times the number of vertices of the graphicahedron), and $B_i={n+i-2\choose i}\cdot \frac{n!|E_\Gamma|}{2}$ (the number of degree $i$ monomials in $n-1$ commuting variables times the number of edges of the graphicahedron). \end{rem} In order to obtain a similar contradiction for $M_{\Sunn,\lambda}$, we would have to compute the Betti numbers up to $\beta_8$, since $\dim_\mathbb R M_{\Sunn,\lambda}=18$. We could not perform this calculation on an ordinary computer, so we developed another approach leading to a contradiction. \subsection{ABFP sequence and the Sun} In this subsection we assume that all stabilizers of a $T$-action on a manifold $X$ are connected, so we need not worry about the coefficient ring. This assumption holds for all manifolds $M_{\Gamma,\lambda}$ according to Proposition~\ref{propGirthAndGraphicahedron}. 
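For orientation, the size estimates of Remark~\ref{remSizesOfMatrices} are easy to tabulate; the following sketch (not the authors' script) prints them for the Net, where $n=6$ and $|E_\Gamma|=6$:

```python
from math import comb, factorial

def size_of_L(n, n_edges, i):
    """Size A_i x B_i of the matrix L_i from the GKM algorithm."""
    # degree-i monomials in n variables, one copy per vertex of the graphicahedron
    A = comb(n + i - 1, i) * factorial(n)
    # degree-i monomials in n-1 variables, one copy per edge of the graphicahedron
    B = comb(n + i - 2, i) * factorial(n) * n_edges // 2
    return A, B

# the Net: n = 6 vertices, |E_Gamma| = 6 edges
for i in range(4):
    print(i, size_of_L(6, 6, i))
```

Already for $i=3$ this gives a matrix of size $40320\times 75600$, which explains why the analogous computation up to $\beta_8$ for the Sun is out of reach on an ordinary computer.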
We formulate a technical statement about the structure of the orbit space of the action; the proof of this statement will be used in the subsequent calculations. \begin{prop}\label{propInductiveRelative} Let a $T$-action on $X$ be equivariantly formal. Assume that the following information is known: \begin{itemize} \item The face poset $S(X)$ of the action. \item The Betti numbers $\beta_i(X_F)$ and the ranks of all face submanifolds of $X$, including $X$ itself. \end{itemize} Then there exists an algorithm to compute the numbers $\rk H^i(Q,Q_{-1})$, where $Q=X/T$ is the orbit space and $Q_{-1}$ is the union of all its proper faces. \end{prop} \begin{proof} The proof is by induction on $k=\dim T$, the dimension of the effectively acting torus. The base $k=0$ is trivial, since in this case $Q=X$, $Q_{-1}=\varnothing$, and the Betti numbers of $X$ are known by assumption. Now consider an arbitrary $k>0$. Since the action is equivariantly formal, we have the ABFP sequence~\eqref{eqABseqForX}. This is a long exact sequence of graded vector spaces; therefore, taking Euler characteristics in each degree, we get \begin{equation}\label{eqABFPfoHilb} \Hilb(H_T^*(X);\sqrt{t})=\sum_{j=0}^{k}(-1)^j\Hilb(H_T^{*+j}(X_j, X_{j-1});\sqrt{t}). \end{equation} Notice that $X_j$ is the union of all face submanifolds $X_F$ of rank $j$ (see Remark~\ref{remFiltrationIsUnionFaces}), so we have an isomorphism \[ H_T^*(X_j, X_{j-1})\cong \bigoplus_{\rk F=j} H_T^*(X_F, (X_F)_{-1}), \] where $(X_F)_{-1}$ denotes the union of all proper face submanifolds of $X_F$; by definition it lies in $X_{j-1}$. Furthermore, there is an effective action of $T/T_F$ on $X_F$, which is free\footnote{It would only be almost free if we had not required all stabilizers to be connected.} on $X_F\setminus (X_F)_{-1}$. Therefore, \[ H_T^*(X_F, (X_F)_{-1})\cong H^*(F, F_{-1})\otimes H^*(BT_F). 
\] Summarizing the above isomorphisms, we get \begin{equation}\label{eqHilbIntermediate} \Hilb(H_T^{*+j}(X_j, X_{j-1});\sqrt{t})=\sum_{\rk F=j}\dfrac{\Hilb(H^{*+j}(F, F_{-1});\sqrt{t})}{(1-t)^{k-j}}. \end{equation} Notice that in the case $j=k$ the sum on the r.h.s. consists of a single summand, with trivial denominator. This last summand equals $\Hilb(H^{*+k}(Q, Q_{-1});\sqrt{t})$. Substituting~\eqref{eqHilbIntermediate} and~\eqref{eqHilbertsRelation} into~\eqref{eqABFPfoHilb}, we get \[ \dfrac{\Hilb(H^*(X);\sqrt{t})}{(1-t)^k}=\sum_{j=0}^{k}\dfrac{(-1)^j}{(1-t)^{k-j}}\sum_{\rk F=j}\Hilb(H^{*+j}(F, F_{-1});\sqrt{t}). \] Multiplying by $(1-t)^k$ and separating the last term, we obtain \begin{equation}\label{eqABInter} \underbrace{\sum_i\beta_i(X)t^i}_{B_X(t)} = \underbrace{\sum_{j=0}^{k-1}\sum_{\rk F=j}\Hilb(H^{*+j}(F, F_{-1});\sqrt{t})(t-1)^j}_{\Inter_X(t)} + \underbrace{\Hilb(H^{*+k}(Q, Q_{-1});\sqrt{t})}_{A_X(t)}(t-1)^k. \end{equation} Here the notation $\Inter_X(t)$ stands for ``the intermediate polynomial''. We need to prove that $A_X(t)$ is computable. Notice that \[ A_X(t)=\dfrac{B_X(t)-\Inter_X(t)}{(t-1)^k}, \] and $B_X(t)$ is known. For the polynomial $\Inter_X(t)$, there is an expression \begin{equation}\label{eqInterInductive} \Inter_X(t)=\sum_{\rk F<k}A_{X_F}(t)(t-1)^{\rk F}, \end{equation} which follows from the definitions of all the polynomials involved. The terms on the r.h.s. of \eqref{eqInterInductive} have already been computed by induction, since all proper face submanifolds have ranks $<k$. This proves the statement. \end{proof} \begin{rem}\label{remOnMasPanovHomologyCells} Using the inductive argument described in the proof of Proposition~\ref{propInductiveRelative}, one can prove that whenever $T^n$ acts on $X^{2n}$ equivariantly formally over $\mathbb Z$ with isolated fixed points, there holds \[ H^*(F,F_{-1};\mathbb Z)\cong H^*(D^{\rk F},\partial D^{\rk F};\mathbb Z). 
\] Therefore the orbit type filtration of $Q=X/T$ is a homology cell complex, and $Q$ itself is a homology cell. This result was first proved by Masuda and Panov in~\cite{MasPan}. A proof utilizing the ABFP sequence was proposed by the first author in~\cite{AyzMasEquiv}. \end{rem} Let $X_h=M_{\Gamma(h),\lambda}$ be the isospectral manifold of staircase matrices determined by a Hessenberg function $h\colon[n]\to[n]$. According to~\cite{AyzStaircase}, $X_h$ is equivariantly formal, and its even-degree Betti numbers can be computed through Morse theory. More precisely, \begin{equation}\label{eqBettiHessenberg} \Hilb(H^*(X_h);\sqrt{t})=\sum_{\sigma\in\Sigma_n}t^{\inv_h(\sigma)}, \end{equation} where $\inv_h(\sigma)=\#\{1\leqslant i<j\leqslant h(i)\mid \sigma(i)>\sigma(j)\}$ is the number of inversions in a permutation $\sigma$ subject to the Hessenberg function $h$. Since the face poset $S(X_h)$ is known (it is the cluster-permutohedron $\Cl_{\Gamma(h)}$), and every face submanifold of $X_h$ is again a manifold $X_{h'}$ for some Hessenberg function $h'$, Proposition~\ref{propInductiveRelative} implies the following \begin{cor} For any indifference graph $\Gamma(h)$ there exists an algorithm to compute $\rk H^i(Q,Q_{-1})$, where $Q=M_{\Gamma(h),\lambda}/T$. \end{cor} \begin{rem}\label{remWeComputedHessenbergs} We implemented the algorithm described in the proof of Proposition~\ref{propInductiveRelative} in~\cite{AyzCode}\footnote{the file titled ``Characteristics\_of\_Hess\_Varieties''}. It outputs the list of all indifference graphs with up to 5 vertices, shows the ordinary Betti numbers of the corresponding manifolds $M_{\Gamma(h),\lambda}$ (the polynomials $B_{M_{\Gamma(h),\lambda}}(t)$), and computes the relative cohomology of the orbit spaces (the polynomials $A_{M_{\Gamma(h),\lambda}}(t)$). \end{rem} \begin{ex} Our computational results can be checked in the case $\Gamma=K_3$, corresponding to the full flag variety $\Fl_3$. This particular case is well studied. 
It was proved in~\cite{BT2} that the orbit space $Q=\Fl_3/T^2$ is homeomorphic to the 4-sphere $S^4$. On the other hand, $Q_{-1}$ is just the GKM-graph of the torus action on $\Fl_3$. The latter, as an ordinary graph, is known to be isomorphic to the complete bipartite graph $K_{3,3}$, see~\cite{GHZ}. Therefore \[ H^j(Q,Q_{-1};\mathbb Z)\cong H^j(S^4,K_{3,3};\mathbb Z)\cong\begin{cases} \mathbb Z, & \mbox{if } j=4 \\ \mathbb Z^4, & \mbox{if } j=2 \\ 0, & \mbox{otherwise}, \end{cases} \] as follows from the long exact sequence of the pair $(S^4,K_{3,3})$. Our script outputs $A(t)=4+t$ for this graph, as expected (note the degree shift in the definition of $A(t)$). \end{ex} Finally, we are ready to prove that $M_{\Sunn,\lambda}$ is not equivariantly formal. \begin{proof}[Proof of Lemma~\ref{lemSunGraph}] Within this proof, we write $M$ for $M_{\Sunn,\lambda}$ to simplify notation. Again, we assume, on the contrary, that the effective action of $T^5$ on $M$ is equivariantly formal, and show that this leads to a contradiction. Equivariant formality implies that the arguments from the proof of Proposition~\ref{propInductiveRelative} are applicable to $M$; in particular, relation~\eqref{eqABInter} holds true: \begin{equation}\label{eqInterABforSun} B_M(t)=\Inter_M(t)+A_M(t)\cdot (t-1)^5. \end{equation} The polynomial $\Inter_M(t)$ is expressed as the sum over all proper face submanifolds of~$M$. However, these submanifolds correspond to proper induced subgraphs of $\Sunn$, which are all indifference graphs. Therefore the topological characteristics of their isospectral manifolds can be computed by induction, see Remark~\ref{remWeComputedHessenbergs}. Gathering all computations together, we get \[ \Inter_M(t)=306-1362t+2322t^2-1560t^3+540t^4+384t^5+72t^6+18t^7. \] Let $B_M(t)=\sum_{j=0}^{9}b_jt^j$ and $A_M(t)=\sum_{j=0}^{4}a_jt^j$, where $a_j$ and $b_j$ are unknown. 
Relation~\eqref{eqInterABforSun} and Poincare duality $b_j=b_{9-j}$, $j=0,\ldots,4$, give a linear system of $15$ equations in the $15$ variables $a_j$, $b_j$. Unfortunately, this system is degenerate: there exists a 2-parameter family of solutions; in particular, we have \begin{equation}\label{eqB0B1B2} b_0 = -r_1 + 306,\quad b_1 = 5r_1 + r_2 - 1530,\quad b_2 = -10r_1 - 5r_2 + 3120, \end{equation} for arbitrary $r_1,r_2\in\mathbb Z$ (the other values $b_j$ and $a_j$ are not essential for the argument). Next, we have $b_0=1$ since $M$ is connected, therefore $r_1=305$, while $r_2$ remains undetermined. Now let us compute $\beta_{2j}(M)$ for $j\leqslant 2$ by Algorithm~\ref{algorGKM} using GKM-theory, see~\cite{AyzCode}\footnote{the file titled ``GKM\_for\_Cluster-Permutohedra''}. The calculation gives \begin{equation}\label{eqBeta24Sunn} \beta_2(M)=5,\quad \beta_4(M)=29. \end{equation} Setting $b_1=\beta_2(M)=5$ in~\eqref{eqB0B1B2} determines the value $r_2=10$, and we get $b_2=20$. This contradicts $\beta_4(M)=29$ obtained from GKM-theory. This inconsistency shows that the assumption of equivariant formality of $M=M_{\Sunn,\lambda}$ was false. \end{proof} Notice that most arguments in the proof above are purely combinatorial; hence they do not depend on the coefficient field. The calculation of Betti numbers by Algorithm~\ref{algorGKM} outputs the same values of $\beta_2$ and $\beta_4$ for the fields $\mathbb Q$ and $\mathbb Z_2$. \section{Real symmetric matrices and graph invariants}\label{secLast} \subsection{Real symmetric matrices} In the previous parts of the paper we considered isospectral manifolds of Hermitian complex matrices. We can do the same calculations for their real versions: the manifolds of isospectral real symmetric matrices. The proofs follow the same lines, but some references should be replaced by their discrete-torus versions, proved recently. 
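Before passing to the real case, we note that the linear-algebra step in the proof of Lemma~\ref{lemSunGraph} can be verified mechanically. The following sketch (not the authors' script; the polynomial $\Inter_M(t)$ and the constraints $b_0=1$, $b_1=\beta_2(M)=5$ are taken from the proof above) solves the system over the rationals:

```python
from fractions import Fraction
from math import comb

# Unknowns: b0..b9 (coefficients of B_M) and a0..a4 (coefficients of A_M).
NB, NA = 10, 5
N = NB + NA

inter = [306, -1362, 2322, -1560, 540, 384, 72, 18, 0, 0]  # Inter_M(t)
w = [comb(5, m) * (-1) ** (5 - m) for m in range(6)]       # coefficients of (t-1)^5

rows = []  # each row: N coefficients plus a right-hand side
# B_M(t) = Inter_M(t) + A_M(t)*(t-1)^5, compared coefficient by coefficient
for d in range(NB):
    row = [Fraction(0)] * (N + 1)
    row[d] = Fraction(1)
    for j in range(NA):
        if 0 <= d - j <= 5:
            row[NB + j] = Fraction(-w[d - j])
    row[N] = Fraction(inter[d])
    rows.append(row)
# Poincare duality: b_d = b_{9-d}
for d in range(5):
    row = [Fraction(0)] * (N + 1)
    row[d], row[9 - d] = Fraction(1), Fraction(-1)
    rows.append(row)
# b_0 = 1 (M is connected) and b_1 = beta_2(M) = 5 (from the GKM computation)
for d, val in [(0, 1), (1, 5)]:
    row = [Fraction(0)] * (N + 1)
    row[d], row[N] = Fraction(1), Fraction(val)
    rows.append(row)

# Gauss--Jordan elimination over the rationals
rk = 0
for col in range(N):
    piv = next((r for r in range(rk, len(rows)) if rows[r][col] != 0), None)
    if piv is None:
        continue
    rows[rk], rows[piv] = rows[piv], rows[rk]
    rows[rk] = [x / rows[rk][col] for x in rows[rk]]
    for r in range(len(rows)):
        if r != rk and rows[r][col] != 0:
            f = rows[r][col]
            rows[r] = [x - f * y for x, y in zip(rows[r], rows[rk])]
    rk += 1

sol = {next(c for c in range(N) if r[c] != 0): r[N] for r in rows[:rk]}
b = [sol[d] for d in range(NB)]
print('B_M(t) coefficients:', b)
```

The solution is unique once $b_0$ and $b_1$ are fixed, and the computed coefficient $b_2=20$ disagrees with $\beta_4(M)=29$, reproducing the contradiction.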
\begin{con} Let $M_{\Gamma,\lambda}^{\mathbb R}$ denote the space of all $\Gamma$-shaped real symmetric matrices with the given spectrum $\lambda$. If $\lambda$ is generic, $M_{\Gamma,\lambda}^{\mathbb R}$ is a smooth closed manifold of real dimension $|E_\Gamma|$. Let $T_\mathbb R$ denote the discrete group $\{\pm 1\}^n\cong\mathbb Z_2^n$; we call it a 2-torus, or a discrete torus. The group $T_\mathbb R$ can be identified with the subgroup of $O(n)$ which consists of diagonal matrices with $\pm 1$ on the diagonal. Then $T_\mathbb R$ acts on symmetric matrices by conjugation: this action preserves both the sparseness type $\Gamma$ and the spectrum. Therefore, we have a smooth $T_\mathbb R$-action on $M_{\Gamma,\lambda}^{\mathbb R}$. Notice that the fixed point set $(M_{\Gamma,\lambda}^{\mathbb R})^{T_\mathbb R}$ consists of the diagonal matrices with the $\lambda_i$'s on the diagonal; there are $n!$ isolated fixed points. \end{con} We have the following real version of Theorems~\ref{thmMainDtypeChar} and~\ref{thmNotEquivFormal}. \begin{thm}\label{thmRealShit} The following are equivalent: \begin{enumerate} \item A manifold $M_{\Gamma,\lambda}^{\mathbb R}$ admits a Morse--Smale system whose stationary points are the diagonal matrices. \item The 2-torus action on $M_{\Gamma,\lambda}^{\mathbb R}$ is equivariantly formal over $\mathbb Z_2$. \item $\Gamma$ is an indifference graph. \end{enumerate} \end{thm} Before proving the theorem, we make several important remarks. First, we need to explain what is meant by equivariant formality in the case of a discrete torus. The definition of equivariant formality can be rewritten by replacing $T$ with $T_\mathbb R$ and the coefficient ring with the field $\mathbb Z_2$. However, there is an equivalent way to define equivariant formality which is more classical, as well as more convenient in practice. The proof of the equivalence of these two approaches can be found, e.g., in~\cite[Ch.IV(B) Cor.2]{Hsiang}. 
\begin{con} Let a 2-torus $T_\mathbb R$ act on a space\footnote{Some assumptions should be imposed on the space; they are certainly satisfied for smooth actions on compact manifolds.} $X$ with $m$ isolated fixed points. Smith theory implies \begin{equation}\label{eqSmithMain} m=\dim_{\mathbb Z_2}H_*(X^{T_\mathbb R};\mathbb Z_2)\leqslant \dim_{\mathbb Z_2}H_*(X;\mathbb Z_2). \end{equation} If equality holds in~\eqref{eqSmithMain}, the action is called \emph{equivariantly formal} over $\mathbb Z_2$. \end{con} \begin{lem} Let a 2-torus $T_\mathbb R$ act on $X$ with $m$ isolated fixed points. If $X$ has a cell structure with $m$ cells, then the 2-torus action is equivariantly formal over $\mathbb Z_2$. \end{lem} \begin{proof} The cell structure implies \[ \dim_{\mathbb Z_2}H_*(X;\mathbb Z_2)\leqslant \dim_{\mathbb Z_2}C_*(X;\mathbb Z_2)=m=\dim_{\mathbb Z_2} H_*(X^{T_\mathbb R};\mathbb Z_2), \] which proves the statement. \end{proof} Morse theory then implies \begin{cor}\label{corMorseFormReal} Let a 2-torus $T_\mathbb R$ act on $X$ with $m$ isolated fixed points. If there is a Morse--Smale flow on $X$ with $m$ stationary points, then the action is equivariantly formal over $\mathbb Z_2$. \end{cor} Now let us prove Theorem~\ref{thmRealShit}. \begin{proof}[Proof of Theorem~\ref{thmRealShit}] The 2-torus action on $M_{\Gamma,\lambda}^{\mathbb R}$ has $n!$ isolated fixed points. \textbf{(3)$\Rightarrow$(1).} If $\Gamma$ is an indifference graph, then, possibly after some relabelling of vertices, we have $\Gamma=\Gamma(h)$ for some Hessenberg function. There is a Toda flow on $M_{\Gamma,\lambda}^{\mathbb R}$ having $n!$ stationary points. See~\cite{dMP} for details and generalizations to other Lie types. \textbf{(1)$\Rightarrow$(2).} Apply Corollary~\ref{corMorseFormReal}. \textbf{(2)$\Rightarrow$(3).} We prove that if $\Gamma$ is not an indifference graph, then $M_{\Gamma,\lambda}^{\mathbb R}$ is not equivariantly formal. 
The complete analogue of Lemma~\ref{lemInvarIsFormal} holds for 2-torus actions; the proof follows from Smith theory. The real analogue of Lemma~\ref{lemInducedGeneral} follows as well: if $\Gamma'$ is an induced subgraph of $\Gamma$, then $M_{\Gamma',\lambda}^{\mathbb R}$ is contained among the invariant submanifolds of $M_{\Gamma,\lambda}^{\mathbb R}$. Therefore we only need to prove non-formality of $M_{\Gamma,\lambda}^{\mathbb R}$ for $\Gamma$ being one of the forbidden subgraphs: $\Cy_k$ ($k\geqslant 4$), $\St_3$, $\Net$, $\Sunn$. Notice that the face poset of (the 2-torus action on) $M_{\Gamma,\lambda}^{\mathbb R}$ is isomorphic to the cluster-permutohedron $\Cl_\Gamma$; the proof is completely similar to its complex version~\cite[Thm.1]{AyzBuchGraph}. All homological arguments used in our proofs can be translated to 2-torus actions, up to division of degrees by $2$. For example, it is a general phenomenon that \begin{equation}\label{eqRealCompl} H^{2j}(X;\mathbb Z_2)\cong H^j(X^{\mathbb R};\mathbb Z_2) \end{equation} for the real locus $X^{\mathbb R}$ of a (complex or symplectic) manifold $X$, see~\cite{HHP} for a general exposition of this subject. We give a bit more details and references below. \begin{enumerate} \item[$\St_3$.] In this case the torus action has complexity zero. In the complex case, we utilized the equivariant formality criterion proved by Masuda and Panov~\cite{MasPan}. The real version of this criterion is proved in the recent paper of Yu~\cite{LiYu}. This criterion applies to prove non-formality of $M_{\St_3,\lambda}^{\mathbb R}$. \item[$\Cy_k$.] The acyclicity of skeleta of face posets was proved in~\cite{AyzMasSolo} for torus actions, with the proof based on the ABFP sequence. The version of the ABFP sequence with coefficients in $\mathbb Z_2$ for actions of discrete 2-tori seems to have first been discussed in~\cite{Puppe}. The exactness of this sequence for equivariantly formal actions was proved in~\cite{AFP}. 
The acyclicity of the skeleta $S(X)_r$ stated in Proposition~\ref{propAcyclicityOfPoset} is proved by specializing the ABFP sequence to degree 0 (and applying some technical machinery of homotopy colimits). The same argument works for 2-torus actions: if an action of~$T_\mathbb R$ with $\rk T_\mathbb R\geqslant 3$ on $X$ is equivariantly formal, then $H_1(|S(X)_2|;\mathbb Z_2)=0$. Since $H_1(|(\Cl_{\Cy_k})_2|;\mathbb Z_2)\neq 0$ for $k\geqslant 4$, the manifold $M_{\Cy_k,\lambda}^{\mathbb R}$ is not formal. \item[$\Net$.] In the complex case, we arrived at a contradiction by computing Betti numbers using GKM-theory. A real version of GKM-theory exists as well. One can notice that ``real GKM'' is a consequence of the Chang--Skjelbred theorem, which is a part of the ABFP sequence, so the fact that ABFP is exact for equivariantly formal 2-torus actions implies the real version of GKM-theory. We also refer to~\cite{Biss} for a related discussion. Since GKM-theory holds true, our computations over $\mathbb Z_2$ in~\cite{AyzCode} are still valid. Computational experiments show that \[ \dim H_*(M_{\Net,\lambda}^{\mathbb R};\mathbb Z_2)=730>6!=\dim H_*((M_{\Net,\lambda}^{\mathbb R})^{T_\mathbb R};\mathbb Z_2), \] which contradicts the definition of equivariant formality of a 2-torus action. \item[$\Sunn$.] Again, everything follows from the real version of the exact ABFP sequence. It should be noticed that in this case we also used the precise formula~\eqref{eqBettiHessenberg} for the Betti numbers of the manifolds of isospectral staircase matrices. The same formula holds for their real loci, up to division of degrees by $2$ and changing the coefficient ring to $\mathbb Z_2$, see~\cite{dMP} and~\cite{AyzStaircase}. \end{enumerate} Therefore the whole pipeline of the proof works for the discrete torus as well. \end{proof} \subsection{Indifference hulls} The graphs $\Gamma$ which are not indifference graphs produce non-diagonalizable matrix types. 
However, if $\Gamma\subset\Gamma'$ for an indifference graph $\Gamma'$, then a $\Gamma$-shaped matrix can be considered as a $\Gamma'$-shaped matrix and can be asymptotically diagonalized in the class of $\Gamma'$-shaped matrices. This motivates the following definition. \begin{defin} Let $\adi(\Gamma)$ denote the minimal number of edges needed to be added to $\Gamma$ so that the resulting graph is an indifference graph. \end{defin} The number $\adi(\Gamma)$ measures how many additional entries of a $\Gamma$-shaped matrix should be stored in memory if one wants to perform an asymptotic diagonalization (e.g. the QR-algorithm) on the matrix. Since indifference graphs are represented on a line (see Definition~\ref{definIndifGraph}), the problem of computing $\adi(\Gamma)$ is related to the problem of finding a layout of $\Gamma$ on $\mathbb R$ which is optimal in a certain sense. This invariant is closely related to the invariant studied in~\cite{KapSham}: the minimal size of the maximal clique among all indifference graphs $\Gamma'$ containing $\Gamma$. The latter invariant is known to be one greater than the \emph{bandwidth} of the graph. In terms of matrices, the problem of computing the bandwidth corresponds to embedding a $\Gamma$-shaped matrix into a band matrix of minimal width. Relations between graph algorithms and matrix diagonalization problems are also described with the notion of \emph{treewidth}, see~\cite{FHTr} and references therein. \begin{ex} Cycle graphs $\Cy_n$ correspond to periodic tridiagonal matrices, \begin{equation}\label{eqCyclicMatrix} \begin{pmatrix} a_1 & b_1& 0 & \cdots & \overline{b}_n\\ \overline{b}_1& a_2 & b_2 & 0 & \vdots\\ 0 & \overline{b}_2 & a_3 & \ddots & 0\\ \vdots & 0 & \ddots &\ddots& b_{n-1}\\ b_n& \cdots & 0 & \overline{b}_{n-1} &a_n \end{pmatrix}, \end{equation} see~\cite{AyzPeriodic} for details. We have $\adi(\Cy_n)=n-3$. 
Indeed, a particular way of turning a cycle into an indifference graph is shown in Fig.~\ref{figCycleSew}. The optimality of this pattern can easily be proved by induction on $n$. This means that a periodic tridiagonal matrix can be embedded into a pentadiagonal matrix: the one which corresponds to the Hessenberg function $(3,4,5,\ldots,n,n)$, or to the graph shown in the right part of Fig.~\ref{figCycleSew}. This may seem counterintuitive at first glance, since the obvious way of making~\eqref{eqCyclicMatrix} into a Hessenberg matrix is to fill out the whole matrix. However, one should remember that reordering of rows and columns is allowed, which makes the described trick possible. \end{ex} \begin{figure}[h] \begin{center} \includegraphics[scale=0.35]{cycleSew.pdf} \end{center} \caption{Embedding of a cycle into an indifference graph}\label{figCycleSew} \end{figure} \begin{rem} The previous example implies that, for any graph $\Gamma$, there holds \begin{equation}\label{eqGirthSeam} \adi(\Gamma)\geqslant\girth(\Gamma)-3. \end{equation} There certainly exist further relations between $\adi(\cdot)$ and other known graph invariants. These relations, as well as computational complexity issues, will be addressed in a separate paper. \end{rem} \section*{Acknowledgements} We thank Oleg Kachan and Eduard Tulchinskiy for their persistent help with parallelizing some of the computations at the HSE University supercomputer ``cHARISMa''. Li Yu and Vlad Gorchakov shared some relevant references on 2-torus actions, which were quite helpful. The first author thanks Prof. Mikiya Masuda for organizing the workshop ``Hessenberg varieties in Osaka 2019'', where a very fruitful discussion of graph-theoretical approaches to the Stanley--Stembridge conjecture emerged and gave a push to this study. 
The conference on data science in Voronovo, organized by Evgeny Sokolov, motivated the first author to look for connections between toric topology and gradient descent algorithms, which eventually led to this research.
\section{Introduction} A quantitative description of the reggeization of QCD still remains a challenge for the Leading Logarithmic scheme and its extensions \cite{firsta,firstb}. In the first approximation the problem separates into sectors with a fixed number $n$ of reggeized gluons propagating in the $t$ channel. The lowest nontrivial case, $n=2$, was solved in the classical papers by Balitskii, Kuraev, Fadin and Lipatov \cite{BFKL}, resulting in the simple expression for the intercept of the hard pomeron. Notable progress for arbitrary $n$ was achieved by Lipatov and by Faddeev and Korchemsky \cite{LIP0,FK}, who established an exact equivalence with a one-dimensional chain of $n$ noncompact spins. The success of this approach was confirmed by rederiving the result of Lipatov et al. in the $n=2$ case \cite{FK,kor1}. However, the adopted procedure requires an analytic continuation from the integer values of the relevant conformal weight $h$ (see later), because only for integer $h$ were they able to diagonalize the two-spin hamiltonian. The $n=3$ case, which gives the lowest contribution to the odderon exchange, was studied by Lipatov, Faddeev and Korchemsky \cite{LIP1,FK,kor1}. Again, the spectrum of the system can be found for any finite integer $h=m$. However, the general expression for arbitrary $m$ is not known, and consequently the analytic continuation to $h=1/2$ is not available\footnote{The lowest state of the $n=3$ hamiltonian is believed to occur at $h=1/2$.}. We have developed a new approach which a) works for arbitrary values of the conformal weight $h$, providing the above continuation explicitly, and b) gives the analytic solution of the $n=3$ problem for arbitrary $h$ and $q_3$. Here we will apply the new method to the $n=2$ case, directly rederiving the BFKL result without the need for analytic continuation. Our new results in the $n=3$ case \cite{we1} will also be briefly summarized. 
The intercept of the Pomeron trajectory is given by \begin{equation} \alpha_P(0)=1+{\alpha_s N_c \over 4\pi}\left(\epsilon_2(h)+ \overline{\epsilon}_2(\overline{h})\right), \label{inter} \end{equation} where $\epsilon_2$ and $\overline{\epsilon}_2 $ are respectively the largest eigenvalues of the $n=2$ reggeon hamiltonian and its antiholomorphic counterpart \cite{FK,kor1}. This system is equivalent to a deceptively simple system of two noncompact spins, which for higher $n$ generalizes to a one dimensional chain with nearest-neighbour interactions. Applying the Bethe ansatz one obtains in the $n=2$ case \begin{equation} \epsilon_2=i \left({\dot{Q}_2(-i)\over Q_2(-i)}-{\dot{Q}_2(i)\over Q_2(i)} \right)-4, \label{holo} \end{equation} where $Q_2(\lambda)$ satisfies the following Baxter equation \begin{equation} (\lambda+i)^2 Q_2(\lambda+i)+(\lambda-i)^2 Q_2(\lambda-i)= (2\lambda^2+q_2) Q_2(\lambda). \label{bax} \end{equation} Here $q_2$ is the eigenvalue of the square of the total spin of the system, $\hat{q}_2$. It commutes with the hamiltonian and its spectrum is known from symmetry considerations: \begin{equation} q_2=h(1-h),\;\;\;\;h={1\over 2}(1+m) -i\nu,\;\; m\in Z, \nu\in R. \label{spec} \end{equation} In order to solve the Baxter equation, (\ref{bax}), the following integral representation is customarily used \begin{equation} Q_2(\lambda)=\int_{C_I} z^{-i\lambda-1} (1-z)^{i\lambda+1} Q(z) dz. \label{oldan} \end{equation} Then, if the boundary terms do not contribute, Eq.(\ref{bax}) is equivalent to the simple hypergeometric equation for $Q(z)$ \begin{equation} \left[ {d\over dz}z(1-z){d\over dz} -q_2 \right] Q(z)=0, \label{diff} \end{equation} with well known solutions.
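As a small sanity check of Eq.(\ref{bax}) (our illustration, not part of the original derivation), one can verify symbolically that $Q_2(\lambda)=\lambda$ is a polynomial solution for $q_2=h(1-h)=-6$, i.e. for the integer conformal weight $h=3$:

```python
import sympy as sp

lam = sp.symbols("lambda")
q2 = -6  # q2 = h*(1 - h) for h = 3

Q = lambda x: x  # trial polynomial solution Q_2(lambda) = lambda

# both sides of the Baxter equation (bax)
lhs = (lam + sp.I)**2 * Q(lam + sp.I) + (lam - sp.I)**2 * Q(lam - sp.I)
rhs = (2*lam**2 + q2) * Q(lam)

print(sp.expand(lhs - rhs))  # 0: the Baxter equation is satisfied
```

Polynomial solutions of this kind exist only at integer $h$ (the polynomial Bethe ansatz mentioned below), which is why noninteger $h$ requires a different treatment.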
However, for an arbitrary value of the conformal weight $h$, the singularity structure of the hypergeometric functions, together with the nontrivial monodromy of the kernel $K(z, \lambda)=z^{-i\lambda-1}(1-z)^{i\lambda+1}$, precludes the existence of a contour such that the boundary contributions cancel. For integer $h=m$, however, the solution regular at $z=0$ does not have a cut, and consequently a simple contour encircling both points $z=0$ and $z=1$ guarantees the vanishing of the boundary terms. This observation was exploited in Refs.\cite{FK,kor1}, leading to the elegant solution of the $n=2$ problem for integer conformal weight. The BFKL formula resulted after the analytic continuation in $h$ to $h=1/2$. However, the case of noninteger $h$ requires further insight. In particular the boundary conditions for $Q_2(\lambda)$ are not fully understood. For integer $h$, again, they can be deduced from the polynomial Bethe ansatz and are consistent with the above choice of the integration contour in Eq.(\ref{oldan}). For arbitrary $h$, they are not available. It would be very instructive to investigate the so called functional Bethe ansatz in this connection. We will present here a different approach. It was observed in Ref.\cite{jan1} that the {\em double contour} representation (c.f. Fig.1) \begin{eqnarray} Q_2(\lambda)&=&\int_{C_I} z^{-i\lambda-1} (1-z)^{i\lambda+1} Q_I(z) dz \label{dcon} \\ &+&\int_{C_{II}} z^{-i\lambda-1} (1-z)^{i\lambda+1} Q_{II}(z) dz, \nonumber \end{eqnarray} together with simple boundary conditions on $Q_{I/II}(z)$, reproduced numerically the holomorphic energy in the half-integer case $h=m+1/2$. Using the double contour representation we have subsequently derived the analytic expression for the holomorphic energy for arbitrary complex $h$.
With the aid of the new formalism of the transition matrix this method was applied to the $n=3$ case and led to the analytic expression for the intercept of the odderon trajectory for arbitrary values of the relevant parameters. We begin with the general solutions of Eq.(\ref{diff}) and then show how the original freedom is restricted, leading to a unique solution. \begin{figure}[htb] \vspace{9pt} \framebox[65mm]{ \epsfxsize=6cm \epsfbox{r2.eps} } \caption{ Integration contours used in Eq.(\protect\ref{dcon}). The start $z^{start}$, middle $z^{mid}$, and end $z^{end}$ points coincide, but they lie on different sheets of the Riemann surface of the integrands. } \label{fig:f2} \end{figure} To this end we write two fundamental sets of linearly independent solutions of Eq.(\ref{diff}), \begin{eqnarray} \vec{u}(z) = (u_1(z),u_2(z)), \nonumber \\ \vec{v}(z) = (v_1(z),v_2(z)), \nonumber \end{eqnarray} around $z=0$ and $z=1$ respectively: \begin{eqnarray} u_1(z)&=& F(h,1-h,1;z)=\sum_{n=0}^{\infty}f_n z^n, \nonumber \\ u_2(z)&=& {s(h)\over \pi i}\log{z}\; u_1(z) - {s(h) \over \pi i}\sum_{n=0}^{\infty} g_n z^n, \label{uba} \\ g_n&=& f_n[2\psi(n+1)-\psi(n+h)-\psi(n+1-h)] , \nonumber \end{eqnarray} where $F(a,b,c;z)$ is the hypergeometric function, $\psi(z)$ denotes the digamma function and $s(h)=\sin{(\pi h)}$. The series in Eq.(\ref{uba}) are convergent in the unit circle $K_0$ around $z=0$. Similarly one can construct the $\vec{v}(z)$ solutions in the unit circle $K_1$ around $z=1$. In fact, because of the symmetry of Eq.(\ref{diff}), we take \begin{equation} v_1(z)=i u_1(1-z),\;\;\; v_2(z)=-i u_2(1-z).
\label{vba} \end{equation} Since any solution is a linear combination of the fundamental solutions, we have in general \begin{eqnarray} Q_I(z)&=&a u_1(z)+b u_2(z) \nonumber \\ &\equiv & A\cdot\vec{u}(z)=A\cdot\Omega \vec{v}(z),\nonumber \\ Q_{II}(z)&=&c u_1(z)+d u_2(z) \label{abf} \\ & \equiv & B\cdot\vec{u}(z)=B\cdot \Omega \vec{v}(z), \nonumber \end{eqnarray} with an obvious vector notation. The transition matrix $\Omega$ is defined by \begin{equation} \vec{u}(z)=\Omega \vec{v}(z), \label{trans} \end{equation} and provides the analytic continuation of our solutions $Q(z)$ between $K_0$ and $K_1$. It plays an important role for higher $n$, and its direct calculation for $n>2$ is rather nontrivial. For the hypergeometric equation, and for the special choice of both bases, Eqs.(\ref{uba},\ref{vba}), $\Omega$ is very simple. Due to the identity $u_2(z)=i u_1(1-z)$, \begin{equation} \Omega=\left( \begin{array}{cc} 0&1 \\ 1&0 \end{array} \right). \label{omega} \end{equation} \begin{figure}[htb] \vspace{9pt} \framebox[75mm]{ \epsfxsize=4cm \epsfbox{r1.eps} } \caption{ Closed contour used to define the monodromy matrix, Eq.(\protect\ref{mon}). $z^{start} =z^{end}$; however, they belong to different sheets of the Riemann surface. } \label{fig:f1} \end{figure} Next we introduce the monodromy matrix $M_u$, which describes the behaviour of the basis $\vec{u}$ in the vicinity of the branch point $z=0$ (see Fig.2): \begin{equation} \vec{u}(z_{end})=M_u \vec{u} (z_{start}), \;\; M_u=\left( \begin{array}{cc} 1&0 \\ 2s(h)&1 \end{array} \right) , \label{mon} \end{equation} and similarly for the $v$ basis (the entry $2s(h)$ arises because encircling $z=0$ counterclockwise shifts $\log{z}$ by $2\pi i$ in Eq.(\ref{uba}), while $u_1$, analytic at $z=0$, is unchanged). It is easy to see that $M_v=M_u^{-1}$. We are now ready to write the condition for the cancellation of the boundary contributions in Eq.(\ref{dcon}).
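As an aside, one can check numerically (our illustration, using mpmath; the sample value of $h$ is arbitrary) that $u_1$ indeed solves Eq.(\ref{diff}) inside $K_0$:

```python
from mpmath import mp, mpc, hyp2f1, diff

mp.dps = 30
h = mpc("0.5", "0.3")  # sample noninteger conformal weight (illustrative)
q2 = h * (1 - h)

# u_1(z) = F(h, 1-h, 1; z), Eq.(uba)
u1 = lambda z: hyp2f1(h, 1 - h, 1, z)

# residual of [d/dz z(1-z) d/dz - q2] u1 at a point inside the unit circle K_0
g = lambda z: z * (1 - z) * diff(u1, z)
z0 = mpc("0.2", "0.1")
residual = diff(g, z0) - q2 * u1(z0)
print(abs(residual))  # numerically compatible with zero
```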
With the choice of the contours $C_I$ and $C_{II}$ as shown in Fig.1, the boundary contributions cancel if \begin{equation} A^T M_I + B^T M_{II}= 0, \label{canc} \end{equation} where the combined monodromy matrices for the corresponding contours read \begin{equation} M_I=\Omega M_v \Omega^{-1} - M_u^{-1}, \;\; M_{II}=\Omega M_v^{-1} \Omega^{-1} - M_u. \label{com} \end{equation} In terms of the coefficients, condition (\ref{canc}) reads simply \begin{equation} a=c,\;\;\; b=d. \label{para} \end{equation} Hence the original freedom of four coefficients in Eqs.(\ref{abf}) is reduced to two free parameters. In fact the energy of the system, Eq.(\ref{holo}), is insensitive to the absolute normalization, hence only the ratio \begin{equation} \rho=a/b, \label{ratio} \end{equation} remains relevant. This variable parametrizes all possible boundary conditions which are consistent with the cancellation of the end-point contributions in the sum (\ref{dcon}). The role of the remaining freedom is better seen when the explicit result for $\epsilon_2$ is derived. To this end we substitute Eq.(\ref{abf}), with (\ref{para}), into (\ref{dcon}) and integrate the resulting expression term by term, expanding $Q_{I/II}(z)$ in the $u$ basis on $C_I$ and in the $v$ basis on $C_{II}$. Since the series involved are absolutely convergent in the corresponding domains, the final result for $\epsilon_2(h)$ is an analytic function of $h$. A consistent choice of the branches of the kernel $K(z,\lambda)$ and of $Q(z)$ must be made. After some calculations we obtain \begin{eqnarray} \epsilon_2(h)=4\psi(1)-2\psi(h)-2\psi(1-h) \nonumber \\ -{i\pi\over s(h)} (\rho-\rho^{-1}) . \label{fin} \end{eqnarray} It is instructive to compare this result with the original hamiltonian of the two spins \cite{kor1} \begin{equation} \hat{\cal{H}}_2=4\psi(1)-2\psi(-\hat{J}_{12})-2\psi(1+\hat{J}_{12}), \label{ham} \end{equation} where the eigenvalues of $\hat{J}_{12}$ are equal to $-h$, c.f. Eq.(\ref{spec}).
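The reduction to (\ref{para}) can be reproduced by a short symbolic computation (our illustration, not part of the original text), using the explicit matrices (\ref{omega}) and (\ref{mon}):

```python
import sympy as sp

s = sp.symbols("s", nonzero=True)  # s = sin(pi*h)

Omega = sp.Matrix([[0, 1], [1, 0]])
Mu = sp.Matrix([[1, 0], [2*s, 1]])
Mv = Mu.inv()  # M_v = M_u^(-1)

# combined monodromy matrices, Eq.(com)
MI = Omega * Mv * Omega.inv() - Mu.inv()
MII = Omega * Mv.inv() * Omega.inv() - Mu

a, b, c, d = sp.symbols("a b c d")
A = sp.Matrix([a, b])
B = sp.Matrix([c, d])

cond = sp.expand(A.T * MI + B.T * MII)  # cancellation condition (canc)
print(sp.solve([cond[0], cond[1]], [c, d]))  # {c: a, d: b}
```

The unique solution of the linear system is $c=a$, $d=b$, i.e. exactly (\ref{para}).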
It is now evident that the choice \begin{equation} \rho=\pm 1, \label{choice} \end{equation} gives the correct spectrum of energies. We emphasize, however, that additional information was required to fix the remaining freedom. This is different in the $n=3$ case (see below). It is important to note that the above choice is independent of $h$, which {\em a priori} is not guaranteed. Substituting Eq.(\ref{fin}), with (\ref{choice}), in Eq.(\ref{inter}), and setting $h=\overline{h}=1/2$, we reproduce the BFKL formula \begin{equation} \alpha_P(0)=1+{\alpha_s N_c \over \pi} 4 \log{2}. \end{equation} This was also obtained in Ref.\cite{kor1} after analytic continuation of their result from integer values of $h$. The difference between the two approaches is best seen by comparing Eq.(\ref{fin}) with Eq.(6.31) of Ref.\cite{kor1}. It follows from the form of the hamiltonian, Eq.(\ref{ham}), that the complete holomorphic eigenenergy $\epsilon_2(h)$ is singular also at positive integer $h$. This is indeed the case for our result, Eq.(\ref{fin}). On the other hand, as seen from Eq.(\ref{inter}), in order to calculate the physical intercept only the {\em real} part of $\epsilon_2$ is required. It is finite for positive integer $h$ and was correctly reproduced by the method of Faddeev and Korchemsky, c.f. Eq.(6.31) in Ref.\cite{kor1}. One of the ingredients of the calculation presented in Ref.\cite{kor1} is the prescription for fixing an overall constant term in the two-spin hamiltonian, Eq.(\ref{ham}). In the present formalism the result (\ref{fin}) also has a freedom, which is parametrized by $\rho$. It would be interesting to see if the arbitrariness seen in both methods has the same origin. Our method can be extended to higher $n$. For $n=3$ we have carried out this procedure explicitly \cite{we1}. The complete set of linearly independent solutions of the corresponding third order differential equation was constructed.
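As a numerical cross-check of the $n=2$ intercept (our addition, using SciPy's digamma): for $\rho=\pm 1$ the last term of Eq.(\ref{fin}) drops out and $\epsilon_2(1/2)=4\psi(1)-4\psi(1/2)=8\log 2$, so that $\epsilon_2+\overline{\epsilon}_2=16\log 2$ reproduces the coefficient $4\log 2$ above:

```python
import numpy as np
from scipy.special import digamma

def eps2(h, rho=1.0):
    # Eq.(fin); for rho = +-1 the rho-dependent term vanishes
    return (4*digamma(1.0) - 2*digamma(h) - 2*digamma(1.0 - h)
            - 1j*np.pi/np.sin(np.pi*h)*(rho - 1.0/rho))

e = eps2(0.5)
print(e.real, 8*np.log(2.0))  # both ~ 5.5452
```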
The transition matrix between the $\vec{u}$ and $\vec{v}$ bases was also obtained. Since in this case there is no simple identity connecting the linearly independent solutions, the $\Omega$ matrix is nontrivial. Remarkably, it turns out that the condition for cancellation of the end-point contributions in the double integral representation determines {\em uniquely} the final solution of the Baxter equation. The remaining arbitrariness in the two transforms $Q_{I/II}(z)$ is irrelevant. Consequently we have obtained the holomorphic (and antiholomorphic) energies as analytic functions of the two relevant parameters $h$ and $q_3$. The new variable $q_3$ is the eigenvalue of the second observable $\hat{q}_3$, which commutes with the hamiltonian and is known explicitly, but which unfortunately has not been diagonalized in spite of many attempts \cite{LIP2,jan2,kor3}. We have therefore mapped numerically the analytic structure of $\epsilon_3(1/2, q_3)$ in the complex $q_3$ plane. The result is sketched in Fig.3. The holomorphic energy has a series of poles at imaginary $q_3$ \footnote{Our definition of $q_3$ is the same as in Ref.\cite{kor1}.}. The intercept of the odderon trajectory is smaller than one for almost all values of $q_3$, including all $q_3\in R$. However, in the vicinity of the poles it can be arbitrarily large. Therefore any further conclusion about the numerical value of $\alpha_O(0)$ depends crucially on the spectrum of $q_3$. \begin{figure}[htb] \vspace{9pt} \framebox[75mm]{ \epsfxsize=5cm \epsfbox{r4.eps} } \caption{ Schematic map of the analyticity structure of $\protect\epsilon_3(1/2,q_3)$ in the complex $q_3$ plane. $\epsilon_3$ is positive only in the vicinity of the poles. } \label{fig:f4} \end{figure} We would like to thank L. N. Lipatov and G. P. Korchemsky for interesting discussions. This work is supported by the Polish Committee for Scientific Research under grants no. PB 2P03B19609 and PB 2P03B08308. \section*{References}
\section{Introduction} We classified in \cite{BCS} all non-symplectic automorphisms of prime order $p$ acting on \ihskcom i.e. fourfolds which are deformation equivalent to the Hilbert scheme of two points of a smooth $K3$ surface, for $p=2,3$ and $7\leq p\leq 19$. Our classification relates certain invariants of the fixed locus to the isometry classes of two natural lattices associated to the action of the automorphism on the second integral cohomology group. Then, in \cite{BCMS} and in \cite{kevin} the cases $p=23$ and $p=5$ were solved, thus completing the classification for \ihsk and prime order. Here, we want to parametrize IHS manifolds which admit an action of a given non-symplectic automorphism of prime order $p$. For this we use its action on the second cohomology: given $\sigma$ acting on $X$, $\sigma^*$ acts on $H^2(X,\mathbb{Z})$ as a monodromy operator which is a Hodge isometry and preserves a K\"ahler class. If the only automorphism acting trivially on cohomology is the identity (which holds for the $K3^{[n]}$-type by \cite[Proposition 10]{BeauvilleKaehler}), then $\sigma$ can be reconstructed from the monodromy $\sigma^*$ via the Hodge-theoretic Torelli Theorem \ref{HTT}. In particular, a Hodge monodromy that preserves a K\"ahler class lifts to exactly one automorphism. In the case of \ihskcom we know from \cite{BCS} that the action on cohomology is classified once the three numerical invariants $(p,m,a)$ are given, or equivalently once the invariant sublattice $T$ inside the second cohomology lattice $L$ is given (compare with \cite[Corollary 5.7]{BCS}). For the other deformation classes this is not known yet; even for \ihsknpts the invariant sublattice $T$ is necessary information, but may not be sufficient to determine the action completely. That is why we need to fix the isometry in $O(L)$ representing the automorphism, and this leads us to the study of $(\rho,T)$-polarizations, as defined in \S \ref{definitions}.
The study of the moduli spaces of projective irreducible holomorphic symplectic manifolds (IHS for short) was started by Gritsenko, Hulek and Sankaran in \cite{GKS}, where the authors consider polarized IHS manifolds and show that, for IHS manifolds that are deformations of the Hilbert scheme of $n$ points on a $K3$ surface (we say that these are $\mathrm{IHS}-K3^{[n]}$), if the polarization has large enough degree, then the corresponding moduli space is algebraic, and in fact of general type. On the other hand, when the Picard rank of the considered projective family grows, the period map is a priori non-injective, because of the existence of non-isomorphic birational models in its fibres. In a recent work, Amerik and Verbitsky \cite{AmerikVerbitsky} were able to give a precise description of the K\"ahler cone of an IHS manifold. Their results are fundamental for starting the description of the moduli space of IHS manifolds with a non-symplectic automorphism, and were first applied by Joumaah in \cite{Joumaah} to describe the moduli space of \ihskn with a non-symplectic involution. In this paper, we generalize to \ihskn the construction of Dolgachev and Kond\=o \cite{DK} of the moduli space of $K3$ surfaces with a non-symplectic automorphism of prime order $p\geq 3$. By using results of \cite{BCS}, we first construct in \S \ref{surjective-period} a surjective period map to the complement of a hyperplane arrangement inside a complex ball; these hyperplanes are the analogues, for \ihskn manifolds, of the hyperplanes determined by $(-2)$-curves in similar moduli problems for $K3$ surfaces, see \cite{DK}. Then, in \S \ref{injectivityperiod}, by using the notion of $K(T)$-generality, we are able to exhibit a bijective period map.
Finally, in \S \ref{arithmetic-quot} we obtain a quasi-projective variety parametrizing isomorphism classes of $K(T)$-general \ihsknpt \subsection*{Acknowledgements} The second named author was partially supported by the Research Network Program GDRE-GRIFGA. The authors would like to thank Prof. Bert van Geemen for an enlightening conversation and Prof. Shigeyuki Kond\=o for his suggestion about moduli spaces of cubic threefolds. \section{Preliminary notions} \subsection{Lattices} A {\it lattice} $L$ is a free $\mathbb{Z}$-module equipped with a non-degenerate symmetric bilinear form $\langle \cdot, \cdot\rangle$ with integer values. Its {\it dual lattice} is $L^{\vee}:=\Hom_{\mathbb{Z}}(L,\mathbb{Z})$. It can also be described as follows: $$ L^{\vee}\cong\{x\in L\otimes \mathbb{Q}~|~\langle x,v\rangle\in \mathbb{Z}\quad \forall v\in L\}. $$ Clearly $L$ is a sublattice of $ L^{\vee}$ of the same rank, so the \emph{discriminant group} ${A_L:=L^{\vee}/L}$ is a finite abelian group, whose order is denoted $\discr(L)$ and called the {\it discriminant of $L$}. We denote by $\ell(A_L)$ the \emph{length} of $A_L$, i.e. the minimal number of generators of $A_L$. Let $\{e_i\}_i$ be a basis of~$L$ and $M:=(\langle e_i,e_j\rangle)_{i,j}$ the Gram matrix; then one has $\discr(L)=|\det(M)|$. A lattice $L$ is called \emph{even} if $\langle x,x\rangle\in 2\mathbb{Z}$ for all $x\in L$. In this case the bilinear form induces a quadratic form $q_L: A_L\longrightarrow \mathbb{Q}/2\mathbb{Z}$. Denoting by $(s_{(+)},s_{(-)})$ the signature of $L\otimes\mathbb{R}$, the triple of invariants $(s_{(+)},s_{(-)},q_L)$ characterizes the \emph{genus} of the even lattice $L$ (see \cite[Chapter 15, \S 7]{conwaysloane}, \cite[Corollary 1.9.4]{Nikulinintegral}). A sublattice $M\subset L$ is called \emph{primitive} if $L/M$ is a free $\mathbb{Z}$-module. Let $p$ be a prime number.
A lattice $L$ is called $p$-\emph{elementary} if $A_L\cong\left(\frac{\mathbb{Z}}{p\mathbb{Z}}\right)^{\oplus a}$ for some non-negative integer $a$ (which then equals the \emph{length} $\ell(A_L)$). We write $\frac{\mathbb{Z}}{p\mathbb{Z}}(\alpha)$, $\alpha\in\mathbb{Q}/2\mathbb{Z}$, to denote that the quadratic form $q_L$ takes the value $\alpha$ on the generator of the $\frac{\mathbb{Z}}{p\mathbb{Z}}$ component of the discriminant group. We denote by $U$ the unique even unimodular hyperbolic lattice of rank two and by $A_k, D_h, E_l$ the even, negative definite lattices associated to the Dynkin diagrams of the corresponding type ($k\geq 1$, $h\geq 4$, $l=6,7,8$). We denote by $L(t)$ the lattice whose bilinear form is the one on $L$ multiplied by $t\in\mathbb{N}^\ast$. In the sequel we will be using the lattice $E_6^\vee(3)$ (see \cite{AST}): it is even, negative definite and $3$-elementary with $a=5$. To get a simple form of its discriminant group one can proceed as follows. By~\cite[Table 2]{AS} the lattice $U(3)\oplus E_6^\vee(3)$ admits a primitive embedding in the unimodular $K3$ lattice with orthogonal complement isometric to $U\oplus U(3)\oplus A_2^{\oplus 5}$. It follows that the discriminant form of $E_6^\vee(3)$ is the opposite of that of $A_2^{\oplus 5}$, so it is $\mathbb{Z}/3\mathbb{Z}(2/3)^{\oplus 5}$. \subsection{IHS manifolds and their moduli spaces} A compact complex K\"{a}hler manifold $ X $ is {\it irreducible holomorphic symplectic} (IHS) if it is simply connected and admits a holomorphic $2$-form $ \omega_X \in H^{2,0}(X) $, everywhere non-degenerate and unique up to multiplication by a non-zero scalar. The existence of such a symplectic form $\omega_X$ immediately implies that the dimension of $X $ is even. Moreover, the canonical divisor $ K_X$ is trivial, $c_1(X)=0$, and $ T_X \cong \Omega _X^1 $. For a complete survey of this topic we refer the reader to the book \cite{GrossJoyceHuy} and references therein.
The second cohomology group $H^2(X,\mathbb{Z})$ is an integral lattice for the Beauville--Bogomolov--Fujiki quadratic form, see \cite{Beauvillec1Nul}. One of the most studied deformation families is that of $ X=S^{\left[n\right]} $, with $n\geq 2$, the Hilbert scheme of $0$-dimensional subschemes of length $n$ of a smooth $K3$ surface $ S $. The lattice $(H^2(X,\mathbb{Z}),q)$ in this case is $L=U^{\oplus 3}\oplus E_8^{\oplus 2}\oplus \langle -2(n-1)\rangle$; we say that an IHS manifold $X$ is an \ihskn if it is deformation equivalent to the Hilbert scheme of $n$ points on a $K3$ surface. We recall some well known facts from \cite{Huybrechts} and \cite{MarkmanTorelli}. If $X$ is an IHS manifold, a {\it marking} for $X$ is an isometry $\eta: L \longrightarrow H^2(X,\mathbb{Z})$; the manifold $X$ is sometimes said to be {\it of type $L$}. An isomorphism $f:X_1\longrightarrow X_2$ is an isomorphism of marked pairs $(X_1,\eta_1)$ and $(X_2,\eta_2)$ if $\eta_1=f^*\circ \eta_2$. There exists a coarse moduli space $\mathcal{M}_{L}$ that parametrizes isomorphism classes of marked pairs of type $L$; it is a non-Hausdorff smooth complex manifold (see \cite{Huybrechts}). If $X$ is an \ihskn then $\mathcal{M}_{L}$ has dimension $21$. Denote by $$ \Omega_L:=\{\omega\in\mathbb{P}(L\otimes \mathbb{C})\, |\, q(\omega)=0,\, q(\omega+\bar{\omega})>0\} $$ the {\it period domain}; it is an open (in the analytic topology) subset of the non-singular quadric defined by $q(\omega)=0$. The period map $$ \mathcal{P}:\mathcal{M}_{L} \longrightarrow \Omega_L, \quad (X,\eta)\mapsto \eta^{-1}(H^{2,0}(X)), $$ is a local isomorphism by the Local Torelli Theorem \cite[Th\'eor\`eme 5]{BeauvilleKaehler}. For $\omega\in \Omega_L$ we consider $$ L^{1,1}(\omega):=\{\lambda\in L\,|\, (\lambda,\omega)=0\}, $$ where $(\cdot,\cdot)$ is the bilinear form associated to the quadratic form $q$.
Then $L^{1,1}(\omega)$ is a sublattice of $L$, and, given a marked pair $(X,\eta)$, we get $\eta^{-1}(\NS(X))= L^{1,1}(\mathcal{P}(X,\eta))$. The set $\{\alpha\in H^{1,1}(X)\cap H^2(X,\mathbb{R})\,|\, q(\alpha) >0\}$ has two connected components; the {\it positive cone} $\mathcal{C}_X$ is the connected component containing the {\it K\"ahler cone $\mathcal{K}_X$}. Recall that two points $x,y$ of a topological space $M$ are called {\it inseparable} if every pair of open neighbourhoods $x\in U$ and $y\in V$ has non-empty intersection; a point $x\in M$ is called a {\it Hausdorff point} if for every $y\in M$ with $y\not= x$, the points $x$ and $y$ are separable. \begin{theorem}[Global Torelli Theorem] \cite{Verbitsky},\cite[Theorem 2.2]{MarkmanTorelli} \label{GTT} Let $\mathcal{M}^0_L$ be a connected component of $\mathcal{M}_L$. \begin{enumerate} \item The period map $\mathcal{P}$ restricts to a surjective holomorphic map $$\mathcal{P}:\mathcal{M}^0_L\longrightarrow \Omega_L$$ (which we again call $\mathcal{P}$ for simplicity). \item For each $\omega\in\Omega_L$, the fiber $\mathcal{P}^{-1}(\omega)$ consists of pairwise inseparable points. \item Let $(X_1,\eta_1)$ and $(X_2,\eta_2)$ be two inseparable points of $\mathcal{M}_L^0$. Then $X_1$ and~$X_2$ are bimeromorphic. \item The point $(X,\eta)\in\mathcal{M}_L^0$ is Hausdorff if and only if $\mathcal{C}_X=\mathcal{K}_X$. \end{enumerate} \end{theorem} In the sequel we will also be using the following Hodge-theoretic version of the Torelli theorem. \begin{theorem}\cite[Theorem 1.3]{MarkmanTorelli}\label{HTT} Let $X$ and $Y$ be two irreducible holomorphic symplectic manifolds deformation equivalent to each other.
Then: \begin{enumerate} \item $X$ and $Y$ are bimeromorphic if and only if there exists a parallel transport operator $f:H^2(X,\mathbb{Z})\rightarrow H^2(Y,\mathbb{Z})$ that is an isomorphism of integral Hodge structures; \item if this is the case, there exists an isomorphism $\tilde{f}:X\rightarrow Y$ inducing $f$ if and only if $f$ preserves a K\"ahler class. \end{enumerate} \end{theorem} Recall that a parallel transport operator $f:H^*(X,\mathbb{Z})\longrightarrow H^*(Y,\mathbb{Z})$ is called a {\it monodromy operator}. If $\Mo (X)\subset \GL(H^*(X,\mathbb{Z}))$ denotes the subgroup of monodromy operators, we denote by $\Mo^2(X)$ its image in $O(H^2(X,\mathbb{Z}))$. So we have \begin{defi} Given a marked pair $(X,\eta)$ of type $L$, we define the \textit{monodromy group} as $\Mo^2(L):=\{\eta^{-1}\circ f\circ\eta\,|\, f\in \Mo^2(X)\}\subset \GL(L)$. \end{defi} A priori, the definition of $\Mo^2(L)$ depends on the choice of $(X,\eta)$, but it is easy to show that it is well-defined on the connected component $\mathcal{M}^0_L$. It was proven by Verbitsky in \cite{VerTorelli} that $\Mo^2(L)$ is an arithmetic subgroup of $O(L)$. By a result of Markman \cite[Theorem 1.2]{Markmanintegral}, if $X$ is \ihskncom then $\Mo^2(X)$ is a normal subgroup of $O(H^2(X,\mathbb{Z}))$. In particular, if $n=2$ then $\Mo^2(X)=O^{+}(H^2(X,\mathbb{Z}))$, the group of isometries of $H^2(X,\mathbb{Z})$ that preserve the positive cone, so in this case Theorem \ref{HTT} can be restated in the following way (which is essentially the same statement as for $K3$ surfaces):\\ Let $X$ be an \ihskpt Then: \begin{enumerate} \item If $h\in O^+(H^2(X,\mathbb{Z}))$ is an isomorphism of integral Hodge structures, then there exists $f\in \Bir(X)$, the group of birational transformations of $X$, such that $f^*=h$; \item Let $h\in O^+(H^2(X,\mathbb{Z}))$ be an isomorphism of integral Hodge structures. There exists $f\in\Aut(X)$ such that $f^*=h$ if and only if $h$ preserves a K\"ahler class.
\end{enumerate} \section{Non-symplectic automorphisms of IHS manifolds and $(\rho, T)$-polarizations} We briefly review here what is known about non-symplectic automorphisms of IHS manifolds. Let $X$ be an IHS manifold and $f$ a holomorphic automorphism of $X$ of prime order $p$ acting non-symplectically: $f^*$ acts on $H^{2,0}(X)$ by multiplication by a primitive $p$-th root of unity. Such automorphisms can exist only when $X$ is projective. It follows that the invariant lattice $T\subset H^2(X,\mathbb{Z})$ is a primitive sublattice of the N\'eron--Severi group $\NS(X)$, and consequently the characteristic polynomial of the action of $f$ on the transcendental lattice $\Trans(X)$ is the $k$-th power of the $p$-th cyclotomic polynomial $\Phi_p$. Thus $k\varphi(p)=k(p-1)=\rank_\mathbb{Z}\Trans(X)$, and in particular $$ \varphi(p)\leq b_2(X)-\rho(X), $$ where $\varphi$ is Euler's totient function and $\rho(X)=\rank_\mathbb{Z}\NS(X)$ is the Picard number of $X$. If $X$ is \ihskncom since $b_2(X)=23$, the maximal prime order for $f$ is $p=23$, and this can happen only when $\rho(X)=1$. Observe that a very general projective \ihsk has no non-trivial automorphisms. Here, by {\it very general} we mean that the manifold has Picard number one and is not a special member of the moduli space. We believe that this fact is well known, but since we could not find an explicit proof in the literature, we give here a proof that uses our previous results: \begin{theorem} Let $X$ be a very general projective \ihsk with $\NS(X)\not=\langle 2\rangle$. Then $\Aut(X)=\id$. \end{theorem} \begin{proof} A generic projective IHS manifold $X$ has N\'eron--Severi group of rank $1$ equal to $\langle 2t \rangle$, and so $\rk \Trans(X)=22$. Since an automorphism of $X$ induces a Hodge isometry on $H^2(X,\mathbb{Z})$, it preserves $\NS(X)$ and so it preserves an ample class. This means that every element of $\Aut(X)$ preserves a K\"ahler metric, hence it is an isometry.
In conclusion we have that $\Aut(X)$ is a discrete Lie subgroup of a compact group, so it is finite. We are now left to study automorphisms of finite order on $X$, and it is easy to show that we can restrict to the case of prime order. Recall that if $X$ admits a symplectic automorphism then $\rk \NS(X)\geq 8$, see \cite[Section 6.2]{mongardiPhD}, so we do not have such automorphisms. If $\sigma$ is non-symplectic of prime order $p$, then $p-1$ must divide $22$. So the possibilities are $p=2, 3, 23$. For $p=23$ we have $\NS(X)=\langle 46 \rangle$, and only a very special \ihsk carries an order $23$ non-symplectic automorphism, as shown in \cite{BCMS}. For $p=3$ the only possibility is $\NS(X)=\langle 6 \rangle$, and for these \ihsk we do not always have a non-symplectic automorphism, see \cite{BCS}. If $p=2$, then $\NS(X)= \langle 2 \rangle$ by \cite{BCS}, and this case corresponds to an \ihsk with an involution that deforms to Beauville's involution on the Hilbert scheme of two points of a quartic in $\mathbb{P}^3$ not containing a line. \end{proof} \subsection{$(\rho,T)$-polarized marked pairs}\label{definitions} Let now $T$ be an even non-degenerate lattice of rank $r\geq 1$ and signature $(1,r-1)$. A \emph{$T$-polarized} IHS manifold is a pair $(X,\iota)$, where $X$ is a projective IHS manifold and $\iota$ is a primitive embedding of lattices $\iota:T\hookrightarrow \NS(X)$ (see also \cite{C4}). Observe that we are then assuming that $T$ has a primitive embedding in $L$, and we identify $T$ with its image as a sublattice of $L$. Let $(\bar{X},\bar{\iota})$ be a $T$-polarized IHS manifold such that there exists a cyclic group $G=\langle \bar{\sigma}\rangle\subset \Aut(\bar{X})$ of prime order $p\geq 3$ acting non-symplectically on $\bar{X}$. Assume that the action of $G$ on $\bar{\iota}(T)$ is the identity and that there exists a group homomorphism $\rho: G\longrightarrow O(L)$ such that $$ T=L^{\rho}:=\{x\in L\,|\, \rho(\bar \sigma)(x)=x\}.
$$ \begin{defi} A {\it $(\rho, T)$-polarization} of a $T$-polarized $(X,\iota)$ is a marking $\eta\colon L\to H^2(X,\mathbb{Z})$ such that $\eta_{|T}=\iota$ and such that there exists $\sigma\in\Aut(X)$ satisfying $\sigma^\ast=\eta\circ\rho({\bar\sigma})\circ\eta^{-1}$. The pair $(X,\eta)$ is said to be {\it $(\rho, T)$-polarized} (in order to keep a light notation, we forget about $\iota$, though it is part of the data). \end{defi} \begin{rem}\label{rho-monodromy} It follows immediately from the definition and from the Hodge-theoretic Torelli Theorem \ref{HTT} that a necessary condition for the existence of $(\rho,T)$-polarized marked pairs is that $\rho(\bar{\sigma})\in \Mo^2(L)$. \end{rem} Two $(\rho, T)$-polarized marked IHS manifolds $(X_1, \eta_1)$ and $(X_2,\eta_2)$ are isomorphic if there is an isomorphism $f\colon X_1\to X_2$ such that $\eta_1=f^*\circ \eta_2$. Let $\omega$ be the line in $L\otimes \mathbb{C}$ defined by $\omega=\eta^{-1}(H^{2,0}(X))$ and let $\xi\in\mathbb{C}^*$ be such that $\rho(\bar{\sigma})(\omega)=\xi \omega$. Observe that $\xi\not= 1$, since the action is non-symplectic; moreover $\xi$ is a primitive $p$-th root of unity different from $-1$, since $p\geq 3$ is prime. The period $\omega$ belongs to the eigenspace $S(\xi)$ of $S\otimes \mathbb{C}$ relative to the eigenvalue $\xi$, where $S$ is the orthogonal complement of $T$ in $L$. Then the period belongs to the space $$ \Omega_T^{\rho,\xi}:=\{x\in \mathbb{P}(S(\xi))\,|\, q(x+\bar{x})>0\} $$ of dimension $\dim S(\xi)-1$, which is a complex ball if $\dim S(\xi)\geq 2$. It is easy to check that every point $x\in \Omega_T^{\rho,\xi}$ automatically satisfies the condition $q(x)=0$: indeed $\rho(\bar{\sigma})$ preserves the $\mathbb{C}$-bilinear extension of the form, so $q(x)=q(\rho(\bar{\sigma})x)=q(\xi x)=\xi^2 q(x)$, and $\xi^2\not=1$ since $\xi\not=\pm 1$. We recall now some of the results of \cite{BCS}, which contains the classification of non-symplectic automorphisms of prime order $3\leq p\leq 19$, $p\not=5$, and partial results on involutions, completing the ones in \cite{BeauvilleInv}.
The cases $p=5$ and $p=23$ were then discussed respectively in \cite{kevin} and in \cite{BCMS}. Such automorphisms are classified in terms of their invariant sublattice $T$ (see \cite[Appendix A]{BCS} and \cite[Section 3.4]{kevin}). For higher dimensional \ihskn a classification is not known yet; there are only partial results, due to \cite{Joumaah}, in the case of involutions. Moduli spaces of \ihskn endowed with a non-symplectic involution are studied in \cite{Joumaah}; in the case of \ihsk, for odd primes, some partial results were contained in \cite{BCS}. There the authors deal with the case $T=\overline{T}\oplus \langle -2\rangle$, where $\overline{T}$ is an even non-degenerate lattice of signature $(1,21-(p-1) m)$ with a fixed primitive embedding in the $K3$ lattice $\Lambda$, and $S=T^{\perp}\cap L\subset \Lambda$. \begin{theorem}{\cite[Theorem 5.5]{BCS}}\label{ball} Let $X$ be a $(\rho, T)$-polarized \ihsk such that $H^{2,0}(X)$ is contained in the eigenspace of $H^2(X,\mathbb{C})$ relative to $\xi$ (with $T$ admitting a decomposition as above). Then $\omega_X\in \Omega_T^{\rho,\xi}$ and, conversely, if $\dim S(\xi)\geq 2$ every point of $\Omega_T^{\rho,\xi}\setminus\bigcup_{\delta\in S, q(\delta)=-2} \left(H_{\delta}\cap \mathbb{P}(S(\xi))\right)$ is the period point of some $(\rho, T)$-polarized \ihsk (where $H_\delta$ is the hyperplane in $\mathbb{P}(S(\xi))$ orthogonal to $\delta$). \end{theorem} A comparison with \cite[Appendix A]{BCS} shows that there are only two cases in the classification of non-symplectic automorphisms of prime order on \ihsk where the assumption on $T$ of Theorem \ref{ball} is not satisfied, namely $T=\langle 6\rangle$ and $T=\langle 6\rangle \oplus E_6^\vee(3)$.
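The invariants of $E_6^\vee(3)$ quoted above can be verified mechanically (our illustration, using SymPy; the Cartan matrix is in the Bourbaki labelling, and the overall sign of the form, irrelevant for the discriminant data, is taken positive):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Cartan matrix of E6; the Gram matrix of the dual lattice is its inverse,
# and rescaling by 3 gives E6^vee(3) up to an overall sign
C = Matrix([
    [ 2,  0, -1,  0,  0,  0],
    [ 0,  2,  0, -1,  0,  0],
    [-1,  0,  2, -1,  0,  0],
    [ 0, -1, -1,  2, -1,  0],
    [ 0,  0,  0, -1,  2, -1],
    [ 0,  0,  0,  0, -1,  2],
])
G = 3 * C.inv()

print(all(x.is_integer for x in G))             # True: integral lattice
print(all(G[i, i] % 2 == 0 for i in range(6)))  # True: even lattice
print(abs(G.det()))                             # 243 = 3**5
print(smith_normal_form(G, domain=ZZ))          # invariant factors 1, 3, 3, 3, 3, 3
```

The Smith normal form exhibits the discriminant group $(\mathbb{Z}/3\mathbb{Z})^{\oplus 5}$, i.e. the lattice is $3$-elementary with $a=5$, as stated.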
\section{The image of the period map}\label{surjective-period} The aim of this section is to determine the period points, via the period map $\mathcal{P}$, corresponding to $(\rho,T)$-polarized \ihskn for all primes $p\geq 3$, with given invariant lattice $T$; in particular, in the case of \ihskcom we consider $T$ as classified in \cite[Appendix A]{BCS}. Recall the following definition (see \cite[Definition 1.13]{AmerikVerbitsky} and also \cite[Definition 5.10]{MarkmanTorelli}): \begin{defi} Let $X$ be an IHS manifold. A rational non-zero class $\delta\in H^{1,1}(X)\cap H^2(X,\mathbb{Q})$ with $q(\delta)<0$ is said to be \textit{monodromy birationally minimal} (MBM) if there exists a bimeromorphic map $f:X\dashrightarrow Y $ and a monodromy operator $g\in\Mo^2(X)$ which is also a Hodge isometry such that the hyperplane $\delta^{\perp}\subset H^{1,1}(X)\cap H^2(X,\mathbb{R})$ contains a face of $g(f^*(\mathcal{K}_{Y}))$. Let $\Delta(X)$ be the set of integral MBM classes $\delta\in H^{1,1}(X)\cap H^2(X,\mathbb{Z})$ on $X$. \end{defi} We call the classes in $\Delta(X)$ {\it wall divisors} (see also \cite{Knutsen-Lelli-Chiesa-Mongardi}). An essential result for what follows is: \begin{theorem}\cite[Theorem 6.2]{AmerikVerbitsky}\label{wallsMBM} Let $X$ be an IHS manifold, $\Delta(X)$ as above and $$ \mathcal{H}:=\bigcup_{\delta\in \Delta(X)} \delta^{\perp}\subset H^{1,1}(X)\cap H^2(X,\mathbb{R}). $$ Then the K\"ahler cone $\mathcal{K}_X$ is a connected component of $\mathcal{C}_X\setminus\mathcal{H}$. \end{theorem} The previous theorem generalizes the analogous result for $K3$ surfaces, where MBM classes replace the $(-2)$--curves.
Recall also the following: \begin{defi}\cite[Definition 6.1]{AmerikVerbitsky} A {\it K\"ahler-Weyl chamber} of $X$ is the image $g(f^*\mathcal{K}_Y)$ of the K\"ahler cone of $Y$ under some $g\in \Mo^2(X)$ which is also an isomorphism of Hodge structures, where $Y$ runs through all birational models of $X$ and $f:X\dashrightarrow Y$. \end{defi} \begin{rem}\label{utile} \begin{itemize} \item[1)] By \cite[Lemma 5.12]{MarkmanTorelli}, if $X_1$ and $X_2$ are birational, then the birational map defines a parallel transport operator which is a Hodge isometry. This maps K\"ahler-Weyl chambers in $\mathcal{C}_{X_1}$ to K\"ahler-Weyl chambers in $\mathcal{C}_{X_2}$. In particular the number of connected components in Theorem \ref{wallsMBM} does not depend on the birational model we have chosen. \item[2)] Fix an MBM class $\delta\in\NS(X)$; then, as seen in the definition, $\delta^{\perp}$ does not necessarily contain a face of the K\"ahler cone $\mathcal{K}_X$ of $X$, but it has constant sign on it, i.e. $(\delta,k)>0$ for all $k\in \mathcal{K}_X$ or $(-\delta,k)>0$ for all $k\in\mathcal{K}_X$. In particular $\delta$ cannot be zero on $\mathcal{K}_X$, otherwise $\mathcal{K}_X\subset \delta^\perp$, which is not possible by definition of MBM class. \end{itemize} \end{rem} Let $\mathcal{M}_L^+$ be a connected component of the moduli space of marked \ihskn and from now on let $\mathcal{P}\colon\mathcal{M}_L^+\rightarrow \Omega_L$ denote the restriction of the period map to this component. Define $\Delta(L)$ as the set of $\bar{\delta}\in L$ such that there exists $(X,\phi)\in \mathcal{M}_L^+$ with $\phi(\bar{\delta})\in \NS(X)$ an MBM class. Observe that by \cite[Theorem 2.16]{AmerikVerbitskyMK} we have $\Delta(X)=\phi(\Delta(L))\cap \NS(X)$. We denote $\Delta(S)=\Delta(L)\cap S$.
\begin{theorem}\label{surjectivity} Let $T\subset L$ be a fixed primitive embedding and $\rho:G\rightarrow O(L)$ be a group homomorphism such that there exists a $(\rho,T)$-polarized \ihskn $(\bar{X},\bar{\phi})\in\mathcal{M}_L^+$. \begin{enumerate} \item \label{surjectivity-i} Let $(X,\phi)\in \mathcal{M}_L^+$ be a $(\rho, T)$-polarized \ihskn such that $H^{2,0}(X)$ is contained in the eigenspace of $H^2(X,\mathbb{C})$ relative to $\xi$. Then $\mathcal{P}(X,\phi)\in \Omega_T^{\rho,\xi}\setminus\Delta$, where \[ \Delta:=\bigcup_{\delta\in \Delta(S)} (H_{\delta}\cap \Omega_T^{\rho,\xi}) \] and $H_{\delta}$ is the hyperplane orthogonal to $\delta$ in $\mathbb{P}(L_{\mathbb{C}})$. \item \label{surjectivity-ii} Conversely, every point of $\Omega_T^{\rho,\xi}\setminus\Delta$ is the period point of some $(\rho, T)$-polarized \ihskn. \end{enumerate} \end{theorem} \begin{proof} Given a $(\rho, T)$-polarized $(X,\phi)\in\mathcal{M}_L^+$, where $\phi$ denotes the marking, let $\omega:=\phi^{-1}(H^{2,0}(X))$ be its period. If we had $\omega\in H_{\delta}$ for a $\delta \in \Delta(S)$, then, by definition of $\NS(X)$ as orthogonal complement of $H^{2,0}(X)$ in $H^2(X,\mathbb{Z})$, we would get $\phi(\delta)\in\NS(X)$. But then $\phi(\delta)$ is an MBM class on $X$ by \cite[Theorem 2.16]{AmerikVerbitskyMK}. Now by Remark \ref{utile} we have $(\pm\phi(\delta), k)>0$ for all $k\in\mathcal{K}_X$; in particular, $(\phi(\delta),k)$ is never zero for a K\"ahler class $k$. On the other hand, averaging a K\"ahler class over the action of $\sigma$ yields a $\sigma$-invariant K\"ahler class, which lies in $\phi(T)_{\mathbb{R}}$ and is therefore orthogonal to $\phi(\delta)\in\phi(S)$, a contradiction. Conversely, let $\mathbb{C}\omega\in \Omega_T^{\rho,\xi}\setminus\Delta$. By the surjectivity of the period map of $T$-polarized manifolds (see \cite[Proposition 3.2]{C4}), we know that there exists a $T$-polarized marked pair $(X,\phi)$ such that its period is $\mathbb{C}\omega$; let $\mathbb{C}\omega_X:=\phi(\mathbb{C}\omega)$ be the line spanned by a holomorphic symplectic 2-form on $X$. Define $\psi=\phi\circ\rho(\bar{\sigma})\circ\phi^{-1}$.
It is an isometry of $H^2(X,\mathbb{Z})$ and it preserves the Hodge structure on $H^2(X,\mathbb{C})$ since $$ \psi(\omega_X)=\phi(\rho(\bar{\sigma})(\omega))=\phi(\xi\omega)=\xi\omega_X. $$ It follows from Remark \ref{rho-monodromy} that $\psi\in \Mo^2(X)$. We now want to use Markman's Torelli theorem to show that on $X$, or on a birational model of it, we can find a non-symplectic automorphism, and thus we get the surjectivity of the restriction of the period map. We now study the behaviour of $\psi$ with respect to the K\"ahler cone. {\bf First case}. If $\mathcal{K}_X\cap \phi(T)\neq \emptyset$, this means that $\psi$ fixes a K\"ahler class. By Markman's Torelli theorem \cite[Theorem 1.3]{MarkmanTorelli}, $\psi$ is then induced by an automorphism $\sigma$, i.e. $\sigma^*=\psi$, and $(X,\phi)$ is $(\rho,T)$-polarized. {\bf Second case}. If $\mathcal{K}_X\cap \phi(T)= \emptyset$, we remark that for all $\delta\in \Delta(X)$, the invariant sublattice $H^2(X,\mathbb{Z})^{\psi}=\phi(T)$ is not contained in $\delta^{\perp}$. Otherwise there would exist a $\delta\in \Delta(X)$ orthogonal to $\phi(T)$, hence such that $\phi^{-1}(\delta)\in \Delta(S)$ (in particular, it is contained in $S$); but since $\delta\in\NS(X)$ we would also have $(\phi^{-1}(\delta),\omega)=0$, contradicting the fact that $\omega\notin H_{\phi^{-1}(\delta)}\cap \Omega_T^{\rho,\xi}$. As a consequence, $\phi(T)$ is not contained in $\bigcup_{\delta\in\Delta(X)}\delta^{\perp}$. In particular, this shows that the intersection of $H^2(X,\mathbb{Z})^{\psi}=\phi(T)$ with $\mathcal{C}_X\setminus\bigcup_{\delta\in\Delta(X)}\delta^{\perp}$ is non-empty. In other words, there exists $h\in H^2(X,\mathbb{Z})^{\psi}\cap \mathcal{K}$ for $\mathcal{K}$ a K\"ahler-Weyl chamber (see \cite[Definition 5.10]{MarkmanTorelli}), and so there exist a birational map $f:X\dashrightarrow\tilde{X}$, a K\"ahler class $y$ on $\tilde{X}$ and a monodromy operator $\tau$ preserving the Hodge decomposition on $X$ such that $\tau(f^*(y))=h$.
Now consider the isometry $\tilde{\psi}=g^{-1}\circ\psi\circ g$ on $H^2(\tilde{X},\mathbb{Z})$, for $g=\tau\circ f^*$: since $g$ is a Hodge isometry, a computation as above shows that $\tilde{\psi}$ is a monodromy operator preserving the Hodge decomposition; moreover it satisfies $\tilde{\psi}(y)=g^{-1}(\psi(g(y)))=g^{-1}(\psi(h))=g^{-1}(h)=y$ because $h\in H^2(X,\mathbb{Z})^{\psi}$, hence it fixes a K\"ahler class. Moreover $\tilde{X}$ has period $\omega$ and is $(\rho,T)$-polarized. \end{proof} \begin{rem} The previous theorem does not say that every birational model in a fiber of the previous period map admits a non-symplectic automorphism. This is a priori not true; it becomes true if for a certain $X$ we have $\mathcal{C}_X\subset \phi(T)_\mathbb{R}$. See the discussion in the next section. \end{rem} As a byproduct of the first part of the proof of Theorem \ref{surjectivity} above we obtain the following \begin{cor}\label{iperpianiDeltaS} Given a marked pair $(X,\phi)\in\mathcal{M}_L^+$, if $\mathcal{P}(X,\phi)\in\Delta$ then there is no automorphism of $X$ of prime order $p$ with invariant sublattice isometric to $T$. \end{cor} \begin{rem} Observe that if $\dim S(\xi)=1$ then one has $\dim \Omega_T^{\rho,\xi}=0$, so that, if this space is not empty, the period domain $\Omega_T^{\rho,\xi}$ consists of exactly one point and there exist finitely many $(\rho, T)$-polarized \ihskn. \end{rem} Let $\mathcal{M}_T^{\rho,\xi}$ be the set of $(\rho,T)$-polarized pairs $(X,\phi)\in\mathcal{M}_L^+$ such that $\rho(\bar{\sigma})(\omega)=\xi \omega$, for $\omega$ the line in $L\otimes \mathbb{C}$ defined by $\omega=\phi^{-1}(H^{2,0}(X))$. Theorem \ref{surjectivity} implies that the period map restricts to a surjection $\mathcal{P}:\mathcal{M}_T^{\rho,\xi}\rightarrow\Omega_T^{\rho,\xi}\setminus\Delta$. \section{Injectivity locus of the period map}\label{injectivityperiod} In this section we use techniques developed in \cite{Joumaah} to construct an injective restriction of the period map.
Given a non-symplectic automorphism $\sigma$ of order $p\geq 3$ on $(X,\phi)\in \mathcal{M}_T^{\rho,\xi}$, where $\phi$ is the marking, let $\mathcal{C}_X^{\sigma}=\{y\in\mathcal{C}_X \,|\, \sigma^*(y)=y\}$ denote the connected component of the fixed part of the positive cone that contains a K\"ahler class. Observe that this is surely non-empty since there exists a $\sigma$-invariant ample class on $X$. Consider the following chamber decomposition: \begin{equation}\label{invdec} \mathcal{C}_X^{\sigma}\setminus \bigcup_{\delta\in \Delta_{\sigma }(X)}\ \delta^{\perp} \end{equation} where $\Delta_{\sigma }(X)=\lbrace \delta\in\Delta(X)\,|\, \sigma^*(\delta)=\delta\rbrace$. Clearly, a priori this set contains fewer walls. One can see $\mathcal{C}_X^{\sigma}$ also as the intersection $$ \mathcal{C}_X\cap\phi(T)_{\mathbb{R}}. $$ Remark also that, since $\delta$ is $\sigma$-fixed, the hyperplane $\delta^{\perp}\subset H^{1,1}(X)\cap H^2(X,\mathbb{R})$ is $\sigma$-invariant, so that $\delta^{\perp}\cap \phi(T)_{\mathbb{R}}$ is non-empty. \begin{defi} The {\it stable invariant K\"ahler cone} $\tilde{\mathcal{K}}_X^{\sigma}$ of $X$ is the chamber of (\ref{invdec}) containing the invariant K\"ahler cone $\mathcal{K}_X^{\sigma}:=\mathcal{K}_X\cap\mathcal{C}_X^{\sigma}$. \end{defi} One can easily show that for any $(X,\phi)\in\mathcal{M}_T^{\rho,\xi}$ we have $\phi(\Delta(T))=\Delta_{\sigma }(X)$. Let $C_T$ be the connected component of the cone $\lbrace x\in T_{\mathbb{R}}\, |\, q(x)>0\rbrace\subset L_{\mathbb{R}}$ such that $\phi(C_T)=\mathcal{C}_{X}^{\sigma}$. Moreover, let $K(T)$ be a connected component of the chamber decomposition \begin{equation}\label{decomp-C-T} C_T\setminus\bigcup_{\delta\in \Delta(T)}\delta^{\perp}\subset T_{\mathbb{R}}. \end{equation} Remark that here we work only with classes in $T_\mathbb{R}$ and not with the whole space $L_{\mathbb{R}}$: this will be fundamental to get an injective restriction of the period map.
\begin{lemma}\label{forany} The following hold: \begin{enumerate} \item Let $(X_0,\phi_0)\in\mathcal{M}_T^{\rho,\xi}$ with non-symplectic automorphism $\sigma_0\in\Aut(X_0)$ be such that $\mathcal{K}_{X_0}^{\sigma_0}\subset \phi_0(K(T))$; then $\phi_0(K(T))=\tilde{\mathcal{K}}_{X_0}^{\sigma_0}$. \item If $\NS(X)=\phi(T)$, then all MBM classes are $\sigma$-invariant. Hence we have that $\mathcal{C}_X^{\sigma}=\mathcal{C}_X$. Moreover if $\mathcal{K}_X^{\sigma}\subset \phi(K(T))$ we have $$ \mathcal{K}_{X}^{\sigma}= \phi(K(T))=\tilde{\mathcal{K}}_X^{\sigma}=\mathcal{K}_X. $$ \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item It follows immediately from $\phi_0(\Delta(T))=\Delta_{\sigma_0}(X_0)$ and from the fact that $\phi_0(C_T)=\mathcal{C}_{X_0}^{\sigma_0}$. \item If $\NS(X)=\phi(T)$, then $\phi(\Delta(T))=\Delta_{\sigma }(X)=\Delta(X)$, hence the chamber decomposition is exactly the one defining the invariant K\"ahler cone $\mathcal{K}_X^{\sigma}$. Observe moreover that in this case $H^{1,1}(X)\cap H^2(X,\mathbb{R})$ is equal to $\phi(T)_{\mathbb{R}}$, so it is fixed pointwise by $\sigma^*$. Since the positive cone is an open subset of it, we have that $\mathcal{C}_X^{\sigma}=\mathcal{C}_X$; in particular $\mathcal{K}_X^{\sigma}=\mathcal{K}_X$. \end{enumerate} \end{proof} \begin{rem} Observe that item $(2)$ in Lemma \ref{forany} corresponds to the ``generic'' case, since a generic \ihskn in the moduli space has N\'eron-Severi group equal to $\phi(T)$. \end{rem} When the equality $\mathcal{K}_X^{\sigma}= \phi(K(T))$ fails, there exists an algebraic wall $\delta\in\Delta(X)$ which is not $\sigma$-fixed, i.e. it is not in $\Delta_{\sigma}(X)$, but $\delta^{\perp}\cap \mathcal{C}_X^{\sigma} \not=\emptyset$. In particular, it does not appear in the decomposition \eqref{invdec}.
More precisely, since $\mathcal{K}_X^{\sigma}\subsetneq \phi(K(T))$, there exists an algebraic wall $\delta\in\Delta(X)$ such that $\delta^{\perp}\cap \phi(K(T))\neq\emptyset$: such a wall $\delta$ is neither in $\phi(\Delta(T))$ nor in $\phi(\Delta(S))$. The first fact is clear; for the second, if $\delta\in \phi(\Delta (S))$, then there would be an ample invariant class ($X$ is projective) having zero intersection with it, which is impossible (see the proof of Theorem \ref{surjectivity}). In fact $\phi^{-1}(\delta)$ belongs to the set $\Delta'(L)$ of elements $\nu\in\Delta(L)$ such that there is a decomposition $\nu=\nu_T+\nu_S$ with $\nu_T\in T_{\mathbb{Q}}$, $\nu_S\in S_{\mathbb{Q}}$ and $q(\nu_T)<0$, $q(\nu_S)<0$ (see \cite{Joumaah}). Recall the following: \begin{lemma} \cite[Lemma 7.6]{Joumaah}\label{characterization-Delta'} The following are equivalent: \begin{enumerate} \item $\delta\in\Delta'(L)$; \item $\sign(T\cap\delta^{\perp})=(1,\rank T-2)$ and $\sign(S\cap\delta^{\perp})=(2,\rank S-3)$; \item $\delta\notin T$, $\delta\notin S$, $\Omega_L\cap \mathbb{P}(S)\cap\delta^{\perp}\neq\emptyset$ and $\lbrace x\in T_{\mathbb{R}}\, |\, q(x)>0\rbrace\cap \delta^{\perp}\neq\emptyset$. \end{enumerate} \end{lemma} The chambers of $$\phi(K(T))\setminus\bigcup_{\delta\in \Delta(X)\cap\phi(\Delta'(L))}\delta^{\perp}$$ will turn out to correspond to elements $(Y,\eta)$ in the fiber $\mathcal{P}^{-1}(\mathcal{P}(X,\phi))$ satisfying $\mathcal{K}_Y^{\sigma}\subset\eta(K(T))$. We introduce the following definition: \begin{defi} A $(\rho,T)$-polarized manifold $(X,\phi)$ is {\it $K(T)$-general} if $\phi(K(T))=\mathcal{K}_X^{\sigma}$, where $\sigma$ is the automorphism of order $p$ induced by $\rho(\bar{\sigma})$. \end{defi} Let $\mathcal{M}_{K(T)}^{\rho,\xi}$ be the set of $(\rho,T)$-polarized pairs $(X,\phi)\in \mathcal{M}_{T}^{\rho,\xi}$ that are $K(T)$-general.
We denote by $\Delta'(K(T))$ the set of $\delta\in\Delta'(L)$ such that $\delta^{\perp}\cap K(T)\neq\emptyset$. \begin{theorem}\label{injectivity} The period map induces a bijection $$\mathcal{P}:\mathcal{M}_{K(T)}^{\rho,\xi}\rightarrow \Omega:=\Omega_T^{\rho,\xi}\setminus(\Delta\cup\Delta'),$$ where $\Delta$ is the one defined in Theorem \ref{surjectivity} and $\Delta':=\bigcup_{\delta\in \Delta'(K(T))} (H_{\delta}\cap \Omega_T^{\rho,\xi})$. \end{theorem} \begin{proof} Given $(X,\phi)\in\mathcal{M}_{K(T)}^{\rho,\xi}$, where $\phi$ denotes the marking, let $\omega:=\phi^{-1}(H^{2,0}(X))$ be its period. By Theorem \ref{surjectivity} (\ref{surjectivity-i}), we know that $\omega\notin\Delta$. If we had $\omega\in H_{\delta}$ for a $\delta \in \Delta'(K(T))$, then by definition of $\NS(X)$ as orthogonal complement of $H^{2,0}(X)$ in $H^2(X,\mathbb{Z})$, we would get $\phi(\delta)\in\NS(X)$. But then $\phi(\delta)$ is an MBM class on $X$ by \cite[Theorem 2.16]{AmerikVerbitskyMK}. By construction, we would have $\phi(\delta)^{\perp}\cap \phi(K(T))\neq\emptyset$ and this immediately implies that $\mathcal{K}_X^{\sigma}\subsetneq \phi(K(T))$, contradicting the assumption that $(X,\phi)$ is $K(T)$-general. Hence $\omega\in\Omega$. Given $\omega\in\Omega$, by Theorem \ref{surjectivity}, there exists $(X,\phi)\in\mathcal{M}_T^{\rho,\xi}$ such that its period is $\omega$; we want to show that there exists exactly one $K(T)$-general marked pair $(X',\phi')$ inside $\mathcal{P}^{-1}(\omega)$. Let $\sigma$ be the automorphism on $X$ that induces $\phi\circ\rho(\bar{\sigma})\circ\phi^{-1}$ on $H^2(X,\mathbb{Z})$. Either $\mathcal{K}_X^{\sigma}\subset \phi(K(T))$ or $\mathcal{K}_X^{\sigma}\cap \phi(K(T))=\emptyset$. In the first case, suppose that we have $\mathcal{K}_X^{\sigma}\subsetneq \phi(K(T))$: this means that there exists a wall $\delta\in\Delta(X)\setminus\Delta_{\sigma}(X)$ such that $\delta^{\perp}\cap \phi(K(T))\neq\emptyset$.
This implies that $\delta\in \phi(\Delta'(K(T)))\cap\Delta(X)$, contradicting the fact that $\omega \notin\Delta'$. Hence $(X,\phi)$ is $K(T)$-general. Otherwise, if $\mathcal{K}_X^{\sigma}\cap \phi(K(T))=\emptyset$, let $\mathcal{K}'$ be a chamber of $\mathcal{C}_X\setminus \bigcup_{\delta\in \Delta(X)}\delta^{\perp}$ such that $\mathcal{K}'\cap\mathcal{C}_X^{\sigma}\subset\phi(K(T))$. As in the proof of Theorem \ref{surjectivity}, let $(X',\phi')\in \mathcal{M}_{T}^{\rho,\xi}$ be the birational model of $(X,\phi)$ such that $\tau(f^*(\mathcal{K}_{X'}))=\mathcal{K}'$, for $\tau$ a Hodge monodromy on $X$ and $f:X\dashrightarrow X'$ a birational map; moreover $\phi':=g^{-1}\circ\phi$, where $g=\tau\circ f^*$, and the non-symplectic automorphism on $X'$ is $\sigma'=g^{-1}\circ \sigma\circ g$. Then by construction $\mathcal{K}_{X'}^{\sigma'}\subset \phi'(K(T))$ and we can repeat the above proof. Suppose now that $\mathcal{P}(X_1,\phi_1)=\mathcal{P}(X_2,\phi_2)=\omega$ for two $K(T)$-general pairs $(X_1,\phi_1)$ and $(X_2,\phi_2)$, and consider $f=\phi_2\circ\phi_1^{-1}$: it is a Hodge isometry and a parallel transport operator, by \cite[Theorem 3.2]{MarkmanTorelli}. Moreover, $f(\mathcal{K}_{X_1}^{\sigma_1})=f(\phi_1(K(T)))=\phi_2(K(T))=\mathcal{K}_{X_2}^{\sigma_2}$, where $\sigma_1$ and $\sigma_2$ denote the non-symplectic automorphisms on $X_1$ and $X_2$ respectively. We have shown that $f$ sends a K\"ahler class to a K\"ahler class, so by Theorem \ref{HTT} $f$ is induced by an isomorphism $\overline{f}:X_1\rightarrow X_2$. \end{proof} \section{Complex ball quotients}\label{arithmetic-quot} We now study how the period point $\mathcal{P}(X,\phi)$ depends on the marking. By taking a monodromy operator $h\in \Mo^2(L)$ which is the identity on $T$, i.e. an element of $\lbrace g\in\Mo^2(L)\,|\,g_{|T}=\id_T\rbrace$, we get another marking $\phi\circ h^{-1}$ of the $T$-polarized \ihskn $(X,\iota)$.
\begin{defi} Let $(X_1, \eta_1)$ and $(X_2,\eta_2)$ be two $(\rho, T)$-polarized marked IHS manifolds and let $\sigma_i$ be a generator of $G\subset \Aut(X_i)$, for $i=1,2$. A {\it $(\rho, T)$-polarized parallel transport operator} is an isometry $f:H^2(X_1,\mathbb{Z})\rightarrow H^2(X_2,\mathbb{Z})$ such that there exist a smooth and proper family $p:\mathcal{X}\rightarrow B$, isomorphisms $\psi_i:X_i\rightarrow\mathcal{X}_{b_i}$ and a path $\gamma:[0,1]\rightarrow B$ with $\gamma(0)=b_1$, $\gamma(1)=b_2$, such that parallel transport in $R^2p_*\mathbb{Z}$ along $\gamma$ induces $\tilde{f}:=(\psi_2)_*\circ f\circ(\psi_1)^*$, and a holomorphic map $F:\mathcal{X}\rightarrow\mathcal{X}$ such that $F_b$ is an automorphism for all $b\in B$, $F_{b_i}$ induces $\sigma_i^*$ and $F_{b_1}^*=\tilde{f}^{-1}\circ F_{b_2}^*\circ \tilde{f}$. In particular, a {\it $(\rho, T)$-polarized monodromy operator} is a monodromy operator on $X$ satisfying $F_{b_1}^*\circ \tilde{f}=\tilde{f}\circ F_{b_1}^*$. \end{defi} Let $\Gamma_T$ be $\alpha(\lbrace g\in\Mo^2(L)\,|\,g_{|T}=\id_T\rbrace)$, with $\alpha: O(L)\rightarrow O(S)$ the restriction map. This group acts on $\mathbb{P}(S\otimes\mathbb{C})$; the stabilizer of $\Omega_T^{\rho,\xi}$ is equal to the image $\Gamma_{T}^{\rho,\xi}\subset O(S)$, via the restriction map $\alpha$, of the group of $(\rho,T)$-polarized monodromy operators $$ \Mo^2(T,\rho)=\lbrace g\in\Mo^2(L)\,|\,g_{|T}=\id_T,\ g\circ \rho(\bar\sigma)=\rho(\bar\sigma)\circ g\rbrace\subset O(L). $$ \begin{prop}\label{equivariance} The group $\Mo^2(T,\rho)$ acts on the set $\mathcal{M}_{K(T)}^{\rho,\xi}$. Moreover, the restriction of the period map $\mathcal{P}:\mathcal{M}_{K(T)}^{\rho,\xi}\rightarrow \Omega$ is equivariant with respect to the actions of $\Mo^2(T,\rho)$ and $\Gamma_{T}^{\rho,\xi}$, so it induces a bijection between the quotients. \end{prop} \begin{proof} Given $(X,\phi)\in\mathcal{M}_{K(T)}^{\rho,\xi}$, we want to show that $(X,\phi\circ g^{-1})\in\mathcal{M}_{K(T)}^{\rho,\xi}$.
Indeed, $(\phi\circ g^{-1})_{|T}=\iota$, $\phi\circ g^{-1}\circ\rho(\bar{\sigma})\circ g\circ\phi^{-1}=\phi\circ g^{-1}\circ g\circ\rho(\bar{\sigma})\circ \phi^{-1}=\sigma^*$ and $\phi(g^{-1}(K(T)))=\phi(K(T))$ because $g^{-1}_{|T}=\id_T$. The equivariance is obvious. \end{proof} \begin{lemma}\label{arithmeticGroup} The group $\Gamma_{T}^{\rho,\xi}$ is an arithmetic group. \end{lemma} \begin{proof} We remark first that $\Gamma_{T}^{\rho,\xi}=\lbrace g\in\Gamma_T\,|\, g\circ \rho(\bar\sigma)=\rho(\bar\sigma)\circ g\rbrace\subset O(S)$. The group $Z=\lbrace g\in O(S,\mathbb{Q})\,|\, g\circ\rho(\bar\sigma)=\rho(\bar\sigma)\circ g\rbrace$ is a linear algebraic group over $\mathbb{Q}$, because the condition $g\circ\rho(\bar\sigma)=\rho(\bar\sigma)\circ g$ gives polynomial equations in the coefficients of the matrix associated to $g$. Hence $Z(\mathbb{Z})=Z\cap O(S)$ is an arithmetic group. We know from \cite[Proposition 3.5]{C4} and its generalization in \cite{C5} that $\Gamma_T$ is of finite index inside $O(S)$, and this implies that $\Gamma_{T}^{\rho,\xi}=Z(\mathbb{Z})\cap \Gamma_T$ is of finite index inside $O(S)\cap Z(\mathbb{Z})=Z(\mathbb{Z})$; thus it is an arithmetic group. \end{proof} A straightforward generalization of \cite[Lemma 7.7]{Joumaah} gives the following result. \begin{lemma}\label{loc-finiteness} The collections of hyperplanes $\Delta$ and $\Delta'$ in $\Omega_L\cap\mathbb{P}(S)$ are locally finite. \end{lemma} \begin{proof} The local finiteness of $\Delta$ is proven in \cite[Lemma 7.7]{Joumaah}, where it is also proven for $\Delta'$ in the case where $T$ is the invariant sublattice of a non-symplectic involution. That proof can be easily generalized in the following way: given $\delta\in \Delta'(L)$, remark that there exists an integer $d$ such that $d\delta=\delta_T+\delta_S$ with $\delta_T\in T$ and $\delta_S\in S$.
Indeed, $T\oplus S$ is a sublattice of finite index in $L$, and it is enough to take $d:=|L/(T\oplus S)|$ to obtain such a decomposition (in particular, in our case $d=p^a$ by \cite[Lemma 4.3]{BNWS}). Then, by Lemma \ref{characterization-Delta'} we deduce $q(\delta_T)<0$ and $q(\delta_S)<0$, but we know that there are only finitely many possible values of $q(\delta)$ for an MBM class $\delta$, so the same holds for $q(\delta_S)$. It is now enough to remark that $H_{\delta}=H_{\delta_S}$ in $\Omega_L\cap\mathbb{P}(S)$ and that $\Gamma_T$ acts on the set of hyperplanes $H_{\delta_S}$ with a proper and discontinuous action, so that every orbit is closed, and hence locally finite. \end{proof} \begin{cor}\label{algebraicModuli} The complex ball quotient $\mathcal{M}_{K(T)}^{\rho,\xi}/\Mo^2(T,\rho)\cong \Omega/\Gamma_{T}^{\rho,\xi}$ parametrizes isomorphism classes of $K(T)$-general $(\rho,T)$-polarized \ihskn, and it is a quasi-projective variety of dimension $\dim S(\xi)-1$. \end{cor} \begin{proof} The first part of the statement is a direct consequence of Theorem \ref{injectivity} and Proposition \ref{equivariance}. The sets of hyperplanes $\Delta$ and $\Delta':=\bigcup_{\delta\in \Delta'(K(T))} H_{\delta}$ are locally finite in the period domain $\Omega_{T^\perp}$ of $T$-polarized \ihskn by Lemma \ref{loc-finiteness}. Now $\Omega_T^{\rho, \xi}$ is contained in $\Omega_{T^\perp}$, so that $\Delta$ and $\Delta'$ remain locally finite. Then by Lemma \ref{arithmeticGroup} and Baily-Borel's Theorem \cite{BailyBorel} the arithmetic quotient $\Omega/\Gamma_{T}^{\rho,\xi}$ is a quasi-projective variety of dimension $\dim S(\xi)-1$. \end{proof} We have seen in Corollary \ref{iperpianiDeltaS} that, if $\mathcal{P}(X,\phi)\in\Delta$, there is no automorphism on $X$ of prime order $p$ with invariant sublattice $T$. The next statement explains what happens when the period belongs to $\Delta'$.
We need to recall the following notation: given a period point $\pi\in\Omega_L$, the group of monodromies which are isomorphisms of Hodge structures is the same for all marked pairs in $\mathcal{P}^{-1}(\pi)$; we denote it $\Mo^2_{\mathrm{Hdg}}(\pi)$. \begin{prop}\label{Delta'} If $\pi\in\Delta'\setminus\Delta$, each element $(X,\phi)$ of $\mathcal{P}^{-1}(\pi)$ such that $\phi(K(T))\cap \mathcal{K}_X\neq\emptyset$ has an automorphism of order $p$ with invariant sublattice $T$, but is not $K(T)$-general. There is a bijection between $\Mo^2_{\mathrm{Hdg}}(\pi)$-orbits of elements $(Y,\eta)$ of $\mathcal{P}^{-1}(\pi)$ such that $\eta(K(T))\cap \mathcal{K}_Y\neq\emptyset$ and $\Mo^2_{\mathrm{Hdg}}(X)$-orbits of chambers of $U:=\phi(K(T))\setminus\bigcup_{\delta\in \Delta(X)\cap\phi(\Delta'(K(T)))}\delta^{\perp}$, for any $(X,\phi)\in\mathcal{P}^{-1}(\pi)$. \end{prop} \begin{proof} Let $(Y,\eta)\in\mathcal{P}^{-1}(\pi)$ be such that $\eta(K(T))\cap \mathcal{K}_Y\neq\emptyset$. It follows from the proof of Theorem \ref{surjectivity} that there is an automorphism $\sigma_Y$ of order $p$ with invariant sublattice $T$, since by assumption there is a K\"ahler class invariant under the action of $\eta\circ\rho(\bar{\sigma})\circ\eta^{-1}$. On the other hand, $\pi\in\Delta'\setminus\Delta$ implies that there is $\delta\in\Delta'(K(T))$ such that $\eta(\delta)\in \NS(Y)$. In particular, $\eta(\delta)^{\perp}\cap \eta(K(T))\neq\emptyset$ and there is more than one chamber in $U$; this immediately implies that $(Y,\eta)$ is not $K(T)$-general. Given the class of $(Y,\eta)$ in $\mathcal{P}^{-1}(\pi)/\Mo^2_{\mathrm{Hdg}}(\pi)$ such that $\eta(K(T))\cap \mathcal{K}_Y\neq\emptyset$, we define $\beta(Y,\eta)=[\mathcal{K}_Y^{\sigma_Y}]$, the equivalence class of $\mathcal{K}_Y^{\sigma_Y}$ with respect to the action of $\Mo^2_{\mathrm{Hdg}}(Y)$, and we will show that $\beta$ is the desired bijection. It is clearly well-defined.
Suppose that $\beta(Y,\eta)=\beta(Y',\eta')$; up to the action of $\Mo^2_{\mathrm{Hdg}}(\pi)$, we may assume $\eta'(\eta^{-1}(\mathcal{K}_Y^{\sigma_Y}))=\mathcal{K}_{Y'}^{\sigma_{Y'}}$. Since $\mathcal{P}(Y,\eta)=\mathcal{P}(Y',\eta')$, it follows from the global Torelli theorem \cite[Theorem 1.3]{MarkmanTorelli} that $\eta'\circ\eta^{-1}$ is induced by an isomorphism $f:Y'\rightarrow Y$, so that $(Y,\eta)$ and $(Y',\eta')$ define the same orbit: this proves the injectivity of $\beta$. Let $(X,\phi)\in\mathcal{P}^{-1}(\pi)$ be such that $\phi(K(T))\cap \mathcal{K}_X\neq\emptyset$ (such $(X,\phi)$ can be constructed as in the proof of Theorem \ref{injectivity}) and let $\mathcal{K}_0$ be a chamber of $U$. Remark that $$U=\phi(K(T))\cap\left(\mathcal{C}_X^{\sigma_X}\setminus\bigcup_{\delta\in \Delta(X)}\delta^{\perp}\right)=\phi(K(T))\cap \left(\mathcal{C}_X\setminus\bigcup_{\delta\in \Delta(X)}\delta^{\perp}\right).$$ Indeed, the elements $\delta\in\Delta(X)$ such that $\delta^{\perp}\cap\phi(K(T))\neq\emptyset$ are by definition those in $\phi(\Delta'(K(T)))$ and $\mathcal{C}_X^{\sigma_X}\cap\phi(K(T)) = \mathcal{C}_X\cap\phi(K(T))$. This tells us that there is one chamber $C_0$ of $\mathcal{C}_X\setminus\bigcup_{\delta\in \Delta(X)}\delta^{\perp}$, and in fact a unique one, such that $C_0\cap\phi(K(T))=\mathcal{K}_0$. By \cite[Proposition 5.14]{MarkmanTorelli} we know that there is $(Y,\eta)\in\mathcal{P}^{-1}(\pi)$ such that $\mathcal{K}_Y=\eta(\phi^{-1}(C_0))$ and, in particular, $\eta(K(T))\cap \mathcal{K}_Y\neq\emptyset$. Moreover, $\beta(Y,\eta)=[\mathcal{K}_Y^{\sigma_Y}]=[\mathcal{K}_0]$. \end{proof} \section{Examples and final remarks} \subsection{A special example: $T=\langle 6 \rangle$} If $(X,\phi)$ is a point in $\mathcal{M}^{\rho,\xi}_T$, then $\mathcal{C}_X^{\sigma}=\mathcal{C}_X\cap\phi(T)_{\mathbb{R}}$, where $\sigma$ denotes the non-symplectic automorphism of order three on $X$. Since $\phi(T)_{\mathbb{R}}$ is one-dimensional, we also have $\mathcal{K}_X^{\sigma}=\mathcal{C}_X^{\sigma}$, so that $\phi(K(T))=\mathcal{C}_X^{\sigma}=\mathcal{K}_X^{\sigma}$.
This means that any point in $\mathcal{M}^{\rho,\xi}_T$ is automatically $K(T)$-general, so that $\mathcal{M}^{\rho,\xi}_T=\mathcal{M}^{\rho,\xi}_{K(T)}$. On the other hand, here clearly $\Delta'=\emptyset$, so the period map $$ \mathcal{M}_T^{\rho,\xi}/\Mo^2(T,\rho)\rightarrow (\Omega_T^{\rho,\xi}\setminus\Delta)/\Gamma_T^{\rho,\xi} $$ is in fact bijective. This ball quotient has already been thoroughly studied by Allcock, Carlson and Toledo in \cite{Allcock-Carlson-Toledo}, who study the moduli space of smooth cubic threefolds by associating to each such threefold $Y\subset \mathbb{P}^4$ a triple cover of $\mathbb{P}^4$ ramified exactly along $Y$. This construction gives exactly the family of smooth cubic fourfolds \[ V_1:L_3(x_0,\dots,x_4)+x_5^3=0,\] where $L_3$ defines a smooth cubic threefold $Y$; we have shown in \cite[Example 6.4]{BCS} that the Fano varieties of lines $F(V_1)$ of cubics $V_1$ give a $10$-dimensional family of \ihsk admitting a non-symplectic automorphism of order three with invariant sublattice $T=\langle 6 \rangle$. The hyperplane arrangement $\Delta$ coincides exactly with the union of the two arrangements $\mathcal{H}_{\Delta}$ and $\mathcal{H}_c$ corresponding respectively to discriminant and chordal cubics (compare with the definition given in \cite{Allcock-Carlson-Toledo} before Theorem 3.7). We briefly explain here why: the hyperplanes in $\mathcal{H}_{\Delta} \cup \mathcal{H}_c$ are the hyperplanes orthogonal to the roots, of square $3$, of the Eisenstein lattice associated to $S$, and the quadratic form $q_S$ on $S$ is then recovered from the quadratic form $q_{\mathcal{E}}$ via $q_S(\delta)=-\frac{2}{3}q_{\mathcal{E}}(\delta)$. Thus, in $\mathbb{P}(S\otimes\mathbb{C})$ they correspond exactly to the hyperplanes orthogonal to the roots, of square $-2$, in $S$.
On the other hand, we know, by results of Bayer, Hassett and Tschinkel \cite{BayerHassettTschinkel} and of Mongardi \cite{MongardiCone}, that the elements $\delta\in\Delta(S)$ are elements in $S$ of square $-2$, or of square $-10$ and divisibility $\mathrm{div}\delta=2$, and these latter cannot exist in a lattice $S$ of determinant $3$. \subsection{Two different chambers} The theory of \S \ref{injectivityperiod} also offers an interesting explanation of \cite[Remark 7.7]{BCS}. In that case, $p=3$ and $T=U\oplus A_2^{\oplus 5}\oplus\langle -2\rangle$, and we were able to construct two families of examples: one of natural automorphisms on the Hilbert scheme of two points of a $K3$ surface and one on a family of Fano varieties of smooth cubic fourfolds. The peculiarity of this example is that, for the first time, it exhibits two families with automorphisms having the same action on cohomology but non-isomorphic fixed loci. In this example, we obtain a four-dimensional complex ball quotient $\Omega$. Theorem \ref{ball} and its proof show that, for every $\pi\in\Omega$, there exists a marked pair $(\Sigma^{[2]},\phi)\in\mathcal{P}^{-1}(\pi)$, with $\Sigma$ a smooth $K3$ surface, endowed with the natural automorphism of order three. On the other hand, the family of Fano varieties of cubics of the form \[ V_2:L_3(x_0,x_1,x_2,x_3)+M_3(x_4,x_5)=0\] also gives an open dense subset of $\mathcal{M}_T^{\rho,\xi}$ of maximal dimension four. Our interpretation of this phenomenon is that, over the very general period point $\pi\in\Omega$, the fibre $\mathcal{P}^{-1}(\pi)$ will contain a marked pair with a natural automorphism and at least one marked pair coming from the Fano variety $F(V_2)$.
Corollary \ref{algebraicModuli} then implies that there exist two chambers $K_1$ and $K_2$ of (\ref{decomp-C-T}) such that the general element in $\mathcal{M}_{K_1}^{\rho,\xi}$ is a Hilbert scheme of two points of a smooth $K3$ surface with a natural automorphism, while the general element in $\mathcal{M}_{K_2}^{\rho,\xi}$ is the Fano variety of a cubic in $V_2$. \subsection{Rationality} If we restrict once more to the assumptions of Theorem \ref{ball}, namely if we suppose that $T=\bar{T}\oplus\langle -2\rangle$, with $\bar{T}\subset L_{K3}$, and we consider isomorphism classes of $K(T)$-general $(\rho,T)$-polarized \ihskcom Corollary \ref{algebraicModuli} gives an isomorphism $\mathcal{M}_{K(T)}^{\rho,\xi}/\Mo^2(T,\rho)\cong \Omega/\Gamma_{T}^{\rho,\xi}$. On the other hand, the arithmetic group $\Gamma_{T}^{\rho,\xi}$ has index two inside the arithmetic group $\Gamma_{K3,\bar{T}}^{\rho,\xi}:=\lbrace g\in O(S)\,|\, g\circ \rho(\bar\sigma)=\rho(\bar\sigma)\circ g\rbrace\subset O(S)$. Hence, $\Omega/\Gamma_{T}^{\rho,\xi}$ is generically a double cover of $\Omega/\Gamma_{K3,\bar{T}}^{\rho,\xi}$, and the latter is a Zariski-open subset of $\Omega_{K3}/\Gamma_{K3,\bar{T}}^{\rho,\xi}$, where $\Omega_{K3}:=\Omega_{T}^{\rho,\xi}\setminus\Delta$; it is classically known (see \cite{DK}) that $\Omega_{K3}/\Gamma_{K3,\bar{T}}^{\rho,\xi}$ parametrizes isomorphism classes of $(\rho,\bar{T})$-polarized $K3$ surfaces. It follows that, even in the case of \ihskcom it is not possible to deduce any information about the rationality of $\mathcal{M}_{K(T)}^{\rho,\xi}/\Mo^2(T,\rho)$ from the work of Ma, Ohashi and Taki \cite{Ma-Ohashi-Taki} about the rationality of $\Omega_{K3}/\Gamma_{K3,\bar{T}}^{\rho,\xi}$ for $p=3$. The rationality problem is still open, also in the case of $K3$ surfaces, for all prime orders $p>3$. \bibliographystyle{amsplain}
\section{Introduction} \label{label_section_Introduction} The Fermi Paradox is the apparent absence of evidence for extraterrestrial intelligence (ETI) in our galaxy, or at least in our local neighborhood, despite calculations suggesting that galactic colonization should be feasible within the age of the galaxy. Numerous theories, aside from an extraordinary rarity of ETI, have been offered to explain the lack of observables. The tendency toward ETI-optimism is understandable not only because the alternative is undesirable, but also because it is fairly unactionable: the null conclusion prescribes no future experiments. Nevertheless, we should not be too quick to exclude reasonable theories of galactic exploration merely because they conflict with that optimism. In section \ref{label_section_VonNeumannSelfReplicatingProbes} we consider that one such theory, the use of robotic self-replicating space-probes (SRPs) for exploration, has all but disappeared from the literature, leaving a gap in our investigation of the Fermi Paradox. In section \ref{label_section_PercolationTheory} we consider a previous explanation for the Fermi Paradox based on a percolation model. In section \ref{label_section_InterstellarSocietalCollapse} we evaluate theories of interstellar societal collapse which attempt to resolve the Fermi Paradox. In response, we present a new theory, the \textit{interstellar transportation bandwidth} (ITB), which we believe provides fresh insight into the feasibility of such models. We also offer a refined version of the Drake equation which incorporates the ITB. In section \ref{label_section_BonusStimulation} we consider an analysis of the Fermi Paradox based on mutual benefit following contact. Finally, in section \ref{label_section_ETIMayStillExistinourGalaxy} we cite a previous theory which we believe best resolves the Fermi Paradox without excluding intragalactic ETI, and in section \ref{label_section_SETI} we discuss how SETI might benefit from the conclusions of this paper. 
\section{Von Neumann Self-Replicating Probes} \label{label_section_VonNeumannSelfReplicatingProbes} The Fermi Paradox is greatly exacerbated by the proposition that galactic colonization might proceed by purposeful exploration and expansion as opposed to mere population diffusion. Rapid exploration would logically benefit from robotic probes. Such a proposition closely mirrors our own space exploration efforts to date. In this paper we use the term \mbox{\textit{super-ETI}} for ETI $10^3+$ years more advanced than us. Exploration undertaken by super-ETI would not only utilize probes of vast intelligence, but would greatly benefit from life-like self-replication mechanisms. Exploring the galaxy solely with resources mined from, and machinery built in, the homeworld solar system is a tremendous energy and economic drain. Self-replication solves this problem by offloading the energy and economic expenditure to the progressing mission itself. For the mere cost of an initial generation of probes, the entire galaxy can be explored. The feasibility of self-replicating machinery is a difficult proposition for many people to accept, but life itself is proof-positive that matter and energy can be organized in such a fashion. The notion that super-ETI could not artificially emulate life-like processes is completely absurd. Life is admittedly very complicated, but it is not \textbf{\textit{that}} complicated! SRPs dramatically decrease the exploration time. While colonization efforts might be slow (a point we reconsider in section \ref{label_section_AFullyComputerizedCivilization}), intentional self-replicating exploration ahead of the colonization wave has been shown to be extremely fast: \mbox{$4$$\times$$10^6$}--\mbox{$3$$\times$$10^8$} years is sufficient to fill the galaxy \citep{Hart:1975,Tipler:1980,Jones:1981}. Thus, they ought to be here already, and many times over at that, as shown in section \ref{label_section_EstimatingtheSRPPopulation}. 
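For a sense of where such figures come from, here is a toy back-of-the-envelope sketch; the galactic radius, cruise speed, hop length, and replication pause below are illustrative assumptions of ours, not parameters taken from the cited models:

```python
# Toy estimate of the time for a self-replicating wavefront to cross the
# galactic disc. All four figures below are illustrative assumptions.
radius_ly = 50_000     # assumed galactic radius, in light-years
speed_c = 0.01         # assumed probe cruise speed, as a fraction of c
hop_ly = 5             # assumed distance between successive host stars
pause_yr = 100         # assumed replication pause at each host star

travel_yr = radius_ly / speed_c              # total time spent in transit
pauses_yr = (radius_ly / hop_ly) * pause_yr  # total time spent replicating
total_yr = travel_yr + pauses_yr

print(f"{total_yr:.1e} years")  # on the order of 10^6--10^7 years
```

Even with a sluggish cruise speed of $0.01c$, the total lands near the low end of the cited \mbox{$4$$\times$$10^6$}--\mbox{$3$$\times$$10^8$} year range, and it is transit time, not replication, that dominates.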
The debate over SRPs has gained a certain degree of infamy. Two seminal papers have dominated the literature, one by Tipler, espousing SRPs \citep{Tipler:1980}, which extends an argument by Hart \citep{Hart:1975}, and one opposing Tipler by Sagan and Newman \citep{SaganNewman:1983}. On the basis of this debate, and perhaps due to a strong desire for ETI to exist in the face of paradoxical theories, recent models of galactic exploration have explicitly excluded SRPs, citing these (and other) papers as their reasoning. In other words, an isolated thirty-year-old argument has redirected the entire field and shuttered possible avenues of research. The field of Fermi Paradox research would benefit if explanations other than the exclusion of SRPs were considered. One point we take issue with is an inherent and frequently unconscious \textit{biological bias} that pervades consideration of computerized intelligence, including SRPs. We expand on this idea in sections \ref{label_section_BiologicalBias} and \ref{label_section_AFullyComputerizedCivilization}. In section \ref{label_section_EstimatingtheSRPPopulation} we estimate the number of SRPs in our solar system. Then, in sections \ref{label_section_SelfReplicatingProbeVoluntaryRefrainment} and \ref{label_section_SelfReplicatingProbeInefficacy} we consider two arguments against SRPs, one by Sagan and Newman which we call the \textit{voluntary refrainment} argument and one by Chyba and Hand which we call the \textit{inefficacy} argument. In both cases, we believe there is opportunity to revisit their conclusions and that the total abandonment of SRPs from recent models of galactic exploration is premature. \subsection{Estimating the SRP Population} \label{label_section_EstimatingtheSRPPopulation} To investigate how many SRPs have reached our solar system, we can use an equation that mirrors the method employed by the Drake equation \citep{Brin:1982, Walters:1980}, \ie by combining fractional parameters and a starting value. 
Thus, on the assumption that each SRP mission targets one SRP per star, the number of SRPs that reach each star in the galaxy is \mbox{$N_r=N_s\cdot f_r\cdot n_r\cdot G_r$} where $N_s$ is the total number of stellar societies to ever arise (including colonies), $f_r$ is the fraction of societies that dispatch SRPs, $n_r$ is the number of missions they initiate, and $G_r$ is the fraction of the galaxy that each mission reaches. There are virtually no estimates of $N_s$ in the literature; most models either derive the \textit{current} ETI population, \eg \citep{CottaMorales:2010} and any application of the Drake equation, or disregard the effects of interstellar colonization, \eg \citep{Forgan:2009, Bjoerk:2007, CottaMorales:2010}. A broad overview of the literature suggests a consensus for the \textit{total} number of \textit{civilizations} (species) to be \mbox{$3$$\times$$10^2$}--\mbox{$10^{10}$}, but common estimates practically never fall below \mbox{$10^4$}. Likewise, we can rightly add at least an order of magnitude to account for colonization. We will assume \mbox{$10^5 \le N_s \le 10^{10}$}. $f_r$ should be quite high; once SRP technology is available, its advantages are too significant to ignore. In section \ref{label_section_SelfReplicatingProbeVoluntaryRefrainment}, we consider a counter-argument, but find it unsatisfactory. $f_r$ is primarily impeded by the mean stellar society lifetime, $L_s$ ($\sim$$L$ from the Drake equation but $L_s$ applies to stellar societies, not species) (see sections \ref{label_section_PercolationTheorywithColonyDeath} and \ref{label_section_RefiningtheDrakeEquation}). However, SRPs are practically immune to societal death. By definition, SRPs can continue their mission even if their society dies. 
If humanity's technological progress is any indication, SRPs should be feasible within $\sim$200-500 years of technological ascendancy (assuming we started \mbox{$\sim$100} years ago), thus invalidating all but the briefest estimates for $L_s$. Furthermore, secondary colonies should produce SRPs \textbf{\textit{much}} more quickly than the homeworld since they are already technologically advanced. We will assume \mbox{$f_r \ge 0.1$}. The purpose of $n_r$ is to emphasize that over centuries or millennia, societies may easily undertake otherwise redundant projects. We will assume \mbox{$1 \le n_r \le 10$}. $G_r$ is arguably very high. In section \ref{label_section_SelfReplicatingProbeInefficacy}, we consider a counter-argument, but find it unsatisfactory. As a concession, we will assume \mbox{$G_r \ge .01$}, but there is little justification for such deficiency. When we populate the equation with the lower and upper estimates, it yields \mbox{$N_r=10^2$}--\mbox{$10^{11}$} SRPs in our solar system at this very moment. The absurdity of this result underlies the tremendous burden of the Fermi Paradox and demonstrates why many ETI-hopefuls have shied away from even the remotest consideration of SRPs. \subsection{Self-Replicating Probe Voluntary Refrainment} \label{label_section_SelfReplicatingProbeVoluntaryRefrainment} One popular argument against SRPs is presented by Sagan and Newman \citep{SaganNewman:1983}. They argue that any presumably wise and cautious civilization would never develop SRPs because such machines would pose an existential risk to the original civilization. The concern is that the probes may undergo a mutation which permits and motivates them to either wipe out the homeworld or overcome any reasonable limit on their reproduction rate, in effect becoming a technological cancer that converts every last ounce of matter in the galaxy into SRPs. 
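For concreteness, the range quoted in section \ref{label_section_EstimatingtheSRPPopulation} follows directly from the stated parameter bounds. A minimal check, reading the ``at least'' bounds $f_r \ge 0.1$ and $G_r \ge 0.01$ as ranging up to $1$:

```python
def srp_population(N_s, f_r, n_r, G_r):
    """N_r = N_s * f_r * n_r * G_r, the SRPs reaching each star."""
    return N_s * f_r * n_r * G_r

# All lower bounds: N_s = 1e5, f_r = 0.1, n_r = 1, G_r = 0.01.
low = srp_population(1e5, 0.1, 1, 0.01)
# All upper bounds: N_s = 1e10, f_r = 1, n_r = 10, G_r = 1.
high = srp_population(1e10, 1.0, 10, 1.0)

print(f"{low:.0e} to {high:.0e} SRPs per star")  # 1e+02 to 1e+11
```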
Ironically, one of the best counter-arguments to the Sagan/Newman theory is described in Tipler's original paper to which they were responding. Tipler conceded the danger of mutation and unintended behavior or reproduction. He pointed out that unlike biology, which is subject only to the limitations imposed by unintentional and unconscious evolution, machines designed with foresight and intelligent engineering may incorporate fundamental restrictions on erroneous reproduction. Such restrictions could be so deeply ingrained as to render any mutated individual a still-birth (a simple checksum might suffice). We know that engineering can achieve remarkably high data integrity rates. Consider that each individual bit in a computer's dynamic memory (amongst billions) is not merely in stasis, but rather constantly leaks (it is a capacitor). The computer must recharge every bit thousands of times per second (while also fixing radiation damage), with virtually no room for undetectable or uncorrectable errors. The number of times this process occurs in all the world's computers in a single second is beyond comprehension, yet the worldwide undetected-uncorrected-RAM-bit-error rate is negligible (note that transient errors do occur, but the overall engineered process generally rectifies them). More complex processes, such as file duplication or transmission, exhibit equally impressive consistency. The sheer number of bits being duplicated and transmitted around the world in any given second is testament to the ability of purposeful engineering to achieve unfathomable data integrity rates. To be clear, the point is not that super-ETI would use anything remotely resembling 20th century technology in SRPs; the point is that engineering as a general method for designing and constructing complex devices can achieve extremely high data integrity rates. In addition to using our own technological experiences as a reference, we can consider a biological analogy. 
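Before turning to that analogy, the checksum-based ``still-birth'' restriction suggested above can be sketched concretely; the blueprint string and the single-byte corruption model are purely illustrative:

```python
import hashlib
import random

def replicate(blueprint, corrupt, rng):
    """Copy a blueprint, then refuse to launch any copy whose checksum
    fails -- rendering a mutated offspring a 'still-birth'."""
    expected = hashlib.sha256(blueprint).hexdigest()
    copy = bytearray(blueprint)
    if corrupt:  # simulate a mutation occurring during copying
        copy[rng.randrange(len(copy))] ^= 0xFF
    copy = bytes(copy)
    if hashlib.sha256(copy).hexdigest() != expected:
        return None  # the mutated copy is never launched
    return copy

rng = random.Random(0)
genome = b"EXPLORE; REPLICATE ONCE PER STAR; NEVER EXCEED QUOTA"
assert replicate(genome, corrupt=False, rng=rng) == genome   # faithful copy launches
assert replicate(genome, corrupt=True, rng=rng) is None      # mutant is stillborn
```

The point of the sketch is only that the verification step sits upstream of launch: a mutated blueprint never reproduces at all, rather than reproducing and being culled later.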
Sagan and Newman argue, in effect, that detecting and correcting cancerous events is not a safe bet at scales approximating galactic exploration. The number of replication events in a galactic exploration mission, $R$, closely approximates the number of stars in the galaxy, $N$, so \mbox{$R \simeq N \simeq 100 \text{ billion}$} (within an order of magnitude). The human body undergoes $\sim$10,000 trillion cell divisions over the course of its lifetime \citep{Turner:1978}, or \mbox{$\sim$100,000$\times R$}. Biology employs numerous tactics to stem cancerous events, including detection of deleterious cells, repair of damaged DNA, and ultimately apoptosis if necessary. Many humans go their entire lives without experiencing a single malignancy. Of the cancers that do occur, many are benign, suggesting that not every cancerous event must necessarily yield unmitigated reproduction. Lastly, and most crucially, biology is the product of an unconscious evolutionary process with no foresight or intelligent planning. At the very least, we should expect intentional engineering to achieve biology's level of success, and more likely, to vastly exceed it. In summary, $R$ is simply not all \textbf{\textit{that}} large a number compared to the biological equivalent, especially in light of biology's success rate against cancer, and all the more so once intentional engineering is taken into account. Even if we discard engineering, we should still expect a galaxy-scale SRP cancer risk $\sim$1/100,000th that of an individual human. Taking engineering into account could easily improve the situation by many orders of magnitude. \subsubsection{Biological Bias} \label{label_section_BiologicalBias} Another counter-argument to Sagan and Newman's concern is to question the presumption that computerized intelligence must inherently be susceptible to such deficiencies in the first place. 
We refer to this as \textit{biological bias}, the belief that computers cannot possess the intellectual power, mental generality, or cognitive richness of humans. As with many forms of bias, it often goes unnoticed by its own purveyors -- they may happily grant the concept of machine intelligence for the sake of argument without consciously realizing that their subsequent criticisms imply machines which nevertheless lack human-level consciousness and self-actualization. Fears of cancerous mutations that hyperbolically lead to the complete matter-assimilation of the cosmos (a scenario literally described by Sagan and Newman) are in accordance with biological bias and underlie one of the primary arguments against SRPs. While these are reasonable criticisms of contemporary human technology, there is no justification short of prejudice to extend them to super-ETI. If we are to truly consider the nature of an advanced SRP, then we must grant that probe a versatile and brilliant intellect comparable to -- more likely vastly surpassing -- that of a human; it should possess deep powers of introspection and investigation and should easily recognize when one of its offspring is severely and harmfully abnormal. We believe that much of the disagreement over this matter stems from a difference of philosophical stance. Consider three scenarios under which an SRP cancer could occur: \begin{enumerate} \item A malevolent ETI deliberately releases a cancerous SRP. \item An ETI slightly more advanced than us recklessly incorporates mechanized self-replication into a space-probe otherwise comparable to our current probes. \item A super-ETI creates a brilliant and intellectually embodied SRP for the purpose of benevolent galactic exploration. \end{enumerate} We can dismiss the first scenario, as it is not in the minds of either side of this particular debate. Of the remaining two scenarios, the Tipler camp envisions the third while the Sagan/Newman camp envisions the second. 
The disparity between those two scenarios underlies the discord over this issue. We recommend considering human cognition to be a lower bound on the intellectual capacity not only of super-ETI, but of their computerized machinery as well. As a litmus test for detecting potentially unconscious biological bias we propose the following: \begin{quote} \textit{If we cannot imagine the Sagan/Newman cancer scenario applying to a group of brilliant and wise humans exploring the galaxy, then we should dismiss it out of hand for SRPs as well.} \end{quote} Clearly, this test can be generalized to any argument against computerized intelligence. \subsubsection{A Fully Computerized Civilization} \label{label_section_AFullyComputerizedCivilization} Another point is the presumed distinction between the SRP exploration phase and the subsequent populace-colonization phase. One notable difference is the assumption that SRP exploration would proceed much more rapidly than colonization. Given recent theories that technological species may evolve into a computerized intelligence, this distinction evaporates \citep{Shostak:2010,Wiley:2011}. Exploration and colonization missions become one and the same; they even ride the same starships. Here is a possible scenario: a ship travels at maximum velocity regardless of whether its mission is exploration or colonization. Upon arrival, while nascent colonization begins to populate the new solar system, construction ($\sim$ self-replication) of the next generation of ships commences immediately alongside the nascent colony. The new ships are launched just as quickly as in the original SRP-exploration scenario. In this way, a computerized civilization might simultaneously explore and colonize the galaxy at an astonishing rate. If this scenario offers any insight whatsoever into plausible methods of galactic colonization, it has dire implications for the Fermi Paradox. 
Not only would SRPs fill the galaxy very rapidly (an event which we might miss if the probes undertake minimal planetary reorganization), but now full galactic colonization might proceed at a similar rate due to the computerization of a civilization's primary population. We will return to the notion of computerized intelligence when we consider recent models of galactic exploration by Bj$\o$rk, and Cotta and Morales. \subsection{Self-Replicating Probe Inefficacy} \label{label_section_SelfReplicatingProbeInefficacy} Chyba and Hand \citep{ChybaHand:2005} offer a completely different argument against SRPs: that the efficacy of SRPs (their expansion rate) would be intrinsically truncated by the occurrence of mutated predators. The postulated mutants would discard their original imperative of exploration to prey upon the remaining nonmutated probes. Such behavior would greatly impede the efficiency of the original mission and would undermine claims of rapid SRP exploration. In an extended version of their argument, a complex ecology (a food web) of SRPs might arise and abandon the original mission to deal with the newly evolved business of chasing and eating one another in some confined corner of the galaxy. Chyba and Hand's reasoning is fundamentally different from that of Sagan and Newman. The latter argue that no civilization would bother to create SRPs in the first place. Chyba and Hand argue that even if attempted, the presumed expansion rate would be greatly impeded by predation. Earthly examples inspire confidence in their theory. Predators can have a drastic impact on a prey's population. However, one unspoken and numbingly obvious reason that this is possible is that the predators can actually \textbf{\textit{catch}} the prey. Furthermore, the predation rate must approach the prey reproduction rate if the prey's population growth is to be noticeably impeded. 
Natural ecosystems display these patterns because the prey exhibit a few notable properties: \begin{enumerate} \item They don't spend their entire lives dispersing radially from a locus of universal origin. \item They don't always travel at maximum speed. \item They periodically rest, sleep, and eat. \end{enumerate} These properties permit predators to physically catch up with the prey. Yet they are unlikely to represent a galactic exploration scenario. First of all, we should assume that interactions are only possible in solar systems. That is to say, one probe cannot easily chase down another in the lonely void between stars, much less at a tenth the speed of light! The initial probes are dispatched in a radial pattern away from the homeworld at the maximum interstellar speed. In each solar system they self-replicate as quickly as possible and dispatch offspring into an approximate hemisphere oriented away from the homeworld. This period of regeneration offers predators a chance to descend upon them. However, it does not indicate an increase in the relative predator/prey speed since predators must, of course, undertake the same respite to self-replicate. An alternate scenario in which the predators don't self-replicate as often might permit them a briefer regeneration period, thus enabling them to overtake the prey...but only in one direction -- the distribution of the nonmutated probes renders this strategy incapable of culling their numbers in all directions at once. These behaviors make it exceedingly difficult for the predators to overtake and wipe out the prey. The dispersal pattern is not terribly unlike that of a bacterial culture growing in a petri-dish. In fact, to the extent that the galaxy is quasi-planar, the analogy is that much more apt. 
If we treat the initial mission as a disk expanding outward from the homeworld, and if we subsequently treat the point mutation of a predator as the origin of a new expansion disk (perhaps only expanding back into the original disk, but not outward), then it is clear that the predator disk is always smaller than the prey disk -- the predators can never hope to overtake and wipe out the nonmutated probes in an infinite space. However, petri-dishes and galaxies are not infinite and this definitely impacts the model. For example, if we assume that the predators are perfectly lethal \textbf{\textit{and}} that the prey are perfectly defenseless (and why \textbf{\textit{should}} we assume this anyway?!) then the predators will eventually wipe out the mission-mindful probes. Two questions then arise: can predators prevent the nonmutated probes from reaching the far boundary, and if not, can predators prevent any other region of the galaxy from being reached, thus leaving voids in potential contact and detection? The answer to the former question is no. Clearly, if predators arise east of the homeworld (relatively speaking), they can never catch the wavefront of probes expanding westward before those probes reach the west edge of the galaxy. However, some areas of the galaxy may never be visited, namely the region beyond the frontier of the predators. Thus, we can speak of a \textit{predator shadow}, a region of the galaxy which is not reached by that particular ETI civilization (see Fig. \ref{figPredatorShadows}). Furthermore, if there are a sufficient number of independent predatory mutation sites surrounding the homeworld, they may even bound the nonmutated probes' expansion. In this way, it is possible to enable Chyba and Hand's model to finitely confine a civilization's reach via SRPs (see Fig. \ref{figPredatorBounded}). This would seem to be a concession to Chyba and Hand's point, but it suffers two weaknesses. 
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig_predShadow_t99_m0_L-1_n2_d15_Nf___2x2} \caption{\textbf{Predator Shadows (Dark squares: healthy probes, Light circles: predators):} While predators cannot overtake distant prey until the boundary is reached (\eg left side of the board), they can nevertheless produce a shadow of unexplored territory (\eg right side of the board).} \label{figPredatorShadows} \end{figure} First, we must consider a scenario involving numerous independent ETI distributed throughout the galaxy and time. Even if each civilization is bounded by an outer shell of predators, the union (overlap) of their reach may still fill the galaxy. Furthermore, if there are not enough predatory mutations to fully confine each civilization, but merely enough to shadow them, then far fewer independent ETI homeworlds are necessary to fill the galaxy; a handful should suffice. The second weakness of this argument was previously stated in section \ref{label_section_SelfReplicatingProbeVoluntaryRefrainment}. We must take some realistic account of the likelihood of the mutations that Chyba and Hand are positing, especially at sufficient rates to fully bound expansion as opposed to merely shadow it. As described, it is possible to achieve data replication rates which render $R$, the number of replication events for galactic exploration, quite safe. Thus, while predatory mutations may not be fundamentally impossible, let us not overestimate their risk when considering the efficacy of SRPs. 
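The expanding-disc argument above can be checked numerically in a few lines. A predator disc that originates at time $t_0$ at distance $d \le v t_0$ from the homeworld, and expands at the same speed $v$ as the prey, never overtakes the outward prey front; the speed, mutation time, and mutation distance below are arbitrary illustrative values:

```python
v, t0, d = 0.1, 50.0, 4.0  # illustrative speed, mutation time, mutation radius
assert d <= v * t0         # the mutation arises inside the prey disc

def prey_front(t):
    return v * t             # radius of the original expansion disc

def predator_reach(t):
    return d + v * (t - t0)  # farthest outward point the mutants can touch

# The predators' outward reach trails the prey front at every later time.
assert all(predator_reach(t) < prey_front(t) for t in range(50, 2001))
```

The margin $v t_0 - d$ is fixed at the moment of mutation and never shrinks, which is why, in an infinite space, predators can at best shadow the outward wavefront, never overtake it.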
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig_predBounded_t9_m0_L-1_n2_d15_Nf___2x2} \caption{\textbf{Predator Confinement (Dark squares: healthy probes, Light circles: predators):} If a sufficient number of mutated predators emerge, they can finitely confine the civilization's expansion.} \label{figPredatorBounded} \end{figure} \subsection{Validity of Models that Exclude Self-Replicating Probes and Assume Slow Colonization Rates} \label{label_section_ValidityofModelsthatExcludeSelfReplicatingProbesandAssumeSlowColonizationRates} Two impressively detailed models of galactic exploration are described by Bj$\o$rk \citep{Bjoerk:2007}, and Cotta and Morales \citep{CottaMorales:2010}. Both models explicitly exclude SRPs and cite Sagan and Newman, and Chyba and Hand, as their primary reasons for doing so. The weaknesses of the Sagan/Newman and Chyba/Hand arguments therefore directly compromise Bj$\o$rk and Cotta/Morales' results; their models may deliberately incorporate handicaps based on unreasonable assumptions. A third model, by Forgan \citep{Forgan:2009}, is at best vague about the effects of SRPs in that it makes practically no mention of them. In fact, Forgan's current model explicitly excludes any form of interstellar migration. Furthermore, given the theory described in section \ref{label_section_AFullyComputerizedCivilization} that colonization by a computerized society might proceed just as rapidly as exploration, the fact that both Bj$\o$rk and Cotta/Morales also exclude colonization renders their models potentially less realistic on that point as well. Namely, realistic models of galactic exploration and colonization should consider not only the likely scenario of SRPs, but also the possibility that colonization itself may proceed at a phenomenal rate. We conclude that Bj$\o$rk and Cotta/Morales's models are unlikely to inform us about realistic circumstances of galactic exploration and colonization. 
While Forgan's model is also incomplete with respect to both SRPs and colonization, he specifically emphasizes the model's flexibility and potential for extension in the future. We look forward to it. In this section we summarized our thoughts on the Fermi Paradox with reference to the specific subtopic of SRPs. In the next section we switch gears to consider the Fermi Paradox from the point of view of percolation theory. \section{Percolation Theory} \label{label_section_PercolationTheory} Landis explores the Fermi Paradox by considering a model based on percolation theory \citep{Landis:1998}. Each new colony is created in one of two states: \textit{colonizing} or \textit{noncolonizing}, the reason being that while some cultures opt toward exploration and expansion, others develop nonexpansionist values and display minimal interstellar reach. In the model, only colonizing colonies propagate offspring to neighboring sites. Each new colony is assigned one of the two states with some predetermined probability, $P$. Landis uses \mbox{$P=1/3$}. The only other factor that impacts the behavior of a percolation system is the connectivity or \textit{degree} of the graph connecting the sites, $N$. Landis argues in favor of \mbox{$N \simeq 5$} for natural interstellar neighborhoods, although in the one example he shows, he uses \mbox{$N=6$} because the simulation resides in a cubical packing. Landis explains that given the two parameters, $P$ and $N$, there exists some critical value, $P_c$, such that for \mbox{$P < P_c$}, colonization inevitably peters out with an exterior shell comprised solely of noncolonizing colonies which entrap any colonizing colonies. Thus, civilizations are stochastically bounded to some finite region of the galaxy. If this property holds for virtually all civilizations, and if the number of independent civilizations is low enough that their union (overlap) does not fill the galaxy, then this theory speaks to the Fermi Paradox. 
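Landis' procedure is simple enough to reproduce in a few lines. The sketch below uses a 2-D square lattice ($N=4$) instead of Landis' cubical packing purely to keep the code short; the grid size and random seed are arbitrary:

```python
import random

def percolate(P, size=21, seed=0):
    """Landis-style site percolation from a single colonizing seed.

    Each newly founded colony is colonizing with probability P; only
    colonizing colonies found colonies at empty neighboring sites.
    Returns the number of sites ever colonized (in either state).
    """
    rng = random.Random(seed)
    mid = size // 2
    state = {(mid, mid): True}          # site -> colonizing?
    frontier = [(mid, mid)]
    while frontier:
        x, y = frontier.pop()
        for site in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= site[0] < size and 0 <= site[1] < size
                    and site not in state):
                colonizing = rng.random() < P
                state[site] = colonizing
                if colonizing:
                    frontier.append(site)
    return len(state)
```

With $P=0$ growth halts at five sites (the seed plus four noncolonizing neighbors), with $P=1$ the entire grid is colonized, and for intermediate $P$ below the lattice's critical value the cluster typically halts at a small finite size -- the behavior Landis exploits.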
Even for \mbox{$P > P_c$}, in which the percolation model permits civilizations to grow indefinitely, thus spanning the galaxy, there still occur inverted shells whose surfaces consist solely of noncolonizing colonies, such that their interior regions are never visited. Thus, even for large values of $P$ it is still possible, albeit less likely, for Earth to reside in an inaccessible region. Obviously, the larger $P$ is, the fewer independent ETI are required such that their union is likely to be complete, perhaps as few as two. Percolation theory offers a tantalizing explanation for the Fermi Paradox. One of its strongest assets is that it minimizes our reliance on speculative or whimsical musings about the sociological motives and behavior of alien species. Rather, it suggests that there may be fundamental graph-properties of expansion which describe galactic exploration in purely mathematical terms. It is still somewhat speculative, however, in that we must choose $P$, and to a lesser extent, $N$. A pure percolation model does indeed split the galaxy into distinct regions, some of which are never visited (see Fig. \ref{figPercolationOriginal}). However, the model is extremely simple and may not reflect realistic scenarios. Landis concedes this point but argues that while extensions to the basic model could certainly be envisaged, they would not be likely to alter the fundamental property that certain regions remain unvisited. This assumption is incorrect, however. There are perfectly reasonable extensions to Landis' model which completely alter its behavior. In sections \ref{label_section_PercolationTheorywithColonyDeath} and \ref{label_section_PercolationTheorywithColonyStateMutation} we introduce two such extensions and show how they affect the model. 
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig_perc_t333_m0_L-1_n3_d15_Nf___2x2} \caption{\textbf{Landis' Percolation Model (Dark squares: colonizing, Light circles: noncolonizing):} For sufficiently small values of $P$, colonization eventually halts with all colonizing colonies trapped within a shell of noncolonizing colonies. In the figure, \mbox{$N$=$6$} (the figure depicts one layer of a cubical simulation) and \mbox{$P$=$1/3$}. We observe that every colonizing colony becomes trapped and thus no further expansion is possible.} \label{figPercolationOriginal} \end{figure} \subsection{Percolation Theory with Colony Death} \label{label_section_PercolationTheorywithColonyDeath} From the very beginning, the Drake equation has included the parameter $L$, the lifetime of a technological civilization. The expectation that advanced civilizations may die is a widely accepted idea in the SETI community (popularly credited to our own nuclear standoff) and therefore is a perfectly reasonable addition to any model of galactic colonization. Introducing an analogous \textbf{\textit{colony}} lifetime, $K$, to Landis' model implies that colonies periodically die. Notice that $K$ differs from $L$ in that $L$ refers to entire civilizations, not individual colonies. When a colony dies on the frontier, it opens a valve in the otherwise impenetrable shell through which trapped colonizing colonies can leak out. Since the point of the percolation model is that certain locations are \textbf{\textit{never}} visited, and since this leakage occurs in the outward direction, this modified model is now fundamentally different -- it permits the expansion process to potentially reach every star in the galaxy (see Fig. \ref{figPercolationWithDeath}). It is not guaranteed however. Now that colonies can die, the entire civilization might \textit{evaporate} before it reaches every locus. 
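The effect of a finite colony lifetime can be illustrated with a small time-stepped simulation (a 2-D lattice with four neighbors for brevity, rather than the cubical packing; grid size, lifetime, and step counts are arbitrary): colonies age each timestep, die once they reach lifetime $K$, and vacated sites may be recolonized.

```python
import random

def colonize(P, K, steps, size=21, seed=0):
    """Time-stepped Landis-style model with colony lifetime K (None = immortal).

    Each step, every living colonizing colony founds colonies at empty
    neighboring sites; each new colony is colonizing with probability P.
    Returns the number of sites ever visited.
    """
    rng = random.Random(seed)
    mid = size // 2
    alive = {(mid, mid): (True, 0)}     # site -> (colonizing?, age)
    visited = {(mid, mid)}
    for _ in range(steps):
        births = {}
        for (x, y), (colonizing, age) in alive.items():
            if not colonizing:
                continue
            for site in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= site[0] < size and 0 <= site[1] < size
                        and site not in alive and site not in births):
                    births[site] = (rng.random() < P, 0)
        # Age the survivors; colonies die once they reach lifetime K,
        # vacating their site for possible recolonization.
        alive = {s: (c, age + 1) for s, (c, age) in alive.items()
                 if K is None or age + 1 < K}
        alive.update(births)
        visited.update(births)
    return len(visited)
```

With $P=1$ the ever-visited set still fills the grid even though colonies continually die behind the front; the regime of interest in the text is $P$ below the percolation threshold, where an occasional death on the entrapping shell lets trapped colonizing colonies leak outward.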
Much as $P_c$ defined a threshold which differentiated between bounded and indefinite expansion, we can now speak of a threshold $K_c$, such that for \mbox{$K < K_c$}, the civilization evaporates before filling an infinite space, and for \mbox{$K > K_c$}, the civilization may span the space (and even fill the inverted shells previously described). Given that $K_c$ varies with $P$ (larger values of $P$ permit lower values of $K$ without evaporation), we can state it more precisely as $K_{Pc}$, the threshold for a given $P$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig_perc_t333_m0_L20_n3_d15_Nf___2x2} \caption{\textbf{Percolation Model with Periodic Colony Death (Dark squares: colonizing, Light circles: noncolonizing, Xs: vacant but visited):} For the exact same value of $P$ which yielded a halting expansion in the original model (see Fig. \ref{figPercolationOriginal}), we can achieve saturation by introducing $K$, a colony's lifetime. Death on the entrapping shell permits trapped colonizing colonies to leak through and continue the colonization effort. (\mbox{$N$=$6$}, \mbox{$P$=$1/3$}, and \mbox{$K$=$20$}).} \label{figPercolationWithDeath} \end{figure} The notion of $K$ speaks to a broader philosophical issue when considering the nature of ETI, which is that perhaps our conceptualization of $L$ is not the correct property of interest in regard to the Drake equation's intended purpose. If the Drake equation is used to count the number of detectable \textit{species}, then so be it: $L$ is appropriate. But if the intended use is to count the number of culturally distinct societies or to estimate the odds of SETI detection, then perhaps we should count \textit{stellar societies}, not species (an idea previously proposed by both Brin and Walters et al. \citep{Brin:1982, Walters:1980}). 
If every colonized solar system is regarded as a separate ETI event that may make itself detectable via either radio or visitation, we can effectively replace $f_c$ (the fraction of intelligent species that communicate) with a new parameter $n_s$, the number of stellar societies per species, including the homeworld. We expand on this idea in section \ref{label_section_RefiningtheDrakeEquation} when we offer a revised version of the Drake equation. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig_perc_t333_m01_L-1_n3_d15_Nf___2x2} \caption{\textbf{Percolation Model with Periodic Colony Mutation (Dark squares: colonizing, Light circles: noncolonizing):} For the exact same value of $P$ which yielded a halting expansion in the original model (see Fig. \ref{figPercolationOriginal}), we can achieve saturation by introducing $M$, the probability that a colony will mutate its colonizing state in a given timestep. Mutation on the entrapping shell permits noncolonizing colonies to become colonizing and restart the colonization effort from their location. (\mbox{$N$=$6$}, \mbox{$P$=$1/3$}, and \mbox{$M$=$.01$}).} \label{figPercolationWithMutation} \end{figure} \subsection{Percolation Theory with Colony State Mutation} \label{label_section_PercolationTheorywithColonyStateMutation} In addition to introducing the concept of colony death, we have added $M$, a mutation rate in the colonizing state of a colony. $M$ permits any colony to switch its state at any time with some low likelihood. The justification for such a parameter seems quite reasonable: it is absurd to assume that colonies will remain culturally fixated on either of the two motives (colonizing or noncolonizing) forever. Societies evolve, beliefs and values vacillate, cultures meander. 
$M$ permits currently noncolonizing colonies on the frontier to periodically mutate and consequently promote a new wave of exploration outward from their location, once again undermining the conclusion of the original percolation model (see Fig. \ref{figPercolationWithMutation}). Furthermore, note that by combining both of the new parameters, $K$ and $M$, our simulations confirm that we can achieve saturation more quickly than with either parameter alone for a given $P$. The point isn't merely that $K$ and $M$ permit civilizations to reach a little further or last a little longer. The point is that the fundamental property of the original model is overturned. Given enough time, \mbox{$K > K_{Pc}$} or any nonzero value of $M$ permits every locus in the simulation to be reached -- 100\% saturation is guaranteed. The implication of our modified model for the Fermi Paradox is obvious: it greatly eases the task of exploring the galaxy without being bounded by a shell of noncolonization. Ultimately, simplistic models such as Landis' and our variant cannot hope to perfectly represent a topic as complex as galactic colonization. We do not use such models to draw final conclusions about the uncompromising truth of reality. Rather, we ask which models are most likely to indicate natural circumstances with the greatest verisimilitude. We believe that where the phenomenon under investigation is the supposition that shells of noncolonization form permanently impenetrable barriers, our modified model more closely resembles reality than Landis' original model. In this section we summarized our thoughts on Landis' adaptation of percolation theory to galactic colonization and its implications for the Fermi Paradox. In the next section we introduce ITB theory and consider its implications for theories of interstellar societal collapse. 
\section{Interstellar Societal Collapse} \label{label_section_InterstellarSocietalCollapse} Some researchers have considered the Fermi Paradox from the perspective of interstellar societal collapse. We introduce a new theory of \textit{interstellar transportation bandwidth} (ITB) which may impact such theories. We begin with a survey of two such papers and present some reasonable challenges in the absence of the new theory. We then introduce the ITB theory and consider its additional implications for theories of interstellar societal collapse. Later, we show how ITB theory can be used to refine the Drake equation so as to produce more precise estimates of the intragalactic ETI population, and finally show how to calculate the ITB. \subsection{Light Cage} \label{label_section_LightCage} McInnes shows that an expanding sphere of colonization cannot expand fast enough to avoid complete societal collapse due to exponential population growth and increasing population density \citep{McInnes:2002}. Furthermore, after such a crash, the society may be so impoverished of resources that it is forever blocked from making a second attempt. If this theory is universally true of all civilizations, then it offers significant insight into the Fermi Paradox. McInnes extends the intuitive notion of a constant expansion rate with a steadily increasing population density to a linear expansion rate which maintains constant population density. However, the expansion velocity must have some maximum speed limit. To generate an upper-bound for this theory, McInnes considers a maximum expansion velocity of 1.0c. Obviously, in a real scenario, the expansion rate would be considerably slower, but that only exacerbates McInnes' fundamental point which is that once the maximum expansion velocity is reached, population density once again begins to rise. Eventually, total societal collapse is inevitable. The only way to avoid catastrophe is to shift from exponential growth to logistic growth. 
Given a specified growth rate and an initial sphere of some specified size, one can calculate the size of the expanded sphere when the increasing expansion rate reaches the 1.0c limit. McInnes calls this sphere the \textit{light cage}. Assuming 1\% annual growth and a starting sphere the size of Earth, he determines that the light cage is at a 300ly (light-years) radius and that the time to reach the light cage is 8000 years. Out of curiosity, we ran the numbers for a more realistic maximum expansion velocity. If we assume a maximum flight speed of 0.1c, a common estimate for fusion-powered starships such as Daedalus \citep{Daedalus:1978, Matloff:2005}, which yields voyages to nearby stars on the order of decades, and if we assume a regeneration period of a few decades -- and therefore comparable to the duration of the voyages themselves -- then we estimate a more realistic maximum expansion velocity to be \mbox{$\sim$.05c}. When applied to McInnes' equations, this velocity reveals a far more realistic cage of a mere 15ly, and an associated saturation time of 7000 years. Interestingly, while a considerably slower expansion rate drastically reduces the cage, it only slightly decreases the time until the cage is hit. This follows naturally from exponential growth. To make matters worse, .05c may be entirely too fast if the stasis period between voyages is longer than the few decades of the previous estimate. If we briefly consider a maximum expansion velocity of .01c, we derive a cage of only 3ly! Considering that there are only 52 stars within 15ly of Earth (and \textbf{\textit{none}} within 3ly!), and that only some fairly small subset will support habitation, this does not provide us with very many systems to colonize before we implode in a population density catastrophe. More crucially, 15ly probably lies within the single-voyage horizon, which implies that early waves of colonization may fill the cage all at once as opposed to diffusing radially. 
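These figures are consistent with a simple scaling reading of the model: with constant population density and exponential growth at annual rate $g$, volume tracks population, so the radius grows as $e^{gt/3}$ and the expansion speed $(g/3)r$ reaches the ceiling $v_{max}$ at \mbox{$r_{cage}=3v_{max}/g$}. The following sketch (our reconstruction, not McInnes' own derivation; the stellar density is the \mbox{$\sim$.004 stars/ly$^3$} local value cited later in this paper) reproduces the 300ly, 15ly, and 3ly cages:

```python
import math

def cage_radius_ly(v_max_c, growth_per_yr=0.01):
    """Light-cage radius in light-years. Constant density plus exponential
    population growth at rate g gives r(t) ~ exp(g*t/3), so the expansion
    speed (g/3)*r reaches v_max (in units of c) at r = 3*v_max/g."""
    return 3.0 * v_max_c / growth_per_yr

def stars_within(radius_ly, density_per_ly3=0.004):
    """Expected number of stars in a sphere at the local stellar density."""
    return density_per_ly3 * (4.0 / 3.0) * math.pi * radius_ly ** 3

# The three cages discussed in the text, for v_max = 1.0c, 0.1c-class
# travel (~.05c effective), and .01c:
cages = [cage_radius_ly(v) for v in (1.0, 0.05, 0.01)]  # 300, 15, 3 ly
```

At the local density, the 15ly cage holds an expected \mbox{$\sim$57} stars, in line with the 52 actual stars cited above.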
In other words, according to light cage theory, interstellar travel hardly provides any population relief at all, right from the beginning. McInnes is careful to admit that his model is continuous and symmetrical. Admittedly, galactic colonization is loosely like an expanding sphere, but more precisely it is like traversal of a rooted tree (a graph), where the root is the homeworld, vertices are solar systems, and labeled edges connect stars whose distances are below a maximum traversal threshold with the labels indicating interstellar distances (and associated travel times). In addition to the inherently discrete nature of this graph, the natural distribution of stars will impose notable asymmetries on the edge length and per-node degree (branching factor). McInnes admits to the simplicity of his model, but points out that such details should not affect the model's outcome. We agree, but in practicality, a tree of no more than 52 nodes (probably far fewer), all of which have direct edges to the homeworld, no longer resembles a continuous sphere in even the weakest sense. A Monte Carlo simulation might help illuminate any interesting properties of the discrete model. Assuming that the continuous model is apropos to the discussion, then we envisage two ways by which the model may not predict actual events. The first is admitted by McInnes, that populations may succeed in converting to logistic growth and therefore avoid the prescribed collapse. Given the calculation of a 15ly cage and the observation that it would saturate in a single emigration wave due to direct homeworld access, any hope of survival would depend on adopting logistic growth prior to interstellar colonization; any later would be too late to avoid disaster. However, Earth appears to be following just such a course, leveling off its population growth in the 21st century, long before undertaking interstellar travel \citep{MisraBaum:2009}. 
Not only does this bode well for humanity, it suggests that other ETI civilizations can do the same...but such a conclusion weakens McInnes' theory that the Fermi Paradox is resolved due to a finite expansion before societal collapse. If ETI are once again enabled to continue steady outward expansion, then the original questions underlying the paradox resurface. The second challenge we would raise in response to McInnes' model will be explained in section \ref{label_section_InterstellarTransportationBandwidth} when we introduce ITB theory. Briefly, we suspect that interstellar distances impose an insurmountable paucity of interaction events between solar systems, such that contagions of societal collapse fail to adequately infect neighbors. Our final thoughts on McInnes' model reflect on his post-analysis. He proposes the example of a civilization which permits exponential growth at its frontier but enforces logistic growth wherever resource limits are reached, \ie deeper within the sphere. He shows how such a civilization could achieve phenomenal rates of unbounded expansion, filling the galaxy in a few million years. While McInnes concedes this scenario, he nevertheless questions its validity by citing Landis' percolation model, namely to doubt that the civilization would expand indefinitely. Considering that in section \ref{label_section_PercolationTheory} we demonstrate some weaknesses of the percolation model, we likewise conclude that the proposed scenario stands relatively unchallenged. Should it turn out to be reasonable, then the Fermi Paradox remains, well, paradoxical. \subsection{Unsustainable Expansion} \label{label_section_UnsustainableExpansion} Haqq-Misra and Baum present a theory which closely resembles McInnes' light cage, namely that practically all civilizations which are ravenous in their expansion behavior might collapse, leaving only slowly expanding civilizations \citep{MisraBaum:2009}. 
To prove that societies capable of maintaining a slow expansion rate are possible, Haqq-Misra and Baum cite the Kung San of the Kalahari Desert. However, we feel that this example proves the opposing point: the Kung San did not colonize and inherit the Earth; Western civilization did, precisely because of its rapid expansion. Haqq-Misra and Baum propose that such civilizations may ubiquitously burn out, leaving only the slower civilizations. While this may be a reasonable supposition, it is not strongly supported by our single datapoint, humanity. After all, in Earth's case, the rapidly expanding civilizations won, in effect. We do not live in a Kung San world precisely because they did not colonize it, and likewise, Western civilization did not burn out before colonizing a substantial portion of the planet. Not only do we observe that rapidly expanding civilizations successfully dominated the planet (ethical considerations aside), but furthermore it looks as if the global human population will in fact level off in the 21st century (a point Haqq-Misra and Baum themselves cite), calling into question the secondary presumption that perhaps such civilizations must inevitably implode after reaching a planet's carrying capacity. Thus, humanity's own history looks like a strong point of evidence \textbf{\textit{against}} sustainability theory. If anything, this discussion suggests a natural selection favoring expanding civilizations. Much as Western civilization overtook Earth at the Kung San's expense, perhaps we should expect rapidly expanding galactic civilizations to be the victors in the struggle for the galaxy. This may very well be an imperative for survival, again using the Kung San and other indigenous tribes as an unfortunate example. Perhaps only one civilization can dominate the galaxy while the rest must, if not go extinct, nevertheless lose considerable self-autonomy and cultural influence. 
Ecologically, only one species can generally occupy a given niche at any place and time while, anthropologically, human history exhibits similar patterns in the struggles between colocated societies. No doubt, as a progressive and intellectual species (one would hope), we can work actively against such forces, but even the need for such efforts is a blatant concession that the \textbf{\textit{natural}} progression of events is counter to our urbane values of equality and diversity. This view is directly opposed to the suggestion that the most dominant and assertively expansionist societies should be the very ones we expect to fail. Nevertheless, Haqq-Misra and Baum propose a fair question: whether rapid expansion can be sustained long enough to colonize the galaxy. Perhaps the Earthly analogy is inappropriate in that rapid expansion is feasible on planetary scales but not on interstellar scales. However, we are unsure \textbf{\textit{why}} such a supposition should be true. Haqq-Misra and Baum support this claim by citing Landis and McInnes, but in sections \ref{label_section_PercolationTheory} and \ref{label_section_LightCage} we demonstrated possible weaknesses of both explanations. Our strongest counter-argument to sustainability theory is described in section \ref{label_section_InterstellarTransportationBandwidth}. Namely, our proposal of a possible ITB calls into question the ability of overpopulation, resource exhaustion, and other forms of societal strife to infect sufficiently remote societies. If our theory is valid, then the only way sustainability theory can work is if societal collapse occurs prior to interstellar colonization...and furthermore, only if it occurs virtually ubiquitously across independent ETI homeworlds. If sustainability theory only applies to some ETI before they embark upon interstellar colonization, and if ITB is itself a valid theory, then sustainability is intrinsically insufficient to resolve the Fermi Paradox. 
The remaining civilizations should have colonized the galaxy in no uncertain terms. \subsection{Interstellar Transportation Bandwidth} \label{label_section_InterstellarTransportationBandwidth} We believe the greatest challenge to theories of interstellar societal collapse is that it might be impossible for interstellar societies to contaminate one another with their respective problems, namely population and resource pressures, religious, socioeconomic, or political disputes, or other social strife which can cause complete societal disintegration. There are three ways in which societies can interact, potentially imposing problems on one another: \begin{enumerate} \item Transportation of physical objects (colonists or otherwise) to a neighbor \item Transportation of physical objects from a neighbor \item Transmission or communication of immaterial data or ideas to a neighbor. \end{enumerate} In the first case, societal pressure is caused by the arrival of new colonists, and to a lesser degree, tangible goods which require either storage space or support. In the second case, societal pressure may be a form of strip-mining which leaves the colony impoverished. In the third case, suffering might be transmitted through some remarkably destructive meme. However, even if societally vanquishing memes are possible, such failure does not represent a sustainability pressure like population density or resource scarcity. As such, it is not relevant to this discussion. Interstellar resource extraction makes very little sense. An important parameter in any space-travel equation is mass. The mass of the various resources people require vastly exceeds the mass of the people themselves. 
Consequently, it is always more efficient to move the people to the resources than vice versa, and by extension, even if interstellar hegemony and associated forced strip-mining is possible, it will not involve transport back to the homeworld, but rather, an invasion to use the resources where they reside, \ie we are back to the first scenario. The arrival of unwelcome colonists is admittedly difficult for stellar societies to contend with because presumably magnanimous cultures cannot simply turn interstellar immigrants away; they will surely die after such a long trip if support is not provided very quickly. In analyzing this pressure, we consider the \textit{interstellar transportation bandwidth (ITB): the number of people capable of moving from one solar system to another per unit time}. We propose that the ITB is sufficiently low to shield each solar system from the pressures of its neighbors. If this theory is valid, individual worlds might still collapse, but each colonized solar system effectively represents a unique experiment in the struggle against population growth and resource exhaustion. For example, consider that the disaster of Easter Island's collapse did not infect any other society, including those of common origin, other Polynesian islands \citep{Diamond:2005}. Easter Island was simply too isolated to transmit its failure elsewhere. In fact, the inhabitants stranded themselves by deforesting to such a degree that they could no longer build ocean-going canoes. By another analogy, extremely virulent diseases like Ebola do not necessarily wipe out enormous populations because the sick die too quickly to infect large numbers of people. Consequently, in addition to our proposal of a bounded ITB in times of prosperity, perhaps during periods of degradation societies entirely lose the ability to sustain vast interstellar emigration (their ITB drops to zero). Perhaps they can no longer build any canoes, as it were. 
Another approach is to try to visualize how McInnes' propagating population pressure would actually occur. The suggestion that an ever-rising population near the center of the interstellar civilization imparts an ever-increasing pressure on the frontier societies requires the central societies to \textit{exponentially} increase their starship production rate. Not only is such a scenario seemingly illogical, it also violates the notion of an ITB. More plausible scenarios would admittedly leave troubled worlds crushed by their own catastrophes, but would likewise shield other societies from accepting their burden. Given that Haqq-Misra and Baum cite McInnes as one possible cause of their sustainability pressures, this point is germane to their argument as well. One interesting consequence of the ITB is that for each new solar system reached, the odds of long-term survival by the species (survival by \textit{any} society within the collective) actually go \textbf{\textit{up}}, in direct opposition to McInnes' prediction. This has a certain intuitive logic to it: each independent society increases the likelihood that someone somewhere survives. A proverb involving eggs and baskets comes to mind. It seems unlikely that both of these theories can be correct if one predicts an upward trend in survivability and the other predicts the opposite. \subsubsection{Refining the Drake Equation} \label{label_section_RefiningtheDrakeEquation} ITB theory, and the related theory that societies undergoing a collapse may explicitly lose the ability to construct starships (the \textit{canoe theory} if we may be so bold), speaks more generally to the validity of $f_c$ and $L$ in the Drake equation \citep{Brin:1982, Walters:1980} (respectively, the fraction of intelligent-life-bearing planets that produce communicating civilizations, and the mean civilization lifetime). 
As noted in section \ref{label_section_PercolationTheorywithColonyDeath}, a similar idea has been proposed by both Brin and Walters et al. \citep{Brin:1982, Walters:1980}. Perhaps each stellar society should be regarded as a separate entity, thus replacing $f_c$ with a new parameter, $n_s$, indicating the number of communicating stellar societies per species. We can refine the meaning of $L$ as $L_\Omega$, the mean lifetime of the collection of societies originating from a single homeworld (from technological emergence on the homeworld to the death of the last associated society). We may now consider a new related term, $L_s$, the mean lifetime of a single stellar society. These new parameters, $n_s$ and $L_s$, replace $f_c$ and $L$ to produce an improved version of the Drake equation: \mbox{$N=R^*\cdot f_p\cdot n_e\cdot f_l\cdot f_i\cdot n_s\cdot L_s$}. Consider how $f_c$ and $n_s$ differ. Given a galaxy with ten planets surviving the $f_i$ filter (the fraction of life-bearing planets which produce an intelligent species), if only one of those species produces detectable technology, then \mbox{$f_c=0.1$}, but if that civilization colonizes two solar systems, the corresponding \mbox{$n_s=0.2$}. Furthermore, $n_s$ can exceed 1, \eg if that one civilization colonizes twenty solar systems, then \mbox{$n_s=2$}. Not only does $f_c$ fail to account for slight increases in colonization, as in the first example, but it is totally incapable of representing the second situation. Interestingly, while $n_s$ is likely to exceed $f_c$, $L_s$ is likely to be less than $L$ (each society must last no longer than the collective civilization). Thus, it is difficult to determine whether this new equation should produce a higher or lower value than the original (we're guessing higher)...but it should be more precise. 
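The worked comparison above can be captured in a few lines (helper names are ours; the counts mirror the ten-species example from the text):

```python
def n_s_from(societies_per_species):
    """n_s: mean number of communicating stellar societies per
    intelligent species passing the f_i filter."""
    return sum(societies_per_species) / len(societies_per_species)

def drake_revised(R_star, f_p, n_e, f_l, f_i, n_s, L_s):
    """Revised Drake equation: N = R* . f_p . n_e . f_l . f_i . n_s . L_s."""
    return R_star * f_p * n_e * f_l * f_i * n_s * L_s

# Ten species pass the f_i filter; one of them colonizes two systems...
modest = n_s_from([2] + [0] * 9)      # n_s = 0.2
# ...or twenty systems, a case that f_c (a fraction, capped at 1)
# is incapable of expressing.
expansive = n_s_from([20] + [0] * 9)  # n_s = 2.0
```

The second case is exactly the situation where $f_c$ breaks down: a fraction can register that colonization happened, but not how far it spread.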
The number of stars within a single species' collective, $\xi$, scales cubically with radius out to the galactic thickness (\mbox{$\sim$1000ly}), and quadratically thereafter, and assuming a constant radial expansion rate, $v_e$, scales accordingly with time (incidentally, for \mbox{$v_e=.05\text{c}$} and a \mbox{$\sim$1000ly} galactic thickness, expansion undergoes this transition \mbox{$\sim$10,000yr} after initiation). The number of habitable solar systems, $\xi_h$, is simply some fraction, $f_h$, of $\xi$ (where $f_h$ is some contortion of $f_p$ and $n_e$ from the Drake equation, so as to apply to stars rather than planets). We can state $\xi$ (and therefore $\xi_h$) at a given time after technological emergence by applying the stellar density, $\rho$, to this sphere or cylinder (\mbox{$\rho=.004 \text{ stars}/\text{ly}^3$} near the sun, higher elsewhere \citep{Gregersen:2009}). Taking into account $L_s$ and $r_g$ ($r_g$ = the mean regeneration period before dispatching new voyages), and assuming normal distributions, we can now estimate the probability that a stellar society dies childless, $P_\omega$, as the ratio of the \textit{normal difference distribution} (NDD) of $L_s$ and $r_g$ that is positive (\eg if \mbox{$r_g=L_s$} then the NDD is centered at 0 and the positive ratio = 0.5, but if \mbox{$r_g \gg L_s$} or \mbox{$r_g \ll L_s$}, then the NDD resides almost entirely to the right or left of 0, and thus the positive ratio \mbox{$\simeq$1} or \mbox{$\simeq$0}. In the former case, the homeworld never spawns any offspring; in the latter, all societies produce offspring). Furthermore, we can calculate the probability of total collapse at any given time as the binomial probability that all existing colonies die before dispatching new voyages: \mbox{$P_\Omega(t)=$}\mbox{${P_\omega}^{\xi_h(t)}$}. 
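Under these normality assumptions, $P_\omega$ and $P_\Omega$ reduce to standard-normal arithmetic. A minimal sketch (function names are ours; all parameter values illustrative):

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_childless(mu_rg, sd_rg, mu_Ls, sd_Ls):
    """P_omega: the positive mass of the normal difference distribution
    of r_g - L_s, i.e. the chance a society's lifetime L_s ends before
    its regeneration period r_g elapses (it dies before dispatching
    any new voyage)."""
    mu = mu_rg - mu_Ls
    sd = math.sqrt(sd_rg ** 2 + sd_Ls ** 2)
    return phi(mu / sd)

def p_total_collapse(p_omega, n_societies):
    """P_Omega(t) = P_omega ** xi_h(t): every existing society dies
    childless, extinguishing the collective."""
    return p_omega ** n_societies
```

As a sanity check, equal means give \mbox{$P_\omega=0.5$}, while \mbox{$r_g \gg L_s$} and \mbox{$r_g \ll L_s$} drive $P_\omega$ toward 1 and 0 respectively, matching the limits stated above.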
It should be possible to extend this line of reasoning to deduce the expected time until a total collapse occurs, that is, solve for $t_\Omega$ such that \mbox{$\int_{t}^{t_\Omega} P_\Omega(t')\ \text{d}t'=0.5$}. This would, in effect, yield $L_\Omega$. This is a complex formulation however since $P_\Omega$ varies with so many other parameters including $t$ and $r_g$; it therefore lies beyond the scope of this paper. Notably, the further a civilization expands, the greater its chances of indefinite survival. Perhaps we can speak of a threshold $L_{\Omega c}$ (or an analogous threshold $L_{sc}$), such that for \mbox{$L_\Omega < L_{\Omega c}$} (\mbox{$L_s < L_{sc}$}), civilizations tend to evaporate and for \mbox{$L_\Omega > L_{\Omega c}$} (\mbox{$L_s > L_{sc}$}), civilizations tend to expand forever, consequently filling the galaxy. In fact, the proposed model looks hauntingly similar to the modified percolation model presented in section \ref{label_section_PercolationTheorywithColonyDeath} with \mbox{$L_s \sim K$} and \mbox{$L_{sc} \sim K_{Pc}$}, such that $L_\Omega$ closely resembles the concept of evaporation presented in the percolation model. \subsubsection{Calculating the Interstellar Transportation Bandwidth} \label{label_section_CalculatingtheInterstellarTransportationBandwidth} In this section, we show one approach to calculating the ITB. We loosely demonstrate our method with an example, but owing to the scarcity of research on some of the relevant parameters, it is difficult to be specific, or in some cases to even find reliable references. Given a specified means of interstellar travel we can consider an associated crew size (colonist population) per ship, \eg a Daedalus starship \citep{Daedalus:1978, Matloff:2005}, modified to accommodate a manned mission. Since Daedalus was never intended for manned missions, no crew estimates are available. 
However, other designs offering comparable payloads (\mbox{$\sim$450} tons), such as Orion, support \mbox{$\sim$200} colonists (Orion was primarily intended for interplanetary travel, which is why we aren't using it as our example). The ability to dispatch ships is bounded by many factors such as construction materials (inconsequential \citep{Freitas:1983a}) and fuel, \eg 20K tons of deuterium and 30K tons of helium-3 for Daedalus \citep{Daedalus:1978}. Besides the required quantity of materials, another relevant factor is time. Daedalus is primarily time-bounded by the need to obtain scarce $^3$He. The original design was for a twenty-year Jovian-atmosphere harvesting mission, although the schedule was amenable to scaling via concurrency (at an obvious corresponding scaling in cost). In fact, another important consideration which we do not expand on here is economic limitations \citep{Dyson:1968,Daedalus:1978}. With an estimate of the ship's crew size and the various production limitations, we can calculate the rate at which ships (and colonists) may emigrate, $C_\lambda$. Our example yields one ship and 200 colonists every twenty years. We could also estimate the total number of missions (colonists) the home solar system can physically support, $C_\Delta$, but this value is so large that we disregard it (\eg Jupiter has $10^{16}$ tons of $^3$He, although perhaps heavy metals crucial to the ship itself would dominate such an analysis). Dividing $C_\lambda$ by the number of neighboring stellar societies, $N_s$ (\eg \mbox{$N_s=4$}), yields the rate at which colonists can emigrate to a single colony, \mbox{$C_{s\lambda}=C_\lambda / N_s$} \mbox{$=(200/20\text{yr})/4$} \mbox{$=50/20\text{yr}$} (fifty colonists per system every twenty years). In this fashion, we have defined an upper bound on the \textit{colonist flux} that may be delivered to neighboring stellar societies and by which one society can potentially inflict its problems upon others. 
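The arithmetic of the worked example reduces to a single division chain (a trivial helper, names ours):

```python
def colonist_flux_per_system(crew, regen_years, n_neighbors):
    """C_s_lambda = C_lambda / N_s, expressed in colonists per year
    delivered to each neighboring stellar society."""
    return crew / regen_years / n_neighbors

# 200 colonists per ship, one ship every 20 years, 4 neighboring systems:
flux = colonist_flux_per_system(200, 20, 4)  # 2.5 colonists/yr = 50 per 20 yr
```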
We are now squarely in the realm of the intended analytical purpose of the ITB: whether it is sufficiently high for societal collapse in one solar system to detrimentally affect a neighbor. To properly answer this question we would have to hear the input of sociologists and anthropologists. Note that the ITB does not rely heavily on the flight speed. Such a concept is akin to the notion of \textit{latency}, not bandwidth, and as such is less relevant to this discussion. We have intentionally left this discussion quite vague. Consider the effect of the type of starship upon the ITB. Crucial parameters, such as the solar system's material and fuel supply, the crew size, and the rate of ship and fuel production, exhibited by various interstellar travel proposals should yield a wide range of ITB estimates. For example, Super-Orion could transport several thousand colonists and was intended for interstellar missions \citep{Dyson:1968}, although its production timeline and economic limitations might amortize its long-term ITB relative to smaller ships. It also seems that the final result of our example is almost unbelievably low. It may be reasonable to permit super-ETI faster or concurrent ship-production and fuel-harvesting rates. Of course, such benefits would come at increased cost (which must factor into any proper ITB calculation, despite its absence from our cursory example). Furthermore, extremely advanced civilizations may span a wider interstellar collective, thus thinning the delivery to any individual neighbor. Alternatively, if travel is concentrated unequally upon one colony (thus increasing the ITB), then we can increase the possibility of infectious societal collapse, but only by decreasing the ITB to the other colonies, thus undermining a theory of \textit{ubiquitous or total} collapse. 
We will dispense with any further speculation on this matter, but the example shown here suggests that the ITB may be a very real bound on the transmission of interstellar societal pressure. As an additional point of complication, we could consider suspended animation or the transition to an entirely computerized species \citep{Wiley:2011, Shostak:2010}, as discussed in section \ref{label_section_AFullyComputerizedCivilization}. Such methods could certainly increase the crew size of a starship. Whether such increases would be sufficient for infectious societal pressures is uncertain. In addition, with respect to a computerized species, the population of computerized individuals supportable by a given solar system would likely be greater as well, and therefore the proportional impact of strife-ridden immigrants would not necessarily be any worse. Most importantly, it is unclear how the concept of starship population in human terms translates to that of an alien species. This may be one of the most difficult aspects of ITB theory to resolve, aside from the general question of how societal pressure translates to an alien species' psychology and sociology. The theory of an ITB exacerbates the Fermi Paradox by protecting each stellar society from the suffering of the others, thus mitigating theories of total civilization collapse based on a single point of origin from where harmful effects spread outward like an infection. Total collapse is still a viable theory, but the likelihood of it occurring now becomes a binomial probability of independent collapse events. Thus, since some societies will likely survive such independent events, with a steadily increasing expansion footprint a homeworld's reach may easily fill the galaxy. In this section we introduced ITB theory and considered its implications for the Fermi Paradox based on theories of societal collapse.
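The binomial picture can be made concrete. If each of $n$ independently seeded societies collapses with probability $p$ (a purely illustrative parameter, assumed here), the chance that all of them collapse shrinks geometrically with $n$:

```python
from math import comb

def p_total_collapse(p: float, n: int) -> float:
    """Probability that all n independent societies collapse: p**n."""
    return p ** n

def p_at_least_k_collapse(p: float, n: int, k: int) -> float:
    """Binomial tail: probability that at least k of n societies collapse."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Even a pessimistic per-society collapse probability of 0.9 leaves *total*
# collapse unlikely once a few dozen independent societies exist.
print(p_total_collapse(0.9, 10))   # ~0.35
print(p_total_collapse(0.9, 50))   # ~0.005
```

The independence assumption is exactly what the ITB is meant to justify: a low colonist flux decouples each society's fate from its neighbors'.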
The rest of the paper considers one additional paper on the Fermi Paradox, then offers our thoughts on the strongest theory yet presented in the literature which may resolve the paradox without excluding the possibility of intragalactic ETI. Finally, we offer our thoughts on how to better design future SETI programs. \section{Bonus Stimulation} \label{label_section_BonusStimulation} Bezsudnov and Snarskii \citep{BezsudnovSnarskii:2010} present a model in which civilizations expand outward for a time, then succumb to parameter $L$ of the Drake equation and shrink back to their homeworld to eventually wink out. However, if two civilizations collide along their expanding frontiers, they receive a \textit{bonus stimulation} (BS) to their lifetime. In this manner, civilizations which meet other civilizations last longer, presumably due to the influx of new scientifically, technologically, and culturally invigorating ideas. One possible weakness of this model is the manner in which civilizations die, \ie the youngest colonies die first, and so on back to the origin. This model seems almost perfectly opposed to resource-limited models such as McInnes' which suggest that societal collapse propagates outward from the homeworld. If we alter the BS model to die from the inside out, then the frontier colonies get a much longer time to meet other civilizations and receive the model's longevity reward. Furthermore, it is unclear how the reward should affect societies within the civilization which have already died. For both reasons, the dynamics of such a model are completely different from the original model. More seriously, this model seems to disregard both the speed and the splintered nature of communication within an interstellar collective. 
Consider the following peculiar behavior: if two civilizations, A and B, meet along their adjoining frontiers immediately prior to the moment when at least one of them, say A, would have begun to shrink, then both civilizations gain an immediate boost to their longevity. Thus, the impending shrink is averted and both civilizations continue to expand, including on the opposite side of each civilization, maximally distant from the point of contact. Conversely, if the meeting had not occurred, then civilization A would have begun to shrink back to the homeworld. Thus, the model posits an instantaneous (or at least temporally unresolvable) transmission of the BS. Not only are we unconvinced by the near-instantaneous transmission of this salvaging effect across vast reaches of the galaxy, but more critically, we question whether any sort of cultural message of reinvigoration could maintain coherence given potentially thin channels of communication and visitation as per the ITB, especially considering the tremendously divergent societies that would have to pass the baton. We do not mean to over-interpret Bezsudnov and Snarskii's model. Surely they did not intend for it to be inspected at this level of precision. Nevertheless, we propose that each stellar society be regarded as culturally unique. While such an assumption may not seem reasonable in the initial decades after colonization, it is more reasonable for offspring societies separated from their parents by many millennia, with only intermittent and ITB-bounded physical visitation, and EM communication involving multi-decadal to multi-centennial delays. This idea has a significant effect on the overall conceptualization of galactic exploration and colonization; perhaps interstellar colonization is, in effect, a speciation event.
\section{ETI May Still Exist in our Galaxy} \label{label_section_ETIMayStillExistinourGalaxy} While arguments against SRPs are unconvincing, and while the other theories discussed may impose unrealistic assumptions, intragalactic ETI may yet exist. The most convincing theory to date which permits intragalactic ETI is by Freitas who explains that exploratory probes could very well have reached our solar system and that we have overestimated the ease of detecting them \citep{Freitas:1983a}. This theory still imposes restrictions on the nature of ETI, in that while it might accommodate a few unnoticed SRPs, it is unclear how it would fare against the potential for millions of SRPs (as described in section \ref{label_section_EstimatingtheSRPPopulation}). Likewise, this theory strictly excludes pervasive colonization since we can all agree no such effort has reached us. Nevertheless, it is compelling and has implications for future SETI programs: we should search more aggressively within our solar system. Freitas has also described what to look for and where to look for it \citep{Freitas:1983b}. Furthermore, few of the arguments on either side of the debate have much implication for other galaxies. One attempt at such an argument is by Sagan and Newman \citep{SaganNewman:1983}, in which they conclude -- with what can only be regarded as scathing sarcasm -- that the notion of SRPs requires every galaxy in the cosmos to be completely mass-converted into probes. Therefore, they conclude that SRPs are a flawed theory. As explained in sections \ref{label_section_SelfReplicatingProbeVoluntaryRefrainment} and \ref{label_section_BiologicalBias}, we believe the more likely explanation is that their glib corollary speaks first to an overestimation of the probability of a successful cancerous mutation, and second to a deep biological bias with regard to computerized intelligence. 
If we disregard Sagan and Newman's claim, and accept a subluminal speed limit, then it is perfectly reasonable for extragalactic ETI to exist. This conclusion offers plenty of headroom in the argument over who should be viewed as an optimist as opposed to a pessimist. Those who argue against numerous intragalactic ETI are often labeled pessimists. However, if extragalactic ETI are brought back into the fold, and given that there are several hundred billion galaxies in the universe, we can easily permit comparable numbers of ETI without the slightest conflict with the Fermi Paradox, clearly not a pessimistic perspective. Of greater import, this idea has tremendous implications for SETI: we need to search other galaxies. \section{SETI} \label{label_section_SETI} One way of categorizing potential SETI targets is to break them into three distance ranges: the solar system, the galaxy, and the universe. Most SETI programs have focused on the galaxy. Ironically, of the three potential targets, we may be directing most of our efforts toward the category least likely to yield fruit. Ideally, our society would value research of this sort to such a degree that no hard choices would have to be made and we could simply search everywhere. However, if choices must be made, perhaps we should target at least a few SETI programs to the solar system to look for SRPs and to other galaxies to look for possible indicators of intelligence. As a starting point, we would recommend considering nearby face-on spiral galaxies residing well above the galactic plane of the Milky Way, \ie galaxies to which we ourselves appear face-on. Thus, EM transmissions will be the least extincted by cosmic dust in either galaxy, thereby maximizing the chance of detection. Possible candidates might include M31 and M33 (whose oblique inclinations and low galactic latitudes are compensated for by their remarkable proximity), NGC300, NGC2403, M51, M81, M101, and many others.
\section{Conclusion} \label{label_section_Conclusion} This paper was arranged in three parts. First, we introduced SRPs, presented the prevalent arguments against them, and showed that such arguments leave room for future SRP consideration. Namely, we proposed that recent literature has been overzealous in its exclusion of SRPs and we encourage their return to the field. Second, we presented percolation theory and its nonsociological explanation for the Fermi Paradox. We then showed that the theory can be extended in very reasonable ways which undermine its primary conclusion that galactic expansion might be intrinsically bounded. Third, we reviewed two theories of interstellar societal collapse and showed a few counter-arguments to each theory. Furthermore, we introduced ITB theory and showed that its implications might suggest a fundamental error in such theories. We then discussed one additional paper theorizing that interstellar societies shrink back to their homeworlds and explained that the model involves a number of unlikely assumptions. Following this final analysis, we described the best theory yet offered on the Fermi Paradox which permits intragalactic ETI, namely that exploration probes may currently reside in our solar system, yet undiscovered. Lastly, we offered our thoughts on how to design future SETI programs so as to maximize the likelihood of success. \section{Acknowledgements} \label{label_section_Acknowledgements} We wish to thank Angeline Madrid for editing and our Facebook/Google+ intellectual inner circle -- James Horey, Aaron Clauset, Kshanti Greene, Terran Lane, Diane Oyen, and Marlow Weston -- for helping to name the ITB theory and refine the mathematical formulations in section \ref{label_section_RefiningtheDrakeEquation}. \bibliographystyle{model2-names}
\section{Introduction} The solar neutrino anomaly has been established more and more strongly, both in its experimental~\cite{homestake,kamiokande,sage,gallex,superkam} and its theoretical~\cite{JBahcall,BP95,BP98,reviewSSM} aspects. In fact, both have undergone a very dynamical evolution. On the one hand, theoretical predictions, i.e., the standard solar models (SSM), have been refined by including several mechanisms such as helium diffusion~\cite{BP95,BP98} and an updated analysis of the $S_{17}$ astrophysical factor~\cite{BP98,INT}. One can see in ref. \cite{BKS} that theoretical predictions obtained by different SSMs, which are developed independently, are in good agreement with each other. Moreover, it has been shown that the SSM is in excellent agreement with helioseismology \cite{BP98}. On the other hand, experimental data have become more accurate due to the calibration of the experiments, larger statistics and the new-generation SuperKamiokande experiment with its solar neutrino spectral observations~\cite{superkam}. Consequently, the numbers characterizing the solar neutrino deficit have also evolved significantly. The most updated experimental data as well as theoretical predictions are shown in Table I. These observed solar neutrino data are in strong disagreement with the SSM predictions \cite{BKS,minakata98}; moreover, this conclusion does not depend on any details of the SSM. Solutions to the solar neutrino problem rely on different phenomena~\cite{30years} which deplete the number of observable neutrinos at the Earth: neutrino oscillations in vacuum~\cite{vacuum}, resonantly enhanced matter oscillations~\cite{msw}, the resonant spin-flavor precession phenomenon (RSFP)~\cite{LimMarciano,Akhmedov} and flavor mixing by new interactions~\cite{fcnc1,fcnc2}.
The capability of each of these processes to reconcile solar neutrino predictions and observations, and therefore to solve the solar neutrino anomaly, has been reassessed from time to time~\cite{update}. In this paper we investigate the current status of the RSFP scenario \cite{review}. We believe that it is worthwhile to reanalyse this mechanism in the light of new solar neutrino data as well as the new SSM. We also discuss briefly how the solar neutrino spectral observations in SuperKamiokande are affected by the RSFP mechanism. \vglue 0.5cm \begin{table}[th] \label{tab:data} \caption{Observed solar neutrino event rates used in this analysis and corresponding predictions from the reference standard solar model \protect\cite{BP98}. The quoted errors are at $1\sigma$.} \begin{tabular}{ccccc} Experiment & Data~$\pm$(stat.)~$\pm$(syst.)&Ref. & Theory \protect\cite{BP98}& Units \\ \tableline Homestake & $ 2.56 \pm 0.16 \pm 0.15$ & \protect\cite{homestake} & $7.7^{+1.2}_{-1.0}$ & SNU \\ SAGE &$69.9^{+8.0}_{-7.7}{}^{+3.9}_{-4.1}$& \protect\cite{sage}& $129^{+8}_{-6}$ & SNU \\ GALLEX & $76.4 \pm 6.3^{+4.5}_{-4.9}$ & \protect\cite{gallex}& $129^{+8}_{-6}$ & SNU \\ SuperKamiokande &$2.44 \pm 0.05{}^{+0.09}_{-0.06}$& \protect\cite{superkam}& $5.15^{+0.98}_{-0.72}$ & $10^6$ cm$^{-2}$s$^{-1}$ \end{tabular} \vglue -0.5cm \end{table} The RSFP mechanism is very sensitive to the magnetic profile in the Sun and, in fact, several possible scenarios for the magnetic field strength have been proposed by different authors \cite{alp1,krastev,limnunokawa,pulido,chauhan}. We consider several possibilities which include the main qualitative aspects of the magnetic profiles in the Sun previously invoked as a solution to the solar neutrino anomaly.
Using the minimum $\chi^2$ method to compare the theoretical predictions of the RSFP phenomenon with the experimental observations, we conclude that very good fits can be obtained for the average solar neutrino suppression if intense magnetic fields in the solar convective zone are considered. \section{RSFP mechanism} Assuming a nonvanishing transition magnetic moment of neutrinos, active solar neutrinos interacting with the magnetic field in the Sun can be spin-flavor converted into sterile nonelectron neutrinos~\cite{cisneros,vvo} (if we are dealing with Dirac particles) or into active nonelectron antineutrinos~\cite{schechtervalle} (if the involved particles are Majorana). In both cases the resulting particles interact with solar neutrino detectors significantly less than the original active electron neutrinos, so that this phenomenon can induce a depletion in the detectable solar neutrino flux. Spin-flavor precession of neutrinos can be resonantly enhanced in matter~\cite{LimMarciano,Akhmedov}, in close analogy with the MSW effect~\cite{msw}. In this case the precession strongly depends on the neutrino energy and provokes different suppressions for each portion of the solar neutrino energy spectrum. Therefore RSFP provides a satisfactory description~\cite{review,alp1,krastev,limnunokawa,pulido,chauhan} of the current experimental panorama~\cite{homestake,kamiokande,sage,gallex,superkam}: all experiments detect less than the theoretically predicted solar neutrino fluxes~\cite{BP98} and different suppressions are observed in each experiment, suggesting that the mechanism that reconciles theoretical predictions and observations has to treat the different parts of the solar neutrino spectrum differently. For simplicity, we consider two generations of neutrinos, the electron neutrino and, e.g., the muon neutrino (which could be replaced by the tau neutrino in our discussion). Furthermore, we assume that the vacuum mixing angle is zero or small enough to be neglected. (See ref.
\cite{rsfpmsw} for the case where RSFP and flavor mixing exist simultaneously.) The time evolution of neutrinos interacting with a magnetic field $B$ through a nonvanishing neutrino magnetic moment $\mu_\nu$ in matter is governed by a Schr\"odinger-like equation \cite{LimMarciano,Akhmedov}: \begin{equation} i \frac{d}{dt} \left( \begin{array}{l} \nu_{e_L} \\ \overline{\nu}_{\mu_R} \end{array} \right) = \left( \begin{array}{cl} a_{\nu_e} \, \, & \mu_\nu B \\ \mu_\nu B \, \, & \frac{\Delta m^2}{2 E} + a_{\nu_{\mu}} \end{array} \right) \left( \begin{array}{l} \nu_{e_L} \\ \overline{\nu}_{\mu_R} \end{array} \right) , \label{evolution} \end{equation} where $\nu_e$ and $\overline{\nu}_{\mu_R}$ are active electron neutrinos and muon antineutrinos, respectively, $\Delta m^2= m^2_{\nu_\mu} -m^2_{\nu_e}$ is their squared mass difference and $E$ is the neutrino energy; $a_{\nu_e}= G_F(2N_e-N_n)/\sqrt{2}$ and $a_{\nu_\mu}=G_FN_n/\sqrt{2}$, with $N_e$ and $N_n$ being the electron and neutron number densities, respectively. In eq. (\ref{evolution}) we are assuming that neutrinos are Majorana particles. For the Dirac case, the spin-flavor precession involves $\nu_e \leftrightarrow \nu_s$, where $\nu_s$ is a sterile neutrino and $a_{\nu_s}=0$. \begin{figure}[ht] \vglue -1.2cm \centerline{ \psfig{file=vmatt.eps,height=9.4cm,width=7.8cm,angle=90}} \vglue -0.5cm \caption{ Matter potentials as a function of radial distance from the solar center are plotted. The solid and dotted curves correspond to the Majorana and Dirac cases, respectively. The dashed line corresponds to $\mu_\nu B = 10^{-11}\mu_B\times 100$ kG. \label{fig:mattpotential}} \end{figure} \section{Analysis} In order to obtain the survival probability we first numerically integrate the evolution equations~(\ref{evolution}) with the varying matter density in the Sun \cite{JBahcall} for some assumed profiles of the magnetic field which will be described below. Next, using the solar neutrino flux in ref.
\cite{BP98}, we compute the expected solar neutrino event rate in each experiment, taking into account the relevant absorption cross sections~\cite{JBahcall} for the $^{71}$Ga\ and $^{37}$Cl\ experiments as well as the scattering cross sections for the $\nu_e$-$e^-$ and $\bar{\nu}_\mu$-$e^-$ reactions, including also the efficiency function for the SuperKamiokande experiment, in the same way as in ref. \cite{BKL}. We note that in this analysis we always adopt the solar model in ref. \cite{BP98} as the reference SSM. \begin{figure}[ht] \vglue -0.5cm \centerline{ \epsfig{file=bprofiles.eps,width=7.7cm} } \noindent \caption{ Various magnetic field profiles used in this work. For each field $\vev{B}$ is defined as the average of the field over the region where $B(r)$ is not zero. \label{fig:profile}} \vglue -0.5cm \end{figure} \subsection{Assumptions} We do not take into account the production point distribution of neutrinos. This can be justified for two main reasons. First, for the relevant values of $\Delta m^2$ our analysis shows (see below) that the resonance position always lies outside the neutrino production region ($r/R_{\rm SUN}<0.3$), because the solutions we find imply $\Delta m^2 \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 10^{-7}$~eV$^2$. In order to see this we plot in Fig.~1 the matter potential as a function of the radial distance from the solar center. If neutrinos are considered Majorana particles, $V_{\rm matter} \equiv a_{\nu_e}-a_{\nu_\mu}$, and if they are Dirac, $V_{\rm matter} \equiv a_{\nu_e}$. The right ordinate of the same plot also shows the value of $\Delta m^2$ for which a resonance occurs at the corresponding position. For example, if $\Delta m^2 = 10^{-7}$~eV$^2$ (for $E$= 1 MeV), the resonance is localized at $r/R_{\rm SUN}\approx 0.52$. Therefore, the resonance position always lies beyond $r/R_{\rm SUN} = 0.3$, outside the production region.
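The resonance-location argument can be sketched numerically. We use the standard exponential fit to the SSM electron density, $n_e/N_A \approx 245\,e^{-10.54\,r/R_\odot}$ mol/cm$^3$, and, as a simplification, the Dirac-case potential $V = \sqrt{2}\,G_F n_e \approx 7.63\times 10^{-14}$ eV per mol/cm$^3$ with the neutron term neglected (Fig.~1 uses the full SSM profile):

```python
import numpy as np
from scipy.optimize import brentq

def V_matter(x):
    """Matter potential in eV at x = r/R_sun (exponential SSM density fit,
    Dirac case, neutron term neglected)."""
    return 7.63e-14 * 245.0 * np.exp(-10.54 * x)

def resonance_radius(dm2_eV2, E_eV):
    """Solve the resonance condition V(x) = dm2/(2E) for x = r/R_sun."""
    return brentq(lambda x: V_matter(x) - dm2_eV2 / (2.0 * E_eV), 0.0, 1.0)

x_res = resonance_radius(1e-7, 1e6)   # dm2 = 1e-7 eV^2, E = 1 MeV
print(round(x_res, 2))                # ~0.56
```

Including the neutron term (Majorana case) lowers the potential and pushes the resonance slightly inward, consistent with the $r/R_{\rm SUN}\approx 0.52$ read off Fig.~1; either way it lies well outside the production region.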
Also, the range of $\mu_\nu B$ we consider here is much smaller than the matter potentials, $a_{\nu_e}$ and $a_{\nu_{\mu}}$, in the production region. Again this fact is shown in Fig.~1, where the quantity $\mu_\nu B = 10^{-11}\mu_B\times 100$~kG is presented. Therefore, from eq. (\ref{evolution}) we observe that neutrino spin-flavor precession is prevented before the neutrino reaches the resonance region, and the final survival probability is not affected by the production position. It is obvious from the evolution equations (\ref{evolution}) that the RSFP mechanism crucially depends on the solar magnetic field profile along the neutrino trajectory. In the present paper we choose several different profiles which, we believe, cover in general all the previously~\cite{alp1,krastev,limnunokawa,pulido,chauhan} analysed magnetic profiles which led to a solution to the solar neutrino anomaly. In Fig.~2, these magnetic fields are presented in their general aspects. The constant magnetic profile $B_1(r)$ was adopted in ref.~\cite{limnunokawa}, while the general aspects of the profiles $B_3(r)$ and $B_4(r)$ have already appeared in refs. \cite{krastev,pulido,alp1} and \cite{chauhan}, respectively. We note that close to the solar surface ($r > 0.95 R_\odot$) we have switched off the magnetic field for all the profiles considered in this work. \subsection{Definition of $\chi^2$} The relevant free parameters in the RSFP mechanism are $\Delta m^2$ and $\mu_\nu B$. Using the solar neutrino data given in Table I, we look for the region in the $\Delta m^2 -\mu_\nu \vev{B}$ parameter space, where $\vev{B}$ denotes the average field strength defined as in Fig. 2, which leads to a solution to the solar neutrino anomaly by means of the minimum $\chi^2$ method.
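The survival probability entering this fit is obtained by numerically integrating eq. (\ref{evolution}). A minimal sketch, under illustrative assumptions only: an exponential fit to the SSM electron density with the neutron term neglected, a triangular convective-zone field loosely resembling $B_3$, and $\mu_\nu = 10^{-11}\mu_B$; all numerical values here are assumptions for illustration, not the fitted results:

```python
import numpy as np
from scipy.integrate import solve_ivp

HBARC = 1.9732e-7   # eV*m
R_SUN = 6.96e8      # m
MU_B = 5.788e-5     # Bohr magneton, eV/T

def V_matter(x):
    """Matter potential in eV at x = r/R_sun (exponential SSM density fit,
    neutron term neglected for simplicity)."""
    return 7.63e-14 * 245.0 * np.exp(-10.54 * x)

def B_field(x, B_peak=10.0):
    """Triangular field in the convective zone (tesla, 10 T = 100 kG);
    loosely resembles the B_3 profile."""
    if 0.70 <= x <= 0.95:
        return B_peak * (1.0 - abs(x - 0.825) / 0.125)
    return 0.0

def survival(dm2=1.5e-8, E=1.0e6, mu=1e-11, x0=0.3):
    """P(nu_e -> nu_e) at the solar surface from eq. (1); dm2 in eV^2,
    E in eV, mu in units of the Bohr magneton."""
    def rhs(x, y):
        psi = y[:2] + 1j * y[2:]
        off = mu * MU_B * B_field(x)                 # mu_nu B coupling, eV
        H = np.array([[V_matter(x), off],
                      [off, dm2 / (2.0 * E)]])
        dpsi = -1j * (R_SUN / HBARC) * (H @ psi)     # d/dx, x dimensionless
        return np.concatenate([dpsi.real, dpsi.imag])
    sol = solve_ivp(rhs, (x0, 1.0), [1.0, 0.0, 0.0, 0.0],
                    method="DOP853", rtol=1e-8, atol=1e-10)
    psi = sol.y[:2, -1] + 1j * sol.y[2:, -1]
    return float(abs(psi[0]) ** 2)
```

With $\mu_\nu = 0$ the probability stays at unity; with the field switched on, conversion develops around the resonance inside the convective zone, which is the qualitative behavior scanned over the $\Delta m^2-\vev{B}$ plane in what follows.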
In this analysis we will use only the SuperKamiokande data \cite{superkam}, without including the Kamiokande data \cite{kamiokande}, because of the larger statistics and the smaller systematic error of the SuperKamiokande experiment. We also note that we will use the combined value of the two $^{71}$Ga\ experiments, 72.3 $\pm$ 5.6 SNU. Our $\chi^2$ is defined as follows, \begin{equation} \chi^2 = \sum_{i,j} (R_i^{th}-R_i^{obs}) [\sigma^2_{ij}(\mbox{tot})]^{-1} (R_j^{th}-R_j^{obs}), \end{equation} where $(i,j)$ run over the three experiments, i.e., $^{71}$Ga\ , $^{37}$Cl\ and SuperKamiokande, and the total error matrix $\sigma^2_{ij}(\mbox{tot})$ and the expected event rates $R_i$ are computed as follows. We essentially follow ref. \cite{FL} for the derivation of the error matrix and the treatment of the error correlations used in this work. The expected experimental rates in the absence of neutrino conversion are given by, \begin{equation} R_{i} = \sum_j C_{ij}\phi_j \ \ \ (i= \mbox{Ga},\ \mbox{Cl},\ \mbox{SK}), \end{equation} where $C_{ij}$ are the cross section coefficients and $\phi_j$ are the solar neutrino fluxes. In Sec. IV, where we consider the case with neutrino conversion, we use the coefficients $C_{ij}$ determined by properly convoluting the conversion probability in the integration over the neutrino energy spectrum for each detector and neutrino flux. In this work we consider neutrinos from the $pp$, $pep$, $^7$Be, $^8$B, $^{13}$N and $^{15}$O sources and neglect other minor fluxes such as the $^{17}$F and $hep$ neutrinos. The total error matrix $\sigma^2_{ij}$ is the sum of the theoretical one, $\sigma^2_{ij}(\mbox{th})$, and the experimental one, $\sigma^2_{ij}(\mbox{exp})$, \begin{equation} \label{total} \sigma^2_{ij}(\mbox{tot}) = \sigma^2_{ij}(\mbox{th}) + \sigma^2_{ij}(\mbox{exp}).
\end{equation} The theoretical error matrix can be further divided into the one coming from the uncertainties in the cross sections, $\sigma^2_{ij}(\mbox{cross})$ and the one coming from uncertainties in the solar neutrino flux, $\sigma^2_{ij}(\mbox{flux})$, \begin{equation} \sigma^2_{ij}(\mbox{th}) = \sigma^2_{ij}(\mbox{cross}) + \sigma^2_{ij}(\mbox{flux}). \end{equation} The cross section error matrix $\sigma^2_{ij}(\mbox{cross})$ can be calculated by, \begin{eqnarray} \sigma^2_{ij}(\mbox{cross}) &=& \delta_{ij}\sum_{k,l=1}^6 \frac{\partial R_i}{\partial \mbox{ln}C_{kj} } \frac{\partial R_j}{\partial \mbox{ln}C_{lj} } \Delta \mbox{ln}C_{kj} \Delta \mbox{ln}C_{lj} \nonumber \\ & = & \delta_{ij}\sum_{k=1}^6 (R_{ik}\Delta \mbox{ln}C_{ik})^2, \end{eqnarray} where $R_{ik}\equiv C_{ik} \phi_k$. On the other hand, the flux error matrix $\sigma^2_{ij}(\mbox{flux})$ can be calculated by, \begin{eqnarray} \label{flux_error} \sigma^2_{ij}(\mbox{flux}) &=& \sum_{k,l=1}^6 \frac{\partial R_i}{\partial \mbox{ln}\phi_{k} } \frac{\partial R_j}{\partial \mbox{ln}\phi_{l} } \sum_{m=1}^{11} (\Delta \mbox{ln} \phi_{k})_m (\Delta \mbox{ln} \phi_{l})_m \nonumber \\ & = & \sum_{k,l=1}^6 R_{ik}R_{jl} \sum_{m=1}^{11} (\Delta \mbox{ln} \phi_{k})_m (\Delta \mbox{ln} \phi_{l})_m, \end{eqnarray} where $(\Delta \mbox{ln} \phi_{k})_m$ is the fractional uncertainty of the $k$-th neutrino flux coming from the uncertainty in the $m$-th input parameter ($S_{11}$, $S_{33}$, $S_{34}$, $S_{17}$, $Z/X$, opacities, $S_{1,14}$, luminosity, age, diffusion or $^7$Be + $e^{-}$ capture rate) which are obtained by the computer code exportrates.f, available at URL http://www.sns.ias.edu/$^\sim$jnb/. We note that in eq. (\ref{flux_error}) the product $(\Delta \mbox{ln} \phi_{k})_m (\Delta \mbox{ln} \phi_{l})_m$ takes positive (negative) value when $k$-th and $l$-th fluxes are positively (negatively) correlated to each other with respect to the variation of the $m$-th input parameter. 
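The $\chi^2$ and correlation definitions translate directly into code. The numerical matrix below is a hypothetical placeholder for $\sigma^2_{ij}(\mbox{tot})$, used only to exercise the formulas; it is not the matrix computed in this work:

```python
import numpy as np

def chi_square(R_th, R_obs, sigma2_tot):
    """Eq. (2): chi^2 = (R_th - R_obs)^T [sigma^2(tot)]^{-1} (R_th - R_obs)."""
    d = np.asarray(R_th, float) - np.asarray(R_obs, float)
    return float(d @ np.linalg.inv(sigma2_tot) @ d)

def correlation(sigma2_tot):
    """Eq. (9): rho_ij = sigma2_ij / sqrt(sigma2_ii * sigma2_jj)."""
    s = np.sqrt(np.diag(sigma2_tot))
    return sigma2_tot / np.outer(s, s)

# Hypothetical placeholder total error matrix (Ga, Cl, SK ordering):
# a diagonal experimental part plus off-diagonal theory correlations.
sigma2_tot = np.array([[36.0, 6.0, 5.0],
                       [ 6.0, 4.0, 3.0],
                       [ 5.0, 3.0, 4.0]])

rho = correlation(sigma2_tot)   # unit diagonal, symmetric by construction
```

The off-diagonal theory part is what correlates the Cl and SK entries in Table II, since both depend strongly on the common $^8$B flux uncertainty.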
The experimental error matrix is given by, \begin{equation} \sigma^2_{ij}(\mbox{exp}) = \delta_{ij} \sigma_i \sigma_j, \end{equation} where $\sigma_{i,j}$ ($i,j$ = Ga, Cl, SK) stands for the combined error in each experiment. In Table II we show the correlation matrix $\rho_{ij}$ defined as, \begin{equation} \rho_{ij} \equiv \frac{ \sigma^2_{ij}(\mbox{tot}) } { \sqrt{\sigma^2_{ii}(\mbox{tot})\ \sigma^2_{jj}(\mbox{tot})} }. \label{correlation} \end{equation} \begin{table}[h] \caption[Tab]{The correlation matrix $\rho_{ij}$ obtained from eqs. (\ref{total})-(\ref{correlation}).} \begin{center} \begin{tabular}{cccc} Experiment & Correlation matrix & & \\ \hline Ga & 1.00 & & \\ Cl & 0.497 & 1.00 & \\ Super-Kam & 0.486 & 0.952 & 1.00 \\ \end{tabular} \end{center} \label{tab:corr} \vglue -3cm \end{table} \vglue 0.5cm \begin{figure}[ht] \hglue 1.8cm \epsfig{file=prob1.eps,width=7.0cm} \vglue -5.85cm \hglue 8.8cm \epsfig{file=prob2.eps,width=7.0cm} \vglue -0.5cm \hglue 1.8cm \epsfig{file=prob3.eps,width=7.0cm} \vglue -5.85cm \hglue 8.8cm \epsfig{file=prob4.eps,width=7.0cm} \caption{ The contour plots of the survival probability in the $\vev{B}-\Delta m^2/E$ plane are shown in (a), (b), (c) and (d) for the magnetic field profiles $B_1$, $B_2$, $B_3$ and $B_4$, respectively, sketched in Fig. 2. } \end{figure} \section{Results} Now we compute the spin-flavor conversion probability by numerically integrating the evolution eq. (1), assuming $\mu_\nu=10^{-11}\mu_B$ as a reference value, which is slightly below the present experimental upper bound \cite{PDG98}. Hereafter we always assume this value of the magnetic moment \cite{Raffelt}; however, as is clear from eq. (1), if $\mu_\nu$ is assumed to be smaller by a certain factor, the same effect can still be obtained by increasing the magnetic field strength accordingly, so that the product $\mu_\nu B$ is unchanged. In Fig.
3 (a), (b), (c) and (d) we show the contour plots of the survival probability $P(\nu_e\to\nu_e)= |\vev{\nu_e(t)|\nu_e(0)}|^2$ in the $\Delta m^2 - \vev{B}$ parameter space, for the magnetic profiles sketched in Fig. 2, $B_1(r)$, $B_2(r)$, $B_3(r)$ and $B_4(r)$, respectively. We note that the field $B_3$ gives very different probability contours from the other profiles, which will also be reflected in the final allowed region (see below). Including now the experimental observations on the solar neutrino signal shown in Table~I, we can determine the region in the $\Delta m^2-\vev{B}$ parameter space which leads to a RSFP solution to the solar neutrino problem at a specified confidence level. We present the $\Delta m^2-\vev{B}$ parameter region which can account for all the solar neutrino data, at 90, 95 and 99 \% C.\ L., in Figs.~4 (a), (b), (c) and (d), for the magnetic profiles $B_1(r)$, $B_2(r)$, $B_3(r)$ and $B_4(r)$, respectively. We observe from Figs.~4 (a) to (d) that a solution to the solar neutrino problem can be found when $\vev{B} \raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} $ a few times 10 kG and $\Delta m^2$ is of the order of $10^{-8}$ to $10^{-7}$ eV$^2$ for any of the magnetic profiles used in this work. Nevertheless, the quality of the fit, measured by the minimum $\chi^2$ criterion, varies a lot. The poorest fit is obtained when the continuously decaying magnetic field profile $B_4$ is used, with $\chi^2_{min}=6.1$ for one degree of freedom (three data points minus two free parameters). Better fits are obtained when the $B_1$ (uniform) and $B_2$ (large triangle) fields are employed, yielding $\chi^2_{min}=2.0$ and 1.8, respectively. The best fit appears when the triangular field in the solar convective zone, $B_3$, is employed, with the rather small value $\chi^2_{min}=0.13$. For this profile we note that, as expected from Fig. 3 (c), we have several local best fit points, also indicated in Fig.
4 (c) by the open circles, whose corresponding $\chi^2_{min}$ are, from left to right, 2.3, 0.29 and 0.19. \vglue 0.5cm \begin{figure}[ht] \hglue 1.8cm \epsfig{file=allowed_pro1.eps,width=7.0cm} \vglue -5.8cm \hglue 8.8cm \epsfig{file=allowed_pro2.eps,width=7.0cm} \vglue -0.5cm \hglue 1.8cm \epsfig{file=allowed_pro3.eps,width=7.0cm} \vglue -5.7cm \hglue 8.8cm \epsfig{file=allowed_pro4.eps,width=7.0cm} \caption{ The allowed RSFP solutions to the solar neutrino problem. The parameter regions allowed at 90, 95 and 99 \% C. L. are shown in (a), (b), (c) and (d) for the magnetic field profiles $B_1$, $B_2$, $B_3$ and $B_4$, respectively, sketched in Fig. 2. We indicate the best fit points by filled circles. In (c) we also indicate, by the open circles, the local best fit points inside each island delimited by the 90 \% C.L. curves. } \vglue 0.5cm \end{figure} The reason why we get a very good fit for $B_3$ is that this profile provides the required suppression patterns of the various neutrino fluxes implied by the latest data \cite{minakata98}, as discussed in ref. \cite{pulido}. First we note that the low energy $pp$ neutrinos are not so suppressed because their resonance positions are located in the inner region where the magnetic field is zero or small (see Figs. 1 and 2). However, the intermediate energy $^{7}$Be neutrinos can be strongly suppressed due to the rapid increase of the field at the bottom of the convective zone, since their resonance position lies slightly farther out than that of the $pp$ neutrinos. On the other hand, the high energy $^{8}$B neutrinos are moderately suppressed because their resonance positions are closer to the solar surface than the $^{7}$Be ones, where the field is decreasing. The best fitted values of $\vev{B}$ and $\Delta m^2$ as well as $\chi^2_{min}$ obtained from these different profiles are summarized in Table III. \begin{table}[h] \caption[Tab]{The best fitted parameters and $\chi^2_{min}$ for the Majorana case.
The Dirac case is presented in parentheses.} \begin{center} \begin{tabular}{cccc} Profile & $\vev{B}$ (kG) & $\Delta m^2$ (10$^{-8}$ eV$^2$) & $\chi^2_{min}$ \\ \hline 1 & 50.6\ (40.9) &3.5\ (2.3) & 2.0\ (6.2) \\ 2 & 47.1\ (40.8) & 6.1\ (4.6) & 1.8\ (5.7) \\ 3 & 118\ (69.4) & 1.5\ (1.2) & 0.13\ (1.3) \\ 4 & 82.9\ (81.6) & 8.1\ (6.6) & 6.1\ (11.4) \\ \end{tabular} \end{center} \label{tab:chi2} \end{table} We have repeated the same analysis for the Dirac neutrino case. We do not, however, show the plots for the allowed region, since they are rather similar to those presented above if the same magnetic field profile is assumed. Instead, for the case of Dirac neutrinos, we only present the best fitted parameters and $\chi^2_{min}$ in the parentheses in Table III. We see from this table that the Dirac case always leads to a worse fit if the same magnetic field profile is assumed. To understand this we should note that for the Dirac case $\nu_e$'s are converted into the right handed muon (or tau) neutrino, which does not contribute to any of the solar experiments, including the water Cherenkov experiment \cite{note}. In contrast, in the Majorana case the converted right handed $\bar{\nu}_\mu$'s do contribute to the signal observed in the SuperKamiokande detector. This makes it difficult, in the Dirac case, to reconcile the SuperKamiokande and $^{37}$Cl\ data. Let us now comment on the possibility of having such a strong magnetic field in the Sun. While there is no generally accepted theory of the solar magnetic field, it is possible to bound the field strength from very general arguments. It can be shown \cite{JBahcall} that a magnetic field of less than 10$^6$ kG in the solar core, or less than 10$^4$ kG in the solar convective zone, will hardly affect the thermal structure and nuclear reaction processes well described by the standard solar model.
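As a quick check, the magnetic pressure of a $10^4$ kG field can be compared with rough SSM gas-pressure conditions near the base of the convective zone; the density, temperature, and mean molecular weight below are order-of-magnitude assumptions, not fitted quantities:

```python
from math import pi

# Rough SSM conditions at the base of the convective zone (r ~ 0.7 R_sun),
# assumed for illustration (cgs units throughout):
K_B = 1.381e-16           # Boltzmann constant, erg/K
M_P = 1.673e-24           # proton mass, g
rho, T, mu_mol = 0.2, 2.2e6, 0.6   # g/cm^3, K, mean molecular weight

P_gas = rho * K_B * T / (mu_mol * M_P)   # ideal-gas pressure, dyn/cm^2
B = 1.0e7                                # 10^4 kG expressed in gauss
P_mag = B**2 / (8.0 * pi)                # magnetic pressure, dyn/cm^2

print(P_mag / P_gas)   # ~0.07: magnetic pressure well below gas pressure
```

Even at the most generous $10^4$ kG limit the magnetic pressure is only a few percent of the gas pressure there, which is why fields of this size leave the SSM thermal structure essentially untouched.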
These values come from the requirement that the magnetic pressure should be much smaller than the gas pressure, and can be regarded as the most generous upper limits on the magnetic field inside the Sun. More stringent bounds on the magnetic field in the convective zone are found in refs. \cite{SR,shi}, where the discussion is based on the non-linear effects which eventually prevent the growth of magnetic fields created by the dynamo process. A naive limit can be obtained by estimating the field tension necessary to prevent a fluid element from sinking into a magnetically stratified region, so that the magnetic flux would not be further amplified. By equating the magnetic tension to the energy excess of a sinking element at the bottom of the convective zone, Schmitt and Rosner \cite{SR} obtained $\sim 10$ kG as an upper bound on the magnetic field, which is of the order of magnitude we need to obtain a good fit to the solar data by the RSFP mechanism for the reference value of the magnetic moment, $\mu_\nu=10^{-11}$ $\mu_B$. Finally, we briefly discuss how the recoil electron energy spectrum in the SuperKamiokande detector will be affected by the RSFP mechanism \cite{pulido98}. In Fig. 5 (a) we plot the electron neutrino survival probabilities as a function of neutrino energy using the best fit parameters. In Fig. 5 (b) we plot, using the same best fit parameters, the recoil electron energy spectra expected in the SuperKamiokande detector, divided by the standard prediction, together with the latest data from SuperKamiokande \cite{superkam}. As we can see from the plot, the observed data indicate some distortion, mainly due to the last three data points in the higher energy bins. We note, however, that because of the experimental errors it seems difficult at this moment to exclude any of the spectra predicted by the different field profiles.
We have to wait for more statistics and a more careful analysis from the experimental group before drawing any definite conclusion. \begin{figure}[ht] \hglue 1.4cm \epsfig{file=bestprob.eps,width=7.5cm} \vglue -6.2cm \hglue 8.8cm \epsfig{file=recoil.eps,width=7.5cm} \caption{In (a) we plot the electron neutrino survival probability as a function of energy with the best fit parameters for the various field profiles. In (b) we plot the recoil electron energy spectra expected from the RSFP scenario using our best fit parameters, divided by the SSM prediction. The SuperKamiokande data are also shown by the filled circles with error bars. The last data point includes the contribution from electrons with energy larger than 14 MeV. \label{fig:recoil}} \end{figure} \section{Conclusions} We have reanalysed the RSFP mechanism as a solution to the solar neutrino problem in the light of the latest experimental data as well as the theoretical predictions. We found that the quality of the RSFP solution to the solar neutrino anomaly crucially depends on the solar magnetic field configuration along the neutrino trajectory inside the Sun. The best fit to the observed solar neutrino data, which seems to be even better than the usual MSW solution as far as the total rates are concerned, is obtained if an intense magnetic field in the convective zone is assumed, in agreement with the conclusion of ref. \cite{pulido}, whereas the linearly decaying magnetic field gives the worst fit. We note, however, that the required magnitudes of the free parameters involved in the process, i.e., the magnetic field strength multiplied by the neutrino magnetic moment, $\mu_\nu \vev{B}$, and the squared mass difference $\Delta m^2$, are of the same order, $\mu_\nu \vev{B} \approx \mbox{few\ times\ } 10^{-11}\mu_B \cdot 10$~kG and $\Delta m^2 \approx$ few times $10^{-8}$~eV$^2$, for any of the field profiles assumed in this work.
Our ignorance about the profile as well as the magnitude of the solar magnetic field makes this approach to the solar neutrino observations less predictive than the alternative approaches~\cite{vacuum,msw,fcnc1}. Nevertheless the presence of this mechanism opens some interesting possibilities. One possibility is to look for a time variation of the solar neutrino signal \cite{mhd}, which cannot occur in the alternative solutions found in refs. \cite{vacuum,msw,fcnc1}. Any time variation of the solar neutrino signal which can be attributed to a time variation of the solar magnetic field would be a good signature of this mechanism. Although SuperKamiokande has not yet observed any significant time variation within the experimental uncertainty, this possibility remains open. Another possibility is to look for a solar $\bar{\nu}_e$ flux, which cannot be produced in the usual MSW or vacuum oscillation scenarios but can be produced by the RSFP mechanism if flavor mixing is included: the $\bar{\nu}_\mu$'s produced by the RSFP mechanism can be converted into $\bar{\nu}_e$'s by the usual vacuum oscillation. Ref. \cite{fiorentinie} suggests observing (or placing an upper bound on) the $\bar{\nu}_e$ flux in SuperKamiokande, whereas ref. \cite{pastor} suggests using low energy solar neutrino experiments such as Borexino or Hellaz. We finally stress that the RSFP mechanism can still provide a good solution to the solar neutrino problem, comparable in quality to the MSW or Just So solutions, and is not excluded by the present solar neutrino data. \vskip 0.5cm \centerline{\bf Acknowledgements} The authors would like to thank Eugeni Akhmedov and Jo\~ao Pulido for useful discussions and valuable suggestions, John Bahcall for helpful correspondence and encouragement, Andrei Gruzinov for a helpful comment regarding the size of the solar magnetic field, and Eligio Lisi for useful comments regarding the $\chi^2$ analysis.
The authors would also like to thank Conselho Nacional de Desenvolvimento Cient\'\i fico e Tecnol\'ogico (CNPq), PRONEX and Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP) for financial support. H. N. has been supported by a postdoctoral fellowship from FAPESP.
\section*{Introduction} \label{intro} In this article, we study the subvarieties with nonsingular real loci of a smooth projective real algebraic variety $X$. We consider their classes in the Chow groups of~$X$ and whether their real loci can approximate a fixed $\mathcal{C}^{\infty}$ submanifold of $X(\mathbb R)$. Let $c$ and $d$ denote the codimension and the dimension of these subvarieties. The guiding principle of our results is that, for each of the three problems that we will consider in \S\S\ref{Chow}-\ref{approximation}, subvarieties with nonsingular real loci are abundant when $d<c$ (see Theorems~\ref{Chow1}, \ref{Chow2} and \ref{thC1}), but may be scarce for $d\geq c$ (see Theorems \ref{Chow3}, \ref{Chow4} and~\ref{thC2}). The geometric rationale behind this principle goes back to Whitney \cite{Whitneydiff}: a $d$\nobreakdash-dimensional variety mapped generically to $X$ is expected not to self-intersect, hence to have nonsingular image in $X$, only if~$d< c$. \subsection{Chow groups} \label{Chow} It is an old question, going back to \cite[\S 5.17]{BH}, to decide when the Chow group $\mathrm{CH}_d(X)$ of a smooth projective variety $X$ of dimension $c+d$ over a field is generated by classes of smooth subvarieties of $X$. The main result in this direction, due to Hironaka \cite[Theorem p.~50]{Hironakasmoothing}, gives a positive answer if $d<c$ and $d\leq 3$ (his arguments now work over any infinite perfect field, thanks to \cite{CP}). Although this question cannot be answered in the affirmative in general (a counterexample for $c=2$ and $d=7$ appears in \cite[Theorem~1]{HRT}, see also Theorem~\ref{Chow5}), one may wonder whether Hironaka's theorem holds as soon as $d<c$. In real algebraic geometry, it is natural to consider, more generally, subvarieties that are smooth along their real loci. Our first theorem is a variant of Hironaka's result in this setting, valid for all values of $(c,d)$ such that $d<c$. 
\begin{thm}[Theorem \ref{Chowth}] \label{Chow1} Let $X$ be a smooth projective variety of dimension $c+d$ over $\mathbb R$. If $d<c$, then the group $\mathrm{CH}_d(X)$ is generated by classes of closed subvarieties of~$X$ that are smooth along their real loci. \end{thm} Our proof is based on the smoothing technique developed by Hironaka in \cite{Hironakasmoothing}. We need to refine it for two reasons: to control real loci, and to deal with the singularities that inevitably appear in the course of our proof if $d>3$. To do so, we rely on the theory of linkage, as developed by Peskine and Szpiro \cite{PS} and Huneke and Ulrich \cite{HUDCG}. Our argument works over an arbitrary real closed field. \vspace{.5em} Our second theorem shows that Theorem \ref{Chow1} is optimal, for infinitely many values of~$c$. We let $\alpha(m)$ denote the number of $1$'s in the dyadic expansion of $m$. \begin{thm}[Theorem \ref{Chowth3}] \label{Chow3} If $d\geq c$ are such that $\alpha(c+1)\geq 3$, there exists an abelian variety $X$ of dimension $c+d$ over $\mathbb R$ such that $\mathrm{CH}_d(X)$ is not generated by classes of closed subvarieties of $X$ that are smooth along their real loci. \end{thm} Theorem \ref{Chow3} is entirely new. The hypothesis that $\alpha(c+1)\geq 3$ cannot be weakened to ${\alpha(c+1)\geq 2}$. Indeed, Kleiman has shown that the Chow group of codimension~$2$ cycles on a smooth projective fourfold or fivefold over an infinite field is generated by classes of smooth subvarieties (see \cite[Theorem 5.8]{KleimanGr}, where the hypothesis that the base field is algebraically closed may be discarded as a theory of Chow groups and Chern classes is now available in the required generality \cite{Fulton}). Let us briefly explain the principle of the proof of Theorem \ref{Chow3} in the key case where $c=d$. Assume for simplicity that $\beta\in\mathrm{CH}_d(X)$ is the class of a closed subvariety $Y\subset X$ which is smooth along $Y(\mathbb R)$.
Let $g:W\to X$ be a morphism obtained by resolving the singularities of~$Y$. The double locus of $g$, which is well-defined as a $0$-cycle on~$W$, has degree divisible by~$4$. Indeed, double points come two by two, and each such pair has a distinct complex conjugate. On the other hand, a double point formula due to Fulton \cite[\S 9.3]{Fulton} computes the degree of this double locus in terms of the Chern classes of $X$ and~$W$ and of the self-intersection of $Y$ in $X$. Divisibility results for Chern numbers due to Rees and Thomas \cite[Theorem 3]{RT} now give restrictions on $\beta$, which sometimes lead to a contradiction. This strategy applies as well over $\mathbb C$, and yields new examples of smooth projective complex varieties whose Chow groups are not generated by smooth subvarieties. \begin{thm}[Theorem \ref{Chowth5}] \label{Chow5} If $d\geq c$ are such that $\alpha(c+1)\geq 3$, there exists a smooth projective variety~$X$ of dimension $c+d$ over $\mathbb C$ such that $\mathrm{CH}_d(X)$ is not generated by classes of smooth closed subvarieties of $X$. \end{thm} This complements the example of \cite[Theorem~1]{HRT}, where $c=2$ and $d=7$. Theorem \ref{Chow5} is closely related to \cite[Proposition 1]{RTac}, where the easier problem of showing that a cycle is not rationally equivalent to the class of a smooth subvariety (as opposed to a linear combination of classes of smooth subvarieties) is considered. In contrast with \cite{HRT,RTac}, our argument is not purely topological, since it relies partly on Hodge theory. \subsection{The kernel of the Borel--Haefliger map} \label{BH} If $X$ is a smooth projective variety of dimension $c+d$ over $\mathbb R$, a related problem is to determine the subgroup of $\mathrm{CH}_d(X)$ generated by classes of subvarieties with no real points. 
This subgroup is included in the kernel $\mathrm{CH}_d(X)_{\mathbb R-\mathrm{hom}}$ of the Borel--Haefliger cycle class map ${\mathrm{cl}_{\mathbb R}:\mathrm{CH}_d(X)\to H_d(X(\mathbb R),\mathbb Z/2)}$ (see \cite{BH} or \cite[\S 1.6.2]{BW1}), which associates with the class of an integral closed subvariety $Y\subset X$ the homology class of its real locus. One may wonder when these two subgroups coincide. This question was known to have a positive answer for $c=1$ (Br\"ocker \cite{Brocker}, see also \hbox{\cite[\S 4]{Scheidererpurity}}), for $d=0$ (Colliot-Th\'el\`ene and Ischebeck \cite[Proposition~3.2~(ii)]{CTI}), and for $d=1$ and $c=2$ (Kucharz \cite[Theorem~1.2]{KucChow}). Combining our improvements of Hironaka's smoothing technique and a theorem of Ischebeck and Sch\"ulting according to which $\mathrm{CH}_d(X)_{\mathbb R-\mathrm{hom}}$ is generated by classes of integral closed subvarieties of $X$ whose real locus is not Zariski-dense \cite[Main Theorem 4.3]{IS}, we obtain an affirmative answer for all values of $(c,d)$ such that $d<c$. \begin{thm}[Theorem \ref{Chowth2}] \label{Chow2} Let $X$ be a smooth projective variety of dimension $c+d$ over $\mathbb R$. If $d<c$, then $\mathrm{CH}_d(X)_{\mathbb R-\mathrm{hom}}$ is generated by classes of closed integral subvarieties of~$X$ with empty real loci. \end{thm} Theorem \ref{Chow2} fails in general over non-archimedean real closed fields (see Remark~\ref{remrealclosed}). Kucharz has shown in \cite[Theorem 1.1]{KucChow} that the hypothesis $d<c$ of Theorem~\ref{Chow2} cannot be improved, for all even values of $c\geq 2$. We extend this result to all the values of $c$ not of the form $2^k-1$. \begin{thm}[Theorem \ref{Chowth4}] \label{Chow4} If $d\geq c$ are such that $\alpha(c+1)\geq 2$, there exists an abelian variety $X$ of dimension $c+d$ over $\mathbb R$ such that $\mathrm{CH}_d(X)_{\mathbb R-\mathrm{hom}}$ is not generated by classes of closed subvarieties of $X$ with empty real loci.
\end{thm} The proof of Theorem \ref{Chow4} follows the same path as that of Theorem \ref{Chow3}, using in addition a new result on congruences of Chern numbers (Theorem \ref{divtop}). The hypothesis that $\alpha(c+1)\geq 2$ in Theorem \ref{Chow4} cannot be removed in general, in view of Br\"ocker's above-mentioned theorem \cite{Brocker} when $c=1$. \subsection{Algebraic approximation} \label{approximation} In \S\ref{approximation}, we fix a smooth projective variety $X$ of dimension $c+d$ over~$\mathbb R$, and a closed $d$-dimensional $\mathcal{C}^{\infty}$ submanifold $j:M\hookrightarrow X(\mathbb R)$. \subsubsection{Approximation properties} We focus on the classical question whether $M$ can be approximated by the real loci of algebraic subvarieties of $X$ in one of the following two senses: \vspace{1.5pt} \begin{flushright} $(A_{\mathbb R})\mkern9mu$\begin{minipage}[t]{.92\textwidth} For all neighbourhoods $\mathcal{U}\subset\mathcal{C}^{\infty}(M,X(\mathbb R))$ of the inclusion, there exist ${\phi\in\mathcal{U}}$ and a closed subvariety $Y\subset X$ which is smooth along $Y(\mathbb R)$ such that $\phi(M)=Y(\mathbb R)$. \end{minipage}\end{flushright} \vspace*{.5pt} \begin{flushright} $(A_{\mathbb C})\mkern9mu$\begin{minipage}[t]{.92\textwidth} For all neighbourhoods $\mathcal{U}\subset\mathcal{C}^{\infty}(M,X(\mathbb R))$ of the inclusion, there exist $\phi\in\mathcal{U}$ and a smooth closed subvariety $Y\subset X$ such that $\phi(M)=Y(\mathbb R)$. \end{minipage} \end{flushright} \vspace*{3pt} Property $(A_{\mathbb R})$, in which we allow the subvariety $Y$ to have singularities away from its real locus, is more frequently considered in the literature (see for instance \cite[Definition 12.4.10]{BCR} or \cite[\S 2.8]{AKbook}). \subsubsection{Homology and cobordism} \label{cobordism} There is a classical homological obstruction to the validity of these approximation properties.
Consider the group $H_d^{\mathrm{alg}}(X(\mathbb R),\mathbb Z/2)$ of algebraic homology classes of $X(\mathbb R)$, which is the image of the Borel--Haefliger cycle class map $\mathrm{cl}_{\mathbb R}:\mathrm{CH}_d(X)\to H_d(X(\mathbb R),\mathbb Z/2)$. Properties~$(A_{\mathbb R})$ and $(A_{\mathbb C})$ both imply: \begin{flushright} $(H)\mkern9mu$\begin{minipage}[t]{.92\textwidth} One has $j_*[M]\in H_d^{\mathrm{alg}}(X(\mathbb R),\mathbb Z/2)$. \end{minipage} \end{flushright} \vspace*{3pt} Our main theorems concerning algebraic approximation (Theorems~\ref{thC1} and \ref{thC2} below) take into account a finer obstruction to the approximation properties $(A_{\mathbb R})$ and $(A_{\mathbb C})$, based on cobordism theory, and originating from \cite{AKIHES}. Recall that two $\mathcal{C}^{\infty}$ maps $f_1:N_1\to X(\mathbb R)$ and $f_2:N_2\to X(\mathbb R)$, where the $N_i$ are $d$-dimensional compact~$\mathcal{C}^{\infty}$ manifolds, are said to be \textit{cobordant} if there exists a compact $\mathcal{C}^{\infty}$ manifold with boundary $C$, a diffeomorphism $\partial C\simeq N_1\cup N_2$, and a $\mathcal{C}^{\infty}$ map $F:C\to X(\mathbb R)$ such that $F|_{N_i}=f_i$ for $i\in\{1,2\}$. The group (for the disjoint union) of cobordism equivalence classes of such maps is the $d$-th \textit{unoriented cobordism group} $MO_d(X(\mathbb R))$ of $X(\mathbb R)$. Let $MO_d^{\mathrm{alg}}(X(\mathbb R))\subset MO_d(X(\mathbb R))$ be the subgroup generated by cobordism classes of $\mathcal{C}^{\infty}$ maps of the form $g(\mathbb R):W(\mathbb R)\to X(\mathbb R)$, where $g:W\to X$ is a morphism of smooth projective varieties over $\mathbb R$ and $W$ has dimension $d$. We consider the following property: \vspace*{1.5pt} \begin{flushright} $(C)\mkern9mu$\begin{minipage}[t]{.92\textwidth} One has $[j:M\hookrightarrow X(\mathbb R)]\in MO_d^{\mathrm{alg}}(X(\mathbb R))$. 
\end{minipage} \end{flushright} \vspace*{3pt} Property $(C)$ is a necessary condition for the validity of $(A_{\mathbb C})$ and $(A_{\mathbb R})$, by resolution of singularities. Property $(C)$ implies property $(H)$ since algebraic homology classes are preserved by push-forwards (see for instance \cite[\S 1.6.2]{BW1}), and is equivalent to it if $d\leq 2$ or $c\leq 1$. It was used by Bochnak and Kucharz \cite[Corollary 1.3]{BKsub} to give examples where $(H)$ holds but $(A_{\mathbb R})$ fails when $d\geq 3$ and~$c\geq 2$. We stress that although $(C)$ really is finer than $(H)$, checking whether it holds or not is not a more difficult problem (using \cite[Corollary 1 p.~314]{IS}). \subsubsection{Existence of algebraic approximations} We show that cobordism is the only obstruction to the validity of $(A_{\mathbb R})$ and $(A_{\mathbb C})$ for low values of $d$. \begin{thm}[Theorem \ref{approxth}] \label{thC1} (i) Properties $(C)$ and $(A_{\mathbb R})$ are equivalent if $d<c$. (ii) They are also equivalent to $(A_{\mathbb C})$ if $d<c$ and $d\leq 3$. \end{thm} Theorem \ref{thC1} (i) was already known when $d=1$, thanks to Bochnak and Kucharz \cite[Theorem~1.1]{BKsub} for $c=2$, and to Wittenberg and the author \cite[Theorem~6.8]{BW2} for any $c\geq 2$ (improving earlier results by Akbulut and King \cite{AKcourbes}). These references consider property $(H)$ instead of $(C)$, but they are equivalent when~$d=1$ by \cite[Lemma~2.1]{AKIHES}. Theorem~\ref{thC1} is new for $d\geq 2$. Our proof is based on a relative Nash--Tognoli theorem of Akbulut and King (see \cite[Proposition 0.2]{AKIHES} or \cite[Theorem 2.8.4]{AKbook}), which solves the approximation problem up to unwanted singular points. To remove these singular points, we use the refinements of Hironaka's smoothing method already mentioned in \S\S\ref{Chow}-\ref{BH}. 
We emphasize that Hironaka's smoothing technique, as developed in \cite{Hironakasmoothing}, had already been applied in the context of real algebraic approximation in the proof of \cite[Theorem~6.8]{BW2}. We do not know whether $(C)$ and $(A_{\mathbb C})$ are always equivalent when $d<c$. \subsubsection{Obstructions to algebraic approximation} We also prove that Theorem \ref{thC1}~(i) is sharp: it may fail as soon as $d\geq c$, for infinitely many values of $c$. Recall that $\alpha(m)$ is the number of $1$'s in the dyadic expansion of $m\geq 0$. \begin{thm}[Theorem \ref{thji}] \label{thC2} Let $d\geq c$ and $e\geq 1$ be such that $\alpha(c+e)=2e$. \begin{enumerate}[(i)] \item If $e=1$, there exist $X$ and $M$ such that $(C)$ holds but $(A_{\mathbb R})$ fails. \item In general, there exist $X$ and $M$ such that $(C)$ holds but $(A_{\mathbb C})$ fails. \end{enumerate} \end{thm} The hypothesis on $c$ that $\alpha(c+e)=2e$ cannot be entirely dispensed with, as $(A_{\mathbb R})$ and $(A_{\mathbb C})$ are implied by~$(H)$, hence by $(C)$, when $c=1$ (see Proposition~\ref{hyp}). To the best of our knowledge, Theorem \ref{thC2} (i) features the first examples demonstrating that properties $(C)$ and $(A_{\mathbb R})$ are not equivalent in general. The values of~$c$ for which Theorem \ref{thC2} (i) applies are $c\in\{2,4,5,8,9,11,16,\dots\}$. We have not been able to disprove the equivalence of $(C)$ and $(A_{\mathbb R})$ for other values of $c\geq 2$, for instance for $c=3$. The first case left open is $c=d=3$. Examples where $(C)$ (and even $(A_{\mathbb R})$) holds but $(A_{\mathbb C})$ fails had already been obtained by Akbulut and King \cite[Theorem 4]{AKtransc}, and refined by Kucharz \cite[Theorem 1.1]{Kuctransc}. Their examples work for all $(c,d)$ with $c\geq 2$ and $d\geq c+2$. The range of pairs $(c,d)$ that we reach in Theorem \ref{thC2} (ii) is different. The proof of Theorem \ref{thC2} uses techniques similar to those of Theorems \ref{Chow3} and~\ref{Chow4}.
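The dyadic conditions appearing in Theorems \ref{Chow3}, \ref{Chow4} and \ref{thC2} are purely arithmetic, so the admissible codimensions can be checked mechanically. The following small script (an illustration only; the helper \texttt{alpha} simply mirrors the notation of the text) regenerates the list $c\in\{2,4,5,8,9,11,16,\dots\}$ quoted above:

```python
def alpha(m: int) -> int:
    """Number of 1's in the dyadic (binary) expansion of m >= 0."""
    return bin(m).count("1")

# Codimensions c with alpha(c + 1) == 2, the condition of Theorem thC2 (i)
# (case e = 1): c + 1 must be a sum of two distinct powers of 2.
admissible = [c for c in range(2, 17) if alpha(c + 1) == 2]
print(admissible)  # [2, 4, 5, 8, 9, 11, 16]

# Smallest codimension covered by Theorem Chow3 (alpha(c + 1) >= 3):
# c = 6, since 7 = 111 in binary.
print(min(c for c in range(1, 100) if alpha(c + 1) >= 3))  # 6
```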
Although both properties $(C)$ and $(A_{\mathbb R})$ only involve real loci, the proof of Theorem~\ref{thC2}~(i) makes use in an essential way of global topological properties of sets of complex points, through their classes in the complex cobordism ring $MU_*$. \subsubsection{Products of projective spaces} In \cite[p.~269]{KvH}, Kucharz and van Hamel ask whether property $(A_{\mathbb R})$ always holds when $X=\mathbb P^n_{\mathbb R}$. The obstructions used in the proof of Theorem~\ref{thC2}~(i) show that this question would have a negative answer if one replaced $\mathbb P^n_{\mathbb R}$ with other very similar varieties, such as some products of projective spaces (note that property~$(C)$ always holds for these varieties by \cite[Lemma~2.1]{AKIHES}). This demonstrates the very particular role played by projective spaces in the question of Kucharz and van~Hamel. \begin{thm}[Theorem \ref{projth}] \label{thP} For all $k\geq 1$, property $(A_{\mathbb R})$ fails in general for $c=d=2^k$ and $X=\mathbb P^1_{\mathbb R}\times\mathbb P^{2^{k+1}-1}_{\mathbb R}$. \end{thm} \subsubsection{Algebraic approximation and algebraic homology} In \cite[pp.~685-686]{BKsub}, Bochnak and Kucharz ask for which values of $c$ and $d$ the properties $(H)$ and $(A_{\mathbb R})$ are in fact equivalent. Theorems~\ref{thC1} and \ref{thC2}, combined with the existing literature, yield a full answer to that question, and disprove the expectation raised in \cite[p.~686]{BKsub} that $(H)$ and $(A_{\mathbb R})$ are not equivalent for $d=2$ and~$c\geq 3$. \begin{thm}[\S\ref{proofsec}] \label{thH} Properties $(H)$, $(C)$, $(A_{\mathbb R})$ and $(A_{\mathbb C})$ are all equivalent in the following cases: if $c\leq 1$, if $d\leq 1$, or if $d=2$ and $c\geq 3$. For all other values of $c$ and $d$, there exist $X$ and $M$ satisfying $(H)$ but not $(A_{\mathbb R})$. 
\end{thm} \subsection{Structure of the article} We study linkage to expand the scope of Hironaka's smoothing technique in \S\ref{linkage}, and use it in \S\ref{sectionapprox} to prove Theorems~\ref{Chow1}, \ref{Chow2} and~\ref{thC1}. Generalities about complex cobordism and an application to the divisibility of the top Segre class may be found in \S\ref{complexcobordism}. This result and a double point formula are combined in \S\ref{doublepoint} to prove Theorems \ref{Chow3}, \ref{Chow5}, \ref{Chow4}, \ref{thC2} and \ref{thP}. We explain how to deduce Theorem \ref{thH} in \S\ref{final}. \subsection{Notation and conventions} \label{notation} A variety over a field $k$ is a separated scheme of finite type over $k$. Smooth varieties over $k$ are understood to be equidimensional. If $f:X\to Y$ is a morphism of varieties over $k$ and $k'$ is a field extension of $k$, we let $f(k'):X(k')\to Y(k')$ be the map induced at the level of $k'$-points. We denote by $\mathbb R$ and $\mathbb C$ the fields of real and complex numbers. All $\mathcal{C}^{\infty}$ manifolds are assumed to be Hausdorff and second countable. We endow the set $\mathcal{C}^{\infty}(M,N)$ of $\mathcal{C}^{\infty}$ maps between two $\mathcal{C}^{\infty}$ manifolds with the weak $\mathcal{C}^{\infty}$ topology \cite[p.~36]{Hirsch}. For $m\geq 0$, we let $\alpha(m)$ be the number of $1$'s in the dyadic expansion of $m$. \section{Linkage} \label{linkage} In the whole of \S\ref{linkage}, we fix an infinite field $k$, a smooth projective variety $V$ over~$k$, a very ample line bundle $\mathcal{O}_V(1)$ on $V$, and a (possibly empty) Cohen--Macaulay closed subscheme $W\subset V$ of pure codimension $r$ in $V$. 
We study the subvarieties of $V$ that are linked to $W$ by complete intersections defined by sections of multiples of~$\mathcal{O}_V(1)$~(\S\ref{link}), and their behaviour in families (\S\S\ref{family}-\ref{generic}), focusing in particular on their images by a morphism and on their real loci when $k$ is a real closed field (\S\S\ref{morphism}-\ref{realpar}). If $\underline{l}=(l_1,\dots,l_r)$ is an $r$-tuple of integers and if $\mathcal{F}$ is a coherent sheaf on $V$, we set $\mathcal{F}(\underline{l}):=\bigoplus_{i=1}^r\mathcal{F}(l_i)$. In particular, $H^0(V,\mathcal{F}(\underline{l}))=\bigoplus_{i=1}^r H^0(V,\mathcal{F}(l_i))$. A statement depending on an $r$-tuple of integers $\underline{l}=(l_1,\dots,l_r)$ is said to hold for $\underline{l}\gg0$ if it holds for $l_r\gg\dots\gg l_2\gg l_1\gg 0$, i.e., if $l_1$ is big enough, if $l_2$ is big enough (depending on $l_1$), and so forth. \subsection{Linked subvarieties} \label{link} Let $\mathcal{I}_W\subset\mathcal{O}_V$ be the ideal sheaf of $W$ in $V$. Choose an $r$-tuple of integers $\underline{l}=(l_1,\dots,l_r)$ and a section $\underline{F}\in H^0(V,\mathcal{I}_W(\underline{l}))$ such that $\underline{F}=(F_1,\dots,F_r)$ is a regular sequence (such $\underline{F}$ always exists if $\mathcal{I}_W(\underline{l})$ is generated by its global sections, for instance for $\underline{l}\gg0$). Let ${Z:=\{F_1=\dots=F_r=0\}}\subset V$ be the complete intersection it defines, and let $\mathcal{I}_Z=\langle\underline{F}\rangle\subset\mathcal{O}_V$ be its ideal sheaf. Let $W'\subset V$ be the subvariety with ideal sheaf $\mathcal{I}_{W'}:=(\mathcal{I}_Z:\mathcal{I}_W)\subset\mathcal{O}_V$, where a local section $s\in \mathcal{O}_V$ belongs to $(\mathcal{I}_Z:\mathcal{I}_W)$ if multiplication by $s$ induces a morphism $\mathcal{I}_W\xrightarrow{s}\mathcal{I}_Z$. One has $Z=W\cup W'$ set-theoretically. 
It is a theorem of Peskine and Szpiro \cite[Proposition 1.3]{PS} that $W'\subset V$ is also Cohen--Macaulay of pure codimension~$r$, and that $\mathcal{I}_{W}=(\mathcal{I}_Z:\mathcal{I}_{W'})\subset\mathcal{O}_V$. In view of the symmetry of the relation between the subschemes $W$ and $W'$ of $V$, they are said to be \textit{linked} by the regular sequence~$\underline{F}$. We write $W\sim W'$, or $W\sim_{\underline{l}} W'$ if we want to emphasize that the regular sequence is a section of $\mathcal{O}_V(\underline{l})$. We also say that $W\sim W'$ is the \textit{link} defined by $\underline{F}$. \begin{rems} (i) In the whole of \S\ref{linkage}, we could have only considered links with respect to complete intersections of multidegree $(l,\dots,l)$, with $l\gg 0$ when needed. The reason why we allow multidegrees $\underline{l}=(l_1,\dots,l_r)$, requiring $\underline{l}\gg 0$ when needed, is to be able to apply directly the proof of \cite[Lemma 5.1.1]{Hironakasmoothing} in the proof of Proposition \ref{Hironaka} below. (ii) In \S\S\ref{link}--\ref{generic}, we could allow $V$ to be any Gorenstein projective variety (see especially \cite[Proposition 1.3]{PS}). \end{rems} \begin{lem} \label{choosesequence} Let $x\in V$, and let $g_1,\dots,g_r\in\mathcal{I}_{W,x}\subset \mathcal{O}_{V,x}$ be a regular sequence. Then, for an $r$-tuple of integers $\underline{l}\gg 0$, there exists a regular sequence $\underline{F}\in H^0(V,\mathcal{I}_W(\underline{l}))$ such that the ideals $\langle \underline{F}\rangle$ and $\langle g_1,\dots,g_r\rangle$ of $\mathcal{O}_{V,x}$ coincide. \end{lem} \begin{proof} Let $Y\subset V$ and $Y_i\subset V$ be the schematic closures of $\{g_1=\dots=g_r=0\}$ and $\{g_i=0\}$ in $V$. Let $\mathcal{I}_W,\mathcal{I}_Y,\mathcal{I}_{Y_i}\subset\mathcal{O}_V$ be the ideal sheaves of $W$, $Y$ and $Y_i$ in $V$ and define $\mathcal{I}:=\mathcal{I}_W\cap\mathcal{I}_Y$ and $\mathcal{I}_i:=\mathcal{I}_W\cap\mathcal{I}_{Y_i}$. 
The subscheme of $V$ defined by $\mathcal{I}$ has support $Y\cup W$, hence has pure codimension $r$ in $V$, and coincides with $Y$ in a neighbourhood of $x$. Choose $\underline{l}\gg 0$ so that the sheaves $\mathcal{I}(\underline{l})$ and $\mathcal{I}_i(l_i)$ are generated by their global sections, and choose a general element $\underline{F}\in H^0(V,\mathcal{I}(\underline{l}))$. Since $\mathcal{I}_i(l_i)$ is globally generated, there exist $G_i\in H^0(V,\mathcal{I}_i(l_i))\subset H^0(V,\mathcal{I}(l_i))$ with $\langle G_i\rangle=\langle g_i\rangle\subset\mathcal{O}_{V,x}$, hence with $\langle G_1,\dots,G_r\rangle=\langle g_1,\dots,g_r\rangle\subset\mathcal{O}_{V,x}$. Since $\underline{F}$ has been chosen general, we deduce that $\langle \underline{F}\rangle=\langle g_1,\dots,g_r\rangle\subset\mathcal{O}_{V,x}$. Since $\mathcal{I}(\underline{l})$ is globally generated, we also see that $\underline{F}$ forms a regular sequence. The lemma is proven. \end{proof} \begin{lem} \label{disappear} Let $W=W_0\sim_{\underline{l}_1} W_1\sim_{\underline{l}_2}\dots\sim_{\underline{l}_j}W_j$ be links of subschemes of~$V$. Assume that $W$ is a local complete intersection at $x\in W$. For $r$-tuples of integers $\underline{l}_{2j+1}\gg\dots\gg \underline{l}_{j+1}\gg0$, there exists a chain $W_j\sim_{\underline{l}_{j+1}} W_{j+1}\sim_{\underline{l}_{j+2}}\dots\sim_{\underline{l}_{2j+1}}W_{2j+1}$ of links of subschemes of $V$ such that $x\notin W_{2j+1}$. \end{lem} \begin{proof} For $1\leq i\leq j$, let $\underline{F}_i$ be the regular sequence yielding the link $W_{i-1}\sim_{\underline{l}_i} W_i$. Thanks to Lemma \ref{choosesequence}, one may choose inductively, for $j+1\leq i\leq 2j$, an $r$-tuple $\underline{l}_i\gg 0$, a regular sequence $\underline{F}_i\in H^0(V,\mathcal{I}_{W_{i-1}}(\underline{l}_i))$ such that the ideals $\langle \underline{F}_i\rangle$ and $\langle \underline{F}_{2j+1-i}\rangle$ of $\mathcal{O}_{V,x}$ coincide. 
This gives rise to a link $W_{i-1}\sim_{\underline{l}_i} W_i$ with the property that $W_i$ and $W_{2j-i}$ coincide in a neighbourhood of $x$, by the symmetry of the link construction. The subschemes $W_{2j}$ and $W_0=W$ then coincide in a neighbourhood of $x$, hence $W_{2j}$ is a local complete intersection at $x$, defined by a regular sequence $g_1,\dots,g_r\in\mathcal{I}_{W_{2j},x}$. A final application of Lemma \ref{choosesequence} provides us with an $r$-tuple $\underline{l}_{2j+1}\gg0$, and with a link $W_{2j}\sim_{\underline{l}_{2j+1}}W_{2j+1}$ associated with a regular sequence $\underline{F}_{2j+1}$ such that $\langle \underline{F}_{2j+1}\rangle=\langle g_1,\dots,g_r\rangle\subset \mathcal{O}_{V,x}$. It follows that $x\notin W_{2j+1}$. \end{proof} \subsection{Linkage in families} \label{family} Let $B$ be a smooth variety over $k$. Let $\mathfrak{W}\subset V\times B$ be a closed subscheme of pure codimension $r$ with ideal sheaf $\mathcal{I}_{\mathfrak{W}}\subset\mathcal{O}_{V\times B}$, such that the second projection $f:\mathfrak{W}\to B$ is flat with Cohen--Macaulay fibers. Let $\underline{l}$ be an $r$\nobreakdash-tuple such that, letting $p:V\times B\to B$ denote the second projection, the adjunction morphism $p^*p_*(\mathcal{I}_{\mathfrak{W}}(\underline{l}))\to\mathcal{I}_{\mathfrak{W}}(\underline{l})$ is surjective and $R^ip_*(\mathcal{I}_{\mathfrak{W}}(\underline{l}))=0$ for $i>0$. Note that, under these conditions, the push-forward sheaf $E:=p_*(\mathcal{I}_{\mathfrak{W}}(\underline{l}))$ is a vector bundle such that the natural morphism $E|_b\to H^0(V,\mathcal{I}_{\mathfrak{W}_b}(\underline{l}))$ is an isomorphism for all $b\in B$ \cite[III,~Theorem~12.11]{Hartshorne}. View $E\to B$ as a geometric vector bundle over $B$. A point of $E$ over $b\in B$ corresponds to a section $\underline{F}\in H^0(V,\mathcal{I}_{\mathfrak{W}_b}(\underline{l}))$.
Let $B'\subset E$ be the open subset of those points such that $\underline{F}$ forms a regular sequence, and hence defines a complete intersection in $V$. Let $\mathfrak{Z}\subset V\times B'$ be the universal family of these complete intersections, and let $\mathfrak{W}_{B'}\subset V\times B'$ be the base change of $\mathfrak{W}$, with ideal sheaves $\mathcal{I}_{\mathfrak{Z}},\mathcal{I}_{\mathfrak{W}_{B'}}\subset\mathcal{O}_{V\times B'}$. We consider the subscheme $\mathfrak{W}'\subset V\times B'$ with ideal sheaf $\mathcal{I}_{\mathfrak{W}'}:=(\mathcal{I}_{\mathfrak{Z}}:\mathcal{I}_{\mathfrak{W}_{B'}})$ and we let $f':\mathfrak{W}'\to B'$ denote the second projection. By Proposition \ref{linkrel}, this extends the construction of \S\ref{link} to the relative setting. \begin{prop} \label{linkrel} The morphism $f':\mathfrak{W}'\to B'$ is flat with Cohen--Macaulay fibers. For $b\in B'$, one has $\mathcal{I}_{\mathfrak{W}'_b}=(\mathcal{I}_{\mathfrak{Z}_b}:\mathcal{I}_{\mathfrak{W}_b})$. \end{prop} \begin{proof} The scheme $\mathfrak{W}$ is Cohen--Macaulay by \cite[Corollaire 6.3.5 (ii)]{EGA42}, hence so is $\mathfrak{W}'$ by \cite[Proposition 1.3]{PS}. Since $f'$ is equidimensional with regular base and Cohen--Macaulay total space, it is flat by \cite[Proposition 6.1.5]{EGA42}. Choose a regular system of parameters $x_1,\dots,x_N$ of the regular local ring $\mathcal{O}_{B',b}$. To show the equality of ideals $\mathcal{I}_{\mathfrak{W}'_b}=(\mathcal{I}_{\mathfrak{Z}_b}:\mathcal{I}_{\mathfrak{W}_b})$ at a point $v\in V_b$, one may apply \cite[Lemma 2.12]{HUDCG} successively $N$ times in the local ring $\mathcal{O}_{V_b,v}$ (this is essentially what is done in \cite[Proposition 2.13]{HUDCG}). That the fibers $\mathfrak{W}'_b$ of $f'$ are Cohen--Macaulay now follows from \cite[Proposition~1.3]{PS}.
\end{proof} \subsection{Moduli of links} \label{generic} We now iterate the construction of \S\ref{family}, thus adapting to our global setting a local construction due to Huneke and Ulrich \cite{HUstructure, HUalgebraic}. Recall that $W\subset V$ is a Cohen--Macaulay closed subscheme of pure codimension~$r$. We set $L_0(W):=\mathrm{Spec}(k)$, $\mathfrak{W}_0:=W$ and $f_0:\mathfrak{W}_0\to L_0(W)$ to be the structural morphism. We inductively construct $f_i:\mathfrak{W}_i\to L_i(W)$ for $i\geq 1$, by choosing an $r$-tuple $\underline{l}_i\gg 0$, by applying the construction of \S\ref{family} to $f_{i-1}:\mathfrak{W}_{i-1}\to L_{i-1}(W)$, and by setting $(f_i:\mathfrak{W}_i\to L_i(W))=(f'_{i-1}:\mathfrak{W}'_{i-1}\to L_{i-1}(W)')$. The varieties $L_i(W)$ are irreducible, smooth over $k$ and $k$-rational, and the morphisms $f_i$ are flat with Cohen--Macaulay fibers, as an induction based on Proposition \ref{linkrel} shows. We call $f_j:\mathfrak{W}_j\to L_j(W)$ the $j$-th \textit{moduli of links} of $W$ (with respect to the degrees $\underline{l}_1,\dots, \underline{l}_j$). Its points parametrize sequences $(\underline{F}_i)_{1\leq i\leq j}$ of regular sequences that give rise to chains of linked subvarieties $W=W_0\sim_{\underline{l}_1} W_1\sim_{\underline{l}_2}\dots\sim_{\underline{l}_j}W_j$ of $V$. \begin{rem} The construction of the $j$-th moduli of links $f_j:\mathfrak{W}_j\to L_j(W)$ goes through with no modifications if $k$ is finite, but beware that $L_j(W)(k)$ might be empty in this case. \end{rem} \subsection{Images by a morphism} \label{morphism} In \S\S\ref{morphism}-\ref{realpar}, we fix a smooth morphism ${\pi:V\to X}$ of smooth projective varieties over $k$ and we let $d$ and $n$ be the dimensions of~$W$ and~$X$. In Propositions \ref{Hironaka}, \ref{propBertini}, \ref{propinj} and \ref{Rlink}, we study the images by~$\pi$ of subvarieties of~$V$ linked to $W$, under the assumption that $n>2d$.
Proposition~\ref{Hironaka} is due to Hironaka \cite{Hironakasmoothing}. Proposition~\ref{propBertini} is a simple Bertini theorem. Proposition~\ref{propinj}, which is the main result of \S\ref{morphism}, is more delicate since one must deal with singularities of varieties linked to~$W$. Proposition \ref{Rlink}, which is the main result of \S\ref{realpar}, is specific to the case where $k$ is a real closed field. \begin{prop}[Hironaka] \label{Hironaka} Assume that $n>2d$, that $d\leq 3$, and that $W$ is smooth. Then, for $j\geq 4$ and $r$-tuples of integers $\underline{l}_j\gg \dots \gg \underline{l}_{1}\gg0$, there exists a chain of linked smooth subvarieties $W=W_0\sim_{\underline{l}_1} W_1\sim_{\underline{l}_2}\dots\sim_{\underline{l}_j}W_j$ of $V$ such that $\pi|_{W_j}:W_j\to X$ is a closed embedding. \end{prop} \begin{proof} Choose general links $W=W_0\sim_{\underline{l}_1} W_1\sim_{\underline{l}_2}\dots\sim_{\underline{l}_j}W_j$. Since $d\leq 3$ and $W$ is smooth, the $W_i$ are smooth by \cite[Corollary 3.9.1]{Hironakasmoothing}. That $\pi|_{W_j}:W_j\to X$ is a closed embedding may be checked over an algebraic closure of $k$, where it follows from the proof of \cite[Lemma 5.1.1]{Hironakasmoothing}. More precisely, define $A(W_i)\subset W_i$ to be the closed subset of those $x\in W_i$ such that ${\pi|_{W_i}:W_i\to X}$ is not a closed embedding above a neighbourhood of $\pi(x)$. One has ${\dim(A(W_0))\leq\dim(W)\leq 3}$. Moreover, $\dim(A(W_{i+1}))\leq\dim(A(W_i))$, and the inequality is strict if $A(W_i)\neq\varnothing$, by the proof of \cite[Lemma 5.1.1]{Hironakasmoothing}. It follows that $A(W_i)=\varnothing$ for $i\geq 4$, hence that $\pi|_{W_j}:W_j\to X$ is a closed embedding. \end{proof} \begin{prop} \label{propBertini} Assume that $n>2d$, and let $\underline{l}=(l_1,\dots,l_r)$ be an $r$-tuple of integers such that the linear systems $H^0(V,\mathcal{I}_W(l_i))$ embed $V\setminus W$ in projective spaces.
Then, for $\underline{F}\in H^0(V,\mathcal{I}_W(\underline{l}))$ general, the link $W\sim_{\underline{l}}W'$ associated with $\underline{F}$ satisfies: \begin{enumerate}[(i)] \item The variety $S:=W'\setminus(W\cap W')$ is smooth. \item The morphism $\pi|_{S}:S\to X$ is an embedding. \item The subsets $\pi(S)$ and $\pi(W)$ of $X$ are disjoint. \end{enumerate} \end{prop} \begin{proof} Apply Lemma \ref{Bertini} below with $Y=V\setminus W$, $f=\pi|_{V\setminus W}$, and $F=\pi(W)$. \end{proof} \begin{lem} \label{Bertini} Let $f:Y\to X$ be a smooth morphism of smooth varieties over $k$, let $F\subset X$ be a closed subset, and let $V_1,\dots,V_r$ be linear systems on $Y$ inducing embeddings of $Y$ in projective spaces. If one has $\dim(X)>2(\dim(Y)-r)$ and $\dim(X)>\dim(F)+\dim(Y)-r$, then, for general $\sigma_i\in V_i$, the variety ${S:=\{\sigma_i=0\}}$ is smooth, the morphism $f|_S:S\to X$ is an embedding, and ${f(S)\cap F=\varnothing}$. \end{lem} \begin{proof} The smoothness of $S$ follows from the Bertini theorem. For general $\sigma_i$, the complete intersection $S\cap f^{-1}(F)$ in $f^{-1}(F)$ is empty, hence ${f(S)\cap F=\varnothing}$. Let $H$ be the Hilbert scheme parametrizing zero-dimensional subschemes $Z\subset Y$ of length $2$ in the fibers of $f$. Let $B\subset H\times V_1\times\dots\times V_r$ be the closed subset parametrizing tuples $([Z],\sigma_1,\dots,\sigma_r)$ such that the $\sigma_i$ vanish on $Z$. Computing $$\dim(B)=\dim(H)+\sum_i\dim(V_i)-2r=2\dim(Y)-\dim(X)+\sum_i\dim(V_i)-2r$$ shows that $\dim(B)<\sum_i\dim(V_i)$. To ensure that $f|_S$ is an embedding, it suffices to choose $(\sigma_1,\dots,\sigma_r)$ outside of the image of the projection $B\to V_1\times\dots\times V_r$. \end{proof} \begin{prop} \label{propinj} Assume that $n>2d$ and that $W$ is a local complete intersection.
For $j\gg 0$ and for $r$-tuples of integers $\underline{l}_j\gg \dots \gg \underline{l}_{1}\gg0$, there exists a chain of linked subvarieties $W=W_0\sim_{\underline{l}_1} W_1\sim_{\underline{l}_2}\dots\sim_{\underline{l}_j}W_j$ of $V$ with the property that $\pi|_{W_j}:W_j\to X$ is geometrically injective. \end{prop} \begin{proof} Let $W=W_0\sim_{\underline{l}_1} W_1\sim_{\underline{l}_2}\dots\sim_{\underline{l}_j}W_j$ be a general chain of links. Define $C(W_i)\subset W_i$ to be the constructible subset of those $x\in W_i$ such that $(\pi|_{W_i})^{-1}(\pi(x))$ has more than one geometric point. Of course, one has $\dim(C(W_0))\leq\dim(W)=d$. Proposition \ref{propBertini} implies that $C(W_{i+1})\subset C(W_i)$ for all ${i\geq 0}$. We also claim that $\dim(C(W_{2i+1}))<\dim(C(W_i))$ if $2i+1\leq j$ and ${C(W_i)\neq\varnothing}$. These facts imply that $C(W_{j})=\varnothing$ if $j\geq 2^{d+1}-1$, which concludes. It remains to prove the claim. Assume that $2i+1\leq j$ and that ${C(W_i)\neq\varnothing}$. Choose finitely many points $x_1,\dots,x_N\in C(W_i)$, including at least one in each irreducible component of the Zariski closure of $C(W_i)$. By Lemma \ref{disappear}, there exists a chain of links $W_i\sim_{\underline{l}_i}W_{i+1}^{(s)}\sim_{\underline{l}_{i+1}}\!\dots\sim_{\underline{l}_{2i+1}}W^{(s)}_{2i+1}$ such that ${x_s\notin W^{(s)}_{2i+1}}$, for all ${1\leq s\leq N}$. Since $W_i\sim_{\underline{l}_i}W_{i+1}\sim_{\underline{l}_{i+1}}\!\dots\sim_{\underline{l}_{2i+1}}W_{2i+1}$ corresponds to a general point of the $(i+1)$-th moduli of links of $W_i$ in the sense of \S\ref{generic}, we deduce that $x_s\notin W_{2i+1}$, hence that $x_s\notin C(W_{2i+1})$ for $1\leq s\leq N$. The chain of inclusions $C(W_{2i+1})\subset C(W_{2i})\subset\dots\subset C(W_{i})$ implies that $\dim(C(W_{2i+1}))<\dim(C(W_i))$, which proves the claim. 
\end{proof} \subsection{Real loci} \label{realpar} In \S\ref{realpar}, we keep the notation of \S\ref{morphism} and assume moreover that $k=R$ is a real closed field, for instance the field $\mathbb R$ of real numbers. \begin{lem} \label{doublerien} Fix $r$\nobreakdash-tuples of integers $\underline{l}_1=(l_{1,1},\dots, l_{1,r})$ and $\underline{l}_2=(l_{2,1},\dots, l_{2,r})$ with $l_{2,i}-l_{1,i}$ nonnegative and even for $1\leq i\leq r$. Let $W\sim_{\underline{l}_1}W_1$ be a link. Then there exists a link $W_1\sim_{\underline{l}_2}W_2$ such that $W_2=W$ in a neighbourhood of~$V(R)$. \end{lem} \begin{proof} Let $(u_1,\dots,u_N)$ be a basis of $H^0(V,\mathcal{O}_V(1))$. The section $u:=\sum_{m=1}^N u_m^2$ does not vanish on $V(R)$. Let $v_1,\dots,v_r\in H^0(V,\mathcal{O}_V(2))$ be general small deformations of $u$. They are general elements of $H^0(V,\mathcal{O}_V(2))$ that do not vanish on~$V(R)$. Let $\underline{F}=(F_1,\dots,F_r)\in H^0(V,\mathcal{I}_W(\underline{l}_1))$ be a regular sequence defining ${W\sim_{\underline{l}_1}W_1}$. There exist integers $a_i\geq 0$ such that $\underline{G}:=(v_1^{a_1}F_1,\dots,v_r^{a_r}F_r)\in H^0(V,\mathcal{I}_{W_1}(\underline{l}_2))$. Since the $v_i$ are general and since $\underline{F}$ is a regular sequence, we see that $\underline{G}$ is also a regular sequence. Let $W_1\sim_{\underline{l}_2}W_2$ be the link it defines. The ideal sheaves $\langle\underline{F}\rangle$ and $\langle\underline{G}\rangle$ of $\mathcal{O}_V$ coincide in a neighbourhood of $V(R)$ since the $v_i$ do not vanish on $V(R)$. It thus follows from the symmetry of linkage (see~\S\ref{link}) that $W_2=W$ in a neighbourhood of $V(R)$. \end{proof} \begin{lem} \label{Rlinklem} Set $W_0:= W$. Suppose that $W_0$ is smooth along $W_0(R)$ and that $n>2d$. Fix $r$\nobreakdash-tuples of even integers $\underline{l}_2\gg\underline{l}_1\gg 0$. If $R=\mathbb R$, fix a neighbourhood $\mathcal{U}\subset \mathcal{C}^{\infty}(W_0(\mathbb R),V(\mathbb R))$ of the inclusion.
Then there exist links $W_0\sim_{\underline{l}_1}W_1\sim_{\underline{l}_2}W_2$ with the following properties. \begin{enumerate}[(i)] \item The variety $W_2$ is smooth along $W_2(R)$. \item Let $D(W_i)\subset W_i$ be the subset of those $x\in W_i$ such that $\pi|_{W_i}$ is not immersive at $x$. Set $d_i:=\sup_{x\in D(W_i)(R)} \dim_x D(W_i)$. If $D(W_0)(R)\neq\varnothing$, then $d_2<d_0$. \item If $R=\mathbb R$, there exists a diffeomorphism $\phi: W_0(\mathbb R)\myxrightarrow{\,\sim\,} W_2(\mathbb R)$ such that, letting ${\iota: W_2(\mathbb R)\to V(\mathbb R)}$ denote the inclusion, one has $\iota\circ\phi\in \mathcal{U}$. \end{enumerate} \end{lem} \begin{proof} Choose finitely many points $x_1,\dots,x_N\in W_0(R)$, including at least one in each irreducible component of $D(W_0)$ that has real points. By Lemma \ref{disappear}, a link $W_0\sim_{\underline{l}_1} W_1$ corresponding to a general point of the first moduli of links of $W_0$ (in the sense of \S\ref{generic}) has the property that $x_s\notin W_1$ for $1\leq s \leq N$. Lemma \ref{doublerien} shows the existence of a link $W_1\sim_{\underline{l}_2}\widetilde{W}$ such that $\widetilde{W}=W_0$ in a neighbourhood of~$V(R)$. We deduce that $\widetilde{W}$ is smooth along $\widetilde{W}(R)=W_0(R)$. Let $\widetilde{f}:\widetilde{\mathfrak{W}}\to L_1(W_1)$ be the first moduli of links of $W_1$ with respect to the degree~$\underline{l}_2$ (as in \S\ref{generic}) and let $b\in L_1(W_1)(R)$ be the point associated with ${W_1\sim_{\underline{l}_2}\widetilde{W}}$. Proposition~\ref{linkrel} shows that $\widetilde{\mathfrak{W}}_{b}=\widetilde{W}$ and that $\widetilde{f}$ is flat, hence smooth in a neighbourhood of $\widetilde{\mathfrak{W}}_b(R)$. As $\widetilde{f}$ is proper, the map $\widetilde{f}(R)$ is closed by \cite[Theorem~9.6]{DK2}. We deduce that there exists a Euclidean neighbourhood $\Omega$ of $b$ in $L_1(W_1)(R)$ such that the morphism $\widetilde{f}$ is smooth along $\widetilde{f}(R)^{-1}(\Omega)$.
Choose such an $\Omega$ small enough. Since $L_1(W_1)$ is smooth and irreducible (see \S\ref{generic}), the subset $\Omega\subset L_1(W_1)$ is Zariski-dense (apply \cite[Proposition~2.8.14]{BCR}). Consequently, one may choose $a\in\Omega$ general. Let $W_1\sim_{\underline{l}_2}W_2:=\widetilde{\mathfrak{W}}_a$ be the associated link. Assertion (i) holds by our choice of $\Omega$. Let $D\subset \widetilde{\mathfrak{W}}$ be the closed subset of those $x\in\widetilde{\mathfrak{W}}$ such that the morphism $\pi|_{\tilde{f}^{-1}(\tilde{f}(x))}:\widetilde{\mathfrak{W}}_{\tilde{f}(x)}\to X$ is not immersive at $x$. Set $E:=D\cap (W_1\times L_1(W_1))\subset \widetilde{\mathfrak{W}}$. By Proposition \ref{propBertini}~(ii), there is a proper closed subset $F\subset L_1(W_1)$ such that $D\subset \widetilde{f}^{-1}(F)\cup (W_1\times L_1(W_1))$ as subsets of $V\times L_1(W_1)$. Since $a$ has been chosen general, it lies outside of~$F$. We deduce the inclusion $D(W_2)\subset E_a$ of subsets of $\widetilde{\mathfrak{W}}_a$. The function $x\mapsto \dim_x(E_{\tilde{f}(x)})$ is upper semicontinuous for the Zariski topology on~$E$ by \cite[Th\'eor\`eme 13.1.3]{EGA43}, hence upper semicontinuous for the Euclidean topology on $E(R)$. Since $\widetilde{f}|_E:E\to L_1(W_1)$ is proper, the map $\widetilde{f}|_E(R)$ is closed by \cite[Theorem~9.6]{DK2}. If $\Omega$ has been chosen small enough, we deduce at once the inequality: \begin{equation} \label{ineq} \sup_{x\in E_b(R)}\dim_x E_b\geq \sup_{x\in E_a(R)}\dim_xE_a. \end{equation} As $\widetilde{\mathfrak{W}}_b=W_0$ in a neighbourhood of $V(R)$, one has $E_b\subset D(W_0)$ in a neighbourhood of $V(R)$. Since none of the $x_s$ belong to $W_1$, we see that no irreducible component of $D(W_0)$ that has real points is included in $E_b$. If $D(W_0)(R)\neq\varnothing$, it follows that the left-hand side of (\ref{ineq}) is $<d_0$. On the other hand, the right-hand side of (\ref{ineq}) is $\geq d_2$ since $D(W_2)\subset E_a$.
The inequality $d_2<d_0$ follows, proving (ii). If $R=\mathbb R$, one may assume $\Omega$ to be connected. Ehresmann's theorem applied to the proper submersion $\widetilde{f}(\mathbb R)|_{\widetilde{f}(\mathbb R)^{-1}(\Omega)}:\widetilde{f}(\mathbb R)^{-1}(\Omega)\to\Omega$ yields a diffeomorphism $\psi:\Omega\times \widetilde{\mathfrak{W}}_{b}(\mathbb R)\myxrightarrow{\,\sim\,} \widetilde{f}(\mathbb R)^{-1}(\Omega)$ compatible with the projections to~$\Omega$. If $\Omega$ has been chosen small enough, the composition $$W_0(\mathbb R)=\widetilde{\mathfrak{W}}_{b}(\mathbb R)\xrightarrow{\psi_a}\widetilde{\mathfrak{W}}_{a}(\mathbb R)=W_2(\mathbb R)\xrightarrow{\iota}V(\mathbb R)$$ belongs to~$\mathcal{U}$ for all $a\in\Omega$. Assertion~(iii) is proven. \end{proof} \begin{prop} \label{Rlink} Suppose that $W$ is smooth along $W(R)$ and that $n>2d$. Let $j\gg 0$ be even and let ${\underline{l}_j\gg \dots \gg\underline{l}_1\gg 0}$ be $r$-tuples of even integers. Define ${f_j:\mathfrak{W}_j\to L_j(W)}$ as in~\S\ref{generic}. Then there exists a nonempty subset ${\Omega\subset L_j(W)(R)}$ which is open for the Euclidean topology such that the following holds for all~$b\in\Omega$. \begin{enumerate}[(i)] \item The variety $\mathfrak{W}_{j,b}$ is smooth along $\mathfrak{W}_{j,b}(R)$. \item The morphism $\pi|_{\mathfrak{W}_{j,b}}$ is immersive along $\mathfrak{W}_{j,b}(R)$. \end{enumerate} If moreover $R=\mathbb R$ and $\mathcal{U}\subset \mathcal{C}^{\infty}(W(\mathbb R),V(\mathbb R))$ is a neighbourhood of the inclusion, one may ensure that the following holds. \begin{enumerate}[(i)] \setcounter{enumi}{2} \item There exists a diffeomorphism $\phi_b: W(\mathbb R)\myxrightarrow{\,\sim\,}\mathfrak{W}_{j,b}(\mathbb R)$ such that, letting ${\iota_b:\mathfrak{W}_{j,b}(\mathbb R)\to V(\mathbb R)}$ denote the inclusion, one has $\iota_b\circ\phi_b\in \mathcal{U}$. \end{enumerate} \end{prop} \begin{proof} Choose $j\geq 2d+2$ even. 
Applying Lemma \ref{Rlinklem} $j/2$ times shows the existence of a chain of linked subvarieties $W=W_0\sim_{\underline{l}_1} W_1\sim_{\underline{l}_2}\dots\sim_{\underline{l}_j}W_j$ of~$V$ such that $W_j$ is smooth along $W_j(R)$ and such that $\pi|_{W_j}$ is immersive along $W_j(R)$. Moreover, if $R=\mathbb R$, Lemma \ref{Rlinklem} ensures the existence of a diffeomorphism $\phi:W(\mathbb R)\myxrightarrow{\,\sim\,} W_j(\mathbb R)$ such that, letting $\iota:W_j(\mathbb R)\to V(\mathbb R)$ denote the inclusion, one has $\iota\circ\phi\in\mathcal{U}$. Let $a\in L_j(W)(R)$ be the point associated with $W=W_0\sim_{\underline{l}_1} W_1\sim_{\underline{l}_2}\dots\sim_{\underline{l}_j}W_j$. Proposition~\ref{linkrel} shows that $\mathfrak{W}_{j,a}=W_j$ and that $f_j$ is flat, hence smooth in a neighbourhood of $\mathfrak{W}_{j,a}(R)$. As $f_j$ is proper, the map $f_j(R)$ is closed by \cite[Theorem~9.6]{DK2}. We deduce the existence of a neighbourhood $\Omega$ of $a$ in $L_j(W)(R)$ such that $f_j$ is smooth along $f_j(R)^{-1}(\Omega)$. Assertion (i) follows. So does assertion~(ii) after maybe shrinking $\Omega$. If $R=\mathbb R$, that assertion (iii) holds after shrinking $\Omega$ further follows from Ehresmann's theorem applied to the proper submersion $f_j(\mathbb R)|_{f_j(\mathbb R)^{-1}(\Omega)}:f_j(\mathbb R)^{-1}(\Omega)\to\Omega$. \end{proof} \section{Smoothing by linkage} \label{sectionapprox} Let us apply linkage theory as developed in \S\ref{linkage} to prove Theorems \ref{Chow1}, \ref{Chow2} and~\ref{thC1}. \subsection{Main statement} Here is the technical result from which the main theorems of \S\ref{sectionapprox} will follow. \begin{prop} \label{mainapprox} Let $g:W\to X$ be a morphism of smooth projective varieties over a real closed field $R$. Let $d$ and $n$ be the dimensions of $W$ and $X$ and assume that $n>2d$. If $R=\mathbb R$, fix a neighbourhood $\mathcal{V}\subset\mathcal{C}^{\infty}(W(\mathbb R),X(\mathbb R))$ of $g(\mathbb R)$.
Then there exists a closed subvariety $i:Y\hookrightarrow X$ of dimension $d$ with the following properties. \begin{enumerate}[(i)] \item The variety $Y$ is smooth along $Y(R)$. \item If $d\leq 3$, then $Y$ is smooth. \item The class $g_*[W]-[Y]\in\mathrm{CH}_d(X)$ is a linear combination of classes of smooth closed subvarieties of $X$ with empty real loci. \item If $R=\mathbb R$, there is a diffeomorphism $\psi: W(\mathbb R)\myxrightarrow{\,\sim\,} Y(\mathbb R)$ such that $i(\mathbb R)\circ\psi\in\mathcal{V}$. \end{enumerate} \end{prop} \begin{proof} Define $V:=X\times W$, let $\pi:V\to X$ be the first projection and let $\mathcal{O}_V(1)$ be a very ample line bundle on $V$. Let $r$ be the codimension of the closed embedding $(g,\mathrm{Id}):W\hookrightarrow V$. If $R=\mathbb R$, define $\mathcal{U}:=\{\xi\in\mathcal{C}^{\infty}(W(\mathbb R),V(\mathbb R))\,|\,\pi(\mathbb R)\circ\xi\in\mathcal{V}\}$, which is a neighbourhood of the inclusion $W(\mathbb R)\hookrightarrow V(\mathbb R)$ by \cite[Theorem~11.4]{Michor}. Choose an even integer $j\gg 0$ and $r$-tuples of even integers ${\underline{l}_j\gg \dots \gg \underline{l}_{1}\gg0}$. Let ${f_j:\mathfrak{W}_j\to L_j(W)}$ be the $j$-th moduli of links of $W$ as in~\S\ref{generic}, and choose $\Omega\subset L_j(W)(R)$ as in Proposition \ref{Rlink}. Since $L_j(W)$ is smooth and irreducible (see \S\ref{generic}), the subset $\Omega\subset L_j(W)$ is Zariski-dense (apply \cite[Proposition~2.8.14]{BCR}). Consequently, one may choose a general point $b\in\Omega$. Define ${Y:=\pi(\mathfrak{W}_{j,b})}$ with inclusion $i:Y\hookrightarrow X$. By Proposition \ref{propinj}, the morphism $\pi|_{\mathfrak{W}_{j,b}}:\mathfrak{W}_{j,b}\to Y$ is geometrically injective. By Proposition \ref{Rlink} (i)-(ii), in a neighbourhood of $\mathfrak{W}_{j,b}(R)$, the variety $\mathfrak{W}_{j,b}$ is smooth and the morphism $\pi|_{\mathfrak{W}_{j,b}}$ is immersive.
These facts show that $\theta:=\pi(R)|_{\mathfrak{W}_{j,b}(R)}:\mathfrak{W}_{j,b}(R)\to Y(R)$ is bijective and that $Y$ is smooth along the image of $\theta$. This proves (i). The argument also shows that $\theta$ is a diffeomorphism. Consequently, if $R=\mathbb R$, one may take $\psi:=\theta\circ\phi_b$, where $\phi_b$ is as in Proposition~\ref{Rlink}~(iii), which proves (iv). If $d\leq 3$, Proposition~\ref{Hironaka} shows that $\mathfrak{W}_{j,b}$ is smooth and that $\pi|_{\mathfrak{W}_{j,b}}$ is a closed immersion, proving~(ii). It remains to prove (iii). The chain of links relating $W$ and $\mathfrak{W}_{j,b}$ shows that the difference $[W]-[\mathfrak{W}_{j,b}]\in\mathrm{CH}_d(V)$ is a multiple of $(2\lambda)^r\in\mathrm{CH}^r(V)=\mathrm{CH}_d(V)$, where $\lambda:=c_1(\mathcal{O}_V(1))\in\mathrm{CH}^1(V)$. Consequently, $\pi_*[W]-\pi_*[\mathfrak{W}_{j,b}]=g_*[W]-[Y]$ is a multiple of $\pi_*((2\lambda)^r)\in\mathrm{CH}_d(X)$. Let $(u_1,\dots,u_N)$ be a basis of $H^0(V,\mathcal{O}_V(1))$. Proposition \ref{propBertini} applied with $W=\varnothing$, with $l_i=2$, and with the $F_i$ chosen to be general small deformations of $\sum_{m=1}^Nu_m^2$, shows that $\pi_*((2\lambda)^r)$ is the class of a smooth closed subvariety of $X$ with empty real locus, which concludes. \end{proof} \subsection{Low-dimensional cycles} We first give applications to Chow groups of smooth projective varieties over real closed fields. \begin{thm} \label{Chowth} Let $X$ be a smooth projective variety of dimension $n$ over a real closed field $R$. For $n>2d$, the Chow group $\mathrm{CH}_d(X)$ is generated by classes of closed integral subvarieties of~$X$ which are smooth along their real loci. \end{thm} \begin{proof} Let $Z\subset X$ be a closed integral subvariety of $X$, let $W\to Z$ be a resolution of singularities, and let $g:W\to X$ be the induced morphism. 
Proposition~\ref{mainapprox} furnishes a closed subvariety $Y\subset X$ which is smooth along $Y(R)$ and such that $g_*[W]-[Y]\in\mathrm{CH}_d(X)$ is a linear combination of classes of smooth closed subvarieties of $X$. This proves the theorem. \end{proof} \begin{thm} \label{Chowth2} Let $X$ be a smooth projective variety of dimension $n$ over $\mathbb R$. For $n>2d$, the group $\mathrm{Ker} \big[\mathrm{cl}_{\mathbb R}:\mathrm{CH}_d(X)\to H_d(X(\mathbb R),\mathbb Z/2)\big]$ is generated by classes of closed integral subvarieties of~$X$ with empty real loci. \end{thm} \begin{proof} Ischebeck and Sch\"ulting \cite[Main Theorem 4.3]{IS} have shown that the group $\mathrm{Ker} \big[\mathrm{cl}_{\mathbb R}:\mathrm{CH}_d(X)\to H_d(X(\mathbb R),\mathbb Z/2)\big]$ is generated by classes of closed integral subvarieties $Z\subset X$ with the property that $Z(\mathbb R)$ is not Zariski-dense in $Z$. Let $W\to Z$ be a resolution of singularities of such a subvariety, and let $g:W\to X$ be the induced morphism. Since the real locus of a smooth irreducible variety over $\mathbb R$ is empty or Zariski-dense, one has $W(\mathbb R)=\varnothing$. By Proposition~\ref{mainapprox}, there is a closed subvariety $Y\subset X$ with $Y(\mathbb R)\simeq W(\mathbb R)=\varnothing$ and such that $g_*[W]-[Y]\in\mathrm{CH}_d(X)$ is a linear combination of classes of closed subvarieties of $X$ with empty real loci. This concludes. \end{proof} \begin{rem} \label{remrealclosed} Theorem \ref{Chowth2} cannot be extended to the non-archimedean real closed field $R:=\cup_n\mathbb R((t^{1/n}))$ for any $d>0$ by Proposition \ref{realclosed} below, which is a consequence of the failure of Br\"ocker's EPT theorem over that field \cite[\S 9.4.1]{BW2}. The proof fails in this setting because of its use of \cite[Main Theorem~4.3]{IS}.
\end{rem} \begin{prop} \label{realclosed} For all $c,d\geq 1$, there exist a smooth projective variety $X_{c,d}$ of dimension $c+d$ over~$R:=\cup_n\mathbb R((t^{1/n}))$ and a class $\beta_{c,d}\in\mathrm{CH}_d(X_{c,d})$ such that: \begin{enumerate}[(i)] \item One has $\mathrm{cl}_R(\beta_{c,d})=0\in H_d(X_{c,d}(R),\mathbb Z/2)$. \item For all identities $\beta_{c,d}=\sum_{i\in I} n_i[Z_i]\in\mathrm{CH}_d(X_{c,d})$ with $n_i\in\mathbb Z$ and $Z_i\subset X_{c,d}$ integral, there exists $i\in I$ such that $n_i$ is odd and $Z_i(R)$ is Zariski-dense in~$Z_i$. \end{enumerate} \end{prop} \begin{proof} Such $X_{1,1}$ and $\beta_{1,1}$ have been constructed in \cite[Propositions 9.17 and~9.19~(ii)]{BW2}. Let $x\in \mathbb P^{c-1}(R)$ and $y\in \mathbb P^{d-1}(R)$ be general points. One may then define $X_{c,d}:=X_{1,1}\times \mathbb P^{c-1}_R\times \mathbb P^{d-1}_R$ and $\beta_{c,d}:=pr_1^*\beta_{1,1}\cdot pr_2^*[x]$. The required property of $(X_{c,d},\beta_{c,d})$ follows from that of $(X_{1,1},\beta_{1,1})$, and from the equation $\beta_{1,1}=(pr_1)_*(\beta_{c,d}\cdot pr_3^*[y])$. \end{proof} \subsection{Approximation of submanifolds} \label{parapp} We now give an application to the existence of algebraic approximations for submanifolds of the real locus of a smooth projective variety~$X$ over $\mathbb R$. We refer to \S\ref{cobordism} for the definition of the algebraic bordism group $MO^{\mathrm{alg}}_d(X(\mathbb R))$. \begin{thm} \label{approxth} Let $X$ be a smooth projective variety of dimension $n$ over $\mathbb R$, and let $j:M\hookrightarrow X(\mathbb R)$ be a closed $\mathcal{C}^{\infty}$ submanifold of dimension $d$. If $n>2d$, the following properties are equivalent. \begin{enumerate}[(i)] \item\label{i} One has $[j]\in MO^{\mathrm{alg}}_d(X(\mathbb R))$.
\item \label{ii} For all neighbourhoods $\mathcal{U}\subset\mathcal{C}^{\infty}(M,X(\mathbb R))$ of $j$, there exist a closed $d$-dimen\-sional subvariety $i:Y\hookrightarrow X$ smooth along $Y(\mathbb R)$ and a diffeomorphism ${\phi:M\myxrightarrow{\,\sim\,} Y(\mathbb R)}$ such that $i(\mathbb R)\circ\phi\in\mathcal{U}$. \end{enumerate} If moreover $d\leq 3$, one may choose $Y$ to be smooth in assertion (ii). \end{thm} \begin{proof} Assume that (i) holds and let $\mathcal{U}$ be as in (ii). A relative variant of the Nash--Tognoli theorem due to Akbulut and King (\cite[Proposition 0.2]{AKIHES}, see also \cite[Theorem 2.8.4]{AKbook}) shows the existence of a morphism $g:W\to X$ of smooth projective varieties over $\mathbb R$ and of a diffeomorphism $\chi:M\myxrightarrow{\,\sim\,} W(\mathbb R)$ such that $g(\mathbb R)\circ\chi\in\mathcal{U}$. Assertion (ii) and the last statement of Theorem \ref{approxth} follow by applying Proposition~\ref{mainapprox} to $\mathcal{V}:=\{\xi\circ\chi^{-1},\,\xi\in\mathcal{U}\}\subset\mathcal{C}^{\infty}(W(\mathbb R),X(\mathbb R))$ and by defining $\phi:=\psi\circ\chi$. Suppose conversely that (ii) holds. Applying it to a small enough neighbourhood~$\mathcal{U}\subset\mathcal{C}^{\infty}(M,X(\mathbb R))$ of $j$ shows that $j$ is homotopic, hence cobordant, to a $\mathcal{C}^{\infty}$ map of the form $i(\mathbb R)$, by \cite[Proposition 4.4.4]{Wall}. To get (i), take $W\to Y$ to be a resolution of singularities which is an isomorphism over $Y(\mathbb R)$, let $g:W\to X$ be the induced morphism, and note that $j$ is cobordant to $g(\mathbb R)$. \end{proof} \section{Complex cobordism and Chern numbers} \label{complexcobordism} After a short review of cobordism theory (in \S\ref{cob}) and of its relation with characteristic classes (in \S\ref{swc}), we study the top Segre class in \S\ref{partop}, our goal being Theorem \ref{divtop}.
\subsection{The cobordism rings} \label{cob} Two compact $\mathcal{C}^{\infty}$ manifolds $M_1$ and~$M_2$ of dimension~$n$ are said to be \textit{cobordant} if there exists a compact $\mathcal{C}^{\infty}$ manifold with boundary~$C$ and a diffeomorphism $\partial C\simeq M_1\cup M_2$. Let $MO_n$ be the set of cobordism classes of such manifolds, and define $MO_*:=\bigoplus_{n\geq 0}MO_n$. Let $M$ be a $\mathcal{C}^{\infty}$ manifold. A \textit{stably almost complex structure} on $M$ is a complex structure $J$ on the real vector bundle $T_M\oplus \mathbb R^k$ for some $k\geq 0$, modulo the equivalence relation generated by $(T_M\oplus \mathbb R^k,J)\simeq(T_M\oplus \mathbb R^{k+2}=T_M\oplus \mathbb R^k\oplus\mathbb C,(J,i))$. Two $n$-dimensional stably almost complex compact $\mathcal{C}^{\infty}$ manifolds $M_1$ and~$M_2$ are said to be \textit{complex cobordant} if there exists a stably almost complex compact $\mathcal{C}^{\infty}$ manifold with boundary $C$ and a diffeomorphism $\partial C\simeq M_1\cup M_2$ compatible with the stably almost complex structures. Let $MU_n$ be the set of complex cobordism classes of such manifolds, and define $MU_*:=\bigoplus_{n\geq 0}MU_n$. Disjoint union and cartesian product endow $MO_*$ and $MU_*$ with graded ring structures: they are the \textit{unoriented cobordism ring} and the \textit{complex cobordism ring}. Thom \cite[Th\'eor\`eme IV.12]{Thom} and Milnor \cite{MilnorMU} (see also Quillen \cite[Theorem~6.5]{Quillen}) have computed that $MO_*\simeq\mathbb Z/2[x_d]_{d\neq 2^k-1}$ and $MU_*\simeq\mathbb Z[t_{2d}]_{d\geq 1}$, where $x_d\in MO_d$ and $t_{2d}\in MU_{2d}$. Let $\phi:MU_*\to MO_*$ be the graded ring homomorphism forgetting the stably almost complex structures. Milnor \cite[Theorem 1]{MilnorSW} has shown that the image of~$\phi$ consists exactly of the squares in $MO_*$, hence that there exists a surjective ring homomorphism $\psi:MU_*\to MO_*$ such that $\phi(x)=\psi(x)^2$ for all $x\in MU_*$. 
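\begin{rem} As an elementary illustration of the relation $\phi(x)=\psi(x)^2$, consider the class $[\mathbb{CP}^1]\in MU_2$. Since every compact $\mathcal{C}^{\infty}$ manifold of dimension $1$ bounds, one has $MO_1=0$, hence $\psi([\mathbb{CP}^1])=0$; consistently, $\phi([\mathbb{CP}^1])=[S^2]=0\in MO_2$, as $S^2$ bounds the closed $3$-ball. \end{rem}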
\begin{lem} \label{lemker} The ideal $\ker(\psi)\subset MU_*$ is generated by $2$ and by $(\ker(\psi)_{2^{k+1}-2})_{k\geq 1}$. \end{lem} \begin{proof} We use the isomorphisms $MO_*\simeq\mathbb Z/2[x_d]_{d\neq 2^k-1}$ and $MU_*\simeq\mathbb Z[t_{2d}]_{d\geq 1}$. Since $\psi:MU_*\to MO_*$ is surjective, $\psi(t_{2d})-x_d\in MO_d$ is decomposable for $d\neq 2^k-1$, and $\psi(t_{2d})$ is of course decomposable for $d=2^k-1$. We deduce from the surjectivity of $\psi$ the existence of $t'_{2d}\in MU_{2d}$ such that $t_{2d}-t'_{2d}$ is decomposable (so that $MU_*=\mathbb Z[t'_{2d}]_{d\geq 1}$) and such that $\psi(t'_{2d})=x_d$ if $d\neq 2^k-1$ and $\psi(t'_{2d})=0$ otherwise. It is now clear that $\ker(\psi)$ is generated by $2$ and by the $(t'_{2^{k+1}-2})_{k\geq 1}$. \end{proof} \subsection{Stiefel--Whitney and Chern numbers} \label{swc} Let $M$ be a compact~$\mathcal{C}^{\infty}$ manifold of dimension $n$, and let $w_r(M)\in H^r(M,\mathbb Z/2)$ be the $r$-th Stiefel--Whitney class of its tangent bundle. For a sequence of nonnegative integers $I=(i_1,i_2,\dots)$ with $|I|:=\sum_r ri_r=n$, we define $w_I(M):=\langle \prod_r w_r(M)^{i_r},[M]\rangle\in\mathbb Z/2$, where $[M]$ is the fundamental class of $M$. The $w_I(M)$ are the \textit{Stiefel--Whitney numbers} of $M$. Thom has shown that they only depend on the cobordism class of~$M$, and that they determine this cobordism class \cite[Th\'eor\`emes~IV.3 and~IV.11]{Thom}. Similarly, if $M$ is a stably almost complex compact $\mathcal{C}^{\infty}$ manifold of dimension~$n$, we let $c_r(M)\in H^{2r}(M,\mathbb Z)$ denote the $r$-th Chern class of its stable tangent bundle, and we define the \textit{Chern numbers} $c_I(M):=\langle \prod_r c_r(M)^{i_r},[M]\rangle\in \mathbb Z$ of $M$ for $|I|=n/2$. These numbers only depend on the complex cobordism class of $M$, and determine it (see \cite[Theorem 6.5]{Quillen}). 
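\begin{rem} As an elementary illustration, the total Stiefel--Whitney class of the real projective plane is $(1+a)^3=1+a+a^2$, where $a\in H^1(\mathbb{RP}^2,\mathbb Z/2)$ is the generator, so that the Stiefel--Whitney numbers $\langle w_1^2,[\mathbb{RP}^2]\rangle$ and $\langle w_2,[\mathbb{RP}^2]\rangle$ both equal $1$. In particular $\mathbb{RP}^2$ is not a boundary, and its class generates $MO_2$. On the complex side, the only Chern number of $\mathbb{CP}^1$ is $\langle c_1(T_{\mathbb{CP}^1}),[\mathbb{CP}^1]\rangle=2$, since $T_{\mathbb{CP}^1}\simeq\mathcal{O}_{\mathbb{CP}^1}(2)$. \end{rem}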
\begin{lem} \label{cw} For all $y\in MU_{2d}$ and all $I=(i_1,i_2,\dots)$ with $|I|=d$, one has $$c_I(y)=w_I(\psi(y))\in\mathbb Z/2.$$ \end{lem} \begin{proof} Represent $y$ by a stably almost complex compact $\mathcal{C}^{\infty}$ manifold $M$ and $\psi(y)$ by a compact $\mathcal{C}^{\infty}$ manifold $N$. For $I'=(0,i_1,0,i_2,\dots)$, one has $$c_I(M)=w_{I'}(M)=w_{I'}(N\times N)=w_I(N)\in\mathbb Z/2,$$ where the first equality holds by \cite[Problem 14-B]{MS}, the second since $M$ is cobordant to $N\times N$, and the third is \cite[Lemma~2]{MilnorSW}. \end{proof} The relation $(\sum_r c_r(M))(\sum_r s_r(M))=1$ defines the \textit{Segre classes} or \textit{normal Chern classes} $s_r(M)\in H^{2r}(M,\mathbb Z)$ of $M$. Pairing the top Segre class with $[M]$ yields a morphism ${s_d:MU_{2d}\to\mathbb Z}$ which is a linear combination of Chern numbers. This characteristic number is multiplicative in the following sense. \begin{lem} \label{mult} For $y\in MU_{2d}$ and $y'\in MU_{2d'}$, one has $s_{d+d'}(yy')=s_{d}(y)s_{d'}(y')$. \end{lem} \begin{proof} Represent $y$ and $y'$ by stably almost complex compact $\mathcal{C}^{\infty}$ manifolds and apply the Whitney sum formula \cite[(14.7)]{MS}. \end{proof} \subsection{Divisibility of the top Segre class} \label{partop} In \cite{RT}, Rees and Thomas study divisibility properties of some Chern numbers. In Theorem \ref{RT}, we recall one of their results, which we complement in Theorem \ref{divtop}. Recall from \S\ref{notation} that we let $\alpha(m)$ be the number of $1$'s in the dyadic expansion of $m$. \begin{thm}[Rees--Thomas] \label{RT} For $d\geq 0$ and $e\geq 1$, the function ${s_d:MU_{2d}\to\mathbb Z}$ is divisible by $2^e$ if and only if $\alpha(d+e-1)>2(e-1)$. 
\end{thm} \begin{proof} Rees and Thomas \cite[Theorem 3]{RT} show that $s_d:MU_{2d}\to\mathbb Z$ is divisible by $2^e$ if and only if $\alpha(d+f)>2f$ for all $0\leq f\leq e-1$, and it is easily verified that $\alpha(d+f)>2f$ implies that $\alpha(d+f-1)>2(f-1)$ for all $f\geq 1$. \end{proof} We point out for later use an easy corollary of Lemma~\ref{mult} and Theorem~\ref{RT}. \begin{cor} \label{cor4} For $d\geq 1$, the function $s_d:MU_{2d}\to\mathbb Z$ takes even values, and takes values divisible by $4$ on decomposable elements. \end{cor} Here is the main result of \S\ref{complexcobordism}. \begin{thm} \label{divtop} Let $d\geq 0$ and $e\geq 1$ be such that $\alpha(d+e-1)>2(e-1)$. Then the function $\frac{s_d}{2^e}:MU_{2d}\to\mathbb Z$ coincides modulo $2$ with an integral linear combination of Chern numbers if and only if $\alpha(d+e)\geq 2e$. \end{thm} \begin{proof} Assume first that $\alpha(d+e)<2e$, and let $d+e=2^{a_1}+\dots+2^{a_f}$ be the dyadic expansion of $d+e$, with $f\leq 2e-1$. Define $d_1:=2^{a_1}+\dots+2^{a_{f-1}}-(e-1)$ and $d_2:=2^{a_f}-1$. One has $\alpha(d_1+e-1)=f-1\leq 2e-2$. It thus follows from Theorem~\ref{RT} that there exists $y_{1}\in MU_{2d_1}$ such that $s_{d_1}(y_1)$ is not divisible by~$2^{e}$. Theorem~\ref{RT} also shows the existence of $y_2\in MU_{2d_2}$ such that $s_{d_2}(y_2)$ is not divisible by $4$. We deduce from Lemma \ref{mult} that $s_{d}(y_1y_2)/2^e\in\mathbb Z$ is odd. Since the map ${\psi:MU_*\to MO_*}$ is surjective and $MO_{2^{a_f}-1}$ contains no indecomposable element (see \S\ref{cob}), there exists a decomposable element $z_2\in MU_{2d_2}$ with $\psi(z_2)=\psi(y_2)$. By Corollary \ref{cor4}, one may replace $y_2$ with $y_2-z_2$ and thus assume that $\psi(y_2)=0$. But then $\psi(y_1y_2)=\psi(y_1)\psi(y_2)=0$, which shows, in view of Lemma \ref{cw}, that all the Chern numbers of $y_1y_2$ are even. 
Consequently, $s_{d}(y_1y_2)/2^e \pmod 2$ cannot be a linear combination with $\mathbb Z/2$ coefficients of Chern numbers of $y_1y_2$. We have thus proven the direct implication of the theorem. Assume now that $\alpha(d+e)\geq 2e$. Let $k$ be such that $1\leq 2^k-1\leq d$. We claim that $MU_{2^{k+1}-2}\cdot MU_{2d-2^{k+1}+2}$ is included in the kernel of the morphism ${\chi:MU_{2d}\to\mathbb Z/2}$ obtained by reducing $\frac{s_d}{2^e}:MU_{2d}\to\mathbb Z$ modulo $2$. To see it, choose $y\in MU_{2^{k+1}-2}$ and $z\in MU_{2d-2^{k+1}+2}$. We now compute $$\alpha(d-2^k+1+(e-1))=\alpha(d-2^k+e)\geq \alpha(d+e)-1>2(e-1),$$ and Theorem \ref{RT} shows that $s_{d-2^k+1}(z)$ is divisible by $2^e$. Since $s_{2^k-1}(y)$ is even by Corollary \ref{cor4}, Lemma~\ref{mult} shows that $s_d(yz)$ is divisible by $2^{e+1}$, as wanted. We deduce from Lemma \ref{lemker} that the kernel of ${\psi:MU_{2d}\to MO_d}$ is included in the kernel of $\chi$, hence that $\chi=\mu\circ\psi$ for some morphism~$\mu:MO_d\to\mathbb Z/2$. Since a class in $MO_d$ is determined by its Stiefel--Whitney numbers (see \S\ref{swc}), the morphism~$\mu$ is a linear combination of Stiefel--Whitney numbers. Lemma \ref{cw} now implies that~$\chi$ is the reduction modulo $2$ of an integral linear combination of Chern numbers, which concludes the proof. \end{proof} \begin{ex} \label{Noether} The first interesting case of Theorem \ref{divtop} is $d=2$ and $e=1$. Since $\alpha(d+e)=\alpha(3)=2=2e$, it predicts the existence of an integral linear combination of Chern numbers which coincides modulo $2$ with $\frac{s_2}{2}:MU_4\to \mathbb Z$. We claim that this linear combination may be chosen to be $c_2$. 
Indeed, $MU_4$ is generated by classes of projective complex surfaces (see \cite[II, Corollary~10.8]{Adams}), and for such a surface $S$, our claim follows from Noether's formula $$s_2(S)=(c_1^2-c_2)(S)=12\chi(S,\mathcal{O}_S)-2c_2(S).$$ \end{ex} \section{The double point class} \label{doublepoint} We use the results of \S\ref{complexcobordism} in combination with a double point formula. We give applications to Chow groups in \S\ref{high}, proving Theorems \ref{Chow3}, \ref{Chow5} and \ref{Chow4}. We also construct new examples of submanifolds of real loci of smooth projective varieties over~$\mathbb R$ without algebraic approximations in \S\ref{subnoapp}, proving Theorems \ref{thC2} and \ref{thP}. \subsection{A consequence of Fulton's double point formula} \label{dbpt} Formulas for the rational equivalence class of the double point locus of a morphism go back to Todd \cite[(7.01)]{Todd} and Laksov \cite[Theorem 26]{Laksov} under strong assumptions on the morphism. The following proposition is an application of a refined double point formula of Fulton \cite[Theorem 9.3]{Fulton}, which is valid for an arbitrary morphism. \begin{prop} \label{FL} Let $g:W\to X$ be a morphism of smooth projective varieties over $\mathbb R$. Let $d$ be the dimension of $W$ and assume that $X$ has dimension $2d$. Let $N_{W/X}:=[g^*T_X]-[T_W]$ be the virtual normal bundle of $g$. \begin{enumerate}[(i)] \item If $g$ is an embedding, then \begin{equation} \label{modrien} \deg ((g_*[W])^2)=\deg(c_d(N_{W/X})). \end{equation} \item If $g$ is an embedding in a neighbourhood of $g(\mathbb C)^{-1} (X(\mathbb R))$, then \begin{equation} \label{mod4} \deg ((g_*[W])^2)\equiv\deg(c_d(N_{W/X}))\pmod 4. \end{equation} \end{enumerate} \end{prop} \begin{proof} Let $D(g)\subset W$ be the closed subset consisting of those $x\in W$ such that $g$ is not an embedding above a neighbourhood of $g(x)$. Let $\mathbb D(g)\in \mathrm{CH}_0(D(g))$ be the double point class of $g$ defined in \cite[\S 9.3]{Fulton}. 
By \cite[Theorem 9.3]{Fulton}, one has \begin{equation*} \mathbb D(g)=g^*g_*[W]-c_d(N_{W/X})\in \mathrm{CH}_0(W). \end{equation*} Since $g_*g^*g_*[W]=(g_*[W])^2$ by the projection formula, we deduce that \begin{equation} \label{eqFL} \deg(g_*\mathbb D(g))=\deg((g_*[W])^2)-\deg(c_d(N_{W/X}))\in\mathbb Z. \end{equation} In case (i), one has $D(g)=\varnothing$, hence $\mathbb D(g)=0$, and (\ref{eqFL}) implies (\ref{modrien}). Define $\overline{D}(g):=g(D(g))$. By \cite[Example 9.3.14]{Fulton}, there exists a $0$-cycle $\overline{\mathbb D}(g)\in \mathrm{CH}_0(\overline{D}(g))$ such that $g_*\mathbb D(g)=2\overline{\mathbb D}(g)\in \mathrm{CH}_0(\overline{D}(g))$. The hypothesis of (ii) implies that $\overline{D}(g)\subset X$ has no real points, hence that $\deg(g_*\mathbb D(g))=2\deg(\overline{\mathbb D}(g))$ is divisible by~$4$. The desired congruence (\ref{mod4}) now follows from~(\ref{eqFL}). \end{proof} \begin{rem} Proposition \ref{FL} (i) is of course much easier than Proposition \ref{FL} (ii). It follows for instance from \cite[Corollary 6.3]{Fulton}. \end{rem} \subsection{Weil restrictions of scalars and quotients of abelian varieties} We gather here the geometric constructions that will be used in \S\ref{high}. Let $G:=\mathrm{Gal}(\mathbb C/\mathbb R)$, and consider the $G$-module $\mathbb Z(j):=(\sqrt{-1})^j\mathbb Z\subset\mathbb C$. If $X$ is a variety over $\mathbb R$, letting $G$ act both on $X(\mathbb C)$ and on $\mathbb Z(j)$ endows $H^k(X(\mathbb C),\mathbb Z(j))$ with an action of $G$. \begin{prop} \label{resab} For all $d\geq 1$, there exists an abelian variety $X$ of dimension $2d$ over $\mathbb R$ and a class $\beta\in \mathrm{CH}^d(X)$ with the following properties. \begin{enumerate}[(i)] \item If $\gamma,\gamma'\in H^{2d}(X(\mathbb C),\mathbb Z(d))$ are Hodge and $G$-invariant, then $\deg(\gamma\cdot\gamma')$ is even. \item $\deg (\beta^2)\equiv 2\pmod 4$. \item $\mathrm{cl}_{\mathbb R}(\beta)=0\in H_d(X(\mathbb R),\mathbb Z/2)$. 
\end{enumerate} \end{prop} \begin{proof} Let $A$ be a very general principally polarized abelian variety of dimension~$d$ over~$\mathbb C$, and let $A'$ be the conjugate variety of $A$, which is its base change by complex conjugation. Define $X:=\mathrm{Res}_{\mathbb C/\mathbb R}(A)$ to be the Weil restriction of scalars of $A$. It is the abelian variety over $\mathbb R$ obtained by Galois descent from $A\times A'$ using the descent datum $A\times A'\myxrightarrow{\,\sim\,} A'\times A$ switching the factors. The subvariety $A\times\{0\}\cup\{0\}\times A'$ of $A\times A'$ descends to a subvariety $Z\subset X$, and we set $\beta:=[Z]\in\mathrm{CH}^d(X)$. Since the normalization of~$Z$ has no real points, one has $\mathrm{cl}_{\mathbb R}(\beta)=0$. Moreover, $\deg(\beta^2)=2$. Assertions (ii) and (iii) are proven. Let $\lambda\in H^2(A(\mathbb C),\mathbb Z(1))$ be the principal polarization of $A$. Since $A$ is very general, Mattuck's theorem (see \cite[Theorem 17.4.1]{BL}) and the computation of the integral cohomology ring of a principally polarized abelian variety show that the group of integral Hodge classes in $H^{2k}(A(\mathbb C),\mathbb Z(k))$ is generated by $\frac{\lambda^k}{k\,!}$ for ${0\leq k\leq d}$. The same computation holds true for $A'$ and its principal polarization $\lambda'\in H^2(A'(\mathbb C),\mathbb Z(1))$. The K\"unneth formula and the computation of the action of $G$ on $H^{2d}(X(\mathbb C),\mathbb Z(d))$ now show that the group of $G$\nobreakdash-invariant Hodge classes in $H^{2d}(X(\mathbb C),\mathbb Z(d))$ is generated by the $\delta_k:=\big(\frac{\lambda^k}{k\,!}\cdot\frac{(\lambda')^{d-k}}{(d-k)\,!}+\frac{(\lambda')^k}{k\,!}\cdot\frac{\lambda^{d-k}}{(d-k)\,!}\big)$ for ${0\leq k<d/2}$, as well as by $\delta_{d/2}:=\frac{\lambda^{d/2}}{(d/2)\,!}\cdot\frac{(\lambda')^{d/2}}{(d/2)\,!}$ if $d$ is even. 
Assertion~(i) follows from the orthogonality of the $\delta_k$ and from the computations $\deg(\delta_k\cdot\delta_k)=2\cdot\binom{d}{k}^2$ for ${0\leq k<d/2}$ and $\deg(\delta_{d/2}\cdot\delta_{d/2})=\binom{d}{d/2}^2=4\cdot\binom{d-1}{d/2}^2$ for $d$ even. \end{proof} The next proposition is a variant of Proposition \ref{resab} which works over the complex numbers, but which is slightly more complicated. \begin{prop} \label{quotab} For all $d\geq 1$, there exists a $2d$-dimensional smooth projective variety $X$ over $\mathbb C$ and a class $\beta\in \mathrm{CH}^d(X)$ with the following properties. \begin{enumerate}[(i)] \item If $\gamma,\gamma'\in H^{2d}(X(\mathbb C),\mathbb Z)$ are Hodge, then $\deg(\gamma\cdot\gamma')$ is even. \item One has $\deg (\beta^2)\equiv 2\pmod 4$. \item All higher Chern classes of $X$ are torsion, i.e., $c(X)=1\in\mathrm{CH}^*(X)\otimes_{\mathbb Z}\mathbb Q$. \end{enumerate} \end{prop} \begin{proof} Let $(A,\lambda)$ be a very general principally polarized abelian variety of dimension~$d$ over~$\mathbb C$. Let $e_1,\dots,e_d,f_1,\dots,f_d\in H^1(A(\mathbb C),\mathbb Z)$ be a basis such that the principal polarization $\lambda\in H^2(A(\mathbb C),\mathbb Z)=\bigwedge^2 H^1(A(\mathbb C),\mathbb Z)$ of $A$ is equal to $\sum_i e_i\wedge f_i$. Let $\tau\in A(\mathbb C)[2]\simeq H_1(A(\mathbb C),\mathbb Z/2)$ be the $2$-torsion point associated with the morphism $H^1(A(\mathbb C),\mathbb Z)\to\mathbb Z/2$ sending $e_1$ to $1$ and $e_2,\dots,e_d,f_1,\dots,f_d$ to $0$. Denote by $(A',\lambda'=\sum_ie'_i\wedge f'_i,\tau')$ another copy of $(A,\lambda,\tau)$. Let $\mathbb Z/4$ act on $A\times A'$ via $(x,x')\mapsto (x'+\tau,x)$. Let $p:A\times A'\to X$ (resp. $q:A\times A'\to B$) be the quotient of $A\times A'$ by $\mathbb Z/4$ (resp. by the subgroup $\mathbb Z/2\subset \mathbb Z/4$). 
Since $\mathbb Z/2$ acts on $A\times A'$ via $(x,x')\mapsto(x+\tau,x'+\tau')$, we see that $B$ is an abelian variety, and that $H^1(B(\mathbb C),\mathbb Z)\subset H^1(A(\mathbb C)\times A'(\mathbb C),\mathbb Z)$ is the subgroup generated by $e_2,\dots,e_d,f_1,\dots,f_d,e'_2,\dots,e'_d,f'_1,\dots,f'_d,2e_1, e_1+e'_1,2e'_1$. Assertion (iii) follows at once from the fact that $c(A\times A')=1\in \mathrm{CH}^*(A\times A')$ since $p:A\times A'\to X$ is finite \'etale. Consider $Z:=p(A\times\{0\})\subset X$, and define $\beta:=[Z]\in\mathrm{CH}^d(X)$. As $p^*\beta=[A\times\{0\}]+[A\times\{\tau'\}]+[\{0\}\times A']+[\{\tau\}\times A']$, one computes that $\deg(p^*\beta^2)=8$. Since $\deg(p)=4$, one has $\deg(\beta^2)=2$. This proves assertion (ii). The formula $H^*(A(\mathbb C)\times A'(\mathbb C),\mathbb Z)=\bigwedge^* H^1(A(\mathbb C)\times A'(\mathbb C),\mathbb Z)$ and Mattuck's theorem (see \cite[Theorem 17.4.1]{BL}) show that the subgroup of $\mathbb Z/4$-invariant integral Hodge classes $E\subset H^{2d}(A(\mathbb C)\times A'(\mathbb C),\mathbb Z)$ is generated by the classes ${\delta_k:=\big(\frac{\lambda^k}{k\,!}\cdot\frac{(\lambda')^{d-k}}{(d-k)\,!}+\frac{(\lambda')^k}{k\,!}\cdot\frac{\lambda^{d-k}}{(d-k)\,!}\big)}$ for ${0\leq k<d/2}$, as well as $\delta_{d/2}:=\frac{\lambda^{d/2}}{(d/2)\,!}\cdot\frac{(\lambda')^{d/2}}{(d/2)\,!}$ if $d$ is even. The classes in the image of $q^*:H^{2d}(B(\mathbb C),\mathbb Z)\to H^{2d}(A(\mathbb C)\times A'(\mathbb C),\mathbb Z)$ all vanish in the quotient $\bigwedge^*\big((\mathbb Z e_1\oplus\dots\oplus\mathbb Z f'_d)/\langle e_1-e'_1\rangle\big)\otimes _{\mathbb Z} \mathbb Z/2$. Since the $\delta_k$ are $\mathbb Z/2$-linearly independent in this quotient, we deduce that $E\cap\mathrm{Im}(q^*)$ is included in (and in fact equal to) $2E\subset H^{2d}(A(\mathbb C)\times A'(\mathbb C),\mathbb Z)$. Now let $\gamma,\gamma'\in H^{2d}(X(\mathbb C),\mathbb Z)$ be Hodge classes. 
Then $p^*\gamma$ and $p^*\gamma'$ belong to $E\cap\mathrm{Im}(q^*)=2E$. It follows from the orthogonality of the $\delta_k$ and the computations $\deg(\delta_k\cdot\delta_k)=2\cdot\binom{d}{k}^2$ for ${0\leq k<d/2}$ and $\deg(\delta_{d/2}\cdot\delta_{d/2})=\binom{d}{d/2}^2=4\cdot\binom{d-1}{d/2}^2$ for~$d$ even that $\deg(p^*\gamma\cdot p^*\gamma')\equiv 0\pmod 8$. Since $\deg(p)=4$, we deduce that $\deg(\gamma\cdot \gamma')\equiv 0\pmod 2$, which proves (i). \end{proof} \subsection{High-dimensional cycles} \label{high} Here are applications to Chow groups. \begin{thm} \label{Chowth3} Let $d\geq c$ be such that $\alpha(c+1)\geq 3$. Then there exists an abelian variety $X$ of dimension $c+d$ over $\mathbb R$ such that $\mathrm{CH}_d(X) $ is not generated by classes of closed subvarieties of $X$ that are smooth along their real loci. \end{thm} \begin{proof} Suppose first that $d=c$, and let $X$ and $\beta$ be as in Proposition \ref{resab}. Assume for contradiction that $\beta=\sum_i n_i[Y_i]\in \mathrm{CH}_d(X)$ where $n_i\in\mathbb Z$ and the $Y_i$ are integral closed subvarieties of $X$ that are smooth along their real loci. One computes: \begin{equation} \label{congChow} \deg(\beta^2)=\sum_i n_i^2\deg([Y_i]^2)+2\sum_{i<j}n_in_j\deg([Y_i]\cdot[Y_j]). \end{equation} The existence of Krasnov's cycle class map $\mathrm{cl}:\mathrm{CH}_d(X)\to H^{2d}_G(X(\mathbb C),\mathbb Z(d))$ to $G$-equivariant Betti cohomology refining the usual complex cycle class map ${\mathrm{cl}_{\mathbb C}:\mathrm{CH}_d(X)\to H^{2d}(X(\mathbb C),\mathbb Z(d))}$ to Betti cohomology \cite[Theorem 0.6]{Krasnov2}, the fact that the image of $\mathrm{cl}_{\mathbb C}$ consists of Hodge classes, and assertion (i) of Proposition~\ref{resab}, combine to show that $2\sum_{i<j}n_in_j\deg([Y_i]\cdot[Y_j])$ is divisible by $4$. Let $g_i:W_i\to Y_i$ be a resolution of singularities which is an isomorphism above~$Y_i(\mathbb R)$. 
Proposition \ref{FL} (ii) and the fact that all higher Chern classes of the abelian variety~$X$ vanish show that $\deg([Y_i]^2)\equiv \deg(s_d(W_i))\pmod 4$. Since $\alpha(d+1)\geq 3$ by hypothesis, Theorem \ref{RT} implies that $\deg(s_d(W_i))=s_d([W_i(\mathbb C)])\equiv 0\pmod 4$. These congruences and assertion (ii) of Proposition \ref{resab} contradict (\ref{congChow}). To deal with the general case, apply the $d=c$ case to get an abelian variety $X'$ of dimension $2c$ over $\mathbb R$ and a class $\beta'\in\mathrm{CH}_c(X')$ that is not a linear combination of classes of subvarieties of $X'$ that are smooth along their real loci. Define $X:=X'\times A$ where $A$ is any abelian variety of dimension $d-c$ over $\mathbb R$, and $\beta:=\mathrm{pr}_1^*\beta'\in\mathrm{CH}_d(X)$. That $\beta$ is not a linear combination of classes of subvarieties of $X$ that are smooth along their real loci follows from the corresponding property of $\beta'$ and from the Bertini theorem. \end{proof} \begin{thm} \label{Chowth4} If $d\geq c$ are such that $\alpha(c+1)\geq 2$, there exists an abelian variety~$X$ of dimension $c+d$ over $\mathbb R$ such that $\mathrm{Ker} \big[\mathrm{cl}_{\mathbb R}:\mathrm{CH}_d(X)\to H_d(X(\mathbb R),\mathbb Z/2)\big]$ is not generated by classes of closed subvarieties of $X$ with empty real loci. \end{thm} \begin{proof} The proof is almost identical to the proof of Theorem \ref{Chowth3}, replacing \textit{that are smooth along their real loci} by \textit{with empty real loci} everywhere. Only the argument used in the $d=c$ case to show that $s_d([W_i(\mathbb C)])\equiv 0\pmod 4$ needs to be modified, as follows. Since $W_i(\mathbb R)=\varnothing$, one has $\psi([W_i(\mathbb C)])=0\in MO_{d}$ by \cite[Theorem~22.4]{CF}. We deduce from Lemma \ref{cw} that all the Chern numbers of $[W_i(\mathbb C)]\in MU_{2d}$ are even. Theorem \ref{divtop} and the hypothesis that $\alpha(d+1)\geq 2$ imply that $s_d([W_i(\mathbb C)])\equiv 0\pmod 4$, as wanted. 
\end{proof} \begin{thm} \label{Chowth5} If $d\geq c$ are such that $\alpha(c+1)\geq 3$, there exists a smooth projective variety~$X$ of dimension $c+d$ over $\mathbb C$ such that $\mathrm{CH}_d(X)$ is not generated by classes of smooth closed subvarieties of $X$. \end{thm} \begin{proof} The proof is similar to that of Theorem \ref{Chowth3}. The argument at the end of the proof of Theorem \ref{Chowth3} shows that we may assume that $d=c$. Let $X$ and $\beta$ be as in Proposition \ref{quotab}. Assume that $\beta=\sum_i n_i[Y_i]\in \mathrm{CH}_d(X)$ where $n_i\in\mathbb Z$ and the $Y_i$ are smooth closed subvarieties of $X$, and consider equation~(\ref{congChow}). Since the Betti cohomology classes of the $Y_i$ are Hodge, assertion (i) of Proposition~\ref{quotab} shows that $2\sum_{i<j}n_in_j\deg([Y_i]\cdot[Y_j])$ is divisible by $4$. Assertion (iii) of Proposition \ref{quotab} and \cite[Corollary 6.3]{Fulton} show that $\deg([Y_i]^2)=\deg(s_d(Y_i))$. Since $\alpha(d+1)\geq 3$, Theorem \ref{RT} implies that $\deg(s_d(Y_i))\equiv 0\pmod 4$. Assertion~(ii) of Proposition \ref{quotab} now contradicts (\ref{congChow}). \end{proof} \subsection{Hypersurfaces in abelian varieties} We give here a geometric construction based on a Noether--Lefschetz argument, on which the proof of Theorem \ref{thji} relies. \begin{prop} \label{constrHodge} For all $d,e\geq 1$, there exists a $2d$-dimensional smooth projective variety $X$ over $\mathbb R$ with the following properties. \begin{enumerate}[(i)] \item The total Chern class $c(X)\in\mathrm{CH}^*(X)$ of $X$ satisfies $c(X)\equiv 1\pmod {2^{e+1}}$. \item The subgroup of Hodge classes $\mathrm{Hdg}^{2d}(X(\mathbb C),\mathbb Z) \subset H^{2d}(X(\mathbb C),\mathbb Z)$ is generated by a class $\omega\in \mathrm{Hdg}^{2d}(X(\mathbb C),\mathbb Z)$ with $\deg (\omega^2)\equiv 0\pmod {2^{e+1}}$. \item One has $X(\mathbb R)\neq\varnothing$. 
\end{enumerate} \end{prop} \begin{proof} Let $A$ be a very general principally polarized abelian variety of dimension $2d+1$ over $\mathbb R$ (by Baire's theorem, one may choose a very general real point of the moduli space $M$ of $(2d+1)$-dimensional principally polarized abelian varieties with level $3$ structure, since $M$ is a smooth variety). The principal polarization of $A$ is represented by an ample line bundle $\mathcal{L}$ on $A$ which is defined over $\mathbb R$ (see \cite[Theorem~4.1]{SS}). The group $\mathrm{Hdg}^{2d}(A(\mathbb C),\mathbb Z)$ of degree $2d$ Hodge classes on $A_{\mathbb C}$ is generated by $\frac{1}{d\,!}c_1(\mathcal{L})^{d}$ by Mattuck's theorem (see \cite[Theorem 17.4.1]{BL}) since $\frac{1}{d\,!}c_1(\mathcal{L})^{d}$ is a primitive integral cohomology class. Let $l\gg 0$ be such that $2^{e+1}\,|\,l$ and $\mathcal{L}^{\otimes l}$ is very ample. Choose a Lefschetz pencil of sections of $\mathcal{L}^{\otimes l}$, and let $X\subset A$ be a very general member of this pencil with $X(\mathbb R)\neq\varnothing$. The restriction morphism $H^{2d}(A(\mathbb C),\mathbb Z)\to H^{2d}(X(\mathbb C),\mathbb Z)$ is injective with torsion-free cokernel $E$, by the weak Lefschetz theorem \cite[Theorem 2]{AF}. Let $F\subset E_{\mathbb C}$ be the subspace of Hodge classes. Since $X$ was chosen very general, any class in $F$ remains Hodge when transported horizontally using the Gauss--Manin connection of the Lefschetz pencil. It follows that $F$ is stabilized by the monodromy of the Lefschetz pencil on $E_{\mathbb C}$. Since this action is irreducible as a consequence of the hard Lefschetz theorem (see \cite[Theorem 7.3.2]{Lamotke}) and since the Hodge structure on $E_{\mathbb C}$ is not trivial (as the restriction map $H^{2d}(A,\mathcal{O}_A)\to H^{2d}(X,\mathcal{O}_X)$ is not surjective), we deduce that $F=0$. 
This shows that all Hodge classes in $H^{2d}(X(\mathbb C),\mathbb Z)$ are in the image of the injective restriction map $H^{2d}(A(\mathbb C),\mathbb Z)\to H^{2d}(X(\mathbb C),\mathbb Z)$, hence that $\mathrm{Hdg}^{2d}(X(\mathbb C),\mathbb Z)$ is generated by the restriction $\omega$ of $\frac{1}{d\,!}c_1(\mathcal{L})^{d}$. We now compute $\deg(\omega^2)=\deg(\frac{1}{(d\,!)^2}c_1(\mathcal{L})^{2d}\cdot c_1(\mathcal{L}^{\otimes l}))=\frac{l(2d+1)!}{(d\,!)^2}$, which proves (ii). The normal exact sequence $0\to T_X\to (T_A)|_X\to \mathcal{L}^{\otimes l}|_X \to 0$ finally shows that $c(X)=c(A)|_X\cdot c(\mathcal{L}^{\otimes l}|_X)^{-1}=(1+lc_1(\mathcal{L})|_X)^{-1}\equiv 1\pmod{2^{e+1}}$, proving~(i). \end{proof} \subsection{Submanifolds with no algebraic approximations} \label{subnoapp} Now come the promised applications to algebraic approximation. We refer to \S\ref{cobordism} for the definition of the bordism group $MO_d(X(\mathbb R))$ of the real locus of a smooth projective variety $X$ over~$\mathbb R$, and of its subgroup $MO^{\mathrm{alg}}_d(X(\mathbb R))$ of algebraic bordism classes. \begin{thm} \label{thji} Assume that $c,d,e\geq 1$ are such that $d\geq c$ and $\alpha(c+e)=2e$. Then there exist a smooth projective variety $X$ of dimension $c+d$ over $\mathbb R$ and a $d$-dimensional closed $\mathcal{C}^{\infty}$ submanifold $j:M\hookrightarrow X(\mathbb R)$ such that the following properties hold for all $d$-dimensional closed subvarieties $i:Y\hookrightarrow X$. \begin{enumerate}[(i)] \item One has $[j]\in MO^{\mathrm{alg}}_d(X(\mathbb R))$. \item If $Y$ is smooth, then $[M]\neq[Y(\mathbb R)]\in MO_d$. \item If $e=1$ and $Y$ is smooth along $Y(\mathbb R)$, then $[M]\neq[Y(\mathbb R)]\in MO_d$. \end{enumerate} \end{thm} We first prove Theorem \ref{thji} in the particular case where $c=d$. \begin{lem} \label{lemji} If $c=d$, Theorem \ref{thji} holds with any $X$ as in Proposition \ref{constrHodge}. 
\end{lem} \begin{proof} Recall from \S\ref{swc} that if $I=(i_1,i_2,\dots)$ is a sequence of nonnegative integers, we set $|I|:=\sum_r ri_r$. By Theorem \ref{divtop}, there exists a degree $1$ homogeneous polynomial $P\in \mathbb Z[x_I]_{|I|=d}$ such that $s_d(y)\equiv 2^eP(c_I(y)) \pmod{2^{e+1}}$ for all $y\in MU_{2d}$. Theorem \ref{RT} shows the existence of $y_0\in MU_{2d}$ such that $s_d(y_0)\not\equiv 0\pmod{2^{e+1}}$, hence such that $P(c_I(y_0))\not\equiv 0\pmod 2$. Letting $M$ be a compact $\mathcal{C}^{\infty}$ manifold representing the class $\psi(y_0)\in MO_d$, Lemma \ref{cw} shows that $P(w_I(M))\neq 0\in\mathbb Z/2$. By Whitney's theorem \cite[Theorem 5]{Whitney}, one may embed $j:M\hookrightarrow X(\mathbb R)$ in a small ball of $X(\mathbb R)$. Since this small ball is contractible, $j$ is homotopic (hence cobordant) to a constant map $M\to X(\mathbb R)$. As $M$ is cobordant to the real locus of a smooth projective variety over $\mathbb R$ (a disjoint union of products of projective spaces and Milnor hypersurfaces, see \cite[Lemma 1]{MilnorSW}), one deduces that ${[j]\in MO^{\mathrm{alg}}_d(X(\mathbb R))}$, proving (i). Let $W\to Y$ be a desingularization of $Y$ which is an isomorphism above $Y(\mathbb R)$ and let $g:W\to X$ be the induced morphism. Proposition \ref{FL} shows, under the assumptions of either (ii) or (iii), that \begin{equation} \label{congruence} \deg([Y]^2)=\deg ((g_*[W])^2)\equiv\deg(c_d(N_{W/X}))\pmod {2^{e+1}}. \end{equation} Since the Betti cohomology class of $Y$ is a Hodge class, assertion (ii) of Proposition~\ref{constrHodge} shows that $\deg([Y]^2)\equiv 0 \pmod{2^{e+1}}$. The total Chern class of $N_{W/X}$ is $c(N_{W/X})=c(g^*T_X)\cdot c(T_W)^{-1}=g^*c(X)\cdot s(W)$, where $s(W)\in \mathrm{CH}^*(W)$ denotes the total Segre class of $W$. We deduce from assertion (i) of Proposition \ref{constrHodge} that $c_d(N_{W/X})\equiv s_d(W)\pmod{2^{e+1}}$. 
Together with (\ref{congruence}), these facts show that $\deg(s_d(W))\equiv 0\pmod{2^{e+1}}$. Consequently, we have $s_d(W(\mathbb C))\equiv 0\pmod{2^{e+1}}$. By our choice of $P$, we deduce that $P(c_I(W(\mathbb C)))\equiv 0\pmod 2$. Conner and Floyd \cite[Theorem 22.4]{CF} have proven that $W(\mathbb C)$ and $W(\mathbb R)\times W(\mathbb R)$ are cobordant, and it follows that ${\psi([W(\mathbb C)])=[W(\mathbb R)]\in MO_d}$. Lemma \ref{cw} now shows that ${P(w_I(W(\mathbb R)))=0\in \mathbb Z/2}$, hence that $P(w_I(Y(\mathbb R)))=P(w_I(W(\mathbb R)))\neq P(w_I(M))$. Since Stiefel--Whitney numbers are cobordism invariants (see \S\ref{swc}), we have $[M]\neq [Y(\mathbb R)]\in MO_d$. \end{proof} The proof of Theorem \ref{thji} in general easily reduces to the above lemma. \begin{proof}[Proof of Theorem \ref{thji}] Lemma \ref{lemji} yields a smooth projective variety~$X'$ of dimension $2c$ over $\mathbb R$ and a $c$-dimensional closed $\mathcal{C}^{\infty}$ submanifold $j':M'\hookrightarrow X'(\mathbb R)$ satisfying properties (i)-(iii) of Theorem \ref{thji} (with $d$ replaced by $c$). Define $X:=X'\times\mathbb P^{d-c}_{\mathbb R}$ and $M:=M'\times\mathbb P^{d-c}(\mathbb R)$, and consider the embedding $j:=(j',\mathrm{Id}):M\hookrightarrow X(\mathbb R)$. Our choice of $X'$ and $M'$ shows the existence of a morphism $g':W'\to X'$ of smooth projective varieties over $\mathbb R$ such that $j'$ is cobordant to $g'(\mathbb R)$. Defining $W:=W'\times\mathbb P^{d-c}_{\mathbb R}$ and $g:=(g',\mathrm{Id})$, we see that $j$ is cobordant to $g(\mathbb R)$, hence that $[j]\in MO^{\mathrm{alg}}_d(X(\mathbb R))$, which proves (i). Suppose now that $i:Y\hookrightarrow X$ is as in the statement of Theorem \ref{thji} and satisfies the hypothesis of either (ii) or (iii). Let $x\in\mathbb P^{d-c}(\mathbb R)$ be a general point, and define $Y':=Y\cap (X'\times\{x\})$ and $i':Y'\hookrightarrow X'\times\{x\}\simeq X'$ to be the natural inclusion. 
Bertini's theorem ensures that $Y'$ is smooth in case (ii) and that $Y'$ is smooth along $Y'(\mathbb R)$ in case (iii). If $M$ were cobordant to $Y(\mathbb R)$, Sard's theorem would imply that $M'$ is cobordant to $Y'(\mathbb R)$. This contradicts our choice of $M'$ and proves (ii) and~(iii). \end{proof} \begin{rems} \label{remNoether} (i) Theorem \ref{thji} implies at once Theorem \ref{thC2}. (ii) It is striking that the obstructions to $M$ being approximable by real loci of algebraic subvarieties of $X$ provided by Theorem \ref{thji} (ii)-(iii) involve cobordism theory, although $[j]\in MO^{\mathrm{alg}}_d(X(\mathbb R))$ by Theorem \ref{thji} (i). Loosely speaking, $j$ is cobordant to an algebraic map, but not to an algebraic embedding. (iii) Complex cobordism and Theorem \ref{divtop} are not needed to prove Theorem~\ref{thji} for $c=2$. One may use Noether's formula instead, as in Example~\ref{Noether}. \end{rems} The proof of Theorem \ref{projth} is a variant of the proof of Lemma \ref{lemji}. Since $c(\mathbb P^1_{\mathbb R}\times\mathbb P^{2^{k+1}-1}_{\mathbb R})\not\equiv 1\pmod 4$, the argument is slightly more complicated. The relation $(\sum_rw_r(M))(\sum_r\overline{w}_r(M))=1$ defines the \textit{normal Stiefel--Whitney classes} $\overline{w}_r(M)\in H^r(M,\mathbb Z/2)$ of a compact $\mathcal{C}^{\infty}$ manifold $M$. \begin{thm} \label{projth} Fix $k\geq 1$ and define $X:=\mathbb P^1_{\mathbb R}\times\mathbb P^{2^{k+1}-1}_{\mathbb R}$. There exists a $2^k$\nobreakdash-dimen\-sional closed $\mathcal{C}^{\infty}$ submanifold $j: M\hookrightarrow X(\mathbb R)$ such that $[j]\neq [i(\mathbb R)]\in MO_{2^k}(X(\mathbb R))$ for all $2^k$-dimensional closed subvarieties $i:Y\hookrightarrow X$ that are smooth along $Y(\mathbb R)$. \end{thm} \begin{proof} Since $\alpha(2^{k}+1)=2$, Theorem \ref{divtop} shows that there is a degree $1$ homogeneous polynomial $P\in \mathbb Z[x_I]_{|I|=2^k}$ with $s_{2^k}(y)\equiv 2P(c_I(y)) \pmod{4}$ for all $y\in MU_{2^{k+1}}$. 
By Theorem~\ref{RT}, one may find $y_0\in MU_{2^{k+1}}$ such that $s_{2^k}(y_0)\not\equiv 0\pmod{4}$, hence such that $P(c_I(y_0))\not\equiv 0\pmod 2$. Letting $M$ be a compact $\mathcal{C}^{\infty}$ manifold representing the class $\psi(y_0)\in MO_{2^k}$, Lemma \ref{cw} shows that $P(w_I(M))\neq 0\in\mathbb Z/2$. By Whitney's theorem \cite[Theorem 5]{Whitney}, one may embed $j:M\hookrightarrow X(\mathbb R)$ in a small ball of~$X(\mathbb R)$. Let $i:Y\hookrightarrow X$ be as in the statement of Theorem \ref{projth}. Assume for contradiction that $[j]=[i(\mathbb R)]\in MO_{2^k}(X(\mathbb R))$. Let $W\to Y$ be a desingularization of $Y$ which is an isomorphism above $Y(\mathbb R)$ and let $g:W\to X$ be the induced morphism. Proposition~\ref{FL} (ii) shows that $\deg([Y]^2)=\deg ((g_*[W])^2)\equiv\deg(c_{2^k}(N_{W/X}))\pmod {4}$, hence that \begin{equation} \label{congruence2} \deg([Y]^2)\equiv\sum_{r=0}^{2^k} \deg(g^*c_r(X)\cdot s_{2^k-r}(W))\pmod {4}. \end{equation} Consider the Borel--Haefliger cycle class map $\mathrm{cl}_{\mathbb R}:\mathrm{CH}^*(X)\to H^*(X(\mathbb R),\mathbb Z/2)$ (\cite{BH}, see also \cite[\S 1.6.2]{BW1}). Since $[j]=[i(\mathbb R)]\in MO_{2^k}(X(\mathbb R))$, one has $\mathrm{cl}_{\mathbb R}([Y])=[M]=0\in H^{2^k\!\!}(X(\mathbb R),\mathbb Z/2)$. Set $H_1:=c_1(\mathcal{O}_{\mathbb P^1_{\mathbb R}}(1))\in \mathrm{CH}^1(\mathbb P^1_{\mathbb R})$ and $H_2:=c_1(\mathcal{O}_{\mathbb P_{\mathbb R}^{2^{k+1}-1}}(1))\in \mathrm{CH}^1(\mathbb P^{2^{k+1}-1}_{\mathbb R})$. As $\mathrm{CH}^{2^k\!\!}(X)$ is generated by $(H_2)^{2^k}$ and $H_1(H_2)^{2^k-1}$, we compute that the kernel of $\mathrm{cl}_{\mathbb R}:\mathrm{CH}^{2^k\!\!}(X)\to H^{2^k\!\!}(X(\mathbb R),\mathbb Z/2)$ is generated by $2(H_2)^{2^k}$ and $2H_1(H_2)^{2^k-1}$. As a consequence, $[Y]\in \mathrm{CH}^{2^k\!\!}(X)$ is a multiple of $2$, and hence $\deg([Y]^2)$ is divisible by $4$. 
The Euler exact sequences $0\to \mathcal{O}_{\mathbb P^N_{\mathbb R}}\to \mathcal{O}_{\mathbb P^N_{\mathbb R}}(1)^{\oplus N+1}\to T_{\mathbb P^N_{\mathbb R}}\to 0$ and the Whitney sum formula yield $c(X)=(1+H_1)^2(1+H_2)^{2^{k+1}}\in \mathrm{CH}^*(X)$. Since ${H_1^2=H_2^{2^{k+1}}=0}$, we deduce that $c(X)\equiv 1\pmod 2$. For $r\geq 1$, let $\gamma_r\in \mathrm{CH}^r(X)$ be such that $c_r(X)=2\gamma_r$. Since Borel and Haefliger have shown that $\mathrm{cl}_{\mathbb R}(c(W))=w(W(\mathbb R))$ (\cite[\S 5.18]{BH}, see also \cite[Proposition 3.5.1]{Krasnov1}), we have $\mathrm{cl}_{\mathbb R}(s(W))=\overline{w}(W(\mathbb R))$. We deduce that, for $r\geq 1$, \begin{alignat}{4} \label{degmod2} \deg(\mathrm{cl}_{\mathbb R}(g^*\gamma_r\cdot s_{2^k-r}(W)))&=\deg(g(\mathbb R)^*\mathrm{cl}_{\mathbb R}(\gamma_r)\cdot \overline{w}_{2^k-r}(W(\mathbb R)))\nonumber\\ &=\deg(j^*\mathrm{cl}_{\mathbb R}(\gamma_r)\cdot\overline{w}_{2^k-r}(M))\\ &=0\in\mathbb Z/2,\nonumber \end{alignat} where the first equality follows from the functorial properties of $\mathrm{cl}_{\mathbb R}$ (see \cite[\S 1.6.2]{BW1}), the second from the equality of the Stiefel--Whitney numbers of the cobordant maps $j$ and $g(\mathbb R)=i(\mathbb R)$ (see \cite[Theorem 17.3]{CF}), and the third holds since $j^*:H^r(X(\mathbb R),\mathbb Z/2)\to H^r(M,\mathbb Z/2)$ vanishes for $r\geq 1$ because the image of~$j$ is included in a small ball of $X(\mathbb R)$. Equation (\ref{degmod2}) demonstrates that $\deg(g^*\gamma_r\cdot s_{2^k-r}(W))\in\mathbb Z$ is even, and hence that $\deg(g^*c_r(X)\cdot s_{2^k-r}(W))$ is divisible by $4$, for all $r\geq 1$. Plugging the congruences we have obtained into (\ref{congruence2}) shows that $\deg(s_{2^k}(W))\equiv 0\pmod 4$, hence that $s_{2^k}(W(\mathbb C))\equiv 0\pmod 4$. Our choice of $P$ implies that $P(c_I(W(\mathbb C)))\equiv 0\pmod 2$. By \cite[Theorem 22.4]{CF}, one has ${\psi([W(\mathbb C)])=[W(\mathbb R)]\in MO_{2^k}}$.
Lemma \ref{cw} now shows that ${P(w_I(W(\mathbb R)))=0\in \mathbb Z/2}$, hence that $P(w_I(Y(\mathbb R)))\neq P(w_I(M))$. We deduce that ${[M]\neq [Y(\mathbb R)]\in MO_{2^k}}$ by cobordism invariance of Stiefel--Whitney numbers. A fortiori, $[j]\neq [i(\mathbb R)]\in MO_{2^k}(X(\mathbb R))$, which is a contradiction. \end{proof} \begin{rems} \label{remP} (i) Theorem \ref{projth} implies Theorem \ref{thP} by \cite[Proposition 4.4.4]{Wall}. (ii) Theorem \ref{projth} is false for $k=0$ by \cite[Theorem 12.4.11]{BCR}. The proof fails in this case because $\alpha(2^k+1)<2$ precisely for this value of $k$. (iii) The simplest particular case of Theorem \ref{projth} is the following. Embed $\mathbb P^2(\mathbb R)$ in $\mathbb R^4=\mathbb R\times\mathbb R^3$ and let $j:\mathbb P^2(\mathbb R)\hookrightarrow\mathbb P^1(\mathbb R)\times \mathbb P^3(\mathbb R)$ be the induced embedding. Then $j$ is not cobordant to the inclusion of the real locus of a closed subvariety of $\mathbb P^1_{\mathbb R}\times\mathbb P^3_{\mathbb R}$ which is smooth along its real locus. A fortiori, $j$ cannot be isotoped to such a real locus. As in Remark \ref{remNoether} (iii), the use of Theorem \ref{divtop} may be replaced by Noether's formula in the proof of this particular case. (iv) The conclusion of Theorem \ref{projth} is not explained by a difference between the groups $MO_*^{\mathrm{alg}}(X(\mathbb R))$ and $MO_*(X(\mathbb R))$, as they coincide by \cite[Lemma 2.1]{AKIHES} or \cite[Corollary~1~p.~314]{IS}. \end{rems} \section{Algebraic approximations and algebraic homology} \label{final} We finally combine the results of the previous sections to prove Theorem~\ref{thH}. \subsection{Hypersurfaces} The following proposition is a well-known improvement of \cite[Theorem 12.4.11]{BCR}, which goes back to the work of Benedetti and Tognoli \cite[Proof of Theorem 4.1]{BT} (see also \cite[Theorem A]{Akak}).
\begin{prop} \label{hyp} Let $X$ be a smooth projective variety of dimension $n$ over $\mathbb R$, let $j:M\hookrightarrow X(\mathbb R)$ be a closed $\mathcal{C}^{\infty}$ hypersurface such that $j_*[M]\in H_{n-1}^{\mathrm{alg}}(X(\mathbb R),\mathbb Z/2)$, and let $\mathcal{U}\subset\mathcal{C}^{\infty}(M,X(\mathbb R))$ be a neighbourhood of the inclusion. Then there exist $\phi\in\mathcal{U}$ and a smooth closed hypersurface $Y\subset X$ with $\phi(M)=Y(\mathbb R)$. \end{prop} \begin{proof} By \cite[Theorem 12.4.11]{BCR}, there exist $\psi\in\mathcal{U}$, an open neighbourhood $U$ of $X(\mathbb R)$ in $X$ and a smooth closed hypersurface $Z\subset U$ with $\psi(M)=Z(\mathbb R)$. Let $\overline{Z}\subset X$ be the Zariski closure of $Z$. Since $X$ is smooth, there exist a line bundle $\mathcal{L}$ on $X$ and a section $s\in H^0(X,\mathcal{L})$ with $\overline{Z}=\{s=0\}$. Fix a very ample line bundle $\mathcal{O}_X(1)$ on $X$. Let $(u_1,\dots,u_N)$ be a basis of $H^0(X,\mathcal{O}_X(1))$. The section $v:=\sum_{m=1}^N u_m^2\in H^0(X,\mathcal{O}_X(2))$ vanishes nowhere on~$X(\mathbb R)$. Choose $l\gg0$ with $\mathcal{M}:=\mathcal{L}(2l)$ very ample, let $t\in H^0(X,\mathcal{M})$ be a general small deformation of $sv^l$, and set $Y:=\{t=0\}$, which is smooth by Bertini. That $Y$ has the required properties follows from \cite[\S 20]{AR}. More precisely, the proofs of \cite[Lemmas 20.3 and 20.4]{AR} applied with $X=X(\mathbb R)$, $Y=\mathcal{M}(\mathbb R)$, $W\subset Y$ the zero section, $r\geq 1$, and $\mathcal{A}=\mathcal{C}^{r+1}(X(\mathbb R),\mathcal{M}(\mathbb R))$ show that if $t$ is close to $sv^l$, then the inclusions ${Z(\mathbb R)\subset X(\mathbb R)}$ and $Y(\mathbb R)\subset X(\mathbb R)$ are isotopic, by an isotopy which is $\mathcal{C}^{\infty}$ because so are $t$ and $sv^l$, and small in the $\mathcal{C}^{\infty}$ topology (see the use of the implicit section theorem in the proof of \cite[Lemma 20.3]{AR}). 
\end{proof} \subsection{Proof of Theorem \ref{thH}} \label{proofsec} Theorem \ref{thH} is trivial if $c=0$ or $d=0$. It follows from Proposition \ref{hyp} if~$c=1$. The cases with $d\geq 3$ and $c\geq 2$ are covered by Bochnak and Kucharz in \cite[Corollary 1.3]{BKsub}. Assume from now on that $d\leq 2$. Since $MO_1=0$, an isomorphism $$(H_{d-2}(X(\mathbb R),\mathbb Z/2)\otimes MO_2)\oplus H_d(X(\mathbb R),\mathbb Z/2)\myxrightarrow{\,\sim\,} MO_d(X(\mathbb R))$$ is constructed in \cite[Theorem 17.2]{CF}. It restricts to an isomorphism $$(H_{d-2}(X(\mathbb R),\mathbb Z/2)\otimes MO_2)\oplus H_d^{\mathrm{alg}}(X(\mathbb R),\mathbb Z/2)\myxrightarrow{\,\sim\,} MO_d^{\mathrm{alg}}(X(\mathbb R))$$ by a theorem of Ischebeck and Sch\"ulting \cite[Corollary 1 p.~314]{IS}, in view of the equality $H_0^{\mathrm{alg}}(X(\mathbb R),\mathbb Z/2)=H_0(X(\mathbb R),\mathbb Z/2)$. We deduce that the two conditions that $j_*[M]\in H_d^{\mathrm{alg}}(X(\mathbb R),\mathbb Z/2)$ and that $[j:M\hookrightarrow X(\mathbb R)]\in MO_d^{\mathrm{alg}}(X(\mathbb R))$ are equivalent. Theorem \ref{thH} for $d=1$ and $c\geq 2$, or for $d=2$ and $c\geq 3$, now follows from Theorem~\ref{approxth}. Finally, the $c=d=2$ case of Theorem \ref{thH} is a consequence of Theorem \ref{thji} (ii). \bibliographystyle{myamsalpha}
\section{Introduction}\label{sec:introduction} \IEEEPARstart{D}{eep} learning has revolutionized computer vision by providing an effective solution to address a wide range of tasks (e.g.\@\xspace, classification, depth estimation, semantic segmentation, etc.). The rise of a common framework has allowed incredible leaps forward for the whole research community thanks to the ability to reuse architectural and algorithmic improvements discovered to solve one task across many others. However, the real knowledge of a neural network is stored inside its trained parameters and we still have no simple way of sharing this knowledge across different tasks and domains (i.e.\@\xspace, datasets). As such, the first step for every practitioner faced with a new problem or domain deals with the acquisition and labeling of a new training set, an extremely tedious, expensive and time-consuming operation. We argue that sharing the knowledge acquired by a neural network to solve a specific task in a specific domain across other tasks and domains could be a more straightforward and cost-effective way to tackle them. Indeed, this is demonstrated by the widespread use and success of \emph{transfer learning}. Transfer learning concerns solving new tasks by initializing a network with pre-trained weights, thereby providing a basic approach to knowledge reuse. However, it still requires a new annotated dataset to fine-tune the pretrained network on the task at hand. A few works focused on the related \emph{task transfer} (TT) problem \cite{zamir2018taskonomy, zamir2020robust}, i.e.\@\xspace, on exploiting supervised data to tackle multiple tasks in a single domain more effectively by leveraging the relationships between the learned representations. As unlabeled domains are not considered in TT problem formulations, the proposed methodologies still rely on transfer learning and availability of a small annotated training set in order to address new datasets.
On the other hand, the unsupervised \emph{domain adaptation} literature (DA) \cite{Wang_2018} studies how the need for annotated data can be removed when leveraging knowledge reuse to solve the same task across different domains, but it does not consider different tasks. \begin{figure} \centering \includegraphics[width=\linewidth]{Images/teaser_new.jpg} \caption{Our framework transfers knowledge across tasks and domains. Given two tasks (1 and 2) and two domains (A and B), with supervision for both tasks in A but only for one task in B, we learn the dependency between the tasks in A and exploit this in B in order to solve task 2 without the need of supervision.} \label{fig:teaser_main} \end{figure} Differently, we propose to merge DA and TT by explicitly addressing a cross-domain and cross-task problem where on one source domain (e.g.\@\xspace, synthetic data) we have supervision for many tasks, while in another target one (e.g.\@\xspace, real data) annotations are available only for a specific task while we wish to solve many. A schematic representation of our problem formulation with two domains and two tasks is shown in the right part of \autoref{fig:teaser_main}. Following this schematic representation we will consider a scenario with two domains (a source one and a target one, namely A and B) and two tasks (again a source one and a target one, namely task 1 and 2), but nothing prevents our method from being extended to more. In domain A we use the available supervision to learn two models for the source and target tasks, while in the target domain B we can do the same for the source task only. In domain A we use the trained task-specific models to learn a mapping function ($G_{1\rightarrow2}$ in \autoref{fig:teaser_main}) between deep features extracted to solve the source task and those extracted to solve the target task. This mapping function is then applied in domain B to solve the target task by transforming the features extracted to solve the source task.
The key component of our framework is the mapping function between the two task-specific deep features. In \cite{ramirez2019learning} we proposed a preliminary formulation of our framework by modeling the mapping function as a deep convolutional neural network and optimizing its parameters by standard supervised learning in the source domain A. In this work, we expand and improve upon our preliminary formulation by proposing two feature alignment strategies aimed at learning the feature mapping function more effectively. Firstly, we align feature representations across domains using a novel norm discrepancy alignment (NDA) loss that constrains the feature space by penalizing features with very different norms in a spatially-aware manner. Secondly, we align feature representations across tasks by using them as inputs to solve a common auxiliary task. This pretext problem acts as a bridge between the source and the target tasks: in fact, if the deep features extracted to solve them independently can be used to effectively address an additional common task, we are led to believe that those features present the same semantic content and encode it in a similar manner. We test the effectiveness of our proposal in a challenging autonomous driving scenario where we try to solve the two related dense prediction tasks of monocular depth estimation and semantic segmentation \cite{ramirez2018exploiting}. We select edge detection as the auxiliary task since color edges oftentimes provide detailed key information related to both the semantic as well as the depth structure of the scene. Many edge detectors have been proposed over the years, with recent deep learning based approaches outperforming classical hand-crafted methods even in the most challenging scenarios \cite{soria2020dexined, HED, Wang_2019}.
Interestingly, such deep models present good generalization capabilities, allowing us to use the state-of-the-art approach \cite{soria2020dexined} to generate proxy supervision for the auxiliary task without extra labels. Thanks to our formulation, we can use a fully supervised and completely synthetic domain (i.e.\@\xspace, the Carla simulator~\cite{Dosovitskiy17}) to improve the performance on a partially labeled real domain (i.e.\@\xspace, Cityscapes~\cite{Cordts_2016_CVPR}). The contributions of this paper can be summarized as follows: \begin{itemize} \item We propose for the first time to study a cross-domain and cross-task problem where supervision for all tasks is available in one domain whilst only for a subset of them in the other. This is done by learning a mapping between deep representations. \item We demonstrate how explicitly constraining deep features across domains with a novel norm discrepancy alignment loss improves the learning of the mapping function. \item We further show how the learning of the mapping function can be improved by deploying an auxiliary task. \item Considering the dense prediction tasks of monocular depth estimation and semantic segmentation, we achieve results close to the practical upper bound when transferring knowledge between a synthetic and a real domain. \end{itemize} \section{Related Works} \subsection{Transfer Learning and Task Transfer} Collecting training data is often expensive, time-consuming, or even unrealistic in many scenarios. Many works have tackled this problem by exploiting the existence of a relationship between the weights of CNNs trained for different tasks \cite{zhuang2019comprehensive}. In particular, \cite{yosinski2014transferable} showed that this strategy, referred to as transfer learning, can lead to better results than using random initialization even if applied to quite diverse tasks.
Transfer learning has become a common practice, for instance, in object detection, where networks are usually initialized with Imagenet \cite{deng2009imagenet} classification weights \cite{redmon2016you,Ren_2017,He_2017,liu2016ssd}. Additional insights on the transferability of learned representations between different visual tasks were provided in \cite{zamir2018taskonomy}, where the authors present Taskonomy, a computational approach to represent a taxonomy of relationships among visual tasks. Along similar lines, \cite{Pal_2019_CVPR} proposed to exploit the correlation between known supervised tasks and novel target tasks, in order to predict the parameters of models deployed to solve the target tasks starting from the parameters of networks trained on the known tasks. While \cite{zamir2018taskonomy} and \cite{Pal_2019_CVPR} study the correlation between tasks in a given domain and assume either full or no supervision, we explicitly address a multi-domain scenario assuming full supervision in one domain and partial supervision in the target one. \subsection{Domain Adaptation} Domain adaptation techniques aim at reducing the performance drop of a model deployed on a domain different from the one the model was trained on \cite{Wang_2018}. Throughout the years, adaptation has been performed at different levels. Early approaches tried to model a shared feature space relying on statistical metrics such as MMD \cite{gong2012geodesic,long15Learning}. Later, some works proposed to align domains by adversarial training \cite{ganin2015unsupervised,ganin2016domain,tzeng2017adversarial}. Recently \cite{xu2019larger} noticed that, for classification tasks, aligning feature norms to an arbitrarily large value results in better transferability across domains. 
Generative adversarial networks \cite{goodfellow2014generative} have also been employed to perform image-to-image translation between different domains \cite{Zhu_2017_ICCV,Bousmalis_2017,Isola_2017_CVPR}, and, in particular, to render cheaply labelled synthetic images similar to real images from a target domain. However, when dealing with dense tasks such as semantic segmentation, feature-based domain adaptation approaches tend to fail \rev{as deeply discussed in \cite{Tsai_2018}}. Thus, several approaches to address domain adaptation for dense tasks, such as semantic segmentation \cite{hong2018conditional,pizzati2020domain,hoffman2016fcns,zhang2017curriculum,ramirez2018exploiting,chang2019all,Tsai_2018,hoffman2018cycada,shrivastava2017learning,Zhang_2018,Sankaranarayanan_2018, pan2020unsupervised, kim2020learning} or depth estimation \cite{Tonioni_2017_ICCV,Zheng_2018_ECCV,Tonioni_2019_CVPR} have been proposed recently. \rev{Among them, SPIGAN \cite{lee2018spigan} uses extra supervision coming from synthetic depth of the source domain to improve the quality of an image-to-image translation network, consequently achieving better adaptation performance.} Akin to DA methods, we learn from a labeled source domain to perform well on a different target domain. However, unlike the classical DA setting, we assume the existence of an additional task where supervision is available for both domains. \subsection{Multi-task Learning} The goal of multi-task learning is to solve many tasks simultaneously. By pursuing this rather than solving the tasks independently, a neural network may use more information to obtain more robust and reliable predictions. Many works try to tackle several tasks jointly \cite{Kokkinos_2017,ramirez2018geometry,Cipolla_2018, tosi2020distilled}. For example, \cite{Cipolla_2018} showed that by learning to correctly weigh each task loss, multi-task learning methods can outperform separate models trained individually.
\cite{ramirez2018exploiting, tosi2020distilled} show how learning multiple perception tasks jointly while enforcing geometrical consistency across them can lead to better performance for almost all tasks. Recently, \cite{zamir2020robust} proposes a method to improve the performance of multiple single-task networks by imposing consistency across them during training. Finally, Taskonomy \cite{zamir2018taskonomy} investigates the relationship between the deployed tasks to accomplish multi-task learning effectively. However, multi-task learning approaches usually try to achieve the best balance between tasks in a single-domain scenario. We instead tackle a multi-task and multi-domain problem. Nevertheless, taking inspiration from multi-task learning, we show how jointly learning an auxiliary task while learning the two task networks helps the alignment of features across tasks. \subsection{Task Transfer and Domain Adaptation} Most existing approaches independently address either task transfer or domain adaptation. Yet, a few works have proposed to tackle these two problems jointly. \cite{tzeng2015simultaneous} was the first paper to propose a cross-task and cross-domain adaptation approach, considering different image classification problems as tasks. UM-Adapt \cite{kundu2019adapt}, instead, learns a cross-task distillation framework with full supervision on the source domain and deploys such framework on the target domain in a fully unsupervised manner, while adversarially minimizing the discrepancy between the two domains. Differently, in a preliminary version of this work \cite{ramirez2019learning}, we introduced AT/DT{} (Across Tasks and Domains Transfer) and set forth a novel learning framework, where the relationship between a set of tasks is learned on the source domain and it is later deployed to solve a specific task on the target domain without supervision thanks to the availability of ground-truth for all the tasks except the target one.
In this work we will expand and improve this methodology. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{Images/atdt_original.jpg} \caption{AT/DT framework: here $N_1$ and $N_2$ are trained separately to solve tasks $\mathcal{T}_1${} and $\mathcal{T}_2${}. While $N_2$ is trained only on images from domain $\mathcal{A}${}, $N_1$ is trained jointly on both domain $\mathcal{A}${} and domain $\mathcal{B}${}, to enable the extraction of domain invariant features. Then, encoders from the two networks are frozen and used to learn the transfer function $G_{1\rightarrow2}${}, which aims at transforming features extracted for $\mathcal{T}_1${} into features that are good for $\mathcal{T}_2${}. This step is performed only on domain $\mathcal{A}${}, since we have no supervision for $\mathcal{T}_2${} on domain $\mathcal{B}${}. Finally, at inference time, features are extracted from $E_1$ starting from images of domain $\mathcal{B}${}, transformed with $G_{1\rightarrow2}${} and fed to $D_2$ to produce the final predictions.} \label{fig:atdt_original} \end{figure} \section{Method}\label{sec:method} \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{Images/framework.jpg} \caption{Feature alignment strategies across tasks and domains. We jointly train the networks $N_1$, $N_2$ and a shared auxiliary decoder $D_{aux}$. We train $N_1$ to solve $\mathcal{T}_1${} on images from domains $\mathcal{A}${} and $\mathcal{B}${} using a supervised loss \losstask{1} for $\mathcal{T}_1$ alongside a novel feature Norm Discrepancy Alignment loss $\mathcal{L}_{NDA}$ which helps to better align the features computed by $N_1$ across the two domains. We train $N_2$ using a supervised loss \losstask{2} for $\mathcal{T}_2$ on images from $\mathcal{A}${}.
$D_{aux}$ is trained to solve an auxiliary task $\mathcal{T}_{aux}$ using the loss $\mathcal{L}_{aux}$ and based on the features computed by $E_1$ on images from $\mathcal{A}${} and $\mathcal{B}${} as well as by $E_2$ on images from $\mathcal{B}${}.} \label{fig:framework} \end{figure*} We introduce the problem we are trying to solve with a practical example. Imagine we aim to solve semantic segmentation in a real domain but we only have labels for a closely related task (e.g.\@\xspace, depth estimation). Moreover, let us suppose we have access to a synthetic domain, where labels can be easily obtained for both tasks. Unsupervised domain adaptation may be used in this synthetic-to-real scenario. However, we wish to go one step further, trying to answer this question: can we exploit the depth estimation task to boost the performance of semantic segmentation in the real domain? The answer is yes, thanks to our novel framework AT/DT{}. In AT/DT{} we first learn a mapping function in the synthetic domain between deep features of two networks trained for depth estimation and semantic segmentation. This mapping function captures the relationship between the two tasks. Once learned, we use the mapping on depth features extracted from real samples to solve semantic segmentation in the real domain without the need of labels for it, thereby transferring knowledge across tasks and domains. To further improve performance, we propose two strategies aimed at increasing the transferability of the learned features, namely leveraging a norm discrepancy alignment loss and an auxiliary task. In the following sub-sections, we first describe the base AT/DT{} framework and then delineate its improved formulation which includes the norm discrepancy alignment loss and auxiliary task. \subsection{Notation} We consider two tasks, $\mathcal{T}_1${} and $\mathcal{T}_2${}, as well as two domains, $\mathcal{A}${} and $\mathcal{B}${}.
We denote the images belonging to $\mathcal{A}${} and $\mathcal{B}${} as $x^{\mathcal{A}}$ and $x^{\mathcal{B}}$, respectively. We have labels for $\mathcal{T}_1${} in $\mathcal{A}${} and $\mathcal{B}${}, denoted as $y^\mathcal{A}_1$ and $y^\mathcal{B}_1$, respectively. On the other hand, we have labels for $\mathcal{T}_2${} only in $\mathcal{A}${}, denoted as $y^\mathcal{A}_2$. Our aim is to solve $\mathcal{T}_2${} in $\mathcal{B}${}, where we do not have supervision. We assume $\mathcal{T}_1${} and $\mathcal{T}_2${} to be both dense tasks, which can therefore be addressed by an encoder-decoder architecture. We denote as $N_1$ and $N_2$ two networks that solve $\mathcal{T}_1${} and $\mathcal{T}_2${}, respectively. Each network $N_k, k\in \{1,2\}$ consists of an encoder $E_k$ and a decoder $D_k$, such that $N_k(x) = D_k(E_k(x))$, $x$ being the input image. \subsection{Across Tasks and Domains Transfer} \label{subsec:atdt} In our AT/DT{} framework we aim at learning the relationships between $\mathcal{T}_1${} and $\mathcal{T}_2${} through a neural network. This is achieved by 3 steps, each represented as a block in \autoref{fig:atdt_original}:\\ \textbf{Training $N_1$ and $N_2$.} We train $N_1$ and $N_2$ to solve $\mathcal{T}_1${} and $\mathcal{T}_2${}. Since we assume supervision for $\mathcal{T}_1${} on both domains, $N_1$ is trained with images from $\mathcal{A}${} and $\mathcal{B}${}. This enables $N_1$ to learn a feature space shared across the two domains. $N_2$, instead, is trained only on $\mathcal{A}${}. Both networks are trained with a specific supervised task loss \losstask{k} for $\mathcal{T}_k$.\\ \textbf{Training $G_{1\rightarrow2}${}.} Considering only domain $\mathcal{A}${}, where we have supervision for both tasks, we then train a transfer network $G_{1\rightarrow2}${} to map the features computed by $N_1$, $f_1^{\mathcal{A}}=E_1(x^{\mathcal{A}})$, into those computed by $N_2$, $f_2^{\mathcal{A}}=E_2(x^{\mathcal{A}})$. 
Denoting the transferred features as $f_{1\rightarrow2}^{\mathcal{A}}={G_{1\rightarrow2}}(f_1^{\mathcal{A}})$, we train the transfer network by minimizing the $L_2$ loss: \begin{equation} \mathcal{L}_{Tr}= ||f_{1\rightarrow2}^{\mathcal{A}} - f_2^{\mathcal{A}}||_2 \end{equation} \textbf{Inference.} Once $G_{1\rightarrow2}${} has been trained, we can address $\mathcal{T}_2${} in $\mathcal{B}${} by computing the features to solve $\mathcal{T}_1${}, $f_1^{\mathcal{B}}=E_1(x^{\mathcal{B}})$, transforming them into features amenable to $\mathcal{T}_2${}, $f_{1\rightarrow2}^{\mathcal{B}}={G_{1\rightarrow2}}(f_1^{\mathcal{B}})$, and finally decoding these features into the required dense output by $D_2$: \begin{equation} \hat{y}^B_2 = D_2(f_{1\rightarrow2}^{\mathcal{B}}) \end{equation}\\ After presenting the base AT/DT{} framework, in the next sub-sections we will describe two strategies deployed to boost the feature alignment across domains and tasks. \autoref{fig:framework} provides a detailed view of these two strategies which in our final proposed framework replace the initial steps of the training protocol (i.e.\@\xspace, Training $N_1$ and $N_2$). \begin{figure*}[htbp] \centering \includegraphics[width=15cm]{Images/teaser.jpg} \caption{Two task transfer scenarios: depth-to-semantic on the left, the opposite on the right. First row: ground-truth depth and semantic segmentation maps; second row: corresponding edge maps. Red circles highlight information needed in the target task but missing in the source one. } \label{fig:teaser} \end{figure*} \subsection{Feature Alignment Across Domains}\label{sec:nda} For the effectiveness of the approach delineated in \autoref{subsec:atdt}, it is crucial that $G_{1\rightarrow2}${} can generalize well to the target unseen domain $\mathcal{B}${} even if trained only with data from the source domain $\mathcal{A}${}. The DA literature presents us with several ways to accomplish this.
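Before turning to these alignment strategies, the role of the transfer network of \autoref{subsec:atdt} can be made concrete with a toy numerical sketch. The snippet below is only an illustration under strong simplifying assumptions: features are random vectors rather than encoder activations, and $G_{1\rightarrow2}$ is replaced by a linear map fitted in closed form instead of a convolutional network trained by gradient descent; all names in the snippet are hypothetical.

```python
import numpy as np

# Toy stand-in for the AT/DT transfer step: the real G_{1->2} is a deep
# convolutional network trained with an L2 loss; here a linear map fitted
# in closed form plays its role on synthetic feature vectors.
rng = np.random.default_rng(0)
C, N_A, N_B = 8, 200, 50  # feature channels; samples in domains A and B

# Stand-ins for E1/E2 features on the supervised domain A: f2_A is an
# unknown linear function of f1_A plus noise, modelling the task relationship.
f1_A = rng.normal(size=(N_A, C))
task_map = rng.normal(size=(C, C))
f2_A = f1_A @ task_map + 0.01 * rng.normal(size=(N_A, C))

# "Training G_{1->2}": minimise the L2 loss ||G(f1_A) - f2_A|| over G.
G, *_ = np.linalg.lstsq(f1_A, f2_A, rcond=None)

# "Inference": apply the map learned on A to features from domain B; in the
# framework, the decoder D2 would then turn f12_B into the dense prediction.
f1_B = rng.normal(size=(N_B, C))
f12_B = f1_B @ G

# Measure how well the mapping fitted on A transfers to unseen B features.
err = np.linalg.norm(f12_B - f1_B @ task_map) / np.linalg.norm(f1_B @ task_map)
print(f"relative transfer error on domain B: {err:.4f}")
```

As in the framework, the mapping is fitted only where supervision for both tasks exists (domain $\mathcal{A}$) and is then applied unchanged to the other domain; this works only insofar as the two domains share a feature space, which motivates the alignment strategies discussed next.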
One may operate on the input space \cite{Bousmalis_2017}, on the feature space \cite{tzeng2017adversarial} or on the output space of the network \cite{Tsai_2018}. In our setting, though, both input and output space of $G_{1\rightarrow2}${} are high dimensional latent spaces and, as reported in \cite{Tsai_2018}, unsupervised domain adaptation techniques tend to fail when applied to such spaces while addressing dense tasks. Yet, we can address the domain shift issue with a direct approach in the input space of $G_{1\rightarrow2}${}, i.e.\@\xspace, the feature space of $N_1$, which is already shared between $\mathcal{A}${} and $\mathcal{B}${} due to the network being trained with supervision on images from both domains. \rev{ We leverage the intuition that scene spatial priors are typically domain invariant in many adaptation scenarios. We consider this a reasonable assumption for several domain adaptation settings, where we select the source domain by considering visual similarities with the target domain. For instance, in autonomous driving scenarios we typically have cameras placed from a car viewpoint, and scenes are urban scenarios in both synthetic \cite{Dosovitskiy17, Richter_2017} and real \cite{Cordts_2016_CVPR, yu2020bdd100k, Geiger2013IJRR} datasets. Thus, if we consider the task of semantic segmentation in all datasets (synthetic and real) we typically find \textit{road} pixels in the bottom part of the images and \textit{sky} pixels in the top part of the images. To visualize this property we select CARLA \cite{Dosovitskiy17} as the synthetic domain $\mathcal{A}${} and Cityscapes \cite{Cordts_2016_CVPR} as the real domain $\mathcal{B}${}. Then, we count for each pixel location the number of occurrences of each class. We show the result of this experiment in \autoref{fig:sp_priors}, using a \textit{viridis} colormap to display these occurrence maps for each class and for both domains $\mathcal{A}${} and $\mathcal{B}${}.
We can clearly see that the maps have a similar structure across domains, e.g., buildings are concentrated in the top image regions. Leveraging this property, we propose to align more closely the features computed by $E_1$ on the images from both domains, i.e.\@\xspace, $f_1^A$ and $f_1^B$, by enforcing similarity of the $L_2$ norms across channels at the same spatial location.} Starting from features $f_1^A$ and $f_1^B$ of dimensionality $H\times W \times C$, where $H$, $W$ and $C$ are the height, width and number of channels of the feature maps, we calculate the $L_2$ norm along the $C$ axis and minimize the absolute difference at each spatial location $i,j$. Hence, our NDA (Norm Discrepancy Alignment) Loss is defined as follows: \begin{equation} \mathcal{L}_{NDA} = \frac{1}{W\times H} \sum_{i=1}^H\sum_{j=1}^W \left | \lVert f^A_{1_{i,j}} \rVert_2 - \lVert f^B_{1_{i,j}} \rVert_2 \right | \end{equation} \begin{figure*}[ht] \centering \setlength{\tabcolsep}{1pt} \begin{tabular}{cccccccc} $\mathcal{A}${} & $\mathcal{B}${} & $\mathcal{A}${} & $\mathcal{B}${} & $\mathcal{A}${} & $\mathcal{B}${} & $\mathcal{A}${} & $\mathcal{B}${} \\ \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/carla/carla_road.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/cs/cs_road.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/carla/carla_sidewalk.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/cs/cs_sidewalk.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/carla/carla_wall.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/cs/cs_wall.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/carla/carla_fence.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/cs/cs_fence.jpg} \\ \multicolumn{2}{c}{\textit{Road}} & \multicolumn{2}{c}{\textit{Sidewalk}} & \multicolumn{2}{c}{\textit{Wall}} & \multicolumn{2}{c}{\textit{Fence}} \\
$\mathcal{A}${} & $\mathcal{B}${} & $\mathcal{A}${} & $\mathcal{B}${} & $\mathcal{A}${} & $\mathcal{B}${} & $\mathcal{A}${} & $\mathcal{B}${} \\ \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/carla/carla_person.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/cs/cs_person.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/carla/carla_pole.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/cs/cs_pole.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/carla/carla_vegetation.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/cs/cs_vegetation.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/carla/carla_vehicle.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/cs/cs_vehicle.jpg}\\ \multicolumn{2}{c}{\textit{Person}} & \multicolumn{2}{c}{\textit{Pole}} & \multicolumn{2}{c}{\textit{Vegetation}} & \multicolumn{2}{c}{\textit{Vehicle}} \\ & $\mathcal{A}${} & $\mathcal{B}${} & $\mathcal{A}${} & $\mathcal{B}${} & $\mathcal{A}${} & $\mathcal{B}${} & \\ & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/carla/carla_traffic.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/cs/cs_traffic.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/carla/carla_building.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/cs/cs_building.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/carla/carla_sky.jpg} & \includegraphics[width=0.12\textwidth]{Images/sp_priors/cs_carla/cs/cs_sky.jpg} & \\ & \multicolumn{2}{c}{\textit{Traffic Signs}} & \multicolumn{2}{c}{\textit{Building}} & \multicolumn{2}{c}{\textit{Sky}} & \\ \end{tabular} \caption{\rev{Spatial Priors Similarities Across Domains. Considering the semantic segmentation task, we compute the number of occurrences of each class at each pixel location for both domains.
Domain $\mathcal{A}${} is CARLA, $\mathcal{B}${} is Cityscapes. We visualize the occurrence maps with a \textit{viridis} colormap.}} \label{fig:sp_priors} \end{figure*} \subsection{Feature Alignment Across Tasks}\label{sec:transfer} While the NDA loss presented above aims at improving the generalization across domains of the feature mapping network $G_{1\rightarrow2}${}, its effectiveness can be further improved by aligning features also across tasks. Accordingly, we conjecture that $f_1$ features should capture as much information as possible on the details of the scene, even though some of this information may not be necessary to solve $\mathcal{T}_1${}, because, when transferred by $G_{1\rightarrow2}${}, such a richer representation could help to solve $\mathcal{T}_2${} more effectively. For this reason, while training $N_1$ for $\mathcal{T}_1${}, we jointly train an additional decoder, $D_{aux}$, to solve an auxiliary task, $\mathcal{T}_{aux}${}, aimed at enriching the learnt representation $f_1$.
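For concreteness, the NDA loss defined in the previous subsection admits a direct implementation; below is a minimal NumPy sketch operating on a single pair of feature maps laid out as $H\times W\times C$ (our actual training code operates on batched tensors):

```python
import numpy as np

def nda_loss(f1_a, f1_b):
    """Norm Discrepancy Alignment loss: mean absolute difference between
    the per-location L2 norms (taken over the channel axis) of the
    H x W x C feature maps computed by E1 on domains A and B."""
    norm_a = np.linalg.norm(f1_a, axis=-1)  # H x W map of channel norms
    norm_b = np.linalg.norm(f1_b, axis=-1)
    return float(np.abs(norm_a - norm_b).mean())
```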
However, though multi-task learning of $\mathcal{T}_1${} and $\mathcal{T}_{aux}${} could help to encode more detailed information into $f_1$ features, it does not guarantee that the decoder $D_2$, used at inference time on the features $f_{1 \rightarrow 2}$ transferred from $\mathcal{T}_1$ to $\mathcal{T}_2$, can effectively exploit this additional information if it has been trained only to solve $\mathcal{T}_2${} in isolation. This suggests that $D_{aux}$ should be trained jointly with $N_2$ too, such that the additional information required to solve $\mathcal{T}_{aux}${} may be incorporated also within the features $f_2$ learnt by $E_2$. Therefore, given auxiliary task labels $y^A_{aux}$ and $y^B_{aux}$ for $\mathcal{A}${} and $\mathcal{B}${}, we train $N_1$ and $N_2$ jointly with a single auxiliary decoder $D_{aux}$ using an auxiliary loss $\mathcal{L}_{aux}$. To this end, we obtain auxiliary predictions from both encoders with the shared decoder $D_{aux}$ as $\hat{y}_{k_{aux}}=D_{aux}(E_k(x))$, $k \in \{1,2\}$. Similarly to the simpler formulation of our framework presented in \autoref{subsec:atdt}, to compute the auxiliary loss we feed images of both domains through $E_1$, while we pass only images from $\mathcal{A}${} through $E_2$. We do not pass images belonging to $\mathcal{B}${} through $E_2$ while training $D_{aux}$, since this would be the only kind of supervision for $E_2$ on $\mathcal{B}${} and it may skew the output of $E_2$ to be more effective on $\mathcal{T}_{aux}${} than on $\mathcal{T}_2${}.
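The routing of this auxiliary supervision can be summarized in a small sketch, where `E1`, `E2`, `D_aux` and `loss_fn` stand for the two encoders, the shared auxiliary decoder and $\mathcal{L}_{aux}$; the names and functional interface are illustrative:

```python
def aux_supervision(E1, E2, D_aux, loss_fn, x_a, x_b, y_aux_a, y_aux_b):
    """Shared auxiliary decoder: D_aux sees E1 features from both domains,
    but E2 features only from the source domain A, so that auxiliary
    supervision never becomes the sole training signal for E2 on B."""
    loss = loss_fn(D_aux(E1(x_a)), y_aux_a)   # y_hat_1_aux on A
    loss += loss_fn(D_aux(E1(x_b)), y_aux_b)  # y_hat_1_aux on B
    loss += loss_fn(D_aux(E2(x_a)), y_aux_a)  # y_hat_2_aux on A only
    return loss
```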
\rev{ \subsection{Overall $N_1$ and $N_2$ Loss} When training $N_1$ and $N_2$ simultaneously, the overall loss is: \begin{equation} \begin{split} \mathcal{L} = \lambda_{T_1}\mathcal{L}_{T_1}(y_1^A, \hat{y}_1^A) + \lambda_{T_1}\mathcal{L}_{T_1}(y_1^B, \hat{y}_1^B) \\ + \lambda_{T_2}\mathcal{L}_{T_2}(y_2^A, \hat{y}_2^A) + \lambda_{aux}\mathcal{L}_{aux}(y_{1_{aux}}^A, \hat{y}_{1_{aux}}^A) \\ + \lambda_{aux}\mathcal{L}_{aux}(y_{1_{aux}}^B, \hat{y}_{1_{aux}}^B) + \lambda_{aux}\mathcal{L}_{aux}(y_{2_{aux}}^A, \hat{y}_{2_{aux}}^A) \\ + \lambda_{NDA}\mathcal{L}_{NDA}(f_1^A, f_1^B) \end{split} \end{equation} } \section{Experimental Settings}\label{sec:architecture} \textbf{Tasks.} We fix $\mathcal{T}_1${} and $\mathcal{T}_2${} to be monocular depth estimation and semantic segmentation, or vice versa. These two visual tasks can be addressed using the same encoder-decoder architecture, with changes needed only in the final layer. Semantic segmentation is solved by minimizing a pixel-wise cross entropy loss, monocular depth estimation by minimizing an $L_1$ loss. We select edge detection as our $\mathcal{T}_{aux}${} since it seems particularly amenable to improving the effectiveness of our framework in capturing and transferring important structural information that might otherwise be lost. Let us consider the case of $\mathcal{T}_1${} being depth estimation and $\mathcal{T}_2${} semantic segmentation. The features $f_1$ needed to compute depth may ignore the boundaries between semantically distinct regions showing up at the same distance from the camera: in \autoref{fig:teaser} (left) this is the case, e.g.\@\xspace, of the boundaries between legs or tyres and ground, as well as between street signs and poles. Therefore, even if fed to a perfect $G_{1\rightarrow2}${}, $f_1$ may not contain all the information needed to restore the semantic structure of the image.
By jointly solving edge detection on the input image, instead, we force our $N_1$ network to extract additional information that would not need to be captured should the learning objective be concerned with depth estimation only. Similarly, \autoref{fig:teaser} (right) highlights how depth discontinuities do not necessarily correspond to semantic boundaries, such that a network $N_1$ trained in isolation to assign semantic labels to pixels may not need to learn information relevant to estimate the depth structure of the image. Besides, it is worth pointing out that edge detection can be solved by reusing the same decoder architecture as $\mathcal{T}_1${} and $\mathcal{T}_2${}. Since the edge proxy-labels that we adopt are gray-scale images \cite{soria2020dexined}, in our experiments we implement the $\mathcal{L}_{aux}$ loss introduced in \autoref{sec:transfer} as a standard $L_2$ loss. \rev{In all our experiments we set $\lambda_{aux}$ to 0.5, $\lambda_{NDA}$ to 0.001, $\lambda_{T_1}$ and $\lambda_{T_2}$ to 1 to balance loss values.} \\ \textbf{Datasets.} We test the effectiveness of our method in an autonomous driving scenario. We set $\mathcal{A}${} and $\mathcal{B}${} to be a synthetic and a real dataset, respectively. The former consists of a collection of images generated with the Carla simulator \cite{Dosovitskiy17}, while the latter is the popular Cityscapes dataset \cite{Cordts_2016_CVPR}. We generated the Carla dataset mimicking the camera settings of the real scenes. We render 3500, 500, and 1000 images for training, validation, and testing, respectively. For each image, we store the associated depth and semantic labels provided by the simulator. The Cityscapes dataset is a collection of 2975 and 500 images to be used for training and validation, respectively. As for our evaluation, we use the 500 Cityscapes validation images since test images are not equipped with labels.
Moreover, as Cityscapes provides only semantic labels, we use depth proxy-labels obtained with the SGM stereo algorithm \cite{hirschmuller2005accurate}, filtering the erroneous predictions in the generated disparities with a left-right consistency check. This can be considered an added value, because it shows the ability to transfer knowledge when learning from noisy labels. Finally, we use a pre-trained\footnote{Neither $\mathcal{A}${} nor $\mathcal{B}${} belong to the training set of this network.} state-of-the-art neural network \cite{soria2020dexined} as an off-the-shelf edge detector to extract from the images belonging to $\mathcal{A}${} and $\mathcal{B}${} the edges used as proxy-labels to train $\mathcal{T}_{aux}${}.\\ \textbf{Architecture.} To solve each task, we use a dilated ResNet50 \cite{Yu2017} as encoder and a stack of bilinear upsampling plus convolutional layers as decoder. The encoder shrinks both spatial dimensions of the input by a factor of 16, while the decoder upsamples the feature map until a prediction with the same spatial resolution as the input image is obtained. The two networks for $\mathcal{T}_1${} and $\mathcal{T}_2${} are identical except for the final prediction layer, which is task dependent. The two previously defined encoders are also used to capture good features for edge detection, which is solved using $D_{aux}$, which shares the same architecture as the decoders used in $N_1$ and $N_2$. $G_{1\rightarrow2}${} is a simple CNN made of 6 pairs of convolutional and batch normalization layers with kernel size $3\times3$, which do not perform any downsampling or upsampling operation. \\ \textbf{Training and Evaluation Protocol.} During the training phase of the transfer network $G_{1\rightarrow2}${}, the model is evaluated on the validation set of Carla. Of course, it is possible that optimality on Carla does not translate into optimal performance on Cityscapes.
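The left-right consistency check used to filter the SGM proxy-labels can be sketched as follows; the threshold value and the nearest-pixel sampling are our assumptions here, as the exact filtering parameters are implementation details:

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, thresh=1.0):
    """Mark a left-view disparity as valid only if the right-view disparity,
    sampled at the location the left disparity points to, agrees within
    `thresh` pixels; invalid pixels are discarded from the proxy-labels."""
    h, w = disp_left.shape
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    # column in the right image that each left pixel maps to
    right_cols = np.clip(np.round(cols - disp_left).astype(int), 0, w - 1)
    resampled = disp_right[rows, right_cols]
    return np.abs(disp_left - resampled) <= thresh
```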
Yet, we cannot use data from the target domain either for hyper-parameter tuning or for early stopping, because in our setting these data would not be available in any real scenario. Therefore, the Cityscapes validation set is only used at test time to measure the final performance of our framework. \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{Images/edge_details.jpg} \caption{From left to right: RGB input image of domain $\mathcal{A}${}, depth prediction from $N_1$, edges from $f_1$, semantic segmentation from $N_2$ and edges from $f_2$. Task features $f_1$ and $f_2$ encode richer details than strictly needed to solve either task, as $D_{aux}$ can recover all edges from both of them.} \label{fig:edge_details} \end{figure} \begin{table*}[!htbp] \center \caption{Experimental results of $Dep \rightarrow Sem${} scenario. Baseline stands for $N_2$ trained on $\mathcal{A}${} and tested on $\mathcal{B}${}, Transfer Oracle represents $G_{1\rightarrow2}${} trained only on $\mathcal{B}${}, Oracle refers to $N_2$ trained and tested on $\mathcal{B}${}. Best results highlighted in bold.} \label{tab:depth2sem} \setlength{\tabcolsep}{2.5pt} \scalebox{1}{ \begin{tabular}{cc|c|ccccccccccc|cc} \toprule $\mathcal{A}${} & $\mathcal{B}${} & Method & \rotatebox{90}{Road} & \rotatebox{90}{Sidewalk} & \rotatebox{90}{Walls} & \rotatebox{90}{Fence} & \rotatebox{90}{Person} & \rotatebox{90}{Poles} & \rotatebox{90}{Vegetation} & \rotatebox{90}{Vehicles} & \rotatebox{90}{Tr.
Signs} & \rotatebox{90}{Building} & \rotatebox{90}{Sky} & \textbf{mIoU} & \textbf{Acc} \\ \midrule Carla{} & CS & Baseline & 78.99 & 38.81 & 1.34 & 5.80 & 24.02 & 24.47 & 71.98 & 52.23 & 5.57 & 65.17 & 59.10 & 38.86 & 78.58 \\ Carla{} & CS & ZDDA \cite{peng2018zero} & 85.93 & 41.28 & 4.62 & 8.63 & 38.80 & 25.94 & 72.78 & 58.37 & 18.44 & 73.74 & \textbf{78.16} & 46.06 & 82.82 \\ Carla{} & CS & \textbf{AT/DT{}} & \textbf{90.57} & \textbf{48.46} & \textbf{7.37} & \textbf{12.27} & \textbf{41.16} & \textbf{31.90} & \textbf{81.96} & \textbf{72.77} & \textbf{23.44} & \textbf{77.85} & 76.33 & \textbf{51.28} & \textbf{87.57} \\ \midrule CS & CS & Transfer Oracle & 89.69 & 48.05 & 11.46& 29.58 & 59.68 & 35.84 & 85.83 & 85.57 & 34.03 & 78.17 & 85.54 & 58.50 & 88.84\\ - & CS & Oracle & 96.74 & 78.28 & 29.26& 40.78 & 72.39 & 51.28 & 90.69 & 91.94 & 58.92 & 86.33 & 89.23 & 71.44 & 93.90\\ \bottomrule \end{tabular} } \end{table*} \begin{figure*}[!htbp] \centering \includegraphics[width=0.9\textwidth]{Images/qualitative_sem.jpg} \caption{Qualitative results of the $Dep \rightarrow Sem${} scenario. From left to right: RGB image, ground-truth, baseline trained only on domain $\mathcal{A}${}, ours.} \label{fig:qualitatives_depsem} \end{figure*} \textbf{Metrics.} To evaluate the performance on the semantic segmentation task two metrics are used: pixel accuracy, shortened \emph{Acc.} (i.e.\@\xspace, the percentage of pixels with a correct predicted label), and Mean Intersection Over Union, shortened \emph{mIoU}, as defined in \cite{Cordts_2016_CVPR}. To render these metrics comparable among the datasets, we solve semantic segmentation on the 10 shared classes (Road, Sidewalk, Walls, Fence, Person, Poles, Vegetation, Vehicles, Traffic Signs, Building) plus the Sky category, which is defined as the set of points with infinite depth. Some of the Cityscapes classes are collapsed into one class: car and bicycle into vehicle, traffic signs and traffic light into traffic sign.
The remaining categories of Cityscapes are instead ignored.\\ When testing the depth estimation task, we report the standard metrics described in \cite{eigen2014depth}: Absolute Relative Error (Abs Rel), Square Relative Error (Sq Rel), Root Mean Square Error (RMSE), logarithmic RMSE and $\delta_1$, $\delta_2$ and $\delta_3$ accuracy scores. Each $\delta_\alpha$ is obtained by computing, for each pixel of the input image, the maximum between the ratio and the inverse ratio of the predicted value and the ground-truth. $\delta_\alpha$ represents the percentage of pixels for which such ratio is lower than $1.25^\alpha$. \section{Experimental Results} We provide results for two different settings: transferring features from depth estimation to semantic segmentation (\autoref{sec:depsem}) as well as from semantic segmentation to depth estimation (\autoref{sec:semdep}). In both scenarios, as already mentioned, we use edge detection as auxiliary task, motivated by the idea that both semantic segmentation and depth estimation can benefit from edge information. \autoref{fig:edge_details} shows that with our multi-task learning protocol we are able to restore all the details of the scene from both $f_1$ and $f_2$, proving that $N_1$ and $N_2$ have indeed learned to encode into their features richer information than that strictly needed to solve $\mathcal{T}_1${} and $\mathcal{T}_2${}. \subsection{Depth to Semantics}\label{sec:depsem} In this setup, denoted as $Dep \rightarrow Sem${}, the goal of our framework is to transform depth features into semantic segmentation features. This mapping is learned using Carla{} as domain $\mathcal{A}${} and Cityscapes{} as domain $\mathcal{B}${}.
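Before discussing results, we make the $\delta_\alpha$ scores defined in the previous section concrete with a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def delta_accuracy(pred, gt, alpha=1):
    """delta_alpha score: fraction of pixels whose max(pred/gt, gt/pred)
    is below 1.25 ** alpha (pred and gt are positive depth maps)."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float((ratio < 1.25 ** alpha).mean())
```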
We report results in \autoref{tab:depth2sem}: the first row shows results obtained with no adaptation (i.e.\@\xspace, training $N_2$ on Carla{} and testing it directly on Cityscapes{}), while from the second row we can see that our final framework yields 51.28\% mIoU and 87.57\% Acc, with an improvement of +12.48\% and +8.99\% wrt the baseline. \rev{Even though AT/DT{} is the first work to address the combined across-tasks-and-domains scenario, we compare it against a related work, ZDDA \cite{peng2018zero}, which also leverages auxiliary data from a different task to perform domain adaptation. We apply it in our setup using Carla{} and Cityscapes{} as the ``Source'' and ``Target'' domains, respectively. We address the $Dep \rightarrow Sem${} scenario using depth maps as ``task-irrelevant'' data. We skip the last sensor fusion step (Step 3) because it is not applicable in our scenario, since we do not have task-irrelevant data at test time, and thus we stop training after the adaptation step (Step 2). We report the results of this alternative approach in the second row of \autoref{tab:depth2sem}. As we can notice, ZDDA is effective in our scenario and achieves better performance than the baseline. However, AT/DT{} obtains much better results, surpassing ZDDA in all metrics. This is not surprising, since ZDDA focuses on extracting features only from task-irrelevant data, which can be sub-optimal for the relevant task as these data do not provide the same amount of information as the task-relevant data; e.g., features extracted only from depth images would lack information useful for semantic segmentation such as colors or textures. } Furthermore, as we are transferring features from another task, it is worth investigating the upper bound in performance due to the inherent transferability of the features between the two tasks.
To this end, we train $G_{1\rightarrow2}${} using only Cityscapes, learning the mapping function in a supervised fashion as explained in \autoref{sec:transfer} on $\mathcal{B}${} and testing on the validation set of $\mathcal{B}${}. These results are shown in the third row of the table (denoted as Transfer Oracle): given a transfer architecture, there seems to be an upper bound in performance due to the nature of the two tasks, which in the considered setting amounts to 58.5\% mIoU. Thus, our proposal exhibits a gap wrt the Transfer Oracle of only about -7.2\% mIoU. We also report the performance of $N_2$ trained on $\mathcal{B}${} and tested on $\mathcal{B}${}, i.e.\@\xspace, the absolute upper bound in performance (last row of the table, denoted as Oracle). Some qualitative results dealing with the $Dep \rightarrow Sem${} scenario are depicted in \autoref{fig:qualitatives_depsem}. It is possible to appreciate the overall improvement of our method wrt the baseline, both in flat areas (e.g.\@\xspace, roads, sidewalks and walls), object shapes (e.g.\@\xspace, cars and persons) and fine-grained details (e.g.\@\xspace, poles and traffic signs). \subsection{Semantics to Depth}\label{sec:semdep} In this setup, which we define as $Sem \rightarrow Dep${}, the goal of our framework is to transform semantic features into depth features. This mapping is learned using Carla{} as domain $\mathcal{A}${} and Cityscapes{} as domain $\mathcal{B}${}, as done for the $Dep \rightarrow Sem${} scenario. Results are reported in \autoref{tab:sem2depth}. Similarly to the $Dep \rightarrow Sem${} scenario, in the first row we show results with no adaptation (i.e.\@\xspace, our baseline), while the third row presents the ones obtained with our framework. \rev{Also for this setup we report the performance of ZDDA \cite{peng2018zero} (second row), in which we use semantic maps as task-irrelevant data.
We can see that ZDDA achieves slightly better performance than the baseline on 5 metrics out of 7, but still inferior to our approach. } Moreover, we report results from the Transfer Oracle and the Oracle, implemented as described for the $Dep \rightarrow Sem${} scenario. It is possible to appreciate that our framework outperforms the baseline on 6 out of 7 metrics, remarkably closing the gap with the practical upper bound set by the Transfer Oracle. In \autoref{fig:qualitatives_semdep}, we show some qualitative results of the $Sem \rightarrow Dep${} scenario. While predictions look quite noisy in the background, we can see a good improvement in the foreground area thanks to our method. Shapes are recovered almost perfectly, both for big and small objects, even with difficult subjects like the crowd in the bottom row. It is also worth pointing out that the depth predictions yielded by our method turn out to be much smoother than the ones produced by the baseline and generally less noisy than the ground-truth which, as explained in \autoref{sec:architecture}, consists of proxy-labels computed with SGM \cite{hirschmuller2005accurate}. \begin{table*}[!htbp] \center \caption{Experimental results of $Sem \rightarrow Dep${} scenario. Baseline stands for $N_2$ trained on $\mathcal{A}${} and tested on $\mathcal{B}${}, Transfer Oracle represents $G_{1\rightarrow2}${} trained only on $\mathcal{B}${}, Oracle refers to $N_2$ trained and tested on $\mathcal{B}${}.
Best results highlighted in bold.} \setlength{\tabcolsep}{2.5pt} \scalebox{1}{ \begin{tabular}{cc|c|cccc|ccc} \toprule \multicolumn{3}{c|}{} & \multicolumn{4}{c|}{\cellcolor{blue!25}Lower is better}& \multicolumn{3}{c}{\cellcolor{LightCyan}Higher is better}\\ $\mathcal{A}${} & $\mathcal{B}${} & Method & \cellcolor{blue!25}Abs Rel & \cellcolor{blue!25}Sq Rel & \cellcolor{blue!25}RMSE & \cellcolor{blue!25}RMSE log & \cellcolor{LightCyan}$\delta_1$ & \cellcolor{LightCyan}$\delta_2$ & \cellcolor{LightCyan}$\delta_3$\\ \midrule Carla{} & CS & Baseline & 0.7398 & 15.169 & 14.774 & 0.641 & \textbf{0.406} & 0.650 & 0.781 \\ Carla{} & CS & ZDDA \cite{peng2018zero} & 0.5206 & 7.5491 & 13.347 & 0.633 & 0.345 & 0.638 & 0.858 \\ Carla{} & CS & \textbf{AT/DT{}} & \textbf{0.3928} & \textbf{4.9094} & \textbf{12.363} & \textbf{0.444} & 0.372 & \textbf{0.757} & \textbf{0.923} \\ \midrule CS & CS & Transfer Oracle & 0.2210 & 2.2962 & 9.032 & 0.275 & 0.669 & 0.914 & 0.972 \\ - & CS & Oracle & 0.1372 & 1.6214 & 8.566 & 0.244 & 0.816 & 0.938 & 0.976 \\ \bottomrule \end{tabular} } \label{tab:sem2depth} \end{table*} \begin{figure*}[!htbp] \centering \includegraphics[width=0.9\textwidth]{Images/qualitative_depth.jpg} \caption{Qualitative result of the $Sem \rightarrow Dep${} scenario. From left to right: RGB image, ground-truth, baseline network trained only on domain $\mathcal{A}${}, ours.} \label{fig:qualitatives_semdep} \end{figure*} \section{Ablation Studies}\label{sec:ablations} In the following sections, we study the effectiveness of the key design choices behind our proposal. \begin{table*}[!htbp] \center \small \caption{Ablation study in the $Dep \rightarrow Sem${} scenario. Best results highlighted in bold. Aux refers to the framework trained with the auxiliary task. 
NDA refers to the framework trained with our NDA loss.} \label{tab:depth2sem_ablation} \setlength{\tabcolsep}{2.5pt} \scalebox{1}{ \begin{tabular}{cc|cc|ccccccccccc|cc} \toprule $\mathcal{A}${} & $\mathcal{B}${} & \rotatebox{90}{Aux} & \rotatebox{90}{NDA} & \rotatebox{90}{Road} & \rotatebox{90}{Sidewalk} & \rotatebox{90}{Walls} & \rotatebox{90}{Fence} & \rotatebox{90}{Person} & \rotatebox{90}{Poles} & \rotatebox{90}{Vegetation} & \rotatebox{90}{Vehicles} & \rotatebox{90}{Tr. Signs} & \rotatebox{90}{Building} & \rotatebox{90}{Sky} & \textbf{mIoU} & \textbf{Acc} \\ \midrule Carla{} & CS & & & 89.95 & 46.77 & 5.16 & 10.21 & 28.93 & 28.92 & 77.50 & 71.37 & 19.24 & 75.29 & 75.12 & 48.04 & 85.90 \\ Carla{} & CS & \checkmark & & 90.12 & 48.90 & 4.18 & 11.63 & 37.40 & 31.98 & \textbf{82.34} & 71.50 & 15.11 & \textbf{78.04} & \textbf{80.61} & 50.16 & 87.21 \\ Carla{} & CS & & \checkmark & \textbf{91.21} & \textbf{50.16} & 5.14 & \textbf{13.78} & 36.99 & \textbf{32.10} & 77.72 & \textbf{73.38} & \textbf{23.47} & 76.67 & 72.67 & 50.30 & 86.77 \\ Carla{} & CS & \checkmark & \checkmark & 90.57 & 48.46 & \textbf{7.37} & 12.27 & \textbf{41.16} & 31.90 & 81.96 & 72.77 & 23.44 & 77.85 & 76.33 & \textbf{51.28} & \textbf{87.57} \\ \bottomrule \end{tabular} } \end{table*} \subsection{Contribution of $\mathcal{T}_{aux}${} and NDA Loss} We start by studying the effect of introducing in our framework the auxiliary task and the NDA loss, analyzing their contribution when used separately as well as when combined together. The second and third row of \autoref{tab:depth2sem_ablation} report the results obtained in the $Dep \rightarrow Sem${} setting by integrating in our method either the auxiliary task (i.e.\@\xspace, edge detection) or the NDA loss, respectively. We can see that both design choices bring in an improvement of about +2\% in terms of mIoU with respect to the base AT/DT{} framework (first row). 
Moreover, the last row of the table shows that the auxiliary edge detection task and the NDA loss turn out to be complementary: when combined, they provide an overall improvement of +3.34\% mIoU. \autoref{fig:structure_details} presents some zoomed-in qualitative results: we can see that, even if the base version of AT/DT{} already produces satisfactory results at a coarse level, the complete version of our framework produces much more accurate predictions, especially for small details such as poles, traffic signs and car outlines. \begin{figure*}[!htbp] \centering \includegraphics[width=0.9\textwidth]{Images/structure_details.jpg} \caption{Zoomed results in a $Dep \rightarrow Sem${} scenario. From left to right: base AT/DT{} without edge and NDA, our proposed method, ground-truth. We notice how, unlike base AT/DT{}, our method is able to recover the fine-grained details of the scene. } \label{fig:structure_details} \end{figure*} \subsection{Effectiveness of edge detection as auxiliary task} \begin{table*}[t] \caption{Comparison between autoencoder and edge detection as auxiliary tasks in the $Dep \rightarrow Sem${} scenario. Best results highlighted in bold.} \label{tab:auxtask} \center \setlength{\tabcolsep}{2.5pt} \scalebox{1}{ \begin{tabular}{l|ccccccccccc|cc} \toprule $\mathcal{T}_{aux}${} &\rotatebox{90}{Road} & \rotatebox{90}{Sidewalk} & \rotatebox{90}{Walls} & \rotatebox{90}{Fence} & \rotatebox{90}{Person} & \rotatebox{90}{Poles} & \rotatebox{90}{Vegetation} & \rotatebox{90}{Vehicles} & \rotatebox{90}{Tr.
Signs} & \rotatebox{90}{Building} & \rotatebox{90}{Sky} & \textbf{mIoU} & \textbf{Acc} \\ \midrule None & 89.95 & 46.77 & 5.16 & 10.21 & 28.93 & 28.92 & 77.50 & 71.37 & \textbf{19.24} & 75.29 & 75.12 & 48.04 & 85.90 \\ Autoencoder & \textbf{90.68} & \textbf{50.12} & \textbf{7.45} & 9.08 & 31.40 & 29.43 & 78.72 & 68.51 & 12.95 & 74.67 & 75.68 & 48.07 & 86.31 \\ Edge detection & 90.12 & 48.90 & 4.18 & \textbf{11.63} & \textbf{37.40} & \textbf{31.98} & \textbf{82.34} & \textbf{71.50} & 15.11 & \textbf{78.04} & \textbf{80.61} & \textbf{50.16} & \textbf{87.21} \\ \bottomrule \end{tabular} } \end{table*} In this section, we show empirically that in our framework the choice of a proper auxiliary task is key to performance. In both the $Dep \rightarrow Sem${} and the $Sem \rightarrow Dep${} scenarios, we propose to use edge detection as auxiliary task because it captures information about the shapes of the objects in the input images and allows for straightforward computation of proxy-labels. To validate this design choice, we tested our framework in the $Dep \rightarrow Sem${} setting, using $D_{aux}$ to reconstruct the input images both from $f_1$ and $f_2$, i.e.\@\xspace, the classical autoencoder setting (results in \autoref{tab:auxtask}). Interestingly, using image reconstruction as auxiliary task yields an mIoU score almost identical to that of the base AT/DT{}. We ascribe this to the autoencoder task being guided by a reconstruction loss which makes no distinction between the pixels of the input image: such supervision cannot effectively guide $f_1$ and $f_2$ to encapsulate the high-frequency components of the image that are needed to predict the fine-grained details of the scene, which is instead achieved by adopting edge detection as auxiliary task. \begin{table*}[t] \caption{\rev{Auxiliary tasks as source tasks in the $Dep \rightarrow Sem${} scenario.
Best results highlighted in bold.}} \label{tab:aux_as_source} \center \setlength{\tabcolsep}{2.5pt} \scalebox{1}{ \begin{tabular}{l|ccccccccccc|cc} \toprule $\mathcal{T}_{aux}${} as $\mathcal{T}_1${} &\rotatebox{90}{Road} & \rotatebox{90}{Sidewalk} & \rotatebox{90}{Walls} & \rotatebox{90}{Fence} & \rotatebox{90}{Person} & \rotatebox{90}{Poles} & \rotatebox{90}{Vegetation} & \rotatebox{90}{Vehicles} & \rotatebox{90}{Tr. Signs} & \rotatebox{90}{Building} & \rotatebox{90}{Sky} & \textbf{mIoU} & \textbf{Acc} \\ \midrule Autoencoder & 60.24 & 19.33 & 1.67 & 1.67 & 4.12 & 8.00 & 33.15 & 10.49 & 0.69 & 17.89 & 62.66 & 19.99 & 52.91 \\ Edge Detection & 63.82 & 16.60 & 0.67 & 1.37 & 6.55 & 10.26 & 47.62 & 4.42 & 0.11 & 33.90 & 38.87 & 20.38 & 58.33 \\ \midrule Depth & \textbf{89.95} & \textbf{46.77} & \textbf{5.16} & \textbf{10.21} & \textbf{28.93} & \textbf{28.92} & \textbf{77.50} & \textbf{71.37} & \textbf{19.24} & \textbf{75.29} & \textbf{75.12} & \textbf{48.04} & \textbf{85.90} \\ \bottomrule \end{tabular} } \end{table*} \rev{ \subsection{Auxiliary tasks as source tasks} The main difference between a source and an auxiliary task is that the auxiliary task alone cannot provide enough information to solve $\mathcal{T}_2${}, but it is useful to enrich $\mathcal{T}_1${} features and align feature content across tasks. To better support our claims, we investigated the behaviour of AT/DT{} when using auxiliary tasks $\mathcal{T}_{aux}${} as source tasks $\mathcal{T}_1${} and semantic segmentation as target task $\mathcal{T}_2${}. The results of these experiments are reported in \autoref{tab:aux_as_source}. All rows of the table show results of the \emph{base} AT/DT{}, i.e., trained without the $L_{aux}$ and $L_{NDA}$ losses. As we can notice, using standard image reconstruction (row 1, autoencoder) or edge detection (row 2) as source task $\mathcal{T}_1${} leads to much worse results than using depth estimation (row 3).
We argue that features extracted by $N_1$ for these tasks do not contain enough information to perform semantic segmentation, information that is instead contained in features learned for depth estimation. Similar findings were made in Taskonomy \cite{zamir2018taskonomy}, which shows that edge detection and image reconstruction (i.e.\@\xspace, the autoencoder) are less correlated to semantic segmentation than depth estimation. On the contrary, we have shown that edge detection can be a good auxiliary task in the $Dep \rightarrow Sem${} scenario, since it can enrich depth features with missing edges useful for semantic segmentation and it can increase transferability by aligning depth and semantic features. } \subsection{Importance of simultaneous training of $N_1$, $N_2$ and $D_{aux}$} In our experiments we use edge detection as auxiliary task and train a shared decoder $D_{aux}$ to reconstruct the edges of the input image from the features extracted by both $E_1$ and $E_2$. In fact, we argue that this procedure should force $E_1$ to also encode into the extracted features edge information that may not be necessary to solve $\mathcal{T}_1${} but that may be relevant for $\mathcal{T}_2${}. Besides, we believe that simultaneous training of $N_1$, $N_2$ and $D_{aux}$ is crucial to encourage features coming from $E_1$ and $E_2$ to represent edge information in a similar manner, making it easier to learn $G_{1\rightarrow2}${}. In \autoref{tab:edgeablation} we report the results of the ablation study conducted to validate these intuitions. We consider the $Dep \rightarrow Sem${} scenario using the Carla dataset as domain $\mathcal{A}${} and Cityscapes as domain $\mathcal{B}${}. The four rows of the table deal with the following training schemes: \begin{enumerate} \item The \emph{base} AT/DT (i.e.\@\xspace, without $\mathcal{T}_{aux}${} and NDA loss) as baseline. \item We first train $N_1$ and $D_{aux}$ on both $\mathcal{A}${} and $\mathcal{B}${}. Then, we train $N_2$ on $\mathcal{A}${}.
Finally, we train $G_{1\rightarrow2}${} on features extracted by $E_1$ and $E_2$ on domain $\mathcal{A}${}. \item We train $N_1$ and a first $D_{aux}^1$ on both $\mathcal{A}${} and $\mathcal{B}${}. Then, we train $N_2$ and a second $D_{aux}^2$ on $\mathcal{A}${}. Finally, we train $G_{1\rightarrow2}${} on features extracted by $E_1$ and $E_2$ on domain $\mathcal{A}${}. \item Our proposed method, which trains $N_1$, $N_2$ and a shared $D_{aux}$ simultaneously. \end{enumerate} The introduction of edge detection as auxiliary task helps in every scenario. In fact, if we use $D_{aux}$ only while training $N_1$ (second row), we already see an increase of $0.6\%$ in the overall mIoU. We believe that this is explained by the presence of edge details (not strictly necessary to solve $\mathcal{T}_1${} but relevant for $\mathcal{T}_2${}) in the features extracted by $E_1$. However, $G_{1\rightarrow2}${} may experience difficulties in adapting $f_1$ into $f_2$ if edge information is not explicitly present in $f_2$. This is confirmed by the results in the third row of the table, where an additional increase of $1.3\%$ in the overall mIoU is attained by using two different $D_{aux}$ (one during training of $N_1$ and one during training of $N_2$). Finally, the best results in terms of mIoU and Acc are achieved by our method, i.e.\@\xspace, when training $N_1$, $N_2$ and a shared $D_{aux}$ simultaneously. This vouches for the benefit of encoding the edge information in $f_1$ and $f_2$ in a similar manner in order to enforce feature alignment across tasks. \begin{table*}[t] \center \caption{Ablation study on the importance of simultaneous training of the $\mathcal{T}_1${}, $\mathcal{T}_2${}, and the auxiliary task. Best results highlighted in bold.
See text for a detailed explanation of the training protocol used in each row.} \label{tab:edgeablation} \setlength{\tabcolsep}{2.5pt} \scalebox{1}{ \begin{tabular}{l|ccccccccccc|cc} \toprule method &\rotatebox{90}{Road} & \rotatebox{90}{Sidewalk} & \rotatebox{90}{Walls} & \rotatebox{90}{Fence} & \rotatebox{90}{Person} & \rotatebox{90}{Poles} & \rotatebox{90}{Vegetation} & \rotatebox{90}{Vehicles} & \rotatebox{90}{Tr. Signs} & \rotatebox{90}{Building} & \rotatebox{90}{Sky} & \textbf{mIoU} & \textbf{Acc} \\ \midrule \emph{base} AT/DT & 89.95 & 46.77 & 5.16 & 10.21 & 28.93 & 28.92 & 77.50 & 71.37 & \textbf{19.24} & 75.29 & 75.12 & 48.04 & 85.90 \\ Separate ($N_1$ + edge), $N_2$ & 87.24 & 43.30 & 3.08 & 10.17 & 41.77 & 29.04 & 81.81 & 72.35 & 16.58 & 77.10 & 73.10 & 48.69 & 85.89 \\ Separate ($N_1$ + edge), ($N_2$ + edge) & 88.83 & 47.31 & \textbf{7.10} & 8.59 & \textbf{44.53} & 30.99 & \textbf{83.24} & \textbf{73.54} & 18.05 & \textbf{78.10} & 69.66 & 49.99 & 86.72 \\ Simultaneous ($N_1$ + $N_2$ + edge) & \textbf{90.12} & \textbf{48.90} & 4.18 & \textbf{11.63} & 37.40 & \textbf{31.98} & 82.34 & 71.50 & 15.11 & 78.04 & \textbf{80.61} & \textbf{50.16} & \textbf{87.21} \\ \bottomrule \end{tabular} } \end{table*} \begin{table*}[t] \center \caption{Comparison between NDA loss and other strategies to align $E_1$ features. Best results highlighted in bold.} \label{tab:atdtn1align} \setlength{\tabcolsep}{2.5pt} \scalebox{1}{ \begin{tabular}{l|ccccccccccc|cc} \toprule $E_1$ Align. &\rotatebox{90}{Road} & \rotatebox{90}{Sidewalk} & \rotatebox{90}{Walls} & \rotatebox{90}{Fence} & \rotatebox{90}{Person} & \rotatebox{90}{Poles} & \rotatebox{90}{Vegetation} & \rotatebox{90}{Vehicles} & \rotatebox{90}{Tr. Signs} & \rotatebox{90}{Building} & \rotatebox{90}{Sky} & \textbf{mIoU} & \textbf{Acc} \\ \midrule None & 89.95 & 46.77 & 5.16 & 10.21 & 28.93 & 28.92 & 77.50 & 71.37 & 19.24 & 75.29 & 75.12 & 48.04 & 85.90 \\ Adv. 
& 89.89 & 46.01 & 4.22 & 11.89 & \textbf{38.20} & 30.65 & 77.00 & 63.68 & 12.99 & 74.35 & \textbf{81.16} & 48.19 & 85.42 \\ LargerNorm \cite{xu2019larger} (1) & 38.37 & 24.17 & 0.56 & 3.66 & 10.50 & 23.04 & 52.61 & 9.41 & 3.42 & 52.64 & 10.54 & 20.81 & 51.49 \\ LargerNorm \cite{xu2019larger} (25) & 86.82 & 42.23 & 1.94 & 9.00 & 34.92 & 29.02 & 76.39 & 70.97 & 23.38 & 74.97 & 80.00 & 48.15 & 84.62 \\ LargerNorm \cite{xu2019larger} (500) & 78.94 & 31.25 & 2.53 & 6.00 & 22.08 & 20.55 & 68.18 & 26.21 & 4.35 & 62.28 & 63.53 & 35.08 & 76.53 \\ Asymmetric Adv. \cite{yang2020mind} & 86.69 & 38.57 & \textbf{5.92} & 5.72 & 27.43 & 22.91 & 70.81 & 70.71 & 7.86 & 72.15 & 75.18 & 44.00 & 83.38 \\ NDA & \textbf{91.21} & \textbf{50.16} & 5.14 & \textbf{13.78} & 36.99 & \textbf{32.10} & \textbf{77.72} & \textbf{73.38} & \textbf{23.47} & \textbf{76.67} & 72.67 & \textbf{50.30} & \textbf{86.77} \\ \bottomrule \end{tabular} } \end{table*} \subsection{Alignment strategies for $N_1$} An alternative approach to align $N_1$ features between domains to ease the transfer process and favor the generalization of $G_{1\rightarrow2}${} consists in leveraging the widely adopted adversarial training in feature space. In our setting, this can be obtained by adding a critic that must discriminate whether the features produced by $E_1$ come from $\mathcal{A}${} or $\mathcal{B}${}. Thus, the encoder $E_1$ not only has to learn a good feature space for its task, but it is also asked to fool the critic. Afterwards, we can proceed to learn a mapping function $G_{1\rightarrow2}${} among tasks as usual. In \autoref{tab:atdtn1align} we compare this standard DA methodology to our NDA loss.
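The norm-based strategies compared here can be sketched in a few lines of NumPy. This is only a hypothetical illustration, not the exact losses used in our experiments: we assume that $L_{NDA}$ penalizes the discrepancy between per-location feature norms of the two domains (NDA acts at each spatial location independently), and that LargerNorm \cite{xu2019larger} pushes feature norms toward one fixed large target value; the function names and the target value are illustrative choices.

```python
import numpy as np

def nda_loss(f_a, f_b):
    # Sketch of a norm-alignment penalty: compare the L2 norm of the
    # channel vector at each spatial location across the two domains.
    # f_a, f_b: feature maps of shape (C, H, W) from domains A and B.
    n_a = np.linalg.norm(f_a, axis=0)  # (H, W) per-location norms
    n_b = np.linalg.norm(f_b, axis=0)
    return float(np.abs(n_a - n_b).mean())

def larger_norm_loss(f, target=25.0):
    # Sketch of a LargerNorm-style constraint: push every per-location
    # feature norm toward a single fixed (large) target value.
    return float(((np.linalg.norm(f, axis=0) - target) ** 2).mean())

rng = np.random.default_rng(0)
f_a = rng.standard_normal((8, 4, 4))
f_b = rng.standard_normal((8, 4, 4))
print(nda_loss(f_a, f_a))        # identical features give zero penalty
print(nda_loss(f_a, f_b) > 0.0)  # differing per-location norms are penalized
```

Note that the first penalty only matches norms between domains, while the second imposes one global norm value on every location regardless of the other domain.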
Adversarial training (second row) does not introduce significant improvements with respect to not performing DA for $\mathcal{T}_1${} (i.e.\@\xspace, base AT/DT{}, first row), while constraining the features extracted by $E_1$ in a norm-aligned space (last row) significantly increases both performance metrics with respect to the baseline. Our intuition is that, although adversarial training can be useful for domain alignment, it alters the learned feature space with the goal of fooling the critic, and this training objective can lead to worse performance on the current task. Our NDA loss, on the other hand, acts as a regularizer that favors the learning of a homogeneous latent space across the domains involved in our experiments, improving the generalization capability of the transfer network without degrading performance on the individual tasks. \rev{Then, from the third to the fifth row, we compare our NDA loss with another strategy, LargerNorm \cite{xu2019larger}, that also aligns features across domains by operating on the feature norms. Its authors show that features are more transferable across domains if feature norms are constrained to be equal to an arbitrary large number. We notice that the method is very sensitive to the norm value, which can be hard to select without using target labels. When using an appropriate norm value (25, fourth row), the method achieves a slight improvement over the baseline without alignment. However, since it simply forces all feature norms globally toward a large value, it is not well-suited for tasks with spatial dimensions such as semantic segmentation. Moreover, in the sixth row, we also experiment with a more recent adversarial loss formulation, Asymmetric Adv. \cite{yang2020mind}, which preserves discriminability while performing domain alignment by changing only target features instead of both source and target ones.
However, we notice that this method achieves the worst results among the feature alignment strategies, even worse than the baseline. Our explanation is that aligning the feature distributions in such a high-dimensional feature space with a spatial structure might be too difficult to achieve by changing only target features. Finally, we notice that NDA achieves the best performance, probably because it aligns only feature norms rather than the whole marginal distribution, an easier goal that can be achieved also in a high-dimensional space. Moreover, NDA operates at each spatial location independently rather than globally, exploiting the similarity of spatial priors across domains and thus reaching better performance. } \subsection{Aligning $N_2$ features} \label{sec:n2_align} \begin{table*}[ht] \center \caption{Results of aligning output space of $E_2$ in a $Dep \rightarrow Sem${} scenario. Best results highlighted in bold.} \label{tab:atdte2align} \setlength{\tabcolsep}{2.5pt} \scalebox{1}{ \begin{tabular}{l|ccccccccccc|cc} \toprule $E_2$ Align. &\rotatebox{90}{Road} & \rotatebox{90}{Sidewalk} & \rotatebox{90}{Walls} & \rotatebox{90}{Fence} & \rotatebox{90}{Person} & \rotatebox{90}{Poles} & \rotatebox{90}{Vegetation} & \rotatebox{90}{Vehicles} & \rotatebox{90}{Tr. Signs} & \rotatebox{90}{Building} & \rotatebox{90}{Sky} & \textbf{mIoU} & \textbf{Acc} \\ \midrule None & \textbf{89.95} & \textbf{46.77} & 5.16 & \textbf{10.21} & 28.93 & 28.92 & \textbf{77.50} & 71.37 & 19.24 & \textbf{75.29} & 75.12 & \textbf{48.04} & \textbf{85.90} \\ Adv.
& 89.36 & 46.03 & 5.59 & 8.22 & \textbf{36.45} & 25.44 & 75.15 & \textbf{72.29} & 12.69 & 74.12 & \textbf{75.79} & 47.38 & 85.31 \\ Asymmetric Adversarial \cite{yang2020mind} & 87.90 & 42.81 & \textbf{7.64} & 8.44 & 26.02 & \textbf{29.11} & 72.54 & 69.01 & \textbf{24.01} & 71.71 & 70.42 & 46.33 & 83.61 \\ NDA & 44.94 & 23.82 & 3.81 & 2.09 & 30.74 & 24.21 & 42.08 & 68.84 & 11.69 & 35.67 & 11.10 & 27.18 & 56.17 \\ \bottomrule \end{tabular} } \end{table*} \begin{table*}[ht] \center \caption{\rev{Results of aligning output space of $D_2$ in a $Dep \rightarrow Sem${} scenario. Best results highlighted in bold.}} \label{tab:atdtn2align} \setlength{\tabcolsep}{2.5pt} \scalebox{1}{ \begin{tabular}{l|ccccccccccc|cc} \toprule $D_2$ Align. &\rotatebox{90}{Road} & \rotatebox{90}{Sidewalk} & \rotatebox{90}{Walls} & \rotatebox{90}{Fence} & \rotatebox{90}{Person} & \rotatebox{90}{Poles} & \rotatebox{90}{Vegetation} & \rotatebox{90}{Vehicles} & \rotatebox{90}{Tr. Signs} & \rotatebox{90}{Building} & \rotatebox{90}{Sky} & \textbf{mIoU} & \textbf{Acc} \\ \midrule None & \textbf{89.95} & \textbf{46.77} & \textbf{5.16} & \textbf{10.21} & \textbf{28.93} & \textbf{28.92} & \textbf{77.50} & \textbf{71.37} & \textbf{19.24} & \textbf{75.29} & \textbf{75.12} & \textbf{48.04} & \textbf{85.90} \\ Adv. & 87.48 & 45.73 & 0.63 & 2.12 & 26.22 & 26.39 & 61.40 & 66.92 & 12.97 & 66.39 & 74.77 & 42.82 & 81.87 \\ \bottomrule \end{tabular} } \end{table*} \begin{table*}[!ht] \center \caption{Results of aligning input and/or output space of $G_{1\rightarrow2}${} in a $Dep \rightarrow Sem${} scenario. Best results highlighted in bold.} \label{tab:atdtadversarial} \setlength{\tabcolsep}{2.5pt} \scalebox{1}{ \begin{tabular}{ll|ccccccccccc|cc} \toprule Input Align. & Output Align. &\rotatebox{90}{Road} & \rotatebox{90}{Sidewalk} & \rotatebox{90}{Walls} & \rotatebox{90}{Fence} & \rotatebox{90}{Person} & \rotatebox{90}{Poles} & \rotatebox{90}{Vegetation} & \rotatebox{90}{Vehicles} & \rotatebox{90}{Tr. 
Signs} & \rotatebox{90}{Building} & \rotatebox{90}{Sky} & \textbf{mIoU} & \textbf{Acc} \\ \midrule - & NDA & 42.97 & 19.60 & 2.31 & 1.36 & 4.21 & 15.74 & 18.42 & 11.77 & 7.19 & 36.72 & 38.99 & 18.12 & 43.63\\ - & Adv & 90.80 & 48.91 & \textbf{6.16} & 11.84 & 35.32 & 30.29 & \textbf{78.78} & 71.17 & 18.51 & 75.66 & 75.03 & 49.32 & 86.43 \\ & Asymmetric Adv. \cite{yang2020mind} & 85.49 & 40.70 & 4.94 & 10.49 & 34.02 & 30.26 & 76.31 & 70.30 & 17.07 & 74.30 & 72.94 & 46.99 & 83.86 \\ - & NDA + Adv & 91.03 & 48.93 & 6.14 & 12.24 & 35.91 & 31.05 & 77.93 & 70.28 & 16.65 & 75.50 & 74.47 & 49.10 & 86.28 \\ - & Adv D2 & 90.20 & 47.54 & 5.92 & 11.76 & \textbf{37.03} & 29.52 & 77.98 & 72.42 & 19.28 & 75.82 & \textbf{77.03} & 49.50 & 86.28 \\ NDA & Adv & 90.67 & 49.49 & 5.54 & 12.29 & 36.73 & 28.49 & 78.28 & 70.19 & 22.05 & 76.47 & 76.35 & 49.69 & 86.73 \\ NDA & - & \textbf{91.21} & \textbf{50.16} & 5.14 & \textbf{13.78} & 36.99 & \textbf{32.10} & 77.72 & \textbf{73.38} & \textbf{23.47} & \textbf{76.67} & 72.67 & \textbf{50.30} & \textbf{86.77} \\ \bottomrule \end{tabular} } \end{table*} We tried to perform feature alignment across domains also on the features $f_2$ extracted by $E_2$, either by deploying adversarial training or imposing our NDA loss. The idea is to favor the generalization of $G_{1\rightarrow2}${} by making more homogeneous not only its input space (i.e.\@\xspace, the features produced by $E_1$, aligned with our NDA loss), but also its output space, i.e.\@\xspace, the features produced by $E_2$. However, the setting is not completely symmetric: when learning $E_2$, we do not have supervision available for $\mathcal{B}${}, and the only loss shaping the feature space for its images would be the alignment loss. We report results of this ablation study in \autoref{tab:atdte2align} and discuss them below. \rev{ In the first row, we report the results provided by the \emph{base} AT/DT{} (without $L_{NDA}$ and $L_{aux}$). 
In the following two rows, we show results obtained by an adversarial (row 2) and an asymmetric adversarial \cite{yang2020mind} (row 3) training on the features $f_2$, using the same procedures described in the previous sub-section for $f_1$. We can observe that not only do both adversarial trainings fail to improve (like adversarial training applied to $E_1$), but they even decrease the overall mIoU compared to the baseline. Finally, in the fourth row, we report the results obtained by our NDA loss on $f_2$: the NDA loss destroys the feature space of $\mathcal{T}_2${} when applied in this context, as vouched by the drop of $20\%$ in the overall mIoU w.r.t. the base AT/DT{}. During AT/DT{} inference, we also use $D_2$ to yield the final task predictions. Nevertheless, $D_2$ has been trained only on $\mathcal{A}${}, thus its performance may be harmed when using $\mathcal{B}${} images. Thus, we ran an additional test reported in \autoref{tab:atdtn2align}. Following \cite{Tsai_2018}, we train $N_2$ (i.e., $E_2$ and $D_2$) using an adversarial loss on the $D_2$ output space, thus making $D_2$ aware of $\mathcal{B}${}. Then, we train $G_{1\rightarrow2}${} to map features of $E_1$ into features of $E_2$, and during inference we employ the previously trained decoder $D_2$ to produce the final outputs, reporting the results in row \textit{Adv.}. We notice a clear drop in performance w.r.t. \textit{base} AT/DT{} (row \textit{None}), i.e. AT/DT{} trained without $L_{NDA}$ and $L_{aux}$. } We formulate the following hypothesis to explain the above results: the adversarial trainings and the NDA loss all try to align $f_2^\mathcal{A}$ and $f_2^\mathcal{B}$. While $f_2^\mathcal{A}$ are shaped also by the supervision of $\mathcal{T}_2$, $f_2^\mathcal{B}$ evolve only according to the additional loss we impose, as we do not have supervision for $\mathcal{T}_2${} on $\mathcal{B}${}.
However, $E_2$ is shared across domains, and therefore may be pushed to produce worse representations for both domains while it tries to accomplish the adversarial objectives or the NDA loss minimization for $\mathcal{B}${}. If this happens, mappings learned by $G_{1\rightarrow2}${} from $f_1^\mathcal{A}$ to $f_2^\mathcal{A}$ will hallucinate worse features for $\mathcal{T}_2${} on $\mathcal{B}${}. To understand why adversarial training leads to smaller decreases in performance compared to the use of the NDA loss, we ought to consider that adversarial training implies a discriminator that cannot be easily fooled by totally degenerated features, while, without any additional constraint from task supervision, the NDA loss may yield totally collapsed representations. \subsection{Aligning $G_{1\rightarrow2}${} features} Although feature alignment does not turn out to be beneficial when training $N_2$, one may still expect to obtain better hallucinated features if the representations obtained when transferring $f_1^\mathcal{A}$ and $f_1^\mathcal{B}$ are aligned. We empirically found that even though output-space alignment strategies deployed when training $G_{1\rightarrow2}${} can lead to improvements in performance, input-space alignment using our NDA loss when training $N_1$ is more effective. Moreover, combining input- and output-space alignment techniques does not lead to further improvements. We performed this ablation study in the $Dep \rightarrow Sem${} scenario using Carla as $\mathcal{A}${} and Cityscapes as $\mathcal{B}${}. The results of these experiments are reported in \autoref{tab:atdtadversarial}. First, we applied our NDA loss to the output space of $G_{1\rightarrow2}${}. Similarly to what was discussed in the previous section, we notice that, without supervision on $\mathcal{B}${}, the representations produced by $G_{1\rightarrow2}${} while minimizing the NDA loss yield a drastic drop in the framework performance (row 1).
We also tried to align the output-space features by training $G_{1\rightarrow2}${} alongside a discriminator in an adversarial fashion. We wanted to fool the discriminator in order to generate features from $\mathcal{A}${} and $\mathcal{B}${} that are indistinguishable. We notice that this strategy allows us to reach good overall performance, with a 49.32 mIoU on Cityscapes{} (second row). Moreover, we thought that, as adversarial training provides supervision on $\mathcal{B}${}, using the NDA loss in combination with the adversarial loss could avoid producing degenerated features for $\mathcal{B}${} while reaching a better overall alignment between $\mathcal{A}${} and $\mathcal{B}${}. However, we notice that the combination of the two losses leads to slightly worse results than adversarial training alone (rows 2 vs 4). Furthermore, since using an adversarial loss on the output space of $G_{1\rightarrow2}${} led us to good overall performance, we tested it in combination with the best input-space alignment from \autoref{tab:atdtn1align}, i.e.\@\xspace the NDA loss applied when training $N_1$. However, the combination of these two methods achieves worse performance than using only the NDA loss on the input space (rows 6 vs 7). \rev{Finally, we also experimented with a different alignment strategy for the $G_{1\rightarrow2}${} output space. Instead of directly applying an adversarial loss in the $E_2$ feature space, we apply an adversarial loss in the $D_2$ output space while training $G_{1\rightarrow2}${}. As discussed in \cite{Tsai_2018}, the output space is easier to align than the feature space for several reasons: i) the scene semantic structure is typically similar across domains; ii) the feature space encodes much information, such as color, light, and textures; iii) the feature space has higher dimensionality. By aligning the $D_2$ output space we also indirectly influence $E_2$ features, making them more domain aligned. During training, we keep $D_2$ frozen and update only the $G_{1\rightarrow2}${} weights.
Also in this case, if we compare this methodology with simply using $L_{NDA}$ alone (row 5 vs row 7), it achieves worse results.} \section{Concluding Remarks} We have introduced a framework to transfer knowledge between different tasks by learning an explicit mapping function between deep features. This mapping function can be parametrized by a neural network and shows interesting generalization capabilities across domains. To further ameliorate performance, we have proposed two novel feature alignment strategies. At a domain level, we showed that the transfer function presented in our framework can be boosted by making its input space more homogeneous across domains with our simple yet effective NDA loss. At a task level, instead, we reported how deep features extracted for different tasks can be enriched and aligned with the introduction of a shared auxiliary task, which we implemented as edge detection in our experiments. We reported good results in the challenging synthetic-to-real scenario while transferring knowledge between the semantic segmentation and monocular depth estimation tasks. Our proposal is complementary to the whole domain adaptation literature and might be integrated with it. While DA directly applied to the learned feature space does not seem effective (see \autoref{tab:atdte2align}), more modern techniques either try to align the predictions in the final label space \cite{Tsai_2018} or rely on self-ensembling for pseudo-labeling \cite{choi2019self}. We plan to incorporate these promising directions into our framework as part of future developments. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} A long standing question~\cite{Rudolph1976} asks whether the set of algebraic knots, those that are links of isolated singularities of complex curves, is linearly independent in the knot concordance group. Following Rudolph's initial work~\cite{Rudolph1976}, Litherland~\cite{Lith} used signature functions to prove that the subset consisting of positive torus knots is independent. However, it was then shown in~\cite{LM} by means of an example that invariants of the algebraic concordance group, such as the signature functions, are insufficient to prove the independence of algebraic knots. Somewhat later, Miyazaki~\cite{Miyazaki94} showed that the particular example in~\cite{LM} was not {\em ribbon}, and Rudolph~\cite{Rudolph2002} then used this result to conclude that ribbon knots do not behave well with respect to a certain operation called {\em plumbing}. Here we develop tools based on Casson-Gordon theory~\cite{CG} to prove the independence of a large family of algebraic knots, a family which includes the particular example first found in~\cite{LM}. To be more specific, let $\mathcal{C}$ denote the group of (topologically locally flat) concordance classes of knots in $S^3$, and let $\mathcal{G}$ denote the algebraic concordance group. In~\cite{levine}, Levine constructed a surjection $\mathcal{C}\to\mathcal{G}$ whose counterpart in higher dimensions is an isomorphism. The main result of this article is the following. \medskip \noindent{\bf Theorem 1.} {\em Let $\mathcal{A}$ denote the subgroup of the knot concordance group $\mathcal{C}$ generated by algebraic knots: connected links of isolated singularities in ${\bf C}^2$. Then the intersection of $\mathcal{A}$ with the kernel of $\mathcal{C}\to \mathcal{G}$ contains an infinitely generated free abelian subgroup.} \medskip We use the following notation: given a knot $K$ and a pair of relatively prime integers $p$ and $q$, $K_{p,q}$ denotes the oriented $(p,q)$--cable of $K$. 
Thus $K_{p,q}$ represents $p$ times the generator of the first homology of the tubular neighborhood of $K$. In the special case that $K$ is the unknot $U$, so that $K_{p,q}$ is a torus knot, we use the standard notation of $T_{p,q}$ rather than $U_{p,q}$. The notation can be iterated; for instance, $K_{p,q;r,s}$ denotes the $(r,s)$--cable of the $(p,q)$--cable of $K$. An {\em algebraic knot} is, by definition, the connected link of an isolated singularity of a polynomial map $f \colon\thinspace {\bf C}^2\to {\bf C}$. A knot is isotopic to an algebraic knot if and only if it is an iterated torus knot $T_{p_1,q_1;\cdots; p_n,q_n}$ with indices satisfying $p_i, q_i>0$ and $q_{i+1}>p_iq_i p_{i+1}$ (see, for instance,~\cite{EN1985}). Our results concern $(2,k)$--cables of knots. In particular, we resolve an old question of whether a particular linear combination of $(2,k)$--cables is slice; this combination is the simplest algebraically slice knot in the span of the algebraic knots,~\cite{LM}: \medskip \noindent{\bf Theorem 2.} {\em The linear combination of algebraic knots $$ T_{2,3;2, 13} \mathop{\#} T_{2,15} \mathop{\#} -T_{2,3;2,15} \mathop{\#} -T_{2,13} $$ is algebraically slice but has infinite order in $\mathcal{C}$.} \medskip Theorems 1 and 2 are consequences of the following result, which establishes the linear independence of an infinite collection of algebraic knots. \medskip \noindent{\bf Theorem 3.} {\em For appropriately chosen integers $q_i$, the set of algebraic knots $$\{ T_{2,q_i}, T_{2,3;2,q_i}\}_{i=1}^\infty$$ form a basis of a free abelian subgroup of the concordance group $\mathcal{C}$. 
This subgroup intersects the kernel of $\mathcal{C}\to\mathcal{G}$ in a free abelian subgroup, with basis given by the following set of algebraically slice knots $$\{ T_{2,3;2, q_{n}} \mathop{\#} T_{2,q_{1}}\mathop{\#} -T_{2,3;2,q_{1}}\mathop{\#} -T_{2,q_{n}}\}_{n=2}^{\infty} .$$ } \medskip Our arguments apply more generally, for example to cables of knots other than the trefoil $T_{2,3}$. We refer the reader to the body of the paper for details. \medskip The methods we use are those introduced by Casson-Gordon in~\cite{CG}. A novel feature of our approach is the essential interplay between signature and discriminant invariants on the Witt group of Hermitian forms over ${\bf C}(t)$. Casson-Gordon signature invariants, which are ${\bf Z}$--valued and hence more effective in identifying elements of infinite order, are often intractable to compute. Discriminant invariants are computable algorithmically~\cite{KL1}, but because they take values in a group that is ${\bf Z}/2{\bf Z}$--torsion, they are less effective in determining linear independence. By combining the two types of invariants we are able to apply the power of signatures while bypassing the need to explicitly compute their values; we can also avoid most of the typically messy work of analyzing metabolizers which plagues many discriminant arguments used to show that certain knots have infinite order in $\mathcal{C}$. \medskip We finish this introduction by giving some background to place our results in context. Given an oriented knot $K$, let $-K$ denote the mirror image of $K$ with its orientation reversed. Two oriented knots, $K_1$ and $ K_2$, are called {\em concordant} if the connected sum $K_1\mathop{\#} -K_2 $ bounds a locally flat embedded disk in $S^3$. The set of concordance classes of knots forms an abelian group $\mathcal{C}$ with operation induced by connected sum. Knots which represent the zero element in $\mathcal{C}$ are called {\em slice}. 
From this point forward we do not distinguish in our notation between a knot and its concordance class. In particular, we write $K_1 - K_2$ for the connected sum $K_1 \mathop{\#} - K_2$. We will also write $-K_{p,q}$ for $ -(K _{p,q})$, both of which equal $(-K)_{p,-q}$. Fox and Milnor observed~\cite{FoxMilnor} that if two knots are concordant, then the product of their Alexander polynomials is a norm in ${\bf Z}[t^\pm]$: that is, $\Delta_{K_1}(t)\Delta_{K_2}(t)= f(t)f(t^{-1})$ for some polynomial $f(t)\in {\bf Z}[t^\pm]$. (Recall that the Alexander polynomial is defined up to multiplication by $\pm t^{k}$.) An early result of Seifert~\cite{Seifert1950} (see~\cite[Theorem 6.15]{Lickorish} for a recent reference) states that the Alexander polynomial of a satellite knot is determined by the Alexander polynomials of the knots involved in the construction, together with an integer called the winding number. In the case of cables, the formula is given by $$\Delta_{K_{p,q}}(t) = \Delta_{T_{p,q}}(t) \Delta_K(t^p), \ \ \text{where}\ \ \Delta_{T_{p,q}}(t) =\frac{(t^{pq} -1)(t-1)}{(t^p-1)(t^q-1)}.$$ For a connected sum, the Alexander polynomial is simply the product of the Alexander polynomials of the constituent knots. A bit of calculation using these facts shows that distinct algebraic knots are not concordant. The Levine--Tristram signatures of a knot~\cite{levine, Lickorish, mi, tr}, which define integer-valued homomorphisms on $\mathcal{C}$, can be used to further show that algebraic knots have infinite order in $\mathcal{C}$. These observations might lead one to conjecture that the set of algebraic knots forms a basis for an infinitely generated free abelian subgroup $\mathcal{A}\subset \mathcal{C}$. A first line of attack to this question, as taken in~\cite{Litherland1979} and~\cite{LM}, is to consider the {\em algebraic concordance group} $\mathcal{G}$ and to determine the image of the composite $\mathcal{A}\subset\mathcal{C}\to\mathcal{G}$. 
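For instance, the cabling and connected sum formulas show that the Fox--Milnor criterion cannot obstruct the knot of Theorem 2 from being slice: since $\Delta_{T_{2,3;2,13}}(t)=\Delta_{T_{2,13}}(t)\,\Delta_{T_{2,3}}(t^2)$ and $\Delta_{T_{2,3;2,15}}(t)=\Delta_{T_{2,15}}(t)\,\Delta_{T_{2,3}}(t^2)$, while mirroring leaves the Alexander polynomial unchanged up to units, the Alexander polynomial of $T_{2,3;2,13}\mathop{\#} T_{2,15}\mathop{\#} -T_{2,3;2,15}\mathop{\#} -T_{2,13}$ equals $$\left(\Delta_{T_{2,13}}(t)\,\Delta_{T_{2,15}}(t)\,\Delta_{T_{2,3}}(t^2)\right)^2,$$ and the square of a symmetric polynomial is a norm $f(t)f(t^{-1})$ up to multiplication by $\pm t^k$.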
For the purposes of this article the precise definition of $\mathcal{G}$ is not needed and it will suffice to say that $\mathcal{G}$ is the group generated by Seifert forms of knots, modulo Seifert forms of slice knots. The relevant facts surrounding $\mathcal{G}$ are: \begin{enumerate} \item There is a surjection $\mathcal{C}\to\mathcal{G}$~\cite{levine}. \item The algebraic concordance class of a knot $K$ is determined by its Blanchfield (torsion) form~\cite{Trotter1973}: $$Bl_K \colon\thinspace H_1(S^3-K;{\bf Z}[t^\pm])\times H_1(S^3-K;{\bf Z}[t^\pm])\to\frac{{\bf Q}(t)}{{\bf Z}[t^\pm]}.$$ \end{enumerate} With respect to the interplay of cabling and algebraic concordance, the formula~\cite{Ke, LivingstonMelvin1985} $$Bl_{K_{p,q}}(t) = Bl_K(t^p) \oplus Bl_{T_{p,q}}(t),$$ based on a Mayer-Vietoris argument, gives a quick method to determine if certain linear combinations of cable knots are {\em algebraically slice}, that is lie in the kernel of $\mathcal{C}\to \mathcal{G}$. As observed in~\cite{LM} (see Lemma \ref{summarylemma} below), this formula implies that the knot in Theorem 2, $ T_{2,3;2, 13} + T_{2,15}-T_{2,3;2,15} -T_{2,13}$, is algebraically slice. It represents the simplest example of a knot in the kernel of the composite $\mathcal{A}\subset\mathcal{C}\to\mathcal{G}$. Showing this knot is not slice has remained open until now, although Miyazaki proved it is not ribbon~\cite{Miyazaki94}. The reader will have noticed that the term ``algebraic'' has two different meanings in this paper; on the one hand it describes a class of knots defined as links of isolated singularities, and on the other it describes a certain quotient of the knot concordance group. Algebraic knots are iterated cables, and we will typically work with general cables, so this should cause no confusion. 
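To make the algebraic sliceness of the knot in Theorem 2 explicit, the cabling formula for the Blanchfield pairing expands it in the Witt group as $$\left(Bl_{T_{2,3}}(t^2)\oplus Bl_{T_{2,13}}(t)\right)\oplus Bl_{T_{2,15}}(t)\ominus\left(Bl_{T_{2,3}}(t^2)\oplus Bl_{T_{2,15}}(t)\right)\ominus Bl_{T_{2,13}}(t),$$ in which every summand cancels in pairs, so the class is trivial.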
\subsection{Comparison with smooth techniques} Progress in identifying the structure of algebraic knots in the setting of smooth concordance has been achieved largely through analytic means or the deep combinatorial approach stemming from Khovanov homology theory. This is most notable in the solutions to the Milnor conjecture and the proof that the smooth 4--ball genus of a torus knot is realized by an algebraic curve~\cite{km, os, ra}. Highlighting the necessity of smooth techniques in studying algebraic knots, Rudolph~\cite{Rudolph93} observed that the Milnor conjecture is false in the topological locally flat category. Thus, it comes as a surprise that Casson-Gordon methods apply so effectively in the present setting, having the further advantage that we can establish the independence of these knots in the topological concordance group. Nonetheless, it would be interesting to know the extent to which the array of existing smooth concordance invariants can be used to address the question of independence of algebraic knots. We should point out, however, that the Ozsv{\'a}th-Szab{\'o}~\cite{os} and Rasmussen~\cite{ra} concordance invariants, $\tau$ and $s$, coming from knot Floer homology and Khovanov homology, respectively, contain no information for the knots at hand. We make this precise in Proposition \ref{prop:stau}, which shows that both invariants vanish for the family above and its obvious generalization to positively iterated torus knots. Despite the failure of $s$ and $\tau$, it seems likely that grading information from the Floer homology of branched covers (in the form of the Fr{\o}yshov invariant~\cite{Froyshov96}, Ozsv{\'a}th-Szab{\'o} correction terms~\cite{AbsGrad}, or Chern-Simons invariant of $SU(2)$ representations~\cite{MR1051101, MR1047138}) could be useful in our pursuit. 
However, extensive computation of such invariants in the first two cases is difficult, and computations of Chern-Simons invariants of covers in the spirit of Fintushel-Stern~\cite[Theorem 5.1]{MR1051101} and Furuta~\cite[Theorem 2.1]{MR1047138} have failed to determine whether any member of the family of knots in the present article is slice. \bigskip \noindent{\bf Acknowledgment:} Conversations with Tom Mrowka about the ribbon-slice conjecture led naturally to the investigations of this paper. The relationship between this ribbon-slice problem and the concordance independence of algebraic knots is discussed in the second appendix. \section{Two--fold branched covers and characters} As mentioned in the introduction, we write $K_{p,q}$ for the $(p,q)$--cable of $K$ and $-K_{p,q}$ for $-(K_{p,q})$, which, by a simple orientation argument, equals $(-K)_{p,-q}$. Cabling the concordance shows that the concordance class of $K_{p,q}$ depends only on the concordance class of $K$, in the sense that if $K$ and $K'$ are concordant, then $K_{p,q}$ and $K'_{p,q}$ are concordant. These observations, along with our earlier description of the Blanchfield pairing of a cable knot, yield the following general statement, implicit in~\cite{LM}. \begin{lemma}\label{summarylemma} For any knot $K$, $$K_{p,q_1} + T_{p, q_2} - K_{p, q_2} - T_{p, q_1} $$ is an algebraically slice knot and is a slice knot when $K$ is slice.\qed \end{lemma} We now turn our focus to 2--stranded cables, that is, knots of the form $K_{2,q}$. A useful depiction of $K_{2,q}$ is the following. Figure \ref{fig:Lq} shows a 2--component link $L_q$ with one component a $(2,q)$--torus knot and the other component an unknot labeled $U$. If $K$ is a knot in $S^3$ and $q$ is an odd integer, then $K_{2,q}$ is obtained by removing a neighborhood of $U$ and replacing it by the complement of a tubular neighborhood of $K$ in such a way that the meridian-longitude pairs of $U$ and $K$ are interchanged.
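For instance, instantiating Lemma \ref{summarylemma} with the trefoil as companion recovers the knot of Theorem 2:

```latex
% Lemma instantiated with K = T_{2,3}, p = 2, q_1 = 13, q_2 = 15:
\[
  K_{2,13} + T_{2,15} - K_{2,15} - T_{2,13}
  \;=\; T_{2,3;2,13} + T_{2,15} - T_{2,3;2,15} - T_{2,13},
\]
% which is therefore algebraically slice, and would be slice if the
% trefoil T_{2,3} were slice.
```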
\begin{figure} \psfrag{K}{$q$} \psfrag{U}{$U$} \begin{center} \includegraphics[height=110pt]{fig1.eps} \caption{\label{fig:Lq}$L_q$} \end{center} \end{figure} For a knot $K$, denote by $M^3(K)$ the 2--fold branched cover of $S^3$ branched over $K$. Let $\widetilde{K}$ denote the lift of $K$ to $M^3(K)$ and let $M^3_0(K)$ denote the result of $0$--surgery on $M^3(K)$ along $\widetilde{K}$; that is, the surgery whose framing comes from a lift of the longitude of $K$. Note that $M^3_0(K)$ is the 2--fold cyclic cover of 0--surgery on $K$ in $S^3$. The 2--fold branched cover $M^3(T_{2,q})$ is the lens space $L^3(q,1)$. Since $U$ links the $(2,q)$--torus knot twice in $L_q$, the preimage of $U$ in this 2--fold branched cover consists of two curves, $\widetilde{U}_1$ and $\widetilde{U}_2$. One way to understand this is to take a 3--ball in $S^3$ which meets the $(2,q)$--torus knot in two unknotted arcs and contains $U$ in its interior. Then the preimage of this 3--ball in $M^3(T_{2,q})$ is a solid torus, as is the preimage of its complement in $S^3$. The curves $\widetilde{U}_1$ and $\widetilde{U}_2$ are each circles parallel to the core of this solid torus but oppositely oriented. In particular, $\widetilde{U}_1$ and $\widetilde{U}_2$ are isotopic as unoriented curves in $M^3(T_{2,q})$. Figure \ref{fig:BkU} depicts the situation. To obtain $M^3(K_{2,q})$, we replace the solid torus neighborhood of $\widetilde{U}_1$ and $\widetilde{U}_2$ with copies of the complement of $K$ in $S^3$, interchanging the meridian-longitude pairs. The preimage $\widetilde{K} \subset M^3(T_{2,q})$ is not drawn in Figure \ref{fig:BkU}. \begin{figure} \psfrag{U1}{$\tilde{U}_1$} \psfrag{U2}{$\tilde{U}_2$} \psfrag{K}{$q$} \begin{center} \includegraphics[height=110pt]{fig2.eps} \caption{\label{fig:BkU}$M^3(T_{2,q})$. The branched double cover of the $(2,q)$--torus knot is the lens space $L^3(q,1)$, obtained by performing $q$--surgery on an unknot. 
The unknotted component of $L_q$ lifts to $\widetilde{U}_1\cup \widetilde{U}_2$.} \end{center} \end{figure} We will need notation for certain curves in $M^3(K_{2,q})$. Denote by $\mu_U$ and $\lambda_U$ the meridian and longitude of the unknotted component $U\subset L_q$, with its orientation as in Figure \ref{fig:Lq}. From the perspective of $K_{2,q}$ as a satellite knot, these are the longitude and meridian, respectively, for the companion, $K$. Denote by $\tilde{\mu}_{i}$ and $\tilde{\lambda}_{i}$ the meridian and longitude in the boundary of the tubular neighborhood of $\widetilde{U}_i\subset M^3(T_{2,q})$, for $i=1,2$, in the surgery diagram given in Figure \ref{fig:BkU}. The notation is somewhat ambiguous since there is no preferred choice of lift of $U$, but in any case we will choose the same ordering when comparing $M^3(K_{2,q})$ and $M^3(T_{2,q})$. Note that $\tilde{\mu}_1$ and $\tilde{\mu}_2$ vanish in $H_1(M^3(K_{2,q}))={\bf Z}/q{\bf Z}$, $\tilde{\lambda}_1$ generates $H_1(M^3(K_{2,q}))$, and $\tilde{\lambda}_2=-\tilde{\lambda}_1$ in $H_1(M^3(K_{2,q}))$. Denote by $\tilde{\mu}$ the preimage of the meridian of $K_{2,q}$ and by $\tilde{\lambda}$ a component of the preimage of the longitude of $K_{2,q}$ in $M^3(K_{2,q})$. In particular, $\tilde{\mu}$ and $\tilde{\lambda}$ are nullhomologous in $M^3(K_{2,q})$, since a Seifert surface for $K_{2,q} \subset S^3$ lifts to $M^3(K_{2,q})$. The linking form of $M^3(K_{2,q})$ is given by the $1\times 1$ matrix $(\tfrac{1}{q})$; in fact $$lk(\tilde{\lambda}_1,\tilde{\lambda}_1)=1/q=lk(\tilde{\lambda}_2,\tilde{\lambda}_2).$$ Let $p$ be an odd prime and let $C_p$ denote the cyclic group of order $p$. This group can be identified with the group of $p$th roots of unity: $C_p = \{\zeta_p^a\}\subset {\bf C}$, where $\zeta_p=e^{2\pi i/p}$. If $p$ and $q$ are relatively prime then every character $\chi \colon\thinspace H_1(M^3(K_{2,q}))\to C_p$ is trivial.
On the other hand, if $p$ divides $q$, then the set of all characters $\chi \colon\thinspace H_1(M^3(K_{2,q})) \to C_p$ forms a cyclic group isomorphic to $C_p$. We can fix an isomorphism $\Hom(H_1(M^3(K_{2,q})),C_p)\cong C_p$ as follows: let $$\chi_1 \colon\thinspace H_1(M^3(K_{2,q}))\to C_p$$ denote the character which takes $\tilde{\lambda}_1$ to $\zeta_p$. Then any other character is obtained by post-composing $\chi_1$ with the homomorphism $C_p\to C_p$ of the form $\zeta_p^i \mapsto \zeta_p^{ia}$ for some integer $a$. We denote this composite by $\chi_a \colon\thinspace H_1(M^3(K_{2,q}))\to C_p$. Notice that $a$ is well defined modulo $p$, and, although the definition of $a$ depends on a choice of ordering of the two lifts $\widetilde{U}_1, \widetilde{U}_2$, the unordered pair $\{a, -a\}$ is independent of this choice. Then $$\chi_a(\tilde{\lambda}_1)=\zeta_p^a, \ \chi_a(\tilde{\lambda}_2)=\zeta_p^{-a}, \ \chi_a(\tilde{\mu}_1)=1, \ \chi_a(\tilde{\mu}_2)=1, \ \chi_a(\tilde{\mu})=1, \text{ and } \chi_a(\tilde{\lambda})=1.$$ The unbranched 2--fold cover $M^3(K_{2,q})-\widetilde{K}_{2,q}\to S^3-K_{2,q}$ induces a homomorphism $H_1(M^3(K_{2,q}) -\widetilde{K}_{2,q})\to H_1(S^3-K_{2,q})={\bf Z}$ with image $2{\bf Z}$. Dividing by two defines a surjection $\epsilon \colon\thinspace H_1(M^3(K_{2,q}) -\widetilde{K}_{2,q})\to {\bf Z}$. Writing ${\bf Z}=\langle t\rangle$ multiplicatively, we have $$\epsilon(\tilde{\mu}_i)=1, \epsilon(\tilde{\lambda}_i)=t, \epsilon(\tilde{\mu})=t,\text{ and } \epsilon(\tilde{\lambda})=1.$$ To see that $\epsilon(\tilde{\lambda}_1)=t=\epsilon(\tilde{\lambda}_2)$, notice that $\tilde{\lambda}_i$ is sent to $\lambda_U$ in $S^3-K_{2,q}$, which links $K_{2,q}$ twice; dividing the resulting exponent by two yields one. Recall that $M^3_0(K_{2,q})$ denotes the closed 3--manifold obtained by performing $0$--surgery on $\widetilde{K}_{2,q} \subset M^3(K_{2,q})$; that is, the surgery corresponding to the framing induced by a lift of a longitude of $K_{2,q}$ to $M^3(K_{2,q})$.
Since $\chi_a(\tilde{\lambda})=\zeta_p^0=1$ and $\epsilon(\tilde{\lambda})=t^0=1$, both $\chi_a$ and $\epsilon$ uniquely extend to homomorphisms on $H_1(M^3_0(K_{2,q}))$. We can view their product as a homomorphism to the multiplicative group of non-zero elements of the field of rational functions, ${\bf C}(t)$: $$\chi_a\times \epsilon \colon\thinspace H_1(M^3_0(K_{2,q}))\to {\bf C}(t)^\times.$$ Each homology class is sent to an element of the form $\zeta_p^b t^c$. We summarize these facts in the following lemma. \begin{lemma} Let $K$ be a knot in $S^3$, $K_{2,q}$ its $(2,q)$--cable, and $T_{2,q}$ the $(2,q)$--torus knot. Let $M^3(K_{2,q})$ denote the 2--fold branched cover of $S^3$ branched over $K_{2,q}$, and let $M^3_0(K_{2,q})$ denote the manifold obtained from 0--surgery on $M^3(K_{2,q})$ along the preimage of the branch set. Choose an odd prime $p$ and let $\zeta_p=e^{2\pi i/p}$. Then \begin{enumerate} \item $M^3(K_{2,q})$ is obtained from $M^3(T_{2,q})=L^3(q,1)$ by removing neighborhoods of the two preimages $\widetilde{U}_1$, $\widetilde{U}_2$ of $U$ and gluing in two copies of $S^3-nbhd(K)$, so that the meridian-longitude pairs of $\widetilde{U}_i$ and $K$ are interchanged. \item $H_1(M^3(K_{2,q}))={\bf Z}/q{\bf Z}$, generated by $\tilde{\lambda}_1$, and $lk(\tilde{\lambda}_1,\tilde{\lambda}_1)=1/q$. \item To any character $\chi \colon\thinspace H_1(M^3(K_{2,q}))\to C_p$ one can associate the integer $a$ modulo $p$ by the condition $\chi(\tilde{\lambda}_1)=\zeta_p^a$. This character is denoted $\chi_a$. In particular, this sets up a 1-1 correspondence between $C_p$--valued characters on $H_1(M^3(K_{2,q}))$ and on $H_1(M^3(T_{2,q}))$. \item The character $\chi_a$ uniquely determines a character (also denoted $\chi_a$) on $H_1(M^3_0(K_{2,q}))$. 
\item There is a surjection $\epsilon \colon\thinspace H_1(M^3_0(K_{2,q}))\to {\bf Z} = \langle t\rangle$ satisfying $\epsilon(\tilde{\mu}_i)=1, \epsilon(\tilde{\lambda}_i)=t, \epsilon(\tilde{\mu})=t,\text{ and } \epsilon(\tilde{\lambda})=1.$ \end{enumerate} \qed \end{lemma} \section{Casson-Gordon invariants} Let $\mathcal{J} \colon\thinspace {\bf C}(t)\to {\bf C}(t)$ denote the involution $\mathcal{J}(f(t))=\bar{f}(t^{-1})$; specifically, $$ \mathcal{J} \big(\frac{\sum a_i t^i}{\sum b_j t^j} \big)= \frac{\sum \overline{a}_i t^{-i}}{\sum \overline{b}_j t^{-j} },$$ where $\overline{a}_i$ denotes complex conjugation. We let $W({\bf C}(t), \mathcal{J})$ denote the corresponding Witt group of non-singular $\mathcal{J}$--Hermitian forms. This Witt group is discussed in more detail in Appendix~\ref{appwitt}. In brief, two forms $I_1$ and $I_2$ are equivalent if the sum $I_1\oplus -I_2 $ is metabolic; that is, if it contains a half-dimensional subspace on which the form vanishes. The set of equivalence classes of forms constitutes the Witt group, with operation induced by direct sum. For each choice of $K$ and $\chi \colon\thinspace H_1(M^3(K_{2,q}))\to C_p$ with $p$ an odd prime, the {\em Casson-Gordon invariant of $(K_{2,q}, \chi)$}, $$\tau (K_{2,q}, \chi)\in W({\bf C}(t), \mathcal{J})\otimes {\bf Z}_{(2)},$$ is defined as follows~\cite{CG}. (Here ${\bf Z}_{(2)}$ is ${\bf Z}$ {\it localized at} 2: the set of rational numbers with odd denominator.) Elementary bordism theory shows that $p \cdot (M^3_0(K_{2,q}), \chi\times \epsilon)$ bounds: say $p \cdot(M^3_0(K_{2,q}), \chi\times \epsilon)= \partial(Y^4, \rho)$. Then $Y^4$ has a non-singular ${\bf C}(t)$--valued intersection form $I(Y^4,\rho)\in W({\bf C}(t),\mathcal{J})$ defined using the cup product on middle degree cohomology, with local coefficients determined by the homomorphism $\rho \colon\thinspace \pi_1(Y^4)\to {\bf C}(t)^\times$.
On the other hand, $Y^4$ also has its ordinary intersection form $I(Y^4)\in \text{ Image } \{W({\bf Q})\to W({\bf C}(t), \mathcal{J})\}$. The Casson-Gordon invariant is defined to be $$\tau (K_{2,q}, \chi)=\tfrac{1}{p}(I(Y^4,\rho)-I(Y^4)).$$ Since $p$ is odd, $\frac{1}{p} \in {\bf Z}_{(2)}$. The correspondence between characters on $M^3(T_{2,q})$ and $M^3(K_{2,q})$ described in the previous section permits us to unambiguously define the difference $\tau(K_{2,q} ,\chi)-\tau(T_{2,q},\chi)$. A formula for this difference was established (in much greater generality) by Litherland in his influential article~\cite{Lith}. (See also Gilmer~\cite{Gilmer} for related work and applications of this approach.) Using this result, one can compute the difference of Casson-Gordon invariants for different choices of $K$. The answer is given in terms of an abelian invariant, $\alpha_K$, which we define next. Let $S_0^3(K)$ denote the 3--manifold obtained by $0$--surgery on the knot $K\subset S^3$. The orientations of $S^3$ and $K$ determine an isomorphism $\delta \colon\thinspace H_1(S_0^3(K))\to {\bf Z}=\langle x\rangle$. There is a 4--manifold $X^4$ and $\bar{\delta} \colon\thinspace \pi_1(X^4)\to \langle x\rangle$ so that $\partial(X^4,\bar{\delta})=(S_0^3(K),\delta)$. Then $X^4$ has a $ {\bf Q}[x^\pm]$-equivariant intersection form $I(X^4,\bar{\delta})$ and an ordinary integer-valued intersection form $I(X^4)$. The concordance invariant, $\alpha_K$, is defined to be the difference of these forms in the Witt group of ${\bf Q}(x)$: $$\alpha_K = I(X^4,\bar{\delta}) - I(X^4) \in W({\bf Q}(x), \mathcal{J}).$$ The class $\alpha_K\in W({\bf Q}(x), \mathcal{J})$ is determined by the algebraic concordance class of $K$; that is, the image of $K$ under the map $\mathcal{C}\to\mathcal{G}$.
Given a unit complex number $\omega$, the {\em Levine--Tristram $\omega$--signature of $K$} is defined to be the signature of the complex Hermitian matrix obtained by substituting $x=\omega$ into a matrix representative of $\alpha_K$. More generally, the map $x \mapsto \omega t$ induces a map $W({\bf Q}(x), \mathcal{J}) \to W({\bf C}(t), \mathcal{J})$, and we define $\alpha_K(\omega t)$ to be the image of $\alpha_K$ under this map. Litherland's theorem~\cite[Corollary 2]{Lith}, proven by a delicate Mayer--Vietoris argument, implies the following. \begin{prop} \label{prop2.1} For $K, q, \chi$, and $p$ as above, $$\tau(K_{2,q},\chi_a)-\tau(T_{2,q},\chi_a)= \alpha_K(\zeta_p^a t) + \alpha_K(\zeta_p^{-a} t) $$ in $W({\bf C}(t))\otimes {\bf Z}_{(2)}$.\qed \end{prop} Notice that $ \alpha_{K}(\zeta_p^{a} t) + \alpha_{K}(\zeta_p^{-a} t)$ is unchanged by replacing $a$ by $-a$. Moreover, for any knot $K$ and character $\chi$ (writing $\chi$ additively in this formula for simplicity), \begin{equation}\label{eq4.4} \tau(K,\chi)=\tau(K, -\chi).\end{equation} This is because the 2--fold covering transformation is an orientation-preserving diffeomorphism which preserves the orientation of the branch set but induces $-1$ on the first homology of the branched cover. Precomposing $\chi$ with this diffeomorphism yields $-\chi$. Hence $\tau(K_{2,q}, \chi_{a })= \tau(K_{2,q}, \chi_{-a })$. In particular, to a character $\chi \colon\thinspace H_1(M^3(K_{2,q}))\to C_p$ we can unambiguously assign $a\in\{0,1,2,\cdots, \tfrac{p-1}{2}\}$ by evaluating $\chi$ on one of $\tilde{\lambda}_1$ or $\tilde{\lambda}_2$ and replacing $\zeta_p^a$ by $\zeta_p^{p-a}=\zeta_p^{-a}$ if necessary. The number $a$ determines $\chi$ only up to sign, but it completely determines $\tau(K_{2,q},\chi_a)$ and $ \alpha_{K}(\zeta_p^{a} t) + \alpha_{K}(\zeta_p^{-a} t)$.
This also resolves the ambiguity introduced earlier in choosing an order of the lifts of $U$, since if $\chi(\tilde{\lambda}_1)=\zeta_p^a$, then $\chi(\tilde{\lambda}_2)=\zeta_p^{-a}$. Notice that $\chi$ is trivial if and only if $a=0$. \vskip.1in We conclude this section with a lemma describing the role of orientation in the values of $\tau$ and $\alpha$. \begin{lemma} $\alpha_K = -\alpha_{-K}$ and $\tau(K, \chi) = -\tau(-K, \chi)$. \end{lemma} \begin{proof} If we consider a representative of $K$ to be an embedded $S^1$ in $S^3$, then $-K$ is represented by the same $S^1 $ in $S^3$, but with the orientation of $S^3$ (and of $S^1$) reversed. Hence, there is a natural orientation-reversing homeomorphism from $M^3(K)$ to $M^3(-K)$. This permits us to formally make sense of the statement of the lemma; characters on the covers of $K$ and $-K$ can be identified via this homeomorphism. Given this, the only difference between the computations of the Witt class invariants of $K$ and $-K$ is that the relevant 4--manifolds have their orientations reversed. This has the effect of changing the signs of the intersection forms. \end{proof} \section{Linear combinations and slicing}\label{sectionli} Casson-Gordon invariants are used to obstruct sliceness of knots. The main result of~\cite{CG} implies that if a knot $K$ is slice, then there exists a metabolizer $V\subset H_1(M^3(K))$ for the linking form on $M^3(K)$ (as earlier, $M^3(K)$ denotes the 2--fold branched cover of $S^3$ over $K$), so that $\tau(K,\chi)=0$ for every character $\chi \colon\thinspace H_1(M^3(K )) \to {\bf C}^\times$ that factors through ${\bf Z}/p {\bf Z} $ and vanishes on $V$. (Recall that a {\em metabolizer} is a subgroup $V\subset H_1(M^3(K))$ on which the linking form vanishes and for which the order of $V$ is the square root of the order of $H_1(M^3(K))$.)
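As a simple sanity check on the metabolizer condition, consider a connected sum $K \# (-K)$, which is slice. This is a standard example, not specific to the knots of this paper:

```latex
% H_1 of the 2-fold branched cover of K # (-K) splits as H \oplus H,
% where H = H_1(M^3(K)), with linking form \ell \oplus (-\ell).
% The diagonal subgroup V = { (x,x) : x \in H } satisfies
\[
  (\ell \oplus -\ell)\big((x,x),(y,y)\big) = \ell(x,y) - \ell(x,y) = 0,
  \qquad |V|^2 = |H|^2 = |H \oplus H|,
\]
% so V is a metabolizer, as the Casson-Gordon theorem requires of a
% slice knot.
```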
If $p$ is a prime dividing the order of $H_1(M^3(K))$, then given any metabolizer $V$, one can find a {\em non-trivial} $C_p$--valued character which vanishes on $V$, since $H_1(M^3(K))/V$ necessarily has non-trivial $p$--torsion. Therefore, given any knot $K$ and a prime $p$ dividing the order of $H_1(M^3(K))$, if $\tau(K,\chi)\ne 0$ for all non-trivial $C_p$ characters $\chi$, then $K$ is not slice. Suppose now that a sequence $K^i$ of knots and a sequence $q_i$ of pairwise relatively prime odd integers, $i=1,2,\cdots$, are given. Although our techniques apply more generally, for our applications we can assume that the $q_{2i-1}$ are prime, and we do so henceforth. As explained above, the linear combination of cables \begin{equation}\label{eq3.1} J_i= K^i_{ 2, q_{2i-1}} + T_{ 2, q_{2i}} -K^i_{2, q_{2i}} -T_{ 2, q_{2i-1}} \end{equation} is algebraically slice. \begin{lemma} \label{lem3.1} If $\theta$ denotes the trivial character on $M^3(J_i)$, the 2--fold branched cover of $S^3 $ branched over $J_i$, then $ \tau(J_i,\theta)=0$. \end{lemma} \begin{proof} Let $\theta$ denote the trivial character on the first homology of the 2--fold branched cover of any knot. Applying Proposition \ref{prop2.1} and using the fact that Casson-Gordon invariants are additive with respect to connected sums of knots (see, for instance,~\cite[Corollary 1]{Lith} or~\cite{Gilmer}), one computes \begin{eqnarray*}\tau(J_i,\theta)&=&\tau(T_{2, q_{2i-1}} ,\theta) +\tau(T_{ 2, q_{2i}},\theta)+ \tau(-T_{2, q_{2i}},\theta)+ \tau(-T_{ 2,q_{2i-1}},\theta)\\ &&\quad + 2\alpha_{K^i}(t)+2\alpha_{-K^i}(t). \end{eqnarray*} But $\alpha_{K^i}(t)=-\alpha_{-K^i}(t)$ and, since $\theta$ is trivial, $\tau(T_{2,q},\theta)=-\tau(-T_{2,q},\theta)$. The lemma follows. \end{proof} Consider an algebraically slice linear combination \begin{equation}\label{eq3.2}J=\sum_{i=1}^N n_iJ_i.\end{equation} The 2--fold branched cover $M^3(J)$ of $S^3$ branched over $J$ is the connected sum of the (oriented) branched covers of the constituent knots in $J$.
Hence $$M^3(J)= \mathop{\#}_{i=1}^N n_i \big( M^3(K^i_{2, q_{2i-1}}) \mathop{\#} M^3(T_{ 2, q_{2i}})\mathop{\#} M^3(-K^i_{2, q_{2i}})\mathop{\#} M^3(-T_{ 2, q_{2i-1}})\big).$$ Assume that $ n_1>0.$ Let $\chi \colon\thinspace H_1(M^3(J))\to C_{q_1}$ be a character. Let $\zeta_{q_1}=e^{2\pi i/{q_1}}$. The assumption that $q_i$ is relatively prime to $q_1$ for $i>1$ implies that $\chi$ vanishes on each summand in the connected sum, except possibly for some of the $M^3(K^1_{2,q_1}) $ and $M^3(-T_{2,q_1} )$ summands. On these summands, $\chi$ determines integers $a_1,a_2,\cdots, a_{n_1}$ and $b_1,\cdots, b_{n_1}$ in $\{ 0,1,2,\cdots, \tfrac{q_1-1}{2}\}$ by restricting $\chi$ to the $M^3(K^1_{2,q_1}) $ and $M^3(-T_{2,q_1} )$ summands, respectively, and evaluating on the corresponding lifts $\tilde{\lambda}_1$ or $\tilde{\lambda}_2$ in each summand, as in the previous section. Using Lemma~\ref{lem3.1}, one concludes \begin{equation}\label{eq3.3} \tau(J,\chi)= n_1\big(\tau(-K^1_{2, q_2} ,\theta)+\tau(T_{2,q_2},\theta) \big) + \sum_{i=1}^{n_1}\big( \tau(K^1_{2,q_1},\chi_{a_i})+\tau( -T_{2,q_1},\chi_{b_i})\big)\end{equation} where $\chi_{a_i}$ denotes the restriction of $\chi$ to $H_1(M^3(K^1_{2,q_1})) $ and $\chi_{b_i}$ denotes the restriction of $\chi$ to $H_1(M^3(-T_{2,q_1}) )$. Proposition \ref{prop2.1} gives the two equations: $$\tau(-K^1_{2,q_2},\theta)+\tau(T_{2, q_2},\theta)= 2\alpha_{-K^1}(t) $$ $$\tau(K^1_{2,q_1},\chi_{a_i})=\tau(T_{2, q_1},\chi_{a_i})+ \alpha_{K^1}(\zeta_{q_1}^{a_i} t)+ \alpha_{K^1}(\zeta_{q_1}^{-a_i} t).$$ Substituting these equations in Equation (\ref{eq3.3}) shows that \begin{equation}\label{eq4.3} \tau(J,\chi)=2n_1 \alpha_{-K^1}(t) +\sum_{i=1}^{n_1}\big( \alpha_{K^1}(\zeta_{q_1}^{a_i} t) + \alpha_{K^1}(\zeta_{q_1}^{-a_i} t) + \tau(T_{2,q_1},\chi_{a_i})-\tau(T_{2,q_1},\chi_{b_i}) \big). \end{equation} Summarizing, we have: \begin{prop} \label{prop3.2} Let $J_i$ and $J$ be the knots described in Equations (\ref{eq3.1}) and (\ref{eq3.2}).
Assume that $n_1 > 0$, that $p$ is an odd prime not dividing $q_i$ for $i > 1$, and that $\chi \colon\thinspace H_1(M^3(J))\to C_{p}$ is a character, determining integers $a_i, b_i\in\{0,1,\cdots, \tfrac{p-1}{2}\}$ as described above. Then \begin{equation*}\label{eq4.5} \tau(J,\chi)= -2n_1 \alpha_{K^1}(t) +\sum_{i=1}^{n_1}\big( \alpha_{K^1}(\zeta_p^{a_i} t) + \alpha_{K^1}(\zeta_p^{-a_i} t)+ \tau(T_{2,q_1},\chi_{a_i})-\tau(T_{2,q_1},\chi_{b_i}) \big). \end{equation*} \qed \end{prop} As explained above, given any metabolizer, one can find a non-trivial character that vanishes on it. Therefore, taking $p=q_1$ in Proposition \ref{prop3.2} and applying the main result of Casson and Gordon~\cite{CG}, one concludes the following. \begin{corollary}\label{deprel} If a knot $J$ as above is slice and $n_1 > 0$, then for some set of elements $a_i, b_i \in \{0, 1, \ldots , \frac{q_1 -1}{2}\}$, not all 0, the sum $$ -2n_1 \alpha_{K^1}(t) +\sum_{i=1}^{n_1}\big( \alpha_{K^1}(\zeta_{q_1}^{a_i} t) + \alpha_{K^1}(\zeta_{q_1}^{-a_i} t)+ \tau(T_{2,q_{1}},\chi_{a_i})-\tau(T_{2,q_{1}},\chi_{b_i}) \big) $$ represents $ 0 \in W({\bf C}(t)) \otimes {\bf Z}_{(2)}.$ \end{corollary} To apply this as an obstruction to knots being slice, we must better understand the invariants of $W({\bf C}(t)) \otimes {\bf Z}_{(2)}$. This is accomplished in the next section. \section{Signatures and discriminants} There are two fundamental types of invariants that can detect the nontriviality of elements $\tau\in W({\bf C}(t), \mathcal{J}) \otimes {\bf Z}_{(2)}$: signatures and discriminants. Discriminants can be computed algorithmically (see~\cite{KL1}), but they take values in a 2--torsion group, and thus their use in detecting elements of infinite order is quite tricky. Signatures take values in a torsion-free group, ${\bf Z}_{(2)}$, but are difficult to compute. We now describe a method which will allow us to bypass these difficulties by taking advantage of the interplay between signatures and discriminants.
An added advantage of this approach is that it helps us avoid the usually challenging problem of identifying and analyzing all possible metabolizers for the linking forms of the relevant 3--manifolds. Useful references for Casson-Gordon discriminant invariants include~\cite{gl2} and~\cite{KL1}. \subsection{Basic facts about $W({\bf C}(t), \mathcal{J}) \otimes {\bf Z}_{(2)}$.} In Appendix~\ref{appwitt} we present some of the details concerning the Witt group $W({\bf C}(t), \mathcal{J}) \otimes {\bf Z}_{(2)}$. Here are the key points that we need. \begin{itemize} \item If $I\in W({\bf C}(t), \mathcal{J})$ is represented by a Hermitian matrix $A$ with polynomial entries, the {\em jump function} $$j(I)\colon\thinspace S^1 \to {\bf Z}$$ assigns to $\omega\in S^1$ half the jump of the signature function $\operatorname{sign}(A(\omega))$ at $\omega$. The function $j(I)$ has finite support. It extends to a well-defined ${\bf Z}_{(2)}$--valued function on $W({\bf C}(t), \mathcal{J}) \otimes {\bf Z}_{(2)}$.
\item The {\em discriminant} of a class $I = [A] \in W({\bf C}(t), \mathcal{J})$, where the matrix $A$ is of rank $n$, is given by $$\operatorname{disc}(I)=(-1)^{n(n-1)/2}\det(A).$$ This defines a {\it function} (but not a homomorphism), $$\operatorname{disc}\colon\thinspace W({\bf C}(t), \mathcal{J})\to ({\bf C}(t)^\mathcal{J})^\times/N,$$ where $({\bf C}(t)^\mathcal{J})^\times $ denotes the non-zero symmetric ($f = \mathcal{J}(f)$) rational functions and $N $ denotes the {\em norms}; that is, the multiplicative subgroup of ${\bf C}(t)^\times$ defined as $$N= \{f \mathcal{J}(f) \ |\ f \in {\bf C}(t), f \ne 0 \}.$$ \item By taking the further quotient by the subgroup $\pm 1$ there is a well-defined homomorphism $$\operatorname{disc}_\pm \colon\thinspace W({\bf C}(t), \mathcal{J})\to ({\bf C}(t)^\mathcal{J})^\times/\pm N.$$ \item A class $d\in ({\bf C}(t)^\mathcal{J})^\times/N$ has a canonical representative in $ ({\bf C}(t)^\mathcal{J})^\times$ of the form $$d= at^{-n}\prod_{i=1}^{2n} (t- \omega_i),$$ where the $\omega_i$ are distinct unit complex numbers and $a^2 = 1/ \prod \omega_i$. If $d = \operatorname{disc}(I)$, then the numbers $\omega_i$ are called {\it the roots} of $\operatorname{disc}( I)$. \item There is a natural extension of $\operatorname{disc}_\pm$ to $W({\bf C}(t), \mathcal{J}) \otimes {\bf Z}_{(2)}$, defined by $\operatorname{disc}_\pm (I\otimes\frac{p}{q}) = (\operatorname{disc}_\pm I)^p$. This is again a homomorphism. \item A class $\frac{p}{q}I \in W({\bf C}(t), \mathcal{J}) \otimes {\bf Z}_{(2)}$ has $j(\tfrac{p}{q}I)(\omega)$ odd if and only if $ \omega$ is a root of $\operatorname{disc}_\pm(\frac{p}{q}I )$. (An element of ${\bf Z}_{(2)}$ is called odd if it is not in $2{\bf Z}_{(2)}$ and even otherwise.) \end{itemize} \subsection{Twisted polynomials and the discriminant.} Let $\chi \colon\thinspace \pi_1(M^3(K)) \to C_p$.
Then, as described in~\cite{KL1}, we may associate to $K$ and $\chi$ a {\it twisted Alexander polynomial} $\Delta_{K,\chi}(t) \in {\bf C}[t^\pm]$. Theorem 6.5 of~\cite{KL1} states: \begin{theorem} \label{thm:discrim1} $\operatorname{disc}_\pm(\tau(K,\chi))=(1-t)^e\Delta_{{K},\chi}(t),$ where $e=0$ if $\chi$ is trivial and $e=1$ if $\chi$ is non-trivial. \qed \end{theorem} \noindent Note that in this theorem the twisted polynomial is well-defined up to multiplication by $at^k$ for $k \in {\bf Z}$ and $a \in {\bf C}^\times$, while the discriminant is well-defined up to $\pm N$. We refer the reader to~\cite[Sections 2.2 and 6]{KL1} for further details, and to~\cite{HKL} for an alternative description of this twisted Alexander polynomial as a twisted polynomial of a 2--dimensional metabelian representation of $\pi_1(S^3-K)$. Theorem \ref{thm:discrim1} and the discussion that precedes it generalize to the Casson--Gordon setting the well-known facts that the discriminant of $\alpha_K(x)$ equals the ordinary Alexander polynomial of $K$ modulo norms, and that the jump function of the Levine--Tristram $\omega$--signatures is supported on the roots of the Alexander polynomial (see the first paragraphs of Section \ref{secttorusknot} below). \subsection{The discriminant and jump function of the torus knot ${\pmb T_{2,p}}$.} As an important example, Theorem~\ref{thm:discrim1} allows us to readily compute the discriminant of the Casson-Gordon invariant of $T_{2,p}$ when $p$ is a prime and $\chi$ is any $C_p$--valued character. Combining this with Corollary~\ref{jumpcor}, we obtain information about the jumps of the signature function of $\tau(T_{2,p}, \chi_a)$. \begin{lemma}\label{discrim} Let $T_{2,p}$ denote the $(2,p)$--torus knot for some odd prime $p$, and $M^3(T_{2,p})$ its 2--fold branched cover. Let $f(t)=1+t+t^2+\cdots + t^{p-1}$.
There exists $d \in\{1,2,\cdots, \tfrac{p-1}{2}\}$ so that for any $a$, $$\operatorname{disc}_\pm(\tau(T_{2,p}, \chi_{a}))= \frac{t^{\frac{3-p}{2}}f(t)}{ (t-\zeta_p^{ad})(t-\zeta_p^{-ad})}. $$ Hence if $a\not\equiv 0\pmod{p}$ and $\theta$ denotes the trivial character, $$ j\big(\tau(T_{2,p}, \chi_{a })-\tau(T_{2,p}, \theta)\big)(\omega)\text{ is }\begin{cases} \text{even}&\text{ if } \omega\ne \zeta_p^{\pm ad}\\ \text{odd}& \text{ if } \omega=\zeta_p^{\pm ad}. \end{cases}$$ \end{lemma} \begin{proof} The group of the $(2,p)$--torus knot has presentation $\pi=\langle \alpha, \beta \ | \ \alpha^2\beta^{p}\rangle$. Define $n=\frac{p-1}{2}$. The meridian (that is, the generator of $H_1(S^3-T_{2,p})={\bf Z}$) is given by $\mu=\alpha + n\beta$, and in $H_1(S^3-T_{2,p})$, $\alpha=p\mu$ and $\beta=-2\mu$. We use the methods and notation of~\cite{HKL}. In that article it is explained how a choice of character $\chi \colon\thinspace H_1(M( T_{2,p}))\to C_p$ determines and is determined by a dihedral representation of $\pi$. Let ${\bf Z}/2=\langle x\ | \ x^2=1\rangle$ act on $C_p=\{\zeta_p^i\}$ via $x\cdot \zeta_p=\zeta_p^{-1}$; then given $d\in \{0,1, \cdots, p-1\}$, $$\alpha\mapsto x, \ \beta\mapsto \zeta_p^d$$ determines a ${\bf Z}/2\ltimes C_p$ representation since $x^2 = 1 =(\zeta_p^d)^p$. This representation restricts to a trivial representation on the 2--fold cover if and only if $d=0$, since $\beta=-2\mu$ in $H_1(S^3-T_{2,p})$. From this dihedral representation, a recipe given in~\cite{HKL} produces a $GL_2({\bf C}[t^{\pm 1}])$ representation of $\pi$ whose associated twisted Alexander polynomial is $\Delta_{ {K},\chi}(t)$.
The recipe produces the representation $\rho \colon\thinspace \pi\to GL_2({\bf C}[t^{\pm 1}])$: $$\rho(\alpha)= \begin{pmatrix} 0&1\\t&0 \end{pmatrix}^p=t^n\begin{pmatrix} 0&1 \\ t &0\end{pmatrix}, \ \rho(\beta)= \begin{pmatrix} 0&1\\ t&0\end{pmatrix}^{-2}\begin{pmatrix} \zeta&0\\0&\zeta^{-1}\end{pmatrix}^d=t^{-1}\begin{pmatrix} \zeta^d&0\\0&\zeta^{-d}\end{pmatrix}.$$ Theorem 7.1 of~\cite{HKL} shows that $\Delta_{ {K},\chi}(t)$ is the order of the ${\bf C}[t^{\pm1}]$-torsion of the corresponding twisted first homology module $H_1(S^3-K; ({\bf C}[t^{\pm 1}])^2_\rho)$; here $({\bf C}[t^{\pm 1}])^2_\rho$ is $({\bf C}[t^{\pm 1}])^2 = {\bf C}^2 \otimes {\bf Z}[t^\pm]$ viewed as a ${\bf Z}[\pi_1(M^3_0(T_{2,p}))]$--module via the representation $\rho \otimes \epsilon$, where $\epsilon$ is the canonical action of ${\bf Z}$ on ${\bf Z}[t^\pm]$. Let $\Delta_0$ denote the order of $H_0(S^3-K; ({\bf C}[t^{\pm 1}])^2_\rho)$. Note that $H_0(S^3-K; ({\bf C}[t^{\pm 1}])^2_\rho)$ is the cokernel of the matrix obtained by substituting the extension of $\rho$ to ${\bf Z}\pi\to gl_2({\bf C}[t^{\pm 1}])$ into the matrix $$\partial_1= \begin{pmatrix} \alpha-1 \\ \beta-1 \end{pmatrix}$$ (this matrix represents the differential on $1$-chains in the universal cover). A simple calculation shows that $H_0(S^3-K; ({\bf C}[t^{\pm 1}])^2_\rho)$ is trivial if $d\ne 0$, and ${\bf C}[t^{\pm1}]/\langle t-1\rangle$ if $d=0$. Thus $\Delta_0=(t-1)^{e-1}$ where $e=0$ if $d=0$ and $e=1$ if $d\ne 0$. To compute $\Delta_{ {K},\chi}(t)$ we first compute the Fox matrix $$ \partial_2=\begin{pmatrix} 1+\alpha & \alpha^2(1+\beta+\cdots+\beta^{p-1}) \end{pmatrix}$$ representing the differential on $2$--chains in the universal cover. 
Then Theorem 4.1 of~\cite{KL1} shows that $$ \Delta_{ {K},\chi}(t) =\frac{\det(\rho(1+\alpha))}{\det(\rho(\beta-1))}\Delta_0=\frac{\det(\rho( \alpha^2(1+\beta+\cdots+\beta^{p-1})))}{\det(\rho(\alpha-1))}\Delta_0.$$ Now $$\det(\rho(1+\alpha))=\det \begin{pmatrix} 1&t^n \\ t^{n+1} &1 \end{pmatrix}=1-t^p $$ and $$\det(\rho(\beta-1))=\det \begin{pmatrix} t^{-1}\zeta_p^d-1&0 \\ 0&t^{-1}\zeta_p^{-d}-1 \end{pmatrix}= t^{-2}(t-\zeta_p^d)(t-\zeta_p^{-d}). $$ Using Theorem~\ref{thm:discrim1} we find that for some $a$ and $k$, $$\operatorname{disc}_\pm(\tau(T_{2,p},\chi))=a t^k (1-t)^e\Delta_{ {K},\chi}(t)=a t^k (1-t)^{2e-1}t^2\frac{1-t^p}{(t-\zeta_p^d)(t-\zeta_p^{-d})}. $$ Since $-t^{-1}(1-t)^2=(1-t)\mathcal{J}(1-t)$, and $f(t)(t-1)=t^p-1$, this can be rewritten (perhaps after changing $a$ and $k$) as $$\operatorname{disc}_\pm(\tau(T_{2,p},\chi))= a t^k \frac{f(t)}{(t-\zeta_p^d)(t-\zeta_p^{-d})}. $$ Symmetry of the discriminant then implies that $k = \frac{3-p}{2}$ and $a = \pm 1$. The lemma follows from the fact that all non-trivial characters are multiples of $\chi_1$. Hence if $d$ is chosen so that the character $\chi_1$ takes $\beta$, viewed as a loop in the 2--fold cover, to $\zeta_p^d$, then $\chi_a$ corresponds to the character that takes $\beta$ to $\zeta_p^{ad}$. \end{proof} Given an odd prime $p$, define a homomorphism \begin{equation} \Psi_p \colon\thinspace W({\bf C}(t), \mathcal{J})\otimes {\bf Z}_{(2)} \to ( {\bf Z}_{(2)})^{(p-1)/2} \end{equation} by evaluating the jump function at the non-trivial $p$th roots of unity in the upper half-circle: $$\Psi_p(I)= \big(j(I)(\zeta_p), j(I)(\zeta_p^2),\cdots, j(I)(\zeta_p^{(p-1)/2})\big).$$ Note that for $I\in W({\bf C}(t))\otimes {\bf Z}_{(2)}$, $ \Psi_p(I)\in (2{\bf Z}_{(2)})^{(p-1)/2}$ if and only if $\operatorname{disc}_\pm(I)$ has no roots among $\zeta_p,\zeta_p^2,\cdots, \zeta_p^{(p-1)/2}$.
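To make the parity statement concrete, take $p=5$ and suppose, for this sketch only, that the integer $d$ of Lemma \ref{discrim} equals $1$ (for other values of $d$ one obtains a permuted identity instead):

```latex
% Lemma: disc_\pm(\tau(T_{2,5},\chi_a)) = t^{-1} f(t)/((t-\zeta_5^a)(t-\zeta_5^{-a})),
% with f(t) = 1 + t + t^2 + t^3 + t^4 = \prod_{b=1}^{4}(t - \zeta_5^b).
% The lemma says j(\tau(T_{2,5},\chi_a) - \tau(T_{2,5},\theta))(\omega) is odd
% exactly when \omega = \zeta_5^{\pm a}, so evaluating at \zeta_5, \zeta_5^2:
\[
  \Big( j\big(\tau(T_{2,5},\chi_a)-\tau(T_{2,5},\theta)\big)(\zeta_5^{k})
  \Big)_{a,k=1,2}
  \equiv
  \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \pmod{2}.
\]
% A matrix over Z_{(2)} with odd determinant is invertible, which is the
% mechanism behind the injectivity argument below.
```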
\begin{corollary}\label{cor5.5} If $p$ is an odd prime, the Witt classes $$ \tau(T_{2,p}, \chi_{a })-\tau(T_{2,p}, \theta)\in W({\bf C}(t), \mathcal{J})\otimes {\bf Z}_{(2)}, \ a=1,2,\cdots,\tfrac{p-1}{2}$$ are linearly independent and their span is mapped injectively to $ ({\bf Z}_{(2)})^{(p-1)/2}$ by $\Psi_p$. \end{corollary} \begin{proof} Consider the homomorphism $$\Phi \colon\thinspace ({\bf Z}_{(2)})^{(p-1)/2} \to W({\bf C}(t), \mathcal{J})\otimes {\bf Z}_{(2)}$$ which takes the $a$th coordinate vector to the difference $\tau(T_{2,p}, \chi_a)-\tau(T_{2,p},\theta)$ in $ W({\bf C}(t), \mathcal{J})\otimes {\bf Z}_{(2)}$. Lemma \ref{discrim} implies that the matrix for $p \cdot \Psi_p \circ \Phi$ differs from a permutation matrix by an even matrix, and hence has odd (and, in particular, non-zero) determinant. It follows that $\Phi$ is injective. \end{proof} \section{The main examples} In Section~\ref{sectionli} we considered the knots $$J_i= K^i _{2,q_{2i-1}} + T_{2, q_{2i}} - K^i_{2, q_{2i}} - T_{ 2, q_{2i-1}}.$$ Our goal is to prove that for appropriate choices of $K^i$ and $q_i$, the set $\{J_i\}_{i=1}^\infty$ is linearly independent. The conditions on the knots $K^i$ which we will need to arrive at a contradiction (to the assumption that $J$ is slice) are that the $K^i$ be $p$--{\it deficient} and $p$--{\it independent}. These are conditions on the algebraic concordance class of $K^i$. Roughly stated, $K$ is $p$--deficient if its (Levine--Tristram) signature function has no jumps at $p$th roots of unity, and is $p$--independent if the abelian Witt invariant $\alpha_K(t)$ and its translates $\alpha_K(\zeta_p^at)$ are linearly independent in $W({\bf C}(t))$. \begin{definition} Given a knot $K$ and an odd prime $p$, we say that $K$ is {\it $p$--deficient} if $j(\alpha_K(t))(\zeta_p^a) = 0$ for all $a\in\{0,1,\cdots, p-1\}$.
\end{definition} \begin{definition} Given a knot $K$ and an odd prime $p$, we say that $K$ is {\it $p$--independent} if the Witt classes $ \alpha_K(\zeta_p^a t), $ $a\in\{0,1,\cdots, p-1\}$, in $W({\bf C}(t)) \otimes {\bf Z}_{(2)}$ are linearly independent. \end{definition} \begin{lemma}\label{defindeplem} If a knot $K$ is $p$--deficient and $p$--independent, then for any choice of integers $n>0$ and $a_1,\cdots, a_n\in\{0,1,\cdots, p-1\}$ with not all $a_i$ zero, $$ -2n \alpha_{K }(t) +\sum_{i=1}^{n }\big( \alpha_{K }(\zeta_{p}^{a_i} t) + \alpha_{K }(\zeta_{p}^{-a_i} t)\big)$$ is a non-zero element of the kernel of $\Psi_p\colon\thinspace W({\bf C}(t)) \otimes {\bf Z}_{(2)}\to ({\bf Z}_{(2)})^{(p-1)/2}$. \end{lemma} \begin{proof} Note that $j(\alpha_K(\zeta_p^{a_i}t))(\zeta_p^a)=j(\alpha_K)(\zeta_p^{a_i+a})$, which vanishes since $K$ is $p$--deficient. Hence $\Psi_p\big( -2n \alpha_{K }(t) +\sum_{i=1}^{n }\big( \alpha_{K }(\zeta_{p}^{a_i} t) + \alpha_{K }(\zeta_{p}^{-a_i} t)\big) \big)=0$. Since $K$ is $p$--independent and $-2n \alpha_{K }(t) +\sum_{i=1}^{n }\big( \alpha_{K }(\zeta_{p}^{a_i} t) + \alpha_{K }(\zeta_{p}^{-a_i} t)\big)$ is a non-trivial (not all $a_i$ are zero) linear combination of the $\alpha_K(\zeta_p^at)$, it is non-zero. \end{proof} Nontrivial examples of $p$--deficient and $p$--independent knots will be presented in Section~\ref{secttorusknot}. In particular, we will show that the trefoil, $T_{2,3}$, is $p$--deficient and $p$--independent for all primes $p>3$. \begin{theorem} Let $J=\sum_{i=1}^N n_iJ_i$ with the $J_i$ as above. If, for some $j$ with $n_j\ne 0$, the knot $K^j$ is $q_j$--deficient and $q_j$--independent, then $J$ is not slice. \end{theorem} \begin{proof} Suppose that $J$ is slice. Assume, by changing sign and reindexing if necessary, that $j=1$ and that $n_1>0$.
In this case, we found in Corollary~\ref{deprel} that for some set of elements $a_i, b_i \in \{0, 1, \ldots , \frac{q_1 -1}{2}\}$, not all 0, $$ -2n_1 \alpha_{K^1}(t) +\sum_{i=1}^{n_1}\big( \alpha_{K^1}(\zeta_{q_1}^{a_i} t) + \alpha_{K^1}(\zeta_{q_1}^{-a_i} t)+ \tau(T_{2,q_{2i-1}},\chi_{a_i})-\tau(T_{2,q_{2i-1}},\chi_{b_i}) \big) =0$$ in $ W({\bf C}(t)) \otimes {\bf Z}_{(2)}.$ Applying the function $\Psi_{q_1}$ to this equation we find, using Lemma \ref{defindeplem}, that $$\Psi_{q_1}(\tau(J,\chi))=\Psi_{q_1}\big(\sum_{i=1}^{n_1}\tau(T_{2,q_1},\chi_{a_i})-\tau(T_{2,q_1},\chi_{b_i}) \big) = 0.$$ This can be rewritten as $$\Psi_{q_1}\big(\sum_{i=1}^{n_1}\big(\tau(T_{2,q_1},\chi_{a_i}) - \tau(T_{2,q_1},\theta)\big) -\big(\tau(T_{2,q_1},\chi_{b_i}) - \tau(T_{2,q_1},\theta) \big) \big) = 0.$$ By Corollary~\ref{cor5.5}, this implies that $$ \sum_{i=1}^{n_1}\big( \tau(T_{2,q_1},\chi_{a_i}) - \tau(T_{2,q_1},\theta)\big) -\big(\tau(T_{2,q_1},\chi_{b_i}) - \tau(T_{2,q_1},\theta) \big) = 0,$$ and thus $\sum_{i=1}^{n_1}\tau(T_{2, q_1},\chi_{a_i})-\tau(T_{2,q_1},\chi_{b_i}) =0$. We also conclude that the (unordered) sets $\{a_1,a_2,\cdots, a_{n_1}\}$ and $\{b_1,b_2,\cdots, b_{n_1}\}$ coincide. In particular, at least one of the $a_i$ is non-zero. Thus \begin{equation*} \label{eq6.2} 0=-2n_1 \alpha_{K^1}(t) +\sum_{i=1}^{n_1} \alpha_{K^1}(\zeta_{q_1}^{a_i } t) + \alpha_{K^1}(\zeta_{q_1}^{-a_i} t). \end{equation*} But this is impossible by Lemma \ref{defindeplem}. Hence $J$ cannot be slice. \end{proof} With this, our main result follows. \begin{theorem}\label{maintheorem} Let $q_i$ be a sequence of positive integers with $q_{2i - 1}$ prime for all $i$ and $q_{2i}$ relatively prime to $q_{2j-1}$ for all $i,j$. Let $K^i$ be a sequence of knots so that $K^i$ is $q_{2i-1}$--deficient and $q_{2i-1}$--independent for all $i$.
Let $$J_i = K^i_{ 2, q_{2i-1}} \mathop{\#} T_{ 2, q_{2i}} \mathop{\#} -K^i_{2, q_{2i}} \mathop{\#} -T_{ 2, q_{2i-1}}.$$ Then the $J_i$ are linearly independent, algebraically slice knots. \qed \end{theorem} As mentioned above, the next section shows that $T_{2,3}$ is both $p$--deficient and $p$--independent for all primes $p>3$. Given this, the following corollaries are immediate, and yield Theorems 1, 2, and 3 of the introduction. \begin{corollary}\label{maincor} The algebraically slice knot $$T_{2,3;2, 13} + T_{2,15}-T_{2,3;2,15} -T_{2,13}$$ has infinite order in $\mathcal{C}$. \end{corollary} \begin{proof} The assertion follows immediately from Theorem \ref{maintheorem}. \end{proof} \begin{corollary}\label{large} Let $q_1=13, q_2=17, \cdots$ be the increasing list of primes greater than 11. Then the set of algebraic knots $$\{ T_{2,q_i}, \ T_{2,3;2,q_i}\}_{i=1}^\infty$$ is a basis for a free abelian subgroup of the concordance group $\mathcal{C}$. This subgroup intersects the kernel of $\mathcal{C}\to\mathcal{G}$ in a free abelian subgroup, with basis the set of algebraically slice knots $$ \{T_{2,q_i} -T_{2,3;2,q_i}-T_{2,13} +T_{2,3;2,13}\}_{i=2}^\infty.$$ \end{corollary} \begin{proof} Consider a linear combination $$J=\sum_{i=1}^N n_i T_{2,q_i} + m_i T_{2,3;2,q_i}.$$ Suppose that $J$ is slice. We will show that each $n_i$ and $m_i$ is zero. Fix $\ell$. When evaluated at $\omega=e^{2\pi i/(2q_\ell)}$, the jump function for the Levine--Tristram signature of a knot in $\{ T_{2,q_i}, \ T_{2,3;2,q_i}\}_{i=1}^\infty$ is non-zero only for the knots $T_{2,q_\ell}$ and $T_{2,3;2,q_\ell}$, since the $q_i$ are different primes. Indeed, for $T_{2,q_\ell}$ and $T_{2,3;2,q_\ell}$, the jump is equal to $-1$ (see, for example,~\cite{Litherland1979}). This implies that $m_\ell=-n_\ell$. Furthermore, the jump function for the Levine--Tristram signature of $T_{2,3;2,q_i}$, evaluated at $\omega=e^{2\pi i/12}$, is equal to $-1$ for all $i$.
For this value of $\omega$, however, the jump function for $T_{2,q_i}$ is zero. It follows that the sum of the $n_i$ is zero. Thus \begin{equation}\label{eq6.1} J=\sum_{i=1}^N n_i (T_{2,q_i} -T_{2,3;2,q_i}) \text{ with } \sum_{i=1}^N n_i=0. \end{equation} Any knot of the form (\ref{eq6.1}) is algebraically slice: indeed, its Blanchfield form is $$Bl_J(t)= \sum_{i=1}^N n_i( Bl_{T_{2,q_i}}(t) - Bl_{T_{2,3}}(t^2)- Bl_{T_{2,q_i}}(t))=-\sum_{i=1}^N n_i( Bl_{T_{2,3}}(t^2))=0.$$ Since $\sum n_i=0$, we have $$J=\sum_{i=1}^N n_i (T_{2,q_i} -T_{2,3;2,q_i})- \sum_{i=1}^N n_i (T_{2,11} -T_{2,3;2,11}),$$ as an equation in $\mathcal{C}$. Theorem \ref{maintheorem}, together with the fact that $T_{2,3}$ is $p$--deficient and $p$--independent for $p>3 $, implies that each $n_i$ is zero. This proves that the set $\{T_{2,q_i}, T_{2,3;2,q_i}\}$ is linearly independent. Since the jumps in the Levine--Tristram signature functions are determined by the algebraic concordance class of a knot, (\ref{eq6.1}) shows that the intersection of the span of $\{T_{2,q_i}, T_{2,3;2,q_i}\}$ with the kernel of $\mathcal{C}\to \mathcal{G}$ is a free abelian group, with basis the set of algebraically slice knots $\{T_{2,q_i} -T_{2,3;2,q_i}-T_{2,13} +T_{2,3;2,13}\}_{i=2}^\infty$. \end{proof} \begin{corollary} Let $\mathcal{A} \subset \mathcal{C}$ denote the subgroup of the knot concordance group generated by algebraic knots. The intersection of $\mathcal{A}$ with the kernel of the map $\mathcal{C} \to \mathcal{G}$ to the algebraic concordance group contains an infinitely generated free abelian group.\qed \end{corollary} \section{Torus knot examples: $p$--deficiency and $p$--independence.}\label{secttorusknot} Let $K$ be a knot in $S^3$, $F$ a Seifert surface for $K$ and $V$ the Seifert form for $F$. 
There are several well-known constructions of a $4$-manifold $X^4$ with boundary $S_0^3(K)$ over which $\delta\colon\thinspace H_1(S^3_0(K))\to {\bf Z}$ extends, such that the equivariant intersection form of $X^4$ is $I(X^4,\bar{\delta})=(1-x)V+(1-x^{-1})V^T$ and the ordinary intersection form is $I(X)=(1)$. Such constructions can be found in~\cite{CG, Ka, KKR}. It follows that $\alpha_K(x)\in W({\bf Q}(t))$ is represented by the matrix $$\begin{pmatrix}(1-x)V+(1-x^{-1})V^T&0\\0&-1 \end{pmatrix}.$$ Since $(x^{-1}-1)\big(xV- V^T\big)=(1-x)V+(1-x^{-1})V^T$, and the Alexander polynomial satisfies $$\Delta_K(x)=\det\big(xV- V^T\big),$$ it follows that the jumps in the Levine--Tristram signature function of $\alpha_K(x)$ are supported on a subset of the roots of the Alexander polynomial. Notice that this is a more precise statement than saying that the odd jumps occur at roots of the discriminant, since the Alexander polynomial is well--defined in ${\bf Z}[x^{\pm1}]$. Furthermore, if $\omega$ is a root of unity and $(x-\omega)$ divides $\Delta_K(x)$ but $(x-\omega)^2$ does not divide $\Delta_K(x)$, then $j_\omega(\alpha_K)=\pm 1$, improving the conclusion of Corollary \ref{corA6}. \begin{theorem}For any relatively prime integers $m$, $n$, and $q$, and any prime divisor $p$ of $q$, the torus knot $T_{m,n}$ is $p$--deficient and $p$--independent. \end{theorem} \begin{proof} The Alexander polynomial of $T_{m,n}$ is $$\Delta_{T_{m,n}}(x) = \frac{(x^{mn} -1)(x-1)}{(x^m-1)(x^n-1)}.$$ Thus, the only roots of $\Delta_{T_{m,n}}(x)$ are the $mn$--roots of unity which are not also $m$-- or $n$--roots of unity. It follows that the jumps in the Levine--Tristram signature function of $\alpha_{T_{m,n}}$ occur, and equal $\pm1$, at the unit complex numbers $e^{2\pi i\frac{c}{mn}} = \zeta_{mn}^c$, where $c$ is not divisible by either $m$ or $n$, and $1 \le c \le mn-1$. (There are $(m-1)(n-1)$ such $c$.)
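The root count just stated, together with the coprimality facts used in the rest of the proof, can be spot-checked directly; in the sketch below the sample values $m=2$, $n=5$, $p=3$ are our illustrative choices.

```python
import cmath
from fractions import Fraction

m, n, p = 2, 5, 3                      # sample values; p coprime to mn

def delta(x):
    # Alexander polynomial of T_{m,n} in rational form
    return (x**(m * n) - 1) * (x - 1) / ((x**m - 1) * (x**n - 1))

# Roots: mn-th roots of unity that are neither m-th nor n-th roots
cs = [c for c in range(1, m * n) if c % m and c % n]
assert len(cs) == (m - 1) * (n - 1)
for c in cs:
    w = cmath.exp(2j * cmath.pi * c / (m * n))
    assert abs(delta(w)) < 1e-9

# p-deficiency: no root angle c/mn coincides with a p-th root angle a/p,
# since p is coprime to mn
for c in cs:
    for a in range(p):
        assert Fraction(c, m * n) != Fraction(a, p)

# p-independence: translating by a/p moves the jump support to disjoint sets
def jump_angles(a):
    return {(Fraction(c, m * n) - Fraction(a, p)) % 1 for c in cs}

for a1 in range(p):
    for a2 in range(a1 + 1, p):
        assert jump_angles(a1).isdisjoint(jump_angles(a2))
```

Representing angles as exact fractions in ${\bf Q}/{\bf Z}$ avoids any floating-point ambiguity in the coincidence checks.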
\vskip.1in \noindent{\bf $p$--deficiency:} From the definition, we see that if $T_{m,n}$ is not $p$--deficient, then for some $a\in \{0,1,\cdots, p-1\}$ and $c$ as in the previous paragraph, $\zeta_p^a = \zeta_{mn}^c $. This is impossible, since $p$ and $mn$ are relatively prime and $1\leq c \le mn -1$. \vskip.1in \noindent{\bf $p$--independence:} To demonstrate the independence of the $\alpha_{T_{m,n}}(\zeta_p^at)$, we show that for distinct $a_1$ and $a_2$, $0 \le a_1, a_2 \le p-1$, the jumps for the Levine--Tristram signature function occur at distinct points. The jumps for $\alpha_{T_{m,n}}(\zeta_p^{a_i}t)$ occur at $\omega=\zeta_p^{-a_i}\zeta_{mn}^{c_i}$, where $c_i$ is not divisible by either $m$ or $n$ and $1 \le c_i \le mn-1$. If the jumps occurred at the same point, then $\zeta_p^{-a_1}\zeta_{mn}^{c_1}=\zeta_p^{-a_2}\zeta_{mn}^{c_2}$, and so $$ \frac{c_1}{mn} - \frac{a_1}{p} = \frac{c_2}{mn} - \frac{a_2}{p} \mod {\bf Z}.$$ This can be rewritten as: $$\frac{ (c_1-c_2)p - (a_1 - a_2)mn}{mnp} \in {\bf Z}.$$ This immediately implies that $a_1 - a_2$ is divisible by $p$, which in turn implies that $a_1 = a_2$, giving the desired contradiction. \end{proof} \section{The 4--ball genus.} We next observe that if $q_1,q_2$ are a pair of integers and $K$ is any knot, the algebraically slice knot $$ J = K_{2,q_1} - K_{2,q_2} - T_{2, q_1} +T_{2, q_2}$$ has 4--ball genus equal to $0$ or $1$. In the case that $K$ is slice, we noted in Lemma~\ref{summarylemma} that $J$ is slice. If $K$ is $q_1$--deficient and $q_1$--independent, then Corollary~\ref{maincor} shows $J$ is not slice. The following argument shows that in this second case $J$ has 4--ball genus at most 1. Figure~\ref{fig:3} illustrates $J$ with three arcs, $\gamma_1,\gamma_2,$ and $\gamma_3$, depicted. In this figure the labels $\pm q_i$ refer to half-twists, and the parallel strands passing through $\pm K$ are to be tied in the knot $\pm K$.
\begin{figure} \psfrag{A}{\huge{$K$}} \psfrag{-A}{\huge{$-K$}} \psfrag{k1}{$q_1$} \psfrag{k2}{$q_2$} \psfrag{-k1}{$-q_1$} \psfrag{-k2}{$-q_2$} \psfrag{a1}{$\gamma_1$} \psfrag{a2}{$\gamma_2$} \psfrag{a3}{$\gamma_3$} \begin{center} \includegraphics[height=200pt]{fig3.eps} \caption{\label{fig:3}$ J = K_{2,q_1} - K_{2,q_2} - T_{2, q_1} +T_{2, q_2}$} \end{center} \end{figure} A band move along $\gamma_1$ produces a satellite (link) of the slice knot $K\#(-K)$. Taking the corresponding satellite of the null-concordance and then performing band moves along the arcs labelled $\gamma_2$ and $\gamma_3$ gives a genus 1 cobordism from $J$ to a 2-component unlink. Thus $J$ has 4--ball genus at most one. \begin{corollary} The knot $T_{2,3;2, 13} + T_{2,15}-T_{2,3;2,15} -T_{2,13}$ has 4-ball genus equal to 1.\qed \end{corollary} \vskip.1in As mentioned in the introduction, the Ozsv{\'a}th-Szab{\'o} and Rasmussen concordance invariants, $\tau$ and $s$, are unable to determine whether any of the algebraically slice linear combinations involving positive iterated torus knots is slice. We make this precise in the following proposition. \begin{prop}\label{prop:stau} Fix $p,q_1,q_2>0$. Suppose $J = K_{p,q_1} - K_{p,q_2} - T_{p, q_1} +T_{p, q_2}$, with $K=K_{r_1,s_1;\cdots;r_n,s_n}$ a positively iterated torus knot; that is, $r_i,s_i>0$ for all $i$. Then $\tau(J)=s(J)=0,$ where $\tau$ and $s$ are the Ozsv{\'a}th-Szab{\'o} and Rasmussen concordance invariants, respectively. \end{prop} \begin{proof} It is well-known that the Seifert genus of $K_{p,q}$ is given by $$g(K_{p,q})=pg(K)+ g(T_{p,q}),$$ and that $2g(T_{p,q})=(p-1)(q-1)$. See, for instance,~\cite[Chapter 1\S 3]{EN1985}. We claim $2\tau(K)=s(K)=2g(K)$ for any positively iterated torus knot. Given this, the proposition follows from the genus formula above, together with the fact that both invariants change sign under reflection, $\tau(K)=-\tau(-K)$ and $s(K)=-s(-K)$.
For torus knots, the fact that $2\tau(K)=s(K)=2g(K)$ was proved by Ozsv{\'a}th and Szab{\'o}~\cite{Lens} and Rasmussen~\cite{ra}, respectively. For positively iterated torus knots, the result follows from~\cite[Corollary 1.4]{ComplexCable}, which shows that a positively iterated torus knot bounds a Seifert surface which is isotopic to a piece of a complex curve in the four-ball (for algebraic knots this is well-known, through the work of Milnor~\cite{Milnor1968}). Knots which bound such complex curves satisfy the stated equalities, by~\cite{Livingston2004} (see also~\cite{SQPfiber}). \end{proof}
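The cancellation behind $\tau(J)=s(J)=0$ is elementary arithmetic with the genus formulas; the sketch below carries it out for the illustrative sample values $p=2$, $q_1=13$, $q_2=15$, and $g(K)=1$ (as for the trefoil), which are our choices.

```python
def g_torus(p, q):
    # 2 g(T_{p,q}) = (p - 1)(q - 1)
    return (p - 1) * (q - 1) // 2

def g_cable(gK, p, q):
    # Seifert genus of the (p, q)-cable: g(K_{p,q}) = p g(K) + g(T_{p,q})
    return p * gK + g_torus(p, q)

p, q1, q2, gK = 2, 13, 15, 1           # sample values; gK = 1, e.g. K a trefoil

# tau equals the genus on each positive summand, is additive under connected
# sum, and changes sign under reflection, so for
# J = K_{p,q1} - K_{p,q2} - T_{p,q1} + T_{p,q2}:
tau_J = g_cable(gK, p, q1) - g_cable(gK, p, q2) - g_torus(p, q1) + g_torus(p, q2)
assert tau_J == 0
```

The $p\,g(K)$ contributions of the two cables cancel each other, and the torus-knot contributions cancel in pairs, for any values of the parameters.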
\section{Introduction} The ability to perform quantum error correction (QEC) by feedback is a crucial step towards fault-tolerant quantum computation \cite{Nielsen2004, Terhal2015quantum}. An open challenge, which has drawn considerable interest recently\cite{Kerckhoff2010,Fowler2012,Fujii2014}, is to find the best strategy for this task. Two nominally distinct feedback strategies for QEC are the measurement-based and driven-dissipative approaches. The former is better understood\cite{Wiseman2009}, owing to an existing foundation in classical control and feedback in engineering. In the measurement-based (MB) approach, a classical controller performs projective measurements of a set of multi-qubit stabilizer operators that encode the logical qubit\cite{Nielsen2004,Fowler2012} in order to track errors and/or perform any necessary correction. Thus for good performance, this approach requires both high-fidelity projective measurements and low-latency control electronics to process the measurement result within the relevant coherence times of the quantum system. The elements required for this MB strategy have been demonstrated for small quantum systems on various physical platforms such as Rydberg atoms\cite{Haroche2011Nature}, trapped ions\cite{Chiaverini2004, Nigg2014quantum}, photons\cite{Yao2012}, spins\cite{Waldherr2014} and superconducting qubits\cite{Riste2012b,Vijay2012,Campagne-Ibarcq2013,Riste2013Nature,Steffen2013Nat,Barends2014Nature, Chow2014Nature, Riste2014NatComm, Kelly2015Nature, Corcoles2015}. However, a steady-state multi-qubit QEC capability has yet to be achieved, and one of the key questions for this development is whether the MB strategy is scalable to larger systems or whether an alternative approach is preferable.
One such alternative is driven-dissipative (DD) schemes\cite{Poyatos1996}, also called reservoir/bath engineering or coherent feedback (as discussed below), which utilize coupling between the quantum system of interest and a dissipative environment to transfer the entropy caused by decoherence-induced errors out of the quantum system. They have been demonstrated on a variety of physical systems including atomic ensembles\cite{Krauter2011}, trapped ions\cite{lin2013dissipative}, mechanical resonators\cite{Kerckhoff2013,kienzler2015} and superconducting qubits\cite{Murch2012, Geerlings2013, Leghtas2013, Shankar2013,Leghtas2015, holland2015}. Moreover, experiments with trapped ions\cite{Schindler2011} and superconducting qubits\cite{Reed2012} have demonstrated some of the basic elements of autonomous QEC. These schemes require neither high-fidelity projective measurements nor external control with its associated latency. They can also be described as autonomous or coherent feedback\cite{Wiseman1994}, where the reservoir coupled to the target quantum system can be considered as an effective ``quantum controller'' that acts through quantum degrees of freedom\cite{Nurdin2009}. Adjusting the feedback by changing the ``quantum controller'', however, can be more challenging than re-programming a classical controller built with conventional electronics. Thus, a further question is whether one can combine this DD approach and the conventional MB approach with minimal negative consequences from their respective drawbacks. Here we report an experiment in which we built a feedback platform utilizing a nearly quantum-limited measurement chain and a customized field-programmable gate array (FPGA) system to perform MB and DD schemes within the same setup. The task of this platform was to stabilize an entangled Bell state of two superconducting transmon qubits\cite{Schreier2008}.
This particular task of stabilizing a single state is a proxy for more general QEC experiments where a manifold of states is protected. We realize, for the first time, an MB \textit{stabilization} of a Bell state by repeated active correction through conditional parity measurements\cite{Lalumiere2010, Tornberg2010,Riste2013Nature}. We compare this scheme to a DD entanglement stabilization scheme\cite{Shankar2013} in which the conditional parity switch is autonomous. By performing both schemes on the same hardware setup and circuit QED (cQED) system\cite{Blais2004}, we shed light on their close connection and compare them on a level playing field. Previous theoretical works have compared DD (under the name of ``coherent feedback'') and MB for linear quantum control problems\cite{Yamamoto2014coherent}, such as for minimizing the time required for qubit state purification\cite{Jacobs2014coherent} or for cooling a quantum oscillator\cite{Hamerly2012advantages}. These comparisons showed coherent feedback to be significantly superior. In our particular setup, we find that deciding which of DD and MB is superior is a more subtle task. The subtlety is two-fold. First, the performance difference depends on which process can be better optimized: the design of the cQED Hamiltonian or the efficiency of quantum measurement and classical control. In the current experiment, we show that DD has better steady-state performance as the cQED Hamiltonian parameters are engineered such that DD has a shorter feedback latency. But DD's advantage over MB is not immutable. As certain experimental parameters are improved, such as coherence times and measurement efficiency, MB's performance can catch up with that of DD. Secondly, in the current situation in which neither the cQED Hamiltonian parameters nor the measurement and control parameters are ideal, we can obtain a boosted performance by combining DD and MB to get the best of both worlds.
We explored this by devising a heralding method to improve the performance of both stabilization approaches. This protocol exploits the high-fidelity measurement capability and the programmability of the feedback platform. The protocol is termed ``nested feedback'' since it has an inner feedback loop based on either the DD or MB scheme, and an outer loop that heralds the presence of a high-fidelity entangled state in real-time. Previously, heralding schemes have been demonstrated for state preparation to combat photon loss or decoherence\cite{Moehring2007,Wagenknecht2010,Hofmann2012heralded,Johnson2012, Riste2012, Riste2012b, Riste2013Nature,Bernien2013heralded}. Extending such heralding capability to state stabilization will be a valuable addition to the QEC toolbox. Furthermore, the ability to herald in real time as opposed to post-selection is important for on-demand and deterministic quantum information processing since only successful events lead to subsequent processing. Real-time heralding for entanglement stabilization is particularly challenging for superconducting qubits due to their shorter coherence times compared to other systems. In this article, we implement this real-time heralding capability on a time scale faster than the few-microsecond coherence time of our qubit-cavity system. By extending the feedback platform developed primarily for the MB approach to the DD approach, our results bring to light a new application of MB. Adding a level of MB feedback can significantly improve performance beyond what a single layer of feedback, whether DD or MB, can achieve. \section{Experiment setup} The simplified schematic of our experimental setup is shown in Fig.~\ref{fig:schematic}a. Two transmon qubits\cite{Schreier2008}, Alice and Bob, are dispersively coupled to a three-dimensional aluminum cavity \cite{Paik2011}, with linewidth $\kappa/2\pi = 2$~MHz (see Supp. Mat. Sec.~I for other parameters).
The qubit-cavity dispersive shifts are nearly equal and in the strong dispersive regime ($\chi_{\text{Alice}}/2\pi = 5$~MHz, $\chi_{\text{Bob}}/2\pi = 4.5$~MHz) with photon-number resolved qubit transition frequencies\cite{Schuster2007}. The cavity output is amplified by a Josephson Parametric Converter (JPC) operated as a nearly quantum-limited phase-preserving amplifier \cite{Bergeal2010a} enabling rapid, single-shot readout\cite{Hatridge2013} and thus real-time feedback. The key component of the experiment is a controller realized with two FPGA boards \footnote{X6-1000M from Innovative Integration} that both measure and actively control the cavity-qubit system. An essential operation for our experiment is a two-qubit joint quasi-parity measurement using the common readout cavity \cite{Tornberg2010, Lalumiere2010, Riste2013Nature}. As shown in Fig.~\ref{fig:schematic}b, the cavity is driven at $f_{gg}$ (both qubits in ground state) and at $f_{ee}$ (both in the excited state) at the same time. The output at $f_{gg}$ and $f_{ee}$ together distinguishes the even parity manifold $\{\ket{gg},\ket{ee}\}$ from the odd parity manifold $\{\ket{ge},\ket{eg}\}$. When the two cavity output responses \textit{both} have an amplitude below a certain threshold, the qubits are declared to be in odd parity; when either one has amplitude above the threshold, the qubits are declared to be in even parity. We note that, unlike a true parity measurement, this readout actually distinguishes the two even parity states $\ket{gg}$ and $\ket{ee}$, hence we refer to it as a ``quasi'' parity measurement. However, the feedback schemes described below apply the same operation on both even states, and thus we need only record the parity of the measured state. 
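In software, the two-threshold parity decision just described reduces to a few lines; in the sketch below the amplitude scale and the threshold value are arbitrary placeholders rather than the calibrated experimental values.

```python
def quasi_parity(amp_gg, amp_ee, threshold=0.5):
    """Declare -1 (odd) or +1 (even) from the demodulated responses
    at f_gg and f_ee."""
    if amp_gg < threshold and amp_ee < threshold:
        return -1          # neither drive is resonant: odd manifold {|ge>, |eg>}
    return +1              # a response at f_gg (|gg>) or at f_ee (|ee>)

assert quasi_parity(0.1, 0.1) == -1    # both responses below threshold: odd
assert quasi_parity(0.9, 0.1) == +1    # response at f_gg: even (|gg>)
assert quasi_parity(0.1, 0.9) == +1    # response at f_ee: even (|ee>)
```

Note that the even branch implicitly distinguishes $\ket{gg}$ from $\ket{ee}$ (which response crossed threshold), matching the ``quasi'' character of the measurement.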
The choice of driving at the ``even'' cavity resonances rather than between the ``odd'' resonances ($f_{eg}$ and $f_{ge}$) mitigates the effect of the $\chi$ mismatch, reducing associated measurement-induced dephasing of the odd manifold\cite{Tornberg2010}. The controller FPGA a (b) modulates the $f_{gg}$ ($f_{ee}$) drive to the cavity and also demodulates the response. The two FPGAs share their measurements of the cavity response to jointly determine the parity. In addition, FPGA a and b generate the qubit pulses to Alice and Bob, respectively, which are conditioned on the joint state estimation during real-time feedback. \section{Principle of experiment and results} We first briefly outline the DD stabilization of entanglement, described in detail in Ref.~\onlinecite{Leghtas2013} and \onlinecite{Shankar2013}. This stabilization targets the two-qubit Bell state $\ket{{\phi}_{-}}=\frac{1}{\sqrt{2}}(\ket{ge}-\ket{eg})$. Figure~\ref{fig:comparison}a displays the states coupled by the autonomous feedback loop. Two Rabi drives on Alice and Bob at their zero-photon qubit frequencies ($\omega_{\text{Alice}}^0$ and $\omega_{\text{Bob}}^0$, see Supp. Mat. Sec.~I) couple the wrong Bell state $\ket{{\phi}_{+}}$ to the even states, $\ket{gg}$, $\ket{ee}$, in the energy manifold with zero cavity photons. A second pair of Rabi drives at the $n$-photon qubit frequencies ($\omega_{\text{Alice}}^0-n\bar\chi$ and $\omega_{\text{Bob}}^0-n\bar\chi$, $\bar\chi = \left(\chi_{\text{Alice}}+\chi_{\text{Bob}}\right)/2$), with their relative phase opposite to the first pair, couples $\ket{gg, n}$, $\ket{ee, n}$ to the Bell state $\ket{{\phi}_{-}, n}$. The two cavity drives, at $f_{gg}$ and $f_{ee}$, connect the two manifolds and hence the combined action of the six drives transfers the population from $\ket{gg}$, $\ket{ee}$ and $\ket{{\phi} _{+}}$ to $\ket{{\phi}_{-}, n}$. Finally, cavity photon decay brings $\ket{{\phi}_{-}, n}$ back to $\ket{{\phi}_{-}, 0}$.
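The population flow around this autonomous loop can be caricatured by a classical rate model; in the sketch below the three transition rates are illustrative assumptions (not fitted to the experiment) and serve only to show that the loop funnels all population to $\ket{\phi_-,0}$.

```python
# Toy classical rate model of the DD loop: Rabi drives move |phi_+> and the
# even states up to |phi_-, n>, and cavity decay returns |phi_-, n> to the
# target |phi_-, 0>.  All rates below are illustrative placeholders.
states = ['phi+', 'even', 'phi-n', 'phi-0']
rates = {('phi+', 'even'): 1.0,        # first pair of Rabi drives
         ('even', 'phi-n'): 1.0,       # second pair of Rabi drives + cavity drives
         ('phi-n', 'phi-0'): 2.0}      # cavity photon decay

pop = {'phi+': 0.3, 'even': 0.4, 'phi-n': 0.0, 'phi-0': 0.3}
dt, steps = 0.001, 20000               # integrate to t = 20 (slowest rate = 1)
for _ in range(steps):
    flow = {s: 0.0 for s in states}
    for (a, b), r in rates.items():
        f = r * pop[a] * dt
        flow[a] -= f
        flow[b] += f
    for s in states:
        pop[s] += flow[s]

assert abs(sum(pop.values()) - 1) < 1e-9   # population is conserved
assert pop['phi-0'] > 0.999                # everything funnels to |phi_-, 0>
```

A proper treatment would use a master equation for the full qubit-cavity density matrix (as in the cited references); this rate picture only captures the one-way circulation of population around the loop.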
In effect, the cavity drives separate qubit states based on their parity, allowing one pair of Rabi drives to move the erroneous odd population to the even states while the other pair transfers the even states population to $\ket{{\phi}_{-}}$. Counterparts to these elements of the DD feedback loop can be found in the corresponding MB feedback scheme. The action of our MB algorithm is shown as a state machine in Fig.~\ref{fig:comparison}. We describe the quasi-parity measurement $\tilde{P}$ by the projectors $P_{odd} = \ket{ge}\bra{ge} + \ket{eg}\bra{eg}$, $P_{gg} = \ket{gg}\bra{gg}$ and $P_{ee} = \ket{ee}\bra{ee}$. We assign the outcomes $\tilde{p} = +1$ to the even projectors, $P_{gg}$ and $P_{ee}$, and $\tilde{p} = -1$ to $P_{odd}$. The MB algorithm is built with a sequence of correction steps, each of which consists of a conditional unitary and a quasi-parity measurement. The two possible states of the state machine correspond to whether we apply the unitary $U_E$ or $U_O$, followed by the quasi-parity measurement. Specifically, $U_E=R^{\text{a}}_{\text{x}}(\frac{\pi}{2}) \otimes R^{\text{b}}_{-\text{x}}(\frac{\pi}{2})$ where a (b) denotes Alice (Bob), and $U_O = R^{\text{a}}_{\text{x}}(\frac{\pi}{2}) \otimes R^{\text{b}}_{\text{x}}(\frac{\pi}{2})$. In a correction step $k$, the qubits are initially in either $\ket{gg}$, $\ket{ee}$ or in the odd manifold, due to the projective quasi-parity measurement in step $k-1$; the controller then applies $U_E$ ($U_O$) if $\tilde{p}$ in the previous step reported $+1$ ($-1$). The effect of the state machine on the two-qubit states is shown in Tab.~\ref{tabu:mb}, where the action of the controller during one correction step is described in terms of the four basis states, $\ket{{\phi}_{-}}$, $\ket{{\phi}_{+}}$, $\ket{gg}$ and $\ket{ee}$ (the latter two are grouped in the ``even'' column).
The quasi-parity measurement infidelity, labeled by $\epsilon_{E|O}$ ($\epsilon_{O|E}$), gives the error probability of obtaining an even (odd) parity outcome after generating an odd (even) state. Because these measurement infidelities are small, the dominant events are those that occur without measurement errors. At each step, $U_E$ on either $\ket{gg}$ or $\ket{ee}$ followed by the quasi-parity measurement $\tilde{P}$ transfers the states to $\ket{\phi_-}$ with 50\% probability. Since $\ket{{\phi}_{-}}$ is an eigenstate of $U_O$ and $\tilde{P}$ (modulo a deterministic phase shift that can be undone, see later discussion), these operations leave it unaffected. On the other hand, $U_O$ and $\tilde{P}$ transform $\ket{\phi_+}$ into $\{\ket{gg},\ket{ee}\}$; more generally, they take population in any other odd state (i.e., a superposition of $\ket{{\phi}_{-}}$ and $\ket{\phi_+}$ ) into $\ket{{\phi}_{-}}$ and the even states. \begin{table} \begin{tabular}{L{4cm} | P{1.8cm} | P{1.8cm} | P{1.8cm} | P{1.8cm}| P{0.9cm} | P{0.9cm} | P{0.9cm} | P{0.9cm}} \hline \hline Previous state & \multicolumn{2}{c|}{$\ket{\phi_-}$} & \multicolumn{2}{c|}{$\ket{\phi_+}$} & \multicolumn{4}{c}{even} \\ \hline $\tilde{p}_{k-1}$ & ${+1}$ & ${-1}$ & ${+1}$ & ${-1}$ & \multicolumn{2}{c|}{${+1}$} & \multicolumn{2}{c}{${-1}$} \\ \hline Outcome probability & $\epsilon_{E|O}$& $1- \epsilon_{E|O}$ & $\epsilon_{E|O}$ & $1- \epsilon_{E|O}$ & \multicolumn{2}{c|}{$1- \epsilon_{O|E}$} & \multicolumn{2}{c}{$\epsilon_{O|E}$} \\ \hline Unitary & $U_E$ & $U_O$ & $U_E$ & $U_O$ & \multicolumn{2}{c|}{$U_E$} & \multicolumn{2}{c}{$U_O$} \\ \hline Next state & even & $\ket{\phi_-} $ & $\ket{\phi_+}$ & even & \multicolumn{2}{c|}{even/$\ket{\phi_-}$} & \multicolumn{2}{c}{even/$\ket{\phi_+}$} \\ \hline \hline \end{tabular} \caption{\label{tabu:mb} Effects of the MB finite state machine of Fig.~2 on two-qubit system in the $k$-th step of feedback, for different starting cases (columns). 
Rows 2 through 4 describe the result of the previous quasi-parity measurement and the corresponding unitary that will be applied in the $k$-th step. The symbols, $\epsilon_{E|O}$ and $\epsilon_{O|E}$, denote parity measurement errors (see text). The last row describes the possible system states attained by the applied unitary. The two alternative states for a previous ``even'' occur with $50\%$ probability.} \end{table} By repeating a sufficient number of these correction steps in sequence, the controller stabilizes the target Bell state irrespective of the initial two-qubit state. The similarity between this active feedback and DD is that MB also transfers population between different parity states by conditional Rabi drives. However, while the Rabi drives in DD are conditioned autonomously by the photon number in the cavity, the unitary Rabi pulses in MB are conditioned by real-time parity measurement performed by active monitoring of cavity outputs. The pulse sequences for DD and MB are shown in Fig.~\ref{fig:comparison}b and e. In DD, a set of continuous-wave drives is applied for a fixed time $T_{\text{s}}$ and after some delay $T_{\text{w}}$ to allow remaining cavity photons to decay, a two-qubit state tomography is performed \cite{Filipp2009,Chow2010}. The cavity and Rabi drive amplitudes and phases were tuned for maximum entanglement fidelity, following the procedure described in Ref.~\onlinecite{Shankar2013}. In particular, the optimal cavity drive amplitudes were found to be $\bar n = 4.0$. For MB, the continuous drives are replaced by a pre-defined number of correction steps $N$, resulting in a stabilization duration of $T_{\text{s}} = N T_{\text{step}}$, where $T_{\text{step}} = 1.5$~$\mu$s. There is no extra delay before tomography since each correction step already contains a delay after the quasi-parity measurement due to feedback decision latency. The strength and duration of the quasi-parity measurement $\tilde{P}$ were optimized as discussed in Supp.~Mat.~Sec.~II.
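Ignoring measurement errors and decoherence, the action summarized in the table reduces to a three-state Markov chain ($\ket{gg}$ and $\ket{ee}$ lumped into a single ``even'' label, a simplification of ours); iterating it shows how repeated correction steps funnel any initial mixture to $\ket{\phi_-}$.

```python
# Idealized Markov model of one MB correction step (no measurement errors):
# U_O fixes |phi_->, sends |phi_+> to the even states, and U_E followed by
# the parity measurement sends an even state to |phi_-> or back to the even
# states with probability 1/2 each.
STEP = {
    'phi-': {'phi-': 1.0},
    'phi+': {'even': 1.0},
    'even': {'phi-': 0.5, 'even': 0.5},
}

def evolve(dist, steps):
    for _ in range(steps):
        new = {s: 0.0 for s in STEP}
        for s, ps in dist.items():
            for t, q in STEP[s].items():
                new[t] += ps * q
        dist = new
    return dist

start = {'phi-': 0.0, 'phi+': 0.5, 'even': 0.5}   # an arbitrary initial mixture
final = evolve(start, 20)
assert final['phi-'] > 0.999           # the loop funnels everything to |phi_->
```

Adding the error probabilities $\epsilon_{E|O}$ and $\epsilon_{O|E}$ from the table would turn the chain into one with a steady state slightly short of unit fidelity, qualitatively matching the measured steady-state value.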
The optimization achieved low parity measurement infidelities $\epsilon_{E|O}$ and $\epsilon_{O|E}$ while keeping the measurement-induced dephasing arising from the $\chi$ mismatch~\cite{Tornberg2010,Lalumiere2010} small compared to the natural decoherence over the same duration. We experimentally determined the quasi-parity measurement infidelities $\epsilon_{E|O}$ and $\epsilon_{O|E}$ to be $0.04$ and $0.05$, respectively. The quasi-parity measurement also causes a deterministic qubit rotation about the respective $Z$ axis due to an AC Stark shift~\cite{Tornberg2010}; this rotation was corrected within the unitary gate $U_O$ as discussed in Supp.~Mat.~Sec.~VI. Fig.~\ref{fig:comparison}c,f show the fidelity to the target Bell state $\ket{{\phi}_{-}}$ as a function of stabilization time for DD and MB, respectively. The fidelity rises exponentially toward its steady state, with a characteristic time constant of $0.78$~$\mu$s ($1.4$~$\mu$s) and a steady-state fidelity of $76\%$ ($57\%$) for DD (MB). Both fidelity values agree with numerical modeling based on master equation simulation, which gives $76\%$ and $58\%$ for DD and MB, respectively (see Supp.~Mat.~Sec.~IV). The exponential dependence is a signature of feedback and arises from the characteristic loop time. The experimentally determined time constants are in reasonable agreement with their simulated values of $1.0$~$\mu$s ($1.4$~$\mu$s) for DD (MB). In MB, this loop time is set by the step length ($1.5$~$\mu$s), which is the sum of the quasi-parity measurement duration ($0.66$~$\mu$s), the cable, instrument and FPGA latencies ($0.69$~$\mu$s), and the duration of the unitary pulses ($0.15$~$\mu$s). For DD, on the other hand, the measured loop time is close to $10$ cavity lifetimes, the expected value as shown in Ref.~\onlinecite{Leghtas2013}.
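The correction-step transitions of Table~\ref{tabu:mb} can be turned into a toy Monte Carlo model that illustrates why population accumulates in $\ket{\phi_-}$. The three-state sketch below is an idealization that ignores decoherence and all dynamics beyond the table (it is not the master-equation simulation of Supp.~Mat.~Sec.~IV); it uses the measured infidelities quoted above.

```python
import random

EPS_EO = 0.04  # measured P(even outcome | odd state)
EPS_OE = 0.05  # measured P(odd outcome | even state)

def step(state, rng):
    """One correction step, following the transition table."""
    if state == "phi-":
        # odd outcome (prob 1 - eps_E|O): U_O leaves |phi-> unaffected;
        # an erroneous even outcome sends it to the even manifold
        return "phi-" if rng.random() > EPS_EO else "even"
    if state == "phi+":
        # erroneous even outcome (prob eps_E|O): U_E leaves |phi+>;
        # otherwise U_O transfers it to the even manifold
        return "phi+" if rng.random() < EPS_EO else "even"
    # even manifold: a correct even outcome (prob 1 - eps_O|E) triggers U_E,
    # ending in |phi-> or even with 50% each; an erroneous odd outcome
    # triggers U_O, ending in |phi+> or even with 50% each
    target = "phi-" if rng.random() > EPS_OE else "phi+"
    return target if rng.random() < 0.5 else "even"

def simulate(n_runs=2000, n_steps=30, seed=1):
    """Fraction of runs ending in |phi->, starting from random states."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        state = rng.choice(["phi-", "phi+", "even"])
        for _ in range(n_steps):
            state = step(state, rng)
        hits += state == "phi-"
    return hits / n_runs
```

With these numbers the simulated steady-state population in $\ket{\phi_-}$ is roughly $0.9$, limited only by the measurement errors, consistent with the table's picture of $\ket{\phi_-}$ as the unique attractor of the state machine.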
The superior steady-state fidelity of DD over MB is due to the difference in correction loop time, which needs to be shorter than the coherence times of the two qubits for high-fidelity entanglement. For the current experimental setup, the latency of the controller and the quantum efficiency of the measurement chain, which affects the fidelity of the single-shot readout, result in a longer loop time for MB. One source of the longer feedback loop time is the quasi-parity measurement duration. This duration, optimized as discussed in Supp.~Mat.~Sec.~II, is limited by the dephasing induced by the mismatch in $\chi$ ($\sim 10\%$) and the measurement efficiency of the output chain ($\sim 30\%$), both of which can be improved in future experiments. Our simulations (see Supp.~Mat.~Sec.~IV) suggest that with a state-of-the-art measurement efficiency and optimized FPGA/cable latency, the MB steady-state fidelity can be improved to $66\%$. The limited measurement efficiency does not affect the performance of DD because the parity measurement and correction take place autonomously within the qubit-cavity system, indicative of its robustness against this hardware limitation. On the other hand, both DD and MB benefit from longer intrinsic coherence times and a reduction of the $\chi$ mismatch. For example, simulations show that if the coherence times are improved to one hundred microseconds (achieved in other state-of-the-art cQED setups), both DD and MB fidelities can increase to above 85\%. For the rest of the article, however, we consider boosting the fidelity in a different manner, without making any physical changes to the qubit-cavity system. The DD and MB schemes described so far are synchronous in the sense that the stabilization always ends after a pre-determined duration, after which the tomography follows.
Decoherence causes the qubits to have a finite probability of jumping out of the target state immediately before the stabilization terminates. Hence, this ``fixed time'' protocol does not always output the target state with maximum fidelity. An optimal protocol would instead utilize all available information. For both DD and MB, we can measure the cavity output at the end of the stabilization period. The outcomes of these measurements, $I_{gg}$ and $I_{ee}$, give real-time information on the state of the two qubits, and thus can herald a successful stabilization sequence. In Fig.~\ref{fig:thresh_sweep}, we describe how monitoring the cavity outputs improves the target state fidelity. We introduce two thresholds $\{I_{gg}^{herald},I_{ee}^{herald}\}$ (see supplementary material for details) to post-select the measurement outcomes $I_{gg}$ and $I_{ee}$, respectively, and identify successful stabilization runs \cite{Riste2013Nature, Shankar2013}. The results of varying $\{I_{gg}^{herald},I_{ee}^{herald}\}$ are shown in Fig.~\ref{fig:thresh_sweep}b,d for DD and MB, respectively. The color plots show the fidelity improving as the thresholds become more stringent. The success probability, defined as the percentage of stabilization runs kept for tomography given a set of thresholds, is also plotted as contours for both DD and MB. There is a clear trade-off between success probability and fidelity. To reach the maximum fidelity in DD of 82\%, at least 75\% of experiment runs need to be discarded. The trade-off is less severe in MB, where only 50\% of runs need to be discarded to reach the maximum fidelity of 75\%. However, we aim to eliminate this trade-off altogether, i.e., to improve the fidelity while maintaining a high success probability. This goal is achieved by introducing a nested feedback protocol (NFP), in which the stabilization feedback loop enters a higher layer of feedback for ``fidelity boosting'' instead of proceeding directly to state tomography.
In contrast to the ``fixed time'' protocol, NFP conditions the termination of stabilization on the quality of the entanglement, i.e., it heralds a successful stabilization run in real time, as illustrated by the state machine diagram in Fig.~\ref{fig:real_time}a. The control variable $C$ is given by $C = (I_{gg} < I_{gg}^{herald})\ \textbf{AND}\ (I_{ee} < I_{ee}^{herald})$, where $\{I_{gg}^{herald},I_{ee}^{herald}\}$ are determined by the same post-selection experiment discussed previously to optimize the fidelity (black square in Fig.~3b and d). If the controller determines that the entanglement quality is not sufficient ($C = 0$), a boost phase is attempted, which comprises exactly one correction step for MB or a stabilization period of similar duration for DD ($1.4$~$\mu$s). During the boost phase, the cavity outputs are integrated to give $\{I_{gg},I_{ee}\}$, which enables the next real-time assessment of $C$. In DD, the parity measurement and the first layer of feedback are accomplished autonomously; therefore, the FPGA only needs to check $C$. In the MB scheme, however, both layers of feedback are performed solely by the FPGA. It therefore checks whether $C=1$ to herald that the entanglement meets the desired quality. If not, it uses the quasi-parity thresholds (grey circles in Fig.~3d) to decide whether the qubits are in an even or odd state in order to continue stabilization. This asynchronous pulse sequencing and conditioning by multiple thresholds exploit the programmable nature of the FPGA-based platform. The asynchronous behavior of NFP is displayed in Fig.~\ref{fig:real_time}b(e) for DD (MB), which shows 200 single-shot runs. The DD (MB) fidelity boosting sequence continues until either success or a maximum limit on boost attempts (set to 11 in the experiment) is reached.
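The outer-loop decision logic just described can be sketched in a few lines of Python. The functions `measure` and `boost` below are placeholders for the hardware operations (cavity-output integration and one boost attempt, respectively), not FPGA firmware; only the herald condition $C$ and the cap of 11 attempts come from the text.

```python
MAX_BOOST_ATTEMPTS = 11  # cap used in the experiment

def nested_feedback(measure, boost, i_gg_herald, i_ee_herald,
                    max_attempts=MAX_BOOST_ATTEMPTS):
    """Outer feedback layer: repeat boost attempts until C is satisfied.

    `measure` returns the integrated cavity outputs (I_gg, I_ee);
    `boost` performs one MB correction step or ~1.4 us of DD stabilization.
    Returns (heralded, number_of_boost_attempts_used).
    """
    for attempt in range(max_attempts + 1):
        i_gg, i_ee = measure()
        c = (i_gg < i_gg_herald) and (i_ee < i_ee_herald)  # control variable C
        if c:
            return True, attempt   # quality sufficient: proceed to tomography
        if attempt < max_attempts:
            boost()                # try to boost the fidelity once more
    return False, max_attempts     # cap reached: tomography proceeds anyway
```

A run that heralds on the third measurement, for example, reports two boost attempts before handing off to tomography.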
For the MB protocol, the trajectory of the qubits' parity can be tracked by the conditioning outcomes of the inner-loop control variable $\tilde{p}$ and the outer-loop control variable $C$, which are independent. Through repeated boost attempts until success, NFP significantly improves the overall success probability. Within 11 attempts, 95\% (99.8\%) of DD (MB) runs satisfy the success condition, compared to just 25\% (50\%) with simple post-selection. This is assessed by the cumulative probability, the integral of the probability of having completed a certain number of boost attempts before tomography, as plotted in Fig.~\ref{fig:real_time}c,f. Since MB requires a less stringent threshold than DD to gain a fidelity improvement, the MB success probability converges to unity much faster than that of DD. Finally, we show that the high success probability does not come at the cost of reduced fidelity. The fidelity to $\ket{{\phi}_{-}}$ for DD improves from an unconditioned value of 76\% to 82\% (averaged over all successful attempts). For MB, the improvement is more pronounced: the fidelity rises from an unconditioned value of 57\% to 74\%. Thus for both DD and MB, NFP attains close to the fidelity achieved via stringent post-selection. These results for NFP also agree well with a numerical simulation (see Supp.~Mat.~Sec.~V). One will note, however, a continuous downward trend of the fidelity in both DD and MB as the number of attempts increases. This is due to the non-negligible population in the $\ket{f}$ states of the two qubits in the experiment, which escapes correction by the stabilization feedback loops. With each further boost attempt, the probability that population has escaped outside the correction space increases, diminishing the fidelity (see Supp.~Mat.~Sec.~VII).
Also note that the error bars on the fidelity of MB are larger than those of DD for large attempt numbers simply because the probability of needing many attempts is lower in MB than in DD. While real-time heralding by NFP removes the trade-off between fidelity and success probability, it does so by introducing a different trade-off -- high fidelity and success probability are achieved, but the protocol length now varies from run to run. If NFP is a module within a larger quantum information processing (QIP) algorithm, then this asynchronous nature must be accommodated by the controller. For our FPGA-based control, NFP is easily accommodated because it is a natural extension of ``fixed-time'' or synchronous operation. In ``fixed time'' operation, the controller conditions its state on the protocol length, which is pre-determined and stored in an internal counter by the experimenter. In NFP, on the other hand, the controller conditions its state on a pre-determined logical function of its real-time inputs. \section{Conclusion} In conclusion, we have implemented a new MB stabilization of an entangled state of two qubits, which parallels a previous DD stabilization scheme. Instead of coherent feedback by reservoir engineering, MB relies on actively controlled feedback by classical high-speed electronics external to the quantum system. When comparing both schemes in the ``fixed-time'' protocol, we observe that DD gives a higher fidelity to the target state due to its lower feedback latency. Furthermore, we have improved the fidelity of both schemes by a nested feedback protocol which heralds stabilization runs with high-quality entanglement in real time. The real-time heralding brings about the fidelity improvement without a common trade-off in QIP: it does not sacrifice the experiment success probability. It eliminates this trade-off by allowing asynchronicity in the experiment.
Our experiment shows some of the key advantages of MB platforms that have not been previously explored. Typically, the performance of MB feedback has not been on par with methods based on post-selection, due to the latency of the controller. However, it is widely recognized that trading success probability for fidelity, as post-selection does, is untenable for large-scale systems. Therefore, existing digital feedback experiments~\cite{Campagne-Ibarcq2013, Riste2012b, Riste2013Nature,Steffen2013Nat} have focused on achieving nearly perfect success probability. Here, we explore another direction of feedback, which achieves high fidelity with high success probability. Our nested feedback strategy maximizes the use of the information coming out of the qubit-cavity system in order to make the correction process as efficient as possible. We find that our feedback platform, comprising a nearly-quantum-limited measurement chain and a real-time classical controller, provides the necessary tool-set to implement such a strategy. We show that this technology can be extended to improve the performance of DD approaches as well as single-layer MB approaches themselves. This strategy could be carried further in the future. For example, the FPGA state estimator could perform more sophisticated quantum filtering of the microwave output of the DD stabilization to herald successful events with better accuracy, significantly improving the success probability convergence rate. Similar ideas can be applied in the future towards other forms of stabilization, such as stabilizing Schr{\"o}dinger cat states of a cavity mode\cite{Mazyar2014}, a proposed logical qubit. Initial experiments on such logical qubits with high-fidelity measurement\cite{Sun2014} or dissipation engineering\cite{Leghtas2015} have been performed and could now be combined.
Likewise, future logical qubits based on the surface code\cite{Fowler2012} could also be stabilized either by active stabilizer measurements\cite{Barends2014Nature, Chow2014Nature, Riste2014NatComm} or, as recently proposed, by dissipation engineering\cite{Kapit2015PRA,Fujii2014}. Our experiment demonstrates that measurement-based and driven-dissipative approaches, far from being antagonistic, can be merged to perform better than either approach on its own. \section{Acknowledgements} We thank Zaki Leghtas, Mazyar Mirrahimi, Matti Silveri and Shantanu Mundhada for helpful discussions. This research was supported by the U.S. Army Research Office (W911NF-14-1-0011 and W911NF-14-1-0563). Facilities use was supported by the Yale Institute for Nanoscience and Quantum Engineering and NSF MRSEC DMR 1119826. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of the U.S. Government.
\section{Introduction}\label{sec:intro} \begin{figure}[h] \centering \includegraphics[width = 0.95\linewidth]{Fig/intro.pdf} \caption{As an MLP model, our method performs even better than GNN models on \texttt{Pubmed}\xspace, but with a much faster inference speed. GRAND~\citep{grand} is one of the SOTA GNN models on this task. Circled markers denote MLP baselines, and squared markers indicate GNN baselines.} \label{fig:intro} \end{figure} Graph Machine Learning (GML) has been attracting increasing attention due to its wide applications in many real-world scenarios, such as social network analysis~\citep{social_rec}, recommender systems~\citep{gcmc, danser}, chemical molecules~\citep{molclr, 3dinfomax}, and biological structures. Graph Neural Networks (GNNs)~\citep{gcn, graphsage, gat, gin} are currently the dominant models for GML thanks to their powerful representation capability, obtained through iteratively aggregating information from neighbors. Despite their success, this explicit utilization of graph structure information hinders GNNs from being widely applied in industry-level tasks. On the one hand, GNNs rely on layer-wise message passing to aggregate features from the neighborhood, which is computationally inefficient during inference, especially when the model becomes deep~\citep{glnn}. On the other hand, recent studies have shown that GNN models cannot perform satisfactorily in cold-start scenarios where the connections of new incoming nodes are few or unknown~\citep{coldbrew}. By contrast, Multi-Layer Perceptrons (MLPs) involve no dependence between pairs of nodes, so they can infer much faster than GNNs~\citep{glnn}. Besides, they treat all nodes alike regardless of the number of connections, and thus can make more reasonable predictions when neighborhoods are missing~\citep{coldbrew}. However, it remains challenging to inject knowledge of the graph structure into the learning of MLPs.
One classical and popular method to mitigate this issue is Graph-Regularized MLPs (GR-MLPs for short). Generally, besides the basic supervised loss (e.g., cross-entropy), GR-MLPs employ an additional regularization term on the final node embeddings or predictions based on the graph structure~\citep{lap-reg, label-propagation, p-reg, graphmlp}. Though their formulations differ, the basic idea is to make node embeddings/predictions smooth over the graph structure. Even though these GR-MLP models can implicitly encode the graph structure information into model parameters, there is still a considerable performance gap between them and GNNs~\citep{lap-reg, p-reg}. Recently, another line of work, GNN-to-MLP knowledge distillation methods (termed KD-MLPs)~\citep{glnn, coldbrew}, has been explored to incorporate graph structure into MLPs. In KD-MLPs, a student MLP model is trained using a supervised loss and a knowledge-distillation loss from a well-trained teacher GNN model. Empirical results demonstrate that with merely node features as input, the performance of KD-MLPs can still match that of GNNs as long as they are appropriately learned. However, the two-step training of KD-MLPs is undesirable, and they still require a well-trained GNN model as a teacher. This motivates us to rethink why previous GR-MLPs fail on graph-related applications and to study the reasons that limit their performance. \textbf{Presented work:} In this paper, we first demonstrate that node embeddings learned by existing GR-MLPs suffer from dimensional collapse~\citep{feature-decorrelation,understand-dimensional-collapse}, a phenomenon in which the embedding space of nodes is dominated by the largest (few) eigenvalue(s). Our theoretical analysis demonstrates that the dimensional collapse in GR-MLPs is due to the irregular feature interaction caused by the graph Laplacian matrix (see Lemma~\ref{lemma:weight_matrix-shrink}).
We then propose Orthogonality Regularization ({\sc Ortho-Reg}\xspace for short), a novel GR-MLP model, to mitigate the dimensional collapse issue in semi-supervised node representation learning tasks. The key design of {\sc Ortho-Reg}\xspace is to enforce an additional regularization term on the output node embeddings, making them \textbf{orthogonal} so that different embedding dimensions can learn to express different aspects of the information. Besides, {\sc Ortho-Reg}\xspace extends the traditional first-order proximity-preserving target to a more flexible one, improving the model's expressive power and its ability to generalize to non-homophilous graphs. We provide a thorough evaluation of {\sc Ortho-Reg}\xspace on various node classification tasks. The empirical results demonstrate that {\sc Ortho-Reg}\xspace can achieve competitive or even better performance than GNNs. Besides, using merely node features to make predictions, {\sc Ortho-Reg}\xspace can infer much faster on large-scale graphs and make more reasonable predictions for new nodes without connections. In Fig.~\ref{fig:intro} we present the performance of {\sc Ortho-Reg}\xspace compared with GNNs and other MLPs on \texttt{Pubmed}\xspace, where {\sc Ortho-Reg}\xspace achieves SOTA performance with the fastest inference speed. \textbf{We summarize our contributions as follows}: \begin{itemize} \item[\textbf{1)}] We are the first to examine the limited representation power of existing GR-MLP models from the perspective of dimensional collapse. We provide theoretical analysis and empirical studies to justify our claims. \item[\textbf{2)}] To mitigate the dimensional collapse problem, we design a novel GR-MLP model named {\sc Ortho-Reg}\xspace. {\sc Ortho-Reg}\xspace encourages the node embeddings to be orthogonal through explicit soft regularization, and thus naturally avoids dimensional collapse.
\item[\textbf{3)}] We conduct experiments on traditional transductive semi-supervised node classification tasks and on inductive node classification under cold-start scenarios on public datasets of various scales. The numerical results and analysis demonstrate that by learning orthogonal node representations, {\sc Ortho-Reg}\xspace can outperform GNN models on these tasks. \end{itemize} \vspace{-6pt} \section{Backgrounds and Related Works} \vspace{-6pt} \subsection{Problem Formulation} We mainly study a general semi-supervised node classification task on a single homogeneous graph with one type of node and one type of edge. We denote a graph by ${\mathcal{G}} = ({\mathcal{V}}, {\mathcal{E}})$, where ${\mathcal{V}}$ is the node set and ${\mathcal{E}}$ is the edge set. For a graph with $N$ nodes (i.e., $|{\mathcal{V}}| = N$), we denote the node feature matrix by ${\bm{X}} \in {\mathbb{R}}^{N \times D}$ and the adjacency matrix by ${\bm{A}} \in {\mathbb{R}}^{N \times N}$. In semi-supervised node classification tasks, only a small portion of the nodes are labeled, and the task is to infer the labels of the unlabeled nodes using the node features and the graph structure. Denote the labeled node set by ${\mathcal{V}}^{L}$ and the unlabeled node set by ${\mathcal{V}}^{U}$; then ${\mathcal{V}}^{L} \cap {\mathcal{V}}^{U} = \varnothing$ and ${\mathcal{V}}^{L} \cup {\mathcal{V}}^{U} = {\mathcal{V}}$. Denote the one-hot ground-truth labels of the nodes by $\hat{{\bm{Y}}} \in \mathbb{R}^{N \times C}$ and the predicted labels by ${{\bm{Y}}}$. One can learn node embeddings ${\bm{H}}$ using the node features ${\bm{X}}$ and the adjacency matrix ${\bm{A}}$, and use the embeddings to generate the predicted labels ${{\bm{Y}}}$. For example, GNNs generate node representations through iteratively aggregating and transforming the embeddings from the neighbors and can generally be formulated as ${\bm{H}} = f_{\theta}({\bm{X}}, {\bm{A}})$.
Then a linear layer is employed on top of the node embeddings to predict the labels, ${\bm{Y}} = g_{\theta}({\bm{H}})$. The model can be trained in an end-to-end manner by optimizing the cross-entropy loss between the predicted labels and the ground-truth labels of the labeled nodes: $\mathcal{L}_{sup} = \bm{\ell}_{xent}({\bm{Y}}^{L}, \hat{{\bm{Y}}}^{L}) = \sum\limits_{i \in {\mathcal{V}}^L} \bm{\ell}_{xent} ({\bm{y}}_i, \hat{{\bm{y}}}_i )$. Note that GNNs explicitly utilize the graph structure information by learning the mapping from node features and the graph adjacency matrix to predicted labels. However, due to the limitations introduced in Sec.~\ref{sec:intro} (inefficiency at inference and poor performance for cold-start nodes), we seek to learn an MLP encoder, i.e., $\bm{H} = f_{\theta}(\bm{X})$, that takes only node features as input for making predictions. \vspace{-5pt} \subsection{Graph-regularized MLPs} Graph-Regularized MLPs (GR-MLPs for short) implicitly inject the graph knowledge into the MLP model with an auxiliary regularization term on the node embeddings/predictions over the graph structure~\citep{label-propagation,p-reg,graphmlp}, whose objective function can generally be formulated as: $ \mathcal{L} = \mathcal{L}_{{sup}} + \lambda \mathcal{L}_{{reg}}, \; \mbox{where} \; \mathcal{L}_{{reg}} = \bm{\ell} (\bm{H}, \bm{A}) \; \mbox{or} \; \bm{\ell} ({\bm{Y}}, \bm{A}) $. The most representative graph regularization method, Graph Laplacian regularization~\citep{label-propagation, lap-reg}, enforces local smoothness of embeddings/predictions between two connected nodes: $\bm{\ell}(\bm{Y}, \bm{A}) = \textrm{tr}[\bm{Y}^{\top}\bm{L}\bm{Y}]$, where $\bm{L} = \bm{I} - \tilde{\bm{A}} = \bm{I} - \bm{D}^{-1/2}\bm{A}\bm{D}^{-1/2}$ is the (symmetric normalized) Laplacian matrix of the graph. Note that $\bm{Y}$ can be replaced with $\bm{H}$ if one would like to regularize node embeddings instead of predicted labels.
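The normalized Laplacian and the regularizer $\textrm{tr}[\bm{Y}^{\top}\bm{L}\bm{Y}]$ above can be computed in a few lines; the NumPy sketch below is illustrative (assuming a dense adjacency matrix), not the implementation used by any of the cited methods.

```python
import numpy as np

def laplacian_regularizer(Y, A):
    """tr(Y^T L Y) with L = I - D^{-1/2} A D^{-1/2} (symmetric normalized)."""
    deg = A.sum(axis=1)                                  # node degrees
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))   # guard isolated nodes
    L = np.eye(A.shape[0]) - A * np.outer(d_inv_sqrt, d_inv_sqrt)
    return float(np.trace(Y.T @ L @ Y))
```

The regularizer vanishes when connected nodes carry identical degree-normalized predictions and grows when they disagree, which is exactly the local-smoothness bias discussed above.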
Later works apply more advanced forms of regularization, such as propagation regularization (P-Reg,~\citet{p-reg}) and contrastive regularization~\citep{graphmlp}. Regardless of the minor differences, they are all based on the graph homophily assumption that connected nodes should have similar representations/labels. With the graph structure information implicitly encoded into the model parameters, GR-MLPs can improve the representative power of MLP encoders. However, their performance still falls short of that of GNN models. \begin{remark} {\rm{(Differences from confusing concepts)}}. Though they sound similar, Graph-regularized MLPs (GR-MLPs) are entirely different from Graph-augmented MLPs (GA-MLPs). Although trained with implicit graph structure regularization, GR-MLPs make predictions directly through the MLP model. By contrast, GA-MLPs, such as SGC~\citep{sgc}, APPNP~\citep{appnp}, GFNN~\citep{gfnn} and SIGN~\citep{sign}, explicitly employ the graph structure to augment the node representations generated by an MLP model. GR-MLPs are also different from a recent work named P(ropagational)MLP~\citep{pmlp}. Note that PMLP uses message passing (or graph structure information in general) in testing instead of training, while GR-MLPs use message passing in training instead of testing. \end{remark} \subsection{Dimensional Collapse} Dimensional collapse (also known as spectral collapse in some works~\citep{spectral-collapse}) is a phenomenon in representation learning where the embedding space is dominated by the largest few singular values (the other singular values decay significantly as the training step increases)~\citep{gap_whiten, feature-decorrelation, understand-dimensional-collapse, gnn-express}. As the actual embedding dimension is usually large, the dimensional collapse phenomenon prevents different dimensions from learning diverse information, limiting their representation power and their ability to be linearly discriminated.
\citet{understand-dimensional-collapse} analyzed the dimensional collapse phenomenon from a theoretical perspective and attributed it to the effect of strong data augmentation and the implicit regularization effect of neural networks~\citep{implicit-reg, gradient-descent-align}. Previous methods usually adopt a whitening operation~\citep{feature-decorrelation, w-mse} to mitigate this issue, but such explicit whitening is usually computationally inefficient and thus not applicable to GR-MLPs, where efficiency is much more important. In this paper, we demonstrate that node embeddings learned by conventional Graph-Regularized MLPs also suffer from dimensional collapse. We provide a theoretical analysis of how it arises and develop a computationally efficient soft regularization term to mitigate it. \section{Dimensional Collapse in GR-MLPs}\label{sec:dimensional-collapse} In this section, we investigate the reasons behind the weak representation power of previous GR-MLPs. In short, we find that the expressive power of traditional GR-MLPs (e.g., with graph Laplacian regularization~\citep{lap-reg}) is restricted by the dimensional collapse issue, meaning that the embedding space is dominated by the largest few eigenvalues. We first provide empirical results to demonstrate the existence of the dimensional collapse phenomenon in MLPs with Laplacian regularization. Then we analyze its causes from a theoretical perspective by studying the dynamics of Laplacian regularization.
Note that the objective function of Graph Laplacian regularization for semi-supervised node classification tasks can be formulated as follows: \begin{equation}\label{eqn:lap-reg} \begin{split} \mathcal{L} & = \bm{\ell}_{xent}({\bm{Y}}^{L}, \hat{{\bm{Y}}}^{L}) + \lambda \text{tr} [{{\bm{H}}}^{\top}{{\bm{L}}}{{\bm{H}}}] \end{split} \end{equation} Following~\citet{gap_whiten}, we study the eigenvalues of the node embeddings' correlation matrix ${\bm{C}} = \{C_{kk'}\}\in {\mathbb{R}}^{D \times D}$, where $C_{kk'}$ is defined as: \begin{equation}\label{eqn-corrleation} C_{kk'} = \frac{\Sigma_{kk'}}{\sqrt{\Sigma_{kk}\Sigma_{k'k'}}}, \; \mbox{and} \; \bm{\Sigma} = \sum_{i \in {\mathcal{V}}}\frac{({\bm{h}}_i - \overline{{\bm{h}}})^{\top}({\bm{h}}_i - \overline{{\bm{h}}})}{|{\mathcal{V}}|} \end{equation} Note that ${\bm{h}}_i \in \mathbb{R}^{1\times D}$ and $\overline{{\bm{h}}} = \sum_{i=1}^{|{\mathcal{V}}|} {\bm{h}}_i / |{\mathcal{V}}|$ is the averaged node embedding vector, so $\bm{\Sigma}$ is the covariance matrix of ${\bm{H}}$. We denote ${\bm{C}}$'s eigenvalues in descending order by $\{\lambda_1^{\bm{{\bm{C}}}}, \lambda_2^{\bm{{\bm{C}}}}, \cdots, \lambda_D^{\bm{{\bm{C}}}}\}$. \begin{figure*}[t] \begin{minipage}[h]{0.65\linewidth} \vspace{0pt} \centering \includegraphics[width=1.0\textwidth,angle=0]{Fig/eigenvalue.pdf} \caption{Eigenspectra of node embeddings with different strengths of Laplacian regularization $\lambda$ (the upper three figures) and at different training epochs (the lower three figures). The x-axis is the index of the sorted eigenvalues and the y-axis is the normalized eigenvalue (the ratio to the largest one). The results are averaged over 10 random initializations with $95\%$ confidence intervals.
} \label{fig:eigenvalue-lapreg} \end{minipage} \begin{minipage}[h]{0.3\linewidth} \vspace{0pt} \centering \includegraphics[width=1.0\textwidth,angle=0]{Fig/nesum.pdf} \caption{Evolution of NESum as the training epoch increases, for different regularization factors.} \label{fig:eigensum} \end{minipage} \end{figure*} \paragraph{Empirical Observations.} We train a 3-layer MLP model using Eq.~\ref{eqn:lap-reg} as the objective function on the \texttt{Cora}\xspace dataset. To study the relationship between Laplacian regularization and the dimensional collapse phenomenon, we try different regularization factors $\lambda$ (i.e., $0$, $0.001$, and $0.1$). Note that $\lambda = 0$ corresponds to a pure MLP without regularization. Fig.~\ref{fig:eigenvalue-lapreg} plots the evolution of the top eigenvalues (re-normalized as the ratio of each eigenvalue to the largest one, $\lambda_i^{\bm{{\bm{C}}}} / \lambda_1^{\bm{{\bm{C}}}}$) as the training step increases, for different factors $\lambda$. We observe that without Laplacian regularization (i.e., $\lambda = 0$), the decay of the top eigenvalues is very slow and almost negligible (e.g., $\lambda^{\bm{{\bm{C}}}}_6 / \lambda^{\bm{{\bm{C}}}}_1 \ge 0.5$ even after $100$ steps). By contrast, even with a small regularization factor $\lambda = 0.001$, we observe a fast decay rate (e.g., $\lambda^{\bm{{\bm{C}}}}_5 / \lambda^{\bm{{\bm{C}}}}_1 \le 0.2 $ after $40$ steps). This phenomenon is even more pronounced with a larger factor. These observations demonstrate a positive correlation between Laplacian regularization and the dimensional collapse phenomenon. We further employ the \textit{normalized eigenvalue sum} (NESum) introduced in~\citet{gap_whiten} as a metric to measure the extent of the dimensional collapse.
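The correlation spectrum underlying these plots, together with the NESum metric just mentioned, can be computed directly from an embedding matrix. The NumPy sketch below follows the definitions of the correlation matrix and NESum given in the text; it is illustrative code, not the authors' implementation.

```python
import numpy as np

def correlation_eigenvalues(H):
    """Eigenvalues (descending) of the correlation matrix C of the embeddings H."""
    Hc = H - H.mean(axis=0, keepdims=True)   # center: h_i - h_bar
    Sigma = Hc.T @ Hc / H.shape[0]           # covariance matrix of H
    d = np.sqrt(np.diag(Sigma))
    C = Sigma / np.outer(d, d)               # C_kk' = Sigma_kk' / sqrt(Sigma_kk Sigma_k'k')
    return np.linalg.eigvalsh(C)[::-1]       # sorted in descending order

def nesum(eigenvalues):
    """Normalized eigenvalue sum: sum_i lambda_i / lambda_1."""
    return eigenvalues.sum() / eigenvalues[0]
```

For uncorrelated embedding dimensions NESum approaches the dimension $D$; for fully collapsed embeddings (all dimensions perfectly correlated) it approaches $1$.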
Formally, NESum is defined as the ratio between the sum of all eigenvalues and the largest one: $ {\rm NESum}(\{\lambda_i^{{\bm{C}}}\}) \triangleq \sum_{i=1}^{D} {\lambda_i^{{\bm{C}}}} /{\lambda_1^{{\bm{C}}}}$. Intuitively, a large NESum value indicates that the eigenvalues are evenly distributed, while a very small one indicates the dimensional collapse phenomenon (the largest eigenvalue becomes dominant). In Fig.~\ref{fig:eigensum}, we plot the evolution of NESum with different regularization strengths. We observe that 1) NESum decreases as training goes on, because the model learns to pay more attention to features important for the downstream classification task; 2) NESum trained with purely the cross-entropy loss converges to a high value; 3) with additional Laplacian regularization, NESum decreases quickly and converges to a small value even if the regularization factor $\lambda$ is small. These observations demonstrate that Laplacian regularization leads to a larger decay rate of the top eigenvalues. The rapid decay makes the learned representations less informative for classification due to \textit{information loss}~\citep{gnn-express}. \paragraph{Theoretical Analysis.} The empirical results above have verified the inner connection between Laplacian regularization and dimensional collapse. We would like to further study how the optimization of Laplacian regularization leads to dimensional collapse. To simplify the analysis, we first consider a simple linear model as the encoder to learn node embeddings, i.e., ${\bm{H}} = {\bm{X}}{\bm{W}}$, where ${\bm{W}} \in {\mathbb{R}}^{F\times D}$ is the weight matrix (we further assume $F = D$ in this part for simplicity). The model (i.e., the weight matrix ${\bm{W}}$) is optimized using stochastic gradient descent. Then we have the following lemma on the evolution of the weight matrix's singular values.
\begin{lemma}\label{lemma:weight_matrix-shrink} {\rm{(Shrinking singular-space of weight matrix.)}} Consider the linear model above, which is optimized with $\mathcal{L}_{reg} = \text{tr} [{{\bm{H}}}^{\top}{{\bm{L}}}{{\bm{H}}}]$. Let ${\bm{P}} = {\bm{X}}^{\top} {\bm{L}} {\bm{X}} = \sum\limits_{ij}L_{ij}{\bm{x}}_i \cdot {\bm{x}}_j^{\top}$ and denote its non-ascending eigenvalues by $\{\lambda^{{\bm{P}}}_1, \lambda^{{\bm{P}}}_2, \cdots, \lambda^{{\bm{P}}}_D \}$. Denote the randomly initialized weight matrix by ${\bm{W}}(0)$ and the updated weight matrix at time $t$ by ${\bm{W}}(t)$, respectively. We further denote the non-ascending singular values of ${\bm{W}}$ at time $t$ by $\{\sigma^{{\bm{W}}}_i (t)\}_{i=1}^D$. Then the relative value of the smaller singular values to the larger ones will decrease as $t$ increases. Formally, $\frac{\sigma^{{\bm{W}}}_i(t)}{\sigma^{{\bm{W}}}_j(t)} \le \frac{\sigma^{{\bm{W}}}_i(t')}{\sigma^{{\bm{W}}}_j(t')}, \; \; \forall \; t < t', i\le j. $ Furthermore, if the following condition holds: $\lambda^{{\bm{P}}}_1 \ge \cdots \ge \lambda^{{\bm{P}}}_d > \lambda^{{\bm{P}}}_{d+1} \ge \cdots \ge \lambda^{{\bm{P}}}_D $, then \begin{equation} \lim_{t \rightarrow \infty} \frac{\sigma^{{\bm{W}}}_i(t)}{\sigma^{{\bm{W}}}_j(t)} = 0, \; \forall \; i \le d \; \mbox{and} \; j \ge d+1. \end{equation} \end{lemma} See proof in Appendix~\ref{proof:weight_matrix-shrink}. Lemma~\ref{lemma:weight_matrix-shrink} indicates that the smaller singular values of ${\bm{W}}$ shrink, relative to the larger ones, as the training step increases. With Lemma~\ref{lemma:weight_matrix-shrink}, we can conclude the following theorem, which reveals a dimensional collapse phenomenon under this condition: \begin{theorem}\label{theorem:dimensional-collapse} {\rm (Laplacian regularization leads to dimensional collapse.)} For the linear model above optimized with Graph Laplacian regularization, the embedding space of nodes tends to be dominated by the largest few eigenvalues.
Specifically, if the covariance matrix of input features is an identity matrix, we have: \begin{equation} \lim_{t \rightarrow \infty} \frac{\lambda^{{\bm{C}}}_i(t)}{\lambda^{{\bm{C}}}_j(t)} = 0, \; \forall \; i \le d \; \mbox{and} \; j \ge d+1. \end{equation} \end{theorem} See proof in Appendix~\ref{proof:dimensional-collapse}. Theorem~\ref{theorem:dimensional-collapse} reveals that under the effect of Graph Laplacian regularization, the eigenspectrum is dominated by its largest few eigenvalues, leading to the dimensional collapse phenomenon. In more general cases, the encoder is more complicated than a linear model (e.g., an MLP with non-linearities). In this case, we can study the asymptotic behavior (e.g., dynamics) in feature space. Gradient descent with step size $\tau$ yields the following update rule for the node embedding matrix: \begin{equation} {\bm{H}}^{(t+1)} = [(1-2\tau){\bm{I}} + 2\tau \tilde{{\bm{A}}}]{\bm{H}}^{(t)}. \end{equation} Letting $\tau = 1/2$, we obtain the step-wise update formula ${\bm{H}}^{(t+1)} = \tilde{{\bm{A}}}{\bm{H}}^{(t)}$, where $\tilde{{\bm{A}}} = {\bm{D}}^{-1/2}{\bm{A}}{\bm{D}}^{-1/2}$. \citet{gnn-express} proved that such an update rule leads to a shrinking low-dimensional embedding subspace as $t \rightarrow \infty$, which restricts the expressive power due to information loss. \section{Overcoming Dimensional Collapse via Orthogonality Regularization} \subsection{Explicit Regularization on the Correlation Matrix} Our thorough analysis in Sec.~\ref{sec:dimensional-collapse} reveals that the poor performance of GR-MLPs can be attributed to less-expressive node representations (due to dimensional collapse). Specifically, we establish that the eigenspectrum of the embeddings' correlation matrix is dominated by the largest eigenvalue (different dimensions are \textbf{over-correlated}).
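To see the collapse mechanism of Sec.~\ref{sec:dimensional-collapse} concretely, the following NumPy sketch (our own illustration, with a synthetic PSD matrix standing in for ${\bm{P}} = {\bm{X}}^{\top}{\bm{L}}{\bm{X}}$ and an assumed step size) simulates the linear-model gradient update ${\bm{W}} \leftarrow ({\bm{I}} - 2\tau{\bm{P}}){\bm{W}}$ for $\mathcal{L}_{reg} = \text{tr}[{\bm{W}}^{\top}{\bm{P}}{\bm{W}}]$: a spectral gap in ${\bm{P}}$ drives the ratio between extreme singular values of ${\bm{W}}$ toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
D, tau = 6, 0.1

# Synthetic PSD stand-in for P = X^T L X with a clear spectral gap.
Q, _ = np.linalg.qr(rng.normal(size=(D, D)))
P = Q @ np.diag([1.0, 0.9, 0.8, 0.2, 0.1, 0.05]) @ Q.T

def sv_ratio(W):
    s = np.linalg.svd(W, compute_uv=False)   # non-ascending singular values
    return s[-1] / s[0]                      # smallest relative to largest

W = rng.normal(size=(D, D))                  # random initialization W(0)
r_init = sv_ratio(W)
for _ in range(100):                         # gradient steps on tr(W^T P W)
    W = (np.eye(D) - 2 * tau * P) @ W
r_final = sv_ratio(W)
print(r_init, r_final)                       # the ratio shrinks by many orders of magnitude
```

Directions of ${\bm{W}}$ aligned with different eigenvectors of ${\bm{P}}$ decay at different geometric rates, so the spread between singular values widens monotonically, matching the qualitative picture of Lemma~\ref{lemma:weight_matrix-shrink}.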
In contrast to dimensional collapse, whitened representations have an identity correlation matrix with evenly distributed eigenvalues. Motivated by this, a natural idea is to enforce a soft regularization term on the correlation matrix of node embeddings, e.g., minimizing the distance between ${\bm{C}}$ and the identity matrix ${\bm{I}}$: \begin{equation}\label{eqn:corr_reg} \bm{\ell}_{corr\_reg} = \Vert \bm{C} - \bm{I} \Vert_F^2 = \sum\limits_{i=1}^{D} (1-{{C}}_{ii})^2 + \sum\limits_{i \neq j} {C}_{ij}^2 = \sum\limits_{i \neq j} {C}_{ij}^2 . \end{equation} Note that the on-diagonal terms ${C}_{ii} = 1$ for all $i$, so Eq.~\ref{eqn:corr_reg} essentially forces the off-diagonal terms of the correlation matrix to zero, or in other words, makes the embeddings \textbf{orthogonal}, so that different dimensions of node embeddings can capture orthogonal information. One may directly equip existing GR-MLPs with Eq.~\ref{eqn:corr_reg} to alleviate the dimensional collapse issue. However, we would like to design a more general, flexible, and elegant formulation that can handle high-order connectivity and non-homophilous graphs~\citep{geom-gcn,heterophily-survey}. We then introduce {\sc Ortho-Reg}\xspace, a powerful and flexible GR-MLP model, step by step. \subsection{Graph-Regularized MLP with {\sc Ortho-Reg}\xspace}\label{sec:method-ortho-reg} Similar to previous GR-MLPs, we first use an MLP encoder to map raw node features to embeddings. This process can be formulated as ${\bm{H}} = {\rm MLP}_{\theta}({\bm{X}})$, where ${\bm{X}} = \{{\bm{x}}_i\}_{i=1}^{|{\mathcal{V}}|}$ is the raw node feature matrix and $\bm{H} = \{ {\bm{h}}_i\}_{i=1}^{|{\mathcal{V}}|}$ is the embedding matrix. The next question is what kind of graph structure information is more beneficial. Previous GR-MLPs resort to either edge-centric smoothing~\citep{label-propagation,lap-reg} or node-centric matching~\citep{p-reg, graphmlp}.
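A minimal NumPy sketch of Eq.~\ref{eqn:corr_reg} (our own illustration; the function name and the synthetic embeddings are hypothetical) computes the correlation matrix with unit diagonal and sums the squared off-diagonal entries:

```python
import numpy as np

def corr_offdiag_penalty(H):
    """||C - I||_F^2 for the correlation matrix C of H (n x D).  With a
    unit diagonal this equals the sum of squared off-diagonal entries."""
    Hc = H - H.mean(axis=0)
    Hc = Hc / (np.linalg.norm(Hc, axis=0) + 1e-12)   # unit-norm columns -> C_ii = 1
    C = Hc.T @ Hc
    return ((C - np.eye(H.shape[1])) ** 2).sum()

rng = np.random.default_rng(0)
H_orth = rng.normal(size=(1000, 4))                  # nearly decorrelated dimensions
H_coll = H_orth @ np.ones((4, 4)) + 0.01 * rng.normal(size=(1000, 4))
print(corr_offdiag_penalty(H_orth))  # small: dimensions carry distinct information
print(corr_offdiag_penalty(H_coll))  # near D(D-1) = 12: all dimensions aligned
```

For the over-correlated embedding, every pair of dimensions has correlation close to $\pm 1$, so the penalty is close to its maximum $D(D-1)$, while decorrelated embeddings incur a penalty near zero.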
Recent studies indicate that node-centric methods are more appropriate for node-level tasks, as edge-centric methods overemphasize the ability to recover the graph structure~\citep{p-reg}. Inspired by this, we employ a \textbf{neighborhood abstraction} operation to summarize the neighborhood information as guidance for the central node. Formally, for a node $i \in {\mathcal{V}}$ and the embeddings of its (up-to) $T$-hop neighbors $\{\bm{h}_j\}^{(1:T)}(i)$, we obtain the summary of its $T$-hop neighborhoods through a pooling function ${\bm{s}}_i = {\rm Pool}( \{\bm{h}_j\}^{(1:T)}(i) )$. The exact formulation of the pooling function can be flexibly chosen to fit graphs with different properties. Here, for simplicity, we consider a simple average pooling of node embeddings from neighborhoods of different orders, which works in most cases: \begin{equation}\label{eqn:neighbor-summary} {\bm{S}} = \sum_{t=1}^T \tilde{{\bm{A}}}^t{\bm{H}} / T, \; \mbox{where} \; \tilde{{\bm{A}}} = {\bm{A}} {{\bm{D}}}^{-1}. \end{equation} To make the node embeddings aware of structural information, we employ the following regularization term on the cross-correlation matrix of the node embeddings ${\bm{H}}$ and the summary embeddings ${\bm{S}}$: \begin{equation}\label{eqn:ortho-reg} {\mathcal{L}}_{reg} = -\alpha \sum\limits_{k=1}^D C_{kk} + \beta\sum\limits_{k\neq k'}C_{kk'}^2, \end{equation} where ${\bm{C}} = \{C_{kk'}\} \in {\mathbb{R}}^{D\times D}$ is the cross-correlation matrix of ${\bm{H}}$ and ${\bm{S}}$. We show in the following theorem that with Eq.~\ref{eqn:ortho-reg}, the node embeddings will be locally smoothed while, at the same time, dimensional collapse is prevented: \begin{theorem}\label{theorem:ortho-reg} Assume $T = 1$ and that ${\bm{H}}$ consists of free vectors. Let ${\bm{H}}^*$ be a global optimizer of Eq.~\ref{eqn:ortho-reg}; then ${\bm{H}}^*$ is smoothed over the graph structure and is orthogonal. \end{theorem} See proof in Appendix~\ref{proof:ortho-reg}.
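The pieces above can be combined into a single loss function. The sketch below is our own NumPy illustration (not the authors' implementation; in particular, standardizing each dimension to form the cross-correlation matrix is an assumption) of the neighborhood summary and the regularizer for a dense adjacency matrix:

```python
import numpy as np

def ortho_reg_loss(H, A, T=2, alpha=1.0, beta=1.0):
    """Neighborhood summaries S = (1/T) * sum_t (A D^-1)^t H, followed by
    the regularizer -alpha * tr(C) + beta * sum_{k != k'} C_{kk'}^2, where
    C is the D x D cross-correlation matrix of H and S."""
    A_tilde = A / A.sum(axis=0, keepdims=True)       # column-normalized: A D^{-1}
    S, P = np.zeros_like(H), H
    for _ in range(T):
        P = A_tilde @ P                              # t-hop propagated embeddings
        S = S + P
    S = S / T
    Hs = (H - H.mean(0)) / (H.std(0) + 1e-12)        # standardize per dimension
    Ss = (S - S.mean(0)) / (S.std(0) + 1e-12)
    C = Hs.T @ Ss / H.shape[0]                       # cross-correlation matrix
    off_sq = (C ** 2).sum() - (np.diag(C) ** 2).sum()
    return -alpha * np.trace(C) + beta * off_sq

# Toy usage: a ring graph with 10 nodes and random 4-d embeddings.
n = 10
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
H = np.random.default_rng(0).normal(size=(n, 4))
print(ortho_reg_loss(H, A))
```

The $-\alpha$ term rewards agreement between each dimension of a node's embedding and the matching dimension of its neighborhood summary (smoothing), while the $\beta$ term decorrelates distinct dimensions, which is exactly the trade-off stated in the theorem above.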
Finally, we employ an additional linear layer to make predictions, ${\bm{Y}} = {\rm Lin}_{\phi}({\bm{H}})$. The final objective function to be optimized is: \begin{equation} {\mathcal{L}} = \bm{\ell}_{xent}({\bm{Y}}^L, \hat{{\bm{Y}}}^L) - \alpha \sum\limits_{k=1}^D C_{kk} + \beta\sum\limits_{k\neq k'}C_{kk'}^2, \end{equation} where $\alpha, \beta$ are trade-off hyperparameters that balance the strengths of the regularization terms. \begin{remark} With a well-trained model, we can quickly make a prediction for a newly arriving node with feature $\bm{x}$ via $\bm{y} = {\rm {Lin}}_{\phi} ({\rm {MLP}}_{\theta}(\bm{x}))$, without the help of the graph structure. \end{remark} \section{Experiments}\label{sec:experiments} In this section, we conduct experiments to evaluate {\sc Ortho-Reg}\xspace by answering the following research questions: \begin{itemize} \item \textbf{RQ1}: How does {\sc Ortho-Reg}\xspace perform on common transductive node classification tasks compared with GNN models and other MLP models? (Sec.~\ref{exp:transductive}) \item \textbf{RQ2}: In cold-start settings where we do not know the connections of testing nodes, can {\sc Ortho-Reg}\xspace demonstrate better performance than other methods? (Sec.~\ref{exp:inductive}) \item \textbf{RQ3}: Does {\sc Ortho-Reg}\xspace mitigate the dimensional collapse issue, and is each design of {\sc Ortho-Reg}\xspace really necessary to its success? (Sec.~\ref{exp:abl}) \item \textbf{RQ4}: Can {\sc Ortho-Reg}\xspace demonstrate better robustness against structural perturbations compared with Graph Neural Networks? (Sec.~\ref{sec:exp-robustness}) \end{itemize} Due to space limits, we defer the experiments on heterophily graphs and the scalability comparison to Appendix~\ref{appendix-exp-hete} and Appendix~\ref{appendix-exp-scale}, respectively. A brief introduction of the baselines is given in Appendix~\ref{baselines}.
\subsection{Experiment Setups}\label{sec:exp-steups} \textbf{Datasets.} We consider $7$ benchmark graph datasets and their variants in this section: \texttt{Cora}\xspace, \texttt{Citeseer}\xspace, \texttt{Pubmed}\xspace, \texttt{Amazon-Computer}\xspace, \texttt{Amazon-Photo}\xspace, \texttt{Coauthor-CS}\xspace, and \texttt{Coauthor-Physics}\xspace, as they are representative datasets for semi-supervised node classification~\citep{gcn, graphmlp, glnn, coldbrew}. Their detailed descriptions and statistics are presented in Appendix~\ref{appendix-exp-detail}. To evaluate {\sc Ortho-Reg}\xspace on large-scale graphs, we further consider two OGB datasets~\citep{ogb}: \texttt{Ogbn-Arxiv}\xspace and \texttt{Ogbn-Products}\xspace. Note that the two OGB datasets are designed for fully-supervised node classification tasks, so we defer their results to Appendix~\ref{appendix-exp-add}. \textbf{Implementations.} Unless otherwise specified, we use a two-layer MLP model as the encoder to generate node embeddings; another linear layer then takes the node embeddings as input and outputs the predicted node labels. We use PyTorch to implement the model and DGL~\citep{dgl} to implement the neighborhood summarizing operation in Eq.~\ref{eqn:neighbor-summary}. Unless otherwise specified, all our experiments are conducted on an NVIDIA V100 GPU with 16GB of memory, using the Adam optimizer~\citep{adam}. \subsection{Transductive Semi-supervised Node Classification (RQ1)}\label{exp:transductive} \begin{table*}[t] \centering \caption{Prediction accuracy of semi-supervised node classification tasks on the seven benchmark graphs.
{\sc Ortho-Reg}\xspace outperforms powerful GNN models and competitive MLP-architectured baselines on 6 out of 7 datasets.} \label{tbl-exp-semi} \small \begin{threeparttable} { \scalebox{0.87} { \begin{tabular}{c|l|ccccccc} \toprule[0.8pt] & Methods & \texttt{Cora}\xspace & \texttt{Citeseer}\xspace & \texttt{Pubmed}\xspace & \texttt{Computer} & \texttt{Photo} & \texttt{CS} & \texttt{Physics} \\ \midrule \multirow{3}{*}{GNNs} & SGC & 81.0$\pm$0.5 & 71.9$\pm$0.5 & 78.9$\pm$0.4 & 80.6$\pm$1.9 & 90.3$\pm$0.8 & 87.9$\pm$0.7 & 90.3$\pm$1.4 \\ & GCN & 82.2$\pm$0.5 & 71.6$\pm$0.4 & 79.3$\pm$0.3 & 82.9$\pm$2.1 & 91.8$\pm$0.6 & 89.9$\pm$0.7 & 91.9$\pm$1.2 \\ & GAT & 83.0$\pm$0.7 & 72.5$\pm$0.7 & 79.0$\pm$0.3 & 82.5$\pm$1.6 & 91.4$\pm$0.8 & 90.5$\pm$0.8 & 92.3$\pm$1.5 \\ \midrule[0.5pt] KD-MLPs & GLNN & 82.6$\pm$0.5 & 72.8$\pm$0.4 & 80.2$\pm$0.6 & 82.1$\pm$1.9 & 91.3$\pm$1.0 & 92.6$\pm$1.0 & \textbf{93.3}${\bm{\pm}}$\textbf{0.5} \\ \midrule[0.5pt] \multirow{5}{*}{GR-MLPs} & MLP & 59.7$\pm$1.0 & 57.1$\pm$0.5 & 68.4$\pm$0.5 & 62.6$\pm$1.8 & 76.2$\pm$1.4 & 86.9$\pm$1.0 & 89.4$\pm$0.7 \\ & Lap-Reg & 60.3$\pm$2.5 & 58.6$\pm$2.4 & 68.7$\pm$1.4 & 62.6$\pm$2.0 & 76.4$\pm$1.1 & 87.9$\pm$0.6 & 89.5$\pm$0.5 \\ & P-Reg & 64.4$\pm$4.5 & 61.1$\pm$2.1 & 72.3$\pm$1.7 & 68.9$\pm$3.3 & 79.7$\pm$3.7 & 90.9$\pm$1.9 & 91.6$\pm$0.7 \\ & GraphMLP & 79.5$\pm$0.6 & 73.1$\pm$0.4 & 79.7$\pm$0.4 & 79.3$\pm$1.7 & 90.1$\pm$0.5 & 90.3$\pm$0.6 & 91.6$\pm$0.8 \\ & N2N & 83.2$\pm$0.4 & 73.3$\pm$0.5 & 80.9$\pm$0.4 & 81.4$\pm$1.6 & 90.9$\pm$0.7 & 91.5$\pm$0.7 & 91.8$\pm$0.7 \\ \midrule[0.5pt] Ours & {\sc Ortho-Reg}\xspace & \textbf{84.7}${\bm{\pm}}$\textbf{0.4} & \textbf{73.5}${\bm{\pm}}$\textbf{0.4} & \textbf{82.8}${\bm{\pm}}$\textbf{0.5} & \textbf{83.7}${\bm{\pm}}$\textbf{1.5} & \textbf{92.3}${\bm{\pm}}$\textbf{1.0} & \textbf{92.9}${\bm{\pm}}$\textbf{1.1} & 92.8$\pm$0.8 \\ \bottomrule[0.8pt] \end{tabular} } } \end{threeparttable} \end{table*} We first evaluate our method on transductive semi-supervised node 
classification tasks. For comparison, we consider three types of baseline models: 1) Graph Neural Networks (GNNs), including SGC~\citep{sgc}, GCN~\citep{gcn}, and GAT~\citep{gat}; 2) the representative knowledge distillation (KD-MLP) method GLNN~\citep{glnn}; 3) the basic MLP and GR-MLP models, including Laplacian regularization (Lap-Reg,~\citet{label-propagation}, \citet{lap-reg}), Propagation Regularization (P-Reg,~\citet{p-reg}), GraphMLP~\citep{graphmlp}, and Node-to-Neighborhood Mutual Information Maximization (N2N,~\citet{n2n}). For each dataset, we use $20$ nodes per class for training, $500$ nodes for validation, and another $1000$ nodes for testing. For \texttt{Cora}\xspace, \texttt{Citeseer}\xspace, and \texttt{Pubmed}\xspace we use the public split, while for the remaining datasets we split randomly. We report the average prediction accuracy with standard deviation over 20 random trials in Table~\ref{tbl-exp-semi}. As demonstrated in the table, {\sc Ortho-Reg}\xspace outperforms previous GR-MLPs by a large margin, which validates the importance and effectiveness of orthogonal node embeddings. Compared with the competitive knowledge distillation method GLNN, {\sc Ortho-Reg}\xspace also demonstrates better performance on $6$ out of $7$ graphs. It is also worth noting that our method even outperforms powerful GNN models such as GCN and GAT, which indicates that the node features of the graphs are under-exploited by these GNN models. In contrast, our method can fully exploit the potential of node features. \subsection{Inductive Node Classification for Cold-Start Scenarios (RQ2)}\label{exp:inductive} To evaluate the performance of {\sc Ortho-Reg}\xspace in cold-start scenarios, where the connections between newly encountered nodes and existing nodes are missing, we follow the setups in ColdBrew, selecting a proportion of nodes as \textit{isolated} nodes that are removed from the original graph.
Then the model is evaluated on the isolated nodes in the testing set. Due to the space limit, we present the detailed setups and evaluation methods in Appendix~\ref{appendix-exp-inductive}. Besides the baselines used in~\citet{coldbrew}, we also include GLNN for a fair comparison. \begin{table}[t] \centering \caption{Test accuracy on the isolated nodes.} \label{tbl-exp-cold} \small { \begin{threeparttable} { \scalebox{0.9}{ \begin{tabular}{llcccccc} \toprule[0.8pt] \multicolumn{2}{c}{{Methods}} & \texttt{Cora}\xspace & \texttt{Citeseer}\xspace & \texttt{Pubmed}\xspace \\ \midrule \multirow{2}{*}{{GNNs}} & GCN & 53.02$\pm$1.78 & 47.09$\pm$1.38 & 71.50$\pm$2.21 \\ & GraphSAGE & 55.38$\pm$1.92 & 41.46$\pm$1.57 & 69.87$\pm$2.13 \\ \specialrule{0em}{1pt}{1pt} \cline{1-2} \specialrule{0em}{1pt}{1pt} \multirow{2}{*}{{KD-MLPs}} & ColdBrew & 58.75$\pm$2.11 & 53.17$\pm$1.41 & {72.31$\pm$1.99} \\ & GLNN & {59.34$\pm$1.97} & {53.64$\pm$1.51} & 73.19$\pm$2.31 \\ \specialrule{0em}{1pt}{1pt} \cline{1-2} \specialrule{0em}{1pt}{1pt} \multirow{4}{*}{{GR-MLPs}} & MLP & 52.35$\pm$1.83 & 53.26$\pm$1.41 & 65.84$\pm$2.08 \\ & GraphMLP & 59.32$\pm$1.81 & 53.17$\pm$1.48 & 72.33$\pm$2.11 \\ & {\sc Ortho-Reg}\xspace & {\textbf{61.93}${\bm{\pm}}$\textbf{1.77}} & {\textbf{56.31}${\bm{\pm}}$\textbf{1.54}} & {\textbf{73.42}${\bm{\pm}}$\textbf{1.99}} \\ \bottomrule[0.8pt] \end{tabular}}} \end{threeparttable}} \end{table} In Table~\ref{tbl-exp-cold}, we report the experimental results of {\sc Ortho-Reg}\xspace and the baseline methods on the isolated nodes. As demonstrated in the table, for isolated nodes whose connectivity in the graph is unknown, GNN models perform poorly, as they require both the node features and the graph structure for accurate inference. By contrast, MLP-based models generalize better on isolated nodes, as they make the best of the available node features. The proposed {\sc Ortho-Reg}\xspace outperforms both GNN and MLP baselines (including KD-MLPs and GR-MLPs).
\subsection{Studies of {\sc Ortho-Reg}\xspace (RQ3)}\label{exp:abl} \subsubsection{Does {\sc Ortho-Reg}\xspace mitigate dimensional collapse?} In Sec.~\ref{sec:dimensional-collapse} we attributed the limitation of previous GR-MLPs to the dimensional collapse phenomenon, and in Sec.~\ref{sec:method-ortho-reg} we proposed {\sc Ortho-Reg}\xspace to mitigate this problem from a theoretical perspective. In this part, we empirically show that {\sc Ortho-Reg}\xspace avoids the dimensional collapse issue by preserving the eigenspectrum of the node embeddings. \begin{figure}[t] \centering \includegraphics[width = 0.95\linewidth]{Fig/vis.pdf} \caption{Visualization of {\sc Ortho-Reg}\xspace's impact on node embeddings' eigenspectra on \texttt{Cora}\xspace and \texttt{Pubmed}\xspace.} \label{fig:vis} \end{figure} Consistent with the settings in Sec.~\ref{sec:dimensional-collapse}, we evaluate the embeddings learned by {\sc Ortho-Reg}\xspace at different training epochs (we take both \texttt{Cora}\xspace and \texttt{Pubmed}\xspace for illustration). The decay of the eigenvalues of the node embeddings' correlation matrix at different epochs is plotted in Fig.~\ref{fig:vis} (a) and (c). We observe that the top eigenvalues are well-preserved thanks to the explicit regularization of the node embeddings' correlation matrix. In Fig.~\ref{fig:vis} (b) and (d) we also plot the change of the testing accuracy as well as the NESum value as the training epoch increases, from which we observe a positive relationship between the NESum value and the test accuracy: neglecting the initial oscillations, the test accuracy grows smoothly as the NESum value increases and reaches its peak when NESum saturates (\texttt{Cora}\xspace) or converges (\texttt{Pubmed}\xspace). These observations demonstrate that {\sc Ortho-Reg}\xspace does mitigate the dimensional collapse problem and leads to a more powerful model.
\subsubsection{Ablation Studies} We next conduct ablation studies to examine the effect of the different components of {\sc Ortho-Reg}\xspace; the results are presented in Table~\ref{tbl-exp-abl}. We first study the impact of the two regularization terms by setting the corresponding factors ($\alpha$ and $\beta$) to $0$, respectively. When $\alpha = 0$ (i.e., only decorrelating different dimensions), we observe that the model's performance is even worse than that of the pure MLP model (see Table~\ref{tbl-exp-semi}). This indicates that adding orthogonality regularization is not always beneficial (e.g., for a vanilla MLP), but it is indeed beneficial for GR-MLPs. Conversely, without orthogonality regularization (i.e., $\beta = 0$), the power of the structure regularization is restricted, and decorrelating different dimensions boosts the performance greatly. We further investigate whether considering a larger neighborhood improves the model's performance. The empirical results demonstrate that considering a larger neighborhood improves the performance compared to only using first-order neighborhoods, but $T = 2$ is already optimal for most datasets.
\begin{table}[t] \centering \caption{Effects of different components of {\sc Ortho-Reg}\xspace} \label{tbl-exp-abl} \small { \begin{threeparttable} { \scalebox{1.0}{ \begin{tabular}{c|ccc} \toprule[0.8pt] Variants & {\texttt{Cora}\xspace} & {\texttt{Citeseer}\xspace} & {\texttt{Pubmed}\xspace} \\ \midrule[0.5pt] Baseline & 84.7 & 73.5 & 82.8 \\ \midrule[0.5pt] $\alpha = 0$ & 54.7 & 51.4 & 47.2 \\ $\beta = 0$ & 79.3 & 68.7 & 76.8 \\ \midrule[0.5pt] $T = 1$ & 83.9 & 72.9 & 82.1 \\ $T = 2$ & \textbf{84.7} & \textbf{73.5} & \textbf{82.8}\\ $T = 3$ & 84.3 & 73.3 & 82.5\\ \bottomrule[0.8pt] \end{tabular}} } \end{threeparttable} } \end{table} \subsubsection{Hyperparameter Analysis} \begin{figure}[t] \centering \includegraphics[width = 1.0\linewidth]{Fig/sense.pdf} \caption{Performance heat map when using different $\alpha$, $\beta$ combinations in Eq.~\ref{eqn:ortho-reg}, on \texttt{Cora}\xspace and \texttt{Pubmed}\xspace.} \label{fig:sense} \end{figure} We further study how the two trade-off hyperparameters $\alpha$ and $\beta$ affect the performance of {\sc Ortho-Reg}\xspace. We try different combinations of $\alpha$ and $\beta$ on \texttt{Cora}\xspace, and \texttt{Pubmed}\xspace (we defer the results on \texttt{Citeseer}\xspace to Appendix~\ref{appendix-exp-sense} due to space limit), and plot the performance heatmap in Fig.~\ref{fig:sense}. The conclusion is very interesting: the performance of {\sc Ortho-Reg}\xspace is not very sensitive to a specific value of $\alpha$ or $\beta$. In other words, for a reasonable value of $\alpha$ ($\beta$), we can easily find another value of $\beta$ ($\alpha$) that can achieve similarly high performance. The ratio between $\alpha$ and $\beta$ seems much more important. 
From Fig.~\ref{fig:sense}, we can observe that $\alpha/\beta = 2\times 10^3$ for \texttt{Cora}\xspace and $\alpha/\beta = 1\times 10^3$ for \texttt{Pubmed}\xspace lead to the optimal performance; changing the value of $\alpha$ while fixing $\alpha/\beta$ does not change the performance much. \subsection{Robustness Against Structural Perturbations (RQ4)}\label{sec:exp-robustness} Finally, we study the robustness of {\sc Ortho-Reg}\xspace against attacks on the graph structure, compared with GNN models. As {\sc Ortho-Reg}\xspace uses node features rather than a combination of node features and edges for prediction, we expect it to demonstrate better robustness under mild structural perturbations. To this end, we randomly mask a fraction of the edges of the graph and evaluate the performance of {\sc Ortho-Reg}\xspace and GCN under different edge-masking ratios. In Fig.~\ref{fig:exp-robustness}, we plot how the models' performance changes (with standard deviation) as the masking ratio increases, over 20 random trials. \begin{figure}[t] \centering \includegraphics[width = 0.8\linewidth]{Fig/robustness.pdf} \caption{Effects of increasing edge-masking ratios.} \label{fig:exp-robustness} \end{figure} As demonstrated in Fig.~\ref{fig:exp-robustness}, our method demonstrates better robustness against moderate-level edge perturbations. This is because we do not explicitly use the graph structure for generating predictions, making {\sc Ortho-Reg}\xspace less sensitive to perturbations of the graph structure. \section{Conclusions}\label{sec:conclusion} In this paper, we have proposed {\sc Ortho-Reg}\xspace, a novel Graph-Regularized MLP method for node representation learning. We show, both theoretically and empirically, that simple graph regularization methods can cause dimensionally collapsed node embeddings.
We show that the proposed {\sc Ortho-Reg}\xspace, which enforces orthogonality on the correlation matrix of node embeddings, can naturally avoid the dimensional collapse phenomenon. We have conducted extensive experiments, including traditional transductive semi-supervised node classification tasks and inductive node classification for cold-start nodes, demonstrating the superiority of {\sc Ortho-Reg}\xspace. { \bibliographystyle{icml2023} \section{General response to all reviewers} We thank all the reviewers for their thorough comments and valuable opinions/suggestions. We have uploaded the revised version, which addresses most of the concerns. Here we summarize the major changes in the revised version: 1) We emphasize the preconditions/assumptions of Lemma 1. Our Lemma 1 focuses on the Laplacian regularization loss, neglecting the impact of the supervised loss. Besides, we assume a linear model to simplify the proof. 2) We replot Fig.2, focusing on the top-$8$ eigenvalues instead of all $512$ eigenvalues, to better demonstrate the differences between the subfigures. Besides, we add another three subfigures to Fig.2, showing that at the same epoch, the decay rate of the eigenvalues increases as the trade-off hyperparameter $\lambda$ increases. 3) We add more content to the Appendix: a) in Appendix C.5, we conduct a sensitivity analysis of the trade-off hyperparameters $\alpha$ and $\beta$ in Eq.7; b) in Appendix E, we provide more analysis of why dimensional collapse is not beneficial for linear classification. 4) We fix some typos and misleading symbols that might affect the reading. Besides, we have provided individual responses for each reviewer, which address each question in detail. We hope our responses can address your concerns. \section{Responses to Reviewer Mv49} We thank the reviewer for the valuable comments.
We note that your concerns are mainly about the dimensional collapse phenomenon, i.e., whether it really exists and why collapsed representations are not desired. Here we answer your questions one by one, and we hope this can address your concerns. Q1: Theoretical analysis in Sec.3 A1: Our theoretical analysis here does focus on the Laplacian regularization term (we study the gradient of the regularization loss with respect to the weight matrix $\mathbf{W}$). This is reasonable, as both the supervised cross-entropy loss and the Laplacian regularization are important for shaping the final learned node representations. Even if the supervised cross-entropy loss can prevent completely collapsed representations, the gradient of the Laplacian regularization term inevitably has such an effect (it makes different dimensions of the node embeddings over-correlated). Besides, in the proof of Lemma 1, we did not assume the smoothing term would be optimized to zero. The proof studies the evolution of the singular values of the weight matrix, when optimized using the Laplacian regularization term, with respect to the optimization step $t$. The conclusion is that the ratio between the smaller eigenvalues and the largest one is monotonically decreasing as the training step $t$ increases, which then leads to Lemma 1. We understand the reviewer's concern that, in reality, the supervised loss can alleviate this issue to some extent. However, we already pointed out in the initially submitted version that the theoretical analysis only concerns the regularization term. Besides, we have clearly stated that the analysis has its limitations (see the last paragraph before the empirical justification in Sec.3). This is why we provide further empirical results (a non-linear model trained using a weighted sum of the supervised cross-entropy loss and the Laplacian regularization) to justify the existence of the dimensional collapse phenomenon.
Q2: Experiments of Sec.3 are not clear. A2: We are sorry for causing a misunderstanding of Fig.2. The reason that the differences between the eigenspectra as the weight term increases are difficult to discriminate is that we plot the changes of all $512$ eigenvalues. To better show the change, we replot Fig.2, focusing only on the decay of the top-8 eigenvalues (we move the original figure to Fig.7 in Appendix C.4 as a reference). Besides, to better show the change of the eigenspectrum for increasing $\lambda$, we add another three subfigures that plot the top eigenvalues for three different $\lambda$ values at the same training epoch (epochs 0, 20, and 100). As suggested by Reviewer 4, we also plot the 95\% confidence intervals. In the updated Fig.2 (especially the lower three subfigures), we can easily observe that as $\lambda$ increases, the ratio of the top eigenvalues (w.r.t. the largest one) decreases more quickly. Specifically, at training epoch 100, the second largest eigenvalue for $\lambda = 0.001$ and $\lambda = 0.1$ becomes only $20\%$ of the largest one, indicating that the largest eigenvalue contains most of the information, while the remaining ones are much less important. The empirical results are consistent with our analysis above: the dimensional collapse phenomenon does exist under Laplacian regularization, even when considering the supervised loss and a non-linear model. Q3: Relationship between the dimensional collapse phenomenon and the poor performance of existing GR-MLPs. A3: We thank the reviewer for pointing out that the relationship between the dimensional collapse phenomenon and the poor performance of existing GR-MLPs is not clearly explained. Intuitively, when complete dimensional collapse happens, all data points are embedded onto a line and thus cannot be separated by a linear classifier. In Appendix E of the updated version, we further explain the limitation of weakly (or modestly) collapsed representations using a 2-dimensional example.
Our main proposition is that, even if the training data points of modestly collapsed representations can be successfully separated by a linear classifier, they demonstrate worse robustness against attacks and worse generalization to testing data (whose distribution might be slightly shifted from the training one), due to the narrow embedding space along the directions of small eigenvalues. Another important point is that when the dimensional collapse phenomenon exists, it can be hard to figure out whether it is severe or moderate. In this case, we would rather directly eliminate the effect of dimensional collapse (as we do in OrthoReg). Besides, enforcing orthogonality on node embeddings only shapes the representations, without regularizing or constraining the information that the representations should carry for downstream tasks. If the ability to filter noise is due to the supervised loss + lap-reg, enforcing orthogonality as we do can hardly damage this ability, as it uses the same embedding dimension and merely rebalances the information each dimension carries. Q4: Model inconsistency. A4: We do consider higher-order connectivity information in our final model, which is controlled by the hyperparameter $T$. Note that when $T = 1$, our method only considers first-order connectivity, and we have studied the performance with different $T$ in Table 3 of the original version. When $T = 1$, our method can still achieve very satisfying performance (only slightly lower than the best results with $T = 2$). These results validate that the improvement in the model's performance is not solely due to the higher-order connectivity. Besides, these results also demonstrate that considering higher-order connectivity can improve the model, which was not considered in traditional GR-MLPs. This is also an important contribution of this work.
\section{Responses to Reviewer 2EfD} We thank the reviewer for the valuable comments and positive feedback. We are glad to answer your questions. Q1: Extensions to other graph types, e.g., knowledge graphs. A1: Though this paper focuses on representation learning on homogeneous graphs, the analysis and the proposed method can naturally generalize to complex graphs like knowledge graphs. However, there may be some obstacles hindering the direct application of our method to such more complicated graph-structured data. 1) As we aim at using MLPs instead of GNNs for learning node embeddings, a basic assumption is that the node features are rich enough that we can implicitly infer the structure from node features with our regularization technique. As the number of relation types in knowledge graphs can be large, the processed node features in knowledge graph datasets might be insufficient to infer both the existence and the type of an edge. A more promising way would be to combine the model with Language Models, e.g., we can generate node features using Language Models, which could be finetuned together with the top MLPs. 2) Different from homogeneous graphs, which have only one type of nodes and edges, knowledge graphs usually have multiple types of nodes and edges. Note that to perform our regularization, we have to extract the neighborhood information with Eq.(6); this is challenging for knowledge graphs, as we have to discriminate different types of neighborhoods according to the node and edge types. One trivial way is to use different models for different edge types (like RGCN[1]), which means we require a regularization loss term for every edge type. This can be very inefficient. Besides, how to effectively fuse information from different edge types is also crucial and challenging. References: [1] Schlichtkrull, Michael, et al. "Modeling relational data with graph convolutional networks."
European semantic web conference. Springer, Cham, 2018. https://arxiv.org/abs/1703.06103 Q2: Explain the limitations of GNNs. A2: We are sorry for the ambiguous expression. In this sentence, we refer to the limitations of GNNs elaborated in the first paragraph of Sec.1: 1) GNNs rely on layer-wise message passing to aggregate features from the neighborhood, which is computationally inefficient during **inference**. For this limitation, please refer to the analysis in GLNN[2] (Fig.1), which shows that the number of fetches and the inference time of GNNs are both magnitudes larger than those of MLPs and grow exponentially with the number of layers. 2) GNN models cannot perform satisfactorily in cold-start scenarios where the connections of new incoming nodes are few or unknown. This can be observed by studying the classification accuracy of nodes with different degrees, as in ColdBrew[3]: the classification accuracy on nodes with large degrees is much higher than on nodes with low degrees. Besides, GNNs perform poorly when predicting nodes having few or no connections to the existing graph, even worse than vanilla MLPs. References: [2] Zhang, Shichang, et al. "Graph-less neural networks: Teaching old mlps new tricks via distillation." In ICLR, 2022. https://arxiv.org/abs/2110.08727 [3] Zheng, Wenqing, et al. "Cold Brew: Distilling graph node representations with incomplete or missing neighborhoods." In ICLR, 2022. https://arxiv.org/abs/2111.04840 Q3: Extend the analysis to non-linear cases. A3: Our theoretical analysis in Sec.3 is restricted to linear models because, for a linear model, it is convenient to analyze the evolution of a single weight matrix. When extending to non-linear models, there is more than one weight matrix, together with the effect of activation functions, which makes the analysis much more complicated. Despite this, we can still analyze the evolution of the final embeddings intuitively. 
The argument is based on the Universal Approximation Theorem[4], which shows that a shallow MLP can approximate any function. With this assumption, we can treat the node embeddings $\mathbf{H}$ as free learnable vectors. Taking the gradient of the Laplacian regularization loss with respect to $\mathbf{H}$: $$ \frac{\partial \mathcal{L}_{reg}}{\partial \mathbf{H}} = \frac{\partial \; {{\rm tr}(\mathbf{H}^{\top}\mathbf{L}\mathbf{H})}}{\partial \mathbf{H}} = (\mathbf{L} + \mathbf{L}^{\top})\mathbf{H} = 2\mathbf{L}\mathbf{H}. $$ Treating the embedding matrix as a function of the training step $t$, i.e., $\mathbf{H} = \mathbf{H}(t)$, gradient flow gives $\frac{{\rm d}\mathbf{H}(t)}{{\rm d}t} = -2\mathbf{LH}$, and we can solve the equation analytically: $$ \mathbf{H}(t) = \exp(-2\mathbf{L}t) \cdot\mathbf{H}(0). $$ The remaining argument is similar to the proof of Lemma 1 in Appendix A.1. References: [4] Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. "Multilayer feedforward networks are universal approximators." Neural networks 2.5 (1989): 359-366. Q4: What are the values of $\alpha$ and $\beta$ in Table 3, and what's the trade-off between them? A4: The $\alpha$ and $\beta$ in Table 3 are the same as in Table 1. Cora: $\alpha = 2e-3, \beta = 1e-6$; Citeseer: $\alpha = 1e-3, \beta = 1e-6$; Pubmed: $\alpha = 2e-6, \beta = 2e-6$. Although the optimal hyperparameters differ across datasets, their values are close, and the model's performance is not very sensitive to the specific values. In the revised version, we add another section in Appendix C.5 studying the trade-off between $\alpha$ and $\beta$. The conclusion is that when $\alpha / \beta$ is about $10^3$, our method achieves satisfying performance as long as $\alpha$ is within a reasonable range, e.g., from $0.0005$ to $0.005$. 
With this observation, it is not hard to deploy our method on a new dataset. For example, we can set the initial values to $\alpha = 0.001$ and $\beta = 10^{-6}$, then slightly tune the two hyperparameters to find the optimal combination. Q5: Study other types of graph regularization. A5: Our analysis and experiments in Sec.3 study Laplacian regularization only, but they can be extended to other graph regularization methods that enforce smoothness of representations over the graph structure, e.g., Propagation Regularization (P-reg [5]), which is defined as follows: $$ \mathcal{L}_{P-reg} = \Vert \mathbf{\tilde{A}H} - \mathbf{H} \Vert_F^2. $$ Similarly, we can take the gradient of $\mathcal{L}_{P-reg}$ with respect to $\mathbf{H}$: $$ \frac{\partial \mathcal{L}_{P-reg}}{\partial \mathbf{H}} = \frac{\partial \Vert \mathbf{\tilde{A}H} - \mathbf{H} \Vert_F^2}{\partial \mathbf{H}} = 2(\mathbf{\tilde{A} - I})^{\top}(\mathbf{\tilde{A} - I})\mathbf{H} = 2 \mathbf{L}^{\top}\mathbf{L}\mathbf{H}. $$ As a result, similar conclusions can be derived. \section{Responses to Reviewer GzRn} We thank the reviewer for the thorough review and kind suggestions. We note that the reviewer's concerns are mainly about the theoretical parts in Sec.3, e.g., that some preconditions/assumptions are not emphasized and that the eigenvalues of the embedding matrix need more description. Besides, there might be a misunderstanding of the claimed efficiency benefit of our method over GNNs. We address these concerns below: Q1: Emphasize and clarify the assumptions for Lemma 1. A1: We thank the reviewer for pointing out that some preconditions and assumptions are not clearly stated. In the revised version, we have emphasized the linear-model assumption in both the abstract and the introduction of Lemma 1. However, we'd like to emphasize that a similar analysis can be extended to non-linear cases when the Universal Approximation Property of MLPs is assumed. 
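As a small self-contained illustration of the linear-model analysis (a NumPy sketch with a synthetic symmetric $\mathbf{P}$ standing in for $\mathbf{X}^{\top}\mathbf{L}\mathbf{X}$; the sizes and eigenvalues are our assumptions, not the paper's code), one can check the closed-form gradient of ${\rm tr}(\mathbf{W}^{\top}\mathbf{P}\mathbf{W})$ and watch the singular-value spectrum of $\mathbf{W}$ collapse under gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 5
# Synthetic symmetric P with distinct positive eigenvalues (assumed setup).
Q, _ = np.linalg.qr(rng.standard_normal((D, D)))
P = Q @ np.diag(np.array([4.0, 3.0, 2.0, 1.0, 0.5])) @ Q.T
W = rng.standard_normal((D, D))

# 1) Finite differences recover the closed-form gradient (P + P^T) W = 2 P W.
loss = lambda M: np.trace(M.T @ P @ M)
eps, num_grad = 1e-6, np.zeros_like(W)
for i in range(D):
    for j in range(D):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        num_grad[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)
closed_grad = 2 * P @ W
assert np.allclose(num_grad, closed_grad, atol=1e-4)

# 2) Gradient descent on tr(W^T P W): components along large eigenvalues of P
# decay fastest, so the singular values of W drift apart over training.
sv_ratio = lambda M: (lambda s: s[-1] / s[0])(np.linalg.svd(M, compute_uv=False))
r0 = sv_ratio(W)
for _ in range(2000):
    W = W - 1e-3 * 2 * P @ W
assert sv_ratio(W) < r0  # smallest/largest singular value has shrunk
```

The shrinking `sv_ratio` is the finite-step analogue of the singular-value ratios analyzed in Lemma 1.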
In Sec.3, we consider the linear model because it is much more convenient to analyze its gradient. We did not consider the effect of the supervised loss because, in Lemma 1, we mainly analyze the effect of the regularization term on the gradient of the weight matrix. We agree that the supervised loss has a non-negligible effect on the gradient as well, but what we would like to express through Lemma 1 is that the Laplacian regularization loss does drive dimensional collapse, even if the collapse is not very severe. In the revised version, we have emphasized that we neglect the supervised loss and focus on the effect of Laplacian regularization in Lemma 1, and we hope that in this way the analysis in Sec.3 is clear and no longer causes misunderstandings. Besides, we'd like to point out that we clearly stated the two limitations, namely 1) the linear model and 2) the neglect of the supervised loss, in the last paragraph of Sec.3 before the empirical justifications. Furthermore, our empirical result in Sec.3 does not rely on the two assumptions, and it validates that the dimensional collapse phenomenon does exist. Q2: More descriptions of $\{\lambda_i^C\}_{i=1}^D$ after Lemma 1. A2: We have added more descriptions of $\{\lambda_i^C\}_{i=1}^D$ in Theorem 1 in the revised version. In Theorem 1, we conclude that dimensional collapse behaves as the vanishing of the small eigenvalues with respect to the largest few ones: $$ \lim_{t \rightarrow \infty} \frac{\lambda_i^{\mathbf{C}}(t)}{\lambda_j^{\mathbf{C}}(t)} = 0, \; \forall i \ge d+1 \; {\rm and} \; j \le d. $$ In this way, we can turn back to analyzing the eigenvalues of the embeddings $\mathbf{H}$ instead of $\mathbf{W}$. Besides, we also mention $\lambda_i^{\mathbf C}$ in the empirical justification part, where Fig. 
2 plots the evolution of the eigenvalues $\{\lambda_i^C\}_{i=1}^D$, and Fig.3 plots the evolution of NESum, ${\rm NESum}(\{\lambda_i^{\mathbf{C}}\}) \triangleq \sum_{i=1}^{D} {\lambda_i^{\mathbf{C}}} /{\lambda_1^{\mathbf{C}}}$. Q3: Differences between our method and existing methods with regard to efficiency. A3: We'd like to emphasize that the efficiency claimed in this paper is about the **inference stage** (instead of training). As stated in both the Abstract and the first paragraph of Sec.1, GNNs rely on layer-wise message passing to aggregate features from the neighborhood, which is computationally inefficient during inference, especially when the model becomes deep and the graph is large. The training cost of GNNs and GR-MLPs (including ours) is similar, as all these methods utilize the graph structure information, either explicitly or implicitly. The inference time, however, varies a lot. Consider an $L$-layer GNN with minibatches of size $B$ and neighbor sampling with a fixed number of sampled neighbors $k$: the inference complexity is $O(BLd^2k^{L})$, while that of an $L$-layer MLP is only $O(BLd^2)$. As a result, MLPs (and all GR-MLPs) can run inference much faster than GNNs, especially when the graph is large and the model is deep. We also provide an empirical study in Fig.6, Appendix C.3 to show the superiority of our method over GNNs with regard to inference speed. \section{Responses to Reviewer JMnM} We thank the reviewer for the thorough comments and valuable suggestions. We have fixed the typos and would like to address your concerns one by one: Q1: Didn't choose the baselines like those in [B]. A1: We thank the reviewer for mentioning [B]. [B] studies a supervised/unsupervised node representation learning task where the encoder is also an MLP. 
Considering that the supervised model, named N2N (JL), could also be regarded as a GR-MLP model, it is highly related to our work and should indeed be a baseline. Since [B] uses a different experimental setting, we reproduced the method according to the authors' code and have included the results in the revised version (Table 1). It has to be mentioned that the TAPS sampling strategy is neither clearly explained in the paper nor present in the authors' code (they simply provide the sampled positive examples for the Cora dataset), so the version we reproduce is N2N (JL) without TAPS. As for the baselines used in [B], comparing against so many baselines seems unnecessary, as most of them are not representative; besides, they perform similarly to each other and much worse than the proposed method. The purpose of using these baselines is basically to show that the proposed N2N model can beat many GNN models under the given experimental setting. In our paper, we mainly study the performance gap between Graph-Regularized MLPs and GNNs in supervised settings. As a result, we mainly compare our method with other methods using MLPs as encoders (e.g., KD-MLPs and GR-MLPs). To show that our method can match the performance of GNNs, we select three representative GNN baselines: SGC, GCN, and GAT. We did not select more advanced GNN models as they might involve complicated designs (e.g., more parameters or deeper architectures). Note that although we employ a GNN-like module to summarize neighborhood embeddings (i.e., Eq.(5)), it is shallow (i.e., $T=2$) and parameter-free (even simpler than SGC). As a result, it is fair to compare the proposed OrthoReg with the three GNN baselines. Besides, we'd like to mention that our method can even match the performance of SOTA GNN models with complicated regularizations, like GRAND[1]. 
E.g., on Pubmed, our method achieves 82.8\% accuracy, higher than the 82.7\% of GRAND (see Fig.1). References: [1] Wenzheng Feng, Jie Zhang, Yuxiao Dong, Yu Han, Huanbo Luan, Qian Xu, Qiang Yang, Evgeny Kharlamov, and Jie Tang. Graph random neural networks for semi-supervised learning on graphs. In NeurIPS, 2020. Q2: Differences between different coefficients in Fig 2. A2: We are sorry for presenting a figure where the differences between coefficients are hard to discriminate. The reason the differences between the eigenspectra for increasing weight factors are hard to see is that we plotted all $512$ eigenvalues. To better show the change, we replot Fig.2, focusing only on the decay of the top-8 eigenvalues (we move the original figure to Fig.7 in Appendix C.4 as a reference). Besides, to better show how the eigenspectrum changes for increasing $\lambda$, we add three more subfigures that plot the top eigenvalues for three different $\lambda$ values at the same training epoch (epochs 0, 20, and 100). As you suggested, we also plot the 95\% confidence intervals (over ten random initializations). In the updated Fig.2 (especially the lower three subfigures), we can easily observe that as $\lambda$ increases, the ratio of the top eigenvalues (w.r.t. the largest one) decays more quickly. Q3: The relation of $\mathbf{T}$ in the appendix to the $T$ in Eq.(5). A3: The two symbols are different. $\mathbf{T}$ is a matrix defined as $\mathbf{T} = \mathbf{X}^{\top}\mathbf{L}\mathbf{X}$ in Lemma 1, whereas $T$, defined in Sec.4.2, denotes $T$-hop neighbors. We are sorry for the confusion, and we have replaced $\mathbf{T}$ with $\mathbf{P}$ in the updated version. Q4: Last two steps of the derivation in Eq.(8). 
A4: We already have: $$\frac{\partial \mathcal{L}_{reg}}{\partial \mathbf{W}} = \frac{\partial {\rm tr} ((\mathbf{XW})^{\top}\mathbf{L}(\mathbf{XW}))}{\partial \mathbf{W}} = \frac{\partial {\rm tr} (\mathbf{W}^{\top}\mathbf{X}^{\top}\mathbf{L}\mathbf{XW})}{\partial \mathbf{W}}. $$ Denoting $\mathbf{P}=\mathbf{X}^{\top}\mathbf{LX}$, we have: $$\frac{\partial {\rm tr} (\mathbf{W}^{\top}\mathbf{X}^{\top}\mathbf{L}\mathbf{XW})}{\partial \mathbf{W}} = \frac{\partial {\rm tr} (\mathbf{W}^{\top}\mathbf{P}\mathbf{W})}{\partial \mathbf{W}}={\mathbf{PW}+\mathbf{P^{\top}W}}. $$ As $\mathbf{P}=\mathbf{X}^{\top}\mathbf{LX} = (\mathbf{X}^{\top}\mathbf{LX})^{\top} = \mathbf{P}^{\top}$ is symmetric, we finally have: $$\mathbf{PW}+\mathbf{P^{\top}W} = 2\mathbf{PW}.$$ The confusion about the previous derivation might stem from our use of the two similar symbols $\mathbf{T}$ and $T$ in the initial version. In the latest version, we have replaced $\mathbf{T}$ with $\mathbf{P}$ to avoid confusion. Q5: Model-level contribution. A5: We do have a model-level contribution besides the regularization term: the neighborhood abstraction function, i.e., Eq.(6), which introduces an edge-centric operation to capture the (multi-hop) neighborhood information of a target node. This mitigates the issue that existing GR-MLPs only consider first-order neighborhood information. Besides, our ablation study in Table 3 validates the value of higher-order information. Q6: Do we regularize the correlation or the covariance matrix? A6: In our implementation, we regularize the correlation matrix $\mathbf{C}$ instead of the covariance matrix $\mathbf{\Sigma}$. Note that $$ {C}_{kk'} = \frac{{\Sigma}_{kk'}}{\sqrt{\Sigma_{kk}\Sigma_{k'k'}}} \;(\text{as in Eq.(2)}),$$ so regularizing the covariance matrix should have a similar effect. Q7: Model name. A7: We thank the reviewer for mentioning a previous work that uses the same model name as ours. 
We agree it would be better to use another name for our method, but we have not yet come up with a better one. For now, we plan to keep OrthoReg as our model name. Thanks for your suggestion! \end{document} \section{Introduction}\label{sec:intro} \begin{figure}[h] \centering \includegraphics[width = 0.95\linewidth]{Fig/intro.pdf} \caption{As an MLP model, our method performs even better than GNN models on \texttt{Pubmed}\xspace, but with a much faster inference speed. GRAND~\citep{grand} is one of the SOTA GNN models on this task. Circled markers denote MLP baselines, and squared markers indicate GNN baselines.} \label{fig:intro} \end{figure} Graph Machine Learning (GML) has been attracting increasing attention due to its wide applications in many real-world scenarios, like social network analysis~\citep{social_rec}, recommender systems~\citep{gcmc, danser}, chemical molecules~\citep{molclr, 3dinfomax} and biological structures. Graph Neural Networks (GNNs)~\citep{gcn, graphsage, gat, gin} are currently the dominant models for GML thanks to their powerful representation capability obtained by iteratively aggregating information from neighbors. Despite their success, this explicit utilization of graph structure information hinders GNNs from being widely applied in industry-level tasks. On the one hand, GNNs rely on layer-wise message passing to aggregate features from the neighborhood, which is computationally inefficient during inference, especially when the model becomes deep~\citep{glnn}. On the other hand, recent studies have shown that GNN models cannot perform satisfactorily in cold-start scenarios where the connections of new incoming nodes are few or unknown~\citep{coldbrew}. By contrast, Multi-Layer Perceptrons (MLPs) involve no dependence between pairs of nodes, which means they can infer much faster than GNNs~\citep{glnn}. 
Besides, they treat all nodes equally regardless of the number of connections, and thus infer more reasonably when neighborhoods are missing~\citep{coldbrew}. However, it remains challenging to inject knowledge of the graph structure into MLPs. One classical and popular approach is Graph-Regularized MLPs (GR-MLPs for short). Generally, besides the basic supervised loss (e.g., cross-entropy), GR-MLPs employ an additional regularization term on the final node embeddings or predictions based on the graph structure~\citep{lap-reg, label-propagation, p-reg, graphmlp}. Though their formulations differ, the basic idea is to smooth node embeddings/predictions over the graph structure. Even though GR-MLP models can implicitly encode the graph structure information into model parameters, there is still a considerable performance gap between them and GNNs~\citep{lap-reg, p-reg}. Recently, another line of work, GNN-to-MLP knowledge distillation methods (termed KD-MLPs)~\citep{glnn, coldbrew}, has been explored to incorporate graph structure into MLPs. In KD-MLPs, a student MLP model is trained using a supervised loss and a knowledge-distillation loss from a well-trained teacher GNN model. Empirical results demonstrate that, with merely node features as input, the performance of KD-MLPs can still match that of GNNs as long as they are appropriately learned. However, the two-step training of KD-MLPs is undesirable, and they still require a well-trained GNN model as a teacher. This motivates us to rethink why previous GR-MLPs fall short on graph-related applications and to study the reasons that limit their performance. 
\textbf{Presented work:} In this paper, we first demonstrate that node embeddings learned by existing GR-MLPs suffer from dimensional collapse~\citep{feature-decorrelation,understand-dimensional-collapse}, a phenomenon in which the embedding space of nodes is dominated by the largest few eigenvalues. Our theoretical analysis shows that the dimensional collapse in GR-MLPs is due to the irregular feature interaction caused by the graph Laplacian matrix (see Lemma~\ref{lemma:weight_matrix-shrink}). We then propose Orthogonality Regularization ({\sc Ortho-Reg}\xspace for short), a novel GR-MLP model, to mitigate the dimensional collapse issue in semi-supervised node representation learning tasks. The key design of {\sc Ortho-Reg}\xspace is to enforce an additional regularization term on the output node embeddings, making them \textbf{orthogonal} so that different embedding dimensions can learn to express different aspects of the information. Besides, {\sc Ortho-Reg}\xspace extends the traditional first-order proximity-preserving target to a more flexible one, improving the model's expressive power and its generalization to non-homophilous graphs. We provide a thorough evaluation of {\sc Ortho-Reg}\xspace on various node classification tasks. The empirical results demonstrate that {\sc Ortho-Reg}\xspace achieves competitive or even better performance than GNNs. Besides, using merely node features to make predictions, {\sc Ortho-Reg}\xspace infers much faster on large-scale graphs and makes more reasonable predictions for new nodes without connections. In Fig.~\ref{fig:intro} we present the performance of {\sc Ortho-Reg}\xspace compared with GNNs and other MLPs on \texttt{Pubmed}\xspace, where {\sc Ortho-Reg}\xspace achieves SOTA performance with the fastest inference speed. 
\textbf{We summarize our contributions as follows}: \begin{itemize} \item[\textbf{1)}] We are the first to examine the limited representation power of existing GR-MLP models from the perspective of dimensional collapse. We provide theoretical analysis and empirical studies to justify our claims. \item[\textbf{2)}] To mitigate the dimensional collapse problem, we design a novel GR-MLP model named {\sc Ortho-Reg}\xspace. {\sc Ortho-Reg}\xspace encourages the node embeddings to be orthogonal through explicit soft regularization and thus naturally avoids dimensional collapse. \item[\textbf{3)}] We conduct experiments on traditional transductive semi-supervised node classification tasks and on inductive node classification under cold-start scenarios, using public datasets of various scales. The numerical results and analysis demonstrate that, by learning orthogonal node representations, {\sc Ortho-Reg}\xspace can outperform GNN models on these tasks. \end{itemize} \vspace{-6pt} \section{Backgrounds and Related Works} \vspace{-6pt} \subsection{Problem Formulation} We mainly study a general semi-supervised node classification task on a single homogeneous graph, which has only one type of node and edge. We denote a graph by ${\mathcal{G}} = ({\mathcal{V}}, {\mathcal{E}})$, where ${\mathcal{V}}$ is the node set and ${\mathcal{E}}$ is the edge set. For a graph with $N$ nodes (i.e., $|{\mathcal{V}}| = N$), we denote the node feature matrix by ${\bm{X}} \in {\mathbb{R}}^{N \times D}$ and the adjacency matrix by ${\bm{A}} \in {\mathbb{R}}^{N \times N}$. In semi-supervised node classification tasks, only a small portion of the nodes are labeled, and the task is to infer the labels of the unlabeled nodes using the node features and the graph structure. Denoting the labeled node set by ${\mathcal{V}}^{L}$ and the unlabeled node set by ${\mathcal{V}}^{U}$, we have ${\mathcal{V}}^{L} \cap {\mathcal{V}}^{U} = \varnothing$ and ${\mathcal{V}}^{L} \cup {\mathcal{V}}^{U} = {\mathcal{V}}$. 
Denote the one-hot ground-truth labels of nodes by $\hat{{\bm{Y}}} \in \mathbb{R}^{N \times C}$ and the predicted labels by ${{\bm{Y}}}$. One can learn node embeddings ${\bm{H}}$ from the node features ${\bm{X}}$ and the adjacency matrix ${\bm{A}}$, and use the embeddings to generate the predicted labels ${{\bm{Y}}}$. For example, GNNs generate node representations by iteratively aggregating and transforming the embeddings from the neighbors, which can be generally formulated as ${\bm{H}} = f_{\theta}({\bm{X}}, {\bm{A}})$. Then a linear layer is employed on top of the node embeddings to predict the labels: $ {\bm{Y}} = g_{\theta}({\bm{H}})$. The model can be trained in an end-to-end manner by optimizing the cross-entropy loss between the predicted and ground-truth labels of the labeled nodes: $\mathcal{L}_{sup} = \bm{\ell}_{xent}({\bm{Y}}^{L}, \hat{{\bm{Y}}}^{L}) = \sum\limits_{i \in {\mathcal{V}}^L} \bm{\ell}_{xent} ({\bm{y}}_i, \hat{{\bm{y}}}_i )$. Note that GNNs explicitly utilize the graph structure information by learning a mapping from the node features and the graph adjacency matrix to the predicted labels. However, due to the limitations introduced in Sec.~\ref{sec:intro} (inefficiency at inference and poor performance on cold-start nodes), we seek to learn an MLP encoder, i.e., $\bm{H} = f_{\theta} (\bm{X})$, that takes only node features for making predictions. \vspace{-5pt} \subsection{Graph-regularized MLPs} Graph-Regularized MLPs (GR-MLPs for short) implicitly inject the graph knowledge into the MLP model with an auxiliary regularization term on the node embeddings/predictions over the graph structure~\citep{label-propagation,p-reg,graphmlp}, whose objective function can be generally formulated as: $ \mathcal{L} = \mathcal{L}_{{sup}} + \lambda \mathcal{L}_{{reg}}, \; \mbox{where} \; \mathcal{L}_{{reg}} = \bm{\ell} (\bm{H}, \bm{A}) \; \mbox{or} \; \bm{\ell} ({\bm{Y}}, \bm{A}) $. 
The most representative graph regularization method, graph Laplacian regularization~\citep{label-propagation, lap-reg}, enforces local smoothness of embeddings/predictions between connected nodes: $\bm{\ell}(\bm{Y}, \bm{A}) = \textrm{tr}[\bm{Y}^{\top}\bm{L}\bm{Y}]$, where $\bm{L} = \bm{I} - \tilde{\bm{A}} = \bm{I} - \bm{D}^{-1/2}\bm{A}\bm{D}^{-1/2}$ is the (symmetrically normalized) Laplacian matrix of the graph. Note that $\bm{Y}$ can be replaced with $\bm{H}$ if one would like to regularize node embeddings instead of predicted labels. Later works apply more advanced forms of regularization, like propagation regularization (P-Reg,~\citet{p-reg}), contrastive regularization~\citep{graphmlp}, etc. Regardless of the minor differences, they are all based on the graph homophily assumption that connected nodes should have similar representations/labels. With the graph structure information implicitly encoded into the model parameters, GR-MLPs can improve the representative power of MLP encoders. However, their performance still falls considerably short of GNN models. \begin{remark} {\rm{(Differences from confusing concepts)}}. Though they sound similar, Graph-regularized MLPs (GR-MLPs) are entirely different from Graph-augmented MLPs (GA-MLPs). Although trained with implicit graph structure regularization, GR-MLPs make predictions directly through the MLP model. By contrast, GA-MLPs, such as SGC~\citep{sgc}, APPNP~\citep{appnp}, GFNN~\citep{gfnn} and SIGN~\citep{sign}, explicitly employ the graph structure to augment the node representations generated by an MLP model. GR-MLPs also differ from a recent work named P(ropagational)MLP~\citep{pmlp}: PMLP uses message passing (or graph structure information in general) in testing instead of training, while GR-MLPs use message passing in training instead of testing. 
\end{remark} \subsection{Dimensional Collapse} Dimensional collapse (also known as spectral collapse in some works~\citep{spectral-collapse}) is a phenomenon in representation learning where the embedding space is dominated by the largest few singular values, while the other singular values decay significantly as training proceeds~\citep{gap_whiten, feature-decorrelation, understand-dimensional-collapse, gnn-express}. As the nominal embedding dimension is usually large, dimensional collapse prevents different dimensions from learning diverse information, limiting the representation power of the embeddings and their linear separability. \citet{understand-dimensional-collapse} analyzed the dimensional collapse phenomenon from a theoretical perspective and attributed it to strong data augmentation and the implicit regularization effect of neural networks~\citep{implicit-reg, gradient-descent-align}. Previous methods usually adopt a whitening operation~\citep{feature-decorrelation, w-mse} to mitigate this issue, but such explicit whitening is usually computationally inefficient and thus not applicable to GR-MLPs, where efficiency matters much more. In this paper, we demonstrate that node embeddings learned by conventional Graph-Regularized MLPs also suffer from dimensional collapse. We provide a theoretical analysis of how it is caused and develop a computationally efficient soft regularization term to mitigate it. \section{Dimensional Collapse in GR-MLPs}\label{sec:dimensional-collapse} In this section, we investigate the reasons behind the weak representation power of previous GR-MLPs. In short, we find that the expressive power of traditional GR-MLPs (e.g., with graph Laplacian regularization~\citep{lap-reg}) is restricted by the dimensional collapse issue, i.e., the embedding space is dominated by the largest few eigenvalues. 
We first provide empirical results to demonstrate the existence of the dimensional collapse phenomenon in MLPs trained with Laplacian regularization. Then we analyze its causes from a theoretical perspective by studying the dynamics of Laplacian regularization. Note that the objective function of graph Laplacian regularization for semi-supervised node classification tasks can be formulated as follows: \begin{equation}\label{eqn:lap-reg} \begin{split} \mathcal{L} & = \bm{\ell}_{xent}({\bm{Y}}^{L}, \hat{{\bm{Y}}}^{L}) + \lambda \text{tr} [{{\bm{H}}}^{\top}{{\bm{L}}}{{\bm{H}}}] \end{split} \end{equation} Following~\citet{gap_whiten}, we study the eigenvalues of the node embeddings' correlation matrix ${\bm{C}} = \{C_{kk '}\}\in {\mathbb{R}}^{D \times D}$, where $C_{kk'}$ is defined as: \begin{equation}\label{eqn-corrleation} C_{kk'} = \frac{\Sigma_{kk'}}{\sqrt{\Sigma_{kk}\Sigma_{k'k'}}}, \; \mbox{and} \; \bm{\Sigma} = \sum_{i \in {\mathcal{V}}}\frac{({\bm{h}}_i - \overline{{\bm{h}}})^{\top}({\bm{h}}_i - \overline{{\bm{h}}})}{|{\mathcal{V}}|} \end{equation} Note that ${\bm{h}}_i \in \mathbb{R}^{1\times D}$ and $\overline{{\bm{h}}} = \sum_{i=1}^{|{\mathcal{V}}|} {\bm{h}}_i / |{\mathcal{V}}|$ is the averaged node embedding vector, so $\bm{\Sigma}$ is the covariance matrix of ${\bm{H}}$. We denote ${\bm{C}}$'s eigenvalues in descending order by $\{\lambda_1^{\bm{{\bm{C}}}}, \lambda_2^{\bm{{\bm{C}}}}, \cdots, \lambda_D^{\bm{{\bm{C}}}}\}$. \begin{figure*}[t] \begin{minipage}[h]{0.65\linewidth} \vspace{0pt} \centering \includegraphics[width=1.0\textwidth,angle=0]{Fig/eigenvalue.pdf} \caption{Eigenspectra of node embeddings for different strengths of Laplacian regularization $\lambda$ (the upper three figures) and at different training epochs (the lower three figures). The x-axis is the index of the sorted eigenvalues, and the y-axis is the normalized eigenvalue (the ratio to the largest one). 
The results are averaged over 10 random initializations with $95\%$ confidence intervals. } \label{fig:eigenvalue-lapreg} \end{minipage} \begin{minipage}[h]{0.3\linewidth} \vspace{0pt} \centering \includegraphics[width=1.0\textwidth,angle=0]{Fig/nesum.pdf} \caption{Evolution of NESum as the training epoch increases, with different regularization factors.} \label{fig:eigensum} \end{minipage} \end{figure*} \paragraph{Empirical Observations.} We train a 3-layer MLP model using Eq.~\ref{eqn:lap-reg} as the objective function on the \texttt{Cora}\xspace dataset. To study the relationship between Laplacian regularization and the dimensional collapse phenomenon, we try different regularization factors $\lambda$ (i.e., $0$, $0.001$, and $0.1$). Note that $\lambda = 0$ corresponds to a pure MLP without regularization. Fig.~\ref{fig:eigenvalue-lapreg} plots the evolution of the top eigenvalues (re-normalized as the ratio of each eigenvalue to the largest one, $\lambda_i^{\bm{{\bm{C}}}} / \lambda_1^{\bm{{\bm{C}}}}$) as the training step increases, for different factors $\lambda$. We observe that without Laplacian regularization (i.e., $\lambda = 0$), the decay of the top eigenvalues is very slow and almost negligible (e.g., $\lambda^{\bm{{\bm{C}}}}_6 / \lambda^{\bm{{\bm{C}}}}_1 \ge 0.5$ even after $100$ steps). By contrast, even with a small regularization factor $\lambda = 0.001$, we observe a fast decay rate (e.g., $\lambda^{\bm{{\bm{C}}}}_5 / \lambda^{\bm{{\bm{C}}}}_1 \le 0.2 $ after $40$ steps). This phenomenon is even more remarkable with a large factor. These observations demonstrate a positive correlation between Laplacian regularization and the dimensional collapse phenomenon. We further employ the \textit{normalized eigenvalue sum} (NESum) introduced in~\citet{gap_whiten} as a metric to measure the extent of the dimensional collapse. 
Formally, NESum is defined as the ratio between the sum of all eigenvalues and the largest one: $ {\rm NESum}(\{\lambda_i^{{\bm{C}}}\}) \triangleq \sum_{i=1}^{D} {\lambda_i^{{\bm{C}}}} /{\lambda_1^{{\bm{C}}}}$. Intuitively, a large NESum value indicates that the eigenvalues are evenly distributed, while a very small one indicates the dimensional collapse phenomenon (the largest eigenvalue becomes dominant). In Fig.~\ref{fig:eigensum}, we plot the evolution of NESum with different regularization strengths. We observe that: 1) NESum decreases as training goes on, because the model learns to pay more attention to the features important for the downstream classification task. 2) NESum trained with purely the cross-entropy loss converges to a high value. 3) With additional Laplacian regularization, NESum decreases quickly and converges to a small value even if the regularization factor $\lambda$ is small. These observations demonstrate that Laplacian regularization leads to a larger decay rate of the top eigenvalues. Such a sharp decay makes the learned representations less informative for classification due to \textit{information loss}~\citep{gnn-express}. \paragraph{Theoretical Analysis.} The empirical results above have verified the inner connection between Laplacian regularization and dimensional collapse. We would like to further study how the optimization of Laplacian regularization leads to dimensional collapse. To simplify the analysis, we first consider a simple linear model as the encoder to learn node embeddings, i.e., ${\bm{H}} = {\bm{X}}{\bm{W}}$, where ${\bm{W}} \in {\mathbb{R}}^{F\times D}$ is the weight matrix (we further assume $F = D$ in this part for simplicity). The model (i.e., the weight matrix ${\bm{W}}$) is optimized using stochastic gradient descent. Then we have the following lemma on the evolution of the weight matrix's singular values.
\begin{lemma}\label{lemma:weight_matrix-shrink} {\rm{(Shrinking singular-space of weight matrix.)}} Consider the linear model above, optimized with $\mathcal{L}_{reg} = \text{tr} [{{\bm{H}}}^{\top}{{\bm{L}}}{{\bm{H}}}]$. Let ${\bm{P}} = {\bm{X}}^{\top} {\bm{L}} {\bm{X}} = \sum\limits_{ij}L_{ij}{\bm{x}}_i^{\top} {\bm{x}}_j$ and denote its eigenvalues in non-ascending order by $\{\lambda^{{\bm{P}}}_1, \lambda^{{\bm{P}}}_2, \cdots, \lambda^{{\bm{P}}}_D \}$. Denote the randomly initialized weight matrix by ${\bm{W}}(0)$ and the updated weight matrix at time $t$ by ${\bm{W}}(t)$. We further denote the non-ascending singular values of ${\bm{W}}$ at time $t$ by $\{\sigma^{{\bm{W}}}_i (t)\}_{i=1}^D$. Then the relative value of the smaller singular values to the larger ones decreases as $t$ increases. Formally, $\frac{\sigma^{{\bm{W}}}_i(t)}{\sigma^{{\bm{W}}}_j(t)} \le \frac{\sigma^{{\bm{W}}}_i(t')}{\sigma^{{\bm{W}}}_j(t')}, \; \; \forall \; t < t', i\le j. $ Furthermore, if the following condition holds: $\lambda^{{\bm{P}}}_1 \ge \cdots \ge \lambda^{{\bm{P}}}_d > \lambda^{{\bm{P}}}_{d+1} \ge \cdots \ge \lambda^{{\bm{P}}}_D $, then \begin{equation} \lim_{t \rightarrow \infty} \frac{\sigma^{{\bm{W}}}_i(t)}{\sigma^{{\bm{W}}}_j(t)} = 0, \; \forall \; i \le d \; \mbox{and} \; j \ge d+1. \end{equation} \end{lemma} See proof in Appendix~\ref{proof:weight_matrix-shrink}. Lemma~\ref{lemma:weight_matrix-shrink} indicates that the smaller singular values of ${\bm{W}}$ shrink in proportion to the larger ones as the training step increases. With Lemma~\ref{lemma:weight_matrix-shrink}, we can conclude the following theorem, which reveals a dimensional collapse phenomenon under this condition: \begin{theorem}\label{theorem:dimensional-collapse} {\rm (Laplacian regularization leads to dimensional collapse.)} For the linear model above optimized with Graph Laplacian regularization, the embedding space of nodes tends to be dominated by a few of the largest eigenvalues.
Specifically, if the covariance matrix of the input features is the identity matrix, we have: \begin{equation} \lim_{t \rightarrow \infty} \frac{\lambda^{{\bm{C}}}_i(t)}{\lambda^{{\bm{C}}}_j(t)} = 0, \; \forall \; i \le d \; \mbox{and} \; j \ge d+1. \end{equation} \end{theorem} See proof in Appendix~\ref{proof:dimensional-collapse}. Theorem~\ref{theorem:dimensional-collapse} reveals that under the effect of Graph Laplacian regularization, the eigenspectrum is dominated by its largest few eigenvalues, leading to the dimensional collapse phenomenon. In more general cases, the encoder is more complicated than a linear model (e.g., an MLP with non-linearities). In this case, we can study the asymptotic behavior (i.e., the dynamics) in feature space. Gradient descent with step size $\tau$ yields the following update rule for the node embedding matrix: \begin{equation} {\bm{H}}^{(t+1)} = [(1-2\tau){\bm{I}} + 2\tau \tilde{{\bm{A}}}]{\bm{H}}^{(t)}. \end{equation} Setting $\tau = 1/2$, we obtain the step-wise update ${\bm{H}}^{(t+1)} = \tilde{{\bm{A}}}{\bm{H}}^{(t)}$, where $\tilde{{\bm{A}}} = {\bm{D}}^{-1/2}{\bm{A}}{\bm{D}}^{-1/2}$. \citet{gnn-express} proved that such an update rule leads to a shrinking low-dimensional embedding subspace as $t \rightarrow \infty$, which restricts the expressive power due to information loss. \section{Overcoming Dimensional Collapse via Orthogonality Regularization} \subsection{Explicit Regularization on the Correlation Matrix} Our analysis in Sec.~\ref{sec:dimensional-collapse} reveals that the poor performance of GR-MLPs can be attributed to less-expressive node representations (due to dimensional collapse). Specifically, we establish that the eigenspectrum of the embeddings' correlation matrix is dominated by the largest eigenvalue (different dimensions are \textbf{over-correlated}).
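These diagnostics are straightforward to reproduce. Below is a minimal NumPy sketch (illustrative only, with our own function and variable names, on a toy random graph rather than a benchmark dataset) that computes the correlation matrix ${\bm{C}}$ and NESum defined above, and illustrates how repeatedly applying the smoothing update ${\bm{H}}^{(t+1)} = \tilde{{\bm{A}}}{\bm{H}}^{(t)}$ drives NESum toward its minimum value of $1$:

```python
import numpy as np

def correlation_matrix(H):
    """Correlation matrix C of embeddings H (|V| x D)."""
    Hc = H - H.mean(axis=0, keepdims=True)        # center each dimension
    Sigma = Hc.T @ Hc / H.shape[0]                # covariance matrix
    d = np.sqrt(np.diag(Sigma))
    return Sigma / np.outer(d, d)

def nesum(H):
    """Normalized eigenvalue sum: sum of C's eigenvalues over the largest."""
    lam = np.linalg.eigvalsh(correlation_matrix(H))[::-1]  # descending order
    return lam.sum() / lam[0]

# Toy graph: symmetric random adjacency with self-loops.
rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.2).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
deg = A.sum(axis=1)
A_tilde = A / np.sqrt(np.outer(deg, deg))         # D^{-1/2} A D^{-1/2}

H = rng.standard_normal((50, 8))
nesum_before = nesum(H)                            # large for random embeddings
for _ in range(40):                                # H <- A_tilde H
    H = A_tilde @ H
nesum_after = nesum(H)                             # collapses toward 1
```

Repeated multiplication by $\tilde{{\bm{A}}}$ is essentially power iteration: every embedding dimension converges to the dominant eigenvector of $\tilde{{\bm{A}}}$, so the dimensions become perfectly correlated and a single eigenvalue of ${\bm{C}}$ dominates.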
In contrast to dimensional collapse, whitened representations have an identity correlation matrix with equally distributed eigenvalues. Motivated by this, a natural idea is to enforce a soft regularization term on the correlation matrix of node embeddings, e.g., minimizing the distance between ${\bm{C}}$ and the identity matrix ${\bm{I}}$: \begin{equation}\label{eqn:corr_reg} \bm{\ell}_{corr\_reg} = \Vert \bm{C} - \bm{I} \Vert_F^2 = \sum\limits_{i=1}^{D} (1-{{C}}_{ii})^2 + \sum\limits_{i \neq j} {C}_{ij}^2 = \sum\limits_{i \neq j} {C}_{ij}^2 . \end{equation} Note that the on-diagonal terms ${C}_{ii} = 1$ for all $i$, so Eq.~\ref{eqn:corr_reg} essentially forces the off-diagonal terms of the correlation matrix to zero, or in other words, makes the embeddings \textbf{orthogonal}, so that different dimensions of the node embeddings capture orthogonal information. One could directly add Eq.~\ref{eqn:corr_reg} to existing GR-MLPs to alleviate the dimensional collapse issue. However, we aim for a more general, flexible, and elegant formulation that can also handle high-order connectivity and non-homophilous graphs~\citep{geom-gcn,heterophily-survey}. We now introduce {\sc Ortho-Reg}\xspace, a powerful and flexible GR-MLP model, step by step. \subsection{Graph-Regularized MLP with {\sc Ortho-Reg}\xspace}\label{sec:method-ortho-reg} As in previous GR-MLPs, we first use an MLP encoder to map the raw node features to embeddings. This process can be formulated as ${\bm{H}} = {\rm MLP}_{\theta}({\bm{X}})$, where ${\bm{X}} = \{{\bm{x}}_i\}_{i=1}^{|{\mathcal{V}}|}$ denotes the raw node features and $\bm{H} = \{ {\bm{h}}_i\}_{i=1}^{|{\mathcal{V}}|}$ is the embedding matrix. The next question is what kind of graph structure information is most beneficial. Previous GR-MLPs resort to either edge-centric smoothing~\citep{label-propagation,lap-reg} or node-centric matching~\citep{p-reg, graphmlp}.
Recent studies indicate that node-centric methods are more appropriate for node-level tasks, as edge-centric methods overemphasize the ability to recover the graph structure~\citep{p-reg}. Inspired by this, we employ a \textbf{neighborhood abstraction} operation to summarize the neighborhood information as guidance for the central node. Formally, for a node $i \in {\mathcal{V}}$ and the embeddings of its (up-to) $T$-hop neighbors $\{\bm{h}_j\}^{(1:T)}(i)$, we obtain the summary of its $T$-hop neighborhood through a pooling function ${\bm{s}}_i = {\rm Pool}( \{\bm{h}_j\}^{(1:T)}(i) )$. The exact form of the pooling function is flexible and can be adapted to graphs with different properties; for simplicity, we use a simple average of the node embeddings propagated from neighborhoods of different orders, which works well in most cases: \begin{equation}\label{eqn:neighbor-summary} {\bm{S}} = \sum_{t=1}^T \tilde{{\bm{A}}}^t{\bm{H}} / T, \; \mbox{where} \; \tilde{{\bm{A}}} = {\bm{A}} {{\bm{D}}}^{-1}. \end{equation} To make the node embeddings aware of structural information, we employ the following regularization term on the cross-correlation matrix of the node embeddings ${\bm{H}}$ and the summary embeddings ${\bm{S}}$: \begin{equation}\label{eqn:ortho-reg} {\mathcal{L}}_{reg} = -\alpha \sum\limits_{k=1}^D C_{kk} + \beta\sum\limits_{k\neq k'}C_{kk'}^2, \end{equation} where ${\bm{C}} = \{C_{kk'}\} \in {\mathbb{R}}^{D\times D}$ is the cross-correlation matrix of ${\bm{H}}$ and ${\bm{S}}$. We show in the following theorem that with Eq.~\ref{eqn:ortho-reg}, the node embeddings are locally smoothed while avoiding dimensional collapse: \begin{theorem}\label{theorem:ortho-reg} Assume $T = 1$ and that ${\bm{H}}$ are free vectors. Let ${\bm{H}}^*$ be a global optimizer of Eq.~\ref{eqn:ortho-reg}; then ${\bm{H}}^*$ is smoothed over the graph structure and is orthogonal. \end{theorem} See proof in Appendix~\ref{proof:ortho-reg}.
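To make the computation concrete, the neighborhood summary and the cross-correlation regularizer can be sketched in a few lines of NumPy. This is an illustrative re-implementation under our own naming, assuming a per-dimension standardization convention for the cross-correlation matrix; it is not our released code:

```python
import numpy as np

def neighborhood_summary(A, H, T=2):
    """Average of the 1..T-hop propagated embeddings, with A_tilde = A D^{-1}."""
    A_tilde = A / A.sum(axis=0, keepdims=True)   # column-normalize by degree
    S = np.zeros_like(H)
    P = H
    for _ in range(T):
        P = A_tilde @ P                          # one more hop of propagation
        S = S + P
    return S / T

def ortho_reg(H, S, alpha=1.0, beta=1.0):
    """-alpha * sum_k C_kk + beta * sum_{k != k'} C_kk'^2, where C is the
    cross-correlation matrix of H and S (both |V| x D)."""
    Z_h = (H - H.mean(axis=0)) / H.std(axis=0)   # standardize each dimension
    Z_s = (S - S.mean(axis=0)) / S.std(axis=0)
    C = Z_h.T @ Z_s / H.shape[0]                 # D x D cross-correlation
    on_diag = np.trace(C)
    off_diag = (C ** 2).sum() - (np.diag(C) ** 2).sum()
    return -alpha * on_diag + beta * off_diag

# Toy usage on a small random graph with self-loops.
rng = np.random.default_rng(0)
A = (rng.random((30, 30)) < 0.2).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
H = rng.standard_normal((30, 4))
S = neighborhood_summary(A, H, T=2)
loss = ortho_reg(H, S, alpha=1.0, beta=1.0)
```

Note the two roles of the terms: with $\bm{S} = \bm{H}$ and $\beta = 0$ the loss reduces to $-\alpha D$ (perfect self-correlation of every dimension), while the $\beta$ term penalizes any correlation between distinct dimensions.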
Finally, we employ an additional linear layer to make predictions ${\bm{Y}} = {\rm LIN}_{\phi}({\bm{H}})$. The final objective function to be optimized is: \begin{equation} {\mathcal{L}} = \bm{\ell}_{xent}({\bm{Y}}^L, \hat{{\bm{Y}}}^L) - \alpha \sum\limits_{k=1}^D C_{kk} + \beta\sum\limits_{k\neq k'}C_{kk'}^2, \end{equation} where $\alpha, \beta$ are trade-off hyperparameters that balance the strengths of the regularization terms. \begin{remark} With a well-trained model, we can quickly make a prediction for an upcoming node with feature $\bm{x}$ via $\bm{y} = {\rm LIN}_{\phi} ({\rm MLP}_{\theta}(\bm{x}))$, without the help of the graph structure. \end{remark} \section{Experiments}\label{sec:experiments} In this section, we conduct experiments to evaluate {\sc Ortho-Reg}\xspace by answering the following research questions: \begin{itemize} \item \textbf{RQ1}: How does {\sc Ortho-Reg}\xspace perform on common transductive node classification tasks compared with GNN models and other MLP-based models? (Sec.~\ref{exp:transductive}) \item \textbf{RQ2}: In cold-start settings where we do not know the connections of testing nodes, does {\sc Ortho-Reg}\xspace demonstrate better performance than other methods? (Sec.~\ref{exp:inductive}) \item \textbf{RQ3}: Does {\sc Ortho-Reg}\xspace mitigate the dimensional collapse issue, and is each design choice of {\sc Ortho-Reg}\xspace really necessary for its success? (Sec.~\ref{exp:abl}) \item \textbf{RQ4}: Does {\sc Ortho-Reg}\xspace demonstrate better robustness against structural perturbations compared with Graph Neural Networks? (Sec.~\ref{sec:exp-robustness}) \end{itemize} Due to space limits, we defer the experiments on heterophily graphs and the scalability comparison to Appendix~\ref{appendix-exp-hete} and Appendix~\ref{appendix-exp-scale}, respectively. A brief introduction of the baselines is given in Appendix~\ref{baselines}.
\subsection{Experiment Setups}\label{sec:exp-steups} \textbf{Datasets.} We consider $7$ benchmark graph datasets and their variants in this section: \texttt{Cora}\xspace, \texttt{Citeseer}\xspace, \texttt{Pubmed}\xspace, \texttt{Amazon-Computer}\xspace, \texttt{Amazon-Photo}\xspace, \texttt{Coauthor-CS}\xspace, and \texttt{Coauthor-Physics}\xspace, as they are representative datasets for semi-supervised node classification~\citep{gcn, graphmlp, glnn, coldbrew}. Their detailed introductions and statistics are presented in Appendix~\ref{appendix-exp-detail}. To evaluate {\sc Ortho-Reg}\xspace on large-scale graphs, we further consider two OGB datasets~\citep{ogb}: \texttt{Ogbn-Arxiv}\xspace and \texttt{Ogbn-Products}\xspace. Note that the two OGB datasets are designed for fully-supervised node classification tasks, so we defer their results to Appendix~\ref{appendix-exp-add}. \textbf{Implementations.} If not specified, we use a two-layer MLP model as the encoder to generate node embeddings; another linear layer then takes the node embeddings as input and outputs the predicted node labels. We use PyTorch to implement the model and DGL~\citep{dgl} to implement the neighborhood summarizing operation in Eq.~\ref{eqn:neighbor-summary}. If not specified, all our experiments are conducted on an NVIDIA V100 GPU with 16GB memory, using the Adam optimizer~\citep{adam}. \subsection{Transductive Semi-supervised Node Classification (RQ1)}\label{exp:transductive} \begin{table*}[t] \centering \caption{Prediction accuracy of semi-supervised node classification tasks on the seven benchmark graphs.
{\sc Ortho-Reg}\xspace outperforms powerful GNN models and competitive MLP-architectured baselines on 6 out of 7 datasets.} \label{tbl-exp-semi} \small \begin{threeparttable} { \scalebox{0.87} { \begin{tabular}{c|l|ccccccc} \toprule[0.8pt] & Methods & \texttt{Cora}\xspace & \texttt{Citeseer}\xspace & \texttt{Pubmed}\xspace & \texttt{Computer} & \texttt{Photo} & \texttt{CS} & \texttt{Physics} \\ \midrule \multirow{3}{*}{GNNs} & SGC & 81.0$\pm$0.5 & 71.9$\pm$0.5 & 78.9$\pm$0.4 & 80.6$\pm$1.9 & 90.3$\pm$0.8 & 87.9$\pm$0.7 & 90.3$\pm$1.4 \\ & GCN & 82.2$\pm$0.5 & 71.6$\pm$0.4 & 79.3$\pm$0.3 & 82.9$\pm$2.1 & 91.8$\pm$0.6 & 89.9$\pm$0.7 & 91.9$\pm$1.2 \\ & GAT & 83.0$\pm$0.7 & 72.5$\pm$0.7 & 79.0$\pm$0.3 & 82.5$\pm$1.6 & 91.4$\pm$0.8 & 90.5$\pm$0.8 & 92.3$\pm$1.5 \\ \midrule[0.5pt] KD-MLPs & GLNN & 82.6$\pm$0.5 & 72.8$\pm$0.4 & 80.2$\pm$0.6 & 82.1$\pm$1.9 & 91.3$\pm$1.0 & 92.6$\pm$1.0 & \textbf{93.3}${\bm{\pm}}$\textbf{0.5} \\ \midrule[0.5pt] \multirow{5}{*}{GR-MLPs} & MLP & 59.7$\pm$1.0 & 57.1$\pm$0.5 & 68.4$\pm$0.5 & 62.6$\pm$1.8 & 76.2$\pm$1.4 & 86.9$\pm$1.0 & 89.4$\pm$0.7 \\ & Lap-Reg & 60.3$\pm$2.5 & 58.6$\pm$2.4 & 68.7$\pm$1.4 & 62.6$\pm$2.0 & 76.4$\pm$1.1 & 87.9$\pm$0.6 & 89.5$\pm$0.5 \\ & P-Reg & 64.4$\pm$4.5 & 61.1$\pm$2.1 & 72.3$\pm$1.7 & 68.9$\pm$3.3 & 79.7$\pm$3.7 & 90.9$\pm$1.9 & 91.6$\pm$0.7 \\ & GraphMLP & 79.5$\pm$0.6 & 73.1$\pm$0.4 & 79.7$\pm$0.4 & 79.3$\pm$1.7 & 90.1$\pm$0.5 & 90.3$\pm$0.6 & 91.6$\pm$0.8 \\ & N2N & 83.2$\pm$0.4 & 73.3$\pm$0.5 & 80.9$\pm$0.4 & 81.4$\pm$1.6 & 90.9$\pm$0.7 & 91.5$\pm$0.7 & 91.8$\pm$0.7 \\ \midrule[0.5pt] Ours & {\sc Ortho-Reg}\xspace & \textbf{84.7}${\bm{\pm}}$\textbf{0.4} & \textbf{73.5}${\bm{\pm}}$\textbf{0.4} & \textbf{82.8}${\bm{\pm}}$\textbf{0.5} & \textbf{83.7}${\bm{\pm}}$\textbf{1.5} & \textbf{92.3}${\bm{\pm}}$\textbf{1.0} & \textbf{92.9}${\bm{\pm}}$\textbf{1.1} & 92.8$\pm$0.8 \\ \bottomrule[0.8pt] \end{tabular} } } \end{threeparttable} \end{table*} We first evaluate our method on transductive semi-supervised node 
classification tasks. For comparison, we consider three types of baseline models: 1) Graph Neural Networks (GNNs), including SGC~\citep{sgc}, GCN~\citep{gcn}, and GAT~\citep{gat}. 2) The representative knowledge distillation (KD-MLP) method GLNN~\citep{glnn}. 3) The basic MLP and GR-MLP models, including Laplacian regularization (Lap-Reg,~\citet{label-propagation}, \citet{lap-reg}), Propagation Regularization (P-Reg,~\citet{p-reg}), GraphMLP~\citep{graphmlp}, and Node-to-Neighborhood Mutual Information Maximization (N2N,~\citet{n2n}). For each dataset, we use $20$ nodes per class for training, $500$ nodes for validation, and another $1000$ nodes for testing. For \texttt{Cora}\xspace, \texttt{Citeseer}\xspace, and \texttt{Pubmed}\xspace we use the public split, while for the remaining datasets, we split randomly. We report the average prediction accuracy with standard deviation over 20 random trials in Table~\ref{tbl-exp-semi}. As demonstrated in the table, {\sc Ortho-Reg}\xspace outperforms previous GR-MLPs by a large margin, which validates the importance and effectiveness of orthogonal node embeddings. Compared with the competitive knowledge distillation method GLNN, {\sc Ortho-Reg}\xspace also demonstrates better performance on $6$ out of $7$ graphs. It is also worth noting that our method even outperforms powerful GNN models such as GCN and GAT, indicating that these GNN models underexploit the node features of the graphs; in contrast, our method can fully exploit their potential. \subsection{Inductive Node Classification for Cold-Start Scenarios (RQ2)}\label{exp:inductive} To evaluate the performance of {\sc Ortho-Reg}\xspace in cold-start scenarios where the connections between newly encountered nodes and existing nodes are missing, we follow the setup in ColdBrew, which selects a proportion of nodes as \textit{isolated} nodes to be removed from the original graph.
Then the model is evaluated on the isolated nodes in the testing set. Due to the space limit, we present the detailed setups and evaluation methods in Appendix~\ref{appendix-exp-inductive}. Besides the baselines used in~\citet{coldbrew}, we also include GLNN for a fair comparison. \begin{table}[t] \centering \caption{Test accuracy on the isolated nodes.} \label{tbl-exp-cold} \small { \begin{threeparttable} { \scalebox{0.9}{ \begin{tabular}{llcccccc} \toprule[0.8pt] \multicolumn{2}{c}{{Methods}} & \texttt{Cora}\xspace & \texttt{Citeseer}\xspace & \texttt{Pubmed}\xspace \\ \midrule \multirow{2}{*}{{GNNs}} & GCN & 53.02$\pm$1.78 & 47.09$\pm$1.38 & 71.50$\pm$2.21 \\ & GraphSAGE & 55.38$\pm$1.92 & 41.46$\pm$1.57 & 69.87$\pm$2.13 \\ \specialrule{0em}{1pt}{1pt} \cline{1-2} \specialrule{0em}{1pt}{1pt} \multirow{2}{*}{{KD-MLPs}} & ColdBrew & 58.75$\pm$2.11 & 53.17$\pm$1.41 & {72.31$\pm$1.99} \\ & GLNN & {59.34$\pm$1.97} & {53.64$\pm$1.51} & 73.19$\pm$2.31 \\ \specialrule{0em}{1pt}{1pt} \cline{1-2} \specialrule{0em}{1pt}{1pt} \multirow{4}{*}{{GR-MLPs}} & MLP & 52.35$\pm$1.83 & 53.26$\pm$1.41 & 65.84$\pm$2.08 \\ & GraphMLP & 59.32$\pm$1.81 & 53.17$\pm$1.48 & 72.33$\pm$2.11 \\ & {\sc Ortho-Reg}\xspace & {\textbf{61.93}${\bm{\pm}}$\textbf{1.77}} & {\textbf{56.31}${\bm{\pm}}$\textbf{1.54}} & {\textbf{73.42}${\bm{\pm}}$\textbf{1.99}} \\ \bottomrule[0.8pt] \end{tabular}}} \end{threeparttable}} \end{table} In Table~\ref{tbl-exp-cold}, we report the experimental results of {\sc Ortho-Reg}\xspace and the baseline methods on the isolated nodes. As demonstrated in the table, for isolated nodes whose connectivity in the graph is unknown, GNN models perform poorly, as they require both the node features and the graph structure for accurate inference. By contrast, MLP-based models generalize better on isolated nodes, as they make the best of the available node features. The proposed {\sc Ortho-Reg}\xspace outperforms both the GNN and MLP baselines (including KD-MLPs and GR-MLPs).
\subsection{Studies of {\sc Ortho-Reg}\xspace (RQ3)}\label{exp:abl} \subsubsection{Does {\sc Ortho-Reg}\xspace mitigate dimensional collapse?} In Sec.~\ref{sec:dimensional-collapse} we attributed the limitation of previous GR-MLPs to the dimensional collapse phenomenon, and in Sec.~\ref{sec:method-ortho-reg} we proposed {\sc Ortho-Reg}\xspace to mitigate this problem from a theoretical perspective. In this part, we empirically show that {\sc Ortho-Reg}\xspace avoids the dimensional collapse issue by preserving the eigenspectra of the node embeddings. \begin{figure}[t] \centering \includegraphics[width = 0.95\linewidth]{Fig/vis.pdf} \caption{Visualization of {\sc Ortho-Reg}\xspace's impact on node embeddings' eigenspectra on \texttt{Cora}\xspace and \texttt{Pubmed}\xspace.} \label{fig:vis} \end{figure} Consistent with the settings in Sec.~\ref{sec:dimensional-collapse}, we evaluate the embeddings learned by {\sc Ortho-Reg}\xspace at different training epochs (we take both \texttt{Cora}\xspace and \texttt{Pubmed}\xspace for illustration). The decay of the eigenvalues of the node embeddings' correlation matrix at different epochs is plotted in Fig.~\ref{fig:vis} (a) and (c). We observe that the top eigenvalues are well-preserved, thanks to the explicit regularization of the node embeddings' correlation matrix. In Fig.~\ref{fig:vis} (b) and (d), we also plot the change of the testing accuracy as well as the NESum value as the training epoch increases, from which we observe a positive relationship between the NESum value and the test accuracy: neglecting the initial oscillations, the test accuracy grows smoothly as the NESum value increases and reaches its peak when NESum peaks (\texttt{Cora}\xspace) or converges (\texttt{Pubmed}\xspace). These observations demonstrate that {\sc Ortho-Reg}\xspace does mitigate the dimensional collapse problem and leads to a more powerful model.
\subsubsection{Ablation Studies} We then conduct ablation studies to examine the effect of different components of {\sc Ortho-Reg}\xspace, with the results presented in Table~\ref{tbl-exp-abl}. We first study the impact of the two regularization terms by setting the corresponding factors ($\alpha$ and $\beta$) to $0$, respectively. When $\alpha = 0$ (i.e., only decorrelating different dimensions), we observe that the model's performance is even worse than that of the pure MLP model (see Table~\ref{tbl-exp-semi}). This indicates that adding orthogonality regularization alone is not always beneficial (e.g., for a vanilla MLP), but it is indeed beneficial for GR-MLPs. By contrast, without orthogonality regularization (i.e., $\beta = 0$), the power of structure regularization is restricted, and decorrelating different dimensions boosts performance greatly. We further investigate whether considering a larger neighborhood improves the model's performance. The empirical results demonstrate that considering a larger neighborhood improves the performance compared to only using first-order neighborhoods, but $T = 2$ is already optimal for most datasets.
\begin{table}[t] \centering \caption{Effects of different components of {\sc Ortho-Reg}\xspace} \label{tbl-exp-abl} \small { \begin{threeparttable} { \scalebox{1.0}{ \begin{tabular}{c|ccc} \toprule[0.8pt] Variants & {\texttt{Cora}\xspace} & {\texttt{Citeseer}\xspace} & {\texttt{Pubmed}\xspace} \\ \midrule[0.5pt] Baseline & 84.7 & 73.5 & 82.8 \\ \midrule[0.5pt] $\alpha = 0$ & 54.7 & 51.4 & 47.2 \\ $\beta = 0$ & 79.3 & 68.7 & 76.8 \\ \midrule[0.5pt] $T = 1$ & 83.9 & 72.9 & 82.1 \\ $T = 2$ & \textbf{84.7} & \textbf{73.5} & \textbf{82.8}\\ $T = 3$ & 84.3 & 73.3 & 82.5\\ \bottomrule[0.8pt] \end{tabular}} } \end{threeparttable} } \end{table} \subsubsection{Hyperparameter Analysis} \begin{figure}[t] \centering \includegraphics[width = 1.0\linewidth]{Fig/sense.pdf} \caption{Performance heat map when using different $\alpha$, $\beta$ combinations in Eq.~\ref{eqn:ortho-reg}, on \texttt{Cora}\xspace and \texttt{Pubmed}\xspace.} \label{fig:sense} \end{figure} We further study how the two trade-off hyperparameters $\alpha$ and $\beta$ affect the performance of {\sc Ortho-Reg}\xspace. We try different combinations of $\alpha$ and $\beta$ on \texttt{Cora}\xspace, and \texttt{Pubmed}\xspace (we defer the results on \texttt{Citeseer}\xspace to Appendix~\ref{appendix-exp-sense} due to space limit), and plot the performance heatmap in Fig.~\ref{fig:sense}. The conclusion is very interesting: the performance of {\sc Ortho-Reg}\xspace is not very sensitive to a specific value of $\alpha$ or $\beta$. In other words, for a reasonable value of $\alpha$ ($\beta$), we can easily find another value of $\beta$ ($\alpha$) that can achieve similarly high performance. The ratio between $\alpha$ and $\beta$ seems much more important. 
From Fig.~\ref{fig:sense}, we can observe that $\alpha/\beta = 2 \times 10^3$ for \texttt{Cora}\xspace and $\alpha/\beta = 1 \times 10^3$ for \texttt{Pubmed}\xspace lead to the optimal performance; changing the value of $\alpha$ while fixing $\alpha/\beta$ does not change the performance much. \subsection{Robustness Against Structural Perturbations (RQ4)}\label{sec:exp-robustness} Finally, we study the robustness of {\sc Ortho-Reg}\xspace against attacks on the graph structure, compared with GNN models. As {\sc Ortho-Reg}\xspace uses node features rather than a combination of node features and edges for prediction, we expect it to demonstrate better robustness under mild structural perturbations. To test this, we randomly mask a fraction of the edges of the graph and evaluate the performance of {\sc Ortho-Reg}\xspace and GCN under different edge-masking ratios. In Fig.~\ref{fig:exp-robustness}, we plot how the models' performance changes (with standard deviation) as the masking ratio increases, over 20 random trials. \begin{figure}[t] \centering \includegraphics[width = 0.8\linewidth]{Fig/robustness.pdf} \caption{Effect of increasing edge-masking ratios.} \label{fig:exp-robustness} \end{figure} As demonstrated in Fig.~\ref{fig:exp-robustness}, our method demonstrates better robustness against moderate-level edge perturbations. This is because we do not explicitly use the graph structure for generating predictions, making {\sc Ortho-Reg}\xspace less sensitive to perturbations of the graph structure. \section{Conclusions}\label{sec:conclusion} In this paper, we have proposed {\sc Ortho-Reg}\xspace, a novel Graph-Regularized MLP method for node representation learning. We show, both theoretically and empirically, that simple graph regularization methods can cause dimensionally collapsed node embeddings.
We show that the proposed {\sc Ortho-Reg}\xspace, which enforces orthogonality on node embeddings through their correlation matrix, naturally avoids the dimensional collapse phenomenon. We have conducted extensive experiments, including traditional transductive semi-supervised node classification tasks and inductive node classification for cold-start nodes, demonstrating the superiority of {\sc Ortho-Reg}\xspace. \bibliographystyle{icml2023} \section{General response to all reviewers} We thank all the reviewers for their thorough comments and valuable opinions and suggestions. We have uploaded the revised version, which addresses most of the concerns. Here we summarize the major changes in the revised version: 1) We emphasize the preconditions/assumptions of Lemma 1. Our Lemma 1 focuses on the Laplacian regularization loss, neglecting the impact of the supervised loss. Besides, we assume a linear model to simplify the proof. 2) We replot Fig.2, focusing on the top-$8$ eigenvalues instead of all $512$ eigenvalues, to better demonstrate the differences between the subfigures. Besides, we add another three subfigures to Fig.2, showing that at the same epoch, the drop rate of the eigenvalues increases as the trade-off hyperparameter $\lambda$ increases. 3) We add more content to the Appendix: in Appendix C.5, we conduct a sensitivity analysis of the trade-off hyperparameters $\alpha$ and $\beta$ in Eq.7; in Appendix E, we provide more analysis of why dimensional collapse is not beneficial for linear classification. 4) We fix some typos and misleading symbols that might affect the reading. Besides, we have provided individual responses for each reviewer, which address each question in detail. We hope our responses can address your concerns. \section{Responses to Reviewer Mv49} We thank the reviewer for the valuable comments.
We note that your concerns are mainly about the dimensional collapse phenomenon, i.e., whether it really exists and why collapsed representations are undesirable. Here we answer your questions one by one, and we hope this addresses your concerns. Q1: Theoretical analysis in Sec.3 A1: Our theoretical analysis here does focus on the Laplacian regularization term (we study the gradient of the regularization loss with respect to the weight matrix $\mathbf{W}$). This is reasonable, as both the supervised cross-entropy loss and the Laplacian regularization are important for shaping the final learned node representations. Even if the supervised cross-entropy loss can prevent completely collapsed representations, the gradient of the Laplacian regularization term inevitably has such an effect (making different dimensions of the node embeddings over-correlated). Besides, in the proof of Lemma 1, we did not assume the smoothing term would be optimized to zero. The proof studies the evolution of the singular values of the weight matrix, when optimized using the Laplacian regularization term, with respect to the optimization step $t$. The conclusion is that the ratio between the smaller singular values and the largest one is monotonically decreasing as the training step $t$ increases, which then leads to Lemma 1. We understand the reviewer's concern that in reality, the supervised loss function can alleviate this issue to some extent. However, we point out that already in the initially submitted version, the theoretical analysis concerned only the regularization term. Besides, we have clearly stated that the analysis has its limitations (see the last paragraph before the empirical justification in Sec.3). This is why we provide further empirical results (a non-linear model trained using a weighted sum of the supervised cross-entropy loss and Laplacian regularization) to justify the existence of the dimensional collapse phenomenon.
Q2: Experiments of Sec.3 are not clear. A2: We are sorry for causing a misunderstanding of Fig.2. The differences among the eigenspectra as the weight term increases are difficult to discern because we plotted the changes of all $512$ eigenvalues. To better show the change, we replot Fig.2, focusing only on the decay of the top-8 eigenvalues (we move the original figure to Fig.7 in Appendix C.4 as a reference). Besides, to better show the change of the eigenspectra for increasing $\lambda$, we add another three subfigures that plot the top eigenvalues for the three different $\lambda$ values at the same training epoch (epochs 0, 20, and 100). As suggested by Reviewer 4, we also plot the 95\% confidence intervals. In the updated Fig.2 (especially the lower three subfigures), we can easily observe that as $\lambda$ increases, the ratio of the top eigenvalues (w.r.t. the largest one) decreases more quickly. Specifically, at training epoch 100, the second largest eigenvalue for $\lambda = 0.001$ and $\lambda = 0.1$ becomes only $20\%$ of the largest one, indicating that the largest eigenvalue contains most of the information, while the remaining ones are much less important. The empirical results are consistent with our analysis above: the dimensional collapse phenomenon does exist with Laplacian regularization, even when considering the supervised loss and a non-linear model. Q3: Relationship between the dimensional collapse phenomenon and the poor performance of existing GR-MLPs. A3: We thank the reviewer for pointing out that the relationship between the dimensional collapse phenomenon and the poor performance of existing GR-MLPs was not clearly explained. Intuitively, when complete dimensional collapse happens, all data points are embedded onto a line and thus cannot be separated by a linear classifier. In Appendix E of the updated version, we further explain the limitation of weakly (or modestly) collapsed representations using a 2-dimensional example.
Our main proposition is that, even if the training data points of modestly collapsed representations can be successfully separated by a linear classifier, they demonstrate worse robustness against attacks and worse generalization to testing data (whose distribution might be slightly shifted from the training one), due to the narrow embedding space along the directions of small eigenvalues. Another important point is that when the dimensional collapse phenomenon exists, it can be hard to figure out whether it is severe or moderate. In this case, we would rather directly eliminate the effect of dimensional collapse (as we do in OrthoReg). Besides, enforcing orthogonality on node embeddings merely shapes the representations, without regularizing or constraining the information that the representations should carry for downstream tasks. If the ability to filter noise is due to the supervised loss plus Lap-Reg, enforcing orthogonality as we do can hardly damage this ability, as it uses the same embedding dimension and merely rebalances the information each dimension carries. Q4: Model inconsistency. A4: We do consider higher-order connectivity information in our final model, which is controlled by the hyperparameter $T$. Note that when $T = 1$, our method only considers first-order connectivity, and we have studied the performance with different $T$ in Table 3 of the original version. When $T = 1$, our method still achieves very satisfying performance (only slightly lower than the best results at $T = 2$). These results validate that the improvement in the model's performance is not due to the higher-order connectivity alone. Besides, these results also demonstrate that considering higher-order connectivity can bring improvements to the model, which was not considered in traditional GR-MLPs. This is also an important contribution of this work.
\section{Responses to Reviewer 2EfD} We thank the reviewer for the valuable comments and positive feedback. We are glad to answer your questions. Q1: Extensions to other graph types, e.g., knowledge graphs. A1: Though this paper focuses on representation learning on homogeneous graphs, the analysis and the proposed method can naturally generalize to complex graphs such as knowledge graphs. However, some obstacles may hinder the direct application of our method to such more complicated graph-structured data. 1) As we aim to use MLPs instead of GNNs for learning node embeddings, a basic assumption is that the node features are rich enough that we can implicitly infer the structure from node features with our regularization technique. As the number of relation types in knowledge graphs can be large, the processed node features in knowledge graph datasets might be insufficient to infer both the existence and the type of an edge. A more promising way would be to combine the model with Language Models, e.g., we can generate node features using Language Models, which could be finetuned together with the top MLPs. 2) Different from homogeneous graphs, where there is only one type of nodes and edges, knowledge graphs usually have multiple types of nodes and edges. Note that to perform our regularization, we have to extract the neighborhood information with Eq.(6); this brings a challenge for knowledge graphs, as we have to discriminate different types of neighborhoods according to the node and edge types. One trivial way is to use different models for different edge types (like RGCN[1]), which means we would require a regularization loss term for every edge type. This can be very inefficient. Besides, how to effectively fuse information from different edge types is also crucial and challenging. References: [1] Schlichtkrull, Michael, et al. "Modeling relational data with graph convolutional networks."
European semantic web conference. Springer, Cham, 2018. https://arxiv.org/abs/1703.06103 Q2: Explain the limitations of GNNs. A2: We are sorry for the ambiguous expression. In this sentence, we are referring to the limitations of GNNs, which are elaborated in the first paragraph of Sec.1: 1) GNNs rely on layer-wise message passing to aggregate features from the neighborhood, which is computationally inefficient during **inference**. For this limitation, you can refer to the analysis in GLNN[2] (Fig.1), which shows that the number of fetches and the inference time of GNNs are both orders of magnitude larger than those of MLPs and grow exponentially with the number of layers. 2) GNN models cannot perform satisfactorily in cold-start scenarios where the connections of newly incoming nodes are few or unknown. This can be observed by studying the classification accuracy of nodes with different degrees, as in ColdBrew[3]: the classification accuracy of high-degree nodes is much higher than that of low-degree nodes. Besides, GNNs perform poorly when predicting nodes having few connections to the existing graph, even worse than vanilla MLPs. References: [2] Zhang, Shichang, et al. "Graph-less neural networks: Teaching old mlps new tricks via distillation." In ICLR, 2022. https://arxiv.org/abs/2110.08727 [3] Zheng, Wenqing, et al. "Cold Brew: Distilling graph node representations with incomplete or missing neighborhoods." In ICLR, 2022. https://arxiv.org/abs/2111.04840 Q3: Extend the analysis to non-linear cases. A3: Our theoretical analysis in Sec.3 is restricted to linear models because, for a linear model, it is convenient to analyze the evolution of a single weight matrix. When extending to non-linear models, there is more than one weight matrix, together with the effect of activation functions, which makes the analysis much more complicated. Despite this, we can still analyze the evolution of the final embeddings intuitively.
The proof is based on the Universal Approximation Theorem[4], which shows that a shallow MLP can approximate any function. With this assumption, we can treat the node embeddings $\mathbf{H}$ as free learnable vectors. Then we can take the gradient of the Laplacian regularization loss with respect to $\mathbf{H}$: $$ \frac{\partial \mathcal{L}_{reg}}{\partial \mathbf{H}} = \frac{\partial \; {{\rm tr}(\mathbf{H}^{\top}\mathbf{L}\mathbf{H})}}{\partial \mathbf{H}} = (\mathbf{L} + \mathbf{L}^{\top})\mathbf{H} = 2\mathbf{L}\mathbf{H}. $$ Similarly, we can treat the embedding matrix as a function of the training step $t$, i.e., $\mathbf{H} = \mathbf{H}(t)$. Under gradient flow we then have $\frac{{\rm d}\mathbf{H}(t)}{{\rm d}t} = -2\mathbf{LH}$, and we can solve the equation analytically: $$ \mathbf{H}(t) = \exp(-2\mathbf{L}t) \cdot\mathbf{H}(0). $$ Since $\mathbf{L}$ is positive semi-definite, the components of $\mathbf{H}$ along eigenvectors with larger Laplacian eigenvalues decay faster, which is the collapse mechanism. The rest of the proof is similar to that of Lemma 1 in Appendix A.1. References: [4] Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. "Multilayer feedforward networks are universal approximators." Neural Networks 2.5 (1989): 359-366. Q4: What are the values of $\alpha$ and $\beta$ in Table 3, and what's the trade-off between them? A4: The $\alpha$ and $\beta$ in Table 3 are the same as in Table 1. Cora: $\alpha = 2e-3, \beta = 1e-6$; Citeseer: $\alpha = 1e-3, \beta = 1e-6$; Pubmed: $\alpha = 2e-6, \beta = 2e-6$. Although the optimal hyperparameters differ across datasets, they are close in value, and the model's performance is not very sensitive to the specific values. In the revised version, we add another section, Appendix C.5, studying the trade-off between $\alpha$ and $\beta$. The conclusion is that when $\alpha / \beta$ is about $10^3$, our method achieves satisfying performance as long as $\alpha$ is within a reasonable range, e.g., from $0.0005$ to $0.005$.
With this observation, it will not be hard to deploy our method on a new dataset. For example, we can set the initial values to $\alpha = 0.001$ and $\beta = 10^{-6}$, and then slightly tune the two hyperparameters to find the optimal combination. Q5: Study other types of graph regularization. A5: Our analysis and experiments in Sec.3 study Laplacian regularization only, but they can be extended to other graph regularization methods that enforce smoothness of representations over the graph structure without additional constraints, e.g., Propagation Regularization (P-reg [5]), which is defined as follows: $$ \mathcal{L}_{P-reg} = \Vert \mathbf{\tilde{A}H} - \mathbf{H} \Vert_F^2. $$ Similarly, we can take the gradient of $\mathcal{L}_{P-reg}$ with respect to $\mathbf{H}$: $$ \frac{\partial \mathcal{L}_{P-reg}}{\partial \mathbf{H}} = \frac{\partial \Vert \mathbf{\tilde{A}H} - \mathbf{H} \Vert_F^2}{\partial \mathbf{H}} \\ = 2(\mathbf{\tilde{A} - I})^{\top}(\mathbf{\tilde{A} - I})\mathbf{H}\\ = 2 \mathbf{L}^{\top}\mathbf{L}\mathbf{H}. $$ As a result, similar conclusions can be derived. \section{Responses to Reviewer GzRn} We thank the reviewer for the thorough review and kind suggestions. We note that the reviewer's concerns are mainly about the theoretical parts in Sec.3, e.g., that some preconditions/assumptions are not emphasized and that more description of the eigenvalues of the embedding matrix is needed. Besides, there might be a misunderstanding of the claimed efficiency benefit of our method over GNNs. Here we'd like to address these concerns: Q1: Emphasize and clarify the assumptions for Lemma 1. A1: We thank the reviewer for pointing out that some preconditions and assumptions are not clearly stated. In the revised version, we have emphasized the linear model assumption in both the abstract and the introduction of Lemma 1. However, we'd like to emphasize that a similar analysis can be extended to non-linear cases when the Universal Approximation Property of MLPs is assumed.
In Sec.3, we consider the linear model because it is much more convenient to analyze its gradient. We did not consider the effect of the supervised loss because, in Lemma 1, we mainly analyze the effect of the regularization term on the gradient of the weight matrix. We agree that the supervised loss has a non-negligible effect on the gradient as well, but what we'd like to express through Lemma 1 is that the Laplacian regularization loss does drive dimensional collapse, even if the collapse phenomenon is not very severe. In the revised version, we have emphasized that we neglect the supervised loss and focus on the effect of Laplacian regularization in Lemma 1, and we hope that in this way the analysis in Sec.3 is clear and will not cause misunderstandings anymore. Besides, we'd like to point out that we clearly stated the two limitations, 1) the linear model and 2) neglecting the effect of the supervised loss, in the last paragraph of Sec.3 before the empirical justifications. Furthermore, our empirical result in Sec.3 does not rely on these two assumptions, and it validates that the dimensional collapse phenomenon does exist. Q2: More descriptions of $\{\lambda_i^C\}_{i=1}^D$ after Lemma 1. A2: We have added more descriptions of $\{\lambda_i^C\}_{i=1}^D$ in Theorem 1 in the revised version. In Theorem 1, we conclude that dimensional collapse manifests as the vanishing of the small eigenvalues relative to the largest few ones (with eigenvalues sorted in descending order): $$ \lim_{t \rightarrow \infty} \frac{\lambda_j^{\mathbf{C}}(t)}{\lambda_i^{\mathbf{C}}(t)} = 0, \quad \forall i \le d \; {\rm and} \; j \ge d+1. $$ In this way, we can turn back to the analysis of the eigenvalues of the embeddings $\mathbf{H}$ instead of $\mathbf{W}$. Besides, we also mention $\lambda_i^{\mathbf C}$ in the empirical justification part, where Fig.2 plots the evolution of the eigenvalues $\{\lambda_i^C\}_{i=1}^D$, and Fig.3 plots the evolution of ${\rm NESum}(\mathbf{C}) \triangleq \sum_{i=1}^{d} {\lambda_i^{\mathbf{C}}} /{\lambda_1^{\mathbf{C}}}$. Q3: Differences between our method and existing methods with regard to efficiency. A3: We'd like to emphasize that the efficiency claimed in this paper concerns the **inference stage** (not training). As stated in both the Abstract and the first paragraph of Sec.1, GNNs rely on layer-wise message passing to aggregate features from the neighborhood, which is computationally inefficient during inference, especially when the model becomes deep and the graph is large. The training cost of GNNs and GR-MLPs (including ours) is similar, as all these methods utilize the graph structure information, either explicitly or implicitly. The inference time, however, varies a lot. Consider an $L$-layer GNN with minibatch inference (batch size $B$) and neighbor sampling with a fixed number of neighbors $k$: the inference complexity is $O(BLd^2k^{L})$, while that of an $L$-layer MLP is only $O(BLd^2)$. As a result, MLPs (and all GR-MLPs) run inference much faster than GNNs, especially when the graph is large and the model is deep. We also provide an empirical study in Fig.6, Appendix C.3 to show the superiority of our method over GNNs with regard to inference speed. \section{Responses to Reviewer JMnM} We thank the reviewer for the thorough comments and valuable suggestions. We have fixed the typos and would like to address your concerns by answering your questions one by one: Q1: Didn't choose the baselines like those in [B]. A1: We thank the reviewer for mentioning [B]. [B] studies a supervised/unsupervised node representation learning task where the encoder is also an MLP.
Considering that the supervised model, named N2N(JL), could also be regarded as a GR-MLP model, it is highly related to our work and should also be a baseline. Since the authors of [B] use a different experimental setting, we reproduce their method according to their code and include the results in the revised version (Table 1). It has to be mentioned that the TAPS sampling strategy is not clearly explained in the paper and is also not present in the authors' code (they simply provide the sampled positive examples for the Cora dataset), so the version we reproduce is N2N(JL) without TAPS. Regarding the baselines used in [B]: although [B] compares with many baseline methods, most of them are not representative, and they perform similarly to each other and much worse than the proposed method. The purpose of using those baselines is basically to show that the proposed N2N model can beat many GNN models under the given experimental setting. In our paper, we mainly study the performance gap between Graph-Regularized MLPs and GNNs in supervised settings. As a result, we mainly compare our method with other methods using MLPs as encoders (e.g., KD-MLPs and GR-MLPs). To show that our method can match the performance of GNNs, we select three representative methods as GNN baselines: SGC, GCN, and GAT. We did not select more advanced GNN models as they might involve complicated designs (e.g., more parameters or deeper architectures). Note that although we employ a GNN-like module to summarize neighborhood embeddings (i.e., Eq.(5)), it is shallow (i.e., $T=2$) and parameter-free (even simpler than SGC). As a result, it is fair to compare the proposed OrthoReg with the three GNN baselines. Besides, we'd like to mention that our method can even match the performance of SOTA GNN models with complicated regularizations, like GRAND[1].
E.g., on Pubmed, our method achieves 82.8\% accuracy, which is higher than the 82.7\% of GRAND (see Fig.1). References: [1] Wenzheng Feng, Jie Zhang, Yuxiao Dong, Yu Han, Huanbo Luan, Qian Xu, Qiang Yang, Evgeny Kharlamov, and Jie Tang. Graph random neural networks for semi-supervised learning on graphs. In NeurIPS, 2020. Q2: Differences between different coefficients in Fig 2. A2: We are sorry for presenting a figure where the differences between coefficients are hard to discriminate. The differences in the eigenspectra as the weight term increases are challenging to discern because we plotted all $512$ eigenvalues. To better show the change, we replot Fig.2 to focus only on the decay of the top-8 eigenvalues (the original figure is moved to Fig.7 in Appendix C.4 as a reference). Besides, to better show the change of the eigenspectrum for increasing $\lambda$, we add another three subfigures that plot the top eigenvalues of three different $\lambda$ values at the same training epoch (epochs 0, 20, and 100). As you suggested, we also plot the 95\% confidence intervals (with ten random initializations). In the updated Fig.2 (especially the lower three subfigures), we can easily observe that as $\lambda$ increases, the ratio of the top eigenvalues (w.r.t. the largest one) decreases more quickly. Q3: The relation of $\mathbf{T}$ in the appendix to the $T$ in Eq.(5). A3: The two symbols are different. $\mathbf{T}$ is a matrix defined as $\mathbf{T} = \mathbf{X}^{\top}\mathbf{L}\mathbf{X}$ in Lemma 1, whereas $T$, defined in Sec. 4.2, denotes the number of hops ($T$-hop neighbors). We are sorry for causing such confusion, and we have replaced $\mathbf{T}$ with $\mathbf{P}$ in the updated version. Q4: Last two steps of the derivation in Eq.(8).
A4: We already have: $$\frac{\partial \mathcal{L}_{reg}}{\partial \mathbf{W}} = \frac{\partial {\rm tr} ((\mathbf{XW})^{\top}\mathbf{L}(\mathbf{XW}))}{\partial \mathbf{W}} \\ = \frac{\partial {\rm tr} (\mathbf{W}^{\top}\mathbf{X}^{\top}\mathbf{L}\mathbf{XW})}{\partial \mathbf{W}} $$ Denoting $\mathbf{P}=\mathbf{X}^{\top}\mathbf{LX}$, we have: $$\frac{\partial {\rm tr} (\mathbf{W}^{\top}\mathbf{X}^{\top}\mathbf{L}\mathbf{XW})}{\partial \mathbf{W}} = \frac{\partial {\rm tr} (\mathbf{W}^{\top}\mathbf{P}\mathbf{W})}{\partial \mathbf{W}}\\={\mathbf{PW}+\mathbf{P}^{\top}\mathbf{W}} $$ As $\mathbf{P}=\mathbf{X}^{\top}\mathbf{LX} = (\mathbf{X}^{\top}\mathbf{LX})^{\top} = \mathbf{P}^{\top}$ is symmetric, we finally have: $$\mathbf{PW}+\mathbf{P}^{\top}\mathbf{W} = 2\mathbf{PW}$$ The reviewer's confusion about the previous derivation might stem from our use of the two similar symbols $\mathbf{T}$ and $T$ in the initial version. In the latest version, we have replaced $\mathbf{T}$ with $\mathbf{P}$ to avoid confusion. Q5: Model-level contribution. A5: We do have a model-level contribution besides the regularization term. Our model-level contribution is mainly the neighborhood abstraction function, i.e., Eq.(6), which introduces an edge-centric operation to capture the (multi-hop) neighborhood information of a target node. This mitigates the issue that existing GR-MLPs only consider first-order neighborhood information. Besides, our ablation study in Table 3 validates the value of higher-order information. Q6: Do we regularize the correlations or the covariance matrix? A6: In our implementation, we regularize the correlation matrix $\mathbf{C}$ instead of the covariance matrix $\mathbf{\Sigma}$. Note that $$ {C}_{kk'} = \frac{{\Sigma}_{kk'}}{\sqrt{\Sigma_{kk}\Sigma_{k'k'}}} \;(\text{as in Eq.(2)})$$ So regularizing the covariance matrix should have a similar effect. Q7: Model name. A7: We thank the reviewer for mentioning a previous work that uses the same model name as ours.
We agree it would be better to use another name for our method, but we haven't come up with a better one yet. For now, we plan to keep OrthoReg as our model name. Thanks for your suggestion! \end{document}
\section*{APPENDIX A: The derivation of Eq.~(8) from Eq.~(4)} In the following, we give the details of the derivation of Eq.~(8) from Eq.~(4). Following the same procedure as that in Eqs.~(7.21-7.23) in Ref.~\cite{th1952}, the series expansion of the cumulant CFW $\ln \chi(v)$ can be straightforwardly expressed as the integrals of the $n$-point cumulant correlation functions $G_c(s_1,\cdots,s_n)$. That is Eq.~(6) in the main text. Then, to the second order of the work parameter $\lambda(s)$, we have \begin{widetext} \begin{gather} \begin{split} \label{se6} \ln\chi(v)=&\left(\int_{C'} \mathrm d\bar{s}_1-\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_1\right)G_c(s_1)+\left(\int_{C'} \mathrm d\bar{s}_1\int_{C'} \mathrm d\bar{s}_2-\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_1\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_2\right)G_c(s_1,s_2)+O(\lambda(s)^3),\\ =&\left(\int_{C'} \mathrm d\bar{s}_1-\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_1\right) G_c(s_1)+\left(\int_{C'} \mathrm d\bar{s}_1\int_{C'} \mathrm d\bar{s}_2-\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_1\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_2\right)G^>_c(s_1-s_2)+O(\lambda(s)^3),\\ =&iv(\lambda_1-\lambda_0)\langle\hat{H}_1\rangle_c+\int_{-\infty}^{\infty}\frac{\mathrm{d}\omega}{2\pi}G^>_c(\omega)\left(\int_{C'} \mathrm d\bar{s}_1\int_{C'} \mathrm d\bar{s}_2-\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_1\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_2\right)e^{-i\omega(s_1-s_2)}+O(\lambda(s)^3), \end{split} \end{gather} \end{widetext} where $C'$ denotes the contour for work statistics (see Fig.~1b in the main text).
Since the work parameter $\lambda(s)$ is a piecewise function along the contour $C'$, we divide the interval of the integral along $C'$ into four parts: \begin{enumerate}[fullwidth,itemindent=0em,label=(\roman*)] \item Part 1, $s\in[0,t], \lambda_{C'}(s)=\lambda(s)$; \item Part 2, $s\in[t,t-v], \lambda_{C'}(s)=\lambda_1$; \item Part 3, $s\in[t-v,-v], \lambda_{C'}(s)=\lambda(s+v)$; \item Part 4, $s\in[-v,-i\beta], \lambda_{C'}(s)=\lambda_0$. \end{enumerate} Then, the double integral along $C'$ in Eq.~\eqref{se6} is equal to the sum of the double integrals over the parts $(i,j)$ ($i,j=1,2,3,4$), i.e., the double integral along $C'$ in Eq.~\eqref{se6} consists of 16 terms, each labeled by a pair $(i,j)$. Notice that, due to the contour step function $\theta_{C'}(s_1-s_2)$ in $\mathrm d\bar{s}_1$, the double integrals for $i<j$ (6 terms) vanish. According to the value of the work parameter $\lambda(s)$ in the four parts along the contour, we can further classify the 10 non-zero terms into 6 sets.
For every set, we give the expression of the sum of the double integral: \begin{enumerate}[fullwidth,itemindent=0em,label=(\roman*)] \item $(i,j)=(2,2)$: \begin{equation} \frac{\lambda_1^2(1-e^{i\omega \hbar v}+i\omega \hbar v)}{\omega^2}; \end{equation} \item $(i,j)= (4,2)$: \begin{equation} \frac{-2\lambda_0\lambda_1}{\omega^2}(1-e^{i\omega \hbar v})\cos(\omega t); \end{equation} \item $(i,j)=(4,4)$: \begin{gather} \begin{split} \frac{\lambda_0^2[e^{-\beta\hbar\omega}(1-e^{-i\omega \hbar v})-i\omega \hbar v]}{\omega^2}\\ +\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_1\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_2e^{-i\omega(s_1-s_2)}; \end{split} \end{gather} \item $(i,j)=(1,1), (3,3), (3,1)$: \begin{equation} (1-e^{i\omega \hbar v})\int_{0}^{t}\mathrm ds_1\int_{0}^{t}\mathrm ds_2\lambda(s_1)\lambda(s_2)e^{i\omega (s_1-s_2)}; \end{equation} \item $(i,j)=(2,1), (3,2)$: \begin{equation} \frac{-2\lambda_1}{\omega}(1-e^{i\omega \hbar v})\int_{0}^{t}\mathrm ds\lambda(s)\sin[\omega (t-s)]; \end{equation} \item $(i,j)= (4,1), (4,3)$: \begin{equation} \frac{-2\lambda_0}{\omega}(1-e^{i\omega \hbar v})\int_{0}^{t}\mathrm ds\lambda(s)\sin(\omega s). 
\end{equation} \end{enumerate} The double integral along $C'$ in Eq.~\eqref{se6} is equal to the sum of the above 6 expressions: \begin{widetext} \begin{gather} \begin{split} \label{se10} &\int_{C'} \mathrm d\bar{s}_1\int_{C'} \mathrm d\bar{s}_2e^{-i\omega(s_1-s_2)}-\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_1\int_{0}^{-i\hbar\beta}\mathrm d\bar{s}_2e^{-i\omega(s_1-s_2)}\\ =&\frac{1-e^{i\omega \hbar v}}{\omega^2}\left\{\lambda_1^2-2\lambda_0\lambda_1\cos(\omega t)+\lambda_0^2+\omega^2\int_{0}^{t}\mathrm ds_1\int_{0}^{t}\mathrm ds_2\lambda(s_1)\lambda(s_2)e^{i\omega (s_1-s_2)}-2\omega\lambda_1\int_{0}^{t}\mathrm ds\lambda(s)\sin[\omega (t-s)]\right.\\ &\left.-2\omega\lambda_0\int_{0}^{t}\mathrm ds\lambda(s)\sin(\omega s)\right\}+\frac{i\hbar v(\lambda_1^2-\lambda_0^2)}{\omega}+\frac{\lambda_0^2[e^{-\beta\hbar\omega}(1-e^{-i\omega \hbar v})-(1-e^{i\omega \hbar v})]}{\omega^2}\\ =&\frac{1-e^{i\omega \hbar v}}{\omega^2}\left|\lambda_1e^{i\omega t}-\lambda_0-i\omega\int_{0}^{t}\mathrm ds\lambda(s)e^{i\omega s}\right|^2+\frac{i\hbar v(\lambda_1^2-\lambda_0^2)}{\omega}+\frac{\lambda_0^2[e^{-\beta\hbar\omega}(1-e^{-i\omega \hbar v})-(1-e^{i\omega \hbar v})]}{\omega^2}\\ =&\frac{1-e^{i\omega \hbar v}}{\omega^2}\left|\int_{0}^{t}\mathrm ds \dot{\lambda}(s)e^{i\omega s}\right|^2+\frac{i\hbar v(\lambda_1^2-\lambda_0^2)}{\omega}+\frac{\lambda_0^2[e^{-\beta\hbar\omega}(1-e^{-i\omega \hbar v})-(1-e^{i\omega \hbar v})]}{\omega^2}\\ =&\frac{1-e^{i\omega \hbar v}}{\omega^2}A(\omega)+\frac{i\hbar v(\lambda_1^2-\lambda_0^2)}{\omega}+\frac{\lambda_0^2[e^{-\beta\hbar\omega}(1-e^{-i\omega \hbar v})-(1-e^{i\omega \hbar v})]}{\omega^2}. \end{split} \end{gather} \end{widetext} Substituting Eq.~(\ref{se10}) into Eq.~(\ref{se6}) and using the Kubo-Martin-Schwinger condition, we finally obtain Eq.~(8) in the main text. 
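The passage from the second to the third line of Eq.~(\ref{se10}) uses the integration by parts $\lambda_1e^{i\omega t}-\lambda_0-i\omega\int_{0}^{t}\mathrm ds\,\lambda(s)e^{i\omega s}=\int_{0}^{t}\mathrm ds\,\dot{\lambda}(s)e^{i\omega s}$, which holds for any differentiable protocol with $\lambda(0)=\lambda_0$ and $\lambda(t)=\lambda_1$. A quick numerical sanity check of this identity (the smooth protocol and the parameter values below are only illustrative):

```python
import numpy as np

# Illustrative smooth protocol with lambda(0) = lam0 and lambda(t) = lam1;
# any differentiable protocol with these endpoints satisfies the identity.
lam0, lam1, t, omega = 0.3, 1.7, 2.0, 3.5

s = np.linspace(0.0, t, 200001)
lam = lam0 + (lam1 - lam0) * np.sin(np.pi * s / (2.0 * t)) ** 2
lam_dot = (lam1 - lam0) * (np.pi / (2.0 * t)) * np.sin(np.pi * s / t)

def trap(f, x):
    """Composite trapezoid rule."""
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0

phase = np.exp(1j * omega * s)

# Boundary terms minus i*omega times the integral of lambda(s) ...
lhs = lam1 * np.exp(1j * omega * t) - lam0 - 1j * omega * trap(lam * phase, s)
# ... equal the Fourier-type integral of the protocol's time derivative.
rhs = trap(lam_dot * phase, s)

print(abs(lhs - rhs))  # vanishes up to quadrature error
```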
\section*{APPENDIX B: Exact expression of the CFW by perturbation expansion} When Wick's theorem can be applied and $G_c(s_1,s_2)$ is represented by an arrow in connected Feynman diagrams, there are no connected Feynman diagrams at third or higher order in $\lambda(s)$. Hence, Eq.~(8) in the main text is the exact expression of the CFW in this case. One example is a forced harmonic oscillator~\cite{sta2008}, where the time-dependent Hamiltonian is \begin{equation} \hat H(s)=\frac{\hat{p}^2}{2m}+\frac{1}{2}m\omega_0^2\hat{x}^2+\lambda(s)\omega_0\sqrt{2m}\hat {x}. \end{equation} The exact expression is shown in Eq.~(23). Another example is a driven quantum scalar field~\cite{wor2019}, where the time-dependent Hamiltonian in the Heisenberg picture is \begin{equation} \hat H(s)=\frac{1}{2}\int\mathrm d^3x[\hat{\pi}^2+(\nabla \hat\phi)^2+m^2\hat \phi^2+2\lambda(s)F(x)\hat \phi]. \end{equation} Here, $\lambda(s)$ and $F(x)$ are called the switching and the smearing functions, respectively. Then from Eq.~(8), the exact expression of the CFW reads \begin{widetext} \begin{equation} \label{es3} \ln \chi(v)=\int\frac{\mathrm {d}^3p|\tilde{F}(\mathbf p)|^2}{(2\pi)^32\omega_{\mathbf p}^3}\left[\frac{-4\sin^2(v\omega_{\mathbf p}/2)}{e^{\beta\omega_{\mathbf p}}-1}A(\omega_{\mathbf p})-iv\omega_{\mathbf p}(\lambda_1^2-\lambda_0^2)+(e^{iv\omega_{\mathbf p}}-1)A(\omega_{\mathbf p})\right], \end{equation} \end{widetext} where $\omega_{\mathbf p}=\sqrt{\mathbf{p}^2+m^2}$, $\tilde{F}(\mathbf p)=\int\mathrm d^3xF(x)e^{i\mathbf{p}\cdot\mathbf{x}}$, and we have set $\hbar=c=1$. We would like to emphasize that Eq.~(\ref{es3}) extends the results for a special protocol in Ref.~\cite{wor2019} to an arbitrary driving protocol.
\renewcommand{\theequation}{C.\arabic{equation}} \setcounter{equation}{0} \section*{APPENDIX C: The CFW for noninteracting identical particles: degenerate case ($\overline{N}(\mu)\approx N$, $\hbar\omega_g,\hbar\omega_z\sim N^{-1/3}$)} In this section, for simplicity, we only discuss the perturbation expansion of the cumulant CFW with the grand canonical initial state $\ln \chi_{\mu}(v)$ to the second order of $\lambda(s)$. To replace the sum in Eq.~(22) by an integral, let us first introduce two densities of states, $g_0(\varepsilon)=\sum_{\mathbf k}\delta(\varepsilon+\varepsilon_0-\varepsilon_{\mathbf k})=\varepsilon^2/[2(\hbar\omega_g)^3]$ and $g_1(\varepsilon)=\sum_{\mathbf k}k_z\delta(\varepsilon+\varepsilon_0-\varepsilon_{\mathbf k})=\varepsilon^3/[6\hbar\omega_z(\hbar\omega_g)^3]$. Notice that $g_0(\varepsilon)\ll g_1(\varepsilon)$ when $\hbar\omega_z\ll k_BT$. Then for bosons, when the temperature is higher than the critical temperature of Bose-Einstein condensation, i.e., $k_B T\geq k_BT_c\sim N^{1/3}\hbar\omega_{g}$, we have $\beta\varepsilon_0,\beta\hbar\omega_z\ll 1$. Then, according to Eq.~(22), we have \begin{widetext} \begin{gather} \begin{split} \label{es5} \ln \chi_{\mu}(v)\approx&-v^2\hbar\omega_z A(\omega_z)\int_{0}^{\infty}\mathrm d\varepsilon g_1(\varepsilon)\overline {n}^B_{\varepsilon}(\mu)[1+\overline {n}^B_{\varepsilon}(\mu)]-Niv[\lambda_1^2-\lambda_0^2-A(\omega_z)]\\ =&N\{-v^2k_BTA(\omega_z)-iv[\lambda_1^2-\lambda_0^2-A(\omega_z)]\}\\ =&N\ln\chi_1^{cl}(v), \end{split} \end{gather} \end{widetext} where $\overline {n}^B_{\varepsilon}(\mu)=1/(\alpha^{-1}e^{\beta\varepsilon}-1)$ and \begin{equation} \label{es6} N=\int_{0}^{\infty}\mathrm d\varepsilon g_0(\varepsilon)\overline {n}^B_{\varepsilon}(\mu)=\left(\frac{k_BT}{\hbar\omega_g}\right)^3\mathrm{Li}_3(\alpha), \end{equation} where $\mathrm{Li}_n(x)=\sum_{l=1}^{\infty}x^l/l^n$ is the polylogarithm function.
When $\hbar\omega_z\ll k_B T<k_BT_c$, the contribution to the sum in Eq.~(22) from the particles in the single-particle ground state cannot be ignored. We have $\mu\approx\varepsilon_0$, $\overline {n}^{B}_{\hbar\omega_z}(\mu)\approx (k_BT)/(\hbar\omega_z)$, $\beta\varepsilon_0,\beta\hbar\omega_z\ll 1$. Thus, according to Eq.~(22), we have \begin{widetext} \begin{gather} \begin{split} \label{es7} \ln \chi_{\mu}(v)\approx&-v^2\hbar\omega_z A(\omega_z)\left\{\overline {n}^{B}_{0}(\mu)\overline {n}^{B}_{\hbar\omega_z}(\mu)+\int_{0}^{\infty}\mathrm d\varepsilon g_1(\varepsilon)\overline {n}^B_{\varepsilon}(\mu)[1+\overline {n}^B_{\varepsilon}(\mu)]\right\}-Niv[\lambda_1^2-\lambda_0^2-A(\omega_z)]\\ =&N\{-v^2k_BTA(\omega_z)-iv[\lambda_1^2-\lambda_0^2-A(\omega_z)]\}\\ =&N\ln\chi_1^{cl}(v), \end{split} \end{gather} \end{widetext} where \begin{equation} \overline {n}^{B}_{0}(\mu)=\frac{\alpha}{1-\alpha}=\left[1-\left(\frac{T}{T_c}\right)^3\right]N. \end{equation} Finally, when $k_BT\lesssim\hbar\omega_z\approx0$, all particles are almost in the single-particle ground state and $\hbar\omega_z$ cannot be considered as a perturbation anymore. Almost all contributions to the sum in Eq.~(22) come from the particles in the single-particle ground state. We have $\mu\approx \varepsilon_0$, $N\approx\overline {n}^{B}_{0}(\mu)$, $\overline {n}^{B}_{\hbar\omega_z}(\mu)\approx 1/(e^{\beta\hbar\omega_z}-1)$. Thus, according to Eq.~(22), we have \begin{widetext} \begin{gather} \begin{split} \label{es8} \ln \chi_{\mu}(v)\approx& \frac{-4\sin^2(v\hbar\omega_z/2)}{\hbar\omega_z}A(\omega_z)\overline {n}^{B}_{0}(\mu)\overline {n}^{B}_{\hbar\omega_z}(\mu)+N\left[-iv(\lambda_1^2-\lambda_0^2)+\frac{e^{iv\hbar\omega_z}-1}{\hbar\omega_z}A(\omega_z)\right]\\ =&N\ln \chi_1(v). \end{split} \end{gather} \end{widetext} For fermions, when $\hbar\omega_z\ll k_B T$, we have $\beta\varepsilon_0,\beta\hbar\omega_z\ll 1$.
Thus according to Eq.~(22), we have \begin{widetext} \begin{gather} \begin{split} \label{es9} \ln \chi_{\mu}(v)\approx&-v^2\hbar\omega_z A(\omega_z)\int_{0}^{\infty}\mathrm d\varepsilon g_1(\varepsilon)\overline {n}^F_{\varepsilon}(\mu)[1-\overline {n}^F_{\varepsilon}(\mu)]-Niv[\lambda_1^2-\lambda_0^2-A(\omega_z)]\\ =&N\{-v^2k_BTA(\omega_z)-iv[\lambda_1^2-\lambda_0^2-A(\omega_z)]\}\\ =&N\ln\chi_1^{cl}(v), \end{split} \end{gather} \end{widetext} where $\overline {n}^F_{\varepsilon}(\mu)=1/(\alpha^{-1}e^{\beta\varepsilon}+1)$ and \begin{equation} N=\int_{0}^{\infty}\mathrm d\varepsilon g_0(\varepsilon)\overline {n}^F_{\varepsilon}(\mu)=-\left(\frac{k_BT}{\hbar\omega_g}\right)^3\mathrm{Li}_3(-\alpha). \end{equation} When $k_BT\lesssim\hbar\omega_z\approx0$, $\hbar\omega_z$ cannot be considered as a perturbation anymore but $\varepsilon_0$ can still be ignored due to the large $\mu$. Then according to Eq.~(22), we have \begin{widetext} \begin{gather} \begin{split} \label{es10} \ln \chi_{\mu}(v)\approx&\frac{-4\sin^2(v\hbar\omega_z/2)}{\hbar\omega_z}A(\omega_z)\int_{0}^{\infty}\mathrm d\varepsilon \left\{g_1(\varepsilon)\overline {n}^F_{\varepsilon}(\mu)[1-\overline {n}^F_{\varepsilon+\hbar\omega_z}(\mu)]-g_0(\varepsilon)\overline {n}^F_{\varepsilon}(\mu)\overline {n}^F_{\varepsilon+\hbar\omega_z}(\mu)\right\}\\ &+N\left[-iv(\lambda_1^2-\lambda_0^2)+\frac{e^{iv\hbar\omega_z}-1}{\hbar\omega_z}A(\omega_z)\right]\\ =&\frac{-4\sin^2(v\hbar\omega_z/2)}{\hbar\omega_z}A(\omega_z)\left(\frac{k_BT}{\hbar\omega_g}\right)^3\left\{\frac{e^{\beta\hbar\omega_z}[\mathrm{Li}_4(-\alpha e^{-\beta\hbar\omega_z})-\mathrm{Li}_4(-\alpha)]}{\beta\hbar\omega_z(e^{\beta\hbar\omega_z}-1)}+\frac{e^{\beta\hbar\omega_z}\mathrm{Li}_3(-\alpha e^{-\beta\hbar\omega_z})-\mathrm{Li}_3(-\alpha)}{e^{\beta\hbar\omega_z}-1}\right\}\\ &+N\left[-iv(\lambda_1^2-\lambda_0^2)+\frac{e^{iv\hbar\omega_z}-1}{\hbar\omega_z}A(\omega_z)\right]\\ \approx& N\ln \chi_1(v), \end{split} \end{gather} \end{widetext} where 
$N=\mu^3/[6(\hbar\omega_g)^3]$. Here, in the calculation, we have used the property that, for large $\alpha$, $-\mathrm{Li}_3(-\alpha)\approx(\ln\alpha)^3/3!$. From the above analysis, we find that: (1) the cumulant CFW for the degenerate case is approximately equal to that of a single particle multiplied by a factor of $N$; (2) when $k_BT\gg\hbar\omega_z$, the cumulant CFW of the single particle is replaced by its classical counterpart. We would like to emphasize that this multiplicative relation between the many-particle system and a single-particle system is due to the peculiarity of this model. For a generic model, e.g., a harmonic potential with a time-dependent frequency, the cumulant CFW of a many-particle system is not equal to that of a single particle multiplied by a factor of $N$.
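The large-$\alpha$ asymptotics $-\mathrm{Li}_3(-\alpha)\approx(\ln\alpha)^3/3!$ used above can be checked numerically through the Fermi-Dirac integral representation $-\mathrm{Li}_3(-e^{\eta})=\frac{1}{2}\int_0^{\infty}\mathrm dx\, x^2/(e^{x-\eta}+1)$, with $\eta=\ln\alpha$. A minimal sketch (the quadrature grid below is only illustrative):

```python
import numpy as np

def minus_li3_neg_exp(eta):
    """-Li_3(-e^eta) via the Fermi-Dirac integral
    (1/2) * int_0^inf x^2 / (exp(x - eta) + 1) dx,
    evaluated with a simple trapezoid rule."""
    x = np.linspace(0.0, eta + 60.0, 400001)  # the tail beyond eta + 60 is negligible
    f = x ** 2 / (np.exp(x - eta) + 1.0)
    integral = np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0  # trapezoid rule
    return 0.5 * integral

for eta in (10.0, 30.0, 50.0):               # eta = ln(alpha)
    exact = minus_li3_neg_exp(eta)
    leading = eta ** 3 / 6.0                  # (ln alpha)^3 / 3!
    print(eta, abs(exact - leading) / exact)  # relative error shrinks as eta grows
```

The subleading Sommerfeld term $\pi^2\eta/6$ accounts for essentially all of the residual difference.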
\section{Introduction} Brownian motion \cite{brown, lange} and the fluctuation-dissipation theorem \cite{kubo} stand until today as two of the most important subjects within non-equilibrium statistical mechanics. Their intersections and contributions spread over many branches of science, in particular high energy physics, as in the study of matter under extreme conditions or the quark-gluon plasma (QGP) \cite{Policastro:2001yc, CasalderreySolana:2011us}. In this case, the constituents of nuclear matter, under high temperature or density, present erratic trajectories due to their interactions with each other, behaving like a Brownian motion. In this sense, by studying the QGP one can investigate those phenomena. A very interesting approach to deal with the non-perturbative aspects of strong interactions, which appear in such processes, is based on the AdS/CFT correspondence \cite{Aharony:1999ti}, which relates a weakly coupled theory in a curved spacetime (AdS$_5$) to a strongly coupled theory in four-dimensional Minkowski spacetime. An incomplete list of references dealing with Brownian motion, dissipation, fluctuation, and related topics through the AdS/CFT correspondence can be found, for instance, in Refs. \cite{deBoer:2008gu, Son:2009vu, Atmaja:2010uu, Tong:2012nf, Edalati:2012tc, Fischler:2014tka, Giataganas:2013hwa, Giataganas:2018ekx}. In Ref. \cite{Caldeira:2020sot} the authors studied fluctuation and dissipation through an AdS/QCD model based on a deformation of the AdS-Schwarzschild spacetime. This deformation is due to the introduction of a conformal factor $\exp({k}/{r^{2}})$ in the metric of such a space. They then computed the string energy, the response function, the mean square displacement, the diffusion coefficient, and checked the fluctuation-dissipation theorem. This and related deformed AdS/QCD models have been used successfully in many holographic problems, as can be seen, for example, in Refs.
\cite{Andreev:2006vy, Rinaldi:2017wdn, Bruni:2018dqm, Afonin:2018era, FolcoCapossoli:2019imm, FolcoCapossoli:2020pks}. In this work we will use a probe string attached to a probe brane in a Lorentz invariant deformed AdS/QCD model, taking into account the backreaction of the exponential factor on the metric of the AdS-Schwarzschild space. This will allow us to extend the work done in Ref. \cite{Caldeira:2020sot} and investigate the contribution of the backreaction to the admittance, the diffusion coefficient, the mean square displacement and the fluctuation-dissipation theorem in this setup. In previous studies \cite{Son:2009vu, Atmaja:2010uu, Tong:2012nf, Edalati:2012tc, Fischler:2014tka, Giataganas:2013hwa, Giataganas:2018ekx}, the backreaction was not considered. This work is organized as follows: In Sec. \ref{emd} we present the Einstein-dilaton action and solve the corresponding field equations. From these field equations we obtain the backreacted horizon function consistent with the deformed warp factor and the dilaton potential. In Sec. \ref{bulkdesc} we describe our holographic model and study the effects of the backreaction on the fluctuation and dissipation at the string endpoint attached to the brane. In Sec. \ref{meanflu} we compute the mean square displacement, from which we obtain the ballistic and the diffusive regimes characteristic of Brownian motion. We also check the fluctuation-dissipation theorem in this setup. Finally, in Sec. \ref{conc} we present final discussions and our conclusions. 
\section{The Einstein-Dilaton action and the deformed AdS-Schwarzschild space with backreaction} \label{emd} In order to capture all features of our deformed and backreacted space, let us start with a 5-dimensional Einstein-dilaton action in the Einstein frame: \begin{equation} \label{EMD action} S = \dfrac{1}{16\pi G_5}\int d^{5}x \sqrt{-g}\left(R - \dfrac{4}{3}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi + \mathcal{V}(\phi) \right), \end{equation} where $ G_5 $ is the 5-dimensional Newton's constant, $g$ is the metric determinant, $ R $ is the Ricci scalar, $\phi$ is the dilaton field and $\mathcal{V}(\phi)$ its potential. From this action one obtains the field equations \begin{eqnarray} G_{\mu\nu} -\dfrac{4}{3}\left(\partial_{\mu}\phi\partial_{\nu}\phi - \dfrac{1}{2}g_{\mu\nu}(\partial \phi)^2\right) - \dfrac{1}{2}g_{\mu\nu}\mathcal{V}(\phi) &=& 0, \label{EinsteinEqn} \label{eom1}\qquad \\ \nabla^{2}\phi + \dfrac{3}{8}\frac{\partial\,\mathcal{V}(\phi )}{\partial\,\phi}&=& 0, \label{DilatonEqn} \label{eom2} \end{eqnarray} where $G_{\mu\nu} = R_{\mu\nu}- \dfrac{1}{2}g_{\mu\nu}R$ is the Einstein tensor. For our purposes, as done in Refs. \cite{Ballon-Bayona:2017sxa, Ballon-Bayona:2020xls}, we will consider the ansatz \begin{eqnarray} ds^2 &=& \dfrac{1}{\zeta(z)^2}\left(\dfrac{dz^2}{f(z)} - f(z)dt^2 + d\vec{x}^2\right), \label{Metriczeta} \end{eqnarray} \noindent where $z$ is the holographic coordinate, $f(z)$ is the horizon function and $\zeta(z)$ is the metric warp factor, which we choose to be \begin{equation}\label{warp} \zeta(z) = z \; e^{-\frac{1}{2} \left(k z^2\right)} \,, \end{equation} \noindent with $k$ being the deformation parameter which plays the role of the IR scale in the model. Replacing the ansatz, Eq. 
\eqref{Metriczeta}, into the Einstein-dilaton equations of motion \eqref{eom1} and \eqref{eom2} one gets: \begin{eqnarray} \frac{d}{dz}\left(\zeta(z)^{-3} \frac{d}{dz} f(z)\right) &=& 0\,, \label{freqn3} \\ \frac{\zeta''(z)}{\zeta(z)} - \frac{4}{9}\phi'(z)^2 &=& 0,\label{breqn3} \end{eqnarray} where $' \equiv d/dz$. By using Eq. \eqref{eom2} one can obtain an expression for the dilaton potential, so that: \begin{eqnarray} \label{potential} {\cal V}(\phi) = 12\zeta'(z)^2 f(z) - 3\zeta'(z)f'(z)\zeta(z) - \frac{4}{3}f(z)\zeta (z)^2\phi'(z)^2 \;\;, \end{eqnarray} and from Eq. \eqref{breqn3} we get: \begin{equation}\label{dil} \phi(z)=c_1\pm\frac{3}{4} \sqrt{k \left(k z^2-3\right)} \left(z-\frac{3 \log \left(\sqrt{k(k z^2-3)}+k z\right)}{\sqrt{k(k z^2-3)}}\right)\,, \end{equation} where $c_1$ is an integration constant. Substituting Eq. \eqref{warp} into Eq. \eqref{freqn3}, and imposing $f(0) = 1$ together with the horizon property $f(z_h) = 0$, one can solve it analytically, so that: \begin{align}\label{horfunfinal} f(z)=1&-\left(\frac{3 k z^2-2 e^{\frac{3}{2} k z^2}+2}{3 k z_{h} ^2-2 e^{\frac{3}{2} k z_{h} ^2}+2}\right) \; e^{\frac{3}{2} k (z_{h}^2- z^2)}\,. \end{align} This is the horizon function with backreaction coming from the exponential deformation in the metric, disregarding contributions from the probe string or the probe brane. These probes will be introduced in the following. One can verify that the AdS-Schwarzschild space is recovered in the limit $k \to 0$, for which $f^{\rm AdS-Sch}(z) = 1 -z^4/z_h^4$, and that the potential ${\cal V}(\phi)$ reduces to the constant ${\cal V}(\phi) = 12$ with AdS radius $L=1$. Eq. \eqref{horfunfinal} also fulfills the condition $f'(z_h)<0$. If one substitutes the warp factor, Eq. \eqref{warp}, the dilaton profile in Eq. \eqref{dil} and the horizon function, Eq. \eqref{horfunfinal}, into the expression of the potential \eqref{potential}, one can obtain the potential in terms of the $z$ coordinate. 
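As a minimal numerical sanity check (ours, not part of the derivation), one can verify that Eq. \eqref{horfunfinal} indeed satisfies $f(0)=1$ and $f(z_h)=0$ for either sign of $k$, and that it reduces to the AdS-Schwarzschild form $1-z^4/z_h^4$ as $k\to 0$:

```python
import math

def f_horizon(z, zh, k):
    """Backreacted horizon function, Eq. (horfunfinal)."""
    num = 3.0*k*z**2 - 2.0*math.exp(1.5*k*z**2) + 2.0
    den = 3.0*k*zh**2 - 2.0*math.exp(1.5*k*zh**2) + 2.0
    return 1.0 - (num/den)*math.exp(1.5*k*(zh**2 - z**2))

zh = 1.0
# boundary condition f(0) = 1 and horizon property f(zh) = 0, both signs of k
bc_ok = all(abs(f_horizon(0.0, zh, k) - 1.0) < 1e-12 and
            abs(f_horizon(zh, zh, k)) < 1e-12 for k in (-1.0, 1.0))
# k -> 0 recovers AdS-Schwarzschild: f = 1 - z^4/zh^4
z = 0.5
dev = abs(f_horizon(z, zh, 1e-5) - (1.0 - z**4/zh**4))
```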
For simplicity, we are not presenting this expression explicitly. In Figure \ref{hor_mu_zero}, we present the behavior of the horizon function in terms of the holographic coordinate $z$ for both signs of the constant $k$. \begin{figure}[ht] \centering \includegraphics[scale = 0.4]{horAdSk.pdf} \caption{The horizon function $ f(z) $, Eq. \eqref{horfunfinal}, vs the holographic coordinate $z$ for $k=0$ and $k = \pm 1$, with $z_h = 1$ in arbitrary units.} \label{hor_mu_zero} \end{figure} \section{Probe string in the bulk with backreaction effects}\label{bulkdesc} In this section we implement the description of a probe string attached to a probe brane in a thermal bath with backreaction from the exponential deformation of the metric. For convenience, we change the coordinate $z$ to $r=1/z$, so that the metric, Eq. \eqref{Metriczeta}, is rewritten as \begin{equation}\label{metrictemp} ds^2 = e^{\frac{k}{r^2}} \left[-r^{2}f(r)dt^{2}+{r^2}\left(\eta_{i j} dx^{i}dx^{j} \right) +\frac{dr^2}{r^{2}f(r)}\right]. \end{equation} Also in the $r$ coordinate, the horizon function, Eq. \eqref{horfunfinal}, reads: \begin{equation}\label{hormuzeror} f(r) = 1-\left(\frac{\frac{3 k}{r^2}-2 e^{\frac{3 k}{2 r^2}}+2 }{\frac{3 k}{r_{h}^2}-2 e^{\frac{3 k}{2 r_{h}^2}}+2} \right) e^{-\frac{3}{2} k \left(\frac{1}{r^2}-\frac{1}{r_{h}^2}\right)}\,. \end{equation} In order to describe the string we consider the Nambu-Goto action, given by $S_{NG} = - \frac{1}{2 \pi \alpha'} \int d\tau d\sigma \sqrt{-\gamma}$, where $1/(2 \pi \alpha')$ is the string tension, $\gamma = {\rm det} (\gamma_{\alpha \beta})$ and $\gamma_{\alpha \beta} = g_{mn} \partial_{\alpha}X^m \partial_{\beta}X^n $ is the induced metric on the worldsheet with $m,n = 0, 1, 2, 3, 5$. We also choose a static gauge, where $t = \tau$, $r = \sigma$ and $X= X(\tau, \sigma)$. Expanding the Nambu-Goto action, keeping only the quadratic terms $\dot{X}^2$, $X'^2$, and using the metric, Eq. 
\eqref{metrictemp}, we get: \begin{equation}\label{ngapprox} S_{NG} \approx - \frac{1}{4 \pi \alpha'} \int dt dr \left[ \;\dot{X}^2 \frac{e^{\frac{ k}{r^2}}}{f(r)}-X'^2 r^4 f(r) e^{\frac{k}{r^2}} \right]\,, \end{equation} \noindent where $\dot{X}=\partial_{t} X$ and $X'=\partial_{r} X$. Factorizing $X(t,r)$ as $X(t,r)=e^{i\omega t}h_{\omega}(r)$, the equation of motion reads: \begin{equation}\label{EquationofMotionSecondVersion} \frac{\partial }{\partial r}\left(r^{4}f(r)e^{\frac{k}{r^{2}}}h_{\omega}'(r)\right)-\frac{e^{\frac{k}{r^{2}}}\omega^{2}}{f(r)}h_{\omega}(r)=0. \end{equation} Going to the tortoise coordinate $ r_{*}=\int {dr}/\left({r^{2}f(r)}\right)$ \noindent and making a Bogoliubov transformation $ h_{\omega}(r_{*})=e^{B(r_*)}\psi(r_*)$, where $B(r)= -k/{2 r^2}-\log (r)$, we obtain a Schrödinger-like equation: \begin{equation}\label{sch} \frac{d^{2}\psi(r_{*})}{dr_{*}^{2}}+\left(\omega^{2}-V(r)\right)\psi(r_{*})=0, \end{equation} with potential \begin{eqnarray} V(r)&=&-f(r)\left(\left(-\frac{k^2}{r^2}+k-2 r^2\right) f(r)+r \left(k-r^2\right) f'(r)\right)\,. \end{eqnarray} As Eq. \eqref{sch} cannot be solved analytically, we will apply the monodromy patching procedure \cite{deBoer:2008gu, Caldeira:2020sot} and seek approximate analytical solutions. For our purposes we will consider three regions, \textbf{A}, \textbf{B}, and \textbf{C}, and explore their solutions. First, we consider the region {\bf A}, which is near the horizon ($r\sim r_{h}$). 
In this region one has $V(r)\ll\omega^{2}$, so that the Schrödinger equation \eqref{sch} reads \begin{equation}\label{schsemw} \frac{d^{2}\psi(r_{*})}{dr_{*}^{2}}+\omega^{2}\psi(r_{*})=0, \end{equation} which has the ingoing solution $ \psi(r_{*}) = A_1 e^{-i\omega r_*}.$ For low frequencies one can expand this solution as $ \psi(r_{*}) = A_1 -i A_1 \omega r_* $, allowing us to compute $h_{\omega}(r_{*})$ in this region: \begin{equation}\label{hAr*} h_{\omega}^{A}(r_*)=A_1\frac{e^{-\frac{k}{2 r^2_h}}}{r_h}\left(1- i\omega\lambda \log \left(\frac{r}{r_{h}}-1\right) \right)\,, \end{equation} \noindent where we defined the quantities \begin{equation} \lambda \equiv \frac{2\left(e^{\frac{3}{2}x}-1\right)-3x}{9x^{2}r_{h}} \,\, ; \qquad x\equiv k/r_{h}^{2}\,. \end{equation} The next region, {\bf B}, is defined by $V(r)\gg\omega^{2}$. In this region the equation of motion \eqref{EquationofMotionSecondVersion} becomes: \begin{equation} \label{reg2} \frac{d }{d r}\left(r^{4}f(r)e^{\frac{k}{r^{2}}}h_{\omega}'\right)=0\,. \end{equation} The solution in the IR part of this region can be written as \begin{eqnarray} h^{B}_{\omega (\rm IR)}(r) \approx B_{1} \frac{\lambda}{r_{h}^{2}}e^{-x}\log\left(\frac{r}{r_{h}}-1\right)+B_{2}\,, \end{eqnarray} where $B_1$ and $B_2$ are constants. Comparing with \eqref{hAr*} we get \begin{eqnarray} B_{1}=-i A_{1} r_{h} \, \omega \, e^{\frac{k}{2 r_{h} ^2}}, \qquad B_{2}=\frac{A_{1}}{r_{h}}e^{-\frac{k}{2r_{h}^{2}}}. \end{eqnarray} On the other hand, the solution in the UV part of this region can be approximated by \begin{equation} \label{HBUV} h^{B}_{\omega(\rm UV)}(r)\approx -\frac{B_{1}}{3 r^3}+B_{2}. \end{equation} The last region, {\bf C}, represents the deep UV, where the horizon function reduces to $f(r) =1$. In this case, Eq. 
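As an aside (our own consistency check, not in the original), the coefficient $\lambda$ defined above agrees with the form $\lambda = 1/(4 r_h g(x))$ used later in the matching procedure, with $g(x) = \left|9x^2/\left[4\left(2(e^{3x/2}-1)-3x\right)\right]\right|$ from the Hawking temperature, and it reduces to the pure AdS-Schwarzschild value $1/(4r_h)$ as $x \to 0$:

```python
import math

def lam(x, rh):
    """lambda = [2(e^{3x/2} - 1) - 3x] / (9 x^2 rh), with x = k/rh^2."""
    return (2.0*(math.exp(1.5*x) - 1.0) - 3.0*x) / (9.0*x**2*rh)

def g(x):
    """g(x) entering the Hawking temperature T = rh g(x)/pi."""
    return abs(9.0*x**2 / (4.0*(2.0*(math.exp(1.5*x) - 1.0) - 3.0*x)))

rh = 1.0
# lambda = 1/(4 rh g(x)) for either sign of the deformation parameter
ident_dev = max(abs(4.0*rh*g(x)*lam(x, rh) - 1.0) for x in (-1.0, 0.3, 2.0))
# x -> 0 limit: lambda -> 1/(4 rh), the pure AdS-Schwarzschild value
limit_dev = abs(lam(1e-4, rh) - 1.0/(4.0*rh))
```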
\eqref{EquationofMotionSecondVersion} has the solution: \begin{equation}\label{hwc} h^C_{\omega}(r)=C_1\, {}_1F_1\left(\frac{\omega^{2}}{4k},-\frac{1}{2},-\frac{k}{r^{2}}\right)+ C_2 \frac{(-k)^{3/2}}{r^3}{}_1F_1\left(\frac{3}{2} + \frac{\omega^{2}}{4k},-\frac{5}{2},-\frac{k}{r^{2}}\right)\,, \end{equation} where ${}_1F_1(a,b,z)$ is the confluent hypergeometric function of the first kind. Close to the boundary, keeping only terms up to $O(\omega)$, it can be expanded as \begin{eqnarray} h^{C}_{\omega}(r)\approx C_1+\frac{C_2 k^{3/2}}{r^3}. \end{eqnarray} Matching $h^{B}_{\omega(\rm UV)}(r)$ and $h^{C}_{\omega}(r)$, one can write the solution close to the boundary (the brane) as \begin{eqnarray} h^{C}_{\omega}(r)\approx \frac{A_1}{{r_{h}}} \left(e^{-\frac{k}{r_{h}^{2}}} +i \omega\frac{ r_{h}^2 } {3r^3}\right) e^{\frac{k}{2 r_{h} ^2}}\,. \end{eqnarray} From this solution one can calculate the linear response, or admittance, $\chi(\omega)$ of the string endpoint on the brane. Such a response is due to the action of an external force in an arbitrary brane direction, $x^{i}$, and can be written as $F(t) = E \, e^{-i\omega t} F(\omega)$, where $E$ is the electric field on the brane. Following Refs. \cite{ Tong:2012nf, Caldeira:2020sot} one can write the force as: \begin{eqnarray} F(\omega)=\frac{A_{1}}{2 \pi \alpha'}\left[-i \omega r_{h} e^{\frac{k}{2 r_{h} ^2}} f(r_{b})e^{\frac{ k}{r_b^2}}\right]\,. \end{eqnarray} Considering the limits in which the brane is far away from the horizon, $r_{b}\gg r_{h}$, and the probe-brane scale is much greater than the IR scale, $r_{b}\gg \sqrt{k}$, one has $f(r_{b})\to 1$ and $e^{k/r_{b}^{2}}\to 1$, so the admittance is given by: \begin{eqnarray} \label{Admittance} \chi(\omega)\equiv \frac{h^{C}_{\omega}(r_{b})}{F(\omega)} =\frac{2\pi i\alpha'}{\omega r_{h}^{2}} e^{-x}=\frac{2\pi i\alpha'}{\omega g_{ ii}(r_{h})}, \,\,\,{\rm with}\,\,\,x\equiv k/r_{h}^{2}\,. \end{eqnarray} Note that the last equality in the above equation was proposed in Ref. 
\cite{Giataganas:2018ekx}, in the context of a general polynomial metric, where $g_{ii}$ is the metric component in the $x^i$ direction. Following this reference, one can write $\chi(\omega)$ as: \begin{equation} \chi(\omega)=2\pi\alpha'\left(\frac{i}{\gamma \omega} - \frac{\Delta m}{\gamma^{2}} +\mathcal{O}(\omega)\right), \end{equation} \noindent with \begin{align} \gamma=e^{\frac{k}{r_{h}^{2}}}r_{h}^{2}\left(1+\frac{k}{r_{b}^2}+O\left(\frac{1}{r_{b}^{3 }}\right)\right), && \Delta m=\frac{e^{-\frac{3k}{2r_{h}^{2}}}r_{h}^{4}}{r_{b}^{3}}\left(1+O\left(\frac{1}{r_{b}^{2}}\right)\right), \end{align} where $\gamma$ is the friction coefficient and $\Delta m$ corresponds to the change in the bare mass $m$ of the particle in the Langevin equation \cite{lange,kubo}. The Hawking temperature associated with the black hole in our deformed AdS-Schwarzschild space is given by: \begin{eqnarray} \label{HawkingTemperature} T&=&\frac{r^{2}}{4\pi}\left|\frac{d f(r)}{d r}\right|\Bigg|_{r=r_{h}} = \frac{r_{h}}{\pi}g(x)\,,\,\,\quad {\rm where} \,\,\, g(x) \equiv \left|\left(\frac{9 x^2}{4\left(2 \left( e^{\frac{3}{2}x}-1\right )-3x \right)}\right)\right|\,. \end{eqnarray} It is worthwhile to mention that in the limit $k\to 0$ (or equivalently $x\to 0$) one recovers the AdS-Schwarzschild result, $T\to r_{h}/\pi$. By using the above definition of the Hawking temperature in the expression for the admittance, Eq. \eqref{Admittance}, one can rewrite it as: \begin{equation}\label{chi} \chi(\omega)=\frac{2 i\alpha'}{\omega \pi T^{2}}\, e^{-x} \, g(x)^{2}. \end{equation} At this point it is interesting to compare our result for the admittance with Ref. \cite{Caldeira:2020sot}, which computes this quantity for a deformed AdS-Schwarzschild space with no backreaction ($\chi_{_{NBR}}(\omega)$), and Refs. \cite{Tong:2012nf, Edalati:2012tc, Giataganas:2018ekx}, where the authors compute the admittance in a geometry which includes the pure AdS-Schwarzschild case ($\chi_{_{AdS}}(\omega)$). 
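One can also confirm numerically (our sketch, with illustrative parameter values) that the closed form $T = r_h g(x)/\pi$ follows from $T = (r^2/4\pi)\,|f'(r)|$ at $r = r_h$ with the backreacted horizon function of Eq. \eqref{hormuzeror}:

```python
import math

def f_r(r, rh, k):
    """Backreacted horizon function in the r = 1/z coordinate, Eq. (hormuzeror)."""
    num = 3.0*k/r**2 - 2.0*math.exp(1.5*k/r**2) + 2.0
    den = 3.0*k/rh**2 - 2.0*math.exp(1.5*k/rh**2) + 2.0
    return 1.0 - (num/den)*math.exp(-1.5*k*(1.0/r**2 - 1.0/rh**2))

def g(x):
    """g(x) = |9 x^2 / (4 [2(e^{3x/2} - 1) - 3x])|."""
    return abs(9.0*x**2 / (4.0*(2.0*(math.exp(1.5*x) - 1.0) - 3.0*x)))

rh, eps = 1.0, 1e-6
devs = []
for k in (-0.4, 0.4):
    # Hawking temperature from the metric: T = (r^2 / 4 pi) |f'(r)| at r = rh
    fp = (f_r(rh + eps, rh, k) - f_r(rh - eps, rh, k)) / (2.0*eps)
    T_num = rh**2/(4.0*math.pi)*abs(fp)
    T_closed = rh*g(k/rh**2)/math.pi
    devs.append(abs(T_num - T_closed)/T_closed)
max_dev = max(devs)
```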
Note that: \begin{equation}\label{comparison} \chi(\omega)=\chi_{_{AdS}}(\omega)\, e^{-x} \, g(x)^{2}=\chi_{_{NBR}}(\omega)g(x)^{2}. \end{equation} In Fig. \ref{fig:resumo} we present the behavior of $g(x)$ and compare the admittances presented in Eq. \eqref{comparison}. \begin{figure} \centering \includegraphics[scale = 0.38]{gx_function.pdf} \hfill \includegraphics[scale = 0.38]{Admittance_Comparison.pdf} \caption{{\sl Left panel:} Plot of the function $g(x)$ against $x=k/r_h^2$, which measures the shift from the AdS-Schwarzschild Hawking temperature after backreaction. {\sl Right panel:} Imaginary part of the admittance $\chi$ times $\pi T^2$ against $x$, for three situations: pure AdS-Schwarzschild, deformed AdS-Schwarzschild \cite{Caldeira:2020sot}, and deformed AdS-Schwarzschild with backreaction. Note the asymmetry between positive and negative values of $k$ (or $x$) in both panels.} \label{fig:resumo} \end{figure} Using the result for the admittance found here we can calculate the diffusion coefficient, which is given by \begin{equation} \label{DCoefficient} D = T \lim_{\omega \to 0}(- i \omega \chi(\omega))=\frac{2\alpha'}{\pi T}\, e^{-x}\, g(x)^{2} \,, \end{equation} where one can clearly see the contributions from the deformation ($e^{-x}$) of the AdS-Schwarzschild metric and from the backreaction ($g(x)^{2}$), analogously to the admittances discussed above. It is worthwhile to mention that $x$ and $g(x)$ are implicit functions of the temperature $T$ which cannot be inverted analytically, so that all the results presented are nontrivial functions of the temperature, as can be seen, for instance, in Eqs. \eqref{chi} and \eqref{DCoefficient}. \section{Mean Square Displacement and Fluctuation-Dissipation Theorem}\label{meanflu} The Schr\"odinger equation \eqref{schsemw} has as its solution a linear combination of the ingoing and outgoing modes. 
Considering the outgoing mode as $ \psi^{out}(r)=A_{2}e^{i\omega r_{*}}$ one can follow the above steps of the monodromy patching procedure and obtain for the region {\bf A} an expression given by: \begin{equation} h_{\omega}(r)=A\frac{e^{-\frac{k}{2r_{h}^{2}}}}{r_{h}}\left[\left(1 +\frac{i \omega r^{2}_{h} e^{\frac{k}{ r_{h} ^2}} }{3r^3}\right)+B\left(1 -\frac{i \omega r^{2}_{h} e^{\frac{k}{ r_{h} ^2}} }{3r^3}\right)\right]. \end{equation} Similarly, for the region {\bf C} in terms of ingoing and outgoing modes, one has: \begin{eqnarray} h^{C}_{\omega}&=&A[h^{out}_{\omega}(r)+Bh^{in}_{\omega}(r)]\nonumber\\ &=&A\left[ \,_1F_1\left(\frac{\omega ^2}{4 k };-\frac{1}{2};-\frac{k }{r^2}\right)-i\omega \frac{r^{2}_{h} }{3r^3} \,_1F_1\left(\frac{\omega ^2}{4 k }+\frac{3}{2};\frac{5}{2};-\frac{k }{r^2}\right) e^{\frac{k}{r_{h} ^2}} \right.\nonumber\\ & &\left.+ B\left( \,_1F_1\left(\frac{\omega ^2}{4 k };-\frac{1}{2};-\frac{k }{r^2}\right)+ i\omega \frac{ r^{2}_{h} }{3r^3} \,_1F_1\left(\frac{\omega ^2}{4 k }+\frac{3}{2};\frac{5}{2};-\frac{k }{r^2}\right) e^{\frac{k}{r_{h} ^2}}\right) \right]\,, \end{eqnarray} \noindent where $A$ and $B$ are constants to be determined. On the other hand, close to the horizon one can write the general solution as: \begin{equation} h_{\omega}(r)=A\frac{e^{-\frac{k}{2r^{2}}}}{r}[e^{i\omega \lambda\log\left(\frac{r}{r_{h}}-1\right)}+Be^{-i\omega \lambda\log\left(\frac{r}{r_{h}}-1\right)}]\,,\quad {\rm where}\;\; \lambda = \frac{1}{4 r_h g(x)}\,. \end{equation} \noindent Following \cite{deBoer:2008gu}, by imposing Neumann boundary conditions at the brane ($r=r_{b}$) and at the horizon where $r/r_{h}=1+\epsilon$, with $\epsilon\ll 1$, one can write \begin{equation} B=-\frac{h_{\omega}^{'out}(r)}{h_{\omega}^{'in}(r)}\Bigg|_{\frac{r}{r_{h}}=1+\epsilon} \approx e^{-2 i\omega \lambda\log\left({1}/{\epsilon}\right)}\,, \end{equation} \noindent which produces discrete frequencies, $\Delta\omega=\pi/\lambda\log\left({1}/{\epsilon}\right)$. 
In order to compute the constant $A$ one can use the normalized Klein-Gordon inner product \cite{Giataganas:2018ekx, Caldeira:2020sot} \begin{align} (X_{\omega}(r,t),X_{\omega}(r,t)) &=\frac{\omega}{2\pi \alpha'}\int_{r_{h}}^{r_{b}}dr\;\frac{e^{\frac{k}{r^{2}}}}{f(r)} |h_{\omega}(r)|^{2}=1. \label{IKGP} \end{align} This integral is dominated by the near horizon region: \begin{eqnarray*} \frac{2\omega|A|^{2}}{\pi \alpha'}\int_{r_{h}(1+\epsilon)}\frac{dr}{r^{2}f(r)}\approx \frac{2\omega\lambda |A|^{2}}{\pi \alpha'}\log\left(\frac{1}{\epsilon}\right) \end{eqnarray*} so that \begin{equation} A=\sqrt{\frac{\pi \alpha'}{2\omega\lambda \log\left({1}/{\epsilon}\right)}}\,. \end{equation} To compute the mean square displacement of the string endpoint located at the brane one has to write the thermal two-point function as a Fourier series: \begin{equation}\label{fourrier} X(t,r) = \sum_{\omega>0} \left(h^C_{\omega }(r)e^{-\text{i$\omega t $}} a_{\omega} + h^{C*}_{\omega}(r)e^{\text{i$\omega t $}} a^{\dagger}_{\omega}\right) \, , \end{equation} where the frequencies $\omega$ are discrete while $a_{\omega}$ and $a^{\dagger}_{\omega}$ are the annihilation and creation operators, respectively. Then, disregarding terms of the order $1/r_b$ or less, one gets \begin{equation} \label{TwoPoint1} \langle x(t)x(0) \rangle \equiv \langle X(t,r_{b})X(0,r_{b}) \rangle=\frac{2\alpha' e^{-\frac{k}{r_{h}^{2}}}}{r^{2}_{h}}\int_{0}^{\infty}\frac{d\omega}{\omega} \left( \frac{2\cos(\omega t)}{e^{\beta\omega}-1} + e^{-i\omega t}\right)\,, \end{equation} \noindent where we have approximated the sum by an integral considering $d\omega\sim \Delta\omega$. 
Analogously one has \begin{equation} \label{TwoPoint2} \langle x(0)x(t) \rangle=\frac{2\alpha' e^{-\frac{k}{r_{h}^{2}}}}{r^{2}_{h}}\int_{0}^{\infty}\frac{d\omega}{\omega} \left( \frac{2\cos(\omega t)}{e^{\beta\omega}-1} + e^{i\omega t}\right) = \langle x(t)x(0) \rangle^\ast \end{equation} \noindent and \begin{align} \label{eq:CorrelationPositionTime} \langle x(t)x(t) \rangle= \frac{2\alpha' e^{-\frac{k}{r_{h}^{2}}}}{r^{2}_{h}}\int_{0}^{\infty}\frac{d\omega}{\omega}\left( \frac{2}{e^{\beta\omega}-1} + 1\right) = \langle x(0)x(0) \rangle \,. \end{align} These integrals, and hence the mean square displacement, are divergent. Using the normal ordering prescription, the regularized mean square displacement can be written as: \begin{eqnarray} s^2_{\rm reg}(t) &\equiv& \langle : [x(t) - x(0)]^2 : \rangle \cr &=&\langle :x(t)x(t):\rangle + \langle :x(0)x(0):\rangle - \langle :x(t)x(0):\rangle- \langle :x(0)x(t):\rangle\,. \label{meanreg} \end{eqnarray} Then, one gets: \begin{equation}\label{disp2} s^2_{\rm reg}(t) = \frac{16\alpha' e^{-\frac{k}{r_{h}^{2}}}}{r^{2}_{h}}\int_{0}^{\infty}\frac{d\omega}{\omega}\frac{\sin^{2}\left(\frac{\omega t}{2}\right)}{e^{\beta \omega}- 1} =\frac{ 4\alpha'e^{-\frac{k}{r_{h}^{2}}} }{r^{2}_{h}} \log \left(\frac{\sinh (\frac{t \pi}{\beta})}{\frac{t \pi}{\beta}} \right). \end{equation} Considering the late-time approximation $t \gg \beta/\pi$, we get: \begin{equation}\label{latetime} s^2_{\rm reg}(t) \approx \frac{4\alpha'e^{-x}}{\pi T}g(x)^{2}\, t = 2Dt\,. \end{equation} This is identified with the diffusive regime since $s^{2}_{\rm reg}\sim 2D t$. The diffusion coefficient $D$ obtained here coincides with the one given by Eq.~\eqref{DCoefficient}, coming from the imaginary part of the admittance. The factor 2 in this equation is characteristic of a one-dimensional problem. 
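The closed form of the frequency integral in Eq. \eqref{disp2} can be confirmed numerically. The sketch below (our check, with the overall prefactor $4\alpha' e^{-x}/r_h^2$ set to one) compares a direct quadrature of $16\int_0^\infty \frac{d\omega}{\omega}\frac{\sin^2(\omega t/2)}{e^{\beta\omega}-1}$ with $4\log\!\left[\sinh(\pi t/\beta)/(\pi t/\beta)\right]$, and also illustrates the late-time linear (diffusive) growth:

```python
import math

def msd_integral(t, beta, n=400_000, wmax=80.0):
    """16 * int_0^inf dw sin^2(w t/2) / (w (e^{beta w} - 1)), midpoint rule.
    The integrand is finite at w = 0, tending to t^2/(4 beta)."""
    h = wmax / n
    s = 0.0
    for i in range(n):
        w = (i + 0.5)*h
        s += math.sin(0.5*w*t)**2 / (w*math.expm1(beta*w))
    return 16.0*h*s

t, beta = 3.0, 1.0
closed = 4.0*math.log(math.sinh(math.pi*t/beta)/(math.pi*t/beta))
numeric = msd_integral(t, beta)
rel_dev = abs(numeric - closed)/closed

# late times: s^2(t) ~ (4 pi / beta) t, i.e. linear growth in t
t_late = 200.0
slope_dev = abs(4.0*math.log(math.sinh(math.pi*t_late/beta)/(math.pi*t_late/beta))
                / (4.0*math.pi*t_late/beta) - 1.0)
```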
On the other hand, for the short-time approximation $t\ll\beta/\pi$, one finds \begin{equation} s^{2}_{\rm reg} (t) \approx \frac{2 \alpha'e^{-x} g(x)^{2}}{3}\, t^{2} \,. \end{equation} This corresponds to the ballistic regime since $s^{2}_{\rm reg}\sim t^2$. Finally, we are going to verify explicitly the fluctuation-dissipation theorem in our holographic setup. It can be stated as \cite{Grabert:1988yt}: \begin{equation}\label{theorem} G_{\rm sym}(\omega)\equiv \frac{1}{2}\left[\langle x(\omega)x(0) \rangle+\langle x(0)x(\omega) \rangle\right]=(2n_{\rm B}+1)\rm Im(\chi(\omega)), \end{equation} where $G_{\rm sym}(\omega)$ is the symmetric Green's function in Fourier space and $n_{\rm B}=(e^{\beta\omega}-1)^{-1}$ is the Bose-Einstein distribution associated with thermal noise effects. Then, one can write the corresponding symmetric {\sl time dependent} Green's function, using Eqs. \eqref{TwoPoint1} and \eqref{TwoPoint2}, as: \begin{eqnarray} G_{\rm sym}(t) &=&\frac{2\pi\alpha' e^{-\frac{k}{r_{h}^{2}}}}{r^{2}_{h}}\left(\frac{1}{2\pi}\int_{-\infty}^{\infty} d\omega \frac{2e^{-i\omega t}}{\left|\omega\right|(e^{\beta \left|\omega\right|}-1)}+\frac{1}{2\pi}\int_{-\infty}^{\infty}d\omega\frac{e^{-i\omega t}}{\left|\omega\right|}\right)\,. \end{eqnarray} \noindent So, the symmetric Green's function in Fourier space is found to be: \begin{equation}\label{Gsym} G_{\rm sym}(\omega)=\left(2n_{\rm B}(\omega)+1\right)\frac{2\pi\alpha' e^{-\frac{k}{r_{h}^{2}}}}{r^{2}_{h}\omega}\,. \end{equation} Furthermore, the imaginary part of the admittance $\chi(\omega)$, given by Eq. \eqref{Admittance}, is \begin{equation}\label{Imchi} {\rm Im}(\chi(\omega)) = \frac{2\pi\alpha' e^{-\frac{k}{r_{h}^{2}}}}{r^{2}_{h}\omega} \,, \end{equation} so that our deformed and backreacted model satisfies the well-known fluctuation-dissipation theorem stated in Eq. \eqref{theorem}. 
\section{Conclusions}\label{conc} In this work, taking into account a conformal exponential factor $\exp(k/r^2)$ and the horizon function obtained from the solutions of the Einstein-dilaton equations, we have constructed a deformed and backreacted Lorentz invariant holographic model. It is important to remark that the string and brane considered are in the probe approximation, so that we have disregarded their contribution to the backreaction on the metric. This means that the backreaction contribution considered here comes only from the exponential factor in the metric. By using our model we could investigate the fluctuation and dissipation of a string in this setup. In particular, we computed the response function (admittance), the diffusion coefficient, the relevant two-point functions and the regularized mean square displacement. From this last result we obtained the diffusive and the ballistic regimes characteristic of Brownian motion. We also verified the fluctuation-dissipation theorem within our model from the two-point functions and the imaginary part of the admittance. This analysis can be thought of as an extension of the ones described in Refs. \cite{deBoer:2008gu, Tong:2012nf, Edalati:2012tc, Giataganas:2018ekx, Caldeira:2020sot}. The backreacted horizon function, Eq. \eqref{horfunfinal}, is displayed in Fig. \ref{hor_mu_zero} for $k=\pm 1$ and $z_h=1$, where we clearly see the difference between these two choices, although they merge for low values of the holographic coordinate $z$ and also meet at $z=z_h$, satisfying the condition $f(z_h)=0$. Remember that $z=1/r$, where $r$ is the radial holographic coordinate pointing outward from the black hole, so that the interval $0 < z < z_h$ represents the region outside the horizon. The backreaction effects on the fluctuation and dissipation of the string are encoded in the function $g(x)$, defined in Eq. \eqref{HawkingTemperature}, where $x=k/r_h^2$ and $k$ is the IR scale of the model. 
This function corresponds to the deviation of the Hawking temperature, due to the deformation $\exp(k/r^2)$ and the backreaction in our model, with respect to the pure AdS-Schwarzschild case. In the left panel of Fig. \ref{fig:resumo} we show the shape of this function, where one notes the asymmetry between the two branches identified with $k<0$ and $k>0$. At $k=0$ and finite $r_h$ one has $g(x)|_{x=0}=1$: there is no deformation or backreaction, and the Hawking temperature reduces to its usual form, $T=r_h/\pi$. For the branch $k<0$ the function $g(x)$ grows exponentially as $|x|\to\infty$, so the deviation from AdS-Schwarzschild becomes larger for larger $|x|$. On the other hand, for $k>0$, $g(x)$ decreases exponentially as $x\to\infty$, vanishing for very large $x$. In particular, the backreaction effect on the imaginary part of the admittance is shown in the right panel of Fig. \ref{fig:resumo}. From this picture, we see that for $k<0$ the deviation from the pure AdS-Schwarzschild and the deformed-with-no-backreaction cases increases with increasing $|x|$. On the other hand, for $k>0$ we note that the deviation from the pure AdS-Schwarzschild case is limited, and the two deformed solutions, with or without backreaction, vanish for large $x$. Note that the admittance $\chi(\omega)$ found within this model could be compared with the ones computed from a deformed AdS space model without backreaction, $\chi_{_{NBR}}(\omega)$ \cite{Caldeira:2020sot}, and in a geometry which includes the pure AdS-Schwarzschild case, $\chi_{_{AdS}}(\omega)$ \cite{Tong:2012nf, Edalati:2012tc, Giataganas:2018ekx}, as given by Eq. \eqref{comparison}. Analogously, the diffusion coefficient $D$, Eq. \eqref{DCoefficient}, obtained from the admittance is also modified by the deformation exponential and the backreaction effects through a factor $e^{-x}g(x)^2$. 
This result was checked in the calculation of the regularized mean square displacement $s^2_{\rm reg}(t)$ from the two-point functions in the limit of late times, Eq. \eqref{latetime}. This result can be interpreted as a check of the fluctuation-dissipation theorem, Eq. \eqref{theorem}, which we verify explicitly in Eqs. \eqref{Gsym}-\eqref{Imchi}. At this point it is appropriate to mention that Ref. \cite{Giataganas:2018ekx} extended and generalized the models discussed in Refs. \cite{Tong:2012nf, Edalati:2012tc}, obtaining, from a polynomial metric, various observables related to Brownian motion as discussed in this work. Since our model is based on a conformally deformed metric, which is asymptotically AdS, both approaches can be related by a regular exponential factor $\exp{(k/r_h^2)}$. In this sense, some of our results, such as the admittance, Eq. \eqref{Admittance}, could have been inferred from Ref. \cite{Giataganas:2018ekx}. As a last comment, we would like to highlight the main result of this work: the backreaction considered here enhances the physical quantities related to the fluctuation and dissipation of the string with respect to those presented in Ref. \cite{Caldeira:2020sot}. \begin{acknowledgments} The authors would like to thank Diego M. Rodrigues and Alfonso Ballon-Bayona for discussions. N.G.C. is supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). H.B.-F. and C.A.D.Z. are partially supported by Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq) under grants No. 311079/2019-9 and No. 309982/2018-9, respectively. \end{acknowledgments}
\section{Supplemental Material for "Dirac magnons pairing via pumping"} \section{Ferromagnet on a honeycomb lattice} \begin{figure}[h] \centerline{ \includegraphics[width=0.19\textwidth,height=0.14\textheight]{honeySM.pdf} } \protect\caption{Schematics of the honeycomb lattice. Ferromagnetic order is assumed to be in the $z-$ direction. Vectors connecting the nearest-neighbor sites are ${\bm \tau}_{1}=\frac{1}{2}\left( \frac{1}{\sqrt{3}},1\right)$, ${\bm \tau}_{2} = \frac{1}{2}\left( \frac{1}{\sqrt{3}},-1\right)$, and ${\bm \tau}_{3} = \frac{1}{\sqrt{3}}(-1,0)$. Green dashed lines correspond to the sign convention of the $\nu_{ij} = \pm 1$, which enter the Dzyaloshinskii-Moriya interaction.} \label{fig:honey} \end{figure} We study spins of length $S$ on the honeycomb lattice. The spins interact via the ferromagnetic Heisenberg interaction. We assume the order to be in the $z$-direction, and wish to understand the spin waves about the order. We follow the standard procedure discussed, for example, in books on magnetism \cite{ABP1967,Auerbach,Rezende}. The Holstein-Primakoff representation of the spin operators $S^{\pm} = S^{x}\pm i S^{y}$ and $S^{z}$ reads \begin{align} S^{+} = \sqrt{2S - a^{\dag}a}a,~~~ S^{-} = a^{\dag}\sqrt{2S - a^{\dag}a},~~~ S^{z} = S - a^{\dag}a. \end{align} The exchange interaction is \begin{align} H_{\mathrm{ex}} = -J\sum_{\langle ij \rangle} \left( S^{x}_{i}S^{x}_{j} + S^{y}_{i}S^{y}_{j}+ S^{z}_{i}S^{z}_{j} \right) = -J\sum_{\langle ij \rangle} \left(\frac{1}{2} S^{+}_{i}S^{-}_{j} + \frac{1}{2} S^{-}_{i}S^{+}_{j}+ S^{z}_{i}S^{z}_{j} \right), \end{align} where $\langle .. \rangle$ stands for the nearest-neighbor interaction. We assume $S>1$ so that the $\frac{1}{S}$ expansion applies. This allows us to drop higher-order magnon-magnon interaction terms. 
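The Holstein-Primakoff representation above can be verified directly: truncated to the physical subspace $n = 0,\dots,2S$, the operators reproduce the spin algebra $[S^{+},S^{-}]=2S^{z}$ and $[S^{z},S^{+}]=S^{+}$ exactly. A small numerical sketch (ours, for an illustrative $S=2$):

```python
import numpy as np

S = 2.0                       # spin length; S > 1 as assumed in the text
dim = int(2*S) + 1            # physical subspace: n = 0 .. 2S
n = np.arange(dim)
a = np.diag(np.sqrt(n[1:].astype(float)), k=1)   # boson annihilation operator
adag = a.T.conj()

# Holstein-Primakoff representation on the truncated (physical) Fock space
root = np.diag(np.sqrt(np.maximum(2*S - n, 0.0)))
Sp = root @ a                 # S+ = sqrt(2S - a^dag a) a
Sm = adag @ root              # S- = a^dag sqrt(2S - a^dag a)
Sz = S*np.eye(dim) - np.diag(n.astype(float))

comm1 = Sp @ Sm - Sm @ Sp - 2.0*Sz   # should vanish: [S+, S-] = 2 Sz
comm2 = Sz @ Sp - Sp @ Sz - Sp       # should vanish: [Sz, S+] = S+
err = max(np.abs(comm1).max(), np.abs(comm2).max())
```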
The Hamiltonian of interacting spin waves reads \begin{align} H_{\mathrm{sw}} = & - JS \sum_{\langle ij \rangle} \left( a_{i}^{\dag}b_{j} + b_{j}^{\dag}a_{i} \right) +3JS \sum_{\langle ij \rangle} \left( a_{i}^{\dag}a_{i} + b_{j}^{\dag}b_{j} \right) \\ & + \frac{J}{4}\sum_{\langle ij \rangle} a_{i}^{\dag}a_{i}a_{i}b_{j}^{\dag} + \frac{J}{4}\sum_{\langle ij \rangle} a_{i}b_{j}^{\dag}b_{j}^{\dag}b_{j} + \frac{J}{4}\sum_{\langle ij \rangle} a_{i}^{\dag}a_{i}^{\dag}a_{i}b_{j} + \frac{J}{4}\sum_{\langle ij \rangle} a_{i}^{\dag}b_{j}^{\dag}b_{j}b_{j} -J\sum_{\langle ij \rangle}a_{i}^{\dag}a_{i} b_{j}^{\dag}b_{j}. \end{align} The Fourier transform of the Hamiltonian reads \begin{align} & H_{\mathrm{sw}} \approx - JS \int_{\bf k} \left( \gamma_{{\bf k}} a_{{\bf k}}^{\dag}b_{{\bf k}} + \gamma_{{\bf k}}^{*} b_{{\bf k}}^{\dag}a_{{\bf k}} \right) +3JS \int_{\bf k} \left( a_{{\bf k}}^{\dag}a_{{\bf k}} + b_{{\bf k}}^{\dag}b_{{\bf k}} \right) -J\int_{\{ {\bf k}\}}\delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \gamma_{{\bf k}_{4}-{\bf k}_{3}} a_{{\bf k}_{1}}^{\dag}b_{{\bf k}_{3}}^{\dag} a_{{\bf k}_{2}} b_{{\bf k}_{4}} \\ & + \frac{J}{4}\int_{\{ {\bf k}\}} \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \left[ \gamma_{{\bf k}_{3}}^{*} a_{{\bf k}_{1}}^{\dag}b_{{\bf k}_{3}}^{\dag} a_{{\bf k}_{2}}a_{{\bf k}_{4}} + \gamma_{{\bf k}_{3}} a_{{\bf k}_{2}}^{\dag}a_{{\bf k}_{4}}^{\dag} a_{{\bf k}_{1}}b_{{\bf k}_{3}} \right] + \frac{J}{4}\int_{\{ {\bf k}\}} \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \left[ \gamma_{{\bf k}_{1}} a_{{\bf k}_{1}}^{\dag}b_{{\bf k}_{3}}^{\dag} b_{{\bf k}_{2}}b_{{\bf k}_{4}} + \gamma_{{\bf k}_{1}}^{*} b_{{\bf k}_{2}}^{\dag}b_{{\bf k}_{4}}^{\dag} a_{{\bf k}_{1}}b_{{\bf k}_{3}} \right], \nonumber \end{align} where $\gamma_{\bf k} = \sum_{i=1,2,3}e^{i{\bf k}{\bm \tau}_{i}} = 2e^{i\frac{k_{x}}{2\sqrt{3}}}\cos\left( \frac{k_{y}}{2}\right)+ e^{-i\frac{k_{x}}{\sqrt{3}}}$ is the structure factor (see Fig. 
\ref{fig:honey} for definitions of the ${\bm \tau}_{i}$ vectors), $\{ {\bf k}\} \equiv {\bf k}_{1},{\bf k}_{2},{\bf k}_{3},{\bf k}_{4}$, and $\delta_{{\bf k}_{1},{\bf k}_{2}}\equiv 2\pi \delta({\bf k}_{1}-{\bf k}_{2})$ is the delta-function. Note that the first two lines of the interaction are written in a form convenient for Hermitian conjugation; the last line is Hermitian on its own. The interaction is instantaneous in time, which implies a particular frequency structure; for example, \begin{align} & -J\int_{\{ {\bf k}\}}\delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \int_{\epsilon_{1},\epsilon_{2},\epsilon_{3},\epsilon_{4}} a_{\epsilon_{1};{\bf k}_{1}}^{\dag}b_{\epsilon_{3};{\bf k}_{3}}^{\dag} a_{\epsilon_{2};{\bf k}_{2}} b_{\epsilon_{4};{\bf k}_{4}} \delta_{\epsilon_{1}-\epsilon_{2},\epsilon_{4}-\epsilon_{3}} \\ = & -J\int_{\{ {\bf k}\}}\delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \int_{\epsilon_{1},\epsilon_{3},\omega} a_{\epsilon_{1};{\bf k}_{1}}^{\dag} b_{\epsilon_{3};{\bf k}_{3}}^{\dag} a_{\epsilon_{1}-\omega;{\bf k}_{2}} b_{\epsilon_{3}+\omega;{\bf k}_{4}}.
\end{align} In the sublattice space of the unit cell, where the boson operators are collected into $\Psi_{\bf k}^\dag = ( a_{\bf k}^{\dag}, ~b_{\bf k}^{\dag})$, the Hamiltonian of linear spin waves reads \begin{align} \hat{H} = JS \left[ \begin{array}{cc} 3 & - \gamma_{\bf k} \\ - \gamma_{\bf k}^{*} & 3 \end{array} \right]. \end{align} Diagonalization immediately gives the energy spectrum \begin{align} \epsilon_{\pm {\bf k}} =JS\left( 3 \pm \vert \gamma_{\bf k}\vert \right) \end{align} with corresponding wave functions \begin{align} \varphi_{+} = \frac{1}{\sqrt{2}} \left[ \begin{array}{c} -\frac{\gamma_{\bf k}}{\vert \gamma_{\bf k} \vert} \\ 1 \end{array} \right],~~~ \varphi_{-} = \frac{1}{\sqrt{2}} \left[ \begin{array}{c} \frac{\gamma_{\bf k}}{\vert \gamma_{\bf k} \vert} \\ 1 \end{array} \right]. \end{align} The Green function is \begin{align} G_{\alpha\beta}^{\mathrm{R}/\mathrm{A}} (\epsilon,{\bf k}) = \frac{\varphi_{+,{\bf k}}\varphi^{\dag}_{+,{\bf k}}}{\epsilon - \epsilon_{+,{\bf k}} \pm i0} + \frac{\varphi_{-,{\bf k}}\varphi^{\dag}_{-,{\bf k}}}{\epsilon - \epsilon_{-,{\bf k}} \pm i0}, \end{align} where $\alpha$ and $\beta$ are pseudospin indices. The Green function can be presented in a more convenient way, \begin{align} G_{\alpha\beta}^{\mathrm{R}/\mathrm{A}} (\epsilon,{\bf k}) = & \frac{1}{2}\left( \frac{1}{\epsilon - \epsilon_{+,{\bf k}} \pm i0} + \frac{1}{\epsilon - \epsilon_{-,{\bf k}} \pm i0} \right) \hat{1} - \frac{1}{2}\left( \frac{1}{\epsilon - \epsilon_{+,{\bf k}} \pm i0} - \frac{1}{\epsilon - \epsilon_{-,{\bf k}} \pm i0} \right) \left[ \begin{array}{cc} 0 & \frac{\gamma_{\bf k}}{\vert \gamma_{\bf k}\vert } \\ \frac{\gamma_{\bf k}^{*}}{\vert \gamma_{\bf k}\vert } & 0 \end{array}\right].
\end{align} The pumping term is \begin{align} H_{\mathrm{pump}} & = \Gamma \sum_{i} \left[ S^{x}_{i}\cos(\Omega t) + S^{y}_{i}\sin(\Omega t) \right] = \frac{\Gamma}{2} \sum_{i} \left[ S^{+}_{i}e^{-i\Omega t} + S^{-}_{i}e^{i\Omega t} \right] \\ & \approx \sqrt{2S}\frac{\Gamma}{2} \sum_{i} \left[ a_{i}e^{-i\Omega t} + a^{\dag}_{i}e^{i\Omega t} \right] + \sqrt{2S}\frac{\Gamma}{2} \sum_{i} \left[ b_{i}e^{-i\Omega t} + b^{\dag}_{i}e^{i\Omega t} \right]. \end{align} For the sake of discussion, we also consider the Dzyaloshinskii-Moriya interaction (DMI) \begin{align}\label{DMI_Hamiltonian} H_{\mathrm{DMI}} = D \sum_{\langle\langle ij \rangle\rangle}\nu_{ij}[{\bf S}_{i}\times{\bf S}_{j}]_{z}, \end{align} where $\langle\langle ij \rangle\rangle$ stands for the next-nearest neighbor interaction, and $\nu_{ij}=\pm 1$ depends on the direction of the bond, with the signs defined by the green dashed arrows in Fig.~\ref{fig:honey}. In the Holstein-Primakoff representation, the DMI becomes \begin{align} H_{\mathrm{DMI}} = SD\int_{\bf k} \xi_{\bf k} \left(a^{\dag}_{\bf k}a_{\bf k} - b^{\dag}_{\bf k}b_{\bf k} \right), \end{align} where $\xi_{\bf k} =2\left[ \sin(k_{y}) - 2\sin\left( \frac{k_{y}}{2}\right)\cos\left(\frac{\sqrt{3}k_{x}}{2} \right)\right]$. \section{Keldysh formalism} We stress that, in hindsight, the Keldysh technique is certainly not the only choice for the problem at hand; the Matsubara frequency technique should work equally well. However, as the system under study is pumped and formally out of equilibrium, we decided to be on the safe side and follow the non-equilibrium field-theory approach, the Keldysh technique. Here we briefly outline the steps of the Keldysh technique which we utilized in the analysis of the system. For a detailed review of the Keldysh formalism see the book \cite{Kamenev}, which we follow below.
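Before setting up the formalism, a quick numerical cross-check of the structure factors defined above (a sketch, not part of the text; the next-nearest-neighbor vectors ${\bm \delta}_{i} = {\bm \tau}_{i} - {\bm \tau}_{j}$ are our reading of the lattice): the closed forms of $\gamma_{\bf k}$ and $\xi_{\bf k}$ agree with the direct lattice sums, $\vert\gamma_{{\bf k}=0}\vert = 3$, and $\gamma_{\bf k}$ vanishes at the Dirac point ${\bf K} = (0, 4\pi/3)$.

```python
import cmath
import math

SQ3 = math.sqrt(3)
# nearest-neighbor vectors from Fig. 1; the next-nearest-neighbor vectors
# delta_i = tau_i - tau_j below are an assumption read off from the lattice
tau = [(1 / (2 * SQ3), 0.5), (1 / (2 * SQ3), -0.5), (-1 / SQ3, 0.0)]
delta = [(0.0, 1.0), (SQ3 / 2, -0.5), (-SQ3 / 2, -0.5)]

def gamma(kx, ky):
    """Closed form of gamma_k quoted in the text."""
    return 2 * cmath.exp(1j * kx / (2 * SQ3)) * math.cos(ky / 2) + cmath.exp(-1j * kx / SQ3)

def gamma_sum(kx, ky):
    """Direct lattice sum sum_i exp(i k . tau_i)."""
    return sum(cmath.exp(1j * (kx * x + ky * y)) for x, y in tau)

def xi(kx, ky):
    """Closed form of the DMI structure factor xi_k quoted in the text."""
    return 2 * (math.sin(ky) - 2 * math.sin(ky / 2) * math.cos(SQ3 * kx / 2))

def xi_sum(kx, ky):
    """Direct next-nearest-neighbor sum 2 sum_i sin(k . delta_i)."""
    return 2 * sum(math.sin(kx * x + ky * y) for x, y in delta)
```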
When considering the action of non-interacting magnons, the integral over the Keldysh contour is split as usual into forward $\bar{\Psi}^{+}, \Psi^{+}$ and backward $\bar{\Psi}^{-},\Psi^{-}$ parts. For example, the part containing the non-interacting Hamiltonian transforms as \begin{align} \int_{{\cal C}}dt \bar{\Psi}(t)\hat{H}\Psi (t)= \int_{-\infty}^{+\infty}dt \bar{\Psi}^{+}(t)\hat{H} \Psi^{+} (t) - \int_{-\infty}^{+\infty}dt \bar{\Psi}^{-}(t)\hat{H} \Psi^{-}(t) = \int_{-\infty}^{+\infty}dt \left[ \bar{\Psi}^{\mathrm{cl}}(t)\hat{H}\Psi^{\mathrm{q}}(t) + \bar{\Psi}^{\mathrm{q}}(t)\hat{H}\Psi^{\mathrm{cl}}(t) \right], \end{align} where \begin{align} \Psi^{\mathrm{cl}/\mathrm{q}} = \frac{1}{\sqrt{2}}\left( \Psi^{+} \pm \Psi^{-} \right), \end{align} and similarly for the $\bar{\Psi}$ fields. The action of non-interacting magnons is \begin{align} iS = i\int_{-\infty}^{+\infty}dt \bar{\Psi}(t) \left[ \begin{array} {cc} 0 & \left[G^{-1} \right]^{\mathrm{A}} \\ \left[ G^{-1} \right]^{\mathrm{R}} & \left[ G^{-1} \right]^{\mathrm{K}} \end{array} \right] \Psi(t) \end{align} where \begin{align} \Psi = \left[ \begin{array}{c} \Psi^{\mathrm{cl}} \\ \Psi^{\mathrm{q}}\end{array}\right], ~~~~ \bar{\Psi} = \left[ \begin{array}{cc} \bar{\Psi}^{\mathrm{cl}} & \bar{\Psi}^{\mathrm{q}}\end{array}\right], \end{align} and $ \left[G^{-1}(\epsilon) \right]^{\mathrm{R}/\mathrm{A}} = \epsilon \pm i0 - \hat{H}$ is the inverse Green function in Fourier space. Note that $[G^{-1}]^{\mathrm{K}}$ is the quantum-quantum component of the action, while the classical-classical component is absent.
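The rotation above is linear, so it can be verified with commuting stand-ins for the bosonic fields (the sample Hamiltonian below is arbitrary):

```python
import random

random.seed(0)

def rvec(n):
    """Random complex vector standing in for a (commuting, bosonic) field."""
    return [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]

def quad(bra, H, ket):
    """bra . H . ket for a 2x2 matrix H."""
    return sum(bra[i] * H[i][j] * ket[j] for i in range(2) for j in range(2))

H = [[3.0, -0.7 + 0.2j], [-0.7 - 0.2j, 3.0]]  # sample Hermitian 2x2 Hamiltonian
bp, bm = rvec(2), rvec(2)  # bar{Psi}^+, bar{Psi}^-
p, m = rvec(2), rvec(2)    # Psi^+, Psi^-

s = 1 / 2**0.5
bcl = [s * (u + v) for u, v in zip(bp, bm)]
bq = [s * (u - v) for u, v in zip(bp, bm)]
cl = [s * (u + v) for u, v in zip(p, m)]
q = [s * (u - v) for u, v in zip(p, m)]

lhs = quad(bp, H, p) - quad(bm, H, m)    # forward minus backward branch
rhs = quad(bcl, H, q) + quad(bq, H, cl)  # cl-q rotated form
```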
The Green function is \begin{align} \langle \Psi(t) \bar{\Psi}(t^\prime) \rangle_{S} = i \left[ \begin{array} {cc} G^{\mathrm{K}}(t-t^\prime) & G^{\mathrm{R}}(t-t^\prime) \\ G^{\mathrm{A}}(t-t^\prime) & 0 \end{array} \right], \end{align} where in particular \begin{align} & \langle \Psi^{\mathrm{cl}}(t) \bar{\Psi}^{\mathrm{cl}}(t^\prime) \rangle_{S} =\sum_{\epsilon} i G^{\mathrm{K}}(\epsilon) e^{-i\epsilon (t-t^\prime)}, \\ & \langle \Psi^{\mathrm{cl}}(t) \bar{\Psi}^{\mathrm{q}}(t^\prime) \rangle_{S} =\sum_{\epsilon} i G^{\mathrm{R}}(\epsilon) e^{-i\epsilon (t-t^\prime)}, \\ & \langle \Psi^{\mathrm{q}}(t) \bar{\Psi}^{\mathrm{cl}}(t^\prime) \rangle_{S} = \sum_{\epsilon} i G^{\mathrm{A}}(\epsilon) e^{-i\epsilon (t-t^\prime)}. \end{align} In frequency space \begin{align} & \langle \Psi^{\mathrm{cl}}(\epsilon_{1}) \bar{\Psi}^{\mathrm{cl}}(\epsilon_{2}) \rangle_{S} = i G^{\mathrm{K}}(\epsilon_{1}) \delta_{\epsilon_{1},\epsilon_{2}}, \\ & \langle \Psi^{\mathrm{cl}}(\epsilon_{1}) \bar{\Psi}^{\mathrm{q}}(\epsilon_{2}) \rangle_{S} = i G^{\mathrm{R}}(\epsilon_{1}) \delta_{\epsilon_{1},\epsilon_{2}}, \\ & \langle \Psi^{\mathrm{q}}(\epsilon_{1}) \bar{\Psi}^{\mathrm{cl}}(\epsilon_{2}) \rangle_{S} = i G^{\mathrm{A}}(\epsilon_{1}) \delta_{\epsilon_{1},\epsilon_{2}}, \end{align} where $\delta_{\epsilon_{1},\epsilon_{2}} = 2\pi\delta(\epsilon_{1}-\epsilon_{2})$ is the delta-function. 
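As a cross-check of the sublattice structure derived in the previous section, the projector form of $G^{\mathrm{R}/\mathrm{A}}$ can be compared numerically with the two-term ``convenient'' form, along with the eigenvalue equation $\hat{H}\varphi_{+} = \epsilon_{+}\varphi_{+}$ (a sketch; the sample values $JS=1$, $\gamma_{\bf k} = 1.3\, e^{0.4 i}$, and the complex frequency are arbitrary):

```python
import cmath

JS = 1.0
g = 1.3 * cmath.exp(0.4j)  # sample gamma_k
ag = abs(g)
eps = 0.2 + 1e-3j          # +i0 regularization mimicked by a small imaginary part

ep, em = JS * (3 + ag), JS * (3 - ag)  # epsilon_{+,k}, epsilon_{-,k}

s = 1 / 2**0.5
phip = [-s * g / ag, s]  # varphi_+
phim = [s * g / ag, s]   # varphi_-

def outer(v):
    """Projector v v^dagger as a 2x2 list-of-lists."""
    return [[v[i] * v[j].conjugate() for j in range(2)] for i in range(2)]

# eigenvalue check: H phi_+ = ep phi_+
H = [[3 * JS, -JS * g], [-JS * g.conjugate(), 3 * JS]]
Hphip = [H[0][0] * phip[0] + H[0][1] * phip[1], H[1][0] * phip[0] + H[1][1] * phip[1]]

# spectral (projector) form of G
P, M = outer(phip), outer(phim)
G1 = [[P[i][j] / (eps - ep) + M[i][j] / (eps - em) for j in range(2)] for i in range(2)]

# "convenient" form quoted in the text
a = 0.5 * (1 / (eps - ep) + 1 / (eps - em))
b = 0.5 * (1 / (eps - ep) - 1 / (eps - em))
G2 = [[a, -b * g / ag], [-b * g.conjugate() / ag, a]]
```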
The Green function must satisfy the unity identity (here and below, multiplication implies convolution in time), \begin{align} \left[ \begin{array} {cc} 0 & \left[G^{-1} \right]^{\mathrm{A}} \\ \left[ G^{-1} \right]^{\mathrm{R}} & \left[ G^{-1} \right]^{\mathrm{K}} \end{array} \right] \left[ \begin{array} {cc} G^{\mathrm{K}} & G^{\mathrm{R}} \\ G^{\mathrm{A}} & 0 \end{array} \right] = 1, \end{align} which gives us a condition on the $G^{\mathrm{K}}$ function \begin{align} \left[G^{-1} \right]^{\mathrm{R}} G^{\mathrm{K}} + \left[ G^{-1} \right]^{\mathrm{K}} G^{\mathrm{A}} = 0, \end{align} which means \begin{align} \left[ G^{-1} \right]^{\mathrm{K}} = - \left[G^{-1} \right]^{\mathrm{R}} G^{\mathrm{K}} \left[G^{-1} \right]^{\mathrm{A}}. \end{align} With the parametrization \begin{align} G^{\mathrm{K}} = G^{\mathrm{R}} {\cal F} - {\cal F} G^{\mathrm{A}}, \end{align} where ${\cal F}$ is the distribution function, we get \begin{align} \left[ G^{-1} \right]^{\mathrm{K}} = \left[G^{-1} \right]^{\mathrm{R}} {\cal F} - {\cal F} \left[G^{-1} \right]^{\mathrm{A}}. \end{align} This is the kinetic equation determining the distribution function. The pumping field is described by \begin{align} \frac{\sqrt{2S}\Gamma}{2} \int_{\cal C}dt \left( \Psi e^{-i\Omega t} + \bar{\Psi} e^{i\Omega t} \right) = \Gamma \sqrt{S} \int_{-\infty}^{+\infty}dt \left( \Psi^{\mathrm{q}} e^{-i\Omega t} + \bar{\Psi}^{\mathrm{q}} e^{i\Omega t} \right). \end{align} One might worry that it updates the Hamiltonian and the Green functions. To check this, we can use the following Gaussian-integral identity, \begin{align} \int d[\bar{\Psi},\Psi] e^{ - \sum_{ij} \bar{\Psi}_{i}\hat{A}_{ij}\Psi_{j} + \sum_{i}\left( \bar{\Psi}_{i}J_{i} + \bar{J}_{i}\Psi_{i} \right) } = \frac{1}{\mathrm{det}\hat{A}}e^{\sum_{ij} \bar{J}_{i}(\hat{A}^{-1})_{ij}J_{j}} \end{align} and since there is no $\mathrm{q}$-$\mathrm{q}$ element in the $\hat{A}^{-1}$ matrix, the pumping field will not enter the final result of integration.
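The triangular structure used above can be checked with random matrices standing in for the operators (a sketch; ordinary matrix multiplication replaces the time convolution):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_c(n=2):
    """Random complex matrix standing in for a time-convolution operator."""
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

Rinv = rand_c()  # [G^{-1}]^R
Ainv = rand_c()  # [G^{-1}]^A
F = rand_c()     # distribution function (a matrix in general)

GR, GA = np.linalg.inv(Rinv), np.linalg.inv(Ainv)
GK = GR @ F - F @ GA                # parametrization of G^K
Kinv = -Rinv @ GK @ Ainv            # [G^{-1}]^K from the unity identity
Kinv_kinetic = Rinv @ F - F @ Ainv  # kinetic-equation form
```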
However, the corresponding classical fields, and consequently the Green functions, are going to be affected by the pumping fields. We go over that in the next subsection. Now let us include interactions between magnons. Schematically, a general four-boson interaction rewritten in terms of Keldysh fields is \begin{align} \int_{{\cal C}} dt \bar{\Psi}_{1}\Psi_{2}\bar{\Psi}_{3}\Psi_{4} & = \int_{-\infty}^{+\infty}dt \bar{\Psi}_{1}^{+}\Psi_{2}^{+}\bar{\Psi}_{3}^{+}\Psi_{4}^{+} - \int_{-\infty}^{+\infty}dt \bar{\Psi}_{1}^{-}\Psi_{2}^{-}\bar{\Psi}_{3}^{-}\Psi_{4}^{-} \\ & = \frac{1}{2} \int_{-\infty}^{+\infty}dt \left(\bar{\Psi}_{1}^{\mathrm{cl}}\Psi_{2}^{\mathrm{cl}} + \bar{\Psi}_{1}^{\mathrm{q}}\Psi_{2}^{\mathrm{q}} \right) \left(\bar{\Psi}_{3}^{\mathrm{cl}}\Psi_{4}^{\mathrm{q}} + \bar{\Psi}_{3}^{\mathrm{q}}\Psi_{4}^{\mathrm{cl}}\right) \\ & + \frac{1}{2} \int_{-\infty}^{+\infty}dt \left(\bar{\Psi}_{1}^{\mathrm{cl}}\Psi_{2}^{\mathrm{q}} + \bar{\Psi}_{1}^{\mathrm{q}}\Psi_{2}^{\mathrm{cl}} \right) \left(\bar{\Psi}_{3}^{\mathrm{cl}}\Psi_{4}^{\mathrm{cl}} + \bar{\Psi}_{3}^{\mathrm{q}}\Psi_{4}^{\mathrm{q}}\right) \end{align} where the indices $1,2,3,4$ stand for general frequency-momentum-pseudospin variables. Under relabeling, the two terms after the second equality sign duplicate each other, but for the sake of generality we keep them as they are.
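The decomposition above is a polynomial identity in commuting (bosonic) variables, so it can be verified directly with random complex numbers standing in for the fields:

```python
import random

random.seed(2)

def rc():
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

# branch fields (bosonic, hence commuting numbers are faithful stand-ins)
b1p, b1m, p2p, p2m = rc(), rc(), rc(), rc()
b3p, b3m, p4p, p4m = rc(), rc(), rc(), rc()

s = 1 / 2**0.5
b1cl, b1q = s * (b1p + b1m), s * (b1p - b1m)
p2cl, p2q = s * (p2p + p2m), s * (p2p - p2m)
b3cl, b3q = s * (b3p + b3m), s * (b3p - b3m)
p4cl, p4q = s * (p4p + p4m), s * (p4p - p4m)

lhs = b1p * p2p * b3p * p4p - b1m * p2m * b3m * p4m   # contour difference
rhs = 0.5 * (b1cl * p2cl + b1q * p2q) * (b3cl * p4q + b3q * p4cl) \
    + 0.5 * (b1cl * p2q + b1q * p2cl) * (b3cl * p4cl + b3q * p4q)
```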
\subsection{Shifting the pump field away} The Lagrangian describing non-interacting magnons at the pump's frequency $\Omega$ and momentum ${\bf k} = 0$ is schematically written as \begin{align} {\cal L}_{0,\Omega} & = \sum_{m,n}\bar{\Psi}^{\mathrm{q}}_{m, 0,\Omega} \hat{{\cal L}}^{\mathrm{K}}_{mn,0,\Omega} \Psi^{\mathrm{q}}_{n, 0,\Omega} +\sum_{m,n}\bar{\Psi}^{\mathrm{cl}}_{m, 0, \Omega} \hat{{\cal L}}^{\mathrm{A}}_{mn,0,\Omega} \Psi^{\mathrm{q}}_{n, 0, \Omega} + \sum_{m,n}\bar{\Psi}^{\mathrm{q}}_{m, 0, \Omega} \hat{{\cal L}}^{\mathrm{R}}_{mn,0,\Omega} \Psi^{\mathrm{cl}}_{n, 0, \Omega} \\ & -\Gamma\sqrt{S}\sum_{n} \Psi^{\mathrm{q}}_{n, 0, \Omega} -\Gamma\sqrt{S}\sum_{n} \bar{\Psi}^{\mathrm{q}}_{n, 0, \Omega}, \end{align} where $\hat{{\cal L}}^{\mathrm{K}/\mathrm{R}/\mathrm{A}}_{mn,0,\Omega}$ is the Lagrangian density corresponding to the Keldysh, retarded, or advanced part, respectively. For example, $\hat{{\cal L}}^{\mathrm{R}/\mathrm{A}}_{mn,{\bf k},\Omega} = (\Omega \pm i0)\delta_{mn} - [\hat{H}_{0}]_{mn,{\bf k}}$. The advanced part of the Lagrangian is \begin{align} {\cal L}_{0,\Omega}^{\mathrm{A}}= \sum_{m,n}\bar{\Psi}^{\mathrm{cl}}_{m, 0, \Omega} \hat{{\cal L}}^{\mathrm{A}}_{mn,0,\Omega} \Psi^{\mathrm{q}}_{n, 0, \Omega} -\Gamma\sqrt{S}\sum_{n} \Psi^{\mathrm{q}}_{n, 0, \Omega} , \end{align} in which we would like to shift away terms linear in $\Psi^{\mathrm{q}}_{n, 0, \Omega}$.
We achieve it with \begin{align} & \bar{\Psi}^{\mathrm{cl}}_{\alpha, 0, \Omega} \rightarrow \bar{\Psi}^{\mathrm{cl}}_{\alpha, 0, \Omega} + x_{\mathrm{A}}, \\ & \bar{\Psi}^{\mathrm{cl}}_{\beta, 0, \Omega} \rightarrow \bar{\Psi}^{\mathrm{cl}}_{\beta, 0, \Omega} + y_{\mathrm{A}}, \end{align} with \begin{align} & x_{\mathrm{A}}= \frac{{\cal L}^{\mathrm{A}}_{\beta\alpha,0,\Omega} - {\cal L}^{\mathrm{A}}_{\beta\beta,0,\Omega} }{ {\cal L}^{\mathrm{A}}_{\alpha\beta,0,\Omega}{\cal L}^{\mathrm{A}}_{\beta\alpha,0,\Omega} - {\cal L}^{\mathrm{A}}_{\beta\beta,0,\Omega}{\cal L}^{\mathrm{A}}_{\alpha\alpha,0,\Omega}}\Gamma\sqrt{S}, \\ & y_{\mathrm{A}}= \frac{{\cal L}^{\mathrm{A}}_{\alpha\beta,0,\Omega} - {\cal L}^{\mathrm{A}}_{\alpha\alpha,0,\Omega} }{ {\cal L}^{\mathrm{A}}_{\alpha\beta,0,\Omega}{\cal L}^{\mathrm{A}}_{\beta\alpha,0,\Omega} - {\cal L}^{\mathrm{A}}_{\beta\beta,0,\Omega}{\cal L}^{\mathrm{A}}_{\alpha\alpha,0,\Omega}}\Gamma\sqrt{S}. \end{align} For the retarded analog of the Lagrangian, \begin{align} {\cal L}_{0,\Omega}^{\mathrm{R}}= \sum_{m,n}\bar{\Psi}^{\mathrm{q}}_{m, 0, \Omega} {\cal L}^{\mathrm{R}}_{mn,0,\Omega} \Psi^{\mathrm{cl}}_{n, 0, \Omega} -\Gamma\sqrt{S}\sum_{n} \bar{\Psi}^{\mathrm{q}}_{n, 0, \Omega} , \end{align} in which we would like to shift away terms linear in $\bar{\Psi}^{\mathrm{q}}_{n, 0, \Omega}$. 
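The formulas for $x_{\mathrm{A}}$, $y_{\mathrm{A}}$ above simply solve a $2\times 2$ linear system; a quick numerical sketch (random complex entries stand in for ${\cal L}^{\mathrm{A}}_{mn,0,\Omega}$) confirms that the shifted action has no term linear in $\Psi^{\mathrm{q}}$:

```python
import random

random.seed(3)

def rc():
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

GS = 1.7  # stands in for Gamma * sqrt(S)
# random complex entries stand in for the advanced Lagrangian matrix L^A_{mn}
L = {(m, n): rc() for m in "ab" for n in "ab"}  # 'a' = alpha, 'b' = beta

den = L["a", "b"] * L["b", "a"] - L["b", "b"] * L["a", "a"]
x = (L["b", "a"] - L["b", "b"]) / den * GS  # x_A as quoted in the text
y = (L["a", "b"] - L["a", "a"]) / den * GS  # y_A as quoted in the text

# after the shift, the coefficients of Psi^q_alpha and Psi^q_beta must vanish
coef_alpha = x * L["a", "a"] + y * L["b", "a"] - GS
coef_beta = x * L["a", "b"] + y * L["b", "b"] - GS
```

The retarded shift is verified by the identical algebra with ${\cal L}^{\mathrm{R}}$ in place of ${\cal L}^{\mathrm{A}}$.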
We achieve it with \begin{align} & \Psi^{\mathrm{cl}}_{\alpha, 0, \Omega} \rightarrow \Psi^{\mathrm{cl}}_{\alpha, 0, \Omega} + x_{\mathrm{R}}, \\ & \Psi^{\mathrm{cl}}_{\beta, 0, \Omega} \rightarrow \Psi^{\mathrm{cl}}_{\beta, 0, \Omega} + y_{\mathrm{R}}, \end{align} with \begin{align} & x_{\mathrm{R}}= \frac{{\cal L}^{\mathrm{R}}_{\beta\alpha,0,\Omega} - {\cal L}^{\mathrm{R}}_{\beta\beta,0,\Omega} }{ {\cal L}^{\mathrm{R}}_{\alpha\beta,0,\Omega}{\cal L}^{\mathrm{R}}_{\beta\alpha,0,\Omega} - {\cal L}^{\mathrm{R}}_{\beta\beta,0,\Omega}{\cal L}^{\mathrm{R}}_{\alpha\alpha,0,\Omega}}\Gamma\sqrt{S}, \\ & y_{\mathrm{R}}= \frac{{\cal L}^{\mathrm{R}}_{\alpha\beta,0,\Omega} - {\cal L}^{\mathrm{R}}_{\alpha\alpha,0,\Omega} }{ {\cal L}^{\mathrm{R}}_{\alpha\beta,0,\Omega}{\cal L}^{\mathrm{R}}_{\beta\alpha,0,\Omega} - {\cal L}^{\mathrm{R}}_{\beta\beta,0,\Omega}{\cal L}^{\mathrm{R}}_{\alpha\alpha,0,\Omega}}\Gamma\sqrt{S}. \end{align} \section{Pumping to the Dirac points with an $\Omega = 3SJ$ frequency pump} \subsection{Two-quanta pumping to the Dirac points} Here we discuss off-resonance pumping, when the frequency of the pump is half the bandwidth, namely $\Omega = 3SJ$. There are no mass-shell states with ${\bf k} = 0$ at this frequency. Thus, there is no possibility to pump a single magnon into this point, but due to the interactions it is possible to pump a pair of magnons. See Fig. \ref{fig:pump} for the schematics of the process of absorption of two pump-field quanta. This process is known in the literature as the second-order Suhl process \cite{Suhl1957,Rezende}. One can see it by absorbing the pumping field through a shift of the corresponding classical (only) fields, \begin{align} & \bar{\Psi}^{\mathrm{cl}}_{\alpha, 0, \Omega} \rightarrow \bar{\Psi}^{\mathrm{cl}}_{\alpha, 0, \Omega} - \frac{\Gamma\sqrt{S}}{3SJ}, \\ & \Psi^{\mathrm{cl}}_{\alpha, 0, \Omega} \rightarrow \Psi^{\mathrm{cl}}_{\alpha, 0, \Omega} - \frac{\Gamma\sqrt{S}}{3SJ}.
\end{align} The shift means that a physical state with corresponding quantum numbers acquires a classical value. For example, if we were talking about a Bose-Einstein condensate, it would mean that magnons accumulate in that state. However, since the shifted state is off-shell, one would not expect any magnon accumulation in it. Instead, the magnons can rescatter from this virtual state to the on-shell states according to frequency and momentum conservation. To describe these effects, we notice that the interaction part of the action will be affected by the shift. \begin{align} & iS_{\mathrm{interaction};1}= -i\frac{J}{4} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} a_{{\bf k}_{1}}^{\dag}a_{{\bf k}_{2}}a_{{\bf k}_{3}}^{\dag}b_{{\bf k}_{4}} \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \\ & \rightarrow -i\frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \bigg( \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \nonumber \\ & + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bigg) \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \nonumber \\ & -i
\frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{0} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}} \nonumber \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{0} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}}. \nonumber \end{align} Regarding the cubic terms, in the experimentally relevant limit $\left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 < 1$ they can be ignored: they contribute to the interaction between magnons, but carry the small factor $\left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2$ as compared to the original interaction. It is not possible to generate $\propto \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}}$ or $\propto \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{2};\omega_{2}}$ or other similar terms, as they all sum up to zero. This cancellation occurs among all terms in the interaction (between the $\propto -J$ and $\propto \frac{J}{4}$ terms).
We give an example of such cancellation at the end of this subsection. \begin{figure}[h] \centerline{ \includegraphics[width=0.18\textwidth,height=0.08\textheight]{twoquanta.pdf} } \protect\caption{Schematics of the absorption of two pump quanta. Here the dashed lines correspond to the pump field, while the wavy line to the interaction between the magnons.} \label{fig:pump} \end{figure} Below we list the four remaining terms in the interaction. \begin{align} & iS_{\mathrm{interaction};2}= -i\frac{J}{4} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{3}}^{*} a_{{\bf k}_{1}}^{\dag}a_{{\bf k}_{2}}b_{{\bf k}_{3}}^{\dag}a_{{\bf k}_{4}} \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \\ & \rightarrow -i\frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{3}}^{*} \bigg( \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{4};\omega_{4}} \nonumber \\ & + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{4};\omega_{4}} \bigg) \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \nonumber \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{0} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \Psi^{\mathrm{cl}}_{\alpha;{\bf
k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{3}}^{*} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}} \nonumber \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{0} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{3}}^{*} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}}, \nonumber \end{align} and \begin{align} & iS_{\mathrm{interaction};3}= -i\frac{J}{4} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{1}} a_{{\bf k}_{1}}^{\dag}b_{{\bf k}_{2}}b_{{\bf k}_{3}}^{\dag}b_{{\bf k}_{4}} \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \\ & \rightarrow -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{1}} \bigg( \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \nonumber \\ & + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf 
k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bigg) \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \nonumber \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{0} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{1}} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}} \nonumber \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{0} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \Psi^{\mathrm{q}}_{\beta;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{1}} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}}, \nonumber \end{align} and \begin{align} & iS_{\mathrm{interaction};4}= -i\frac{J}{4} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{2}}^{*} b_{{\bf k}_{1}}^{\dag}a_{{\bf k}_{2}}b_{{\bf k}_{3}}^{\dag}b_{{\bf k}_{4}} \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf 
k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \\ & \rightarrow -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{2}}^{*} \bigg( \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \nonumber \\ & + \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bigg) \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \nonumber \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{2}}^{*} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{0} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}} \nonumber \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{2}}^{*} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} 
\delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} \\ & -i \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{0} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}}. \nonumber \end{align} There is also $\propto -J$ interaction term, which also gets shifted accordingly. \begin{align} & iS_{\mathrm{interaction};5}= i J \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}-{\bf k}_{3}} a_{{\bf k}_{1}}^{\dag}a_{{\bf k}_{2}} b_{{\bf k}_{3}}^{\dag}b_{{\bf k}_{4}} \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \\ & \rightarrow iJ\frac{1}{2} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}-{\bf k}_{3}} \bigg( \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \nonumber \\ & + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bigg) \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \nonumber \\ & + i J \frac{1}{2} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \left(\frac{\Gamma 
\sqrt{S}}{3SJ}\right)^2 \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} \\ & + i J \frac{1}{2} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{-{\bf k}_{3}} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}} \nonumber \\ & + iJ \frac{1}{2} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} \\ & + iJ \frac{1}{2} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{-{\bf k}_{3}} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}}. 
\end{align} Collecting now terms quadratic in fields, we get for the pump \begin{align} &H_{\mathrm{pump}} \\ & = \frac{J}{4} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \int_{\{{\bf k}\}} \bigg[ - \gamma_{{\bf k}_{4}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} - \gamma_{{\bf k}_{4}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} \nonumber \\ & - \gamma_{{\bf k}_{3}}^{*} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}} - \gamma_{{\bf k}_{3}}^{*} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}} \nonumber \\ & + \gamma_{0} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} + \gamma_{0} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}} \nonumber \\ & + \gamma_{0} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} \delta_{-{\bf k}_{2},{\bf k}_{4}} \delta_{\Omega-\omega_{2},\omega_{4}-\Omega} + \gamma_{0} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}} \nonumber \bigg]. 
\end{align} Let us now demonstrate that the terms of the $\propto \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}}$ type indeed sum up to zero and, hence, cannot be generated by the pump process. Recall that, overall, there are five interaction terms listed in this subsection. We refer to them in the order in which they appear. From the first interaction term we have \begin{align} & \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \bigg( \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \nonumber \\ & + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bigg) \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \nonumber \\ & \rightarrow \frac{J}{4} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{0} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \delta_{{\bf k}_{1},{\bf k}_{2}}\delta_{\omega_{1},\omega_{2}}.
\end{align} From the second interaction term we have \begin{align} & \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{3}}^{*} \bigg( \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{4};\omega_{4}} \nonumber \\ & + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{4};\omega_{4}} \bigg) \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \nonumber \\ & \rightarrow \frac{J}{4} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{0} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \delta_{{\bf k}_{1},{\bf k}_{2}}\delta_{\omega_{1},\omega_{2}}. 
\end{align} From the fifth interaction term we have \begin{align} & -J\frac{1}{2} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}-{\bf k}_{3}} \bigg( \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \nonumber \\ & + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bigg) \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \nonumber \\ & \rightarrow -\frac{J}{2} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{0} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \delta_{{\bf k}_{1},{\bf k}_{2}}\delta_{\omega_{1},\omega_{2}}. \end{align} The three terms sum up to zero. The same can be proven for the other combinations of the same type. \subsection{Hartree-Fock corrections} \begin{figure}[h] \centerline{ \includegraphics[width=0.25\textwidth,height=0.08\textheight]{HartreeFock.pdf} } \protect\caption{Hartree-Fock corrections to the magnon dispersion. } \label{fig:HartreeFock} \end{figure} In order to understand possible instabilities in the system due to the magnon pair creation, we also need to take into account Hartree-Fock corrections to the magnon dispersion \cite{BlochPRL1962,Pershoguba}.
They are expected to give temperature-dependent corrections and, thus, might be important when discussing the experimental details. For example, let us pick the first interaction term, \begin{align}\label{firstHF} \langle iS_{\mathrm{interaction};1}\rangle = & - i\frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \nonumber \\ & + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \rangle \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}}. \end{align} We find that for the task at hand it is more convenient to return to the time domain rather than to work in the frequency domain. In this way, the equal-time commutation relations \begin{align} [\Psi^{\mathrm{cl}}_{n;{\bf k}_{1}}(t),\bar{\Psi}^{\mathrm{cl}}_{m;{\bf k}_{2}}(t)] = \delta_{n,m} \delta_{{\bf k}_{1},{\bf k}_{2}} \end{align} take their most transparent form. For example, picking the first term in Eq.
(\ref{firstHF}), \begin{align} & \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} \rangle \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \\ = & \frac{J}{8} \int_{t} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1}}(t) \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4}}(t) \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3}}(t) \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2}}(t) \rangle \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \\ + & \frac{J}{8} \int_{t} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3}}(t) \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4}}(t) \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1}}(t) \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2}}(t) \rangle \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \\ = & \frac{J}{4} \int_{t} \int_{{\bf k}} \gamma_{{\bf k}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{q}}_{\beta;{\bf k}}(t) \int_{\bf q} \left[ -1 + i\int_{\epsilon} G^{\mathrm{K}}_{\alpha\alpha}(\epsilon;{\bf q})\right] \\ = & \frac{J}{4} \int_{t} \int_{{\bf k}} \gamma_{{\bf k}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{q}}_{\beta;{\bf k}}(t) \int_{\bf q} \left[ n_{\mathrm{B}}(\epsilon_{+;{\bf q}})+ n_{\mathrm{B}}(\epsilon_{-;{\bf q}})\right], \end{align} where we used the identity ${\cal F}(\epsilon) = 1 + \frac{2}{e^{\frac{\epsilon}{T}}-1} \equiv 1+2n_{\mathrm{B}}(\epsilon)$, and where the $-1$ in the factor $\left[ -1 + i\int_{\epsilon} G^{\mathrm{K}}_{\alpha\alpha}(\epsilon;{\bf q})\right]$ is due to the commutation relations. Now picking the second term in Eq.
(\ref{firstHF}), \begin{align} & \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \rangle \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \\ = & \frac{J}{8} \int_{t} \int_{\bf k} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \int_{\bf q} \int_{\epsilon} iG^{\mathrm{K}}_{\beta\alpha}(\epsilon;{\bf q}) \gamma_{\bf q} + \frac{J}{8} \int_{t} \int_{{\bf k}} \gamma_{{\bf k}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\beta;{\bf k}}(t) \int_{\bf q} \left[ -1 + i\int_{\epsilon} G^{\mathrm{K}}_{\alpha\alpha}(\epsilon;{\bf q})\right] \\ = & - \frac{J}{8} \int_{t} \int_{\bf k} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \int_{\bf q} \vert \gamma_{\bf q} \vert \left[ n_{\mathrm{B}}(\epsilon_{+;{\bf q}}) - n_{\mathrm{B}}(\epsilon_{-;{\bf q}})\right] + \frac{J}{8} \int_{t} \int_{{\bf k}} \gamma_{{\bf k}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\beta;{\bf k}}(t) \int_{\bf q} \left[ n_{\mathrm{B}}(\epsilon_{+;{\bf q}}) + n_{\mathrm{B}}(\epsilon_{-;{\bf q}})\right]. \end{align} The third term in Eq.
(\ref{firstHF}) reads, \begin{align} & \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \rangle \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \\ = & \frac{J}{4} \int_{t} \int_{\bf k} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{q}}_{\alpha;{\bf k}}(t) \int_{\bf q} \int_{\epsilon} iG^{\mathrm{K}}_{\beta\alpha}(\epsilon;{\bf q}) \gamma_{\bf q} \\ = & - \frac{J}{4} \int_{t} \int_{\bf k} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{q}}_{\alpha;{\bf k}}(t) \int_{\bf q} \vert \gamma_{\bf q} \vert \left[ n_{\mathrm{B}}(\epsilon_{+;{\bf q}}) - n_{\mathrm{B}}(\epsilon_{-;{\bf q}})\right]. \end{align} Finally, the last term in Eq. (\ref{firstHF}) reads, \begin{align} & \frac{J}{8} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \langle \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \rangle \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \\ = & - \frac{J}{8} \int_{t} \int_{\bf k} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \int_{\bf q} \vert \gamma_{\bf q} \vert \left[ n_{\mathrm{B}}(\epsilon_{+;{\bf q}}) - n_{\mathrm{B}}(\epsilon_{-;{\bf q}})\right] + \frac{J}{8} \int_{t} \int_{{\bf k}} \gamma_{{\bf k}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\beta;{\bf k}}(t) \int_{\bf q} \left[ n_{\mathrm{B}}(\epsilon_{+;{\bf q}}) + n_{\mathrm{B}}(\epsilon_{-;{\bf q}})\right], \end{align} which essentially doubles the second term in Eq. (\ref{firstHF}). 
Collecting all four terms, we get \begin{align} \langle iS_{\mathrm{interaction};1}\rangle = & -i \frac{J}{4} I_{1} \int_{t} \int_{{\bf k}} \gamma_{{\bf k}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{q}}_{\beta;{\bf k}}(t) -i \frac{J}{4} I_{2} \int_{t} \int_{\bf k} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{q}}_{\alpha;{\bf k}}(t) \\ & -i \frac{J}{4} I_{2} \int_{t} \int_{\bf k} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\alpha;{\bf k}}(t) -i \frac{J}{4} I_{1} \int_{t} \int_{{\bf k}} \gamma_{{\bf k}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\beta;{\bf k}}(t), \end{align} where \begin{align} & I_{1} = \int_{\bf q} \left[ -1 + i\int_{\epsilon} G^{\mathrm{K}}_{\alpha\alpha}(\epsilon;{\bf q})\right] = \int_{\bf q} \left[ n_{\mathrm{B}}(\epsilon_{+;{\bf q}}) + n_{\mathrm{B}}(\epsilon_{-;{\bf q}})\right], \\ & I_{2} = \int_{\bf q} \int_{\epsilon} iG^{\mathrm{K}}_{\beta\alpha}(\epsilon;{\bf q}) \gamma_{\bf q} =- \int_{\bf q} \vert \gamma_{\bf q} \vert \left[ n_{\mathrm{B}}(\epsilon_{+;{\bf q}}) - n_{\mathrm{B}}(\epsilon_{-;{\bf q}})\right]. \end{align} Expressions for the other three interaction terms, i.e. $\langle iS_{\mathrm{interaction};2,3,4}\rangle $, are similar to the one obtained.
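The evaluation of $I_{1}$ combines the equal-time commutator with the identity ${\cal F}(\epsilon) = 1+2n_{\mathrm{B}}(\epsilon)$. The step $-1 + i\int_{\epsilon} G^{\mathrm{K}}_{\alpha\alpha} = n_{\mathrm{B}}(\epsilon_{+;{\bf q}}) + n_{\mathrm{B}}(\epsilon_{-;{\bf q}})$ can be checked symbolically; note that the input used below, $i\int_{\epsilon} G^{\mathrm{K}}_{\alpha\alpha} = [{\cal F}(\epsilon_{+;{\bf q}})+{\cal F}(\epsilon_{-;{\bf q}})]/2$ (equal spectral weight on the two magnon bands), is our assumption and is not spelled out in this section:

```python
import sympy as sp

# Check of the step  -1 + i*Int_eps G^K_{aa} = n_B(e_+) + n_B(e_-),
# assuming i*Int_eps G^K_{aa} = (F(e_+) + F(e_-))/2 with F = 1 + 2 n_B.
ep, em, T = sp.symbols('epsilon_p epsilon_m T', positive=True)

def nB(e):  # Bose occupation
    return 1/(sp.exp(e/T) - 1)

def F(e):   # Keldysh distribution function, F = 1 + 2 n_B
    return 1 + 2*nB(e)

lhs = -1 + (F(ep) + F(em))/2   # -1 + i*Int_eps G^K_{aa}
rhs = nB(ep) + nB(em)
check = sp.simplify(lhs - rhs)  # -> 0
```

The $-1$ from the commutator exactly cancels the vacuum ("1") part of the distribution functions, leaving the purely thermal occupations.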
The fifth interaction term is \begin{align} \langle iS_{\mathrm{interaction};5}\rangle = & i\frac{J}{2} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}-{\bf k}_{3}} \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{q}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \nonumber \\ & + \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{q}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} + \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \rangle \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \\ = & i\frac{J}{2} \int_{t} \int_{\bf k} I_{3}({\bf k}) \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{q}}_{\beta;{\bf k}}(t) + i\frac{J}{2} I_{1} \int_{t} \int_{\bf k} \gamma_{0} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}}(t) \Psi^{\mathrm{q}}_{\beta;{\bf k}}(t) \\ + & i\frac{J}{2} \int_{t} \int_{\bf k} I_{3}^{*}({\bf k}) \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\alpha;{\bf k}}(t) + i\frac{J}{2} I_{1} \int_{t} \int_{\bf k} \gamma_{0} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\beta;{\bf k}}(t) \\ + & i\frac{J}{2} \int_{t} \int_{\bf k} I_{3}^{*}({\bf k}) \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}}(t) \Psi^{\mathrm{q}}_{\alpha;{\bf k}}(t) + i\frac{J}{2} I_{1} \int_{t} \int_{\bf k} \gamma_{0} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \\ + &
i\frac{J}{2} \int_{t} \int_{\bf k} I_{3}({\bf k}) \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\beta;{\bf k}}(t) + i\frac{J}{2} I_{1} \int_{t} \int_{\bf k} \gamma_{0} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\alpha;{\bf k}}(t). \end{align} The new integral appearing above is \begin{align} I_{3}({\bf k}) = \int_{\bf q}\int_{\epsilon}iG_{\alpha\beta}^{\mathrm{K}}(\epsilon;{\bf q})\gamma_{{\bf k}-{\bf q}} = -\int_{\bf q}\frac{\gamma_{{\bf k}-{\bf q}} \gamma_{{\bf q}}}{ \vert \gamma_{{\bf q}} \vert} \left[ n_{\mathrm{B}}(\epsilon_{+;{\bf q}}) - n_{\mathrm{B}}(\epsilon_{-;{\bf q}})\right]. \end{align} Overall, we have for the Hartree-Fock corrections \begin{align} \sum_{j=1}^{5}\langle iS_{\mathrm{interaction};j}\rangle = & -i \frac{J}{2} \int_{t} \int_{{\bf k}} \left[I_{1}\gamma_{{\bf k}} - I_{3}({\bf k})\right] \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{q}}_{\beta;{\bf k}}(t) -i \frac{J}{2} (I_{2} - I_{1}\gamma_{0}) \int_{t} \int_{\bf k} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{q}}_{\alpha;{\bf k}}(t) \\ & -i \frac{J}{2} (I_{2} - I_{1}\gamma_{0}) \int_{t} \int_{\bf k} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\alpha;{\bf k}}(t) -i \frac{J}{2} \int_{t} \int_{{\bf k}} \left[I_{1}\gamma_{{\bf k}} - I_{3}({\bf k})\right] \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\beta;{\bf k}}(t) \\ & -i \frac{J}{2} \int_{t} \int_{{\bf k}} \left[I_{1}\gamma_{{\bf k}}^{*} - I_{3}^{*}({\bf k})\right] \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}}(t) \Psi^{\mathrm{q}}_{\alpha;{\bf k}}(t) -i \frac{J}{2} (I_{2} - I_{1}\gamma_{0}) \int_{t} \int_{\bf k} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf k}}(t) \Psi^{\mathrm{q}}_{\beta;{\bf k}}(t) \\ & - i \frac{J}{2} (I_{2} - I_{1}\gamma_{0}) \int_{t} \int_{\bf k} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\beta;{\bf k}}(t) -i \frac{J}{2} \int_{t} \int_{{\bf k}} \left[I_{1}\gamma_{{\bf k}}^{*} - I_{3}^{*}({\bf k})\right]
\bar{\Psi}^{\mathrm{q}}_{\beta;{\bf k}}(t) \Psi^{\mathrm{cl}}_{\alpha;{\bf k}}(t). \end{align} The integrals are \begin{align} I_{2} - I_{1}\gamma_{0} = -\int_{\bf q}\left( \gamma_{0} + \vert \gamma_{\bf q}\vert \right) n_{\mathrm{B}}(\epsilon_{+;{\bf q}}) -\int_{\bf q}\left( \gamma_{0} - \vert \gamma_{\bf q}\vert \right) n_{\mathrm{B}}(\epsilon_{-;{\bf q}}) \approx -\left( \frac{T}{3SJ}\right)^2 \frac{\pi}{2}\gamma_{0}, \end{align} and \begin{align} I_{1}\gamma_{\bf k} - I_{3}({\bf k}) =\int_{\bf q}\left(\gamma_{\bf k} + \frac{\gamma_{{\bf k}-{\bf q}}\gamma_{\bf q}}{\vert \gamma_{\bf q}\vert} \right)n_{\mathrm{B}}(\epsilon_{+;{\bf q}}) + \int_{\bf q}\left(\gamma_{\bf k} - \frac{\gamma_{{\bf k}-{\bf q}}\gamma_{\bf q}}{\vert \gamma_{\bf q}\vert} \right)n_{\mathrm{B}}(\epsilon_{-;{\bf q}}) \approx \left( \frac{T}{3SJ}\right)^2 \frac{\pi}{2}\gamma_{{\bf k}}, \end{align} which are approximated at low temperatures, $T<3SJ$, under the assumption that only the $\epsilon_{-;{\bf q}}$ magnon band contributes to the integrals. At temperatures $T\sim 3SJ$ (in the vicinity of the Curie temperature) both magnon bands will contribute, and, hence, the magnitudes of the integrals increase. We then get the Hartree-Fock-corrected Hamiltonian describing the magnons \begin{align} \hat{H} = JS\left[1-\frac{\pi}{4S}\left( \frac{T}{3SJ}\right)^2 \right] \left[ \begin{array}{cc} 3 & - \gamma_{\bf k} \\ - \gamma_{\bf k}^{*} & 3 \end{array} \right] \equiv \tilde{J}S\left[ \begin{array}{cc} 3 & - \gamma_{\bf k} \\ - \gamma_{\bf k}^{*} & 3 \end{array} \right] , \end{align} where $\tilde{J} = J\left[1-\frac{\pi}{4S}\left( \frac{T}{3SJ}\right)^2 \right]$. This Hamiltonian will be used below when calculating the ladder equation. \subsection{Instability due to pumping} We neglect the Hartree-Fock corrections by setting $T=0$.
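As an aside, the $T^{2}$ scaling of the Hartree-Fock integrals neglected here can be traced to the standard Bose integral: with the quadratic low-energy dispersion $\epsilon_{-;{\bf q}} \approx \frac{3}{4}SJ q^{2}$ near ${\bf q}=0$, the substitution $u=\epsilon/T$ pulls a factor $T^{2}$ out of the momentum integrals, times $\int_0^\infty u\,du/(e^{u}-1) = \pi^{2}/6$. A quick numerical check of this integral (a sketch; the lattice prefactor bookkeeping is not repeated here):

```python
from mpmath import mp, quad, inf, exp, pi

mp.dps = 30
# Standard Bose integral fixing the T^2 temperature dependence of the
# Hartree-Fock corrections: Int_0^oo u/(e^u - 1) du = pi^2/6.
bose_integral = quad(lambda u: u/(exp(u) - 1), [0, inf])
target = pi**2/6
```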
Collecting all the generated pumping terms, we construct the secular equation for $\Omega = 3SJ$, \begin{align}\label{secularSM} \mathrm{det} \left[ \begin{array}{cccc} \Omega+\epsilon - 3SJ & SJ\gamma_{\bf k} & -\Delta^2\gamma_{0} & \Delta^2\gamma_{{\bf k}} \\ SJ\gamma_{\bf k}^{*} & \Omega+\epsilon - 3SJ & \Delta^2\gamma^{*}_{{\bf k}} & -\Delta^2\gamma_{0} \\ -\Delta^2\gamma_{0} & \Delta^2\gamma_{{\bf k}} & \Omega - \epsilon - 3SJ & SJ\gamma_{\bf k} \\ \Delta^2\gamma^{*}_{{\bf k}} & -\Delta^2\gamma_{0} & SJ\gamma^{*}_{\bf k} & \Omega - \epsilon - 3SJ \end{array} \right] = 0. \end{align} The Hamiltonian is similar to that of the BdG model, the similarity being due to the presence of the anomalous terms. The frequency structure is different because of the bosonic commutation relations the fields obey in our case. We get \begin{align} \epsilon_{\pm}^{2} = \left(\Omega - 3SJ \pm SJ\vert \gamma_{\bf k} \vert \right)^2 - \Delta^4(\gamma_{0} \mp \vert \gamma_{\bf k}\vert)^2. \end{align} Let us study the effect of the Dzyaloshinskii-Moriya interaction, Eq. (\ref{DMI_Hamiltonian}), on the magnon pairing in the vicinity of the Dirac points, i.e. for $\zeta = 0$. This is motivated by the fact that the DMI is largest at the Dirac points. The secular equation is now \begin{align}\label{secularDMI} \mathrm{det} & \left[ \begin{array}{cccc} \chi+\epsilon & SJ\gamma_{\bf k} & -3\Delta^2 & 0 \\ SJ\gamma_{\bf k}^{*} & -\chi+\epsilon & 0 & -3\Delta^2 \\ -3\Delta^2 & 0 & \chi - \epsilon & SJ\gamma_{\bf k} \\ 0 & -3\Delta^2 & SJ\gamma_{\bf k}^{*} & -\chi - \epsilon \end{array} \right] = 0, \end{align} where $\chi =3\sqrt{3}SD$, and $\vert \gamma_{\bf k} \vert \approx \frac{\sqrt{3}}{2}k$. The spectrum of magnon pairs is now \begin{align} \epsilon^2_{\pm} = (SJ)^2 \frac{3}{4}k^2 + \chi^2 - 9\Delta^4. \end{align} We conclude that if $\vert \chi \vert \geq 3\Delta^2$ there will be no instability in the system.
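Both quoted spectra follow from expanding the $4\times 4$ determinants; this can be verified symbolically. In the sketch below $\gamma_{\bf k}$ is taken real and equal to $\vert\gamma_{\bf k}\vert$ (the general case follows by a phase rotation of the fields), and the symbol `D2` stands for $\Delta^{2}$:

```python
import sympy as sp

Om, e, S, J, D2, g, g0, chi = sp.symbols(
    'Omega epsilon S J Delta2 gamma gamma0 chi', real=True)

# Secular matrix of the pumped problem (gamma_k real, g = |gamma_k|).
a = Om - 3*S*J
M = sp.Matrix([[a + e, S*J*g, -D2*g0,  D2*g],
               [S*J*g, a + e,  D2*g, -D2*g0],
               [-D2*g0, D2*g,  a - e,  S*J*g],
               [ D2*g, -D2*g0, S*J*g,  a - e]])
ep2 = (a + S*J*g)**2 - D2**2*(g0 - g)**2   # epsilon_+^2
em2 = (a - S*J*g)**2 - D2**2*(g0 + g)**2   # epsilon_-^2
check1 = sp.expand(M.det() - (e**2 - ep2)*(e**2 - em2))   # -> 0

# DMI case at the Dirac point; the stated spectrum is
# epsilon^2 = (SJ)^2 |gamma_k|^2 + chi^2 - 9*Delta^4.
N = sp.Matrix([[chi + e, S*J*g, -3*D2, 0],
               [S*J*g, -chi + e, 0, -3*D2],
               [-3*D2, 0, chi - e, S*J*g],
               [0, -3*D2, S*J*g, -chi - e]])
eps2 = (S*J*g)**2 + chi**2 - 9*D2**2
check2 = sp.expand(N.det() - (eps2 - e**2)**2)            # -> 0
```

The first determinant factorizes into $(\epsilon^{2}-\epsilon_{+}^{2})(\epsilon^{2}-\epsilon_{-}^{2})$, the second into a perfect square, confirming the doubly degenerate DMI spectrum with $\vert\gamma_{\bf k}\vert^{2}\approx \frac{3}{4}k^{2}$.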
In an unpumped ferromagnet, such a Dzyaloshinskii-Moriya interaction opens up a gap at the Dirac points in the magnon spectrum. Then, for the paired state of Dirac magnons to occur, the pumping should overcome this gap. \subsection{Ladder equation} \begin{figure}[h] \centerline{ \includegraphics[width=0.35\textwidth,height=0.15\textheight]{vertex.pdf} } \protect\caption{Graphical equation for the pairing interaction strength. Here the empty triangle stands for the initial pairing interaction strength $\Delta^2_{ij}$ defined in accordance with Eq. (\ref{secularSM}), $\Delta^2_{\mathrm{aa}}=\Delta^2_{\mathrm{bb}} =- \Delta^2 \gamma_{0}$, and $\Delta^2_{\mathrm{ab}} =(\Delta^2_{\mathrm{ba}})^{*} = \Delta^2\gamma_{\bf k}$. The black triangle is the intermediately renormalized pairing interaction strength, and the wavy lines stand for the interaction. The lined triangle is the overall renormalized pairing interaction strength.} \label{fig:ladderSM} \end{figure} The action describing the pump is \begin{align} iS_{\mathrm{pump}} = -i\int_{t} H_{\mathrm{pump}} \rightarrow & - i \int_{\{\epsilon\}} \int_{\{{\bf p}\}} \frac{J}{4} 3 \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf p}_{3};\epsilon_{3}} \delta_{{\bf p}_{1},-{\bf p}_{3}} \delta_{\epsilon_{1}-\Omega,\Omega-\epsilon_{3}} \label{pump1} \\ & + i \int_{\{\epsilon\}} \int_{\{{\bf p}\}} \frac{J}{4} \gamma_{{\bf p}_{1}} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf p}_{3};\epsilon_{3}} \delta_{{\bf p}_{1},-{\bf p}_{3}} \delta_{\epsilon_{1}-\Omega,\Omega-\epsilon_{3}} \label{pump2} \\ & - i \int_{\{\epsilon\}} \int_{\{{\bf p}\}} \frac{J}{4} 3 \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf p}_{3};\epsilon_{3}} \delta_{{\bf p}_{1},-{\bf p}_{3}}
\delta_{\epsilon_{1}-\Omega,\Omega-\epsilon_{3}} \label{pump3} \\ & + i \int_{\{\epsilon\}} \int_{\{{\bf p}\}} \frac{J}{4} \gamma_{{\bf p}_{1}}^{*} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf p}_{3};\epsilon_{3}} \delta_{{\bf p}_{1},-{\bf p}_{3}} \delta_{\epsilon_{1}-\Omega,\Omega-\epsilon_{3}} \label{pump4} \end{align} where by the right arrow we mean picking a particular term from the overall expression. Below, as an example, we wish to see how the structure of Eq. (\ref{pump1}) gets renormalized by the interactions. For that, we construct the ladder equation shown in Fig. \ref{fig:ladderSM}. It turns out that only the \begin{align}\label{interactionSM} iS_{\mathrm{interaction}} = -i\int_{t} H_{\mathrm{interaction}} \rightarrow -i \frac{J}{4} \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \end{align} part of the interaction can reproduce the part of the pump selected by us. Contraction of the interaction Eq. (\ref{interactionSM}) with the first term, namely Eq.
(\ref{pump1}), in the pump's Hamiltonian, gives the following expression \begin{align} & \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf p}_{3};\epsilon_{3}} \rangle \\ & = \langle \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf p}_{1};\epsilon_{1}} \rangle \langle \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf p}_{3};\epsilon_{3}} \rangle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} + \langle \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf p}_{1};\epsilon_{1}} \rangle \langle \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf p}_{3};\epsilon_{3}} \rangle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \\ & = - [ G^{\mathrm{K}}_{\beta\alpha}({\bf k}_{4};\omega_{4}) G^{\mathrm{R}}_{\alpha\alpha}({\bf k}_{2};\omega_{2})\delta_{{\bf k}_{4},{\bf p}_{1}}\delta_{\omega_{4},\epsilon_{1}} \delta_{{\bf k}_{2},{\bf p}_{3}}\delta_{\omega_{2},\epsilon_{3}} + G^{\mathrm{R}}_{\beta\alpha}({\bf k}_{4};\omega_{2}) G^{\mathrm{K}}_{\alpha\alpha}({\bf k}_{2};\omega_{4}) \delta_{{\bf k}_{2},{\bf p}_{1}}\delta_{\omega_{2},\epsilon_{1}} \delta_{{\bf k}_{4},{\bf p}_{3}}\delta_{\omega_{4},\epsilon_{3}} ] \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}}. \end{align} Contraction of the interaction Eq. (\ref{interactionSM}) with the second term in the pump's Hamiltonian, namely Eq. 
(\ref{pump2}), results in the following expression \begin{align} & \gamma_{{\bf p}_{1}} \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf p}_{3};\epsilon_{3}} \rangle \\ & = - \gamma_{{\bf p}_{1}} [ G^{\mathrm{K}}_{\alpha\alpha}({\bf k}_{2};\omega_{2}) G^{\mathrm{R}}_{\beta\beta}({\bf k}_{4};\omega_{4}) \delta_{{\bf k}_{2},{\bf p}_{1}}\delta_{\omega_{2},\epsilon_{1}} \delta_{{\bf k}_{4},{\bf p}_{3}}\delta_{\omega_{4},\epsilon_{3}} + G^{\mathrm{R}}_{\alpha\beta}({\bf k}_{2};\omega_{2}) G^{\mathrm{K}}_{\beta\alpha}({\bf k}_{4};\omega_{4}) \delta_{{\bf k}_{2},{\bf p}_{3}}\delta_{\omega_{2},\epsilon_{3}} \delta_{{\bf k}_{4},{\bf p}_{1}}\delta_{\omega_{4},\epsilon_{1}} ] \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}}. \end{align} Contraction of the interaction Eq. (\ref{interactionSM}) with the third term, namely Eq. 
(\ref{pump3}), in the pump's Hamiltonian, gives the following average \begin{align} & \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf p}_{3};\epsilon_{3}} \rangle \\ & = \langle \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf p}_{1};\epsilon_{1}} \rangle \langle \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf p}_{3};\epsilon_{3}} \rangle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} + \langle \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf p}_{1};\epsilon_{1}} \rangle \langle \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf p}_{3};\epsilon_{3}} \rangle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \\ & = - [ G^{\mathrm{K}}_{\beta\beta}({\bf k}_{4};\omega_{4}) G^{\mathrm{R}}_{\alpha\beta}({\bf k}_{2};\omega_{2})\delta_{{\bf k}_{4},{\bf p}_{1}}\delta_{\omega_{4},\epsilon_{1}} \delta_{{\bf k}_{2},{\bf p}_{3}}\delta_{\omega_{2},\epsilon_{3}} + G^{\mathrm{R}}_{\beta\beta}({\bf k}_{4};\omega_{2}) G^{\mathrm{K}}_{\alpha\beta}({\bf k}_{2};\omega_{4}) \delta_{{\bf k}_{2},{\bf p}_{1}}\delta_{\omega_{2},\epsilon_{1}} \delta_{{\bf k}_{4},{\bf p}_{3}}\delta_{\omega_{4},\epsilon_{3}} ] \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}}. \end{align} Contraction of the interaction Eq. (\ref{interactionSM}) with the fourth term in the pump's Hamiltonian, namely Eq.
(\ref{pump4}), results in the following expression \begin{align} & \gamma_{{\bf p}_{1}}^{*} \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf p}_{3};\epsilon_{3}} \rangle \\ & = - \gamma_{{\bf p}_{1}}^{*} [ G^{\mathrm{K}}_{\alpha\beta}({\bf k}_{2};\omega_{4}) G^{\mathrm{R}}_{\beta\alpha}({\bf k}_{4};\omega_{2}) \delta_{{\bf k}_{4},{\bf p}_{3}}\delta_{\omega_{4},\epsilon_{3}} \delta_{{\bf k}_{2},{\bf p}_{1}}\delta_{\omega_{2},\epsilon_{1}} + G^{\mathrm{R}}_{\alpha\alpha}({\bf k}_{2};\omega_{2}) G^{\mathrm{K}}_{\beta\beta}({\bf k}_{4};\omega_{4}) \delta_{{\bf k}_{2},{\bf p}_{3}}\delta_{\omega_{2},\epsilon_{3}} \delta_{{\bf k}_{4},{\bf p}_{1}}\delta_{\omega_{4},\epsilon_{1}} ] \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}}. 
\end{align} Summing all four contributions, we get \begin{align} & \langle(iS_{\mathrm{interaction}})(iS_{\mathrm{pump}} )\rangle \rightarrow \left( \frac{J}{4}\right)^2 \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \int_{\{\omega\}} \int_{\{{\bf k}\}} \gamma_{{\bf k}_{4}} \int_{\{\epsilon\}} \int_{\{{\bf p}\}} \delta_{{\bf k}_{1}-{\bf k}_{2},{\bf k}_{4}-{\bf k}_{3}} \delta_{\omega_{1}-\omega_{2},\omega_{4}-\omega_{3}} \delta_{{\bf p}_{1},-{\bf p}_{3}} \delta_{\epsilon_{1}-\Omega,\Omega-\epsilon_{3}} \\ & \times ( -3\langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf p}_{3};\epsilon_{3}} \rangle + \gamma_{{\bf p}_{1}} \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf p}_{3};\epsilon_{3}} \rangle \\ & -3 \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\beta;{\bf p}_{3};\epsilon_{3}} \rangle + \gamma_{{\bf p}_{1}}^{*} \langle \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \Psi^{\mathrm{cl}}_{\alpha;{\bf k}_{2};\omega_{2}} \Psi^{\mathrm{cl}}_{\beta;{\bf k}_{4};\omega_{4}} \bar{\Psi}^{\mathrm{cl}}_{\beta;{\bf p}_{1};\epsilon_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf p}_{3};\epsilon_{3}} \rangle ) \\ & = \left( \frac{J}{4}\right)^2 
\left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bigg\{ \int_{{\bf k}}\int_{\epsilon} 3\gamma_{-{\bf k}} \left[ G^{\mathrm{K}}_{\beta\alpha}(-{\bf k};\Omega- \epsilon) G^{\mathrm{R}}_{\alpha\alpha}({\bf k};\Omega+ \epsilon) + G^{\mathrm{R}}_{\beta\alpha}(-{\bf k};\Omega- \epsilon) G^{\mathrm{K}}_{\alpha\alpha}({\bf k};\Omega+ \epsilon) \right] \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - \int_{{\bf k}}\int_{\epsilon} \left[ \vert \gamma_{\bf k}\vert^2 G^{\mathrm{K}}_{\alpha\alpha}({\bf k};\Omega + \epsilon) G^{\mathrm{R}}_{\beta\beta}(-{\bf k};\Omega - \epsilon) + \gamma_{\bf k}^2 G^{\mathrm{K}}_{\beta\alpha}({\bf k};\Omega+ \epsilon) G^{\mathrm{R}}_{\alpha\beta}(-{\bf k};\Omega - \epsilon) \right] \bigg\} \\ & ~~~~~~~~~ \times \int_{\{ {\bf k}\}}\int_{\{ \omega \}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}} \\ & + \left( \frac{J}{4}\right)^2 \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \bigg\{ \int_{{\bf k}}\int_{\epsilon} 3\gamma_{-{\bf k}} \left[ G^{\mathrm{K}}_{\beta\beta}(-{\bf k};\Omega- \epsilon) G^{\mathrm{R}}_{\alpha\beta}({\bf k};\Omega+ \epsilon) + G^{\mathrm{R}}_{\beta\beta}(-{\bf k};\Omega- \epsilon) G^{\mathrm{K}}_{\alpha\beta}({\bf k};\Omega+ \epsilon) \right] \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - \int_{{\bf k}}\int_{\epsilon} \left[ \vert \gamma_{\bf k}\vert^2 G^{\mathrm{K}}_{\beta\beta}({\bf k};\Omega + \epsilon) G^{\mathrm{R}}_{\alpha\alpha}(-{\bf k};\Omega - \epsilon) + \gamma_{-\bf k}^2 G^{\mathrm{K}}_{\alpha\beta}({\bf k};\Omega+ \epsilon) G^{\mathrm{R}}_{\beta\alpha}(-{\bf k};\Omega - \epsilon) \right] \bigg\} \\ & ~~~~~~~~~ \times \int_{\{ {\bf k}\}}\int_{\{ \omega \}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}}. 
\end{align} It can be shown that the two terms simply double each other. We will use the identity \begin{align} G^{\mathrm{K}}({\bf k};\epsilon) = G^{\mathrm{R}}({\bf k} ;\epsilon){\cal F}_{\epsilon}- {\cal F}_{\epsilon}G^{\mathrm{A}}({\bf k}; \epsilon ) \end{align} together with a generalization of the identity $G^{\mathrm{R}}({\bf k};\epsilon)- G^{\mathrm{A}}({\bf k};\epsilon) = - 2\pi i \delta(\epsilon-\epsilon_{\bf k})$ to the honeycomb lattice. \subsubsection{Case of $\Omega = 3S\tilde{J}$} Let us calculate the step of the ladder for $\Omega = 3S\tilde{J}$. Recall that $\tilde{J}=J\left[1-\frac{\pi}{4S}\left( \frac{T}{3SJ}\right)^2 \right]$. The first integral reads \begin{align} & \int_{{\bf k}}\int_{\epsilon} 3\gamma_{-{\bf k}} \left[ G^{\mathrm{K}}_{\beta\alpha}(-{\bf k};\Omega- \epsilon) G^{\mathrm{R}}_{\alpha\alpha}({\bf k};\Omega+ \epsilon) + G^{\mathrm{R}}_{\beta\alpha}(-{\bf k};\Omega- \epsilon) G^{\mathrm{K}}_{\alpha\alpha}({\bf k};\Omega+ \epsilon) \right] \\ & = \frac{i}{2} \int_{{\bf k}} 3\vert\gamma_{{\bf k}}\vert \left[ \frac{{\cal F}_{\epsilon_{+;{\bf k}}} }{2\Omega - 6S\tilde{J} - 2S\tilde{J}\vert \gamma_{\bf k} \vert + i0} - \frac{{\cal F}_{\epsilon_{-;{\bf k}}} }{2\Omega - 6S\tilde{J} + 2S\tilde{J}\vert \gamma_{\bf k} \vert + i0} \right] = -\frac{i}{4S\tilde{J}} \int_{\bf k} 3\left[ {\cal F}_{\epsilon_{+;{\bf k}}} + {\cal F}_{\epsilon_{-;{\bf k}}} \right]. \end{align} Here and below $\epsilon_{\pm;{\bf k}} =\tilde{J}S\left( 3 \pm \vert \gamma_{\bf k}\vert \right)$ are the unperturbed magnon energies.
The second integral reads \begin{align} & \int_{{\bf k}} \int_{\epsilon} \left[ \vert \gamma_{\bf k}\vert^2 G^{\mathrm{K}}_{\alpha\alpha}({\bf k};\Omega + \epsilon) G^{\mathrm{R}}_{\beta\beta}(-{\bf k};\Omega - \epsilon) + \gamma_{\bf k}^2 G^{\mathrm{K}}_{\beta\alpha}({\bf k};\Omega+ \epsilon) G^{\mathrm{R}}_{\alpha\beta}(-{\bf k};\Omega - \epsilon) \right] \\ & = - \frac{i}{2} \int_{{\bf k}} \vert\gamma_{{\bf k}}\vert^2 \left[ \frac{{\cal F}_{\epsilon_{+;{\bf k}}} }{2\Omega - 6S\tilde{J} - 2S\tilde{J}\vert \gamma_{\bf k} \vert + i0} + \frac{{\cal F}_{\epsilon_{-;{\bf k}}} }{2\Omega - 6S\tilde{J} + 2S\tilde{J}\vert \gamma_{\bf k} \vert + i0} \right] = \frac{i}{4S\tilde{J}} \int_{\bf k} \vert\gamma_{{\bf k}}\vert \left[ {\cal F}_{\epsilon_{+;{\bf k}}} - {\cal F}_{\epsilon_{-;{\bf k}}} \right]. \end{align} Summing the two, we get \begin{align} & \int_{{\bf k}}\int_{\epsilon} 3\gamma_{-{\bf k}} \left[ G^{\mathrm{K}}_{\beta\alpha}(-{\bf k};\Omega- \epsilon) G^{\mathrm{R}}_{\alpha\alpha}({\bf k};\Omega+ \epsilon) + G^{\mathrm{R}}_{\beta\alpha}(-{\bf k};\Omega- \epsilon) G^{\mathrm{K}}_{\alpha\alpha}({\bf k};\Omega+ \epsilon) \right] \\ - & \int_{{\bf k}}\int_{\epsilon} \left[ \vert \gamma_{\bf k}\vert^2 G^{\mathrm{K}}_{\alpha\alpha}({\bf k};\Omega + \epsilon) G^{\mathrm{R}}_{\beta\beta}(-{\bf k};\Omega - \epsilon) + \gamma_{\bf k}^2 G^{\mathrm{K}}_{\beta\alpha}({\bf k};\Omega+ \epsilon) G^{\mathrm{R}}_{\alpha\beta}(-{\bf k};\Omega - \epsilon) \right] \\ & = - \frac{i}{4S\tilde{J}} \int_{\bf k}\left[ \left( 3+\vert \gamma_{\bf k} \vert \right){\cal F}_{\epsilon_{+;{\bf k}}} + \left( 3-\vert \gamma_{\bf k} \vert \right){\cal F}_{\epsilon_{-;{\bf k}}}\right] \\ & \approx -\frac{i}{4S\tilde{J}} \left[ 6 +\frac{\pi}{3} \left(\frac{T}{S\tilde{J}}\right)^2 \right], \end{align} where \begin{align} \int_{\bf k} \left[ \left( 3+\vert \gamma_{\bf k} \vert \right){\cal F}_{\epsilon_{+;{\bf k}}} + \left( 3-\vert \gamma_{\bf k} \vert \right){\cal F}_{\epsilon_{-;{\bf k}}}\right] &
\approx \int_{\bf k} \left[ 6 + 2 \frac{3-\vert \gamma_{\bf k} \vert}{e^{\frac{S\tilde{J}(3-\vert \gamma_{\bf k} \vert)}{T}} - 1} \right] = 6 + \frac{1}{8\pi}\left(\frac{4T}{S\tilde{J}}\right)^2 \int_{0}^{\infty} \frac{zdz}{e^{z}-1} \\ & = 6 + 3\pi\left(\frac{T}{3S\tilde{J}}\right)^2 , \end{align} where $\int_{\bf k} 6 =\frac{6}{(2\pi)^2} \int_{0}^{2\pi}dk_{x}\int_{0}^{2\pi}dk_{y}= 6$ is an integral over one period of the magnon dispersion, which is defined by $\gamma_{\bf k} = 2e^{i\frac{k_{x}}{2\sqrt{3}}}\cos\left( \frac{k_{y}}{2}\right)+ e^{-i\frac{k_{x}}{\sqrt{3}}}$. The integral counts all magnon states available for pairing. The second term above can be neglected, as it is always small for $T \ll SJ$. We used \begin{align} {\cal F}_{\epsilon} = \coth\left(\frac{\epsilon}{2T}\right) = 1 + \frac{2}{e^{\frac{\epsilon}{T}}-1}, \end{align} and \begin{align} \left( 3+\vert \gamma_{\bf k} \vert \right){\cal F}_{\epsilon_{+;{\bf k}}} + \left( 3-\vert \gamma_{\bf k} \vert \right){\cal F}_{\epsilon_{-;{\bf k}}} = 6 + \frac{2\left( 3+\vert \gamma_{\bf k} \vert \right)}{e^{\frac{S\tilde{J} }{T}\left( 3+\vert \gamma_{\bf k} \vert \right)}-1} + \frac{2\left( 3-\vert \gamma_{\bf k} \vert \right)}{e^{\frac{S\tilde{J}}{T}\left( 3-\vert \gamma_{\bf k} \vert \right)}-1} \approx 6+\frac{2\left( 3-\vert \gamma_{\bf k} \vert \right)}{e^{\frac{S\tilde{J}}{T}\left( 3-\vert \gamma_{\bf k} \vert \right)}-1} , \end{align} which is a natural approximation, as only the low-energy magnons with the $\epsilon_{-;{\bf k}}$ dispersion can contribute to the integral. The $\epsilon_{+;{\bf k}}$ contributions are exponentially suppressed at low temperatures.
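The temperature correction above rests on two standard facts: the identity for ${\cal F}_{\epsilon}$ and the Bose-type integral $\int_{0}^{\infty} z\,dz/(e^{z}-1) = \pi^{2}/6$. Both are easy to confirm numerically; the following standalone check (a sketch, not part of the derivation) uses a simple midpoint quadrature:

```python
import math

def bose_integral(zmax=60.0, n=200000):
    """Midpoint-rule quadrature of int_0^zmax z/(e^z - 1) dz; the tail beyond zmax is negligible."""
    h = zmax / n
    return sum((i + 0.5) * h / (math.exp((i + 0.5) * h) - 1.0) * h for i in range(n))

# The identity F_eps = coth(eps/2T) = 1 + 2/(e^{eps/T} - 1), written with x = eps/T:
x = 1.7
print(1.0 / math.tanh(x / 2.0), 1.0 + 2.0 / (math.exp(x) - 1.0))  # the two agree

# The Bose-type integral equals pi^2/6:
print(bose_integral(), math.pi ** 2 / 6)
```

The quadrature reproduces $\pi^{2}/6 \approx 1.6449$ to better than $10^{-5}$ with these settings.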
Then we have for the step of the ladder, \begin{align} \langle(iS_{\mathrm{interaction}})(iS_{\mathrm{pump}} )\rangle & \approx -2 \frac{i}{4S\tilde{J}} \left( \frac{J}{4}\right)^2 \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \left[ 6 + 3\pi\left(\frac{T}{3S\tilde{J}}\right)^2 \right] \int_{\{ {\bf k}\}}\int_{\{ \omega \}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}}. \end{align} Summing the original pumping term, the first step of the ladder, and iterating the steps further, we get \begin{align} & iS_{\mathrm{pump}} + \langle(iS_{\mathrm{interaction}})(iS_{\mathrm{pump}} )\rangle \\ = & -i3\frac{J}{4} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \left[ 1+ \frac{1}{4S} \frac{J}{\tilde{J}} + \frac{\pi }{8S }\frac{J}{\tilde{J}} \left(\frac{T}{3S\tilde{J}}\right)^2 \right] \int_{\{ {\bf k}\}}\int_{\{ \omega \}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}} \\ & \rightarrow -i3\frac{J}{4} \left(\frac{\Gamma \sqrt{S}}{3SJ}\right)^2 \frac{1}{ 1- \frac{1}{4S} \frac{J}{\tilde{J}} - \frac{\pi }{8S }\frac{J}{\tilde{J}}\left(\frac{T}{3S\tilde{J}}\right)^2 } \int_{\{ {\bf k}\}}\int_{\{ \omega \}} \bar{\Psi}^{\mathrm{cl}}_{\alpha;{\bf k}_{1};\omega_{1}} \bar{\Psi}^{\mathrm{q}}_{\alpha;{\bf k}_{3};\omega_{3}} \delta_{{\bf k}_{1},-{\bf k}_{3}} \delta_{\omega_{1}-\Omega,\Omega-\omega_{3}}, \end{align} which clearly shows an enhancement of the pairing. \subsubsection{Case of $\Omega \neq 3SJ$} Here we demonstrate that for $\Omega \neq 3SJ$ each step of the ladder acquires an imaginary part. We also show that the pumping is suppressed by the rescattering processes described by the ladder as the frequency approaches $6SJ$.
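For the resonant case $\Omega = 3S\tilde{J}$ above, the replacement of the partial sum $1 + x$ by $1/(1-x)$ is the usual geometric resummation of the iterated ladder. As a purely illustrative numerical sketch (the values of $S$ and $T/(3S\tilde{J})$ below are assumed, not taken from the text, and $J/\tilde{J}$ is set to $1$):

```python
import math

# Assumed illustrative parameters: spin S and reduced temperature t = T/(3*S*Jt).
S, t = 5.0, 0.3
x = 1.0 / (4.0 * S) + (math.pi / (8.0 * S)) * t ** 2  # dimensionless ladder step

partial = sum(x ** n for n in range(50))  # 1 + x + x^2 + ... (iterated ladder)
closed = 1.0 / (1.0 - x)                  # resummed enhancement factor
print(partial, closed)                    # agree since |x| < 1; both exceed 1
```

The resummed factor exceeds unity whenever $0 < x < 1$, which is the enhancement of the pairing noted in the text.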
To see the general tendency of the renormalization of the pairing strength away from the Dirac points, we disregard Hartree-Fock corrections to the magnon dispersion. We have for the step of the ladder, \begin{align} & M(\Omega) \equiv \int_{{\bf k}}\int_{\epsilon} 3\gamma_{-{\bf k}} \left[ G^{\mathrm{K}}_{\beta\alpha}(-{\bf k};\Omega- \epsilon) G^{\mathrm{R}}_{\alpha\alpha}({\bf k};\Omega+ \epsilon) + G^{\mathrm{R}}_{\beta\alpha}(-{\bf k};\Omega- \epsilon) G^{\mathrm{K}}_{\alpha\alpha}({\bf k};\Omega+ \epsilon) \right] \\ & - \int_{{\bf k}}\int_{\epsilon} \left[ \vert \gamma_{\bf k}\vert^2 G^{\mathrm{K}}_{\alpha\alpha}({\bf k};\Omega + \epsilon) G^{\mathrm{R}}_{\beta\beta}(-{\bf k};\Omega - \epsilon) + \gamma_{\bf k}^2 G^{\mathrm{K}}_{\beta\alpha}({\bf k};\Omega+ \epsilon) G^{\mathrm{R}}_{\alpha\beta}(-{\bf k};\Omega - \epsilon) \right] \\ & = \frac{i}{2} \int_{{\bf k}} \vert\gamma_{{\bf k}}\vert \left[ \frac{(3+\vert\gamma_{{\bf k}}\vert){\cal F}_{\epsilon_{+;{\bf k}}} }{2\Omega - 6SJ - 2SJ\vert \gamma_{\bf k} \vert + i0} - \frac{(3-\vert\gamma_{{\bf k}}\vert){\cal F}_{\epsilon_{-;{\bf k}}} }{2\Omega - 6SJ + 2SJ\vert \gamma_{\bf k} \vert + i0} \right] \\ & = \frac{i}{4} \mathrm{PV} \int_{{\bf k}} \vert\gamma_{{\bf k}}\vert \left[ \frac{(3+\vert\gamma_{{\bf k}}\vert){\cal F}_{\epsilon_{+;{\bf k}}} }{\zeta- SJ\vert \gamma_{\bf k} \vert } - \frac{(3-\vert\gamma_{{\bf k}}\vert){\cal F}_{\epsilon_{-;{\bf k}}} }{\zeta + SJ\vert \gamma_{\bf k} \vert } \right] \\ & +\frac{i}{4}\left(-\frac{i\pi}{2}\right) \int_{{\bf k}} \vert\gamma_{{\bf k}}\vert \delta( \zeta- SJ\vert \gamma_{\bf k} \vert ) (3+\vert\gamma_{{\bf k}}\vert){\cal F}_{\epsilon_{+;{\bf k}}} -\frac{i}{4}\left(-\frac{i\pi}{2}\right) \int_{{\bf k}} \vert\gamma_{{\bf k}}\vert \delta( \zeta +SJ\vert \gamma_{\bf k} \vert ) (3-\vert\gamma_{{\bf k}}\vert){\cal F}_{\epsilon_{-;{\bf k}}} , \end{align} where $\mathrm{PV}$ is the principal value of the integral, and where $\zeta=\Omega-3SJ$. 
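The split into principal-value and delta-function pieces in the last step is the Sokhotski--Plemelj identity (used together with $\delta(2x)=\delta(x)/2$ to absorb the factor of $2$ in the denominators):

```latex
\frac{1}{x \pm i0} = \mathrm{PV}\,\frac{1}{x} \mp i\pi\,\delta(x) .
```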
The imaginary part for $\zeta >0$ is evaluated as \begin{align} & -\frac{i\pi}{2} \int_{{\bf k}} \vert\gamma_{{\bf k}}\vert \delta( \zeta- SJ\vert \gamma_{\bf k} \vert ) (3+\vert\gamma_{{\bf k}}\vert){\cal F}_{\epsilon_{+;{\bf k}}} +\frac{i\pi}{2} \int_{{\bf k}} \vert\gamma_{{\bf k}}\vert \delta( \zeta +SJ\vert \gamma_{\bf k} \vert ) (3-\vert\gamma_{{\bf k}}\vert){\cal F}_{\epsilon_{-;{\bf k}}} \\ & = -\frac{i\pi}{2}\frac{\zeta}{(SJ)^2}\left(3+\frac{\zeta}{SJ}\right) {\cal F}\left(3SJ+\zeta\right)\int_{\bf k}\delta( \zeta -SJ\vert \gamma_{\bf k} \vert ), \end{align} where we kept the integral as it is. The imaginary part is nonzero and acts to weaken the pairing between magnons. Let us estimate the step of the ladder when the pump frequency is $\Omega = 6SJ - \alpha$ and $\alpha$ is small. Then $\zeta = 3SJ - \alpha$; we approximate $\vert \gamma_{\bf k}\vert \approx 3 -\frac{k^2}{4}$, and we write for the step of the ladder \begin{align} & \mathrm{Im}\left[M(\Omega)\right] \approx \frac{1}{4} \mathrm{PV} \int_{{\bf k}} \vert\gamma_{{\bf k}}\vert \frac{(3+\vert\gamma_{{\bf k}}\vert){\cal F}_{\epsilon_{+;{\bf k}}} }{\zeta- SJ\vert \gamma_{\bf k} \vert } \approx \frac{9}{2} \mathrm{PV} \int_{{\bf k}} \frac{1}{-\alpha + SJ\frac{k^2}{4} } = \frac{9}{2\pi SJ} \mathrm{PV}\int_{0}^{\Lambda^2} \frac{dz}{z -\alpha \frac{4}{SJ}} \approx \frac{9}{2\pi SJ} \ln\left(\frac{SJ \Lambda^2 }{4\alpha} \right), \\ & \mathrm{Re}\left[ M(\Omega)\right] \approx \frac{\pi}{4} \int_{{\bf k}} \vert\gamma_{{\bf k}}\vert \delta( \zeta- SJ\vert \gamma_{\bf k} \vert ) (3+\vert\gamma_{{\bf k}}\vert){\cal F}_{\epsilon_{+;{\bf k}}} \approx \frac{9}{2SJ}. \end{align} We then get for the renormalization of the pairing \begin{align} \Delta^2 \rightarrow \frac{\Delta^2}{1+ \frac{3}{16\pi S} \ln\left(\frac{SJ \Lambda^2 }{4\alpha} \right) + i\frac{3}{16S}}.
\end{align} Importantly, for frequencies away from the Dirac points, the structure of the renormalization due to the rescattering processes changes drastically. Namely, the sign of each ladder step changes compared to the Dirac-magnon case and, as a result, the divergence cannot occur. Moreover, away from $\Omega = 6SJ$, the pairing is only weakly suppressed by the rescattering processes. However, when the pump frequency $\Omega$ approaches $6SJ$, i.e., $\alpha \rightarrow 0$, the pairing vanishes. \subsubsection{Example: shifting the rescattered field away for $\Omega = 6SJ$} When the pump frequency is $\Omega = 6SJ$, there is resonant absorption of magnons. This can be seen from $ {\cal L}^{\mathrm{R}/\mathrm{A}}_{\alpha\beta,0,\Omega}{\cal L}^{\mathrm{R}/\mathrm{A}}_{\beta\alpha,0,\Omega} - {\cal L}^{\mathrm{R}/\mathrm{A}}_{\beta\beta,0,\Omega}{\cal L}^{\mathrm{R}/\mathrm{A}}_{\alpha\alpha,0,\Omega} = 0 \mp i0$ for non-interacting magnons. Upon inserting the magnon lifetime at $\omega = 6SJ$ and ${\bf k} = 0$, this quantity becomes finite, imaginary, and can be large. Let us denote it \begin{align} {\cal L}^{\mathrm{R}/\mathrm{A}}_{\alpha\beta,0,\Omega}{\cal L}^{\mathrm{R}/\mathrm{A}}_{\beta\alpha,0,\Omega} - {\cal L}^{\mathrm{R}/\mathrm{A}}_{\beta\beta,0,\Omega}{\cal L}^{\mathrm{R}/\mathrm{A}}_{\alpha\alpha,0,\Omega} = \mp \frac{i}{2\tau_{6}}(6SJ \pm \frac{i}{2\tau_{6}}).
\end{align} Also \begin{align} & {\cal L}^{\mathrm{R}/\mathrm{A}}_{\beta\alpha,0,\Omega}- {\cal L}^{\mathrm{R}/\mathrm{A}}_{\beta\beta,0,\Omega} = \mp \frac{i}{2\tau_{6}}, \\ & {\cal L}^{\mathrm{R}/\mathrm{A}}_{\alpha\beta,0,\Omega}- {\cal L}^{\mathrm{R}/\mathrm{A}}_{\alpha\alpha,0,\Omega} = \mp \frac{i}{2\tau_{6}}, \end{align} and, hence, we get \begin{align} \frac{{\cal L}^{\mathrm{R}/\mathrm{A}}_{\beta\alpha,0,\Omega}- {\cal L}^{\mathrm{R}/\mathrm{A}}_{\beta\beta,0,\Omega}}{{\cal L}^{\mathrm{R}/\mathrm{A}}_{\alpha\beta,0,\Omega}{\cal L}^{\mathrm{R}/\mathrm{A}}_{\beta\alpha,0,\Omega} - {\cal L}^{\mathrm{R}/\mathrm{A}}_{\beta\beta,0,\Omega}{\cal L}^{\mathrm{R}/\mathrm{A}}_{\alpha\alpha,0,\Omega} } = \frac{1}{6SJ\pm \frac{i}{2\tau_{6}}}. \end{align} Therefore, the shift of the $\omega = 6SJ$, ${\bf k}=0$ fields reads as \begin{align} & \bar{\Psi}^{\mathrm{cl}}_{n;0;6SJ} \rightarrow \bar{\Psi}^{\mathrm{cl}}_{n;0;6SJ} + \frac{\Gamma \sqrt{S}}{6SJ-\frac{i}{2\tau_{6}}}, \\ & \Psi^{\mathrm{cl}}_{n;0;6SJ} \rightarrow \Psi^{\mathrm{cl}}_{n;0;6SJ} + \frac{\Gamma \sqrt{S}}{6SJ+\frac{i}{2\tau_{6}}}. \end{align} In the physically relevant scenario, $6SJ \gg \frac{1}{2\tau_{6}}$; thus, we can neglect the inverse lifetime and recover the claim made in the Main Text. \end{widetext} \end{document}
\section{Introduction} \label{Introduction} The spin-$\frac{1}{2}$ Heisenberg antiferromagnet (HAFM) has long been studied as a simple example of a strongly interacting quantum many-body system~\cite{manousakis91}. Recently, it has attracted considerable attention in the context of the copper oxide high-temperature superconductors~\cite{anderson87,zhang88}. The Hamiltonian of the HAFM is given by \begin{equation} H=J\sum_{\langle i,j \rangle}\vec{S}_{i}\cdot\vec{S}_{j} \equiv J\sum_{\langle i,j \rangle} \left[ S_{i}^{z}S_{j}^{z} + \frac{1}{2}\left(S_{i}^{+}S_{j}^{-}+S_{i}^{-}S_{j}^{+}\right) \right] \end{equation} where $J$ takes positive values, $\langle i,j \rangle$ refers to nearest neighbor pairs, $\vec{S}_{i}$ is the spin operator for a spin-$\frac{1}{2}$ located at site $i$, and $S_{i}^{+}$ and $S_{i}^{-}$ are the corresponding raising and lowering operators. The operator $S_{i}^{+}S_{j}^{-}+S_{i}^{-}S_{j}^{+}$ exchanges antiparallel spins, but vanishes when applied to a pair of parallel spins. The terms of this type produce off-diagonal matrix elements equal to $\frac{J}{2}$ between basis states (i.e.\ spin configurations) that are related by a single exchange of nearest neighbor spins. The terms of the form $S_{i}^{z}S_{j}^{z}$ combine to give a diagonal matrix element for each state equal to $\frac{J}{4}$ times the difference between the number of parallel nearest neighbor spins and the number of anti-parallel nearest neighbor spins in that configuration. Despite the simplicity of the model, no analytic solutions have been found for nontrivial structures except in one dimension~\cite{manousakis91}. Since the Hamiltonian is invariant under uniform rotations of the spins, one can choose its eigenstates to be simultaneous eigenstates of the operators $\vec{S}_{TOT}^{2}$ and $S_{TOT}^{z}$, where $\vec{S}_{TOT}$ is the total spin.
For a system containing an even number of spins $n$, whatever the ground state value of $\vec{S}_{TOT}^{2}$, there is always a ground state with $S_{TOT}^{z} = 0$. Therefore, a ground state can always be found in the subspace spanned by the \begin{equation} N_{total} = \frac{n!}{(n/2)!(n/2)!} \end{equation} basis states with an equal number of up and down spins. The generalization to an odd number of spins is straightforward. Since the Hamiltonian is real, the ground state eigenvector can be chosen to be real. In this paper, we solve this model for a series of structures that embody the basic structural features of the fullerenes, which are spherical shells of threefold coordinated carbon atoms arranged in pentagonal and hexagonal rings. It can be shown that every such structure must have twelve pentagonal faces~\cite{growth_and_form}. The total number of sites can be varied by changing the number of hexagons. The smallest such structure contains no hexagons and has $20$ sites. Figure \ref{frustrated_structures} shows several fullerene related structures that we discuss in this paper. We shall refer to the structures in Fig.~\ref{frustrated_structures}~(a)--(e) as F-20, F-24, F-26, F-28, and F-32, respectively. For simplicity, we shall treat all of the bonds in these structures as equivalent even though in actual carbon clusters they may differ. On a pentagonal ring, it is impossible to arrange all spins in an antiferromagnetic pattern. This introduces frustration in the classical ground state where nearest neighbor spins would prefer to be antiparallel. For comparison, we also study several unfrustrated structures that are derived from the honeycomb lattice by applying periodic boundary conditions. These structures are shown in Fig.~\ref{hexagonal_structures}~(a)--(c). We refer to these structures as H-18, H-24, and H-26, respectively. These structures have toroidal topology rather than the spherical topology of the frustrated structures. 
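The matrix elements and the $S_{TOT}^{z}=0$ basis described above can be illustrated with a small sketch (illustrative code, not from the paper): build the Hamiltonian for a 4-site ring with $J=1$ using bit representations of the spin configurations, and extract the ground state energy by power iteration on a shifted matrix. For this ring the exact ground state energy is $-2J$.

```python
import random

J = 1.0
n = 4
bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-site ring

# S^z_TOT = 0 basis: bit i set means spin up at site i; keep states with n/2 up spins.
basis = [s for s in range(1 << n) if bin(s).count("1") == n // 2]
index = {s: k for k, s in enumerate(basis)}
dim = len(basis)

# Diagonal: J/4 * (parallel - antiparallel) bonds.
# Off-diagonal: J/2 for each exchange of antiparallel nearest-neighbor spins.
H = [[0.0] * dim for _ in range(dim)]
for k, s in enumerate(basis):
    for (i, j) in bonds:
        if (s >> i) & 1 == (s >> j) & 1:
            H[k][k] += J / 4.0
        else:
            H[k][k] -= J / 4.0
            flipped = s ^ (1 << i) ^ (1 << j)  # exchange the two spins
            H[index[flipped]][k] += J / 2.0

# Lowest eigenvalue via power iteration on c*I - H, with c from a Gershgorin bound
# so that the dominant eigenvector of c*I - H is the ground state of H.
c = max(abs(H[k][k]) + sum(abs(H[k][l]) for l in range(dim) if l != k)
        for k in range(dim))
rng = random.Random(0)
v = [rng.random() for _ in range(dim)]
for _ in range(500):
    w = [c * v[k] - sum(H[k][l] * v[l] for l in range(dim)) for k in range(dim)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
lam = sum(v[k] * (c * v[k] - sum(H[k][l] * v[l] for l in range(dim)))
          for k in range(dim))
E0 = c - lam
print(E0)  # -> approximately -2.0
```

The same bookkeeping carries over to the fullerene clusters; only the bond list grows and the plain power iteration would be replaced by a Lanczos solver.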
Table \ref{struct_props} summarizes the geometrical features of the structures that we investigate. A group of powerful techniques used to investigate quantum many-body systems such as the HAFM is based on quantum Monte-Carlo methods. In systems with frustration, these methods either require the summation of a very large number of terms with alternating signs (known as the sign problem) or depend on a ``guiding'' wavefunction which must be properly guessed. Here we use a different approach based on exact diagonalization of the Hamiltonian matrix. This approach has the advantage of not being affected by the sign problem, but is limited to rather small system sizes because the number of states in the Hilbert space grows exponentially with the size of the system. For example, in Table \ref{struct_props} we list the number of states in the $S_{TOT}^{z} = 0$ subspace for each cluster that we investigate. Thus, it takes a major increase in either computer power or efficiency of the algorithm to get a modest increase in the size of system that can be investigated. Using exact diagonalization techniques, modern computers can handle systems with $\leq 36$ spins. A $36$ spin system has about $9$ billion basis states in the subspace with $S_{TOT}^{z} = 0$. The Hamiltonian matrix is sparse and has only about $300$ billion nonzero entries for this size system. Memory constraints make it difficult to store this matrix. The symmetries of the structure must be used to reduce the size of the basis space in order to make calculations tractable. The usefulness of symmetrization depends on how many mutually commuting symmetry operations can be found. Symmetry is most useful for lattices where all translations commute, such as the square lattice. Even noncommuting symmetries could be easily exploited if the ground state were known to transform according to the identity representation of the symmetry group. This cannot be assumed to be the case for the frustrated HAFM.
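The subspace dimensions quoted above follow directly from the binomial formula; a quick stdlib check (cluster sizes taken from the text):

```python
import math

def sz0_dim(n):
    """Number of S^z_TOT = 0 basis states for n spin-1/2 sites (n even)."""
    return math.comb(n, n // 2)

for n in (18, 20, 24, 26, 28, 32, 36):
    print(n, sz0_dim(n))
# n = 36 gives 9 075 135 300 states: "about 9 billion".
```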
To our knowledge, the largest structure that has been solved using exact diagonalization and taking advantage of all of its symmetries is the 36-site square lattice~\cite{schulz92}. It would be very difficult to find the ground state of a structure with the same size and a lower number of commuting symmetries without approximation. One way to manage larger systems is to restrict the wavefunction to the space spanned by a subset of the basis states. In this approach, the problem is transformed into finding a subspace that accurately approximates the full-space result, but that is small enough to be handled computationally. In this paper, we variationally optimize the truncation of the Hilbert space, and exactly diagonalize within the truncated space. The rest of the paper is organized as follows: section \ref{Choice of Truncation} contains a justification for our choice of optimal wavefunction and truncated space, section \ref{Results} discusses the ground state properties that we obtain with this approach for a series of frustrated and honeycomb clusters, and section \ref{Conclusion} summarizes our conclusions. \section{Choice of Truncation} \label{Choice of Truncation} Consider a truncation of the space to the basis states $\left\{|\alpha_{1}\rangle,|\alpha_{2}\rangle,\ldots, |\alpha_{N_{trunc}}\rangle\right\}$ where $N_{trunc} < N_{total}$. Define a truncated Hamiltonian that consists of those elements of the original Hamiltonian that connect states retained in the truncated space. Let $E(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{N_{trunc}}\})$ denote the smallest eigenvalue of the truncated Hamiltonian. We define the optimal truncation as the one that minimizes $E$ with respect to all sets with $N_{trunc}$ basis states. 
By the variational principle, the ground state wavefunction of the corresponding truncated Hamiltonian is the wavefunction that, subject to the constraint of vanishing for all but $N_{trunc}$ states, minimizes the expectation of the full Hamiltonian. Therefore, $E$ for the optimal truncation is the smallest possible variational upper bound on the true ground state energy that can be obtained using trial wavefunctions that have no more than $N_{trunc}$ nonzero components. The minimization over sets of basis states is accomplished using a stochastic search: An initial truncation is chosen and the ground state energy of the corresponding truncated Hamiltonian is found using the Lanczos method~\cite{sorensen92}. Moves in the stochastic search consist of adding states to the space and eliminating others while keeping the overall number of states fixed. The Lanczos method is used at each step to find the ground state energy for the new truncation, and the move is accepted or rejected according to a Metropolis algorithm~\cite{metropolis53,numerical_recipes}. This procedure is repeated until all new moves are rejected, in which case a minimum of $E(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{N_{trunc}}\})$ has been found. If this is the global minimum, the resulting truncation is the optimal truncation. We have found no evidence that the procedure gets trapped in local minima. For systems that are small enough that the full problem can be solved, we have also applied an alternative truncation procedure for purposes of comparison with our variational scheme. This consists of keeping only the basis states that have the largest weights in the full-space ground state solution and varying the cutoff weight below which states are excluded from the basis. The energy obtained from this alternative procedure must be greater than or equal to the variational result, but the wavefunction from this alternative procedure is expected to be closer to the true ground state.
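A toy version of the stochastic search can be sketched as follows (everything here — the random symmetric matrix standing in for the Hamiltonian, the annealing schedule, and the move size — is an illustrative assumption, not the authors' implementation): keep $k$ of $N$ basis states, score a truncation by the lowest eigenvalue of the corresponding submatrix, and accept or reject single-state swaps with the Metropolis rule.

```python
import math, random

rng = random.Random(1)
N, k = 8, 4

# Toy symmetric matrix standing in for the Hamiltonian on N basis states.
A = [[0.0] * N for _ in range(N)]
for i in range(N):
    A[i][i] = rng.uniform(-1.0, 1.0)
    for j in range(i + 1, N):
        A[i][j] = A[j][i] = rng.uniform(-0.5, 0.5)

def lowest_eig(subset):
    """Lowest eigenvalue of the submatrix of A on `subset` (shifted power iteration)."""
    idx = sorted(subset)
    m = len(idx)
    c = max(sum(abs(A[i][j]) for j in idx) for i in idx) + 1.0  # Gershgorin shift
    v = [1.0 / math.sqrt(m)] * m
    for _ in range(400):
        w = [c * v[a] - sum(A[idx[a]][idx[b]] * v[b] for b in range(m)) for a in range(m)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return c - sum(v[a] * (c * v[a] - sum(A[idx[a]][idx[b]] * v[b] for b in range(m)))
                   for a in range(m))

subset = set(range(k))
energy = initial = lowest_eig(subset)
best = energy
for step in range(300):
    T = 0.5 * 0.98 ** step                     # simple annealing schedule (assumed)
    out = rng.choice(sorted(subset))           # state to eliminate
    inn = rng.choice(sorted(set(range(N)) - subset))  # state to add
    trial = (subset - {out}) | {inn}
    e_new = lowest_eig(trial)
    if e_new < energy or rng.random() < math.exp((energy - e_new) / T):
        subset, energy = trial, e_new
    best = min(best, energy)
print(best)  # never above the starting energy
```

In the paper the submatrix eigenvalue would come from a Lanczos solver rather than this plain power iteration, and the moves are biased rather than uniform.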
Therefore, this alternative procedure might be expected to yield better results for correlation functions. A comparison of results obtained using these two independent methods helps to assure that the variational procedure is converging properly and shows that the procedure produces reasonable correlation functions. In order to optimize the variational search, it is necessary to bias the selection of the states to be added to, or eliminated from, the truncated basis during each step. The procedure proposed here is analogous to force-bias Monte Carlo. In our case, the equivalent of the force in a particular direction is the difference between the energy when a particular state $|\beta\rangle$ is included in a truncation and the energy when the state is not included in the truncation: \begin{equation} \nabla_{\beta}E(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{N_{trunc}}\}) \equiv E(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{N_{trunc}},\beta\}) - E(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{N_{trunc}}\}). \end{equation} In force-bias Monte Carlo, the force is a function of the configuration of the system, and correspondingly $\nabla_{\beta}E$ is a function of the set of states included in the truncation. In our case, since each state is either included or not included, we must take \begin{equation} \nabla_{\beta}E(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{N_{trunc}},\beta\}) \equiv \nabla_{\beta}E(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{N_{trunc}}\}). \end{equation} $\nabla_{\beta}E$ can be estimated easily for each $\beta$ using the solution from the previous truncation: We denote the states that are included in the previous truncation as internal states, and the remaining states of the full Hilbert space as external states. The internal states are the states that could be eliminated from the previous truncation in the process of forming the new truncation, while the external states are the states that could be added. 
For each internal state, we wish to calculate the change in the variational energy caused by eliminating it from the previous truncation. Let the ground state wavefunction for the previous truncation be $|\Psi_{0}\rangle$ and let $\psi_{\beta} = \langle\beta|\Psi_{0}\rangle$. We approximate the ground state of the truncation with the state $|\beta\rangle$ \underline{eliminated} by assuming that the rest of the wavefunction remains unchanged except for an overall normalization factor, \begin{equation} |\Psi_{0-\beta}\rangle = \frac{|\Psi_{0}\rangle - \psi_{\beta}|\beta\rangle} {\sqrt{1-{|\psi_{\beta}|}^{2}}}. \end{equation} To first order in ${|\psi_{\beta}|}^{2}$ this approximation gives, \begin{equation} \nabla_{\beta}E = E_{0} - \langle\Psi_{0-\beta}|H|\Psi_{0-\beta}\rangle = {|\psi_{\beta}|}^{2}\left(E_{0}-H_{\beta\beta}\right) \label{inside_force} \end{equation} where $H|\Psi_{0}\rangle = E_{0}|\Psi_{0}\rangle$ and $H_{\beta\beta} = \langle\beta|H|\beta\rangle$. Similarly, the effect of \underline{adding} an external state is approximated using second order perturbation theory as: \begin{equation} \nabla_{\beta}E = \frac {\left|\langle\beta|H|\Psi_{0}\rangle\right|^{2}} {E_{0}-H_{\beta\beta}}. \label{outside_force} \end{equation} Note that $\nabla_{\beta}E$ will be zero if $|\beta\rangle$ is neither an internal state nor an external state that is connected by the Hamiltonian to an internal state. Depending on the stage of the variational procedure, a set of trial states is chosen which either contains all of the states for which $\nabla_{\beta}E$ is nonzero, or a randomly chosen subset of such states. $\nabla_{\beta}E$ is calculated for this set, and the new truncation is formed by taking the states with the largest values. Choosing a random subset of trial states introduces a stochastic element into the computation and effectively reduces the variational step size. 
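Equation (\ref{inside_force}) can be checked on a toy example: with the frozen-wavefunction ansatz, the energy change is exactly $|\psi_{\beta}|^{2}(E_{0}-H_{\beta\beta})/(1-|\psi_{\beta}|^{2})$, which reduces to Eq. (\ref{inside_force}) at first order in $|\psi_{\beta}|^{2}$. A sketch with an assumed $4\times 4$ symmetric matrix:

```python
import math

# Toy symmetric "Hamiltonian" (illustrative, with well-separated eigenvalues).
H = [[-2.0, 1.0, 0.5, 0.0],
     [ 1.0, -1.0, 1.0, 0.3],
     [ 0.5,  1.0, 1.0, 0.2],
     [ 0.0,  0.3, 0.2, 2.0]]
dim = 4

# Ground state via power iteration on c*I - H, with c above the largest eigenvalue.
c = 5.0
v = [1.0, 0.5, 0.25, 0.125]
for _ in range(5000):
    w = [c * v[i] - sum(H[i][j] * v[j] for j in range(dim)) for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
E0 = sum(v[i] * H[i][j] * v[j] for i in range(dim) for j in range(dim))

beta = 1
psi2 = v[beta] ** 2

# Frozen wavefunction with component beta removed, renormalized.
u = [(0.0 if i == beta else v[i]) / math.sqrt(1.0 - psi2) for i in range(dim)]
E_frozen = sum(u[i] * H[i][j] * u[j] for i in range(dim) for j in range(dim))

exact = E0 - E_frozen                   # exact energy change of the ansatz
estimate = psi2 * (E0 - H[beta][beta])  # first-order formula from the text
print(exact, estimate, estimate / (1.0 - psi2))
```

The last printed value coincides with the exact change, and the first-order estimate differs from it only at order $|\psi_{\beta}|^{4}$.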
During a minimization procedure where the full set of trial states is used at every step, a move will eventually be rejected in the Monte-Carlo evaluation. Further iterations beyond this point will simply generate the same move. This is similar to a gradient minimization with a fixed step size where the step overshoots the minimum. Here, since each state is either included or not included, it is impossible to reduce the step size in the usual sense. Instead, the step size can be effectively reduced by using a randomly chosen subset of the components of the gradient. The fastest minimization is achieved by using all of the trial states until the first move is rejected, and then considering a random subset which is gradually reduced in size. For the HAFM model considered in this paper, we found that our move selection algorithm was so effective that additional moves after the first rejected move produced minimal improvements in the energy. Accordingly, we stop the variational procedure when the first move is rejected. The idea of iterative improvement of a Hilbert space truncation using perturbative estimates of the importance of new states has a long history in the quantum chemistry literature~\cite{bender69,huron73,evangelisti83,feller89,harrison91}. In addition, for this class of problems, the final truncated results are typically corrected with a perturbative treatment of the remaining states~\cite{maynau91,shavitt92,wenzel92,steiner94}. Extrapolation methods are also frequently used~\cite{buenker75}. Such methods would likely be a useful addition to our method, but since the emphasis of this paper is on a variational approach, we have avoided such corrections. Iterative improvement of a Hilbert space truncation has also been studied in the context of quantum lattice models. De Raedt and von der Linden estimated the importance of a new basis state by means of the energy lowering obtained from a Jacobi rotation involving the state~\cite{de_raedt92}. 
Riera and Dagotto added basis states that are connected by the Hamiltonian to states with a large weight in the current truncated solution~\cite{riera93}. In this previous work, the basis is expanded by adding selected new states until either the desired quantities converge or computational limits are reached. In contrast, our emphasis is on finding the optimal basis of a given size. Working with a constant size basis has two advantages: (1) It allows us to define the optimal basis in an unambiguous manner and to express the problem of finding this optimal basis as a minimization problem. This makes it possible to harness the full power of the Metropolis algorithm and the simulated annealing approach. (2) It allows us to tackle problems with no clear hierarchy of importance among the basis states. In quantum chemistry, there is a hierarchy of states in which higher excitations are progressively less important. In contrast, the frustrated HAFM lacks any clear {\em a priori} hierarchy among the basis states. As a result, truncation can induce level crossings and change the character (e.g. the symmetry) of the ground state. If a basis selection process were to start with an incorrect ground state, augmentation of the truncation runs the risk of not selecting the basis states that are important for the true ground state. This makes it likely that the true ground state would never be found. By working with a basis of a constant size, which is variationally optimized, we avoid this problem. The effectiveness of the variational Hilbert space truncation procedure can be demonstrated by comparing its results to those obtained from the full-space solution. 
Define the fractional error in the energy for a given truncation by \begin{equation} \delta\epsilon(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{N_{trunc}}\})= \frac{E(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{N_{trunc}}\})-E_{N_{total}}} {E_{N_{total}}}, \label{def_of_dE} \end{equation} where $E_{N_{total}}$ is the full-space ground state energy. Figure \ref{energy_errors} shows $\delta\epsilon$ for the truncation resulting from the variational truncation procedure and the truncation resulting from keeping the states with the largest weights in the full-space solution. The energies found using the variational procedure are just slightly below those found by truncating based on the full-space solution. The fact that the variational energies are the lowest energies indicates that the variational minimization is converging properly. The closeness of the two results indicates that our definition of a best truncation is successful in capturing the most important parts of the full-space wavefunction. The difference between the two results grows as the retained fraction of the space diminishes and as the physical system gets smaller, but it stays relatively insignificant except for the smallest truncation size of the smallest structure. For example, retaining only $\frac{1}{6}$ of the basis states of the F-20 structure results in only about 1 percent error in the energy. Note that in order to get the same fractional error, a smaller fraction of the basis vectors is required for the larger systems. As a result, the number of states that must be retained in the truncated space grows more slowly than the number of states in the full-space. Therefore, larger systems make truncation increasingly useful. The curves resulting from the frustrated structures have a different shape than the curves resulting from the unfrustrated structures. The error falls more slowly for the unfrustrated structures than for the frustrated structures as the retained fraction of space increases. 
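Evaluating $\delta\epsilon$ from a truncated energy and the full-space energy is a one-line computation; a minimal sketch (the function and variable names are ours, and the values are illustrative, not from the original code):

```python
def fractional_energy_error(e_trunc, e_full):
    """delta-epsilon of Eq. (def_of_dE): relative deviation of a
    truncated ground-state energy from the full-space energy."""
    return (e_trunc - e_full) / e_full

# Illustrative values: a variational truncated energy always lies
# above the (negative) full-space ground-state energy, so the
# signed fractional error comes out negative here.
err = fractional_energy_error(-10.45, -10.56)
```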
This suggests that the method is more useful for frustrated structures. Figure \ref{best_case_corr} shows the correlations for the honeycomb lattice structures as a function of the fraction of space retained in the truncation. Since for these structures the nearest neighbor correlation function is proportional to the energy, it is not included. The multiple lines are due to the fact that the 24 and 26 site structures each have two inequivalent 3rd neighbor correlations, and the 24 site structure has two inequivalent 4th neighbor correlations. Again, both the results of the variational truncation method and the results of truncating the Hilbert space based on the weights of states in the full-space solution are shown. The truncation based on the full-space solution is expected to give a better approximation to correlation functions than the variational method, but for the correlations considered here, the results of the two methods are almost indistinguishable. Furthermore, truncation down to a few percent of the space by either of these methods introduces only a few percent error in the correlations. Since the HAFM on the honeycomb lattice has long range order, all of the correlations are fairly large in magnitude. This causes our truncation methods to give particularly good results for these correlations. In contrast, correlations between sites that are far apart on the frustrated structures are a worst-case situation. Correlations on the frustrated structures usually become very small at long distances. As a result, the fractional error in these correlations is quite large. Figure \ref{worst_case_corr} shows the fractional error in the correlation that is smallest in magnitude for the 20, 24, and 26 site frustrated structures. The full-space values of these correlations are $3.31 \times 10^{-2}$, $-3.43 \times 10^{-3}$, and $2.02 \times 10^{-3}$, respectively.
With less than half of the space retained, the fractional error introduced in these correlations becomes substantial. The error resulting from the variational truncation method is rather similar to the error introduced by truncating the Hilbert space based on the weights of states in the full-space solution. The fractional error in a correlation seems to grow with the inverse of the magnitude of the correlation. Since we are interested in the most accurate approximation to the full-space properties of the system, it is desirable to make the size of the truncation as large as possible. As mentioned above, memory is the primary constraint on the size of the system that can be handled using exact diagonalization techniques. Thus, effective implementation of this algorithm requires careful treatment of memory usage. The requirement of maximizing speed while minimizing memory usage poses a particular programming challenge in implementing the variational Hilbert space truncation method. We have implemented the method on the Naval Research Laboratory's 256 node Thinking Machines Corporation CM-5E supercomputer. In the Appendix we provide an outline of technical issues related to our implementation of the algorithm on this massively parallel architecture. \section{Results} \label{Results} Table \ref{fs_results} summarizes some of the ground state properties of the HAFM on the structures we considered. The expectation of $\vec{S}_{TOT}^{2}$ can be calculated by summing the correlation functions between all pairs of sites. Since each structure considered has an even number of spins, the possible exact eigenvalues are $s(s+1)$ where $s$ is an integer. Deviation from these values can be expected for truncated solutions because the truncation procedure breaks the invariance of the model under global spin rotation.
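The statement about $\langle \vec{S}_{TOT}^{2} \rangle$ follows from expanding the square of the total spin (a standard identity, written out here for reference):
\begin{displaymath}
\left\langle \vec{S}_{TOT}^{\,2} \right\rangle = \Big\langle \Big( \sum_{i} \vec{S}_{i} \Big)^{2} \Big\rangle = \sum_{i,j} \left\langle \vec{S}_{i} \cdot \vec{S}_{j} \right\rangle,
\end{displaymath}
where the diagonal $i = j$ terms contribute the constant $\frac{3}{4} N$ for $N$ spin-$\frac{1}{2}$ sites, and the off-diagonal terms are the pair correlation functions.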
For each of our full-space solutions (which includes all structures studied except F-32), the expectation of $\vec{S}_{TOT}^{2}$ is $0$ to the accuracy of the solution. Thus, for every system except F-32, the calculated ground state is a spin singlet. For the truncated solution of the F-32 system, this expectation is $\approx 0.5$. This value is between the values expected for a spin singlet ($s(s+1) = 0$) and a spin triplet ($s(s+1) = 2$). It is much closer to the value of the spin singlet than to the triplet. Moreover, we have found that the variational procedure tends to decrease this value, indicating that the ground state of F-32 is also a spin singlet. Table \ref{fs_results} contains two entries for the F-28 structure because its ground state is a rotational doublet. The rest of the states are rotational singlets. The two F-28 states are distinguished by considering their transformation properties under improper rotation about the symmetry axis through the center of the bond between site $19$ and site $20$ (see Fig.~\ref{frustrated_structures}~(d)). Under this transformation, the F-28A state has eigenvalue $1$, while the F-28B state has eigenvalue $-1$. The first column of Table \ref{fs_results} contains the ground state energy per site. As expected, frustration raises the ground state energy. The energies per site of the structures based on the honeycomb lattice reveal the expected finite size effects for the HAFM on a lattice: the energy per site increases as the size of the system increases. Finite size effects are not as clearly evident in the frustrated structures, but the trend from F-24 to F-26 to F-28 is rather similar to what could be expected from finite size effects. The trend is reversed in F-32. These clusters are not especially similar to each other except for overall topology, so it is reasonable that finite size effects are obscured by effects due to details of the structure. 
Furthermore, as the size of the frustrated structures increases, the hexagonal rings become more plentiful and closer together. Thus, these systems should behave more like the unfrustrated structures at larger sizes. Eventually, the energy must decrease toward the unfrustrated value. It is likely that the drop in energy between F-28 and F-32 indicates the beginning of this trend. Note that this drop in energy cannot be a result of using a truncated solution for the F-32 system since the energy resulting from the truncation must be greater than the full-space energy. The rest of the columns in Table \ref{fs_results} show the nearest neighbor spin-spin correlations. The correlation between site $i$ and site $j$ is defined by \begin{equation} C_{i,j} = \langle\Psi_{0}|\vec{S}_{i}\cdot\vec{S}_{j}|\Psi_{0}\rangle \end{equation} where $\Psi_{0}$ is the ground state wavefunction. The sum of all of the nearest neighbor correlations for a particular structure gives the ground state energy. Even though the ground state energies vary relatively little, the nearest neighbor correlation functions vary dramatically (see Table \ref{fs_results}). The nearest neighbor correlations are divided into four columns. The column labeled $H-H$ contains correlations between sites that are both located on the same hexagonal ring. The column labeled $H-H^{\prime}$ contains correlations between sites that are located on two different hexagonal rings. The column labeled $H-P$ contains correlations between a site located on a hexagonal ring and a site that is not located on any hexagonal ring. The column labeled $P-P$ contains correlations between two sites neither of which is on a hexagonal ring. Fig.~\ref{frustrated_structures} and Fig.~\ref{hexagonal_structures} serve as keys to the labeling of the sites. All of the nearest neighbor correlation functions are negative, which is not surprising since the ground state wavefunction is chosen to minimize the sum over these correlations.
In order to provide physical insight into the results, we consider the following argument: it is possible to solve the HAFM analytically on a structure consisting of a central site and its three neighbors. The sum of the three correlations for this system is $-5/4$. The variational principle can then be used to show that for a general structure, the sum of the three correlations between a given site and its neighbors cannot be less than $-5/4$. This sum is reduced in magnitude by frustration and by quantum fluctuations when additional sites are included in the structure. However, the existence of the strict bound discussed above suggests that a strong correlation between a site and one of its neighbors will reduce the correlations to the rest of its neighbors. This behavior is exemplified by the correlations in Table \ref{fs_results}. The strongest correlations, those in the $H-H$ column, are for the bonds between two sites that are on the same hexagonal ring. Furthermore, the strongest of these correlations are found on the frustrated structures where the bonds that form the hexagonal ring do not have to compete with two other identical bonds. The drop in energy between F-28 and F-32 can be attributed to an increase in the number of bonds of this type. The weakest nearest neighbor correlations are found between sites that are located on different hexagonal rings. These bonds are frustrated and also suffer from strong competition from the bonds on each of the hexagonal rings. To illustrate these arguments in a specific example, consider the F-26 structure. The $C_{1,9}$ and $C_{11,12}$ correlations are both frustrated since each of these bonds is included in two pentagonal rings.
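The value $-5/4$ quoted above can be made explicit with a standard angular-momentum identity (textbook spin algebra, spelled out here for reference). Writing $\vec{S}_{T} = \vec{S}_{1} + \vec{S}_{2} + \vec{S}_{3}$ for the total spin of the three neighbors of the central site $0$,
\begin{displaymath}
\sum_{i=1}^{3} \vec{S}_{0} \cdot \vec{S}_{i} = \frac{1}{2} \left[ \left( \vec{S}_{0} + \vec{S}_{T} \right)^{2} - \vec{S}_{0}^{\,2} - \vec{S}_{T}^{\,2} \right],
\end{displaymath}
which, for spin-$\frac{1}{2}$ sites, is minimized by taking $S_{T} = \frac{3}{2}$ and total spin $S = 1$, giving $\frac{1}{2} \left[ 2 - \frac{3}{4} - \frac{15}{4} \right] = -\frac{5}{4}$.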
The $C_{1,9}$ correlation is much weaker (-0.103) than the $C_{11,12}$ correlation (-0.332) because the $C_{1,9}$ correlation has competition from four strong (-0.424) correlations of the $C_{1,2}$ type (correlations between sites that are on the same hexagon but not on any other hexagons). For similar reasons, the $H-P$ correlations are weaker than the $P-P$ correlations. The correlation functions for the 28 site frustrated structure are constrained by the symmetries of the wavefunction, and this results in several anomalously small correlations, especially $C_{3,13}$ for the $A$ wavefunction. Although the original structure is tetrahedral, the process of resolving the two degenerate states breaks this symmetry by singling out the symmetry axis through the bond between sites $19$ and $20$. There is an approximate equivalence of correlations between the results for the two wavefunctions. The role of $C_{1,2}$ is switched with $C_{2,3}$, the role of $C_{1,9}$ is switched with $C_{3,13}$, and the role of $C_{2,11}$ is switched with $C_{6,7}$. Roughly speaking, the correlations that are closest to the axis through the bond between sites $19$ and $20$ switch places with the correlations that are furthest away from this axis. The F-28 structure has unusually strong long range correlations between the sites labeled as 7, 11, 15, and 28 in Fig.~\ref{frustrated_structures}~(d). These sites form the corners of a tetrahedron. For the F-28A state, the correlations of this type perpendicular to the symmetry breaking axis ($C_{7,28}$ and $C_{11,15}$) are $0.141$ and the other correlations of this type ($C_{7,11}$, $C_{7,15}$, $C_{11,28}$, and $C_{15,28}$) are $0.136$. For the F-28B state, these correlations are $0.134$ and $0.139$, respectively. This result is interesting because it suggests strong ferromagnetic correlations between the spins on the four apex sites that form the corners of a tetrahedron in F-28.
This is consistent with quantum mechanical calculations of the electronic structure of the C$_{28}$ molecule, which is believed to have the same structure as the F-28 cluster: in those calculations, the molecule is found to have an $s = 2$ ground state, with the spins in the four apex sites aligned~\cite{pederson93}. \section{Conclusion} \label{Conclusion} The variational Hilbert space truncation approach provides an effective way to extend the range of structures for which exact diagonalization of the HAFM is feasible. Substantial reductions in memory can be obtained with less than a 1\% error in the ground state energy. A few percent error is introduced in most correlations. The exception is very weak correlations for which the method will give a rough idea at best. For system sizes that are at the current leading edge of computational capabilities, a reduction of the Hilbert space by a factor of thirty can be achieved. For the HAFM, a factor of thirty reduction in memory use allows structures with about $5$ additional sites to be handled. Our method is compatible with symmetrization techniques, which, depending on the structure under consideration, can achieve a similar reduction in memory requirements. Finally, our method should be useful for models other than the HAFM. In fact, much larger reductions in the size of the Hilbert space can be expected for systems where the ground state is dominated by a few of the basis states used in the expansion of the wavefunction. For such systems, the method should be capable of identifying the important basis states, and thus the important physics of the ground state. Using this variational approach, we have successfully determined the ground state properties of the HAFM on a series of frustrated and unfrustrated structures. An interesting and unexpected result is the doublet nature of the ground state of the 28 site frustrated structure. 
The 32 site frustrated structure seems to be a rotational singlet, but it would be interesting to know whether other larger structures of this type also break structural symmetries. \section*{Acknowledgement} This work was supported by ONR Contract \#N00014-93-1-0190. The computations were performed on the NRL 256-node CM-5 supercomputer. We acknowledge helpful input during the initial stages of this project from Prof. L. Johnsson.
\section{Introduction} In a typical scenario for recommender systems, a lot of data is available about interactions between users and items, such as users purchasing products or listening to songs, where each user interacts with only a handful of the items in the catalog – e.g. no user is going to play every song in a music service or watch every movie in a video service. Typically, recommendation models based on collaborative filtering try to predict the entries in the user-item interactions matrix – that is, a matrix in which rows index users, columns index items, and the value at each user-item combination is the number of times the user has interacted with or consumed the item or the rating she gave to it – based on minimization of some loss function such as squared difference between the predicted and the real value, with the idea that items with higher predicted values are better candidates to recommend to the user (see \cite{koren}). In so-called explicit feedback settings, in which users provide an explicit valuation or rating of each item, such models are usually fit only to the non-missing entries in the interactions matrix, as this data signals both likes and dislikes of the user, which leads to efficient optimization procedures. However, in so-called implicit-feedback settings, in which there are no explicit ratings but rather only event histories such as songs played by each user, it’s not enough for a good model to be fit only to the non-missing entries, as they don’t tend to signal dislikes and there can be pathological cases in which the non-missing entries all have the same value (e.g. when the matrix is binary - see \cite{implicit}). In this case, it’s necessary to also consider the missing entries (typically imputed with a value of zero), of which there are orders of magnitude more, resulting in a more computationally challenging problem.
Unlike Gaussian likelihood (squared loss) or Bernoulli likelihood (log loss), Poisson likelihood, when using a model that does not exponentiate its parameters, offers a very fast optimization approach for the missing entries filled with zeros, since the log-likelihood for them is given by their predicted value only (multiplied by minus one). In low-rank matrix factorization, the sum of predicted values for all combinations of users and items can be quickly obtained by first summing the latent factors for all users (one vector) and for all items (another vector), then calculating the dot product between the resulting summed vectors. This is not the first time that a Poisson model has been proposed for sparse matrix factorization - \cite{gap} also developed this idea, but following a Bayesian approach, while \cite{hpf} improved upon it by adding a hierarchical structure and a faster optimization procedure based on variational inference, with many other works later building upon that base, also relying on variational inference (e.g. \cite{dpf}, \cite{hcpf}). While a Bayesian hierarchical formulation is more expressive, fitting such models is much slower than conventional optimization techniques on regularized models. This paper proposes an optimization-based approach towards matrix factorization that maximizes Poisson likelihood using proximal gradient methods. \section{Low-rank matrix factorization} Low-rank matrix factorization is one of the most commonly used techniques in collaborative filtering for predicting entries in the user-item interaction matrix based only on observed user-item interactions (\cite{koren}). The idea behind it is to assign to each user $u$ and item $i$ a vector of fixed dimensionality $k$ representing arbitrary features (a.k.a.
latent factors) $\mathbf{a}_u \in \mathbb{R}^k, \mathbf{b}_i \in \mathbb{R}^k$ (these are the model parameters) in such a way that the value for each entry in the interactions matrix is approximated by the dot product between the features of the user and the item for that entry, i.e. $x_{ui} \approx \langle \mathbf{a}_u, \mathbf{b}_i \rangle$ or by a transformation of it $x_{ui} \approx f(\langle \mathbf{a}_u, \mathbf{b}_i \rangle)$. These features or latent factors are in turn determined through an optimization objective that aims at minimizing the difference between the predicted and the real values, e.g. \begin{align} \min_{\mathbf{A}, \mathbf{B}} \: \lVert I_x(\mathbf{X} - \mathbf{A} \mathbf{B}^T) \lVert \end{align} Where $\mathbf{A}_{m, k} = \begin{pmatrix} \mathbf{a}_1^T, ..., \mathbf{a}_m^T \end{pmatrix}, \mathbf{B}_{n, k} = \begin{pmatrix} \mathbf{b}_1^T, ..., \mathbf{b}_n^T \end{pmatrix}$, and $I_x$ is the indicator function which is one when the entry $x_{ui}$ is present in the data and zero when it is missing. As such model tends to overfit the interactions data, other additional improvements upon it are typically incorporated, such as centering the entries in the matrix, incorporating regularization on the model parameters, and adding user and item biases as additional parameters. The optimization problem is typically solved through the Alternating Least-Squares algorithm (\cite{als} - when one factor matrix is fixed, optimizing for the other latent factor matrix is a convex optimization problem with a closed-form solution – this algorithm alternates between minimizing one or the other until convergence) or Stochastic Gradient Descent (\cite{koren}). In the implicit-feedback case with missing-as-zero and values consisting of counts (e.g. 
number of times a user clicked something - note that many works instead propose a weighting scheme), the matrices $\mathbf{A}$ and $\mathbf{B}$ are sometimes constrained to have all non-negative entries, as it wouldn’t make sense to predict negative values, biases are left out (as they would not allow predictions of zero), and entries are not centered, resulting in a minimization problem such as: \begin{align} \min_{\mathbf{A} \in \mathbb{R}_{+}^{m, k}, \mathbf{B} \in \mathbb{R}_{+}^{n, k}} \: \lVert \mathbf{X} - \mathbf{A} \mathbf{B}^T \lVert^2 + \lambda (\lVert \mathbf{A} \lVert^2 + \lVert \mathbf{B} \lVert^2) \end{align} This is a more challenging optimization problem, with a matrix $\mathbf{X}$ that usually is too large to even fit in a computer’s memory, but different methods have been devised to solve it or solve variations thereof smartly, such as implicit-ALS (\cite{implicit}) along with techniques to speed it up (\cite{cg}), or BPR (Bayesian Personalized Ranking), which tries to sample only some of the missing entries at each update (\cite{bpr}). \section{Sparse Poisson regression} A typical probability distribution used for counts data is the Poisson distribution, parameterized by one variable $z > 0$, with probability mass function given by $p(y) = z^y \exp(-z) / y!$. This distribution is limited to non-negative integers and tends to produce asymmetric, more peaked distributions that resemble real counts data more closely than alternatives such as the normal distribution.
Poisson models in which the $z$ parameter is defined as the sum or dot product of other variables can be fit to observed data by following the maximum-likelihood principle, which translates into maximizing Poisson log-likelihood (its negative, up to a constant, is referred to from here on as the Poisson loss), given by: \begin{align} ll(z) = - (z - y \log(z) + \log(y!)) \end{align} Generalized linear models for Poisson regression usually add a link function, taking the form $\mathbf{y} \sim \text{Poisson}(\exp(\mathbf{X} \beta))$, where $\beta$ are the model coefficients (parameters), $\mathbf{X}$ (not to be confused with the matrix in the factorization models) is the matrix of covariates, and $\mathbf{y}$ the observed counts for each observation; but others (e.g. \cite{cmp}) have also tried to perform Poisson regression for all-non-negative covariates without exponentiation (i.e. with an identity link function), constraining the coefficients to be non-negative instead, i.e. $\mathbf{y} \sim \text{Poisson}(\mathbf{X} \beta)$, which is the approach that will be followed in this work, since exponentiated numbers would not allow for fast calculation of the sum of all entries in the $\mathbf{X}_{m, n}$ matrix. As $\log(y!)$ does not depend on the model parameters, it can be left out of the minimization objective.
For fitting a Poisson regression with non-negative features of dimensionality $k$ and coefficients without exponentiation to $m$ observations of covariates $\mathbf{A}_{m,k} = \begin{pmatrix} \mathbf{a}_1^T, ..., \mathbf{a}_m^T \end{pmatrix}$ and counts $\mathbf{b}_m = \begin{pmatrix} b_1, ..., b_m \end{pmatrix}$, the optimization objective (maximum likelihood estimation problem) would look as follows: \begin{align} \min_{\mathbf{x} \in \mathbb{R}_{+}^k} \sum_i \mathbf{a}_i^T \mathbf{x} - b_i \log(\mathbf{a}_i^T \mathbf{x}) \end{align} Note that $\sum_i \mathbf{a}_i^T \mathbf{x}$ can be obtained by first summing $\mathbf{s} = \sum_i \mathbf{a}_i$ and then taking its inner product with $\mathbf{x}$, i.e. $\sum_i \mathbf{a}_i^T \mathbf{x} = \mathbf{s}^T \mathbf{x}$, something that could not be achieved with the exponentiated version, and when $b_i = 0$, $b_i \log(\mathbf{a}_i^T \mathbf{x}) = 0$ too, so for zero-valued entries, maximization of Poisson likelihood translates into minimizing $\mathbf{a}_i^T \mathbf{x}$. As such, the minimization objective can be re-expressed as: \begin{align} \min_{\mathbf{x} \in \mathbb{R}_{+}^k} \mathbf{s}^T \mathbf{x} - \sum_{b_i > 0} b_i \log(\mathbf{a}_i^T \mathbf{x}) \end{align} Adding $l_2$ regularization on the model parameters would result in an objective like this: \begin{align} \min_{\mathbf{x} \in \mathbb{R}_{+}^k} \mathbf{s}^T \mathbf{x} - \sum_{b_i > 0} \left( b_i \log(\mathbf{a}_i^T \mathbf{x}) \right) + \lambda \lVert \mathbf{x} \lVert_2^2 \end{align} This is a convex optimization problem, but as other works have found out (\cite{cmp}), it cannot be solved through typical methods like L-BFGS-B (\cite{lbfgs}) that rely on assumptions such as Lipschitz continuity. 
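Both identities used above, $\sum_i \mathbf{a}_i^T \mathbf{x} = \mathbf{s}^T \mathbf{x}$ and the vanishing of the log terms for $b_i = 0$, can be checked numerically; a self-contained sketch (illustrative code, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((500, 8))           # non-negative covariates a_i (rows)
b = rng.poisson(0.05, size=500)    # counts, mostly zeros
x = rng.random(8) + 0.1            # non-negative parameters
lam = 1.0

pred = A @ x
# Dense evaluation: every observation, with 0 * log(z) taken as 0
dense = pred.sum() - np.sum(np.where(b > 0, b * np.log(pred), 0.0)) + lam * x @ x

# Sparse evaluation: s is computed once; only rows with b_i > 0 are touched
s = A.sum(axis=0)
nz = b > 0
sparse = s @ x - b[nz] @ np.log(A[nz] @ x) + lam * x @ x

assert np.isclose(dense, sparse)
```

The sparse form never materializes predictions for the zero-count rows, which is the source of the speed-up in the implicit-feedback setting.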
Back to the matrix factorization case, if we adopt Poisson loss (negative likelihood plus a constant) and consider one of the matrices to be fixed, the optimal values for each vector of latent factors in matrices $\mathbf{A}$ and $\mathbf{B}$ are the solution of a Poisson regression problem in which $\mathbf{a}_i$ are the rows of the matrix that was fixed, and $b_i$ are the entries for the corresponding row (for users) or column (for items) in the interactions matrix $\mathbf{X}$. From the formula above, it can be seen that Poisson loss can be calculated without ever iterating through the non-zero values (their contribution is obtained through $\mathbf{s}$), which is very convenient and efficient in the implicit-feedback case as most of the entries will indeed be zero, and as will be shown later, such loss can be optimized just as efficiently. \section{Proximal gradient methods} While this constrained and non-exponentiated approach to Poisson regression cannot be solved through typical gradient-based methods, it still represents a convex minimization problem and there are other techniques that can indeed solve it, such as Proximal Gradient Descent, Accelerated Proximal Gradient Descent, Alternating Direction Method of Multipliers (ADMM, \cite{boyd}), or Composite Mirror-Prox (\cite{cmp}). 
Given a decomposable convex minimization problem of the form \begin{align} \min_{\mathbf{x} \in \mathbf{dom} f} f(\mathbf{x}) + h(\mathbf{x}) \end{align} Where $f(\mathbf{x})$ is a loss function with desirable properties such as differentiability and smoothness, and $h(\mathbf{x})$ is a regularization function which might not have these same properties, the proximal gradient descent method iteratively performs updates as follows: \begin{algorithm}[H] \caption{Proximal Gradient Descent}\label{Proximal Gradient Descent} \hspace*{\algorithmicindent} \textbf{Inputs} Functions $f(.)$ and $h(.)$, starting point $\mathbf{x}_0$, number of steps $T$, step size sequence $\alpha_1, ..., \alpha_T$ \\ \hspace*{\algorithmicindent} \textbf{Output} Optimal parameters $\mathbf{x}^*$ \begin{algorithmic}[1] \For {$1 .. T$} \State Set $\mathbf{x}_{t + \frac{1}{2}} := \mathbf{x}_t - \alpha_t \nabla f(\mathbf{x}_t)$ \State Set $\mathbf{x}_{t+1} := \mathbf{Prox}_{\alpha h}(\mathbf{x}_{t + \frac{1}{2}})$ \If {termination criterion is satisfied} \State break \EndIf \EndFor \Return $\mathbf{x}_{T}$ \end{algorithmic} \end{algorithm} Where $\mathbf{Prox}_{\alpha h}$ is the proximal operator, defined as \begin{align} \mathbf{Prox}_{\alpha h}(\mathbf{x}) = \argmin_{\mathbf{y} \in \mathbf{dom} f} ( h(\mathbf{y}) + \frac{1}{2 \alpha} \lVert \mathbf{y} - \mathbf{x} \lVert_2^2 ) \end{align} Intuitively, the algorithm first takes a gradient step on $f$, and then maps the result to a nearby feasible point, trading off the regularization $h$ against the distance to the unconstrained point where the gradient step would otherwise land. For more details and explanations of how and why this method works, see \cite{boyd}.
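Algorithm 1 transcribes almost line-for-line into code. A generic sketch (the toy check uses the non-negativity indicator as $h$, whose proximal operator is projection onto the non-negative orthant; this choice is ours, for illustration):

```python
import numpy as np

def proximal_gradient_descent(grad_f, prox_h, x0, step, n_steps=1000, tol=1e-10):
    """Algorithm 1: alternate a gradient step on f with a proximal step on h."""
    x = x0
    for _ in range(n_steps):
        x_half = x - step * grad_f(x)        # gradient step on the smooth part
        x_new = prox_h(x_half, step)         # proximal step on the non-smooth part
        if np.linalg.norm(x_new - x) < tol:  # termination criterion
            return x_new
        x = x_new
    return x

# Toy check: minimize 0.5 * ||x - c||^2 subject to x >= 0; the solution
# is max(c, 0) componentwise.
c = np.array([1.0, -2.0, 3.0])
sol = proximal_gradient_descent(
    grad_f=lambda x: x - c,
    prox_h=lambda v, a: np.maximum(v, 0.0),
    x0=np.zeros_like(c),
    step=0.5,
)
```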
The Poisson regression variation introduced before can also be expressed in this canonical form by letting: \begin{align} f(\mathbf{x}) = - \sum_{b_i > 0} b_i \log(\mathbf{a}_i^T \mathbf{x}) \: \:, \:\:\: h(\mathbf{x}) = \mathbf{s}^T \mathbf{x} + \lambda \lVert \mathbf{x} \lVert_2^2 \end{align} The $h(\mathbf{x})$ function is proximal-friendly. In order to obtain its proximal operator, it's easy to calculate: \begin{align} \frac{\partial}{\partial \mathbf{y}} (\mathbf{s}^T \mathbf{y} + \lambda \lVert \mathbf{y} \lVert_2^2 + \frac{1}{2 \alpha} \lVert \mathbf{y} - \mathbf{x} \lVert_2^2) = 2 \lambda \mathbf{y} + \mathbf{s} + \frac{\mathbf{y} - \mathbf{x}}{\alpha} \end{align} By setting it to zero, we obtain: \begin{align} \mathbf{Prox}_{\alpha h}(\mathbf{x}) = \max \{0, \frac{\mathbf{x} - \alpha \mathbf{s}}{2 \lambda \alpha + 1} \} \end{align} Similarly, for $l_1$ regularization, we would obtain: \begin{align} \mathbf{Prox}_{\alpha h_{l_1}}(\mathbf{x}) = \max \{0, \mathbf{x} - \alpha (\lambda + \mathbf{s}) \} \label{l1} \end{align} The gradient of $f(.)$ is given by the formula: \begin{align} \nabla f(\mathbf{x}) = - \sum_{b_i > 0} \frac{b_i}{\mathbf{a}_i^T \mathbf{x}} \mathbf{a}_i \end{align} For sparse Poisson regression with $l_2$ regularization, this would translate into the following update rules: \begin{algorithm}[H] \caption{Proximal Gradient Descent for Sparse Poisson Regression}\label{Proximal Gradient Descent for Sparse Poisson Regression} \hspace*{\algorithmicindent} \textbf{Inputs} Functions $f(.)$ and $h(.)$, starting point $\mathbf{x}_0$, number of steps $T$, step size sequence $\alpha_1, ..., \alpha_T$ \\ \hspace*{\algorithmicindent} \textbf{Output} Optimal parameters $\mathbf{x}^*$ \begin{algorithmic}[1] \For {$1 .. 
T$} \State Set $\mathbf{x}_{t + \frac{1}{2}} := \mathbf{x}_t - \alpha_t \sum_{b_i > 0} \frac{-b_i}{\mathbf{a}_i^T \mathbf{x}} \mathbf{a}_i$ \State Set $\mathbf{x}_{t+1} := \max \{0, \frac{\mathbf{x}_{t + \frac{1}{2}} - \alpha_t \mathbf{s}}{2 \lambda \alpha_t + 1} \}$ \If {termination criterion is satisfied} \State break \EndIf \EndFor \Return $\mathbf{x}_T$ \end{algorithmic} \end{algorithm} In this formulation, minimizing $f(\mathbf{x})$ alone is an unbounded problem whose infimum is approached as $\mathbf{x} \to \infty$, but its gradient is still well behaved. It's also possible to define the functions as $f(\mathbf{x}) = - \sum_{b_i > 0} b_i \log(\mathbf{a}_i^T \mathbf{x}) + \lambda \lVert \mathbf{x} \lVert_2^2 \: \:, \: h(\mathbf{x}) = \mathbf{s}^T \mathbf{x}$ too, since the $l_2$ norm meets the necessary criteria, and in this case $f(\mathbf{x})$ would reach its minimum at a non-infinite $\mathbf{x}$. Proximal gradient descent is usually not the preferred method for these types of problems, and tends to require more updates (steps) than other methods, but note for now that, if doing just one iteration, one proximal gradient update is much faster to compute than one update for ADMM or Composite Mirror-Prox.
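The sparse Poisson regression updates above can be sketched directly in code; the fixed step size, iteration count, starting point, and the small floor on the predictions are our own choices for numerical safety, not part of the algorithm as stated:

```python
import numpy as np

def sparse_poisson_pgd(A, b, lam, step=1e-3, n_steps=2000):
    """Proximal gradient descent for
    s^T x - sum_{b_i > 0} b_i log(a_i^T x) + lam * ||x||_2^2,  x >= 0."""
    s = A.sum(axis=0)              # every row contributes to the linear term
    Anz, bnz = A[b > 0], b[b > 0]  # only positive counts enter the log term
    x = np.full(A.shape[1], 0.5)
    for _ in range(n_steps):
        pred = np.maximum(Anz @ x, 1e-12)   # floor guards against log(0)
        grad = -(bnz / pred) @ Anz          # gradient of f (log terms only)
        x_half = x - step * grad            # gradient step
        x = np.maximum(0.0, (x_half - step * s) / (2.0 * lam * step + 1.0))  # prox
    return x

# Synthetic check: fit to counts generated from known coefficients
rng = np.random.default_rng(1)
A = rng.random((300, 5))
b = rng.poisson(A @ np.array([0.5, 1.0, 0.0, 2.0, 0.3]))
x_fit = sparse_poisson_pgd(A, b, lam=0.1)
```

The per-iteration cost depends only on the rows with positive counts plus one vector subtraction, mirroring the cost analysis in the text.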
Without going into much detail, $f(\mathbf{x})$ does not have a closed-form proximal operator, and one iteration of the ADMM updates (primal-dual method) for the same problem would look as follows: \begin{algorithm}[H] \caption{ADMM update for sparse Poisson regression}\label{ADMM update for sparse Poisson regression} \begin{algorithmic}[1] \State $\mathbf{x}_{t+1} := \mathbf{L\texttt{-}BFGS\texttt{-}B}(\mathbf{z}_t - \mathbf{u}_t, f, \nabla f, \mathbf{x} \ge 0) $ \State $\mathbf{z}_{t+1} := \max \{0, \frac{\mathbf{x}_{t+1} + \mathbf{u}_t - \alpha \mathbf{s}}{2 \lambda \alpha + 1} \}$ \State $\mathbf{u}_{t+1} := \mathbf{u}_t + \mathbf{x}_{t+1} - \mathbf{z}_{t+1}$ \end{algorithmic} \end{algorithm} Similarly, one Composite Mirror-Prox update, while perhaps making more progress than a proximal gradient update, would also be more expensive to compute, and require keeping additional variables in memory. It's also possible to add second-order information about $f(\mathbf{x})$ and use Proximal Newton (\cite{newton}) instead of Proximal Gradient Descent. When following the approach including the $l_2$ norm into $f(\mathbf{x})$, the gradient and hessian are given by: \begin{align} \nabla f(\mathbf{x}) = - \sum_{b_i > 0} \frac{b_i}{\mathbf{a}_i^T \mathbf{x}} \mathbf{a}_i + 2 \lambda \mathbf{x} \\ H(\mathbf{x}) = \nabla^2 f(\mathbf{x}) = \mathbf{A}^T \mathrm{diag}\!\left(\frac{\mathbf{b}}{(\mathbf{A} \mathbf{x})^2}\right) \mathbf{A} + 2 \lambda I \end{align} Where the division and squaring inside $\mathrm{diag}(\cdot)$ are taken element-wise. Newton's step is not a simple negative gradient step, but is given by $-(\nabla^2 f)^{-1} g$ (with $g$ being the gradient), and the proximal function is in Newton's case better defined not as the squared difference in $l_2$ norm between two points, but as the squared distance in the norm induced by the hessian $H$, i.e.
\begin{align} \mathbf{Prox}_h^H (\mathbf{x}) = \argmin_{\mathbf{y} \in \mathbf{dom} f} h(\mathbf{y}) + \frac{1}{2} \lVert \mathbf{x} - \mathbf{y} \rVert_H^2 = \argmin_{\mathbf{y} \in \mathbf{dom} f} h(\mathbf{y}) + \frac{1}{2} (\mathbf{y} - \mathbf{x})^T H (\mathbf{y} - \mathbf{x}) \end{align} In the case of $h(\mathbf{x}) = \mathbf{s}^T \mathbf{x}$, the Proximal Newton operator is the solution of a quadratic program (QP), which can be expressed in the canonical quadratic form used by most solvers as: \begin{align} \mathbf{Prox}_h^H (\mathbf{x}) = \argmin_{\mathbf{y}} \frac{1}{2} \mathbf{y}^T H \mathbf{y} + (\mathbf{s} - H \mathbf{x})^T \mathbf{y} \: \: \: \: \: s.t. \: \: \: \mathbf{y} \ge 0 \end{align} Further, a line search can then be performed, with the direction being $\mathbf{x}_g = \mathbf{Prox}_h^H (\mathbf{x} - (H(\mathbf{x}))^{-1} \nabla f(\mathbf{x})) - \mathbf{x}$. Again, Newton iterations are also much slower than regular gradient iterations, and require the Hessian to be positive semi-definite, which will not always be the case if the regularization parameter is not large enough. \section{Poisson matrix factorization} Producing a maximum-likelihood estimate of a Poisson model with regularization for the case of matrix factorization would translate into solving: \begin{align} \max_{\mathbf{A} \in \mathbb{R}_{+}^{m, k}, \mathbf{B} \in \mathbb{R}_{+}^{n, k}} \sum_u^m \sum_i^n \left( -\mathbf{a}_u^T \mathbf{b}_i + x_{ui} \log(\mathbf{a}_u^T \mathbf{b}_i) - \log(x_{ui}!) 
\right) - \lambda (\lVert \mathbf{A} \rVert_2^2 + \lVert \mathbf{B} \rVert_2^2) \end{align} While at first glance this might not look like a computationally-tractable problem, from the previous sections we know that it can equivalently be re-expressed as follows: \begin{align} \min_{\mathbf{A} \in \mathbb{R}_{+}^{m, k}, \mathbf{B} \in \mathbb{R}_{+}^{n, k}} \mathbf{s}_A^T \mathbf{s}_B - \left( \sum_{x_{ui} > 0} x_{ui} \log(\mathbf{a}_u^T \mathbf{b}_i) \right) + \lambda (\lVert \mathbf{A} \rVert_2^2 + \lVert \mathbf{B} \rVert_2^2) \end{align} where $\mathbf{s}_A = \sum_i^m \mathbf{a}_i$ and $\mathbf{s}_B = \sum_i^{n} \mathbf{b}_i$. This is a much faster objective to evaluate, but the problem is not convex. Following the same alternating minimization idea as in ALS, note that if one of the $\mathbf{A}$ or $\mathbf{B}$ matrices in the factorization problem is fixed, obtaining the optimal values of the other matrix is a convex minimization problem, which reduces to performing one Poisson regression for each vector of that matrix as explained before. Note however that, since the matrices to optimize are being alternated, in order to make progress on the desired minimization objective it is not strictly necessary to run the optimization routine for each vector until convergence each time, but rather just to make progress towards the optimum of each vector in each pass, and it might make more sense to spend the time applying updates to the other matrix than applying multiple updates to the same matrix. With only one update, Accelerated Proximal Gradient Descent would reduce to just regular Proximal Gradient Descent. 
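The re-expressed objective is cheap to evaluate when $\mathbf{X}$ is stored as non-zero triplets. A minimal NumPy sketch (function and argument names are ours, purely for illustration):

```python
import numpy as np

def poisson_factorization_obj(A, B, rows, cols, vals, lam):
    # Minimization objective: s_A^T s_B - sum_{x_ui > 0} x_ui log(a_u^T b_i)
    # + lam (||A||_F^2 + ||B||_F^2), touching only the non-zero triplets
    # (rows, cols, vals) of the sparse matrix X.
    sA, sB = A.sum(axis=0), B.sum(axis=0)
    dots = np.einsum('ij,ij->i', A[rows], B[cols])  # a_u^T b_i per non-zero entry
    return sA @ sB - vals @ np.log(dots) + lam * ((A ** 2).sum() + (B ** 2).sum())
```

The cost is linear in the number of non-zero entries plus the cost of the two column sums, rather than in $m \times n$.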
Putting everything together, the optimization routine for the Poisson factorization model, with gamma initialization and step sizes decreasing by half at each step, would be as follows: \begin{algorithm}[H] \caption{Alternating Proximal Gradients}\label{Alternating Proximal Gradients} \hspace*{\algorithmicindent} \textbf{Inputs} Sparse matrix $\mathbf{X} \in \mathbb{R}_{+}^{m, n}$, number of factors $k$, initial step size $\alpha$, regularization parameter $\lambda$, number of iterations $T$, number of updates per iteration $\tau$ \\ \hspace*{\algorithmicindent} \textbf{Outputs} Latent factors $\mathbf{A}^*, \mathbf{B}^*$ \begin{algorithmic}[1] \State Sample $\mathbf{A}_{m,k} \sim Gamma(1, 1), \mathbf{B}_{n, k} \sim Gamma(1, 1)$ \For {$1 .. T$} \State Calculate $\mathbf{s}_B = \sum_i^n \mathbf{b}_i$ \For {$u = 1 .. m$} \For {$1 .. \tau$} \State $\mathbf{a}_u := \max \{0, \left( \mathbf{a}_u + \alpha \sum_{x_{ui} > 0} \frac{x_{ui}}{\mathbf{a}_u^T \mathbf{b}_i} \mathbf{b}_i - \alpha \mathbf{s}_B \right) / \left( 2 \lambda \alpha + 1 \right) \}$ \EndFor \EndFor \State Calculate $\mathbf{s}_A = \sum_i^m \mathbf{a}_i$ \For {$i = 1 .. n$} \For {$1 .. \tau$} \State $\mathbf{b}_i := \max \{0, \left( \mathbf{b}_i + \alpha \sum_{x_{ui} > 0} \frac{x_{ui}}{\mathbf{a}_u^T \mathbf{b}_i} \mathbf{a}_u - \alpha \mathbf{s}_A \right) / \left( 2 \lambda \alpha + 1 \right) \}$ \EndFor \EndFor \State Update $\alpha := \alpha / 2$ \EndFor \Return $\mathbf{A}, \mathbf{B}$ \end{algorithmic} \end{algorithm} Since the updates for each row in the matrix being optimized are independent of each other, they can be calculated in parallel or in a distributed setting by sharing between nodes the $\mathbf{s}$ vector, the other matrix, and non-zero entries for that row/column in the $\mathbf{X}$ matrix. An efficient implementation of the algorithm requires duplicating the $\mathbf{X}$ matrix in row-sparse and column-sparse formats, but no additional intermediate variables are needed. 
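The user-factor inner loop of the algorithm above can be sketched as follows. $\mathbf{X}$ is kept dense here for clarity (the actual implementation iterates over a row-sparse representation), and names are illustrative; as noted above, the row updates are independent and could run in parallel:

```python
import numpy as np

def update_rows(A, B, X, alpha, lam):
    # One pass of proximal-gradient updates over the rows of A with B fixed,
    # as in the inner loop of Alternating Proximal Gradients.
    sB = B.sum(axis=0)
    for u in range(A.shape[0]):
        nz = X[u] > 0
        # sum over non-zero entries of x_ui / (a_u^T b_i) * b_i
        grad = B[nz].T @ (X[u, nz] / (B[nz] @ A[u]))
        A[u] = np.maximum(0.0, (A[u] + alpha * (grad - sB)) / (2.0 * lam * alpha + 1.0))
    return A
```

The item-factor pass is symmetric, swapping the roles of $\mathbf{A}$ and $\mathbf{B}$ and using $\mathbf{s}_A$ in place of $\mathbf{s}_B$.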
The implementation developed here is made open-source and freely available\footnote{\url{https://github.com/david-cortes/poismf}}. The regularization might also be changed to $l_1$ without additional hassle by using the formula in (\ref{l1}) instead, which is more likely to result in some factors having a value of exactly zero. \section{Numerical stability and convergence} This model was fit to different datasets for collaborative filtering with both implicit and explicit feedback data, and it displayed some interesting behaviors. As expected, there can be issues with numerical precision and instability (note that a large step size would turn the whole vector to zeros), but it was possible to fit the model with the same procedure and hyperparameters to many different datasets, obtaining good results in all of them. While iterations of HPF are guaranteed to monotonically increase the training set likelihood, proximal gradient iterations are not, and using the wrong step size might result in all the factors becoming zero or very close to zero, after which the updates would be undefined (division by zero and logarithm of zero). As such, the step sizes and regularization parameter need to be chosen carefully - too much or too little of either, and the parameters will become undefined or too large for typical computer floating point representations. A line search can also help in this regard, but a good combination of hyperparameters made it possible to save the time it requires. While it is beneficial to perform more than one update per iteration, in practice $\tau = 1$ is enough to reach convergence when looking at ranking metrics rather than Poisson loss. The procedure might be terminated after running for a fixed number of iterations, or by some termination criterion based on training or validation-set likelihood or another metric. 
A perhaps more theoretically sound approach would be to use the alternative with the $l_2$ norm incorporated into $f(.)$ rather than into $h(.)$, and to use an initial step size that is the same at the beginning of each iteration, decreasing instead with further updates to the same user/item factors ($\tau$ in the algorithm). This approach however required more iterations to converge (in terms of ranking metrics), with each iteration being much slower as it contains multiple updates, and did not lead to better final results. A different approach was also tried, using a Conjugate Gradient method based on \cite{nncg} that updates one vector at a time just like the proximal gradient approach - it uses a line search and a different way of handling active constraints. For the alternating minimization model used here, it was used with early stopping at 5-10 updates per vector at a time, and was found to benefit from low regularization values or even no regularization at all. It was able to deliver slightly better results according to both ranking metrics and Poisson likelihood, and was less prone to numeric instability, but was many times slower. For datasets in which there is a relatively large proportion of non-zero values, the CG approach, coupled with a large number of alternating iterations, was able to deliver significantly better results, and the approach was also competitive against other methods when evaluated under P@K, whereas the proximal gradient approach oftentimes does poorly in terms of this metric as will be shown in the next section. Another alternative formulation, giving a much higher weight (importance) to the non-missing entries, was also tried; to some degree it helps to avoid failed optimizations in which all parameters turn to zero, but it did not lead to an improvement in ranking metrics. 
The number of steps required to reach a local optimum is also variable – depending on the choice of step size and regularization, for some datasets such as the smaller MovieLens (\cite{movielens}) ones it can get there in as few as 3 iterations, and oftentimes benefits from early stopping. The point at which to stop it however is hard to determine, and oftentimes later iterations will only decrease training likelihood. PF and HPF seem to produce wildly different likelihood values, but despite these large differences, the rankings that they induce on the items are not as dissimilar. While HPF produces relatively large variations in the values of latent factors, when looking at the factors for a single user, in PF with $l_2$ regularization these can all be very close to each other with some minor variation that still produces a good ranking - the actual values across users still resemble gamma distributions though, and using the CG method with lower regularization values leads to factors (when looking at a single user/item) that show a lot more variability and resemblance to gamma distributions. In the datasets experimented with, large step sizes coupled with low regularization result in the parameters quickly becoming undefined. A choice of $\alpha = 10^{-3}$ or $10^{-4}$ can result in reaching an optimum in 2-3 iterations, but more often than not, it results in failed optimization, or in reaching an optimum quickly but then later iterations resulting in undefined parameters. Regularization parameters lower than $10^4 - 10^5$ also seem to result in failure rather often. A range of relatively safe choices seems to be $\alpha = 10^{-6} \: \texttt{-} \: 10^{-8}, \lambda = 10^7 \: \texttt{-} \: 10^{11}, T= 5 \: \texttt{-} \: 20, \tau = 1 \: \texttt{-} \: 5$. Performing more iterations, while still having an effect on likelihood, does not lead to better results when evaluated under ranking metrics. 
For the CG method, the optimal regularization tends to be a much smaller number, and it ends up benefiting from performing more alternating iterations (recommended values for $k = 30 \:\texttt{-}\: 40$: $\lambda = 0 \:\texttt{-}\: 10^7$, $T = 10 \:\texttt{-}\: 100$, $\tau = 1 \:\texttt{-}\: 30$). Using a large number of latent factors and/or low regularization requires many more iterations and/or updates per iteration in order to reach convergence (e.g. for $k = 70 \:\texttt{-}\: 100$, it requires $T = 50 \:\texttt{-}\: 100$, $\tau = 10 \:\texttt{-}\: 30$). Using $l_1$ regularization results in a large drop in the performance metrics (not shown here) under both proximal and conjugate gradient approaches, and requires completely different hyperparameters - much lower regularization and much larger step sizes, along with more iterations and perhaps $\tau > 1$. As expected, it has the nice property that many of the latent factors are exactly zero, and they look gamma-distributed just like in HPF. Varying the number of latent factors (dimensionality of the low-rank approximation) seems to have almost no effect on the induced rankings. Interestingly, the optimal number of latent factors in each dataset tends to be much lower than for other factorization models - for example, while the model with weighted binary entries might perform best in a given dataset when using $k \approx 100$, the Poisson model, in the same dataset, performs better when using $k \approx 30$, and the end result in terms of metrics is virtually the same with either $k = 30$ or $k = 100$. Fitting models with large numbers of latent factors under the CG method requires many more iterations and more updates per iteration in order to reach a local optimum, resulting in a slower procedure when compared to other factorization models. 
Proximal Newton was not able to find local optima as good as those of the other methods, and in the version of the minimization objective with the regularization term incorporated into $h(.)$, the Hessians frequently fail to be positive semi-definite, which is understandable given that $f(.)$ is unbounded. When using large regularization parameters (the same ones that work for Proximal Gradient Descent), Proximal Newton converges in barely 1-2 iterations, in the sense that further updates are then zero or very close to zero for each factor, but these local optima are worse than the ones found by other methods. ADMM for this problem is much slower, and also requires more iterations and $\tau > 1$, but it does manage to find optima that are just as good. Composite Mirror-Prox was too prone to numerical instability in this model framework. \section{Experiments} The model described here, fit through the Alternating Proximal Gradients procedure, was compared against its Bayesian counterpart, Hierarchical Poisson Factorization (HPF, \cite{hpf}) fit through coordinate ascent under mean-field approximation, and against implicit-ALS using the Conjugate Gradient method (\cite{cg}), by fitting models with the same number of factors to the RetailRocket\footnote{\url{https://www.kaggle.com/retailrocket/ecommerce-dataset}}, Last.Fm (\cite{lastfm}) and MillionSong TasteProfile (\cite{millionsong}) datasets, comparing implicit-feedback evaluation metrics along with time spent (in seconds). 
\begin{table}[H] \caption {Dataset Descriptions} \begin{adjustbox}{max width=\textwidth}{\centering \begin{tabular}{|l|c|c|c|c|} \hline \textbf{Dataset} & \textbf{\# Users} & \textbf{\# Items} & \textbf{\# Entries} & \textbf{Sparsity} \\ \hline \textbf{RetailRocket} & 1,407,580 & 235,061 & 2,145,179 & 0.00065\% \\ \hline \textbf{Last.Fm} & 358,868 & 160,113 & 17,535,655 & 0.03\% \\ \hline \textbf{MillionSong} & 1,019,318 & 384,546 & 48,373,586 & 0.012\% \\ \hline \hline \end{tabular}}\end{adjustbox} \end{table} The RetailRocket dataset contains events ``click'', ``add to basket'', ``purchase'' in an e-commerce shop, from different visitors. In order to set a count for each user-item pair, the events were given the following values: click = +1, add to basket = +3, purchase = +3, with the final value for a given user-item pair being the sum of the event values for it. The Last.Fm and MillionSong datasets contain counts of the number of times a song was played by a user in online music listening services. The models were compared by leaving 20\% of the observations at random as test set, then discarding users in the test set who were not in the training set or who had fewer than 3 entries in the test set. Recommendations were evaluated for each user individually by ranking the items that were not in the training set according to the model, and taking P@5 (precision at 5) and area under the ROC curve for these predictions on a random sample of $25,000$ users, with the items in the test set as positive class. The numbers were then averaged across all these $25,000$ users. Additionally, Pearson correlation ($\rho$) was calculated between the model outputs and the values in the test set, but on the whole and not by user. 
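The per-user ranking evaluation just described amounts to the following sketch (a hypothetical helper, not part of any of the compared implementations):

```python
import numpy as np

def precision_at_k(scores, train_items, test_items, k=5):
    # P@k for one user: rank the items not seen in training by model score,
    # then count which fraction of the top k falls in the user's test set.
    candidates = np.setdiff1d(np.arange(scores.size), train_items)
    top = candidates[np.argsort(-scores[candidates])][:k]
    return float(np.isin(top, test_items).mean())
```

The same candidate ranking is reused for the AUC computation, with test-set items as the positive class.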
For Poisson Factorization (PF) and Hierarchical Poisson Factorization (HPF), log-likelihood plus constant was also evaluated on the test set, defined as $LogLik = \sum_{x_{ui} \in \mathbf{test}} -\mathbf{a}_u^T \mathbf{b}_i + x_{ui} \log(\mathbf{a}_u^T \mathbf{b}_i)$. The experiments were run on a Google Cloud server with Skylake CPU, using 16 cores. The times tracked include only time spent in the respective optimization routines and do not account for initialization, allocation of intermediate matrices, evaluation of termination criteria, or others. All hyper-parameters were set to the defaults recommended in their respective implementations - for HPF, these were $a, a', c, c' = 0.3, b', d' = 1$, and for implicit-ALS, 15 iterations and regularization parameter $10^{-2}$. HPF was run until the percent increase in training likelihood was below a certain threshold instead of running for a fixed number of iterations. For PF (this work), the hyper-parameters were set to $T=10, \tau=1, \alpha=10^{-7}, \lambda = 10^9$, with only the number of factors $k$ varying - the values tried for each model were 40, 70, 100. 
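The held-out log-likelihood defined above can be computed directly from the test triplets (a sketch; the triplet layout and names are assumptions for illustration):

```python
import numpy as np

def heldout_loglik(A, B, rows, cols, vals):
    # Log-likelihood plus constant over held-out triplets:
    # sum_{x_ui in test} -a_u^T b_i + x_ui log(a_u^T b_i).
    dots = np.einsum('ij,ij->i', A[rows], B[cols])
    return float(np.sum(-dots + vals * np.log(dots)))
```

The dropped $-\log(x_{ui}!)$ term is constant in the parameters, so it does not affect comparisons between models.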
\begin{table}[H] \caption {Results on RetailRocket Dataset} \begin{adjustbox}{max width=\textwidth}{\centering \begin{tabular}{|l|c||c|c|c|c|c|} \hline \textbf{Model} & \textbf{k} & \textbf{P@5} & \textbf{AUC} & $\mathbf{\rho}$ & \textbf{LogLik} & \textbf{Time (s)} \\ \hline \textbf{PF} & 40 & 0.0020 & \textbf{0.8704} & \textbf{0.1404} & $\mathbf{-1.7 \times 10^6}$ & \textbf{1.36} \\ \hline \textbf{HPF} & 40 & 0.0049 & 0.8570 & 0.0370 & $-2.9 \times 10^{7}$ & 69.25 \\ \hline \textbf{implicit-ALS} & 40 & \textbf{0.0256} & 0.7778 & 0.0991 & - & 8.03 \\ \hline \hline \textbf{PF} & 70 & 0.0020 & \textbf{0.8708} & \textbf{0.1404} & $\mathbf{-1.7 \times 10^6}$ & \textbf{2.25} \\ \hline \textbf{HPF} & 70 & 0.0039 & 0.8587 & 0.0275 & $-3.0 \times 10^{7}$ & 86.78 \\ \hline \textbf{implicit-ALS} & 70 & \textbf{0.0345} & 0.8020 & 0.1085 & - & 13.41 \\ \hline \hline \textbf{PF} & 100 & 0.0020 & \textbf{0.8706} & \textbf{0.1404} & $\mathbf{-1.7 \times 10^6}$ & \textbf{2.52} \\ \hline \textbf{HPF} & 100 & 0.0049 & 0.8633 & 0.0450 & $-2.4 \times 10^{7}$ & 161.59 \\ \hline \textbf{implicit-ALS} & 100 & \textbf{0.0384} & 0.8179 & 0.109 & - & 19.08 \\ \hline \hline \end{tabular}}\end{adjustbox} \end{table} \begin{table}[H] \caption {Results on Last.Fm Dataset} \begin{adjustbox}{max width=\textwidth}{\centering \begin{tabular}{|l|c||c|c|c|c|c|} \hline \textbf{Model} & \textbf{k} & \textbf{P@5} & \textbf{AUC} & $\mathbf{\rho}$ & \textbf{LogLik} & \textbf{Time (s)} \\ \hline \textbf{PF} & 40 & 0.0636 & 0.9543 & \textbf{0.2600} & $\mathbf{-8.03 \times 10^{9}}$ & \textbf{3.38} \\ \hline \textbf{HPF} & 40 & 0.0815 & 0.9757 & 0.2540 & $-6.70 \times 10^{13}$ & 269.21 \\ \hline \textbf{implicit-ALS} & 40 & \textbf{0.1410} & \textbf{0.9810} & 0.2104 & - & 92.15 \\ \hline \hline \textbf{PF} & 70 & 0.0636 & 0.9543 & \textbf{0.2600} & $\mathbf{-8.03 \times 10^{9}}$ & \textbf{5.17} \\ \hline \textbf{HPF} & 70 & 0.0878 & 0.9764 & 0.2446 & $-6.70 \times 10^{13}$ & 434.53 \\ \hline 
\textbf{implicit-ALS} & 70 & \textbf{0.1458} & \textbf{0.9790} & 0.1985 & - & 123.32 \\ \hline \hline \textbf{PF} & 100 & 0.0636 & 0.9543 & \textbf{0.2600} & $\mathbf{-8.03 \times 10^{9}}$ & \textbf{6.81} \\ \hline \textbf{HPF} & 100 & 0.0765 & 0.9752 & 0.2256 & $-6.71 \times 10^{13}$ & 594.71 \\ \hline \textbf{implicit-ALS} & 100 & \textbf{0.1509} & \textbf{0.9759} & 0.1889 & - & 160.03 \\ \hline \hline \end{tabular}}\end{adjustbox} \end{table} \begin{table}[H] \caption {Results on MillionSong Dataset} \begin{adjustbox}{max width=\textwidth}{\centering \begin{tabular}{|l|c||c|c|c|c|c|} \hline \textbf{Model} & \textbf{k} & \textbf{P@5} & \textbf{AUC} & $\mathbf{\rho}$ & \textbf{LogLik} & \textbf{Time (s)} \\ \hline \textbf{PF} & 40 & 0.0268 & 0.9312 & \textbf{0.1307} & $\mathbf{-4.4386 \times 10^8}$ & \textbf{9.95} \\ \hline \textbf{HPF} & 40 & 0.0358 & \textbf{0.9608} & 0.1142 & $-2.245 \times 10^{12}$ & 3705.01 \\ \hline \textbf{implicit-ALS} & 40 & \textbf{0.073} & 0.8920 & 0.1302 & - & 186.9 \\ \hline \hline \textbf{PF} & 70 & 0.0268 & 0.9313 & 0.1307 & $\mathbf{-4.4387 \times 10^8}$ & \textbf{14.64} \\ \hline \textbf{HPF} & 70 & 0.0334 & \textbf{0.9609} & 0.1104 & $-2.236 \times 10^{12}$ & 6222.11 \\ \hline \textbf{implicit-ALS} & 70 & \textbf{0.0843} & 0.8835 & \textbf{0.1309} & - & 244.66 \\ \hline \hline \textbf{PF} & 100 & 0.0268 & 0.9313 & \textbf{0.1307} & $\mathbf{-4.4388 \times 10^8}$ & \textbf{20.08} \\ \hline \textbf{HPF} & 100 & 0.0339 & \textbf{0.9605} & 0.1006 & $-2.232 \times 10^{12}$ & 8004.14 \\ \hline \textbf{implicit-ALS} & 100 & \textbf{0.0924} & 0.8793 & 0.1300 & - & 313.7 \\ \hline \hline \end{tabular}}\end{adjustbox} \end{table} \section{Conclusions and discussion} This work presented an optimization-based approach towards Poisson matrix factorization which is especially well suited to very sparse inputs. 
While the model is in principle similar to its Bayesian counterpart HPF, the procedure for fitting it to observed data is very different and is based on proximal gradients rather than on variational inference or MCMC. The alternating minimization approach presented here has faster iterations than HPF (Hierarchical Poisson Factorization) or implicit-ALS, and requires fewer iterations to reach a local optimum, turning out to be 2-3 orders of magnitude faster than HPF with variational inference, and 1 order of magnitude faster than implicit-ALS with the CG method for the dataset sizes evaluated, but it is more prone to numerical instability issues. Ranking metrics were evaluated on 3 implicit-feedback datasets for collaborative filtering. In 2 of these datasets, it managed to achieve a better $\rho$ than the other algorithms, but did not achieve as good a P@5. HPF also seemed to result in far worse P@5 than implicit-ALS. Although in some cases it did not fare as well as implicit-ALS under ranking metrics, these metrics are still reasonably good and much better than a random choice (e.g. AUC $> 0.85$ in all cases), showing a lot of promise for applications of PF to very large-scale datasets. While the model presented here was meant to be fit to user-item-count triplets only, it should be easy to expand the model to incorporate other sparse count data about users and items (such as text descriptions as bag-of-words) in the same form as in \cite{cmf}, which is a task for future work. \bibliographystyle{plain}
\section{Introduction} What is the inherent parallelism of higher-order functional programs? Is it possible to turn $\lambda$-terms into low-level programs, at the same time exploiting this parallelism? Despite great advances in closely related domains, these questions have not yet received a definite answer. The main difficulties one faces when dealing with parallelism and functional programs are due to the higher-order nature of those programs, which turns them into objects having a non-trivial interactive behaviour. The most promising approaches to the problems above are based on Game Semantics~\cite{HylandO00,AbramskyJM00} and the Geometry of Interaction~\cite{Girard89} (GoI), themselves tools which were introduced with purely semantic motivations, but which have later been shown to have links to low-level formalisms such as asynchronous circuits~\cite{GhicaSS11}. This is especially obvious when the Geometry of Interaction is presented in its most operational form, namely as a token machine~\cite{DanosRegnier}. Most operational accounts of the Geometry of Interaction are in \emph{particle-style}, i.e., a \emph{single} token travels around the net; this is largely due to the fact that parallel computation without any form of synchronization or data sharing is not particularly useful, so having multiple tokens would not add anything to the system. While some form of synchronization was implicit in earlier presentations of GoI, the latter has been given a proper status only recently, with the introduction of \textsf{SMLL}${}^0$~\cite{lics2014}, where \emph{multiple} tokens circulate simultaneously, and also \emph{synchronize} at a new kind of node, called a \emph{sync node}. 
All this has been realized in a minimalistic logic, namely multiplicative linear logic, a logical system which lacks any copying (or erasing) capability and, thus, is not an adequate model of realistic programming languages (except purely linear ones, whose role is relevant in quantum computation~\cite{SelingerValiron05}). Multitoken GoI machines are relatively straightforward to define in a linear setting: all \emph{potential} sources of parallelism give rise to \emph{actual} parallelism, since erasing and copying are simply forbidden. As a consequence, managing parallelism, and in particular the spawning of new tokens, is easy: the mere syntactical occurrence of a source of parallelism triggers the creation of a new token. Concretely, these sources of parallelism are \emph{unit nodes} (when thought logically), or \emph{constants} (when read through the lenses of functional programming). The reader will find an example in Section~\ref{sect:multexpo}, Fig.~\ref{fig:potential}. But can all this scale to more expressive proof theories and programming formalisms? If programs or proofs are allowed to copy or erase portions of themselves, the correspondence between potential and actual parallelism vanishes: any occurrence of a unit node can possibly be erased, thus giving rise to \emph{no} token, or copied, thus creating \emph{more than one} token. The underlying interactive machinery, then, necessarily becomes more complex. But \emph{how}? The solution we propose here relies on linear logic itself: it is the way copying and erasing are handled by the exponential connectives of linear logic which gives us a way out. We find the resulting theory simple and elegant. In this paper we generalize the ideas behind \textsf{SMLL}${}^0${} in giving a proper status to synchronization and parallelism in GoI. 
We show that multiple tokens and synchronization can work well together in a \emph{very expressive} logical system, namely multiplicative linear logic with \emph{exponentials}, \emph{fixpoints}, and \emph{units}. The resulting system, called \textsf{SMEYLL}, is then general enough to simulate universal models of functional programming: we prove that \ensuremath{\mathsf{PCF}}{} can be embedded into \textsf{SMEYLL}, both when call-by-name and call-by-value evaluation are considered. The latter is out of reach for single-token machines, as we illustrate in Section~\ref{sect:multexpo}. This is a version, extended with proofs and more details, of an eponymous paper \cite{lics2015} which appeared in the proceedings of the Thirteenth Annual Symposium on Logic in Computer Science. \subsection*{Contributions} This paper's main contributions can be summarized as follows: \begin{varitemize} \item \emph{An Expressive Logical System.} We introduce \textsf{SMEYLL}{} nets, whose expressiveness is increased over $\textsf{MELL}$ nets by several constructs: we have \emph{fixpoints} (captured by the $Y$-box), an operator for \emph{synchronization} (the sync node), and a \emph{primitive conditional} (captured by the $\bot$-box). The presence of fixpoints forces us to consider a restricted notion of reduction, namely closed \emph{surface reduction} (\emph{i.e.}, reduction never takes place inside a box). Cuts can \emph{not} be eliminated (in general) from \textsf{SMEYLL}{} proofs, as one expects in a system with fixpoints. Reduction, however, is proved to be \emph{deadlock-free}, \emph{i.e.}, normal forms cannot contain surface cuts. \item \emph{A Multitoken Interactive Machine.} \textsf{SMEYLL}{} nets are seen as interactive objects through their synchronous interactive abstract machine (\textsf{SIAM}{} in the following). Multiple tokens circulate around the net simultaneously, and synchronize at sync nodes. 
We prove that the \textsf{SIAM}{} is an \emph{adequate computational model}, in the sense that it precisely reflects normalization through machine execution. The other central result about the \textsf{SIAM}{} is \emph{deadlock-freeness}, \emph{i.e.}, if the machine terminates it does so in a final state. In other words, the execution does not get stuck, which in principle could happen as we have several tokens running in parallel and to which we apply guarded operators (\emph{e.g.}, synchronization). Our proof comes from the interplay of nets and machines: we transfer \emph{termination} from machines to nets, and then transfer \emph{back deadlock-freeness} from nets to machines. \item \emph{A Fresh Look at CBV and CBN.} A slight variation on {\textsf{SMEYLL}} nets, and the corresponding notion of interactive machine, is shown to be an adequate model of reduction for Plotkin's \ensuremath{\mathsf{PCF}}~\cite{Plotkin}. This works both for call-by-name and call-by-value evaluation and, noticeably, the \emph{same} interactive machine is shown to work in \emph{both} cases: what drives the adoption of each of the two mechanisms is, simply, the translation of terms into proofs. What is surprising here is that CBV can be handled by a stateless interactive machine, even without the need to go through a CPS translation. This is essentially due to the presence of multiple tokens. \item \emph{New Proof Techniques.} {Deadlock-freeness} is a key issue when working with multitoken machines. A direct scheme to prove it (the one used in \cite{lics2014}) would be: (i) prove cut elimination for the nets, (ii) prove soundness for the machine, and (iii) deduce deadlock-freeness from (i) and (ii). However, in a setting with fixpoints, cut elimination is not available because termination simply does not hold\footnote{Even without fixpoints, there is to the authors' knowledge no direct combinatorial proof of termination for surface reduction.}. 
Instead, we develop a new technique, which heavily exploits the interplay between net rewriting and the multitoken machine. Namely, we \emph{transfer} termination of the machine (including termination as a deadlock) into termination of the nets. This combinatorial technique is novel and uses multiple tokens in an essential way. It appears to be of technical interest in its own right. \end{varitemize} \subsection*{Related Work} Almost thirty years after its introduction, the literature on GoI is vast. Without any aim of being exhaustive, we only mention the works which are closest in spirit to what we are doing here. The fact that GoI can be turned into an implementation scheme for purely functional (but expressive) $\lambda$-calculi has been observed since the beginning of the nineties~\cite{DanosRegnier,Mackie95}. Among the different ways GoI can be formulated, both (directed) virtual reduction and bideterministic automata have been shown to be amenable to such a treatment. In the first case, parallel implementations~\cite{PediciniQ07,Pinto01} have also been introduced. We claim that the kind of parallel execution we obtain in this work is different, being based on the underlying automaton and not on virtual reduction. The fact that GoI can simulate call-by-name evaluation is well-known, and indeed most of the earlier results relied on this notion of reduction. As in games~\cite{AbramskyM97}, call-by-value requires a more sophisticated machinery to be handled by GoI. This machinery, almost invariably, relies on effects~\cite{HoshinoMH14,Schopp14}, even when the underlying language is purely functional. This paper suggests an alternative route, which consists in making the underlying machine parallel, nodes staying stateless. Another line of work is definitely worth mentioning here, namely Ghica and coauthors' Geometry of Synthesis~\cite{Ghica07,GhicaS10}, in which GoI suggests a way to compile programs into circuits. 
The obtained circuit, however, is bound to be sequential, since the interaction machinery on which everything is based is particle-style. On the side of nets, Y-boxes allow us to handle \emph{recursion}. A similar box was originally introduced by Montelatici~\cite{Montelatici03}, even though in a polarized setting. Our Y-box differs from it both in the typing and in the dynamics; these differences are what make it possible to build a GoI model. \section{On Multiple Tokens and the Exponentials}\label{sect:multexpo} In this section, we will explain through a series of examples \emph{how} one can build a multitoken machine for a non-linear typed $\lambda$-calculus, and \emph{why} this is not trivial. Let us first consider a term computing a simple arithmetical expression, namely $M=(\lambda x.\lambda y.x+y)(4-2)(1+2)$. This term evaluates to $5$ and is purely linear, i.e. the variables $x$ and $y$ appear exactly once in the body of the abstraction. How could one evaluate this term trying to exploit the inherent parallelism in it? Since we \emph{a priori} know that the term is linear, we know that the subexpressions $S=(4-2)$ and $T=(1+2)$ are indeed needed to compute the result, and thus can be evaluated in parallel. The subexpression $x+y$ could be treated itself this way, but its arguments are missing, and should be waited for. What we have just described is precisely the way the multitoken machine for \textsf{SMLL}${}^0${} works~\cite{lics2014}, as in Fig.~\ref{fig:potential} (left): each constant in the underlying proof gives rise to a separate token, which flows towards the result. Arithmetical operations act as synchronization points. \begin{figure}[h] \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center}\includegraphics[width=8cm]{potential}\end{center} \end{minipage}} \end{center} \caption{Actual vs. 
Potential Parallelism.}\label{fig:potential} \end{figure} Now, consider a slight variation on the term $M$ above, namely $N=(\lambda x.\lambda y.x+x)(4-2)(1+2)$. The term has a different normal form, namely $4$, and is \emph{not} linear, for two different reasons: on the one hand, the variable $x$ is used twice, and on the other, the variable $y$ is not used at all. How should one proceed, then, if one wants to evaluate the term in parallel? One possibility consists in evaluating subexpressions \emph{only if} they are really needed. Since the subexpression $x+x$ is of course needed (it is, after all, the result!), one can start evaluating it. The value of the variable $x$, as a consequence, is needed, and the subexpression it will be substituted for, namely $4-2$, must itself be evaluated. On the other hand, $1+2$ should not be evaluated, simply because its value does not contribute to the final result. This is precisely what call-by-name evaluation actually does. The interactive machine which we define in this paper captures this process. Notice, in particular, that discovering that one of the subexpressions is needed, while the other is not, requires some work. The way we handle all this is strongly related to the structure of the exponentials in linear logic. We give the CBN translation of $N$ in Fig.~\ref{fig:potential} (right). The two rightmost subterms are translated into exponential boxes (where $\mathsf{S}$ is the net for $4-2$ and $\mathsf{T}$ for $1+2$), which serve as \emph{boundaries} for parallelism: whatever potential parallelism a box includes must be triggered before giving rise to actual parallelism. Each of the occurrences of the variable $x$ triggers a new kind of token, which starts from the dereliction nodes ($?d$) at the surface and whose purpose is precisely to look for the box the variable will be substituted for. We call these \emph{dereliction tokens}.
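The needed-only discipline just described can be replayed outside of nets, as a small sketch in plain Python (the thunk encoding and the logging are ours, not part of the machine): arguments are passed unevaluated, and we record which subexpressions are actually forced.

```python
# Call-by-name sketch of N = (\x.\y. x+x)(4-2)(1+2): arguments are
# passed as thunks; only the subexpressions the body really needs are
# ever computed. Without sharing, x is forced once per occurrence,
# matching the fact that each occurrence triggers its own token.
forced = []

def thunk(name, compute):
    """Delay a subexpression; log its name each time it is forced."""
    def force():
        forced.append(name)
        return compute()
    return force

def body(x, y):
    # body of \x.\y. x+x : the argument y is never forced
    return x() + x()

result = body(thunk("4-2", lambda: 4 - 2),
              thunk("1+2", lambda: 1 + 2))
# result is 4, and forced is ['4-2', '4-2']: 1+2 is never evaluated
```

As expected, $1+2$ never contributes to the result; discovering this at the level of nets is exactly the job of the dereliction tokens.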
What happens if we rather want to be consistent with call-by-\emph{value} evaluation? In this case, both subterms $(4-2)$ and $(1+2)$ in the term $N$ above should be evaluated. Let us however consider a more extreme example, in which call-by-name and call-by-value have different \emph{observable} behaviors, namely the term $L=(\lambda x.1)\Omega$, where $\Omega=(\lambda x.xx)(\lambda x.xx)$. The call-by-value evaluation of $L$ gives rise to divergence, while in call-by-name $L$ evaluates to $1$. Something extremely interesting happens here. We give the call-by-value translation of $L$ in Fig.~\ref{fig:cbvomega}. \begin{figure}[h] \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center}\includegraphics[width=8cm]{cvbDiverge2}\end{center} \end{minipage}} \end{center} \caption{The CBV-translation of $(\lambda x.1)\Omega$.}\label{fig:cbvomega} \end{figure} First of all, we observe that a standard \emph{single-token} machine would start from the conclusion, find the node $\mathsf{one}$, and exit again: such a machine would simply converge on the term $L$. When running on the term $\Omega$ alone, the machine would diverge; but as a subterm of $L$, $\Omega$ is never reached, so the machine's behaviour on $L$ is not the one we would expect in call-by-value. Our multitoken machine, instead, simultaneously launches tokens from all dereliction nodes at the surface: the dereliction token coming out of $\Omega$ (represented on the right in Fig.~\ref{fig:cbvomega}) reaches the Y-box, and makes the machine diverge. We end this section by stressing that the interactive machine we use is one and the same: it correctly models both CBN and CBV, depending solely on the chosen translation of terms into nets. The call-by-name translation of $L$ puts the subterm $\Omega$ in a box which is simply unreachable from the rest of the net (as in the case of $\mathsf{T}$ in Fig.~\ref{fig:potential}), and our machine converges as expected.
The call-by-value translation of $L$, on the other hand, does \emph{not} put $\Omega$ inside a box. As a consequence, there is no barrier to the computation to which $\Omega$ gives rise---the same as if $\Omega$ were on its own---and our machine correctly diverges. This is the key difficulty in any interactive treatment of CBV, and we claim that the way we have solved it is novel. \section{Nets and a Multitoken Interactive Machine}\label{sec:SMELLY} We start with an overview of this section, which is divided into four parts. \paragraph*{Nets and Their Dynamics} \textsf{SMEYLL}\ nets come with \emph{rewriting} rules, which provide an operational semantics for them, and with a \emph{correctness} criterion, which ultimately guarantees that net rewriting is deadlock-free. \paragraph*{Multitoken Machines} On any net we define a \emph{multitoken} machine, called \textsf{SIAM}, which provides an effective computational model in the style of GoI. A fundamental property we need to check for the machine is \emph{deadlock-freeness}, \emph{i.e.}, if the machine terminates it does so in a final state. Since the beginnings of linear logic, the correctness criterion of nets has been interpreted as deadlock-freeness in distributed systems \cite{Asperti}; this is also the case for \textsf{SMEYLL}. Here, however, we work with surface reduction, and we have fixpoints. For these reasons, a rather refined approach is needed. \paragraph*{The Interplay Between Nets and Machines} Net rewriting and the \textsf{SIAM}{} are tightly related. We establish the following results. Let $R$ denote a net, $\mathcal{M}_R$ its machine, and $\red$ the net rewriting relation. First of all, we know that (i) if $R$ is cut-and-sync-free, the machine $\mathcal{M}_R$ terminates in a final state. On the net side, we establish that (ii) \emph{net rewriting is deadlock-free}: if no reduction is possible from $R$, then $R$ is cut-and-sync-free.
On the machine side, we establish that (iii) if $R\red S$ then $\mathcal{M}_R$ converges/deadlocks iff the same holds for $\mathcal{M}_{S}$. We then use the multitoken paradigm to provide a decreasing parameter for net rewriting, and establish that (iv) if $\mathcal{M}_R$ terminates, then $R$ has no infinite sequence of reductions. Putting all this together, it follows that \emph{multitoken machines are deadlock-free}. \paragraph*{Computational Semantics} Finally, by using the machine representation, we associate a denotational semantics to nets, which we prove to be sound with respect to net reduction. \subsection{Nets and Their Dynamics.} In this section we introduce \textsf{SMEYLL}\ nets, which are a generalization of \textsf{MELL}\ proof nets. For a detailed account of proof nets, we refer the reader to Laurent's notes \cite{LaurentTorino}: our approach to correctness, as well as the way we deal with weakening, is very close to the one described there. \subsubsection{Formulas} The language of \textsf{SMEYLL}{} \emph{formulas} is identical to the one for \textsf{MELL}: $$ A ::= \mathsf{1}\; \; \mbox{\Large{$\mid$}}\;\; \bot\; \; \mbox{\Large{$\mid$}}\;\; X\; \; \mbox{\Large{$\mid$}}\;\; X\b\; \; \mbox{\Large{$\mid$}}\;\; A\otimes A\; \; \mbox{\Large{$\mid$}}\;\; A\parr A\; \; \mbox{\Large{$\mid$}}\;\; !A\; \; \mbox{\Large{$\mid$}}\;\; ?A, $$ where $X$ ranges over a denumerable set of {propositional variables}. The constants $1,\bot$ are the \emph{units}. \emph{Atomic formulas} are those formulas which are either propositional variables or units. Linear negation $(\cdot)\b$ is extended into an involution on all formulas as usual: $A\b\b\equiv A$, $1\b\equiv \bot$, $(A\otimes B)\b\equiv A\b\parr B\b$, $(!A)\b \equiv{} ?A\b$. Linear implication is a defined connective: $A\lin B\equiv A\b\parr B$. Atoms and connectives of linear logic are usually divided into two classes: positive and negative.
Here however, we call \emph{positive} (denoted by $P$) and \emph{negative} (denoted by $N$) those formulas which are built from units in the following way: $P ::= \mathsf{1} \; \; \mbox{\Large{$\mid$}}\;\; P\otimes P$, and $N ::= \bot \; \; \mbox{\Large{$\mid$}}\;\; N\parr N$. So in particular, there are formulas which are neither positive nor negative, e.g.\ $\bot\parr\mathsf{1}$. \subsubsection{Structures} A \textsf{SMEYLL}{} \emph{structure} is a finite labeled \emph{directed} graph built over the alphabet of nodes represented in Fig.~\ref{SMELLYnets} (where the \emph{orientation} is the top-bottom one). All edges have a source, but some edges may have no target; such dangling edges are called the \emph{conclusions of the structure}. The edges are labeled with \textsf{SMEYLL}{} formulas; the label of an edge is called its \emph{type}. We call those edges which are represented below (resp. above) a node \emph{conclusions} (resp. \emph{premisses}) of the node. We will often say that a node ``has a conclusion (premiss) $A$'' as a shortcut for ``has a conclusion (premiss) of type $A$''. When we need more precision, we distinguish between an edge and its type, and we use variables such as $e,f$ for the edges. The nodes $! $, $Y$ and $\bot$ are called \emph{boxes}. One among their conclusions (the leftmost ones in Fig.~\ref{SMELLYnets}, which have type $!A$, $!A$ and $\bot$, respectively) is said to be \emph{principal}, the other ones being \emph{auxiliary}. $!$-boxes and Y-boxes are \emph{exponential}. An exponential box is \emph{closed} if it has no auxiliary conclusions. To each box is associated, in an inductive way, a structure which is called the \emph{content} of the box. To the $!$-box we associate a structure with conclusions $A,?\Gamma$. To the Y-box corresponds a structure with conclusions $A, ?A\b, ?\Gamma$. To the {$\bot$-box} is associated a structure with non-empty conclusions $\Gamma$, together with a new node $\mathsf{bot}$ of conclusion $\bot$.
We represent a box $b$ and its content $S$ as in Fig.~\ref{SMELLYboxes}. With a slight abuse of terminology, the nodes and edges of $S$ are said to be \emph{inside} $b$. Similarly, a crossing of any box's border is said to be a \emph{door}, and we often speak of the premiss and conclusion \emph{of a} (principal or auxiliary) \emph{door}. Note that the principal door of the Y-box (marked by Y) has premisses $A,?A\b$ and conclusion $!A$. A node \emph{occurs at depth 0} or \emph{at surface} in the structure $R$ if it is a node of $R$, while it \emph{occurs at depth $n+1$} in $R$ if it occurs at depth $n$ in a structure associated to a box of $R$. Observe that, since nets are defined inductively, the depth of a node is always finite. \begin{figure}[htbp] \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center}\includegraphics[width=8cm]{SMELLYnets}\end{center} \end{minipage}} \end{center} \caption{\textsf{SMEYLL}\ Nodes.}\label{SMELLYnets} \end{figure} \begin{figure}[htbp] \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center}\includegraphics[width=8cm]{SMELLYboxes}\end{center} \end{minipage}} \end{center} \caption{\textsf{SMEYLL}\ Boxes.}\label{SMELLYboxes} \end{figure} The sort of each node induces constraints on the number and the labels of its premisses and conclusions, which are shown in Fig.~\ref{SMELLYnets}. We observe that the $\bot$-box is \emph{the same} as in \cite{Girard87} and corresponds to the sequent calculus rule $\infer[]{ \vdash \bot, \Gamma }{\vdash \Gamma}$. All nodes are standard except sync nodes and Y-boxes, which need some further explanation: \begin{varitemize} \item Y-boxes model \emph{recursion} (more on this when we introduce the reduction rules). Proof-theoretically, the Y-box corresponds to adding the following fix-point sequent calculus rule to \textsf{MELL}: $$ \infer[Y]{ \vdash !A,?\Gamma }{\vdash A, ?A\b,?\Gamma} $$ \item Sync nodes model {\em synchronization points}.
A sync node has $n$ premisses and $n$ conclusions; for each $i$ ($1\leq i\leq n$) the $i$-th premiss and the \emph{corresponding} $i$-th conclusion are typed by the \emph{same} formula, which needs to be \emph{positive}. \end{varitemize} \vskip 4pt \noindent\emph{Simple and positive structures.} Two relevant classes of structures are simple and positive structures. A formula is \emph{simple} if it is built out of $\{X,X\b, 1, \otimes, \parr\}$. A structure $R$ is \emph{simple (resp. positive)} if all its conclusions are simple (resp. positive) formulas. This \emph{does not} mean that all formulas occurring in $R$ are simple (resp. positive). $R$ can be very complex; the constraint only deals with $R$'s conclusions. \subsubsection{Correctness} A \emph{net} is a structure which fulfills a \emph{correctness criterion} defined by means of {switching paths} (see \cite{LaurentTorino}). A \emph{switching path} on the structure $R$ is an undirected path\footnote{By path, in this paper we always mean a \emph{simple path} (no repetition of either nodes or edges).} such that (i) for each $\parr$-or-$?c$-node, the path uses at most one of the two \emph{premisses}, and (ii) for each sync node, the path uses at most one of the \emph{conclusions}. The former condition is standard; the latter rules out paths which bounce on sync nodes ``from below'': a path crossing a sync node may traverse one premiss and one conclusion, or traverse two distinct premisses. A structure is \emph{correct} if: \begin{varenumerate} \item none of its switching paths is cyclic, and \item the content of each of its boxes is itself correct. \end{varenumerate} The reader familiar with linear logic correctness criteria has probably noticed that the only condition we require is acyclicity, and that connectedness is simply not enforced (as, e.g., in Danos and Regnier's criterion~\cite{DanosRegnierMult}).
Actually, the only role of connectedness consists in ruling out the so-called Mix rule from the sequent calculus. This is not relevant in our development, so we will ignore it. An advantage of accepting the Mix rule is that we do not need extra conditions for dealing with weakening. A similar approach is adopted by Laurent~\cite{LaurentTorino}. In the following, when we talk of \textsf{MELL}{} (resp. \ensuremath{\textsf{MLL}}{}), we actually always mean \textsf{MELL}{} + Mix (resp. \ensuremath{\textsf{MLL}}{} + Mix). \subsubsection{Net Reduction} Reduction rules for nets are sketched in Fig.~\ref{SMELLYred}. Reduction rules can be applied only at surface (\emph{i.e.}, when the redex occurs at depth $0$), and not in an arbitrary context. Moreover, observe that reduction rules involving an \emph{exponential} box can only be applied when the box is \emph{closed}, \emph{i.e.}, when it has no auxiliary doors. We write $\rightsquigarrow$ for the rewriting relation induced by these rules. Some reduction rules deserve further explanation: \begin{varitemize} \item The $y$-rule unfolds a Y-box, this way modeling recursion. The intuition should be clear when looking at the translation of the \ensuremath{\mathsf{PCF}}\ term $L=\PCFletrec{f}{x}{M}{N}$, which reduces to the explicit substitution of $f$ by $\lambda x.\PCFletrec{f}{x}{M}{M}$ in $N$, call it $P$. Indeed, the encoding of $L$ reduces to the encoding of $P$: \begin{center} \includegraphics[width=6cm]{Yexample}\label{fig:Yexample} \end{center} (where $M^\dagger$ and $N^\dagger$ stand for the encodings of $M$ and $N$, respectively). When (and only if!) $N$ recursively calls $f$, the corresponding $d$ node ``opens'' the $!$-box for the first iteration of $f$; if $f$ further uses a recursive call of itself, the $Y$-box again turns into yet another $!$-box and is opened, and so on. \item The $s.el$-rule erases a sync link whose premisses are \emph{all} conclusions of $\mathsf{one}$ nodes.
\item The $w$-rule, corresponding to a cut with weakening, \emph{deletes} the redex (because the box has no auxiliary conclusions). \item The $bot.el$-rule opens a $\bot$-box. \end{varitemize} \begin{figure}[htbp] \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center} \includegraphics[width=8cm]{SMELLYred_refined} \end{center} \end{minipage}} \end{center} \caption{\textsf{SMEYLL}\ Net Rewriting Rules.}\label{SMELLYred} \end{figure} It is immediate to check that correctness is preserved by all reduction rules. \begin{lemma} If $R$ is a net and $R\rightsquigarrow S$, then $S$ is itself a net. \end{lemma} Since the constraints exclude most of the commutations which are present in \textsf{MELL}, rewriting enjoys a strong form of confluence: \begin{prop}[Confluence and Uniqueness of Normal Forms]\label{net_conf} The rewriting relation $\rightsquigarrow$ has the following properties: \begin{varenumerate} \item it is confluent and normal forms are unique; \item any net weakly normalizes iff it strongly normalizes. \end{varenumerate} \end{prop} \begin{proof} The only critical pairs are the trivial ones of \ensuremath{\textsf{MLL}}, leading to the same net. Therefore, reduction enjoys a diamond property (uniform confluence): if $R \rightsquigarrow S$ and $R\rightsquigarrow T$, then either $S=T$ or there exists $U$ such that $S\rightsquigarrow U$ and $T\rightsquigarrow U$. (1) and (2) are direct consequences. \end{proof} The strict constraints on rewriting, however, render cut elimination non-trivial: it is not obvious that a reduction step is available whenever a cut is present. We need to prove that in the presence of a cut, there is always a valid redex (\emph{i.e.}, it is at surface, and any exponential box acted upon is closed). The main difficulty comes from $\bot$-boxes, as they can hide large parts of the net, and in particular dereliction nodes which may be necessary to fire a reduction.
However, the following establishes that as long as there are cuts or syncs, it is always possible to perform a valid reduction. \begin{thm}[Deadlock-Freeness for Nets]\label{main_lem} Let $R$ be a simple \textsf{SMEYLL}{} net. If $R$ contains cuts or sync nodes, then a reduction applies, \emph{i.e.}\ there exists $S$ such that $R\rightsquigarrow S$. \end{thm} The rather long proof is given in Appendix~\ref{app:SMELLY}. The key element is the definition of an order on the boxes which occur at depth $0$ in $R$; the existence of such an order relies on the correctness criterion. The order captures the dependency among boxes, \emph{i.e.}, it exposes the order in which cuts are eliminated. \begin{cor}[Cut Elimination] \label{cutel} Let $R$ be a simple \textsf{SMEYLL}\ net. If $R\rightsquigarrow^* S$ and $S$ cannot be further reduced, then $S$ is a cut-free \ensuremath{\textsf{MLL}}{} net\footnote{Precisely, \ensuremath{\textsf{MLL}}{} + Mix, as we have already pointed out.}, \emph{i.e.}, it contains only $\mathsf{ax}$, $\mathsf{one}$, $\otimes$, $\parr$ nodes. \end{cor} \paragraph*{Discussion on simple structures} The hypothesis in Theorem~\ref{main_lem} that the structure is simple is an assumption which we use in this section only \emph{to simplify auxiliary lemmas}; it will not appear in our main result, namely Theorem~\ref{SIAM deadlock free}. \subsection{\textsf{SIAM}}\label{SIAM} Throughout this section, $R$ indicates a \textsf{SMEYLL}\ structure (with no other hypothesis, unless otherwise stated). \subsubsection{Preliminary Notions} Some auxiliary definitions are needed before we can introduce our interactive machines.
\emph{Exponential signatures} are defined by the following grammar $$ \sigma ::= {*} \; \; \mbox{\Large{$\mid$}}\;\; l(\sigma) \; \; \mbox{\Large{$\mid$}}\;\; r(\sigma) \; \; \mbox{\Large{$\mid$}}\;\; \encode{\sigma}{\sigma} \; \; \mbox{\Large{$\mid$}}\;\; y(\sigma,\sigma), $$ while \emph{stacks} are defined as follows $$ \mathord{s} ::= \epsilon \; \; \mbox{\Large{$\mid$}}\;\; l.\mathord{s} \; \; \mbox{\Large{$\mid$}}\;\; r.\mathord{s} \; \; \mbox{\Large{$\mid$}}\;\; \sigma.\mathord{s} \; \; \mbox{\Large{$\mid$}}\;\; \delta, $$ where $\epsilon$ is the empty stack and $.$ denotes concatenation (and, thus, $s.\epsilon=s$). Given a formula $A$, a stack $\mathord{s}$ \emph{indicates an occurrence $\alpha$ of an atom} (resp. \emph{an occurrence $\mu$ of a modality}) in $A$ if $\mathord{s}[A]= \alpha$ (resp. $\mathord{s}[A]= \mu$), where $\mathord{s}[A]$ is defined as follows: \begin{varitemize} \item $\epsilon [\alpha] = \alpha$, \item $\sigma.\delta[\mu B] = {\mu}$, \item $\sigma.t[\mu B] = t[B] $ whenever $t\neq\delta$, \item $l.t[B \Box C]= t[B] $ and $r.t[B \Box C]=t[C]$, where $\Box$ is either $\otimes$ or $\parr$. \end{varitemize} We observe that a stack can indicate a modality only if it ends with $\delta$. \begin{example}\label{ex:indic} Given the formula $A =\bang(\bot\otimes{!\mathsf{1}})$, the stack $*.\delta$ indicates the first occurrence of $\bang$, the stack $*.r.*.\delta$ indicates the second occurrence of $!$, and $*.l[A]=\bot$. \end{example} The set of $R$'s \emph{positions} $\POSALL_R$ contains all the triples of the form $(\mathord{e}, \mathord{s}, \mathord{t})$, where: \begin{varenumerate} \item $\mathord{e}$ is an edge of $R$, \item the \emph{formula stack} $\mathord{s}$ is either $\delta$ or a stack which indicates an occurrence of an atom or of a modality in the type $A$ of $\mathord{e}$, \item the \emph{box stack} $\mathord{t}$ is a stack of $n$ exponential signatures, where $n$ is the number of exponential boxes inside which $\mathord{e}$ appears.
\end{varenumerate} We use the metavariables $\ss$ and $\mathbf{p}$ to indicate positions. For each position $\mathbf{p}=(\mathord{e},\mathord{s},\mathord{t})$, we define its \emph{direction} $\mathtt{dir}(\mathbf{p})$ as \emph{upwards} ($\uparrow$) if $\mathord{s}$ indicates an occurrence of $!$ or of a negative atom, as \emph{downwards} ($\downarrow$) if $\mathord{s}$ indicates an occurrence of $?$ or of a positive atom, as \emph{stable} ($\leftrightarrow$) if $\mathord{s}= \delta$ or if the edge $\mathord{e}$ is the conclusion of a $\mathsf{bot}$ node. A position $\mathbf{p}=(\mathord{e}, \mathord{s}, \epsilon)$ is \emph{initial} (resp. \emph{final}) if $e$ is a conclusion of $R$, and $\mathtt{dir}(\mathbf{p})$ is $\uparrow$ (resp. $\downarrow$). For simplicity, on initial (final) positions, we require all exponential signatures in $\mathord{s}$ to be $*$. So for example, if $!(\bot\otimes{!\mathsf{1}})$ is a conclusion of $R$, there is one final position ($s=*.r.*$), and three initial positions (the three stacks given in Example~\ref{ex:indic}). The following subsets of $\POSALL_R$ play a crucial role in the definition of the machine: \begin{varitemize} \item the set $\mathtt{INIT}_R$ of all \emph{initial positions}; \item the set $\mathtt{FIN}_R$ of all \emph{final positions}; \item the set $\mathtt{ONES}_{R}$ of positions $(\mathord{e}, \epsilon, \mathord{t})$ where $\mathord{e}$ is the conclusion of a $\mathsf{one}$ node; \item the set $\mathtt{DER}_{R}$ of positions $(\mathord{e}, *.\delta, \mathord{t})$ where $\mathord{e}$ is the conclusion of a $\mathsf{?d}$ node; \item the starting positions $\mathtt{START}_{R}= \mathtt{INIT}_R \cup \mathtt{ONES}_{R}\cup \mathtt{DER}_{R} $; \item the set $\mathtt{PRIVATE}_R$ of the positions $\mathbf{p}$ for which $\mathtt{dir}(\mathbf{p})=\leftrightarrow$. \end{varitemize} The multitoken machine $\machine{R}$ for $R$ consists of a set of \emph{states} and a \emph{transition} relation between them.
These are the topics of the following two subsections. \subsubsection{States} A state of $\machine{R}$ is a snapshot description of the tokens circulating in $R$. We also need to keep track of the positions where the tokens started, so that the machine only uses each starting position once. Formally, a \emph{state} $\mathbf{T} = ( Cod(\mathbf{T}), Dom(\mathbf{T}))$ is a set of positions $Cod(\mathbf{T}) \subseteq \POSALL_R$ together with a set of positions $Dom(\mathbf{T}) \subseteq \mathtt{START}_R$. Intuitively, $Cod(\mathbf{T}) $ describes the current position of the tokens, and $Dom(\mathbf{T})$ keeps track of which starting positions have been used\footnote{In Section~\ref{tracing} we show that $Dom(\mathbf{T})$ is actually redundant; we have however decided to give it explicitly, because it makes the definition of the machine simpler.}. A state is \emph{initial} if $ Cod(\mathbf{T})=Dom(\mathbf{T})=\mathtt{INIT}_R$. We indicate the (unique) initial state of $\mathcal{M}_R$ by $\mathbf{I}_R$. A state $\mathbf{T}$ is \emph{final} if all positions in $Cod(\mathbf{T})$ belong to either $\mathtt{FIN}_R$ or $\mathtt{PRIVATE}_R$. The set of all states will be denoted by $\mathcal{S}_{R}$. Given a state $\mathbf{T}$ of $\mathcal{M}_R$, we say that \emph{there is a token in $\mathbf{p}$} if $\mathbf{p}\in Cod(\mathbf{T})$. We use expressions such as ``a token moves'', ``crosses a node'', in the intuitive way. \subsubsection{Transitions} The transition rules of $\machine{R}$ are given by the transitions described in Fig.~\ref{fig:trRules} (where $\Box$ stands for either $\otimes$ or $\parr$). The rules marked by (i)--(iii) make the machine concurrent, but the constraints they need to satisfy are rather technical and for this reason we prefer to postpone the related discussion. 
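The indication function $\mathord{s}[A]$ and the directions of positions can be replayed concretely. The following is a minimal sketch reproducing Example~\ref{ex:indic} (the tuple encoding of formulas, and the restriction to the units $\bot$/$\mathsf{1}$ as the only negative/positive atoms, are our simplifying assumptions):

```python
# A sketch of the indication function s[A]: formulas are nested tuples
# ('!',B), ('?',B), ('⊗',B,C), ('⅋',B,C), with atoms '1' and '⊥';
# stacks are lists over {'l','r','*','δ'}, with the exponential
# signature fixed to '*'.

def indicates(stack, formula):
    if not stack:
        return formula                      # ε[α] = α
    head, tail = stack[0], stack[1:]
    if head in ('l', 'r') and formula[0] in ('⊗', '⅋'):
        return indicates(tail, formula[1] if head == 'l' else formula[2])
    if head == '*' and formula[0] in ('!', '?'):
        if tail == ['δ']:
            return formula[0]               # σ.δ[μB] = μ
        return indicates(tail, formula[1])  # σ.t[μB] = t[B], t ≠ δ
    raise ValueError("stack does not indicate an occurrence")

def direction(stack, formula):
    # ↑ on '!' and negative atoms, ↓ on '?' and positive atoms;
    # only the units are handled here: '⊥' negative, '1' positive.
    return '↑' if indicates(stack, formula) in ('!', '⊥') else '↓'

A = ('!', ('⊗', '⊥', ('!', '1')))           # the formula !(⊥ ⊗ !1)
assert indicates(['*', 'δ'], A) == '!'           # first occurrence of !
assert indicates(['*', 'r', '*', 'δ'], A) == '!' # second occurrence of !
assert indicates(['*', 'l'], A) == '⊥'
# one final position (direction ↓) and three initial ones (↑):
assert direction(['*', 'r', '*'], A) == '↓'
assert [direction(s, A) for s in
        (['*', 'δ'], ['*', 'r', '*', 'δ'], ['*', 'l'])] == ['↑'] * 3
```

The three upward stacks are exactly the three initial positions of a conclusion of type $!(\bot\otimes{!\mathsf{1}})$, and the downward one is its unique final position.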
\paragraph*{Transition Rules, Graphically} The position $\mathbf{p} = (\mathord{e}, \mathord{s}, \mathord{t})$ is represented graphically by marking the edge $e$ with a bullet $\bullet$, and writing the stacks $(\mathord{s}, \mathord{t})$. A transition $\mathbf{T} \rightarrow \mathbf{U}$ is given by depicting only the positions in which $\mathbf{T}$ and $\mathbf{U}$ differ. It is of course intended that all positions of $\mathbf{T}$ which do not explicitly appear in the picture also belong to $\mathbf{U}$. To save space, in Fig.~\ref{fig:trRules} we annotate the transition arrows with a \emph{direction}; we mean that the rule applies (only) to positions which have that direction. We sometimes explicitly indicate the direction of a position by directly annotating it with ${}^{\downarrow},{}^{\uparrow}$ or ${}^{\leftrightarrow}$. Notice that no transition is defined for stable positions. We observe that tokens \emph{change direction} only in one of two cases: either when they move from an edge of type $A$ to an edge of type $A\b$ (\emph{i.e.}, when crossing an $\mathsf{ax}$ or a $\mathsf{cut}$ node), or when they cross a $Y$-node, in the case of the transitions marked by (*): moving down from the edge $A$ and then up to $?A\b$, or vice versa. Whenever a token is on the conclusion of a box, it can move into that box (graphically, the token ``crosses'' the border of the box) and it is modified as if it were crossing a node. For exponential boxes, in Fig.~\ref{fig:trRules} we depict only the border of the box.
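For the multiplicative fragment, the effect of the transitions on the formula stack can be sketched as pure stack operations (a schematic reading of Fig.~\ref{fig:trRules} with our own encoding; the box stack is left out):

```python
# Multiplicative token steps as stack rewriting: going up through a
# ⊗ or ⅋ node consumes the head of the formula stack, which selects
# the premiss; going down prepends the premiss just left; ax and cut
# flip the direction and leave the stack untouched.

def cross_up(stack):
    head, rest = stack[0], stack[1:]
    assert head in ('l', 'r')
    return head, rest               # chosen premiss, remaining stack

def cross_down(premiss, stack):
    return [premiss] + stack

def bounce(direction, stack):       # ax / cut: A flips to its dual
    return ('↓' if direction == '↑' else '↑'), stack

# Crossing a node upwards and back downwards restores the stack:
side, s = cross_up(['l', '*', 'δ'])
assert cross_down(side, s) == ['l', '*', 'δ']
assert bounce('↑', ['r']) == ('↓', ['r'])
```

This inverse relationship between the upward and downward crossings is what makes the backward reading of transitions (used in Section~\ref{tracing}) deterministic on the multiplicative nodes.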
The transitions for the multiplicative nodes $\mathsf{ax}$, $\mathsf{cut}$, $\otimes$, $\parr$ are the standard ones. The rules for \emph{exponential nodes} are mostly standard. There are however two novelties: the introduction of ``dereliction tokens'', \emph{i.e.}, tokens which start their path on the conclusion of a $?d$ node, and the $Y$-box. We discuss both below. \paragraph*{Some Further Comments} Certain peculiarities of our interactive machines need to be further discussed: \begin{varitemize} \item \emph{Y-boxes}. The recursive behaviour of Y-boxes is captured by exponential signatures of the form $y(\cdot,\cdot)$, which intuitively keep track of how many times the token has entered a Y-box so far. Let us examine the transitions via the $Y$ door. Each transition from $!A$ (conclusion of $Y$) or from $?A\b$ (premiss of $Y$) to the edge $A$ (premiss of $Y$) corresponds to a recursive call. The transition from $A$ to $?A\b$ captures the return from a recursive call; when all calls are unfolded, the token exits the box. The auxiliary doors of a $Y$-box have the same behaviour as those of $!$-boxes. \item \emph{Dereliction Tokens}. As we have explained in Section~\ref{sect:multexpo}, this is a key feature of our machine. A dereliction token is generated (according to condition (i) below) on the conclusion of a $\mathsf{?d}$ node, as depicted in Fig.~\ref{fig:trRules}. Intuitively, each dereliction token corresponds to a copy of a box. \item \emph{Box Copies and Stable Tokens.} A token in a stable position is said to be \emph{stable}. Each such token is what remains of a token which started its journey from $\mathtt{DER}$ or $\mathtt{ONES}$, and flowed in the graph ``looking for a box''. This stable token, which was once roaming the net, therefore witnesses the fact that \emph{an instance} of dereliction or of $\mathsf{one}$ ``has found its box''. Stable tokens play an essential role, as they keep track of box copies.
We are going to formalize this immediately below. It is immediate to check that a stable token can only be located inside a box, more precisely on the premiss of its principal door. In Fig.~\ref{SIAM_stable} we indicate explicitly all the exponential transitions which lead to a stable position; the other transition leading to a stable position is the one for the $\bot$-box. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=7cm]{SIAM_stable}} \caption{Exponential transitions to a stable position}\label{SIAM_stable} \end{figure} \end{varitemize} \begin{figure}[htbp] \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center} \includegraphics[width=8cm]{SIAM_transitions} \end{center} \end{minipage}} \end{center} \caption{\textsf{SIAM}{} Transition Rules.} \label{fig:trRules} \end{figure} \paragraph*{Multitoken Rules} The rules (i)--(iii) from Fig.~\ref{fig:trRules} are where the multitoken nature of the \textsf{SIAM}{} really comes into play. Those rules are subject to certain conditions, which are intimately related to box copies. Given a state $\mathbf{T}$ of $\mathcal{M}_R$, we define $\mathtt{Instances}_{\mathbf{T}}(S)$ to be $\{\epsilon\}$ if $R=S$ (we are at depth 0). Otherwise, if $S$ is the structure associated to a box node $b$ of $R$, we define $\mathtt{Instances}_{\mathbf{T}}(S)$ as the set of all $t$ such that $t$ is the box stack of a stable token at the principal door of $b$. Intuitively, as we discussed above, the box stack of each such token \emph{identifies a copy of the box} which contains $S$. Rules marked as (i)--(iii) only apply if certain conditions are satisfied: \begin{varitemize} \item[(i)] The position $(\mathord{e},\epsilon,\mathord{t})$ (resp. $(\mathord{e},*.\delta,\mathord{t})$) does not already belong to $Dom(\mathbf{T})$, and $\mathord{t}\in \mathtt{Instances}_{\mathbf{T}}(S)$, where $S$ is the structure to which $e$ belongs.
If both conditions are satisfied, $Cod(\mathbf{T})$ and $Dom(\mathbf{T})$ are extended with the position $\mathbf{p}$. This is the only transition changing $Dom(\mathbf{T})$. Intuitively, each $\mathord{t}$ corresponds to a copy of the (box containing the) $\mathsf{one}$ (resp. $?d$) node. \item[(ii)] The token moves inside the $\bot$-box\ only if its box stack $\mathord{t}$ belongs to $\mathtt{Instances}_{\mathbf{T}}(S)$, where $S$ is the content of the $\bot$-box. (Notice that if the $\bot$-box\ is inside an exponential box, there could be several stable tokens at its principal door, one for each copy of the box.) \item[(iii)] Tokens cross a sync node $l$ only if, for a certain $\mathord{t}$, there is a token on each position $(e,\mathord{s},\mathord{t})$ where $e$ is a premiss of $l$, and $\mathord{s}$ indicates an occurrence of an atom in the type of $e$. In this case, all tokens cross the link simultaneously. Intuitively, insisting on having the same stack $\mathord{t}$ means that the tokens all belong to the same box copy. The simultaneous transition of the tokens has to be related to the $s.el$-rule, which takes place only when \emph{all} premisses are conclusions of $\mathsf{one}$ nodes. Note that the tokens traverse a sync link only downwards, because all of its edges are positive. \end{varitemize} A \emph{run} of the \textsf{SIAM}{} of $R$ is a \emph{maximal} sequence of transitions $\mathbf{I}_R \rightarrow \cdots \rightarrow \mathbf{T}_n \rightarrow \cdots $ from the initial state $\mathbf{I}_R$.\\ \subsubsection{Basic Properties} In this and the next section, we study some properties of the \textsf{SIAM}. We write $\mathbf{T} \nrightarrow$ if no transition applies from $\mathbf{T}$. A non-final state $\mathbf{T}$ such that $\mathbf{T} \nrightarrow$ is called a \emph{deadlock} state. If $\mathbf{I}_R \rightarrow \mathbf{T}_1 \rightarrow \cdots \rightarrow \mathbf{T}_n \nrightarrow $ is a run of $\mathcal{M}_R$, we say that the run \emph{terminates} (in the state $\mathbf{T}_n$).
A run of $\mathcal{M}_R$ \emph{diverges} if it is infinite, \emph{converges} (resp. \emph{deadlocks}) if it terminates in a final (resp. non final) state. \noindent \begin{prop}[Confluence and Uniqueness of Normal Forms]\label{lem:diamProp}\label{machine_conf} The relation $\rightarrow$ enjoys the following properties: \begin{varitemize} \item it is confluent and normal forms are unique; \item if a run of the machine $\mathcal{M}_R$ terminates, then all runs of $\mathcal{M}_R$ terminate. \end{varitemize} \end{prop} \begin{proof} By checking each pair of transition rules we observe that $\rightarrow$ has the diamond property, because the transitions do not interfere with each other; both items follow. \end{proof} \subsubsection{Tracing Back}\label{tracing} For each position $\mathbf{p}$ in $R$, we observe (by examining the cases in Fig.~\ref{fig:trRules}) that there is at most one position from which $\mathbf{p}$ can come via a transition. Disregarding the conditions imposed on the rules labelled (i)--(iii), the transitions also apply to a single token in isolation. By reading the transitions ``backwards'', we can therefore define a partial function $\mathrm{orig} : \POSALL_R \rightharpoonup \mathtt{START}_R$, where $\mathrm{orig}(\mathbf{p}) := \ss$ if $\mathbf{p}$ traces back to $\ss$.
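Reading the transitions backwards thus amounts to iterating an inverse-transition map. A minimal sketch (ours; the \texttt{prev} map, returning the unique predecessor position or \texttt{None}, is a hypothetical encoding of the rules of Fig.~\ref{fig:trRules} read backwards):

```python
def orig(p, prev, start_positions):
    """Trace the position `p` back to its starting position, if any.

    `prev(p)` is the unique position from which `p` can come via a
    transition, or None; this uniqueness is what makes the backward
    reading well defined.  Like orig in the text, the function is
    partial: it returns None when the trace does not reach a start.
    """
    while p not in start_positions:
        p = prev(p)
        if p is None:
            return None
    return p
```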
But there is more: \begin{lemma}\label{lemma:traceback} For any state $\mathbf{T}$ such that $\mathbf{I}_R \rightarrow^* \mathbf{T}$, the restriction of $\mathrm{orig}$ to $Cod(\mathbf{T})$ is a total, injective function. \end{lemma} Therefore, for every position $\mathbf{p}$ which appears in a run of $\mathcal{M}_R$, $\mathrm{orig}(\mathbf{p})$ is defined. With this in mind, $\mathtt{START}_R$ can be seen as an index set identifying each token. For most of this section (until Theorem~\ref{soundness}) we are only interested in the ``wave'' of tokens, and do not need to distinguish them individually. In Section~\ref{sec:beyond}, however, we heavily rely on $\mathrm{orig}$ to associate values and operations to tokens. \begin{rem} Tracing back from $Cod(\mathbf{T})$ allows us to reconstruct $Dom(\mathbf{T})$ from the set of current positions. We preferred to carry $Dom(\mathbf{T})$ along in the definition of a state because it makes the definition more direct, while the definition of $\mathrm{orig}$ is rather technical. Similarly, to avoid tracing back all the way each time the starting position is needed, one can choose to carry the function $\mathrm{orig}$ along with the state. We made a similar choice in our previous work \cite{lics2014}, where a state was defined as a function $Dom(\mathbf{T}) \to \POSALL_R$. The two definitions are of course equivalent for all states which can be reached from the initial state, thanks to Lemma~\ref{lemma:traceback}. \end{rem} \subsubsection{State Transformation}\label{def:Transformation} Our central tool to relate net rewriting and the \textsf{SIAM}{} is a mapping of states to states. More precisely, if $R \rightsquigarrow S$, we define a \emph{transformation} as a partial function $\mathrm{trsf}_{R \rightsquigarrow S}: \POSALL_{R} \rightharpoonup \POSALL_{S}$, which extends point-wise, in the obvious way, to a transformation on states $\mathrm{trsf}_{R \rightsquigarrow S}: \mathcal{S}_{R} \rightharpoonup \mathcal{S}_{S}$.
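As an illustration of the point-wise extension (ours; a state is approximated here by its set of current positions, and undefinedness of the partial map is rendered by \texttt{None}):

```python
def trsf_state(trsf, positions):
    """Extend a partial map on positions to states, point-wise.

    Positions on which `trsf` is undefined (None) are simply
    dropped: the corresponding tokens are deleted in the target net.
    """
    return {q for q in map(trsf, positions) if q is not None}
```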
We omit the subscript $R \rightsquigarrow S$ of $\mathrm{trsf}_{R \rightsquigarrow S}$ whenever it is obvious. Assume $R \rightsquigarrow_{a} S$ (axiom step), and $\mathbf{p}=(d, s, \epsilon) \in \POSALL_{R}$. If $d\in \{e, f, g\}$ as shown in Fig.~\ref{fig:trsf}(a), then $\mathrm{trsf}_{R \rightsquigarrow S}(\mathbf{p}):=(h, s, \epsilon) \in \POSALL_{S}$. For the other edges, $\mathrm{trsf}_{R \rightsquigarrow S}(\mathbf{p}):= \mathbf{p}$. This definition is depicted in Fig.~\ref{fig:trsf}(b), where the mapping is shown by the dashed arrows. We give some other cases of reduction in Fig.~\ref{fig:trsfOthers}. $\mathrm{trsf}$ acts as the identity on all positions $\mathbf{p}$ relative to those edges which are not modified by the reduction rule, \emph{i.e.}, $\mathrm{trsf}(\mathbf{p}) = \mathbf{p}$. The cross symbols $\pmb{\times}$ serve to indicate that the source position has no corresponding target in $S$ (remember that the mapping is partial). Intuitively, the token on that position is \emph{deleted} by the mapping. It is important to observe that in the case of the steps $bot.el$ and $d$ (the only rules which open a box), a stable token is always deleted. \begin{fact}\label{fact:trsf} If $R\rightsquigarrow R'$ via a $bot.el$ or $d$ step, the action of $\mathrm{trsf}$ always deletes a stable token. \end{fact} \begin{figure}[htbp] \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center} \includegraphics[scale=.6]{trsfAxEdges} \qquad\qquad \includegraphics[scale=.6]{trsfAx}\\ \end{center} \end{minipage}} \end{center} \caption{$\mathrm{trsf}_{R\rightsquigarrow S}$, Formally and as a Drawing.}\label{fig:trsf} \end{figure} The cases of $d$ and $y$ deserve some further discussion: \begin{varitemize} \item In the $d$ rule, the token generated on the $?d$ node is deleted, and disappears in $S$.
For the other tokens: those outside the !-box are modified by removing the signature $*$ (which was acquired while crossing that $?d$ node) from the formula stack. The tokens $(e,\mathord{s},\mathord{t})$ inside the !-box are modified by removing the signature $*$ from the \emph{bottom} of the box stack $\mathord{t}$, which is coherent with the invariant on the size of $\mathord{t}$ (its size equals the exponential depth of the position). Why from the bottom of the stack? Because the box $b$ which disappears is at depth 0 in $R$; therefore, for each position $(e,\mathord{s},\mathord{t})$ inside the box, the signature corresponding to $b$ is at the bottom of $\mathord{t}$. \item In the $y$ rule, things are slightly more complicated. What happens to the tokens lying inside a Y-box depends on the bottom element of their box stack, which is the signature corresponding to the Y-box. If the signature at the bottom of the stack is not of the form $y(\cdot,\cdot)$, the token has entered the Y-box only once (\emph{i.e.}, it belongs to the first recursive call) and hence the token is mapped onto a token in the copy of $S$ outside the Y-box. Otherwise, the token is mapped onto a token in the Y-box; it loses one $y(\cdot,\cdot)$ symbol (\emph{i.e.}, it does one iteration less), but the box stack becomes longer (which is coherent with the increase in depth). We show an example in Fig.~\ref{fig:Ytrsf_example}. The (stable) token with stack $(\delta, *)$ on the premiss of the $Y$ node is mapped onto a token on the premiss of the $\bang$ node, with the same stack. In contrast, the token with stack $(\delta, y(*,y(*,*)))$ is mapped onto a token on a premiss of the $Y$ node on the right-hand side, now with stack $(\delta, y(*,*).*)$ --- it loses a $y$ symbol.
\end{varitemize} \begin{figure*}[htbp] \begin{center} \fbox{ \begin{minipage}{.97\textwidth} \begin{center} \includegraphics[width=15cm]{transform_refined} \end{center} \end{minipage}} \end{center} \caption{The Function $\mathrm{trsf}_{R\rightsquigarrow S}$.}\label{fig:trsfOthers} \end{figure*} \begin{figure}[htbp] \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center} \includegraphics[width=8cm]{Ytrsf_example} \end{center} \end{minipage}} \end{center} \caption{$\mathrm{trsf}_{R\rightsquigarrow S}$ on $y$-reduction.}\label{fig:Ytrsf_example} \end{figure} Each statement below can be proved by case analysis. The proof is given in Appendix \ref{app:SIAM}. \begin{lemma}[Properties of $\mathrm{trsf}$]\label{trsf_prop} Assume $R \rightsquigarrow S$. \begin{varenumerate} \item If $\mathbf{T} \rightarrow \mathbf{U}$ in $\machine{R}$ then $\mathrm{trsf}(\mathbf{T}) \rightarrow^* \mathrm{trsf}(\mathbf{U})$ in $\machine{S}$. \item If $\mathbf{I}_R \to \cdots \to \mathbf{T}_n \to \cdots$ is a run of $\mathcal{M}_R$, then $\mathrm{trsf}(\mathbf{I}_R) \to^* \cdots \to^* \mathrm{trsf}(\mathbf{T}_n) \to^* \cdots$ is a run of the machine $\mathcal{M}_{S}$. \item\label{trsf_main} $\mathbf{I}_R \to \cdots \to \mathbf{T}_n \to \cdots$ diverges/converges/deadlocks iff $\mathrm{trsf}(\mathbf{I}_R) \to^* \cdots \to^* \mathrm{trsf}(\mathbf{T}_n) \to^* \cdots$ does. \end{varenumerate} \end{lemma} \newcommand{\der}[1]{\mathtt{weight}(#1)} We end this section by looking at the number of circulating tokens. We observe that the number of tokens, and of stable tokens in particular, in any state $\mathbf{T}$ reached in a run of $\mathcal{M}_R$ is finite. We denote by $\der{\mathbf{T}}$ the number of stable tokens in $\mathbf{T}$ (\emph{i.e.}, the cardinality of $Cod(\mathbf{T}) \cap \mathtt{PRIVATE}_R$). The following is immediate by analyzing Fig.~\ref{fig:trsfOthers} and checking which tokens are deleted. \begin{lemma}\label{lem:derTokens}\label{derTokens} Assume $R \rightsquigarrow S$.
We have that $\der{\mathbf{T}} \geq \der{\mathrm{trsf}(\mathbf{T})}$. Moreover, if $R \rightsquigarrow S$ via the $d$-rule or the $bot.el$-rule, then $\der{\mathbf{T}} > \der{\mathrm{trsf}(\mathbf{T})}$. \end{lemma} \subsection{The Interplay of Nets and Machines}\label{interplay} We already know that if a simple net $R$ reduces to a normal form $S$, then $S$ is an \ensuremath{\textsf{MLL}}{} net (Corollary~\ref{cutel}), actually a very simple one. It is immediate that in this case, every run of the machine $\mathcal{M}_S$ terminates in a final state: each token in the initial state flows to a final position (the net has neither sync nodes nor boxes to stop it). Given an arbitrary net $R$, we of course do not know whether it reduces to a normal form, but we are still able to use the facts above to prove that $\mathcal{M}_R$ is deadlock-free. \begin{lemma}[Mutual Termination]\label{net_termination} Let $R$ be a simple net, as in Theorem~\ref{main_lem}. We have: \begin{varenumerate} \item if a run of $\mathcal{M}_R$ terminates, then each sequence of reductions starting from $R$ terminates; \item if a sequence of reductions starting from $R$ terminates, then each run of $\mathcal{M}_R$ terminates in a \emph{final} state. \end{varenumerate} \end{lemma} \newcommand{\ww}[1]{\mathtt{weight}(#1)} \begin{proof} Let us first consider Point 1. By hypothesis, there is a run of $\mathcal{M}_R$ which terminates in a state $\mathbf{T}$. We define $\ww{R}:=\der{\mathbf{T}}$. By Lemma~\ref{trsf_prop}, if $R \rightsquigarrow S$, $\mathrm{trsf}$ maps the run of $\mathcal{M}_R$ into a run of $\mathcal{M}_{S}$ which terminates in the state $\mathrm{trsf}(\mathbf{T})$. By Lemma \ref{derTokens}, $\der{\mathrm{trsf}(\mathbf{T})} \leq \der{\mathbf{T}}$, hence $\ww{S} \leq \ww{R}$.
Using Lemma \ref{derTokens}, we prove that it is not possible to have an infinite sequence of $\rightsquigarrow$ reductions starting from $R$, because: (i) each rewriting step which opens a box ($d$ or $bot.el$) strictly decreases $\ww{R}$; (ii) only a finite number of rewriting steps can be performed without opening a box. Let us then consider Point 2. By hypothesis, $R$ reduces to a cut-free net $S$, which has the form described in Corollary~\ref{cutel}. On such a net, all runs of $\mathcal{M}_S$ terminate in a final state. If $\mathcal{M}_R$ had a run which is infinite (resp. deadlocks), by Lemma~\ref{trsf_prop} $\mathrm{trsf}$ would map it into a run of $\mathcal{M}_S$ which is infinite (resp. deadlocks), which is impossible. \end{proof} Lemma~\ref{net_termination} entails deadlock-freeness of the \textsf{SIAM}{} as an immediate consequence: \begin{thm}[Deadlock-Freeness of the \textsf{SIAM}]\label{SIAM deadlock free} Let $R$ be a \textsf{SMEYLL}{} net such that no $?$ appears in its conclusions. If a run of $\mathcal{M}_R$ terminates in the state $\mathbf{T}$, then $\mathbf{T}$ is a final state. \end{thm} \begin{proof} If $R$ has no $\bot$ and no $!$ in its conclusions, deadlock-freeness is an immediate consequence of Lemma~\ref{net_termination}. However, the result holds also without this constraint, because we can always ``close'' the net $R$ into a net $\overline{R}$ in a way that cannot create any new deadlocks. $\overline{R}$ is the net obtained from $R$ by cutting each conclusion $A$ of $R$ against the conclusion $A\b$ of the net $S_{A\b}$, which is defined as follows. $S_{A\b}$ has the direct encoding of the formula tree of $A\b$ above the conclusion $A\b$ (each modality $?$ is introduced by a $?d$ node); the atomic leaves are conclusions of an axiom in the case of $X,X\b, \bot$, or of a $\mathsf{one}$ node in the case of $\mathsf{1}$. Therefore, $S_{A\b}$ has only conclusions $X\b,X,1$, \emph{i.e.}\ the other side of the axioms.
To conclude, we observe that the \textsf{SIAM}{} deadlocks in $\overline{R}$ iff it deadlocks in $R$. \end{proof} We stress that in the statement above there is \emph{no assumption that the conclusions are simple formulas} (unlike in Lemma~\ref{net_termination}, or Theorem~\ref{main_lem}). The constraint that the conclusions must not contain the $?$ modality is instead a genuine limitation, intrinsic to most presentations of GoI (see, \emph{e.g.},~\cite{Girard89}). \subsection{Computational Semantics}\label{semantics} For the rest of this section, we assume all nets to be simple nets. This restriction is harmless, because the nets to which we give computational meaning in Section~\ref{sec:beyond} are nets where all conclusions have type $1$. The machine $\machine{R}$ implicitly gives a semantics to $R$. By Proposition~\ref{lem:diamProp}, all runs of $\mathcal{M}_R$ have the same behaviour. We can therefore say that $\mathcal{M}_R$ either \emph{converges} (to a unique final state) or \emph{diverges}. We write $\mathcal{M}_R\Downarrow$ if all runs of the machine converge. We write $R\Downarrow$ if all sequences of reductions starting from $R$ terminate in the (unique) normal form. In the previous section we established (Lemma~\ref{net_termination}) that: \begin{cor}[Adequacy]\label{thm:termination} $\mathcal{M}_R \Downarrow$ if and only if $R \Downarrow$. \end{cor} We also already know that: \begin{cor}[Invariance]\label{basic_soundness} Assume $R \rightsquigarrow S$. $\mathcal{M}_R\Downarrow$ if and only if $\mathcal{M}_{S}\Downarrow$. \end{cor} We now introduce an equivalence on machines which is finer than the one induced by convergence. We associate a partial function $\sem{R}$ to each net $R$ through the machine $\machine{R}$, and show that $\sem{R}$ is a sound interpretation. This way we obtain a finer computational model for $\textsf{SMEYLL}$, on which we will build in the next sections.
The \emph{interpretation} $\sem{R}$ of a net $R$ is defined as follows: \begin{varitemize} \item if $\machine{R}$ diverges, $\sem{R} := \Omega$; \item if $\machine{R}$ converges, $\sem{R}$ is the partial function $\sem{R}: \mathtt{INIT}_{R} \rightharpoonup \mathtt{FIN}_{R}$ where $\sem{R}(\ss) := \mathbf{p}$ if $\mathbf{p}$ is a final position in the final state $\mathbf{T}$ of the machine (\emph{i.e.}, $\mathbf{p} \in Cod(\mathbf{T}) \cap \mathtt{FIN}_{R}$) and $\mathrm{orig}(\mathbf{p}) = \ss$. \end{varitemize} \begin{theorem}[Soundness]\label{soundness} If $R \rightsquigarrow S$, then $\sem{R}= \sem{S}$. \end{theorem} The proof is given in Appendix~\ref{app:SIAM}. \section{Beyond Nets: Interpreting Programs}\label{sec:beyond} \textsf{SMEYLL}{} nets as defined and studied in Section~\ref{sec:SMELLY} are purely ``logical''. In this section we introduce \emph{program nets}, a (slight) variation on \textsf{SMEYLL}{} nets in which external data can be manipulated. This allows us to interpret \ensuremath{\mathsf{PCF}}-like languages. The machine running on these nets is a very simple extension of the \textsf{SIAM}, from which it inherits all properties. The intuition behind program nets is as follows. Assume a language with a single base type. The base type is mapped to the formula $\mathsf{1}$; values of the base type are stored in a \emph{memory}. Elementary operations on the base type are modeled using sync nodes, recursion by Y-boxes, and conditional tests by a generalization of the $\bot$-box. Arrow and product types (and all the usual $\lambda$-calculus constructions) are encoded by means of one of the well-known mappings of intuitionistic logic into linear logic~\cite{MaraistOTW95,phdmackie,Girard87}, depending on the chosen evaluation strategy. Before introducing program nets and the interactive machines for them, let us fix a language which will also be our main application.
\subsection{\ensuremath{\mathsf{PCF}}} The language we consider in this section is nothing more than Plotkin's \ensuremath{\mathsf{PCF}}, whose \emph{terms} ($M,N,P$) and \emph{types} ($A,B$) are defined as follows: {\footnotesize \[ \begin{array}{lll} M &{:}{:}{=} & x \; \; \mbox{\Large{$\mid$}}\;\; \lambda x.M \; \; \mbox{\Large{$\mid$}}\;\; MM \; \; \mbox{\Large{$\mid$}}\;\; \pi_l(M) \; \; \mbox{\Large{$\mid$}}\;\; \pi_r(M) \; \; \mbox{\Large{$\mid$}}\;\;\\ && \PCFpair{M,M} \; \; \mbox{\Large{$\mid$}}\;\; \PCFn{n} \; \; \mbox{\Large{$\mid$}}\;\; \mathtt{s}(M) \; \; \mbox{\Large{$\mid$}}\;\; \mathtt{p}(M) \; \; \mbox{\Large{$\mid$}}\;\;\\ && \PCFifzero{P}{M}{M} \; \; \mbox{\Large{$\mid$}}\;\; \PCFletrec{f}{x}{M}{M}, \\ A &{:}{:}{=} & \mathbb{N} \; \; \mbox{\Large{$\mid$}}\;\; A \to A \; \; \mbox{\Large{$\mid$}}\;\; A \times A, \end{array} \] }\\ Here, $n$ ranges over the natural numbers. Most term constructs are self-explanatory: we only spend a few words on the $\mathtt{letrec}$ construction. In standard \ensuremath{\mathsf{PCF}}, the fixpoint is represented by a Y-combinator: while this is fine under call-by-name evaluation, it does not behave well under call-by-value reduction. As $\mathtt{letrec}$ makes sense in both settings, we use it instead. Moreover, we only want to allow recursive definitions of \emph{functions}. To syntactically enforce this, we let $\mathtt{letrec}$ bind two variables: one for the function to be defined, and one for its argument.
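The role of the double binding can be illustrated with an ordinary higher-order function. The sketch below is ours and not part of the formal development: it builds a recursive function from its one-step body, and the recursive unfolding stays frozen under a function abstraction, exactly the $\eta$-expansion that makes the fixpoint safe under call-by-value.

```python
def letrec(step):
    """letrec f x = M in ...: build f from its body step(f, x).

    The recursive occurrence of f is only unfolded when f is
    applied, mirroring the eta-expanded unfolding of letrec.
    """
    def f(x):
        return step(f, x)
    return f

# letrec fact x = ifzero x then 1 else x * fact(p(x)) in fact
fact = letrec(lambda fact, x: 1 if x == 0 else x * fact(x - 1))
```

Here \texttt{fact(5)} evaluates to \texttt{120}; with a bare call-by-value Y-combinator, the unguarded self-application would instead unfold forever before the function is ever applied.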
A typing context $\Delta$ is a (finite) set of typed variables $\{x_1:A_1,\dots,x_n:A_n\}$, and a typing judgement is written as \[ \Delta\vdash M:A \] A typing judgement is \emph{valid} if it can be derived from the usual set of typing rules, presented in Table~\ref{tab:typrules}. On PCF terms, we define a call-by-name and a call-by-value evaluation, in a standard way. \begin{table*} \[ \infer{\Delta,x:A\vdash x:A}{} \qquad \infer{\Delta\vdash \lambda x.M:A\to B}{\Delta,x:A\vdash M:B} \qquad \infer{\Delta\vdash MN:B}{ \Delta\vdash M:A\to B & \Delta\vdash N:A } \] \[ \infer{\Delta\vdash \pi_l{(M)}:A}{ \Delta\vdash M:A\times B } \qquad \infer{\Delta\vdash \pi_r{(M)}:B}{ \Delta\vdash M:A\times B } \qquad \infer{\Delta\vdash \PCFpair{M,N}:A\times B}{ \Delta\vdash M:A & \Delta\vdash N:B } \] \[ \infer{\Delta\vdash\PCFn{n}:\mathbb{N}}{} \quad \infer{\Delta\vdash\mathtt{s}{(M)}:\mathbb{N}}{ \Delta\vdash M:\mathbb{N} } \quad \infer{\Delta\vdash\mathtt{p}{(M)}:\mathbb{N}}{ \Delta\vdash M:\mathbb{N} } \quad \infer{\Delta\vdash \PCFifzero{P}{M}{N}:A}{ \Delta\vdash P:\mathbb{N} & \Delta\vdash M:A & \Delta\vdash N:A } \] \[ \qquad \infer{\Delta\vdash \PCFletrec{f}{x}{M}{N}:C}{ \Delta,f:A\to B,x:A\vdash M:B & \Delta,f:A\to B\vdash N:C } \] \caption{Typing rules for PCF} \label{tab:typrules} \end{table*} \subsubsection{Call-by-name reduction} A {\em value} in the call-by-name setting is defined from the following grammar: {\footnotesize \[ \begin{array}{lll} U & {:}{:}{=} & x \; \; \mbox{\Large{$\mid$}}\;\; \lambda x.M \; \; \mbox{\Large{$\mid$}}\;\; \PCFpair{M,N} \; \; \mbox{\Large{$\mid$}}\;\; \PCFn{n}. 
\end{array} \]} \\ A {\em call-by-name reduction context $C[-]$} is defined by the following grammar: {\footnotesize \[ \begin{array}{lll} C[-] &{:}{:}{=} & [-]\; \; \mbox{\Large{$\mid$}}\;\; C[-]N\; \; \mbox{\Large{$\mid$}}\;\; \pi_l{C[-]}\; \; \mbox{\Large{$\mid$}}\;\; \pi_r{C[-]}\; \; \mbox{\Large{$\mid$}}\;\; \\ &&\mathtt{s}(C[-])\; \; \mbox{\Large{$\mid$}}\;\; \mathtt{p}(C[-])\; \; \mbox{\Large{$\mid$}}\;\; \PCFifzero{C[-]}{M}{N}. \end{array} \]} \\ The call-by-name reduction relation $M \to_{\it cbn} N$ (``$M$ rewrites to $N$'') is defined by the rules presented in Table~\ref{tab:cbnrw}. \begin{table*} \begin{center} \begin{minipage}{.6\textwidth} (1) {\em Axiom rules.} \[ \infer{(\lambda x.M)N \to_{\it cbn} M\{x:=N\}}{} \qquad \infer{\pi_l{\PCFpair{M,N}} \to_{\it cbn} M}{} \qquad \infer{\pi_r{\PCFpair{M,N}} \to_{\it cbn} N}{} \] \[ \infer{\mathtt{s}(\PCFn{n}) \to_{\it cbn} \PCFn{n+1}}{} \qquad \infer{\mathtt{p}(\PCFn{n+1}) \to_{\it cbn} \PCFn{n}}{} \qquad \infer{\mathtt{p}(\PCFn{0}) \to_{\it cbn} \PCFn{0}}{} \] \[ \infer{\PCFifzero{\PCFn{0}}{M}{N}\to_{\it cbn} M}{} \qquad \infer{\PCFifzero{\PCFn{n+1}}{M}{N}\to_{\it cbn} N}{} \] \[ \infer{\PCFletrec{f}{x}{M}{N} \to_{\it cbn} N\{f := \lambda x.\PCFletrec{f}{x}{M}{f\,x}\}}{} \] (2) {\em Congruence rules.} Provided that $C[-]$ is a call-by-name context: \[ \infer{C[M]\to_{\it cbn} C[N]}{M\to_{\it cbn} N} \] \end{minipage} \end{center} \caption{Call-by-name reduction strategy for PCF.} \label{tab:cbnrw} \end{table*} \subsubsection{Call-by-value reduction} A value in the call-by-value setting is defined by the following grammar: {\footnotesize\[ \begin{array}{lll} U & {:}{:}{=} & x \; \; \mbox{\Large{$\mid$}}\;\; \lambda x.M \; \; \mbox{\Large{$\mid$}}\;\; \PCFpair{U,U} \; \; \mbox{\Large{$\mid$}}\;\; \PCFn{n}.
\end{array} \] }\\ A {\em call-by-value reduction context $C[-]$} is defined by the following grammar: {\footnotesize \[ \begin{array}{lll} C[-] &{:}{:}{=} & [-]\; \; \mbox{\Large{$\mid$}}\;\; C[-]N\; \; \mbox{\Large{$\mid$}}\;\; VC[-]\; \; \mbox{\Large{$\mid$}}\;\; \PCFpair{C[-],N}\; \; \mbox{\Large{$\mid$}}\;\; \PCFpair{V,C[-]}\; \; \mbox{\Large{$\mid$}}\;\; \\ && \pi_l{C[-]} \; \; \mbox{\Large{$\mid$}}\;\; \pi_r{C[-]}\; \; \mbox{\Large{$\mid$}}\;\; \mathtt{s}(C[-])\; \; \mbox{\Large{$\mid$}}\;\; \mathtt{p}(C[-])\; \; \mbox{\Large{$\mid$}}\;\;\\ &&\PCFifzero{C[-]}{M}{N}.\phantom{\; \; \mbox{\Large{$\mid$}}\;\;} \end{array} \]} \noindent The call-by-value reduction relation $M \to_{\it cbv} N$ (``$M$ rewrites to $N$'') is defined by the rules of Table~\ref{tab:cbvrw}. \begin{table*} \begin{center} \begin{minipage}{.6\textwidth} (1) {\em Axiom rules.} \[ \infer{(\lambda x.M)U \to_{\it cbv} M\{x:=U\}}{} \qquad \infer{\pi_l{\PCFpair{U,V}} \to_{\it cbv} U}{} \qquad \infer{\pi_r{\PCFpair{U,V}} \to_{\it cbv} V}{} \] \[ \infer{\mathtt{s}(\PCFn{n}) \to_{\it cbv} \PCFn{n+1}}{} \qquad \infer{\mathtt{p}(\PCFn{n+1}) \to_{\it cbv} \PCFn{n}}{} \qquad \infer{\mathtt{p}(\PCFn{0}) \to_{\it cbv} \PCFn{0}}{} \] \[ \infer{\PCFifzero{\PCFn{0}}{M}{N}\to_{\it cbv} M}{} \qquad \infer{\PCFifzero{\PCFn{n+1}}{M}{N}\to_{\it cbv} N}{} \] \[ \infer{\PCFletrec{f}{x}{M}{N} \to_{\it cbv} N\{f := \lambda x.\PCFletrec{f}{x}{M}{f\,x}\}}{} \] (2) {\em Congruence rules.} Provided that $C[-]$ is a call-by-value context: \[ \infer{C[M]\to_{\it cbv} C[N]}{M\to_{\it cbv} N} \] \end{minipage} \end{center} \caption{Call-by-value reduction strategy for PCF.} \label{tab:cbvrw} \end{table*} \subsection{Program Nets and Register Machines} \newcommand{\surfone}[1]{{\tt SurfOne}(#1)} \newcommand{\syncnode}[1]{{\tt SyncNode}(#1)} \newcommand{\oneindex}[1]{{\tt ind}(#1)} \newcommand{\mapsyncname}[1]{{\tt mkname}(#1)} \newcommand{\memories}{\mathrm{Mem}}
\newcommand{\mem}[1]{\mathbf{m}_{#1}} \newcommand{\smem}{\mem{\mathbf{T}}} \renewcommand{\S}{\mathbf{S}} \newcommand{\nm}{\mathit{l}} In the rest of this paper, we assume that all atomic formulas are units (\emph{i.e.}, $\mathsf{1}$ and $\bot$). The language of formulas is therefore $A ::= \mathsf{1} \; \; \mbox{\Large{$\mid$}}\;\; \bot \; \; \mbox{\Large{$\mid$}}\;\; A\otimes A \; \; \mbox{\Large{$\mid$}}\;\; A\parr A \; \; \mbox{\Large{$\mid$}}\;\; !A \; \; \mbox{\Large{$\mid$}}\;\; ?A.$ First of all, we need the definition of a \emph{memory}: \begin{deff}\label{def:mem} Let $\mathrm{I}$ be a (possibly) infinite set whose elements are called \emph{addresses}. Let ${\tt SyncNames}$ be a finite set of names, to each of which we associate a positive number called its {\em arity}. Given a set of \emph{values} $\mathbb{X}$, we define $\memories$ as the set $\mathrm{I}\to\mathbb{X}$ of all functions from $\mathrm{I}$ to $\mathbb{X}$, equipped with the following operations: $$ \begin{array}{r@{{~~}:{~~}}l} \mathrm{test} & \mathrm{I}\times\memories \to \mathrm{Bool}\times\memories; \\ \mathrm{update} & {\tt SyncNames}\times\displaystyle(\mathrm{I}^*)\times\memories \rightharpoonup \memories; \\ \mathrm{init} & \mathrm{I}\times\memories \to \memories. \end{array} $$ where the partial function $\mathrm{update}$ is defined on a triple $(\nm,x,\mem{})$ iff the length of $x$ equals the arity of $\nm$. A \emph{memory}\footnote{An even more fitting name would be \emph{memory states}, but we do not want to overload the term ``state'' too much.} is any element $\mem{}$ of $\memories$, and we say that $\mem{}$ \emph{has values in} $\mathbb{X}$. \end{deff} Intuitively, $\mem{}$ represents a set of \emph{registers} which are referenced by the elements of $\mathrm{I}$ (the addresses).
The operation $\mathrm{test}$ is used to query the value of a register, $\mathrm{update}$ to update its value, and $\mathrm{init}$ to reset a register of the memory to a default value. Some comments on the operations on $\memories$ are useful. The reason why $\memories$ appears in the codomain of the operation $\mathrm{test}$ is that we aim at a general model where $\mathrm{test}$ might have a non-local effect on the memory, as in a quantum setting (see, \emph{e.g.},~\cite{esop}); such an implementation is beyond the scope of this paper. Notice also that the type of $\mathrm{update}$ is really a dependent type. \subsubsection{Program Nets} Program nets are obtained as a light and natural extension of \textsf{SMEYLL}{} nets, as follows: \begin{varitemize} \item $\bot$-boxes\ are replaced by multi-$\bot$-boxes, which are meant to handle \emph{tests}. A \emph{multi-$\bot$-box} is a $\bot$ node to which we associate \emph{two} structures with the same conclusions $\Gamma$, as shown in Fig.~\ref{multibox} and~\ref{SIAMmultibox} (these figures are fully explained later on). An {\em extended {\textsf{SMEYLL}} net} is a {\textsf{SMEYLL}} net where multi-$\bot$-box{}es\footnote{In some example pictures, it is still convenient to use simple $\bot$-boxes; they can be seen as a shorthand for multi-$\bot$-boxes\ with the same net in both places.} are used in place of $\bot$-box{}es. \item Given an (extended) net $R$, let $\surfone{R}$ be the set of all $\mathsf{one}$ nodes at the surface, and $\syncnode{R}$ be the set of {\em all} sync nodes of the extended net $R$, whether at the surface or not.
A {\em decoration} of $R$ with names ${\tt SyncNames}$ consists of the following two pieces of data: \begin{varenumerate} \item an injective partial map $\oneindex{R}:\surfone{R}\rightharpoonup \mathrm{I}$ (\emph{i.e.}, $\mathsf{one}$ nodes are not necessarily decorated); \item a {\em total} map $\mapsyncname{R}:\syncnode{R}\to {\tt SyncNames}$, which simply names the sync nodes appearing in the extended net $R$. We assume that, given a name of arity $k$, all the sync nodes decorated with that name have arity $k$, where the arity of a sync node is the total number of $1$'s in its premisses. \end{varenumerate} \end{varitemize} \begin{deff} Given a set $\memories$ as in Definition~\ref{def:mem}, a {\em program net} is a pair $\mathbf{R}= (R,\mem{R})$, where $R$ is a decorated, extended net and $\mem{R}\in \memories$ is a memory. \end{deff} Rewriting on {\textsf{SMEYLL}} nets easily extends to program nets, as shown in Fig.~\ref{multibox} (where we adopt the convention that the memory associated to the net is $\mem{1}$ before reduction, and $\mem{2}$ after reduction). The rules are as follows. Rule ${\it decor}$ is a new rewriting rule which associates to a surface $\mathsf{one}$ node an address $r\in \mathrm{I}$; in doing so, we are \emph{linking} the $\mathsf{one}$ node to the memory. Rule ${\it bot.el}$ is modified to reflect the use of multi-$\bot$-boxes. As shown in Fig.~\ref{multibox}, the reduction depends on the memory, and is determined by the result of the operation $\mathrm{test}$. For the other reduction rules, the underlying net is rewritten exactly as for \textsf{SMEYLL}\ nets. Concerning the memory, only the rule ${\it s.el}$ modifies it, as follows: $\mem{2}=\mathrm{update}(\nm,(r_1,r_2,\dots, r_k), \mem{1})$, where $k$ is the arity of $\nm$. In all the remaining cases $\mem{1}=\mem{2}$ (i.e.
the memory is not changed). \begin{figure}[htbp] \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center} \includegraphics[width=8.2cm]{multibox} \end{center} \end{minipage}} \end{center} \caption{Program Net Rewriting.}\label{multibox} \end{figure} What we have introduced so far is a general schema for program nets; in order to capture specific properties, we need to define $\memories$ and the operations on it. In the following section, we specialize the construction to \ensuremath{\mathsf{PCF}}. \subsubsection{\ensuremath{\mathsf{PCF}}\ nets}\label{pcf nets} \newcommand{\ttrue}{\mathtt{tt}} \newcommand{\tfalse}{\mathtt{ff}} To encode \ensuremath{\mathsf{PCF}}\ programs, we use a class of program nets. Once $\memories$ and the operations on it are appropriately defined, we gain more expressive power than in \textsf{SMEYLL}{}, while good computational properties are still guaranteed by the underlying nets. A {\em \ensuremath{\mathsf{PCF}}\ net} is a program net where $\memories$ has values in $\mathbb{N}$, that is, $\memories ~{:}{=}~ \mathrm{I}\to\mathbb{N}$. The set of sync names is $\{\mathtt{max}, \mathtt{p}, \mathtt{s}\}$: $\mathtt{max}$ is binary while $\mathtt{p}$ and $\mathtt{s}$ are unary. The operation $\mathrm{update}$ is defined as follows. { The sync node of label $\mathtt{p}$ acts as the predecessor, that is, $\mathrm{update}(\mathtt{p},r,\mem{1}) = \mem{2}$ where $\mem{2}(r)=\mem{1}(r)-1$ and $\mem{2}(k)=\mem{1}(k)$ if $k\neq r$. The node of label $\mathtt{s}$ acts as the successor, that is, $\mathrm{update}(\mathtt{s},r,\mem{1}) = \mem{2}$ where $\mem{2}(r)=\mem{1}(r)+1$ and $\mem{2}(k)=\mem{1}(k)$ if $k\neq r$.
Finally, the sync node of label $\mathtt{max}$ acts as follows: $\mathrm{update}(\mathtt{max},r,q,\mem{1}) = \mem{2}$ where $\mem{2}(r)=\mem{2}(q)=\max(\mem{1}(r),\mem{1}(q))$ and $\mem{2}(k)=\mem{1}(k)$ if $k\neq r$ and $k\neq q$.} For the other operations, $\mathrm{test}(r,\mem{})$ is defined to be $(\ttrue, \mem{})$ if $\mem{}(r) = 0$, and $(\tfalse, \mem{})$ otherwise; $\mathrm{init}(r,\mem{})$ is defined to be the memory $\mathbf{n}$ where $\mathbf{n}(r) = 0$ and $\mathbf{n}(k) = \mem{}(k)$ for $k \neq r$. Any typing derivation is encoded as a \ensuremath{\mathsf{PCF}}\ net. Two possible encodings will be considered: one for call-by-value and one for call-by-name, corresponding to two translations of intuitionistic logic into linear logic~\cite{MaraistOTW95,phdmackie}. \subsubsection{Register Machines} \begin{figure} \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center} \includegraphics[width=7.2cm]{SIAMmultibox} \end{center} \end{minipage}} \end{center} \caption{Multi-$\bot$-box\ Transition for a Register Machine.}\label{SIAMmultibox} \end{figure} The \textsf{SIAM}{}, as defined in Section~\ref{SIAM}, is readily adapted to interpret \ensuremath{\mathsf{PCF}}{} nets. Let us first sketch a general construction for the machine associated to a program net. The dynamics of the machine is mostly inherited from the \textsf{SIAM}; the novelty is that the notion of state now includes a \emph{memory}. Let us fix a set of memories $\memories{}$. To a program net $\mathbf{R}= (R, \mem{R})$ (with $\mem{R}\in \memories$) we associate the machine $\mathcal{M}_{\mathbf{R}}$, whose memories, states and transitions are defined as follows. The definition of position and of set of positions is the same as in Section~\ref{SIAM}. \paragraph*{Memories} $\memories$ and the operations on it are the same as for the program net $\mathbf{R}$. However, to illustrate the machine we need to make the set of addresses $\mathrm{I}$ precise.
We take $\mathrm{I}$ to be the set of positions $\mathtt{INIT}_R \cup \mathtt{ONES}_R$. We say that the access to the memory is defined for all positions $\mathbf{p}$ for which $\mathrm{orig} (\mathbf{p}) \in \mathtt{INIT}_R \cup \mathtt{ONES}_R$. \paragraph*{States} A state of $\mathcal{M}_{\mathbf{R}}$ is a pair $(\mathbf{T}, \smem)$, where $\mathbf{T}$ is a state in the sense of Section~\ref{SIAM}, and $\smem\in \memories{}$ is a memory. An \emph{initial} state of $\mathcal{M}_{\mathbf{R}}$ is a pair $(\mathbf{I}, \mem{\mathbf{I}})$, where $\mem{\mathbf{I}}$ coincides with $\mem{R}$ for the positions corresponding to decorated $\mathsf{one}$ nodes, is arbitrary on $\mathtt{INIT}_R$, and is $0$ everywhere else. \paragraph*{Transitions} The transitions are the same as in Section~\ref{SIAM}, except in the following cases, which are defined only if the access to the memory is defined. \begin{varitemize} \item Sync nodes. When the tokens $\mathbf{p}_1, \dots, \mathbf{p}_k$ cross a sync node with label $\nm$ and arity $k$, the operation $\mathrm{update}(\nm,\mathbf{p}_1, \dots, \mathbf{p}_k, \mem{})$ updates the memory $\mem{}$ accordingly. \item Multi-$\bot$-box. Let the box be as in Fig.~\ref{SIAMmultibox}, where $S_0$ and $S_1$ are the two nets associated to it, and the edges $e_0,e_1$ are as indicated. When a token is in position $\mathbf{p}=(e,\epsilon,\mathord{t})$ on the principal conclusion of the box, it moves to $ (e_0,\epsilon,\mathord{t})$ if $\mathrm{test}(\mathrm{orig}(\mathbf{p}),\smem)$ returns the boolean $\mathtt{ff}$ (arrow (i) in Fig.~\ref{SIAMmultibox}) and it moves to $(e_1,\epsilon,\mathord{t})$ if $\mathrm{test}(\mathrm{orig}(\mathbf{p}),\smem)$ returns $\mathtt{tt}$ (arrow (ii) in Fig.~\ref{SIAMmultibox}). If a token $(f, \mathord{s}, \mathord{t})$ is on an auxiliary conclusion $f$, it moves to the corresponding conclusion in $S_0$ (resp. $S_1$) if $\mathord{t}\in \mathtt{Instances}_{\mathbf{T}}(S_0)$ (resp. $\mathord{t} \in \mathtt{Instances}_{\mathbf{T}}(S_1)$).
\end{varitemize} \paragraph*{State Transformations} Let $\mathbf{R}=(R,\mem{R})$ be a program net and $\mathbf{R} \rightsquigarrow \S=(S, \mem{S})$. The transformation $\mathrm{trsf}$ described in Fig.~\ref{fig:trsfOthers} associates \emph{positions} of $R$ to \emph{positions} of $S$; this also allows us to specify the transformation of the memory, and hence to map a memory of $\mathcal{M}_\mathbf{R}$ into a memory of $\mathcal{M}_{\S}$. More precisely, each state $(\mathbf{T}, \smem )$ of $\mathcal{M}_\mathbf{R}$ is mapped into a state $(\mathrm{trsf}(\mathbf{T}), \mathrm{trsf}(\smem) )$ of $\mathcal{M}_{\S}$. \subsubsection{\ensuremath{\mathsf{PCF}}\ Machines} A \ensuremath{\mathsf{PCF}}\ machine is a {register} machine where $\memories$ and the operations on it are defined as for \ensuremath{\mathsf{PCF}}\ nets (Section \ref{pcf nets}). As for the \textsf{SIAM}, we have that $\mathrm{trsf}$ maps each run of $\mathcal{M}_\mathbf{R}$ into a run of $\mathcal{M}_{\mathbf{R}'}$ which converges/diverges/deadlocks iff the run on $\mathcal{M}_{\mathbf{R}}$ does. By combining \ensuremath{\mathsf{PCF}}\ nets and the \ensuremath{\mathsf{PCF}}\ machine, it is possible to establish results similar to those in Sections~\ref{interplay} and~\ref{semantics}. \condinc{}{ \begin{lemma} Let $\mathbf{R}$ be a \ensuremath{\mathsf{PCF}}\ net where all conclusions have type $\mathsf{1}$. The machine $\mathcal{M}_\mathbf{R}$ terminates in a final state (say $(\mathbf{T},\smem)$) iff $\mathbf{R}$ reduces to a cut- and sync-free net (say $\mathbf S=(S, \mem{S})$). Moreover \[ \mem{S}=\sem {\smem} \] where $\sem{\smem}$ is the restriction of $\smem$ to the elements pointed to by final positions. \end{lemma} } Assume $\mathbf{R}$ is a \ensuremath{\mathsf{PCF}}\ net of conclusion $\mathsf{1}$. We write $\mathbf{R} \Downarrow n$ if $\mathbf{R}$ reduces to $\mathbf S$, where the value in the memory corresponding to the unique $\mathsf{one}$ node in $\mathbf S$ is $n$.
Similarly we write $\mathcal{M}_{\mathbf{R}} \Downarrow n$, where $n$ is the value pointed to by the unique final position in the final state of $\mathcal{M}_{\mathbf{R}}$. \begin{thm}[Adequacy] $\mathbf{R} \Downarrow n$ if and only if $\mathcal{M}_{\mathbf{R}} \Downarrow n$ \end{thm} \subsection{The Call-by-Value Encoding} In the call-by-value encoding of \ensuremath{\mathsf{PCF}}\ into \ensuremath{\mathsf{PCF}}\ nets, the shape of the net corresponding to $x_1:A_1,\dots,x_n:A_n\vdash M:B$ is \[ \includegraphics[page=2,width=5cm]{PCFcbv.pdf} \] where $M^\dagger$ is a net and where $(\cdot)^\dagger$ is a mapping of types to {\textsf{SMEYLL}} formulas, defined as follows: \begin{align*} \mathbb{N}^\dagger&:=~\mathsf{1};\\ (A\to B)^\dagger&:=~{!(A^\dagger\b\parr B^\dagger)};\\ (A\times B)^\dagger&:=~{A^\dagger}\otimes{B^\dagger}. \end{align*} \noindent In our translation, we have chosen to adopt an \emph{efficient} encoding, rather than the usual call-by-value encoding. In other words, we follow Girard's optimized translation of intuitionistic into linear logic, which relies on properties of positive formulas \cite{Girard87}\footnote{A good summary of the different translations is given at the address \url{http://llwiki.ens-lyon.fr/mediawiki/index.php/Translations_of_intuitionistic_logic}}. We feel that this encoding is closer to call-by-value computation than the non-efficient one; it however raises a small issue. Notice in fact that we map natural numbers into the type $1$, not $!1$. How about duplication and erasure, then? We will handle this in the next section, by using sync nodes, but let us first better clarify what the issue is. Girard's translation relies on the fact that $1$ and $!1$ are logically equivalent (\emph{i.e.}, they are equivalent for provability). However, this in itself is not enough to capture duplication in our setting, because we need to also duplicate the values in the memory, and not only the underlying net. 
We illustrate this in Fig.~\ref{one_equiv}. The portion inside the dashed line corresponds to a proof of $1\vdash {!1} $; when we look at an example of its use (l.h.s. of the figure), we see that by using it we do duplicate the node $\mathsf{one}$, but \emph{not} the value $n$ which is associated to it. The value $n$ is not transmitted from the $1$ to the $!1$ which is going to be duplicated. The logical encoding however still correctly models weakening (r.h.s. of Fig.~\ref{one_equiv}). \begin{figure} \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center} \includegraphics[width=7cm]{girardCBV} \end{center} \end{minipage}} \end{center} \caption{A Proof of $1\vdash {!1}$.}\label{one_equiv} \end{figure} \paragraph*{Exponential Rules and the Units} The formula $\bot$ does not support contraction, weakening and promotion ``out of the box'' in {\textsf{SMEYLL}} but it is nonetheless possible to encode them as \ensuremath{\mathsf{PCF}}\ nets with the help of the binary sync node {\tt max}. \begin{figure} \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center} \includegraphics[page=4,width=4cm]{PCFcbv-positive-nodes.pdf} \end{center} \end{minipage}} \end{center} \caption{Syntactic Sugar: Copying $\bot$.}\label{fig:copy-node} \end{figure} \begin{varitemize} \item {\em Contraction.} We encode contraction on $\bot$ by using a sync node {\tt max} and the syntactic sugar {\tt copy} defined in Fig.~\ref{fig:copy-node}. It duplicates the value associated to the incoming edge, and it does so in a call-by-value manner: it will only copy a $\mathsf{one}$ node (i.e. a result), not a whole computation. In particular, it should be noted that the rules of net rewriting are not modified. \item {\em Promotion.} We aim at the reduction(s) shown in Fig.~\ref{fig:prom-one-idea}: a $\mathsf{one}$ node with memory set to $n$ is sent to a frozen computation (inside a $!$-box) computing the same $\mathsf{one}$ node. 
Since {\textsf{SMEYLL}} features recursion in the form of the $Y$-box, together with the copy operation already defined, it is possible to write a net for the formula $\bot\parr{!\mathsf{1}}$, as shown in Fig.~\ref{fig:prom-one-net}. \item {\em Weakening.} We can directly use the encoding given on the r.h.s. of Fig.~\ref{one_equiv}.\\ \end{varitemize} \begin{figure} \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center} \includegraphics[page=6,width=8.2cm]{PCFcbv-positive-nodes.pdf} \end{center} \end{minipage}} \end{center} \caption{Desired Behavior for the Mapping of $\mathsf{1}$ to ${!\mathsf{1}}$.}\label{fig:prom-one-idea} \end{figure} \begin{figure} \begin{center} \fbox{ \begin{minipage}{.47\textwidth} \begin{center} \includegraphics[page=7,width=6cm]{PCFcbv-positive-nodes.pdf} \end{center} \end{minipage}} \end{center} \caption{\ensuremath{\mathsf{PCF}}\ Net Computing $\bot\! \parr {!1}$.}\label{fig:prom-one-net} \end{figure} \paragraph*{Exponential Rules for the Image $A^\dagger$ of any Type $A$} The goal of this paragraph is to construct nets which behave like the nodes $?c$, $?w$ and $?p$ of linear logic, for any edge of type $A^\dagger\b$. For any type $A$, the formula $A^\dagger$ is a multi-tensor of $\mathsf{1}$'s and $!$-ed types. We therefore construct the grey contraction, weakening and promotion nodes inductively on the structure of the type, as presented in Fig.~\ref{fig:cwp-pos-type}. \begin{figure*} \begin{center} \fbox{ \begin{minipage}{14cm} \begin{center} \includegraphics[width=13cm,page=1]{PCFcbv-positive-nodes2} \end{center} \end{minipage}} \end{center} \caption{Inductive Definition of Contraction, Weakening and Promotion Nodes.}\label{fig:cwp-pos-type} \end{figure*} \paragraph*{Interpreting Typing Judgements} Typing derivations are inductively mapped to \ensuremath{\mathsf{PCF}}\ nets as shown in Fig.~\ref{fig:PCFcbv}.
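To see the translation $(\cdot)^\dagger$ at work on a concrete type, consider a function on natural numbers:
\[
(\mathbb{N}\to\mathbb{N})^\dagger \;=\; {!(\mathbb{N}^\dagger\b\parr \mathbb{N}^\dagger)} \;=\; {!(\mathsf{1}\b\parr\mathsf{1})} \;=\; {!(\bot\parr\mathsf{1})}.
\]
The function itself is a $!$-ed, hence duplicable, value, while on the argument side the type $\mathbb{N}$ contributes the formula $\bot$, for which contraction, weakening and promotion had to be encoded by hand above.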
The grey nodes $\mathsf{?c}$ and $\mathsf{?w}$ were defined in Fig.~\ref{fig:cwp-pos-type} (the case of $?\bot$ has been discussed above). The grey node ``$\mathsf{?}$'' is a shortcut for the following construction: \begin{center} \includegraphics[page=3,height=2cm]{PCFcbv-positive-nodes.pdf} \end{center} \begin{figure*} \begin{center} \fbox{ \begin{minipage}{15cm} \begin{center} \includegraphics[page=1,width=14cm]{PCFcbv3} \end{center} \end{minipage}} \end{center} \caption{Call-by-Value Translation of \ensuremath{\mathsf{PCF}}\ into \ensuremath{\mathsf{PCF}}\ Nets.}\label{fig:PCFcbv} \end{figure*} \paragraph*{Adequacy} We prove the following result, which relates the call-by-value encoding into \ensuremath{\mathsf{PCF}}\ nets and the call-by-value reduction strategy for terms: \begin{theorem} \label{th:adequacy-cbv} Let $M$ be a closed term of type $\mathbb{N}$. Then $M\to_{\it cbv}\PCFn{n}$ if and only if $M^\dagger\Downarrow n$ \end{theorem} As a corollary, we conclude that the machine on $M^\dagger$ behaves as $M$ in call-by-value. \begin{corollary} Let $M$ be a closed term of type $\mathbb{N}$. Then $M$ call-by-value converges if and only if $\mathcal{M}_{M^\dagger}$ itself converges \end{corollary} \subsection{The Call-by-Name Encoding} Besides the encoding of call-by-value \ensuremath{\mathsf{PCF}}, which is non-standard, and has thus been described in detail, program nets also have the expressive power to encode call-by-\emph{name} \ensuremath{\mathsf{PCF}}. The encoding is the usual one: a proof net corresponding to $x_1:A_1,\dots,x_n:A_n\vdash M:B$ has conclusions $\{?A_1^*\b,\ldots, ?A_n^*\b, B^*\}$ \[ \includegraphics[page=2,width=5cm]{PCFcbn.pdf} \] where $(\cdot)^*$ is a mapping of types to {\textsf{SMEYLL}} formulas: \begin{align*} \mathbb{N}^*&{}:=~\mathsf{1};\\ (A\to B)^* &{}:=~ ?(A^*)\b\parr B^*;\\ (A\times B)^* &{}:=~ {!(A^*)}\otimes{!(B^*)}. 
\end{align*} \noindent Typing derivations are mapped to \ensuremath{\mathsf{PCF}}{} nets essentially in the standard way, as presented in Figure~\ref{fig:PCFcbn}. Note that, unlike in the call-by-value translation, every context is always in $?$-form, so we do not need special weakening, contraction and promotion nodes. Then, as in the previous section, one can relate the call-by-name encoding in \ensuremath{\mathsf{PCF}}\ nets and the call-by-name reduction strategy for terms. \begin{figure*} \begin{center} \fbox{ \begin{minipage}{15cm} \begin{center} \includegraphics[page=1,width=14cm]{PCFcbn} \end{center} \end{minipage}} \end{center} \caption{Call-by-Name Translation of \ensuremath{\mathsf{PCF}}\ into \ensuremath{\mathsf{PCF}}\ Nets.}\label{fig:PCFcbn} \end{figure*} \begin{theorem}[Adequacy] \label{th:adequacy-cbn} Let $M$ be a closed term of type $\mathbb{N}$. Then $M\to_{\it cbn}\PCFn{n}$ if and only if $M^*\Downarrow n$. \end{theorem} As a corollary, one can show that the machine on $M^*$ behaves as $M$ in call-by-name. \begin{corollary} Let $M$ be a closed term of type $\mathbb{N}$. Then $M$ converges in call-by-name if and only if the register machine $\mathcal{M}_{M^*}$ itself converges. \end{corollary} \section{Conclusions} We have shown how the multitoken paradigm not only works well in the presence of exponentials and fixpoints, but also allows us to treat different evaluation strategies in a uniform way. Some other interesting aspects that emerged in the last section are worth mentioning. In the call-by-value encoding of \ensuremath{\mathsf{PCF}}, we have used binary \emph{sync nodes} in an essential way, to duplicate values in the register: without them, the efficient encoding of natural numbers would not have been possible. This shows that sync nodes can indeed have an interesting computational role besides reflecting entanglement in quantum computation~\cite{lics2014}.
In the future, we plan to further explore the potential of such a use, in particular in view of efficient implementations. A key feature of \textsf{SMEYLL}{} net rewriting is that it is \emph{surface}. Surface reduction allows us to interpret recursion, but how much do we lose by considering surface reduction instead of the usual cut-elimination? We think that a simple way to understand the limitations of surface reduction is to consider an analogy to Plotkin's weak reduction. In \ensuremath{\mathsf{PCF}}, $\lambda x.\Omega$ is a normal form. As a consequence one loses, \emph{e.g.}, some nice results about the shape of normal forms in the $\lambda$-calculus (which, in logic, corresponds to the subformula property). In the presence of fixpoints, however, this is a necessary price to pay. Otherwise, any term including a fixpoint would diverge. Of course there is much more to be said about all this, and we refer the reader to, \emph{e.g.}, the work by Simpson~\cite{SimpsonRTA2005}. \section*{Acknowledgments} The first author is partially supported by the ANR project 12IS02001 PACE. The second and third authors were supported by the project ANR-2010-BLANC-021301 ``LOGOI''. \bibliographystyle{abbrv}
\section{\large Introduction} The vigorous study of general linear groups and more generally algebraic ${\K}$-theory was stimulated in the mid-sixties by the desire to solve Serre's problem on projective modules ({\it cf.} Faisceaux Alg\'ebriques Coh\'erents, 1955). This prominent problem in commutative algebra asks whether {\it finitely generated projective modules over a polynomial ring over a field are free}. The beautiful book {\it Serre's problem on projective modules} by T.Y. Lam gives a comprehensive account of the mathematics surrounding Serre's problem and its solution. Later we see analogs of Serre's Problem for modules with forms and for other classical groups in the work of H. Bass, A. Suslin, L.N. Vaserstein, V.I. Kopeiko, R. Parimala and others in \cite{Bass}, \cite{SUSV}, \cite{KOP}, \cite{SUSK}, \cite{P1}, \cite{P2}. In the present paper, we are interested in certain problems related to Serre's Problem in the context of modules with forms, {\it viz.} normality of the elementary subgroup of the full automorphism group, Suslin's local-global principle for classical-like groups, stabilization for ${\k}$-functors of classical-like groups, and the structure of unstable ${\k}$-groups of classical-like groups. Difficulties one has in handling the quadratic version of Serre's Problem in characteristic 2 were first noted by Bass in \cite{Bass}. In fact, in many cases classical groups over fields of characteristic 2 were harder to handle than those over fields of characteristic $\ne 2$ (for details see \cite{HV1}). In 1969, A. Bak resolved this problem by introducing {\it form rings} and {\it form parameters}. He introduced the general quadratic group or Bak's unitary group, which covers many different types of classical-like groups. We also see some results in this direction in the work of Klein, Mikhalev, Vaserstein {\it et al.} in \cite{K1}, \cite{K2}, \cite{V2}. The concept of {\it form parameter} also appears in the work of K.
McCrimmon, and plays an important role in his classification theory of Jordan algebras ({\it cf.}~\cite{M}); for details see (\cite{HO}, footnote p.~190) and \cite{J}. In his seminal work ``${\K}$-theory of forms'', Bak established analogs of many results related to Serre's problem in a very explicit and rigorous manner. But Bak's definition of the general quadratic group does not include several other types of classical-like groups, {\it viz.} odd-dimensional orthogonal groups, the exceptional groups ${\E}_6$, ${\E}_7$, ${\E}_8$, {\it etc.} In 2000, G. Tang, in his Ph.D. thesis, established analogs of many results for the general Hermitian groups. More recently, in 2005, Victor Petrov, using Bak's concept of a {\it doubly parametrized form parameter}, resolved this problem by introducing {\it odd unitary groups}, which also include Bak's unitary and the general Hermitian groups; {\it cf.}~\cite{P}. Also, he has established many analogous results for his groups. In 1976, D. Quillen came up with a localization method which was one of the main ingredients for the proof of Serre's problem (now widely known as the Quillen---Suslin Theorem). Shortly after the original proof, Suslin introduced the following matrix-theoretic version of Quillen's local-global principle. {\bf Suslin's Local-Global Principle:} {\it Let $R$ be a commutative ring with identity, $X$ a variable and $\alpha(X)\in {\GL}(n, R[X])$ with $\alpha(0)={\I}_n$, $n\ge 3$. If $\alpha_{\mf{m}}(X)\in {\E}(n, R_{\mf{m}}[X])$ for every maximal ideal $\mf{m}\in {\M}(R)$, then $\alpha(X)\in {\E}(n, R[X])$. } \vp Soon after, he gave the ${\k}$-analog of Serre's problem, which says that \vp {\it for a polynomial ring in $r$ variables over a field $K$, the elementary subgroup of the general linear group coincides with the special linear group, i.e.,} $${\E}_n(K[X_1,\ldots,X_r])={\SL}_n(K[X_1,\dots,X_r]).
$$ In connection with this theorem he proved the normality of the elementary subgroup ${\E}(n, A)$ in the general linear group ${\GL}(n, A)$, over a module finite ring $A$, for $n\ge 3$; ({\it cf.}~\cite{Tu}). Later analogous results for the symplectic and orthogonal groups were proven by Suslin and Kopeiko in \cite{SUS} and \cite{SUSK} and by Fu An Li in \cite{Fu}, and for arbitrary Chevalley groups by Abe ({\it cf.}~\cite{A}) in the local case, and by Taddei ({\it cf.}~\cite{Ta}) in general. Later we see a simpler and more general treatment in the works of Ambily, Bak, Hazrat, Petrov, Rao, Stavrova, Stepanov, Suzuki, Vavilov, and others. We see generalizations of the above local-global principle for the symplectic group in \cite{KOP}, and for the orthogonal group in \cite{SUSK}. The normality of the general quadratic groups is known from the work of A. Bak and N. Vavilov, {\it cf.}~\cite{BV}. In \cite{T}, G. Tang has proved the normality property for the general Hermitian groups. In \cite{BRK}, we have shown that the question of normality of the elementary subgroup of the general linear, symplectic and orthogonal groups is equivalent to the above local-global principle, where the base ring is associative with identity and finite over its center. In that article the above three classical groups were treated uniformly. Motivated by the work of A. Bak, R.G. Swan, L.N. Vaserstein and others, in \cite{BBR}, the author, together with A. Bak and R.A. Rao, has established an analog of Suslin's local-global principle for the transvection subgroup of the automorphism group of projective, symplectic and orthogonal modules of global rank at least 1 and local rank at least 3, under the assumption that the projective module has constant local rank and that the symplectic and orthogonal modules are locally an orthogonal sum of a constant number of hyperbolic planes. In that article we also proved the equivalence of the local-global principle with the normality property.
Since normality holds in the above cases, this establishes that the local-global principle also holds. In fact, following Suslin--Vaserstein's method, we establish an analogous local-global principle for the general quadratic and general Hermitian groups. We treat these two groups uniformly and give explicit proofs of those results. We have overcome many technical difficulties that arise in the Hermitian case due to the elements $a_1,\ldots,a_r$ (the elements with respect to which the Hermitian groups are defined). We assume $a_1=0$. The rigorous study of the general Hermitian groups can be found in \cite{T}. An excellent survey of this area is given in the joint work of A. Bak and N. Vavilov \cite{BV}. We refer to \cite{HVZ} for an alternative approach to localization, \cite{HSVZ} for a general overview, and \cite{CR} for relative cases. Also, for commutative rings with identity the Quillen---Suslin local-global principle appears in the work of V. Petrov and A. Stavrova ({\it cf.}~\cite{PS}), which covers, in particular, classical groups of Witt index $\ge 2$ or $\ge 3$, depending on the type. In \cite{BRK}, it has been shown that the normality criterion of the elementary subgroup of the general linear group is equivalent to the above local-global principle. In this paper we establish the analogous local-global principle for the general quadratic and general Hermitian groups, and prove an equivalence. More precisely, we prove ($\S 6$, Theorem \ref{LG}, and $\S 7$, Theorem \ref{N-LG}) \vp {\bf Theorem 1} {\bf (Local-Global Principle)} Let $k$ be a commutative ring with identity and $R$ an associative $k$-algebra such that $R$ is finite as a left $k$-module.
If $\alpha(X)\in {\G}(2n,R[X], \LMD[X])$, $\alpha(0)={\I}_{2n}$ and $$\alpha_{\m}(X)\in {\E}(2n,R_{\m}[X], \LMD_{\mf m}[X])$$ for every maximal ideal $\m \in {\M}(k)$, then $$\alpha(X)\in {\E}(2n,R[X], \LMD[X]).$$ $($Note that $R_{\m}$ denotes $S^{-1}R$, where $S = k \setminus \m$.$)$ \vp {\bf Theorem 2} Let $k$ be a commutative ring with identity and $R$ an associative $k$-algebra such that $R$ is finite as a left $k$-module. Then for size at least 6 in the quadratic case and at least $2(r+3)$ in the Hermitian case: \begin{center} {\bf (Normality of the elementary subgroup)}\\ $\equiv$\\ {\bf (Local-Global Principle)} \end{center} \vp \iffalse \begin{enumerate} \item {\bf (Normality)} The elementary subgroup ${\E}(2n, R, \LMD)$ is a normal subgroup of the general quadratic (general Hermitian) group ${\G}(2n, R, \LMD)$. \item {\bf (L-G Principle)} If $\alpha(X)\in {\G}(2n,R[X], \LMD[X])$, $\alpha(0)={\I}_{2n}$ and $$\alpha_{\m}(X)\in {\E}(2n,R_{\m}[X], \LMD_{\mf m}[X])$$ for every maximal ideal $\m \in {\M}(k)$, then $$\alpha(X)\in {\E}(2n,R[X], \LMD[X]).$$ $($Note that $R_{\m}$ denotes $S^{-1}R$, where $S = k \setminus \m$.$)$ \end{enumerate} \fi To give a complete picture of the ${\k}$-functors, we shall briefly discuss progress on the stabilization problem for ${\k}$-functors. The study of this problem first appeared in the work of Bass--Milnor--Serre, and was continued in the work of A. Bak, M. Stein, L.N. Vaserstein, and others for the symplectic, orthogonal and general quadratic groups. For details {\it cf.}~\cite{Bak1}, \cite{ST}, \cite{V}, \cite{V2}, and \cite{V3}. In 1998, R.A. Rao and W. van der Kallen studied this problem for the linear groups over an affine algebra in \cite{RV}. The result was settled for the general quadratic and the general Hermitian groups by A. Bak, G. Tang and V. Petrov in \cite{Bak2} and \cite{Bak3}. The result by Bak---Petrov---Tang has been improved by Sergei Sinchuk ({\it cf.}~\cite{SS}).
It has been observed that over a regular affine algebra Vaserstein's bounds for the stabilization can be improved for the transvection subgroup of the full automorphism group of projective and symplectic modules. But they cannot be improved in the orthogonal case in general. For details {\it cf.} \cite{BR}, \cite{BRS}. We refer to the recent breakthrough result of J. Fasel, R.A. Rao and R.G. Swan (\cite{FRS}, Corollary 7.7). A very recent result of Weibo Yu gives a similar bound for the odd unitary groups ({\it cf.}~\cite{We}). In this paper we do not prove any new result in this direction. Though the study of stability for ${\k}$-functors started in the mid-sixties, the structure of the ${\k}$-groups below the level of the stable range was not much studied. In 1991, A. Bak showed that the group ${\GL}(n, R)/{\E}(n, R)$ is nilpotent-by-abelian for $n\ge 3$; ({\it cf.}~\cite{Bak}). In \cite{HR}, R. Hazrat proved a similar result for the general quadratic groups over module finite rings. The paper of Hazrat and Vavilov \cite{HV} redoes this for classical Chevalley groups (that is types A, C, and D) and then extends it further to the exceptional Chevalley groups (that is types E, F, and G). They have shown the following: Let $\Phi$ be a reduced irreducible root system of rank $\geq 2$ and $R$ be a commutative ring such that its Bass--Serre dimension $\delta(R)$ is finite. Then for any Chevalley group ${\G}(\Phi, R)$ of type $\Phi$ over $R$ the quotient ${\G}(\Phi, R)/{\E}(\Phi, R)$ is nilpotent-by-abelian. In particular, ${\k}(\Phi, R)$ is nilpotent of class at most $\delta(R) + 1$. They use the localization-completion method of A. Bak in \cite{Bak}. In \cite{BBR}, the author, together with Bak and Rao, gave a uniform proof for the transvection subgroup of the full automorphism group of projective, symplectic and orthogonal modules of global rank at least 1 and local rank at least 3. Our method of proof shows that for classical groups the localization part suffices.
Recently, Bak, Hazrat and Vavilov ({\it cf.}~\cite{BHV}) proved the relative case for the unitary and Chevalley groups. But, to the best of my knowledge, so far there is no definitive result for the general Hermitian groups. I observe that using the above local-global principle, arguing as in \cite{BBR}, it follows that the unstable ${\k}$ of the general Hermitian group is nilpotent-by-abelian. We follow the line of Theorem 4.1 in \cite{BBR}. More precisely, we prove ($\S 8$, Theorem \ref{nil}) \vp {\bf Theorem 3} For the general Hermitian group of large size over a commutative ring $R$ with identity, the quotient group $\frac{{\SH}(2n, R, a_1,\ldots, a_r)}{{\EH}(2n, R, a_1,\ldots, a_r)}$ is nilpotent for $n\ge r+3$. \vp We conclude with a brief description of the organization of the rest of the paper. Section 1 of the paper serves as an introduction. Section 2 recalls form rings, section 3 general quadratic groups over form rings and their elementary subgroups, section 4 general Hermitian groups and their elementary subgroups, section 5 provides preliminary results regarding the groups above, section 6 the local-global principle for the elementary subgroup of the general quadratic and general Hermitian group, section 7 the equivalence of normality of the elementary subgroup and the local-global principle for the elementary subgroup, and section 8 the nilpotent-by-abelian structure of non-stable ${\k}$ of the general Hermitian group. \section{Form Rings} {\bf Definition:} Let us first recall the concept of $\LMD$-quadratic forms introduced by A. Bak in his Ph.D. thesis ({\it cf.}~\cite{Bak1}) in order to overcome the difficulties that arise in characteristic 2. Let $R$ be a (not necessarily commutative) associative ring with identity, and with involution $-:R\ra R$, $a\mapsto \ol{a}$. Let $\lambda\in C(R)$ = center of $R$ be an element with the property $\lmd \ol{\lmd}=1$.
We define additive subgroups of $R$ $$\LMD_{\tn{max}}=\{a\in R\,|\, a=-\lmd\ol{a}\} \,\,\,\, \& \,\,\,\, \LMD_{\tn{min}}=\{a-\lmd\ol{a}\,|\, a\in R\}.$$ One checks that $\LMD_{\tn{max}}$ and $\LMD_{\tn{min}}$ are closed under the conjugation operation $a\mapsto \ol{x}ax$ for any $x\in R$. A $\lmd$-form parameter on $R$ is an additive subgroup $\LMD$ of $R$ such that $ \LMD_{\tn{min}}\subseteq \LMD\subseteq\LMD_{\tn{max}}$, and $\ol{x}\LMD x\subseteq \LMD$ for all $x\in R$. A pair $(R,\LMD)$ is called a {\it form ring}. \vp {\bf Examples:} \begin{enumerate} \item $ \LMD_{\tn{min}} = 0 \Leftrightarrow \lmd=1$ and the involution is trivial. In particular, $\LMD=0 \Leftrightarrow \lmd=1$, the involution is trivial, and $R$ is commutative. \item Let $R$ be a commutative integral domain with trivial involution. Then $\lmd^2=1$, {\it i.e.}, $\lmd=\pm 1$. If $\lmd=1$ and char$R\ne 2$, then $\LMD_{\tn{max}}=0$, and so $0$ is the only form parameter. If $\lmd=-1$ and char$R\ne 2$, then $\LMD$ contains $2R$ and is closed under multiplication by squares. If $R$ is a field, then we get $\LMD = R$. If $R = \mbb{Z}$, then the possible form parameters are $\LMD = 2\mbb{Z}$ and $\LMD = \mbb{Z}$. If char$R = 2$, then $R^2$ is a subring of $R$, and the form parameters are the $R^2$-submodules of $R$. \item The ring of $n\times n$ matrices $({\EM}(n, R), \LMD_n)$ is a form ring. \end{enumerate} \vp {\bf Remark:} An earlier version of the $\lmd$-form parameter is due to K. McCrimmon; it plays an important role in his classification theory of Jordan algebras. He defined it for the wider class of alternative rings (not just associative rings), but for associative rings it is a special case of Bak's concept. (For details, {\it cf.} N. Jacobson; Lectures on Quadratic Jordan Algebras, TIFR, Bombay 1969). The excellent work of Hazrat-Vavilov in \cite{HV1} is a very good source to understand the historical motivation behind the concept of form rings.
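The integral-domain example above can also be checked mechanically in a finite quotient. The following Python sketch (purely illustrative; the representation is ours) enumerates all $\lmd$-form parameters of $R=\mathbb{Z}/8\mathbb{Z}$ with trivial involution and $\lmd=-1$, recovering exactly $\LMD=2R$ and $\LMD=R$, in accordance with the case $R=\mathbb{Z}$ discussed above.

```python
from itertools import combinations

# R = Z/8Z with trivial involution and λ = -1 (so λ·conj(λ) = 1 in R).
# A form parameter Λ is an additive subgroup with Λ_min ⊆ Λ ⊆ Λ_max
# and conj(x)·Λ·x ⊆ Λ for all x in R.
n = 8
R = range(n)
lam = n - 1                                    # λ = -1 mod 8
conj = lambda a: a                             # trivial involution

Lmin = {(a - lam * conj(a)) % n for a in R}    # = 2R here
Lmax = {a for a in R if a == (-lam * conj(a)) % n}   # = R here

def is_form_parameter(L):
    return (Lmin <= L <= Lmax
            and all((a + b) % n in L for a in L for b in L)
            and all((conj(x) * a * x) % n in L for a in L for x in R))

# Brute-force enumeration over all subsets of R.
form_parameters = [set(s) for k in range(n + 1)
                   for s in combinations(R, k)
                   if is_form_parameter(set(s))]
```

Running the enumeration yields exactly the two form parameters $\{0,2,4,6\}=2R$ and $R$ itself.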
And, an excellent source to understand the theory of form rings is the book \cite{HO} by A.J. Hahn and O.T. O'Meara. \section{General Quadratic Group} Let $V$ be a right $R$-module and ${\GL}(V)$ the group of all $R$-linear automorphisms of $V$. A map $f: V\times V \ra R$ is called a {\it sesquilinear form} if $f(ua,vb)=\ol{a}f(u,v)b$ for all $u,v\in V$, \,\,$a,b\in R$. We define the $\LMD$-{\it quadratic form} $q$ on $V$ and the associated $\lmd$-{\it Hermitian form} $h$ as follows: $$q : V \ra R/\LMD, \,\,{\rm ~given~ by~} \,\, q(v)=f(v,v) + \LMD,\,\,{\rm ~and~}$$ $$h: V\times V\ra R; \,\,{\rm ~given~ by~}\,\, h(u,v)=f(u,v) + \lmd \ol{f(v,u)}.$$ A Quadratic Module over $(R, \LMD)$ is a triple $(V,h,q)$. \vp {\bf Definition:} ``Bak's Unitary Groups'' or ``The Unitary Group of a Quadratic Module'' or ``General Quadratic Group'' ${\GQ}(V,q,h)$ is defined as follows: $${\GQ}(V,q,h) \,\, =\,\, \{\alpha \in {\GL}(V) \,\,|\,\, h(\alpha u, \alpha v)=h(u,v), \,\, q(\alpha v)=q(v)\}.$$ {\bf Examples:} \underline{Traditional Classical Groups} \begin{enumerate} \item By taking $\LMD = \LMD_{\tn{max}} = R$, $\lmd=-1$, and trivial involution we get the symplectic group ${\GQ}(2n, R, \LMD) = {\Sp}(2n, R)$. \item By taking $\LMD = \LMD_{\tn{min}} = 0$, $\lmd=1$, and trivial involution we get the quadratic (orthogonal) group ${\GQ}(2n, R, \LMD) = {\O}(2n, R)$. \item For the general linear group, let $R^{o}$ be the ring opposite to $R$, and $R^e = R\oplus R^{o}$. Define the involution as follows: $(x, y^o) \mapsto (y, x^o)$. Let $\lmd = (1,1^o)$ and $\LMD = \{(x,-x^o) \,\,|\,\,x\in R\}$. Then identify ${\GQ}(2n, R^e, \LMD) = \{(g, g^{-1})\,\,|\,\, g\in {\GL}(n,R)\}$ with ${\GL}(n,R)$. \end{enumerate} {\bf Free Case:} Let $V$ be a free right $R$-module of rank $2n$ with ordered basis $e_1, e_2, \ldots, e_n, e_{-n},\ldots, e_{-2}, e_{-1}$. Consider the sesquilinear form $f : V\times V \lra R$, defined by $f(u,v)=\ol{u}_1v_{-1}+\cdots+\ol{u}_nv_{-n}$.
Let $h$ be the Hermitian form and $q$ the $\LMD$-quadratic form defined by $f$. So, we have $$h(u,v) = \ol{u}_1v_{-1}+\cdots+\ol{u}_nv_{-n} +\lmd \ol{u}_{-n}v_n+\cdots+\lmd\ol{u}_{-1}v_1,$$ $$q(u) = \LMD + \ol{u}_1u_{-1}+\cdots+\ol{u}_n u_{-n}.$$ Using this basis we can identify ${\GQ}(V, h, q)$ with a subgroup of ${\GL}(2n, R)$. We denote this subgroup by ${\GQ}(2n, R, \LMD)$. By fixing a basis $e_1,e_2,\ldots, e_n, e_{-1}, e_{-2}, \ldots, e_{-n}$, we define the form $$\psi_n= \begin{pmatrix} 0 & \lmd {\rm I}_n \\ {\rm I}_n &0\end{pmatrix}.$$ Hence, ${\GQ}(2n, R,\LMD)$ = $\{\sigma\in {\GL}(2n, R)\,|\, \ol{\sigma}\psi_n \sigma=\psi_n\}.$ \vp For $\sigma=\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}\in {\GL}(2n, R)$, where $\alpha, \beta, \gamma, \delta$ are $n\times n$ block matrices, one can show that $\sigma\in {\GQ}(2n, R,\LMD)$ if and only if $\ol{\gamma} \alpha,\ol{\delta}\beta\in \LMD$. For more details see (\cite{Bak1}, 3.1 and 3.4). \vp A typical element in ${\GQ}(2n, R,\LMD)$ is denoted by a $2n\times 2n$ matrix $\begin{pmatrix} \alpha & \beta \\ \gamma &\delta \end{pmatrix}$, where $\alpha, \beta, \gamma, \delta$ are $n\times n$ block matrices. There is a standard embedding, ${\GQ}(2n, R,\LMD)\ra {\GQ}(2n+2, R,\LMD)$, given by $$\begin{pmatrix} \alpha & \beta \\ \gamma &\delta \end{pmatrix} \mapsto \begin{pmatrix}\alpha & 0 & \beta & 0\\ 0 & 1 & 0 & 0\\ \gamma & 0 & \delta & 0\\0 & 0 & 0 & 1\end{pmatrix}$$ called the {\it stabilization} map. This allows us to identify ${\GQ}(2n, R,\LMD)$ with a subgroup of ${\GQ}(2n+2, R,\LMD)$. \vp {\bf Elementary Quadratic Matrices:} Let $\rho$ be the permutation defined by $\rho(i)=n+i$ for $i=1,\ldots,n$. Let $e_{ij}$ be the matrix with $1$ in the $ij$-th position and 0's elsewhere.
For $a\in R$ and $1\le i, j\le n$, we define $$q\eps_{ij}(a)={\rm I}_{2n}+ae_{ij}-\ol{a}e_{\rho(j)\rho(i)} \qquad \tn{for } i\ne j,$$ $$qr_{ij}(a)=\begin{cases} {\rm I}_{2n}+ae_{i\rho(j)}-\lmd\ol{a}e_{j\rho(i)} & \tn{for } i\ne j,\\ {\rm I}_{2n}+ae_{i\rho(j)} & \tn{for } i=j,\end{cases}$$ $$ql_{ij}(a)=\begin{cases} {\rm I}_{2n}+ae_{\rho(i)j}-\ol{\lmd}\ol{a}e_{\rho(j)i} & \tn{for } i\ne j,\\ {\rm I}_{2n}+ ae_{\rho(i)j} & \tn{for } i=j.\end{cases}$$ (Note that for the second and third types of elementary matrices, if $i=j$, then membership requires $a=-\lmd\ol{a}$, and hence forces $a\in \LMD_{\rm max} (R)$. One checks that the above matrices belong to ${\GQ}(2n, R,\LMD)$; {\it cf.}~\cite{Bak1}.) \vp {\bf n-th Elementary Quadratic Group ${\EQ}(2n, R,\LMD)$:} \tn{The subgroup generated by $q\eps_{ij}(a)$, $qr_{ij}(a)$ and $ql_{ij}(a)$, for $a\in R$ and $1\le i,j\le n$.} \vp It is clear that the stabilization map takes generators of ${\EQ}(2n, R,\LMD)$ to generators of ${\EQ}(2(n+1), R, \LMD)$. \vp {\bf Commutator Relations:} There are standard formulas for the commutators between quadratic elementary matrices. For details we refer to \cite{Bak1} (Lemma 3.16) and \cite{HR} ($\S$ 2). In later sections we shall repeatedly use these relations. \vp \section{Hermitian Group} We assume that $\LMD$ is a $\lmd$-form parameter on $R$. For a matrix $M=(m_{ij})$ over $R$ we define $\ol{M}=(\ol{m}_{ij})^t$. For $a_1,\dots,a_r\in \LMD$ and $n>r$, let $$A_1=\begin{pmatrix} a_1 & 0 & 0 & \cdots & 0\\ 0 & a_2 & 0 & \cdots & 0\\ \cdots & \cdots & \cdots & \cdots & \cdots\\ 0 & \cdots & 0 & a_{r-1} & 0\\ 0 & \cdots & 0 & 0 & a_r \end{pmatrix}=[a_1,\ldots,a_r]$$ denote the diagonal matrix whose $i$-th diagonal entry is $a_i$. Let $A=A_1\perp {\rm I}_{n-r}$.
We define the form $$\psi^h_n= \begin{pmatrix} A_1 & \lmd {\rm I}_n \\ {\rm I}_n &0\end{pmatrix}.$$ {\bf Definition:} {\bf General Hermitian Group} \tn{of the elements $a_1,\ldots,a_r$: ${\GH}(2n, R, a_1,\ldots ,a_r, \LMD)$ is the group of all non-singular $2n\times 2n$ matrices} $$\{\sigma\in {\GL}(2n, R)\,|\, \ol{\sigma}\psi^h_n \sigma=\psi^h_n\}.$$ As before, there is an obvious embedding $${\GH}(2n, R, a_1,\ldots,a_r, \LMD) \inj {\GH}(2n+2, R, a_1,\ldots,a_r, \LMD).$$ To define {\bf elementary Hermitian matrices}, we need to consider the set $C=\{(x_1,\ldots,x_r)^t\in (R^r)^t\,|\,\underset{i=1}{\overset{r}\sum} \ol{x}_ia_ix_i\in \LMD_{\tn{min}} (R)\}$ for $a_1,\ldots,a_r$ as above. In order to overcome the technical difficulties caused by the elements $a_1,\ldots,a_r$, we shall finely partition a typical matrix $\begin{pmatrix} \alpha & \beta \\ \gamma &\delta \end{pmatrix}$ of ${\GH}(2n, R, a_1,\ldots,a_r, \LMD)$ into the form $$\begin{pmatrix} \alpha_{11} & \alpha_{12} & \beta_{11} & \beta_{12} \\ \alpha_{21} & \alpha_{22} & \beta_{21} & \beta_{22}\\ \gamma_{11} & \gamma_{12} & \delta_{11} & \delta_{12} \\ \gamma_{21} & \gamma_{22} & \delta_{21} & \delta_{22} \end{pmatrix},$$ where $\alpha_{11},\beta_{11}, \gamma_{11}, \delta_{11}$ are $r\times r$ matrices, $\alpha_{12},\beta_{12},\gamma_{12},\delta_{12}$ are $r\times (n-r)$ matrices, $\alpha_{21},\beta_{21},\gamma_{21},\delta_{21}$ are $(n-r)\times r$ matrices, and $\alpha_{22},\beta_{22},\gamma_{22}, \delta_{22}$ are $(n-r)\times (n-r)$ matrices. By (\cite{T}, Lemma 3.4), \begin{eqnarray} \label{tn1} \tn{ the columns of } \alpha_{11}-{\rm I}_r,\alpha_{12},\beta_{11},\beta_{12},\ol{\beta}_{11}, \ol{\beta}_{21}, \ol{\delta}_{11}-{\rm I}_r,\ol{\delta}_{21} \in C.
\end{eqnarray} It is a straightforward check that the subgroup of \lb ${\GH}(2n, R, a_1,\ldots,a_r, \LMD)$ consisting of the matrices \begin{eqnarray*} \left\{ \begin{pmatrix} {\rm I}_r & 0 & 0 & 0\\ 0 & \alpha_{22} & 0 & \beta_{22}\\ 0 & 0 & {\rm I}_r & 0 \\ 0 & \gamma_{22} & 0 & \delta_{22} \end{pmatrix} \in {\GH}(2n, R, a_1,\ldots,a_r) \right\} \end{eqnarray*} is isomorphic to \begin{eqnarray*} {\GQ}(2(n-r), R,\LMD). \end{eqnarray*} {\bf Elementary Hermitian Matrices:} The first three kinds of generators are taken for the most part from ${\GQ}(2(n-r), R,\LMD)$, which is embedded, as above, as a subgroup of ${\GH}(2n, R)$, and the last two kinds are motivated by the result \eqref{tn1} concerning the columns of a matrix in ${\GH}(2n, R)$. For $a\in R$, we define \begin{align*} h\eps_{ij}(a)= & {\rm I}_{2n}+ae_{ij}-\ol{a}e_{\rho(j)\rho(i)} && \tn{for}\,\, r+1\le i\le n,\ 1\le j\le n,\ i\ne j, \\ hr_{ij}(a)= & \begin{cases} {\rm I}_{2n}+ae_{i\rho(j)}-\lmd\ol{a}e_{j\rho(i)} & \tn{for}\,\, r+1\le i, j\le n,\ i\ne j,\\ {\rm I}_{2n}+ae_{i\rho(j)} & \tn{for} \,\, r+1\le i, j\le n,\ i=j, \end{cases} \\ hl_{ij}(a)= & \begin{cases} {\rm I}_{2n}+ae_{\rho(i)j}-\ol{\lmd}\ol{a}e_{\rho(j)i} & \tn{for} \,\, 1\le i, j\le n,\ i\ne j, \\ {\rm I}_{2n}+ae_{\rho(i)j} & \tn{for} \,\, 1\le i, j\le n,\ i= j. \end{cases} \end{align*} (Note that for the second and third types of elementary matrices, if $i=j$, then membership requires $a=-\lmd\ol{a}$, and hence forces $a\in \LMD_{\rm max} (R)$.) One checks that the above matrices belong to ${\GH}(2n, R,a_1,\ldots,a_r, \LMD)$; {\it cf.}~\cite{T}. For $\zeta=(x_1,\ldots,x_r)^t\in C$, let $\zeta_f\in R$ be such that $\zeta_f+\lmd\ol{\zeta}_f=\underset{i=1}{\overset{r}\sum} \ol{x}_ia_ix_i$. (The element $\zeta_f$ is not unique in general.)
We define $$hm_i(\zeta)=\begin{pmatrix} {\rm I}_r & \alpha_{12} & 0 & 0\\ 0 & {\rm I}_{n-r} & 0 & 0\\ 0 & -\ol{A}_1\alpha_{12} & {\rm I}_r & 0 \\ 0 & \gamma_{22} & -\ol{\alpha}_{12} & {\rm I}_{n-r} \end{pmatrix},$$ for $\zeta\in C$ and $r+1\le i\le n$, to be the $2n\times 2n$ matrix, where $\alpha_{12}$ is the $r\times (n-r)$ matrix with $\zeta$ as its $(i-r)$-th column and all other columns zero, and $\gamma_{22}$ is the $(n-r)\times (n-r)$ matrix with $\ol{\zeta}_f$ in the $(i-r,i-r)$-th position and $0$'s elsewhere. Let $e_k$ denote the column vector of length $(n-r)$ with $1$ in the $k$-th position and $0$'s elsewhere, and $e_{t\,s}$ denote an $(n-r)\times (n-r)$ matrix with $1$ in the $ts$-th position and $0$'s elsewhere. As above, we define $$hr_i(\zeta)=\begin{pmatrix} {\rm I}_r & 0 & 0 & \beta_{12}\\ 0 & {\rm I}_{n-r} & -\lmd\ol{\beta}_{12} & \beta_{22}\\ 0 & 0 & {\rm I}_r & -\ol{A}_1\beta_{12} \\ 0 & 0 & 0 & {\rm I}_{n-r} \end{pmatrix},$$ for $\zeta\in C$ and $r+1\le i\le n$, to be the $2n\times 2n$ matrix, where $\beta_{12}$ is the $r\times (n-r)$ matrix with $\zeta$ as its $(i-r)$-th column and all other columns zero, and $\beta_{22}$ is the $(n-r)\times (n-r)$ matrix with $\lmd\ol{\zeta}_f$ in the $(i-r,i-r)$-th position and $0$'s elsewhere. Note that if $\eta=e_{pq}(a)$ is an elementary generator in ${\GL}(s, R)$, then the matrix $({\rm I}_{n-s}\perp \eta\perp {\rm I}_{n-s} \perp \ol{\eta}^{\,-1})$ is an elementary generator of the form $h\eps_{ij}(a)$. It has been shown in \cite{T} ($\S 5$) that each of the above matrices is in ${\GH}(2n, R, a_1,\ldots,a_r, \LMD)$.\vp {\bf Definition:} {\bf n-th Elementary Hermitian Group} of the elements $a_1,\ldots, a_r$; ${\EH}(2n, R,a_1,\ldots,a_r, \LMD)$: The group generated by $h\eps_{ij}(a)$, $hr_{ij}(a)$, $hl_{ij}(a)$, $hm_i(\zeta)$ and $hr_i(\zeta)$, for $a\in R$, $\zeta\in C$ and $1\le i,j\le n$. The stabilization map takes generators of ${\EH}(2n, R,a_1,\dots, a_r,\LMD)$ to generators of ${\EH}(2(n+1), R, a_1,\dots, a_r,\LMD)$.
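This remark can be made concrete in the quadratic case by a routine verification: the hyperbolic embedding $\eps\mapsto \eps\perp\ol{\eps}^{\,-1}$ of ${\GL}(n,R)$ into ${\GQ}(2n,R,\LMD)$ carries linear elementary matrices to elementary quadratic ones. Indeed, for $\eps={\rm I}_n+ae_{ij}$ with $i\ne j$ we have $\ol{\eps}^{\,-1}={\rm I}_n-\ol{a}e_{ji}$, and in the $2n\times 2n$ indexing the bottom block occupies the positions indexed by $\rho$, so that $$\eps\perp\ol{\eps}^{\,-1} \;=\; {\rm I}_{2n}+ae_{ij}-\ol{a}e_{\rho(j)\rho(i)} \;=\; q\eps_{ij}(a).$$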
\vp {\bf Commutator Relations:} There are standard formulas for the commutators between Hermitian elementary matrices. For details we refer to \cite{T}. \vp \section{Preliminaries and Notations} {\bf Blanket Assumption:} We always assume that $2n\ge 6$, and that $n>r$ while dealing with the Hermitian case. We do not want to put any restriction on the elements of $C$. Therefore we assume that $a_i\in \LMD_\tn{min}(R)$ for $i=1,\ldots,r$, as in that case $C=R^r$. We always assume $a_1=0$. \begin{nt} \tn{In the sequel \tn{M}$(2n,R)$ will denote the set of all $2n\times 2n$ matrices. By $\G(2n,R, \LMD)$ we shall denote either the quadratic group ${\GQ}(2n, R,\LMD)$ or the Hermitian group ${\GH}(2n, R,a_1,\ldots,a_r, \LMD)$ of size $2n\times 2n$. By ${\es}(2n, R, \LMD)$ we shall denote the respective subgroups ${\SQ}(2n, R,\LMD)$ or ${\SH}(2n, R,a_1,\ldots,a_r,\LMD)$ of matrices of determinant 1, in the case when $R$ is commutative. By ${\E}(2n, R, \LMD)$ we shall denote the corresponding elementary subgroups ${\EQ}(2n, R,\LMD)$ and ${\EH}(2n, R,a_1,\ldots,a_r, \LMD)$. To treat the two cases uniformly, we denote the elementary generators of ${\EQ}(2n, R,\LMD)$, and the first three types of elementary generators of ${\EH}(2n, R,\LMD)$, by $\vartheta_{ij}(\star)$, for some $\star\in R$. To express the last two types of generators of ${\EH}(2n, R,\LMD)$ we shall use the notation $\vartheta_i(\star)$, where $\star$ is a column vector of length $r$ defined over the ring $R$; {\it i.e.}, we will have two types of elementary generators, namely $\vartheta_{ij}({\rm ring \,\, element})$ and $\vartheta_i({\rm column \,\, vector})$. Let $\LMD[X]$ denote the $\lambda$-form parameter on $R[X]$ induced from $(R,\LMD)$, {\it i.e.}, the $\lambda$-form parameter on $R[X]$ generated by $\LMD$ (the smallest form parameter on $R[X]$ containing $\LMD$). Let $\LMD_s$ denote the $\lambda$-form parameter on $R_s$ induced from $(R,\LMD)$.
} \end{nt} For any column vector $v\in (R^{2n})^t$ we define the row vectors $\widetilde{v}_q=\ol{v}^t\psi^q_n$ and $\widetilde{v}_h=\ol{v}^t\psi^h_n$. \begin{de} \tn{We define the map $\tn{M}:(R^{2n})^t \times (R^{2n})^t \ra \tn{M}(2n,R)$ and the inner product $\langle \,,\rangle$ as follows: } \begin{align*} \tn{M}(v,w) & = v.\widetilde{w}_q-\ol{\lambda}\ol{w}.\widetilde{v}_q, \,\,\,\, \tn{ when } {\G}(2n,R)={\GQ}(2n, R,\LMD)\\ & = v.\widetilde{w}_h -\ol{\lambda}\ol{w}.\widetilde{v}_h, \,\,\,\, \tn{ when } {\G}(2n,R)={\GH}(2n, R,a_1,\ldots,a_r, \LMD),\\ \langle v,w\rangle & = \widetilde{v}_q.w, \,\,\,\, \tn{ when } {\G}(2n,R)={\GQ}(2n, R,\LMD) \\ & = \widetilde{v}_h.w, \,\,\,\, \tn{ when } {\G}(2n,R)={\GH}(2n, R,a_1, \ldots , a_r, \LMD). \end{align*} \end{de} Note that the elementary generators of the groups ${\EQ}(2n, R)$ and ${\EH}(2n, R)$ are of the form ${\rm I}_{2n}+{\rm M}(\star_1,\star_2)$ for suitably chosen standard basis vectors. We recall the following well-known facts: \begin{lm} \label{nor} $($cf. \cite{Bak1}, \cite{T}$)$ The group ${\E}(2n,R, \LMD)$ is perfect for $n\ge 3$ in the quadratic case, and for $n\ge r+3$ in the Hermitian case, {\it i.e.}, $$[{\E}(2n,R, \LMD), {\E}(2n,R, \LMD)] = {\E}(2n,R, \LMD).$$ \end{lm} \begin{lm} \label{key2} {\bf (Splitting property):} For all elementary generators of the general quadratic group ${\GQ}(2n, R,\LMD)$ and for the first three types of elementary generators of the Hermitian group ${\GH}(2n, R,a_1,\ldots,a_r, \LMD)$ we have: $$\vartheta_{ij}(x+y)=\vartheta_{ij}(x)\vartheta_{ij}(y)$$ for all $x, y \in R$. For the last two types of elementary generators of the Hermitian group we have the following relations: $$hm_i(\zeta)hm_i(\xi)=hm_i(\zeta + \xi)hl_{ii}(\bar{\zeta}_f+\bar{\xi}_f+\bar{\zeta}\bar{A}_1\xi-\ol{(\zeta+\xi)}_f),$$ $$hr_i(\zeta) hr_i(\xi)=hr_i(\zeta + \xi)hr_{ii}((\zeta+\xi)_f-\xi_f-\zeta_f-\bar{\xi}A_1\zeta).$$ \end{lm} {\bf Proof.} See pg.
43-44, Lemma 3.16, \cite{Bak1} for the group ${\GQ}(2n, R,\LMD)$, and Lemma 8.2, \cite{T} for the group ${\GH}(2n, R,a_1,\ldots,a_r, \LMD)$. \hb \begin{lm} \label{key6} Let $G$ be a group, and $a_i$, $b_i \in G$, for $i = 1, \ldots, n$. Then for $r_i={\underset{j=1} {\overset{i}\Pi}} a_j$, we have ${\underset{i=1}{\overset{n}\Pi}}a_i b_i= {\underset{i=1}{\overset{n}\Pi}}r_ib_ir_i^{-1} {\underset{i=1}{\overset{n}\Pi}}a_i.$ \end{lm} \begin{nt} \tn{ By ${\G}(2n,R[X], \LMD [X], (X))$ we shall mean the group of all quadratic and Hermitian matrices over $R[X]$ which are ${\I}_{2n}$ modulo $(X)$.} \end{nt} \begin{lm} \label{key1} The group ${\G}(2n,R[X],\LMD [X], (X)) \cap {\E}(2n,R[X], \LMD [X])$ is generated by the elements of the types $ \eps \vartheta_{ij}(\star_1)\eps^{-1}$ and $ \eps \vartheta_{i}(\star_2)\eps^{-1}$, where $\eps \in {\E}(2n,R, \LMD)$, $\star_1\in R[X]$, $\star_2\in ((R[X])^{2n})^t$ with both $\vartheta_{ij}(\star_1)$ and $\vartheta_{i}(\star_2)$ congruent to ${\rm I}_{2n}$ modulo $(X)$. \end{lm} We give a proof of this Lemma for the Hermitian group. The proof for the quadratic case is similar, but easier. {\bf Proof of Lemma \ref{key1}}. Let $a_1(X),\ldots, a_r(X)$ be $r$ elements in the polynomial ring $R[X]$ with respect to which we are considering the Hermitian group ${\GH}(2n, R[X], a_1(X),\ldots,a_r(X), \LMD [X])$. Let $\alpha(X)\!\in\!{\EH}(2n, R[X], a_1(X),\ldots,a_r(X), \LMD [X])$ be such that $\alpha(X)$ is congruent to ${\rm I}_{2n}$ modulo $(X)$. Then we can write $\alpha(X)$ as a product of elements of the form $\vartheta_{ij}(\star_1)$, where $\star_1$ is a polynomial in $R[X]$, and of the form $\vartheta_{i}(\star_2)$, where $\star_2$ is a column vector of length $r$ defined over $R[X]$. We write each $\star_1$ as a sum of its constant term and a polynomial which is congruent to zero modulo $(X)$.
Hence, by using the splitting property described in Lemma \ref{key2}, each elementary generator $\vartheta_{ij}(\star_1)$ of the first three types can be written as a product of two such elementary generators, with the left one defined over $R$ and the right one defined over $R[X]$ and congruent to ${\rm I}_{2n}$ modulo $(X)$. For the last two types of elementary generators we write each vector $\star_2$ as a sum of a column vector defined over the ring $R$ and a column vector defined over $R[X]$ which is congruent to the zero vector of length $r$ modulo $(X)$. In this case, as shown in Lemma \ref{key2}, we get one extra term involving an elementary generator of the form $hl_{ii}$ or $hr_{ii}$. But that extra term is a generator of one of the first three types, and we can split it again as above. Therefore, $\alpha(X)$ can be expressed as a product of the following types of elementary generators: $$\vartheta_{i j}(\star_1(0))\vartheta_{i j}(X\star_1)\,\, {\rm with } \,\,\star_1(0)\in R \,\,{\rm and } \,\, \vartheta_{ij}(X\star_1)={\rm I}_{2n} \,\,{\rm modulo }\,\, (X),$$ $$\vartheta_{i}(\star_2(0))\vartheta_{i}(X\star_2) \,\,{\rm with }\,\, \star_2(0) \,\,{\rm defined \,\, over }\,\, R \,\,{\rm and }\,\, \vartheta_{i}(X\star_2)={\rm I}_{2n} \,\,{\rm modulo } \,\, (X).$$ Now the result follows by using the identity described in Lemma \ref{key6}. \hb \section{Suslin's Local-Global Principle} In his remarkable thesis ({\it cf.}~\cite{Bak1}) A. Bak showed that for a form ring $(R, \LMD)$ the elementary subgroup ${\EQ}(2n,R, \LMD)$ is perfect for $n\ge 3$ and hence is a normal subgroup of ${\GQ}(2n,R, \LMD)$. As we have noted earlier, this question is related to Suslin's local-global principle for the elementary subgroup. In \cite{T}, G. Tang has shown that for $n\ge r+3$ the elementary Hermitian group ${\EH}(2n, R,a_1, \ldots , a_r, \LMD)$ is perfect and hence is a normal subgroup of ${\GH}(2n, R,a_1, \ldots , a_r, \LMD)$.
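The perfectness results quoted above can be illustrated by a typical commutator computation: exactly as for the general linear group, for pairwise distinct indices $i$, $j$, $k$ one has (a standard instance of the commutator relations; {\it cf.}~\cite{Bak1}, Lemma 3.16) $$[q\eps_{ij}(a),\, q\eps_{jk}(b)] \;=\; q\eps_{ik}(ab), \qquad a, b\in R,$$ so that each generator of the first kind is itself a commutator of elementary generators. This is one reason why three distinct indices, and hence $n\ge 3$, are needed.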
In this section we deduce an analogous local-global principle for the elementary subgroups of the general quadratic and Hermitian groups, when $R$ is module finite, {\it i.e.}, finite over its center. We use this result in $\S 5$ to prove the nilpotent property of the unstable Hermitian group ${\KH}_1$. Furthermore, we show that if $R$ is finite over its center, then the normality of the elementary subgroup is equivalent to the local-global principle. This generalizes our result in \cite{BRK}. The following is the key lemma; it also indicates why we need to assume that the size of the matrix is at least $6$. In \cite{BRK} the proof is given for the general linear group. Arguing in a similar manner, using the commutator relations, the result follows in the unitary and Hermitian cases. A list of commutator relations for the elementary generators is stated in (\cite{Bak1}, pg. 43-44, Lemma 3.16) for the unitary groups and in (\cite{T}, pg. 237-239, Lemma 8.2) for the Hermitian groups. For a direct proof we refer to Lemma 5 of \cite{P}. \begin{lm} \label{key3} Suppose $\vartheta$ is an elementary generator of the general quadratic \lb $($Hermitian$)$ group ${\G}(2n, R[X], \LMD[X])$, $n\ge 3$, which is congruent to the identity modulo $(X^{2m})$, for $m>0$. Then the conjugate of $\vartheta$ by an elementary generator of the general quadratic $($Hermitian$)$ group ${\G}(2n, R, \LMD)$ is a product of elementary generators of the general quadratic $($Hermitian$)$ group ${\G}(2n, R[X], \LMD[X])$, each of which is congruent to the identity modulo $(X^m)$. \end{lm} \begin{co} \label{key4} In Lemma \ref{key3} we can take $\vartheta$ to be a product of elementary generators of the general quadratic $($general Hermitian$)$ group ${\G}(2n, R[X], \LMD[X])$. \end{co} \begin{lm} \label{key5} Let $(R, \LMD)$ be a form ring and $v\in {\E}(2n,R, \LMD)e_{2n}$. Let $w\in R^{2n}$ be a column vector such that $\langle v,w\rangle=0$.
Then ${\rm I}_{2n}+\tn{M}(v,w)\in {\E}(2n,R, \LMD)$. \end{lm} {\bf Proof.} Let $v=\eps e_{2n}$, where $\eps \in {\E}(2n,R,\LMD)$. Then it follows that ${\rm I}_{2n}+\tn{M}(v,w)=\eps ({\rm I}_{2n}+ \tn{M}(e_{2n},w_1))\eps^{-1},$ where $w_1 = \eps^{-1}w$. Since $\langle e_{2n},w_1\rangle=\langle v,w\rangle=0$, we get $w_1^t=(w_{11},\dots,w_{1 \,n-1},0,w_{1\,n+1},\ldots,w_{1 \,2n})$. Therefore, as $\lmd\ol{\lmd}=\ol{\lmd}\lmd=1$, $${\rm I}_{2n}+\tn{M}(v\!,\!w)\!\! = \!\! \begin{cases} \underset{1\le i\le n-1} {\underset{1\le j\le n} \Pi} \!\!\!\!\! \,\eps \,ql_{in}(-\ol{\lmd} \ol{w}_{n+1\, i})\,q\eps_{jn}(-\ol{\lmd}\ol{w}_{1\, j})ql_{n\, n}^{-1}(*) {\eps}^{-1} \\ \underset{1\le i\le n-1} {\underset{r+1\le j\le n}{\underset{1\le k\le r} \Pi}} \!\!\!\!\! \eps hl_{in}(-\ol{\lmd} \ol{w}_{n+1\, i})\!h\eps_{jn}(-\ol{\lmd}\ol{w}_{1\, j})\!hm_n (-\ol{w}_{1 \,k})hl_{n\, n}^{-1}(*) {\eps}\!\!\!^{-1} \end{cases}$$ (in the quadratic and Hermitian cases respectively), \\ where $\ol{w}_{1 \,\,n+k}= (w_{1 \,n+k},0,\ldots,0)$. Hence the result follows. \hb \vp Note that the above implication is true for any associative ring with identity. From now onwards we assume that $R$ is finite over its center $C(R)$. Let us recall \begin{lm} \label{noeth} Let $A$ be a Noetherian ring and let $s\in A$, $s\neq 0$. Then there exists a natural number $k$ such that the homomorphism ${\G}(A,s^kA, s^k\LMD) \ra {\G}(A_s, \LMD_s)$ $($induced by the localization homomorphism $A \ra A_s)$ is injective. \end{lm} For the proof of the above lemma we refer to (\cite{HV}, Lemma 5.1). Also, recall that any module finite ring $R$ is a direct limit of its finitely generated subrings. Thus, one may assume that $C(R)$ is Noetherian. Let $(R, \LMD)$ be a (module finite) form ring with identity. \begin{lm} \label{Di} {\bf (Dilation Lemma)} Let $\alpha(X)\in {\G}(2n,R[X], \LMD [X])$, with $\alpha(0)={\rm I}_{2n}$.
If $\alpha_s(X)\in {\E}(2n,R_s[X], \LMD_s [X])$, for some non-nilpotent $s\in R$, then $\alpha(bX) \in {\E}(2n,R[X], \LMD [X])$, for $b\in s^l {\C}(R)$ and $l\gg 0$. \end{lm} \begin{re} $($In the above Lemma we actually mean that there exists some $\beta(X)\in {\E}(2n,R[X], \LMD [X])$ such that $\beta(0)={\rm I}_{2n}$ and $\beta_s(X)=\alpha(bX)$.$)$ \end{re} {\bf Proof.} We are given that $\alpha_s(X)\in {\E}(2n,R_s[X], \LMD_s [X])$. Since $\alpha(0)={\rm I}_{2n}$, using Lemma \ref{key1} we can write $\alpha_s(X)$ as a product of matrices of the form $ \eps \vartheta_{ij}(\star_1)\eps^{-1}$ and $ \eps \vartheta_{i}(\star_2)\eps^{-1}$, where $\eps \in {\E}(2n,R_s, \LMD_s)$, $\star_1\in R_s[X]$, $\star_2\in ((R_s[X])^{2n})^t$, with both $\vartheta_{ij}(\star_1)$ and $\vartheta_{i}(\star_2)$ congruent to ${\rm I}_{2n}$ modulo $(X)$. Applying the homomorphism $X\mapsto XT^d$, where $d\gg 0$, from the polynomial ring $R[X]$ to the polynomial ring $R[X,T]$, we consider $\alpha(XT^d)$. Note that $R_s[X,T]\cong (R_s[X])[T]$. As $C(R)$ is Noetherian, it follows from Lemma \ref{noeth} and Corollary \ref{key4} that over the ring $(R_s[X])[T]$ we can write $\alpha_s(XT^d)$ as a product of elementary generators of the general quadratic $($Hermitian$)$ group such that each of those elementary generators is congruent to the identity modulo $(T)$. Let $l$ be the maximum of the powers of $s$ occurring in the denominators of those elementary generators. Again, as $C(R)$ is Noetherian, by applying the homomorphism $T\mapsto s^mT$, for $m\ge l$, it follows from Lemma \ref{noeth} that over the ring $R[X,T]$ we can write $\alpha(bXT^d)$, for some $b\in (s^l)C(R)$, as a product of elementary generators of the general quadratic $($Hermitian$)$ group, each of which is congruent to the identity modulo $(T)$; {\it i.e.}, there exists some $\beta(X,T)\in {\E}(2n,R[X,T], \LMD [X,T])$ such that $\beta(0,0)={\rm I}_{2n}$ and $\beta_s(X,T)=\alpha(bXT^d)$. Finally, the result follows by putting $T=1$.
\hb \iffalse -------------- We prove the Lemma in 2 steps. \vp \\ {\bf Step 1.} First we prove the following: \vp {\it If $\alpha(X)={\rm I}_{2n}+ X^d \tn{M}(v,w)$, where $v\in {\E}(2n,R, \LMD)e_{2n}$, $d\gg 0$ and $\langle v,w\rangle=0$, then $\alpha(X)\in {\E}(2n,R[X], \LMD [X])$ and it can be written as a product of elementary generators each of which is congruent to identity modulo $(X)$.} Replacing $R$ by $R[X]$ in Lemma \ref{key5} we get that ${\rm I}_n+X\tn{M}(v,w)\in {\E}(n,R[X], \LMD[X])$. Let $v=\eps e_1$, where $\eps \in {\E}(n,R)$. Since $\lmd\ol{\lmd}=\ol{\lmd}\lmd=1$, as in the proof of Lemma \ref{key5}, we can write ${\rm I}_{2n}+ X\tn{M}(v,w)$ $$= \!\! \begin{cases} \underset{1\le i\le n-1} {\underset{1\le j\le n} \Pi} \eps ql_{in}(-\ol{\lmd}X \ol{w}_{n+1\, i})\,q\eps_{jn}(-\ol{\lmd}X\ol{w}_{1\, j})ql_{n\, n}^{-1}(*) {\eps}^{-1} \\ \underset{1\le i\le n-1} {\underset{r+1\le j\le n}{\underset{1\le k\le r} \Pi}} \eps hl_{in}(-\ol{\lmd}X \ol{w}_{n+1\, i})\,h\eps_{jn}(-\ol{\lmd}X\ol{w}_{1\, j})hm_n (-X\ol{w}_{1 \,k)})hl_{n\, n}^{-1}(*){\eps}\!\!\!^{-1} \end{cases}$$ (in the quadratic and Hermitian cases respectively), where $\ol{w}_{1 \,\,n+k}= (w_{1 \,n+k},0,\ldots,0)$. Now we split the proof into following two cases: Case I: $\eps$ is an elementary generators of the type $\vartheta_{pq}(\star_1)$ or $\vartheta_p(\star_2)$. We apply the homomorphism $X\mapsto X^2$. Since $d\gg 0$ the result follows by applying Lemma \ref{key3} over $R[X]$. Case II: $\eps$ is a product of elementary generators of the type $\vartheta_{pq}(\star_1)$ or $\vartheta_p(\star_2)$. Let $\eps$ be product of $r$ elementary generators . First applying the homomorphism $X\mapsto X^{2^r}$ and then applying the Corollary \ref{key4} we see that the result is true for $d\gg 0$. \vp \\ {\bf Step 2.} Given that $\alpha_s(X)\in {\E}(2n,R_s[X], \LMD_s [X])$. Therefore for some integer $d>0$ $\alpha_s(XT^d)\in {\E}(2n,R_s[XT^d], \LMD_s [X])$. 
Since $\alpha(0)={\rm I_{2n}}$, Let $\alpha_s(X)=\underset{k}\Pi \vartheta_{i(k)j(k)}(h_k(X))$, where $h_k(X)\in R_s[X]$. So, $\alpha_s(XT^d)=\underset{k}\Pi \vartheta_{i(k) j(k)}(h_k(XT^d))$. Choose $d\ge 2^r$ and $r=\mu(\alpha(X))$. Since $\alpha(0)={\rm I}_n$, as in Lemma \ref{key1} we get \begin{eqnarray*} \alpha_s(XT^d) &=& \underset{k}\Pi \eps_k \vartheta_{i(k)j(k)}(XT^d\lambda_k(XT^d))\eps_k^{-1}\\ &=&\underset{k}\Pi ({\rm I}_n + XT^d\lambda_k(XT^d)\eps_k \tn{M}(e_{i(k)}, e_{\sigma(j)(k)}) \eps_k^{-1}), \end{eqnarray*} for $\lambda_k(XT^d) \in R_s[X, T]$, $\eps_k \in {\E}(n, R_s, \LMD_s)$. Let $v_k=\eps_k e_{i(k)}$. Then taking $w_k(X,T)=\ol{\lambda}X\lambda_k(XT^d)\eps_k e_{\sigma(j)(k)}$, we get $\langle v_k,w_k(X,T)\rangle=0$ (without loss of generality we are assuming $\sigma(i)\ne j$). Applying result in Step I over the polynomial ring $(R_s[X])[T]$ we get ${\rm I}_n+T^d \tn{M}(v_k,w_k(X,T)) \lb \in {\E}(n,R_s[X,T], \LMD_s[X,T])$, and can be expressed as a product of the form $\Pi \vartheta_{p_{k(t)} q_{k(t)}}(T h_k(X,T))$, where $h_k(X,T)\in R_s[X,T]$. Let $l$ be the maximum of the powers occurring in the denominators of $h_k(X,T)$ for all $k$. (For the last two types of elementary Hermitian generators we take the largest power of $s$ arising in the denominator of $a_i(X)$'s and the trace defining element). Now applying the homomorphism $T\mapsto s^mT$ for $m\ge l$ we get $\alpha(bXT^d)\in {\E}(n,R[X,T], \LMD[X,T])$ for some $b\in (s^l)C(R)$. Finally, putting $T=1$ we get the required result. 
\hb ------------------- \fi \begin{tr} \label{LG} {\bf (Local-Global Principle)} \\ If $\alpha(X)\in {\G}(2n,R[X], \LMD[X])$, $\alpha(0)={\I}_{2n}$ and $$\alpha_{\m}(X)\in {\E}(2n,R_{\m}[X], \LMD_{\mf m}[X]),$$ for every maximal ideal $\m \in \M \, (C(R))$, then $$\alpha(X)\in {\E}(2n,R[X], \LMD[X]).$$ $($Note that $R_{\m}$ denotes $S^{-1}R$, where $S = C(R) \setminus \m$.$)$ \end{tr} {\bf Proof.} Since $\alpha_{\m}(X)\in {\E}(2n,R_{\m}[X], \LMD_{\mf m}[X])$, for all $\m \in {\M}(C(R))$, for each $\m$ there exists $s\in C(R) \setminus \m$ such that $\alpha_s(X)\in {\E}(2n,R_s[X], \LMD_s[X]).$ Since $C(R)$ is Noetherian, we can choose finitely many such elements $s_1,\ldots, s_r$ generating the unit ideal, say $s_1+\cdots+ s_r=1$. Let $\theta(X,T)= \alpha_s(X+T)\alpha_s(T)^{-1}$. Then $$\theta(X,T)\in {\E}(2n,(R_s[T])[X], \LMD_s[T][X])$$ and $\theta(0,T)={\I}_{2n}$. By the Dilation Lemma, applied with base ring $R[T]$, there exists $\beta(X)\in {\E}(2n,R[X,T], \LMD[X,T])$ such that $$\beta_{s}(X)= \theta(bX,T). \eqno{(\rm A)}$$ Since for $l\gg 0$ the ideal $\langle s_1^l,\ldots, s_r^l\rangle = R$, we choose $b_1,b_2,\dots,b_r\in C(R)$, with $b_i\in (s_i^l)C(R),\, l\gg 0$, such that (A) holds for each $s_i$ and $b_1+\cdots+b_r=1$. Then there exists $\beta^i(X)\in {\E}(2n,R[X,T], \LMD[X,T])$ such that $\beta^i_{s_i}(X)= \theta(b_iX,T)$. Therefore, $$\underset{i=1}{\overset{r}\Pi}\beta^i(X)\in {\E}(2n,R[X,T], \LMD[X,T]).$$ But, $$\alpha_{s_1\cdots s_r}(X)= \left(\underset{i=1}{\overset{r-1}\Pi} \theta_{s_1\cdots \hat{s_i}\cdots s_r}(b_iX,T) {\mid}_{T=b_{i+1}X+\cdots +b_rX}\right) \theta_{s_1\cdots \cdots s_{r-1}}(b_rX,0).$$ Since $\alpha(0)={\I}_{2n}$, and since, as a consequence of Lemma \ref{noeth}, the map ${\E}(R,s^kR, s^k\LMD) \ra {\E}(R_s, \LMD_s)$ is injective, we conclude $\alpha(X)\in {\E}(2n,R[X], \LMD[X])$. \hb \iffalse Let $b_1,b_2,\dots,b_r\in C(R)$, with $b_i\in (s^l)C(R),\, l\gg 0$ be such that (A) holds and $b_1+\cdots+b_r=1$.
Then $\theta(b_iX,T)\in {\E}(2n,R[X,T], \LMD[X,T])$ and hence $\underset{i=1}{\overset{r}\Pi}\theta(b_iX,T)\in {\E}(2n,R[X,T], \LMD[X,T])$. But, $$\alpha(X)=\left(\underset{i=1}{\overset{r-1}\Pi} \theta(b_iX,T) {\mid}_{T=b_{i+1}X+\cdots +b_rX}\right) \theta(b_rX,0).$$ Since $\alpha(0)={\I}_n$, it follows that $\alpha(X)\in {\E}(2n,R[X], \LMD[X])$. (For simplicity we have written the above expression. Actually we are considering the equality in the localization $R_{b_1.\ldots.b_r}$ and using Lemma \ref{noeth} repeatedly.) \hb \fi \section{Equivalence of Normality and Local-Global Principle} Next we are going to show that if $k$ is a commutative ring with identity and $R$ is an associative $k$-algebra such that $R$ is finite as a left $k$-module, then the normality criterion for the elementary subgroup is equivalent to Suslin's local-global principle for the above two classical groups. (Remark: One can also consider $R$ as a right $k$-algebra.) One of the crucial ingredients in the proof is the following result, which states that the group ${\E}$ acts transitively on unimodular isotropic vectors. The precise statement is the following: \begin{de} \tn{A vector $(v_1,\ldots, v_{2n})\in R^{2n}$ is said to be unimodular if there exists another vector $(u_1,\ldots, u_{2n})\in R^{2n}$ such that $\sum_{i=1}^{2n} v_i u_i=1$. } \tn{The set of all unimodular vectors in $R^{2n}$ is denoted by ${\Um}(2n, R)$.} \end{de} \begin{tr} \label{swan} Let $R$ be a semilocal ring $($not necessarily commutative$)$ with involution and $v=(v_1,\ldots,v_{2n})^t$ be a unimodular and isotropic vector in $R^{2n}$. Then $v\in {\E}(2n, R)e_{2n}$ for $n\ge 2$, {\it i.e.}, ${\E}(2n,R)$ acts transitively on the set of isotropic vectors in ${\Um}(2n, R)$. \end{tr} \iffalse \begin{co} \label{swan2} If $R$ is a semilocal ring with involution, then one can write ${\G}(2n,R)= {\E}(2n,R){\G}(2,R)$ for $n\ge 2$.
\end{co} \fi Let us first recall some known facts before we give a proof of the theorem. \begin{de} \tn{An associative ring $R$ is said to be {\bf semilocal} if $R/\tn{rad}(R)$ is Artinian semisimple.} \end{de} We recall the following three lemmas. \begin{lm} \label{HB} \tn{(H. Bass)} Let $A$ be an associative $B$-algebra such that $A$ is finite as a left $B$-module, and let $B$ be a commutative local ring with identity. Then $A$ is semilocal. \end{lm} {\bf Proof.} Since $B$ is local, $B/\tn{rad} (B)$ is a field. This implies that $A/\tn{rad} (A)$ is a finite module over the field $B/\tn{rad}(B)$, and hence a finitely generated vector space. Thus $A/\tn{rad} (A)$ is Artinian as a $B/\tn{rad}(B)$-module, hence Artinian as an $A/\tn{rad}(A)$-module, i.e., an Artinian ring. It is known that an Artinian ring is semisimple if its radical is trivial. Thus $A/\tn{rad}(A)$ is semisimple, as $\tn{rad}(A/\tn{rad}(A))=0$. Hence $A/\tn{rad}(A)$ is Artinian semisimple, and therefore $A$ is semilocal by definition. \hb \iffalse \begin{lm} \tn{(H. Bass) (\cite{B}, Lemma 4.3.26)} \label{B} Let $R$ be a semilocal ring $($may not be commutative$)$, and let $I$ be a right ideal of $R$. Let $a$ in $R$ be such that $Ra+I=R$. Then the coset $a+I=\{a+x \,|\,x\in I\}$ contains a unit of $R$. \end{lm} {\bf Proof.} We give a proof due to R.G. Swan. We can factor out the radical and assume that $R$ is semisimple Artinian. Let $I=(aR\cap I)\oplus I'$. Replacing $I$ by $I'$ we can assume that $R=aR\oplus I$. Let $f:R\ra aR$ by $r\mapsto ar$ for $r\in R$. Therefore, we get an split exact sequence $0\lra J\lra R \stk{f}\lra aR\lra 0$, for some ideal $J$ in $R$ which gives us a map $g:R\ra J$ such that $R \stk{(f,g)}\lra aR\oplus J$ is an isomorphism. Since $aR\oplus J\cong R\cong aR\oplus I$ cancellation (using Jordon-H\"older or Krull-Schmidt) shows that $J\cong I$.
If $h:R\cong J\cong I$, then $R\stk{(f,g)}\lra aR\oplus I\cong R$ is an \ isomorphism sending $1$ to $(a,i)$ to $a+i$, where $i=h(1)$. Hence it follows that $a+i$ is a unit. \hb \vp \begin{lm} \label{swan4} Let $R$ be a semisimple Artinian ring and $I$ be a right ideal of $R$. Let $J=aR+I$. Write $J=eR$, where $e$ is an idempotent $($possible since $J$ is projective. For detail cf. \cite{BK} {\rm Theorem 4.2.7}$)$. Then there is an element $i\in I$ such that $a+i=eu$, where $u$ is a unit in $R$. \end{lm} {\bf Proof.} Since $R=J+(1-e)R=aR+I+(1-e)R$, using Lemma \ref{B} we can find a unit $u=a+i+(1-e)x$ in $R$ for some $x\in R$. Since $a+i\in eR$, it follows that $eu=a+i$. \hb \begin{co} \label{swan5} Let $R$ be a semisimple Artinian ring and $(a_1,\dots,a_n)$ be a row vector over $R$, where $n\ge 2$. Let $\Sigma a_iR=eR$, where $e$ is an idempotent. Then there exists $\eps\in {\E}(n,R)$ such that $(a_1,\dots,a_n)\eps=(0,\dots,0,e)$. \end{co} {\bf Proof.} By Lemma \ref{swan4} we can write $eu=\Sigma_{i=1}^{n-1} a_ib_i+a_n$, where $u$ is a unit. Therefore, applying an elementary transformation we can assume that $a_n=eu$. Multiplying from the left by $({\rm I}_{n-2} \perp u\perp u^{-1})$ we can make $a_n=e$. Since all $a_i$ are left multiple of $e$, further elementary transformations reduce our vector to the required form. \hb \vp The following observation will be needed to do the case $2n=4$. \begin{lm} \label{swan6} Let $R$ be a semisimple Artinian ring and $e$ be an idempotent. Let $f=1-e$, and $b$ be an element of $R$. If $bRf\subseteq eR$, then we have $b\in eR$. \end{lm} {\bf Proof.} Since $R$ is a product of simple rings, it will suffice to do the case in which $R$ is simple. If $e=1$, we are done. Otherwise $RfR$ is a non-zero two sided ideal, and hence $RfR=R$. Since $bR=bRfR\subseteq eR$, we have $b\in eR$. \hb \begin{lm} \label{swan7} Let $R$ be a semisimple Artinian ring and let $-:R\ra R$ be a $\lmd$-involution on $R$. 
Let $( x \,\, y) $ be a unimodular row of length $2n$, where $2n\ge 4$. Then there exists an element $\eps\in {\E}(2n,R)$ such that $( x \,\, y)\eps= (x' \,\, y')$, where $x_1'$ is a unit in $R$. \end{lm} {\bf Proof.} Let $x=(x_1,\ldots,x_n)$ and $b=(y_1,\ldots,y_n)$. We claim that there exists $\eps\in {\E}(2n,R)$ such that $( x \,\, y)\eps = (x' \,\, y')$, where $x'$ is a unit in $R$. Among all $( x' \,\, y')$ of this form, choose one for which the ideal $I=\Sigma x_i'R$ is maximal. Replacing the original $(x \,\, y )$ by $( x' \,\, y')$ we can assume that $I=\Sigma x_iR$ is maximal among such ideals. Write $I=Re$, where $e$ is an idempotent in $R$. By Corollary \ref{swan5} we can find an element $\eta\in {\E}(2n, R)$ such that $x\eta =(0,0,\dots,e)$. So we can modify $x$ by my elementary generators of the form $q\eps_{ij}(\star)$ or $h\eps_{ij}(\star)$ and hence we assume that $x=(0,0,\dots,e)$. We claim that $y_i\in eR$ for all $i\ge 1$. First we consider the case $2n\ge 6$. Assume $y_1\notin I$, but $y_i\in I$ for all $i\ge 2$. If we apply $q\eps_{1n}(1)$ in the quadratic case then this replaces $y_{n}$ to $y_{n}-y_1$ but not changes $e$ and $y_1$. On the other hand for the Hermitian case we do not have the generator $q\eps_{1n}(1)$. But if we apply $hm_n(1,\ldots,1)$, then it changes $y_2$ but does not changes $e$ and $b_1$. Therefore, in both the cases we can therefore assume that some $y_i$ with $i>1$ is not in $I$. (Here recall that we have put no restriction on $C$, {\it i.e.}, for us $C=R^r$). Apply $qr_{i i}(1)$ with $2\le i\le n$ in the quadratic case. This changes $x_i=0$ (for $i>1$) to $y_i$ while $x_n=e$ is preserved. The ideal generated by the entries of $x$ now contains $Re+Ry_i$, which is larger than $I$, a contradiction, as $I$ is maximal. In the Hermitian case if we apply suitable $hr_i(1,\ldots,1)$ then also we see that the ideal generated by the entries of $x$ now contains $eR+y_iR$, hence a contradiction. If $2n=4$, we can argue as follows. 
Let $f=1-e$. Let us assume that $y_1\notin I$ as above. Then by Lemma \ref{swan6}, it will follow that we can find some $s\in R$ such that $y_1sf\ne eR$. First consider the quadratic case. Applying $qr_{21}(sf)$ replaces $x_2=e$ by $c=e+y_1sf$. As $ce=e$, $I=eR\subset cR$. Also, $cf=y_1sf\in cR$ but $cf\notin I$. Hence $I \subsetneq cR$, a contradiction. We can get the similar contradiction for $y_2$ by applying $qr_{22}(sf)$. In the Hermitian case, apply $hr_1(1)$ to get the contradiction for $y_1$. Now note that in this case $r=1$, as we have assumed $r<n$. Hence we can apply $qr_{22}(sf)$ to get the contradiction. Since all $y_i$ lie in $eR$, the right ideal generated by all the entries of $(x\,\,y)$ is $eR$, but as this row vector is unimodular, we get $eR=R$, and therefore $e=1$. \hb \vp \\ {\bf Proof of Theorem \ref{swan}.} Let $J$ be the Jacobson radical of $R$. Since the left and the right Jacobson radical are same, $J$ is stable under the involution which therefore passes to $R/J$. Let $\eps$ be as in Lemma \ref{swan7} for the image $(x'\,\,y')$ of $(x\,\,y)$. By lifting $\eps$ from $R/J$ to $R$ and applying it to $(x\,\,y)$, we reduce to the case, where $x_n$ is a unit in $R$. Let $\alpha=x_n\perp x_n^{-1}$. Then applying $({\rm I}_{n-2}\perp \alpha \perp {\rm I}_{n-2} \perp \alpha^{-1})$ we can assume that $x_n=1$. Next applying $\Pi_{i=1}^{n-1} ql_{ni}(-y_i)$ and $\Pi_{i=1}^{n-1} hl_{ni}(-y_i)$ in the respective cases we get $y_1=\cdots=y_{n-1}=0$. As isotropic vector remains isotropic under elementary quadratic (Hermitian) transformation, we have $y_n+\ol{y}_n\lmd=0$, hence $ql_{11}(\ol{y}_n\ol{\lmd})$ and $hl_{11}(\ol{y}_n\ol{\lmd})$ are defined and applying it reduces $y_n$ to $0$ in both the cases. Now we want to make $x_i=0$ for $i=1,\ldots,n$. In the quadratic case it can be done by applying $\Pi_{i=1}^{n-1} h\eps_{in}(-x_i)$. Note that this transformation does not affect any $y_i$'s, as $y_i=0$. 
In the Hermitian case we can make $x_{r+1}=\cdots=x_n=0$ as before applying $\Pi_{i=r+1}^{n-1} q\eps_{in}(-x_i)$. To make $x_1=\cdots=x_r=0$ we have to recall that the set $C=R^r$, {\it i.e.}, there is no restriction on the set $C$. Hence $hr_n(-x_1,\ldots,-x_r)$ is defined and applying it we get $x_1=\cdots=x_r=0$. Also note that other $x_i$'s and $y_i$'s remain unchanged. Finally, applying $hl_{nn}(1)$ and then $hr_{nn}(-1)$ we get the required vector $(0,\ldots,0,1)$. This completes the proof.\hb \begin{tr} Let $k$ be a commutative ring with identity and $R$ an associative $R$-algebra such that $R$ is finite as a right $k$-module. Then the following are equivalent for $n\ge 3$ in the quadratic case and $n\ge r+3$ in the Hermitian case: \begin{enumerate} \item {\bf (Normality)} ${\E}(2n, R, \LMD)$ is a normal subgroup of ${\G}(2n, R, \LMD)$. \item {\bf (L-G Principle)} If $\alpha(X)\in {\G}(2n,R[X], \LMD[X])$, $\alpha(0)={\I}_n$ and $$\alpha_{\m}(X)\in {\E}(n,R_{\m}[X], \LMD_{\mf m}[X])$$ for every maximal ideal $\m \in {\M}(k)$, then $$\alpha(X)\in {\E}(2n,R[X], \LMD[X]).$$ $($Note that $R_{\m}$ denotes $S^{-1}R$, where $S = k \setminus \m$.$)$ \end{enumerate} \end{tr} {\bf Proof.} In Section 3, we have proved the Lemma \ref{key5} for any form ring with identity and shown that the local-global principle is a consequence of Lemma \ref{key5}. So, the result is true in particular if we have ${\E}(2n, R, \LMD)$ is a normal subgroup of ${\G}(2n, R, \LMD)$. \iffalse In particular, suppose ${\E}(2n, R, \LMD)$ is a normal subgroup of ${\G}(2n, R, \LMD)$. Let $\alpha={\rm I}_n+\tn{M}(v,w)$, where $v=Be_1$, and $B\in {\G}(n,R,\LMD)$. Then we can write $\alpha=B({\rm I}_n+\tn{M}(e_1, w_1))B^{-1}$, where $w_1= B^{-1} w$. Hence it is enough to show that ${\rm I}_n + \tn{M}(e_1,w_1)\in {\E}(n,R)$. Now arguing as in the proof of Lemma \ref{key5} we get the result. Now in section \S we have proved local-global principle as a consequence of Lemma \ref{key5}. 
Hence the implication follows. \fi To prove the converse we need $R$ to be finite as $k$-module, where $k$ is a commutative ring with identity ({\it i.e.}, a ring with trivial involution). Let $\alpha\in {\E}(2n,R,\LMD)$ and $\beta\in {\G}(2n,R, \LMD)$. Then $\alpha$ can be expressed as a product of matrices of the form $\vartheta_{ij}(\tn{ ring element})$ and $\vartheta_{i}(\tn{ column vector})$. Hence we can write $\beta\alpha \beta^{-1}$ as a product of the matrices of the form $({\rm I}_{2n}+\beta\, \tn{M}(\star_1,\star_2)\beta^{-1})$, with $\langle \star_1, \star_2\rangle =0$, where $\star_1$ and $\star_2$ are suitably chosen standard basis vectors. Now let $v=\beta \star_1$. Then we can write $\beta\alpha \beta^{-1}$ as a product of the matrices of the form $({\rm I}_{2n}+\beta\, \tn{M}(v,w)\beta^{-1})$, with $\langle v, w\rangle =0$ for some row vector $w$ in $R^{2n}$. We show that each $({\rm I}_{2n}+\tn{M}(v,w))\in {\E}(2n,R,\LMD)$. Let $\gamma(X)={\rm I}_{2n}+X\tn{M}(v,w)$. Then $\gamma(0)={\rm I}_{2n}$. By Lemma \ref{HB} it follows that $S^{-1}R$ is a semilocal ring, where $S=k-\m$, $\m \in {\M}(R)$. Since $v\in {\rm Um}_{2n}(A)$, using Theorem \ref{swan} we get $v\in {\E}(2n, S^{-1}R, S^{-1}\LMD) e_1$, hence $Xv\in {\E}(2n, S^{-1}R[X], S^{-1}\LMD[X]) e_1$. Therefore, applying Lemma \ref{key5} over $S^{-1}(A[X], \LMD[X])$ it follows that $$\gamma_{\m}(X)\in {\E}(2n,S^{-1}R[X],S^{-1}\LMD[X]).$$ Now applying Theorem \ref{LG}, it follows that $\gamma(X)\in {\E}(2n,R[X],\LMD[X])$. Finally, putting $X=1$ we get the result. \hb \fi \begin{lm} \tn{(H. Bass) (\cite{B}, Lemma 4.3.26)} \label{B} Let $R$ be a semilocal ring $($may not be commutative$)$, and let $I$ be a left ideal of $R$. Let $a$ in $R$ be such that $Ra+I=R$. Then the coset $a+I=\{a+x \,|\,x\in I\}$ contains a unit of $R$. \end{lm} {\bf Proof.} We give a proof due to R.G. Swan. We can factor out the radical and assume that $R$ is semisimple Artinian. Let $I=(Ra\cap I)\oplus I'$. 
Replacing $I$ by $I'$ we can assume that $R=Ra\oplus I$. Define $f:R\ra Ra$ by $r\mapsto ra$ for $r\in R$. Therefore, we get a split exact sequence $0\lra J\lra R \stk{f}\lra Ra\lra 0$, for some ideal $J$ in $R$, which gives us a map $g:R\ra J$ such that $R \stk{(f,g)}\lra Ra\oplus J$ is an isomorphism. Since $Ra\oplus J\cong R\cong Ra\oplus I$, cancellation (using Jordan-H\"older or Krull-Schmidt) shows that $J\cong I$. If $h:R\cong J\cong I$, then $R\stk{(f,g)}\lra Ra\oplus I\cong R$ is an isomorphism sending $1$ first to $(a,i)$ and then to $a+i$, where $i=h(1)$. Hence it follows that $a+i$ is a unit. \hb \vp \begin{lm} \label{swan4} Let $R$ be a semisimple Artinian ring and $I$ be a left ideal of $R$. Let $J=Ra+I$. Write $J=Re$, where $e$ is an idempotent $($possible since $J$ is projective; for details {\it cf.} \cite{BK} {\rm Theorem 4.2.7}$)$. Then there is an element $i\in I$ such that $a+i=ue$, where $u$ is a unit in $R$. \end{lm} {\bf Proof.} Since $R=J+R(1-e)=Ra+I+R(1-e)$, using Lemma \ref{B} we can find a unit $u=a+i+x(1-e)$ in $R$, for some $x\in R$. Since $a+i\in Re$, it follows that $ue=a+i$. \hb \begin{co} \label{swan5} Let $R$ be a semisimple Artinian ring and $(a_1,\dots,a_n)^t$ be a column vector over $R$, where $n\ge 2$. Let $\Sigma Ra_i=Re$, where $e$ is an idempotent. Then there exists $\eps\in {\E}_n(R)$ such that $\eps (a_1,\dots,a_n)^t=(0,\dots,0,e)^t$. \end{co} {\bf Proof.} By Lemma \ref{swan4} we can write $ue=\Sigma_{i=1}^{n-1} b_ia_i+a_n$, where $u$ is a unit. Therefore, applying an elementary transformation we can assume that $a_n=ue$. Multiplying from the left by $({\rm I}_{n-2} \perp u\perp u^{-1})$ we can make $a_n=e$. Since all the $a_i$ lie in $Re$, further elementary transformations reduce our vector to the required form. \hb \vp The following observation will be needed for the case $2n=4$. \begin{lm} \label{swan6} Let $R$ be a semisimple Artinian ring and $e$ be an idempotent. Let $f=1-e$, and let $b$ be an element of $R$.
If $fRb\subseteq Re$, then we have $b\in Re$. \end{lm} {\bf Proof.} Since $R$ is a product of simple rings, it will suffice to do the case in which $R$ is simple. If $e=1$, we are done. Otherwise $RfR$ is a non-zero two-sided ideal, and hence $RfR=R$. Since $Rb=RfRb\subseteq Re$, we have $b\in Re$. \hb \begin{lm} \label{swan7} Let $R$ be a semisimple Artinian ring and let $-:R\ra R$ be a $\lmd$-involution on $R$. Let $( x \,\, y)^t $ be a unimodular column of length $2n$, where $2n\ge 4$. Then there exists an element $\eps\in {\E}(2n,R)$ such that $\eps( x \,\, y)^t= (x' \,\, y')^t$, where $x_n'$ is a unit in $R$. \end{lm} {\bf Proof.} Let $x=(x_1,\ldots,x_n)^t$ and $y=(y_1,\ldots,y_n)^t$. We claim that there exists $\eps\in {\E}(2n,R)$ such that $\eps( x \,\, y)^t = (x' \,\, y')^t$, where $x_n'$ is a unit in $R$. Among all $( x' \,\, y')^t$ of this form, choose one for which the ideal $I=\Sigma Rx_i'$ is maximal. Replacing the original $(x \,\, y )^t$ by $( x' \,\, y')^t$ we can assume that $I=\Sigma Rx_i$ is maximal among such ideals. Write $I=Re$, where $e$ is an idempotent in $R$. By Corollary \ref{swan5} we can find an element $\eta\in {\E}_n(R)$ such that $\eta x =(0,0,\dots,e)^t$. So we can modify $x$ by elementary generators of the form $q\eps_{ij}(\star)$ or $h\eps_{ij}(\star)$, and hence we assume that $x=(0,0,\dots,e)^t$. We claim that $y_i\in Re$ for all $i\ge 1$. First we consider the case $2n\ge 6$. Assume $y_1\notin I$, but $y_i\in I$ for all $i\ge 2$. If we apply $q\eps_{1n}(1)$ in the quadratic case, then this replaces $y_{n}$ by $y_{n}-y_1$ but changes neither $e$ nor $y_1$. On the other hand, for the Hermitian case we do not have the generator $q\eps_{1n}(1)$. But if we apply $hm_n(1,\ldots,1)$, then it changes $y_2$ but changes neither $e$ nor $y_1$. In both cases we can therefore assume that some $y_i$ with $i>1$ is not in $I$. (Here recall that we have put no restriction on $C$, {\it i.e.}, for us $C=R^r$.)
Apply $qr_{i i}(1)$ with $2\le i\le n$ in the quadratic case. This changes $x_i=0$ (for $i>1$) to $y_i$ while $x_n=e$ is preserved. The ideal generated by the entries of $x$ now contains $Re+Ry_i$, which is larger than $I$, a contradiction, as $I$ is maximal. In the Hermitian case, applying a suitable $hr_i(1,\ldots,1)$ shows similarly that the ideal generated by the entries of $x$ now contains $Re+Ry_i$, again a contradiction. If $2n=4$, we can argue as follows. Let $f=1-e$. Let us assume that $y_1\notin I$ as above. Then by Lemma \ref{swan6} it will follow that we can find some $s\in R$ such that $fsy_1\notin Re$. First consider the quadratic case. Applying $qr_{21}(fs)$ replaces $x_2=e$ by $c=e+fsy_1$. As $ec=e$, $I=Re\subset Rc$. Also, $fc=fsy_1\in Rc$ but $fc\notin I$. Hence $I \subsetneq Rc$, a contradiction. We can get a similar contradiction for $y_2$ by applying $qr_{22}(fs)$. In the Hermitian case, apply $hr_1(1)$ to get the contradiction for $y_1$. Note that in this case $r=1$, as we have assumed $r<n$. Hence we can apply $qr_{22}(fs)$ to get the contradiction. Since all $y_i$ lie in $Re$, the left ideal generated by all the entries of $(x\,\,y)^t$ is $Re$, but as this column vector is unimodular, we get $Re=R$, and therefore $e=1$. \hb \vp \\ {\bf Proof of Theorem \ref{swan}.} Let $J$ be the Jacobson radical of $R$. Since the left and the right Jacobson radicals are the same, $J$ is stable under the involution, which therefore passes to $R/J$. Let $\eps$ be as in Lemma \ref{swan7} for the image $(x'\,\,y')^t$ of $(x\,\,y)^t$. By lifting $\eps$ from $R/J$ to $R$ and applying it to $(x\,\,y)^t$, we reduce to the case where $x_n$ is a unit in $R$. Let $\alpha=x_n\perp x_n^{-1}$. Then applying $({\rm I}_{n-2}\perp \alpha \perp {\rm I}_{n-2} \perp \alpha^{-1})$ we can assume that $x_n=1$. Next, applying $\Pi_{i=1}^{n-1} ql_{ni}(-y_i)$ and $\Pi_{i=1}^{n-1} hl_{ni}(-y_i)$ in the respective cases, we get $y_1=\cdots=y_{n-1}=0$.
As an isotropic vector remains isotropic under elementary quadratic (Hermitian) transformations, we have $y_n+\lambda\ol{y}_n=0$; hence $ql_{11}(\ol{\lmd}\ol{y}_n)$ and $hl_{11}(\ol{\lmd}\ol{y}_n)$ are defined, and applying them reduces $y_n$ to $0$ in both cases. Now we want to make $x_i=0$ for $i=1,\ldots,n$. In the quadratic case it can be done by applying $\Pi_{i=1}^{n-1} h\eps_{in}(-x_i)$. Note that this transformation does not affect any of the $y_i$'s, as $y_i=0$. In the Hermitian case we can make $x_{r+1}=\cdots=x_n=0$ as before by applying $\Pi_{i=r+1}^{n-1} q\eps_{in}(-x_i)$. To make $x_1=\cdots=x_r=0$ we have to recall that the set $C=R^r$, {\it i.e.}, there is no restriction on the set $C$. Hence $hr_n(-x_1,\ldots,-x_r)$ is defined, and applying it we get $x_1=\cdots=x_r=0$. Also note that the other $x_i$'s and $y_i$'s remain unchanged. Finally, applying $hl_{nn}(1)$ and then $hr_{nn}(-1)$ we get the required vector $(0,\ldots,0,1)$. This completes the proof.\hb \begin{tr} \label{N-LG} Let $k$ be a commutative ring with identity and $R$ an associative $k$-algebra such that $R$ is finite as a left $k$-module. Then the following are equivalent for $n\ge 3$ in the quadratic case and $n\ge r+3$ in the Hermitian case: \begin{enumerate} \item {\bf (Normality)} ${\E}(2n, R, \LMD)$ is a normal subgroup of ${\G}(2n, R, \LMD)$. \item {\bf (L-G Principle)} If $\alpha(X)\in {\G}(2n,R[X], \LMD[X])$, $\alpha(0)={\I}_{2n}$ and $$\alpha_{\m}(X)\in {\E}(2n,R_{\m}[X], \LMD_{\mf m}[X])$$ for every maximal ideal $\m \in {\M}(k)$, then $$\alpha(X)\in {\E}(2n,R[X], \LMD[X]).$$ $($Note that $R_{\m}$ denotes $S^{-1}R$, where $S = k \setminus \m$.$)$ \end{enumerate} \end{tr} {\bf Proof.} In Section 3 we proved Lemma \ref{key5} for any form ring with identity and showed that the local-global principle is a consequence of Lemma \ref{key5}. In particular, the local-global principle holds whenever ${\E}(2n, R, \LMD)$ is a normal subgroup of ${\G}(2n, R, \LMD)$.
To prove the converse we need $R$ to be finite as a $k$-module, where $k$ is a commutative ring with identity ({\it i.e.}, a ring with trivial involution). Let $\alpha\in {\E}(2n,R,\LMD)$ and $\beta\in {\G}(2n,R, \LMD)$. Then $\alpha$ can be expressed as a product of matrices of the form $\vartheta_{ij}(\tn{ring element})$ and $\vartheta_{i}(\tn{column vector})$. Hence we can write $\beta\alpha \beta^{-1}$ as a product of matrices of the form $({\rm I}_{2n}+\beta\, \tn{M}(\star_1,\star_2)\beta^{-1})$, with $\langle \star_1, \star_2\rangle =0$, where $\star_1$ and $\star_2$ are suitably chosen standard basis vectors. Now let $v=\beta \star_1$. Then we can write $\beta\alpha \beta^{-1}$ as a product of matrices of the form $({\rm I}_{2n}+\beta\, \tn{M}(v,w)\beta^{-1})$, with $\langle v, w\rangle =0$ for some row vector $w$ in $R^{2n}$. We show that each $({\rm I}_{2n}+\tn{M}(v,w))\in {\E}(2n,R,\LMD)$. Let $\gamma(X)={\rm I}_{2n}+X\tn{M}(v,w)$. Then $\gamma(0)={\rm I}_{2n}$. By Lemma \ref{HB} it follows that $S^{-1}R$ is a semilocal ring, where $S=k\setminus\m$, $\m \in {\M}(k)$. Since $v\in {\rm Um}(2n, R)$, using Theorem \ref{swan} we get $$v\in {\E}(2n, S^{-1}R, S^{-1}\LMD) e_1,$$ hence $Xv\in {\E}(2n, S^{-1}R[X], S^{-1}\LMD[X]) e_1$. Therefore, applying Lemma \ref{key5} over $S^{-1}(R[X], \LMD[X])$ it follows that $$\gamma_{\m}(X)\in {\E}(2n,S^{-1}R[X],S^{-1}\LMD[X]).$$ Now applying Theorem \ref{LG}, it follows that $\gamma(X)\in {\E}(2n,R[X],\LMD[X])$. Finally, putting $X=1$ we get the result. \hb \section{Nilpotent property for ${\k}$ of Hermitian groups} We devote this section to the study of the nilpotency of unstable ${\k}$-groups. The literature in this direction can be found in the work of A. Bak, N. Vavilov and R. Hazrat. Throughout this section we assume that $R$ is a commutative ring with identity ({\it i.e.}, we consider the trivial involution) and that $n\ge r+3$. Following is the statement of the theorem.
\begin{tr} \label{nil} The quotient group $\frac{{\SH}(2n, R, a_1,\ldots, a_r)}{{\EH}(2n, R, a_1,\ldots, a_r)}$ is nilpotent for $n\ge r+3$. The nilpotency class is at most \tn{max}$(1, d+3-n)$, where $d=\dim \,(R)$. \end{tr} The proof follows by imitating the proof of Theorem 4.1 in \cite{BBR}. \begin{lm} \label{sol3a} Let $I$ be an ideal contained in the Jacobson radical $J(R)$ of $R$, and let $\beta\in {\SH}(2n, R, \LMD)$ with $\beta\equiv {\rm I}_{2n}$ modulo $I$. Then there exists $\theta\in {\EH}(2n, R, a_1,\!\!\ldots\!\!, a_r)$ such that $\beta\theta$ is the diagonal matrix $[d_1,d_2,\dots,d_{2n}]$, where each $d_i$ is a unit in $R$ with $d_i\equiv 1$ modulo $I$, and $\theta$ is a product of elementary generators, each congruent to the identity modulo $I$. \end{lm} {\bf Proof.} The diagonal elements are units. Let $\beta=(\beta_{ij})$, where $d_i=\beta_{ii}=1+s_{ii}$ with $s_{ii}\in I\subset J(R)$, for $i=1,\ldots, 2n$, and $\beta_{ij}\in I\subset J(R)$ for $i\ne j$. First we make all the $(2n,j)$-th and $(i,2n)$-th entries zero, for $i=2,\ldots,n$ and $j=2,\ldots,n$.
Then repeating the above process we can reduce the size of $\beta$. Since we are considering the trivial involution, we take $$\alpha \!\! =\!\!\underset{j=1}{\overset{n}\Pi}hl_{nj}(-\beta_{2nj}d_{j}^{-1})\!\! \underset{n+1\le j\le n+r}{\underset{n+r+1\le i\le 2n-1}\Pi} \!\!\! hm_i(-\zeta_jd_j^{-1})\!\!\!\!\! \underset{n+r+1\le j\le 2n-1}{\underset{r+1\le i\le n-1} \Pi} \!\!\! h\eps_{in}(\beta_{\rho(n)\rho(i)}d_{j}^{-1}),$$ where $j=i-r$ and $\zeta_j=(0,\ldots,0,\beta_{2n j})$, and \\ $$\gamma=\underset{r+1\le i\le 2n-1}{\underset{r+1\le j\le 2n-1}\Pi h\eps_{nj}(a_{i-r}(\star)d_{2n}^{-1})} hr_n(\eta),$$ where $a_t=0$ for $t>r$, and $\eta=(\beta_{1 2n}d_{2n}^{-1},\beta_{2 2n}d_{2n}^{-1},\ldots,\beta_{n 2n}d_{2n}^{-1})$. Then the last column and last row of $\gamma\beta\alpha$ become $(0,\dots,0, d_{2n})^t$, where $d_{2n}$ is a unit in $R$ and $d_{2n}\equiv 1$ modulo $I$. Repeating the process we can modify $\beta$ to the required form. \hb \begin{pr} $(${\it cf.} Lemma 7, \cite{P}$)$ \label{sol5} Let $(R, \LMD)$ be a commutative form ring ({\it i.e.}, with trivial involution), let $s$ be a non-nilpotent element of $R$, and let $a\in R$. Then for $l\ge 2$ $$\left[\vartheta_{ij}\left(\frac{a}{s} \right), {\SH}(2n, s^lR)\right] \subset {\EH}(2n, R).$$ More generally, $\left[\eps, {\SH}(2n, s^lR)\right]\subset {\EH}(2n, R)$, for $l\gg 0$ and $\eps\in {\EH}(2n, R_s)$. \end{pr} {\bf Proof of Theorem \ref{nil}:} Recall the following notion. \vp Let $G$ be a group. Define $Z^0=G$, $Z^1=[G,G]$ and $Z^i=[G,Z^{i-1}]$.
Then $G$ is said to be nilpotent if $Z^r=\{e\}$ for some $r>0$, where $e$ denotes the identity element of $G$. Since the map ${\EH}(2n, R, a_1,\ldots, a_r)\ra {\EH}(2n, R/I, \ol{a}_1,\ldots, \ol{a}_r)$ is surjective, we may and do assume that $R$ is a reduced ring. Note that if $n\ge d+3$, then the group ${\SH}(2n, R, a_1,\ldots, a_r)/{\EH}(2n, R, a_1,\ldots, a_r)={\KH}_1(R, a_1,\ldots, a_r)$, which is abelian and hence nilpotent. So we consider the case $n\le d+3$. Let us first fix $n$. We prove the theorem by induction on $d=\dim R$. Let $$G={\SH}(2n, R, a_1,\ldots, a_r)/{\EH}(2n, R, a_1,\ldots, a_r).$$ Let $m=d+3-n$ and $\alpha=[\beta, \gamma]$, for some $\beta\in G$ and $\gamma\in Z^{m-1}$. Clearly, the result is true for $d=0$. Let $\widetilde{\beta}$ be the pre-image of $\beta$ under the map $${\SH}(2n, R, a_1,\ldots, a_r)\ra {\SH}(2n, R, a_1,\ldots, a_r)/{\EH}(2n, R, a_1,\ldots, a_r).$$ Arguing as in Lemma \ref{sol3a} for the local case, it follows that we can choose a non-zero-divisor $s$ in $R$ such that $\widetilde{\beta}_s\in {\EH}(2n, R_s, a_1,\ldots, a_r)$. Consider $\ol{G}$, where bar denotes reduction modulo $s^l$, for some $l\gg 0$. By the induction hypothesis $\ol{\gamma}=\{1\}$ in $\ol{{\SH}(2n, R)}$, where bar denotes reduction modulo the subgroup ${\EH}(2n, R)$. Since ${\EH}(2n, R)$ is a normal subgroup of ${\SH}(2n, R)$ for $n\ge r+3$, by modifying $\gamma$ we may assume that $\widetilde{\gamma}\in {\SH}(2n, R,s^lR, a_1,\ldots, a_r)$, where $\widetilde{\gamma}$ is the pre-image of $\gamma$ in ${\SH}(2n, R,a_1,\ldots, a_r)$. Now by Proposition \ref{sol5} it follows that $[\widetilde{\beta},\widetilde{\gamma}]\in {\EH}(2n, R,a_1,\ldots, a_r)$. Hence $\alpha=\{1\}$ in $G$.
\hb \begin{re} \tn{In (\cite{BRK}, Theorem 3.1) it has been proved that the question of normality of the elementary subgroup and the local-global principle are equivalent for the elementary subgroups of the linear, symplectic and orthogonal groups over an almost commutative ring with identity. There is a gap in the proof of the statement $(3)\Ra (2)$ of Theorem 3.1 in \cite{BRK} (for an almost commutative ring). The fact that over a non-commutative semilocal ring the elementary subgroups of the classical groups act transitively on the set of unimodular and isotropic ({\it i.e.}, $\langle\,v,v\rangle=0$) vectors of length $n\ge 2$ in the linear case, and $n=2r\ge 4$ in the non-linear cases, has been used in the proof, but it is not mentioned anywhere in the article. This was pointed out by Professor R.G. Swan, who provided us with a proof of the above result. } \end{re} {\bf Acknowledgment:} My sincere thanks to Professor R.G. Swan for giving me his permission to reproduce his proof of Theorem \ref{swan} (he gave a proof for the symplectic and orthogonal groups, as noted above). I thank the DAE Institutes in India and ISI Kolkata for allowing me to use their infrastructure facilities at various times. I am very grateful to Prof. A. Bak and Prof. Nikolai Vavilov for their kind efforts to correct the manuscript, and I thank the University of Bielefeld, NBHM, and IISER Pune for their financial support of my visits. I thank Professors T.Y. Lam, D.S. Nagaraj, Ravi Rao, B. Sury and Nikolai Vavilov for many useful suggestions and editorial inputs. I would like to give credit to Mr. Gaurab Tripathi for correcting a few mathematical misprints. {\small
\section*{Introduction} Since their invention in the 1960s \cite{Townes58, Maiman60}, lasers have been the backbone of modern optics, playing fundamental roles in optical sciences and technologies. For a coherent light source, wavelength tunability is one of the most important specifications, crucially underlying many applications including optical communications \cite{Agrawal12}, spectroscopy \cite{Hodgkinson12, Vahala16}, frequency metrology \cite{Udem02}, and sensing \cite{Vollmer08}, to name a few. However, lasing wavebands are naturally limited by gain media. Nonlinear optical parametric processes, such as second-harmonic generation (SHG) and difference-frequency generation (DFG), with the flexible engineering of the phase-matching condition, are probably the most prominent approaches to achieving tunable coherent radiation at optical frequencies that can hardly be obtained by lasers directly \cite{Byer75, Dunn99, Breunig11}. Lithium niobate (LN), with outstanding nonlinear and linear optical properties, is widely employed for this application, where SHG/DFG has been extensively studied over the past decades, particularly in periodically-poled lithium niobate (PPLN) waveguides \cite{Byer92, Pierce95}. In general, a type-0 configuration is employed to achieve a high conversion efficiency, and temperature tuning is a common technique to vary the operation wavelength of a PPLN waveguide. However, the pump wavelength tunability of type-0 SHG is fairly limited \cite{Byer92, Pierce95, Fejer97}, mainly due to the relatively small wavelength dependence of the thermo-optic coefficient (although DFG can exhibit a large wavelength tunability with a third wave involved). In fact, LN exhibits a remarkable thermo-optic birefringence \cite{Rendina05, Luo17OL, Luiten18}, significantly greater than that of most other optical media \cite{Tropf95}.
This characteristic can be utilized to greatly increase the wavelength tunability of SHG in PPLN by employing a type-I configuration \cite{Byer92, Pierce95, Cha02}, which, however, inevitably and seriously sacrifices the conversion efficiency, owing to the significantly weaker nonlinearity compared with a type-0 process. Over the past decade, a variety of integrated material platforms with $\chi^{(2)}$ nonlinearity have been developed for efficient nonlinear optical parametric processes \cite{Harris06, Vuckovic09, Lipson11, Duchesne11, Tang16Optica, Watts17, Bres17}, where the tight confinement of optical modes is able to significantly enhance nonlinear optical interactions. Recent advances in the integrated LN platform have greatly inspired the study of nonlinear optics in LN nanophotonic structures \cite{Hu09, Bowers16, Cheng16, Loncar17OE, Fathpour17APL, Luo17OE, Loncar17NC, Buse17}, showing great potential for nonlinear wavelength conversion with even higher efficiencies compared with PPLN and other integrated platforms. This provides an opportunity to achieve a large wavelength tunability by using the type-I configuration while maintaining a high conversion efficiency. Here we demonstrate highly-tunable efficient on-chip SHG in a LN nanophotonic waveguide. We achieve SHG through type-I inter-modal phase matching between orthogonal polarizations, and by utilizing the strong thermo-optic birefringence of LN, we demonstrate temperature tuning of the SHG wavelength, with a measured tuning slope of 0.84 nm/K for a telecom pump, almost one order of magnitude higher than that of type-0 SHG in LN \cite{Byer92, Pierce95, Gui10}. Meanwhile, our device is designed to exhibit a large mode overlap, resulting in a theoretical normalized SHG efficiency of $22.2\%~{\rm W^{-1} cm^{-2}}$, which enables us to experimentally demonstrate a conversion efficiency of 4.7$\%~ \rm W^{-1}$ in a waveguide only 8~mm long.
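The quoted figures can be related by simple arithmetic. The following back-of-the-envelope sketch is our own illustration, not the paper's analysis: it assumes the ideal quadratic length scaling $\eta_{\rm device}=\eta_{\rm norm}L^2$ of undepleted-pump, phase-matched SHG, using only the numbers quoted above.

```python
# Hypothetical sanity check (not from the paper's analysis): for lossless,
# phase-matched SHG with an undepleted pump, the device conversion
# efficiency scales quadratically with waveguide length:
#     eta_device = eta_norm * L**2

def shg_device_efficiency(eta_norm_pct_per_W_cm2: float, length_cm: float) -> float:
    """Return the ideal device conversion efficiency in %/W."""
    return eta_norm_pct_per_W_cm2 * length_cm**2

# Numbers quoted in the text: 22.2 %/(W cm^2) normalized efficiency,
# 8 mm = 0.8 cm waveguide length.
eta_theory = shg_device_efficiency(22.2, 0.8)
print(f"ideal device efficiency: {eta_theory:.1f} %/W")  # 14.2 %/W
```

The ideal estimate of about 14.2 %/W sits above the measured 4.7 %/W, as one would expect once propagation loss and phase-matching non-uniformity, which this sketch ignores, are taken into account.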
Our device is of great promise for efficient on-chip wavelength conversion to produce highly-tunable coherent visible light, which is essential for various integrated photonic applications such as particle and chemical sensing in aqueous environments \cite{Xiao14, Suntivich16, Lu16NC}, while taking advantage of the mature telecom laser technology. \begin{figure*}[t] \centering \includegraphics[width=2\columnwidth]{Fig1.pdf} \caption{ (a) Schematic of our Z-cut LN waveguide. FEM simulations of (b) mode profiles, and (c) effective indices as functions of wavelength, of TE$_{0,\rm{tele}}$ in the telecom and TM$_{j,\rm{vis}} (j=0,1,2)$ in the visible, where $w_t=1200$ nm, $h_1=460$ nm, $h_2=100$ nm, and $\theta=75^\circ$, at 20$^\circ$C. Discontinuity in the curve of TM$_{1,\rm{vis}}$ is due to its coupling with TE$_{2,\rm{vis}}$ (not shown). Zoom-in of the wavelength-dependent effective indices of TE$_{0,\rm{tele}}$ and TM$_{2,\rm{vis}}$ at (d) 20$^\circ$C, and (e) 70$^\circ$C, with black arrows indicating phase matching. (f) Simulated phase-matched pump wavelength $\lambda_{\rm{PM}}$ as a function of temperature. In (c)-(f), the FEM simulations take into account the temperature and wavelength dependence of the material refractive indices, for both ordinary and extraordinary light \cite{Zelmon97, Rendina05}.} \label{Fig1} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1.8\columnwidth]{Fig2.pdf} \caption{ (a) Experimental setup for device characterization and SHG measurement. Scanning electron microscope pictures showing the waveguide (b) top view, (c) facet, and (d) sidewall. (e) Fiber-to-fiber loss as a function of the differential length $L_d$, relative to that of $L_d=0$, for waveguides schematically illustrated in the inset, where $L_f$ and $R$ are kept as 8 mm and 100 $\mu$m, respectively. 
(f) Telecom-band transmission spectrum of the TE polarization for a straight waveguide with a length of $L\simeq8$ mm, whose schematic is shown in the inset. VOA: variable optical attenuator; LF: lensed fiber; WDM: wavelength division multiplexer. } \label{Fig2} \end{figure*} \section*{Waveguide Design} LN exhibits a significant thermo-optic birefringence, with a value of $|\frac{dn_e}{dT}-\frac{dn_o}{dT}| \sim 4\times10^{-5}\rm{K}^{-1}$ at room temperature \cite{Rendina05, Luo17OL, Luiten18}, where $\frac{dn_e}{dT}$ and $\frac{dn_o}{dT}$ are the thermo-optic coefficients for the extraordinary and ordinary light, respectively. As a result, if SHG occurs in a LN waveguide between optical waves with orthogonal polarizations, a temperature change of the device results in a considerable variation of the material birefringence, which in turn significantly shifts the phase-matched wavelength of the SHG process. In particular, we can maximize this effect by using a Z-cut LN waveguide, which supports ordinarily and extraordinarily polarized optical modes with high polarization purity [see Fig.~\ref{Fig1}(a) and (b)] \cite{Luo17OL}. We design the geometry of the Z-cut LN waveguide [see Fig.~\ref{Fig1}(a)] such that the fundamental quasi-transverse-electric mode (TE$_{0,\rm{tele}}$) in the telecom band is phase matched with the third-order quasi-transverse-magnetic mode (TM$_{2,\rm{vis}}$) in the visible. Figures \ref{Fig1}(c) and \ref{Fig1}(d) show the effective refractive indices of the two modes, simulated by the finite-element method (FEM), which gives a phase-matched pump wavelength of $\lambda_{\rm{PM}}=1540$~nm at a room temperature of $20^\circ$C. Of particular interest is that LN exhibits a significant thermo-optic effect for extraordinary light ($\frac{dn_{e,vis}}{dT} \approx 4\times10^{-5}\rm{K}^{-1}$) while it is negligible for ordinary light ($\frac{dn_{o,tele}}{dT} \approx 0$) around room temperature \cite{Rendina05}.
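A back-of-envelope estimate of the resulting tuning slope (an illustrative estimate on our part, separate from the FEM model) follows from implicitly differentiating the phase-matching condition $\Delta n(\lambda_{\rm{PM}},T)=0$, where $\Delta n(\lambda,T)$ denotes the difference between the effective index of the TM$_{2,\rm{vis}}$ mode at $\lambda/2$ and that of the TE$_{0,\rm{tele}}$ mode at $\lambda$:
\begin{eqnarray}
\frac{d\lambda_{\rm{PM}}}{dT} = -\left.\frac{\partial \Delta n/\partial T}{\partial \Delta n/\partial \lambda}\right|_{\Delta n = 0}.
\end{eqnarray}
A strong thermo-optic birefringence (large numerator) together with a modest effective-index dispersion near the crossing (small denominator) thus yields a large tuning slope.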
As a result, when the device temperature increases, the effective refractive index of the TE$_{0,\rm{tele}}$ mode remains nearly intact while that of the TM$_{2,\rm{vis}}$ mode increases considerably. Consequently, the phase-matched wavelength moves dramatically towards longer wavelengths. Figure \ref{Fig1}(e) shows an example, where $\lambda_{\rm{PM}}$ shifts to 1574~nm at a temperature of $70^\circ$C. Detailed analysis shows that the phase-matched wavelength depends almost linearly on the device temperature, as shown clearly in Fig.~\ref{Fig1}(f), with a significant tuning slope of 0.69~nm/K. Phase matching of the two modes indicates potentially efficient SHG in the designed waveguide. For a lossless waveguide without pump depletion, the SHG efficiency is given by the following expression \cite{BoydBook, Byer75} \begin{equation} \Gamma = \frac{P_2}{P_1^2 }=\eta L^2 \left[ \frac{\sin(\Delta L/2)}{\Delta L/2} \right]^2, \label{Gamma} \end{equation} where $P_1$ and $P_2$ are the optical powers input at the fundamental wavelength $\lambda$ and produced at the second harmonic, respectively, $L$ is the waveguide length, and $\Delta \equiv \frac{4\pi}{\lambda}(n_2-n_1)$ represents the phase mismatch, where $n_1$ and $n_2$ are the effective refractive indices of the TE$_{0,\rm{tele}}$ mode at the fundamental wavelength and the TM$_{2,\rm{vis}}$ mode at the second harmonic, respectively. When the phase-matching condition is satisfied ($\Delta = 0$), Eq.~(\ref{Gamma}) reduces to the maximum SHG efficiency $\Gamma_0 = \eta L^2$, which depends on the normalized conversion efficiency given as \begin{equation} \eta = \frac{8\pi^2}{\epsilon_0 c n_1^2 n_2 \lambda^2} \frac{\zeta^2 d_{\rm eff}^2}{A_{\rm eff}}, \label{eta} \end{equation} where $\epsilon_0$ and $c$ are the permittivity and light speed in vacuum, respectively, and $d_{\rm eff}$ is the effective nonlinear susceptibility.
In Eq.~(\ref{eta}), $A_{\rm eff} \equiv (A_{1}^2 A_{2})^{\frac{1}{3}}$ is the effective mode area where $A_{i} \equiv \frac{ (\int_{\rm all} |\vec{E}_i|^2 dxdz)^3 }{|\int_{\chi^{(2)}} |\vec{E}_i|^2 \vec{E}_i dxdz|^2}$,~($i=1,2$), and $\zeta$ represents the spatial mode overlap factor between the fundamental and second-harmonic modes, given as \begin{equation} \zeta = \frac{ \int_{\chi^{(2)}} (E_{1x}^*)^2 E_{2y} dxdz}{|\int_{\chi^{(2)}}|\vec{E}_1|^2 \vec{E}_1 dxdz|^{\frac{2}{3}} |\int_{\chi^{(2)}} |\vec{E}_2|^2 \vec{E}_2 dxdz|^{\frac{1}{3}}}, \label{zeta} \end{equation} where $\int_{\chi^{(2)}}$ and $\int_{\rm all}$ denote two-dimensional integration over the LN material and all space, respectively, $E_{1x}$ is the $x$-component of $\vec{E}_1(x,z)$, the electric field of the fundamental mode TE$_{0,\rm{tele}}$, and $E_{2y}$ is the $y$-component of $\vec{E}_2(x,z)$, the electric field of the second-harmonic mode TM$_{2,\rm{vis}}$. Equations (\ref{Gamma})-(\ref{zeta}) show that the SHG efficiency depends essentially on the spatial mode overlap $\zeta$, the effective mode area $A_{\rm eff}$, and the effective nonlinear susceptibility $d_{\rm eff}$. Numerical simulation shows that our waveguide exhibits a small $A_{\rm eff} = 1.46~\mu \rm m^2$. In particular, our designed waveguide exhibits a large spatial mode overlap, with $\zeta = 0.32$. As a result, the waveguide exhibits a normalized conversion efficiency as high as $\eta=22.2\%~{\rm W^{-1} cm^{-2}}$. This value is comparable to that of type-0 SHG in typical PPLN \cite{Fejer02, Gui10} and LN nanophotonic waveguides \cite{Loncar17OE} that utilize the maximum component of the $\chi^{(2)}$ nonlinearity ($d_{\rm eff}=d_{33}=27~{\rm pm/V}$), although a type-I configuration is employed here ($d_{\rm eff}=d_{31}=4.3~{\rm pm/V}$ \cite{Roberts92}). In contrast to those type-0 devices, our waveguide is expected to exhibit a significantly larger thermal tuning slope, as we will experimentally demonstrate in the following.
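As a quick numerical cross-check of Eq.~(\ref{Gamma}), the peak efficiency and the phase-mismatch response can be evaluated for the design values above; a minimal sketch (the 8~mm length is the one used in the experiments):

```python
import numpy as np

# Design values quoted in the text
eta = 0.222    # normalized conversion efficiency, W^-1 cm^-2  (22.2 % W^-1 cm^-2)
L = 0.8        # waveguide length, cm (8 mm)

# Peak efficiency at perfect phase matching: Gamma_0 = eta * L^2
gamma_0 = eta * L ** 2
print(f"Gamma_0 = {100 * gamma_0:.1f} % W^-1")  # -> Gamma_0 = 14.2 % W^-1

# Phase-mismatch response Gamma(Delta) = Gamma_0 * [sin(Delta L/2)/(Delta L/2)]^2;
# np.sinc(x) = sin(pi x)/(pi x), hence the argument is rescaled by 1/pi
delta = np.linspace(-30.0, 30.0, 601)            # phase mismatch Delta, cm^-1
gamma = gamma_0 * np.sinc(delta * L / (2 * np.pi)) ** 2
```

The lossless upper bound of $14.2\%~{\rm W^{-1}}$ is the value against which the measured on-chip efficiency of $4.7\%~{\rm W^{-1}}$ is compared in the experimental section.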
\section*{Experimental Results} \begin{figure*}[t] \centering \includegraphics[width=2\columnwidth]{Fig3.pdf} \caption{ SHG from a straight LN nanophotonic waveguide with a length of 8 mm. (a) Conversion efficiency spectrum at T=18.7$^\circ$C, with the center wavelength of the sinc$^2$-function aligned to the measured peak. (b) SHG spectrum at a fixed pump wavelength of 1559.06 nm at T=18.7$^\circ$C. (c) Second-harmonic power as a function of pump power, with experimental data compared with a quadratic fitting, exhibiting a conversion efficiency of 4.7$\%~{\rm W^{-1}}$. } \label{Fig3} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1.9\columnwidth]{Fig4.pdf} \caption{ Thermal tuning of SHG. (a) Conversion efficiency spectra at different temperatures. (b) Measured phase-matched pump wavelength $\lambda_{\rm{PM}}$ as a function of temperature. } \label{Fig4} \end{figure*} To confirm our simulation results, we fabricated waveguides on a Z-cut LN-on-insulator wafer [see Fig.~\ref{Fig2}(b)], where the LN thin film has a thickness of $\sim$560 nm, sitting on 2-$\mu$m-thick buried oxide. Figure \ref{Fig2}(c) shows the cross-section of a fabricated waveguide, whose geometry is very close to our design [see Fig.~\ref{Fig1}(a)]. In particular, as presented in Fig.~\ref{Fig2}(d), the waveguide sidewall is very smooth, implying a low propagation loss. In order to quantify the propagation and coupling losses, we fabricated waveguides with the same cross-section but different lengths, as schematically shown in the inset of Fig.~\ref{Fig2}(e). Since these waveguides share the same coupling and bending losses, by measuring their transmission as a function of the differential length, we can extract the propagation loss.
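This extraction amounts to a cut-back measurement: the shared coupling and bending losses appear as a constant offset, and the propagation loss is the slope of a linear fit of insertion loss versus differential length. A minimal sketch with hypothetical data (the data values below are illustrative; the actual measured slope reported in the text is 0.54 dB/cm):

```python
import numpy as np

# Hypothetical insertion-loss data (dB) versus differential length L_d (cm);
# only the slope is physically meaningful, the offset is the shared fixed loss
L_d = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # cm
loss = np.array([10.0, 10.28, 10.53, 10.82, 11.08])  # dB, fiber-to-fiber

# Linear fit: slope = propagation loss (dB/cm), intercept = coupling + bend losses
slope, intercept = np.polyfit(L_d, loss, 1)
print(f"propagation loss ~ {slope:.2f} dB/cm")  # -> propagation loss ~ 0.54 dB/cm
```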
Figure \ref{Fig2}(e) shows the measurement results, where the propagation loss of straight waveguides for the TE$_{0,\rm{tele}}$ mode is measured to be 0.54 dB/cm, a small value that represents the state-of-the-art quality of LN nanophotonic waveguides. Together with the overall fiber-to-fiber transmission of a straight waveguide [for example, see Fig.~\ref{Fig2}(f)], we obtained a fiber-to-chip coupling loss of about 5 dB/facet. To demonstrate SHG, we employed a straight waveguide with a length of about 8~mm. We launched a telecom-band continuous-wave (CW) laser into the device, with the setup shown in Fig.~\ref{Fig2}(a). By scanning the laser, we were able to measure the efficiency spectrum of SHG. One example is presented in Fig.~\ref{Fig3}(a), which shows a phase-matched pump wavelength of 1559~nm at a temperature of 18.7$^\circ$C. The main lobe of the recorded efficiency spectrum agrees well with the theoretical expectation from the function ${\rm sinc}^2(\Delta L/2)=[\frac{\sin(\Delta L/2)}{\Delta L/2}]^2$. The strong side lobes are likely caused by reflection at facets that are not perfectly smooth. By fixing the pump wavelength at 1559.06 nm, which exhibits the peak conversion efficiency, we observed coherent radiation from its second harmonic at 779.53 nm [see Fig.~\ref{Fig3}(b)]. The second harmonic shows a quadratic power dependence on the pump, which agrees very well with the theoretical expectation. Fitting the experimental data, we obtained an on-chip conversion efficiency of 4.7\%~$\rm W^{-1}$ [see Fig.~\ref{Fig3}(c)]. This value is smaller than the theoretical upper limit given by $\eta L^2$ ($=14.2\%~\rm W^{-1}$), mainly due to the non-zero propagation losses at both wavelengths. To show the spectral tuning capability of our device, we varied the device temperature from 18.7$^\circ$C to 90.0$^\circ$C and measured the SHG efficiency spectra. Figure \ref{Fig4}(a) presents the recorded spectra at different temperatures.
It shows clearly that the SHG spectrum shifts towards longer wavelengths when the device temperature increases. The spectral shape also changes with temperature, potentially due to the temperature-dependent facet reflection. By mapping the phase-matched pump wavelength as a function of temperature, we obtained Fig.~\ref{Fig4}(b), showing an experimentally measured tuning slope of $\frac{d\lambda_{\rm{PM}}}{dT}=0.84$ nm/K, almost one order of magnitude larger than that achieved by type-0 SHG in LN \cite{Byer92, Pierce95,Gui10}. The experimental results agree well with our simulations [see Fig.~\ref{Fig1}(f)]. The slightly larger experimental value of the tuning slope likely arises from pyroelectric \cite{Bernal13} and thermal expansion \cite{Smith69} effects in the waveguide cross-section, which were not taken into account in the simulations. \section*{Conclusion} In conclusion, we have demonstrated highly-tunable efficient SHG in a LN nanophotonic waveguide. The LN waveguide exhibits a high optical quality, with a propagation loss as low as 0.54 dB/cm in the telecom band, which represents the state-of-the-art quality of LN nanophotonic waveguides reported to date \cite{Loncar17Optica, Peruzzo18, Buse17}. In particular, we took advantage of the strong thermo-optic birefringence of LN to achieve thermal tuning of the SHG wavelength, with a tuning slope of 0.84 nm/K for a telecom-band pump, significantly higher than that offered by type-0 SHG in LN. At the same time, thanks to the tight mode confinement and a large spatial mode overlap, our waveguide exhibits a high theoretical normalized conversion efficiency of $22.2\%~{\rm W^{-1} cm^{-2}}$ even for type-I inter-modal phase matching, which is comparable to that of type-0 SHG in typical PPLN and LN nanophotonic waveguides utilizing the largest nonlinear term $d_{33}$. Our waveguide design enabled us to experimentally record an SHG efficiency of 4.7$\%~\rm W^{-1}$ in a waveguide only 8~mm long.
Our device shows great promise for on-chip wavelength conversion that takes advantage of the mature telecom-band laser technology to produce highly-tunable coherent light in the visible. \section*{Appendices} \subsection{Device Fabrication} Starting from a Z-cut LN-on-insulator wafer by NANOLN, we used electron-beam lithography with ZEP520A as the resist for device patterning, and Argon ion milling for etching. Next, in order to remove the remaining resist and material residuals, we treated the chip with oxygen plasma followed by diluted hydrofluoric acid. Finally, we diced the chip and polished the facets for light coupling. \subsection{Experimental Setup} Pump light from a CW tunable telecom-band laser was coupled via a lensed fiber into the device chip, which was placed on top of a thermoelectric cooler that controlled the temperature. At the waveguide output, the pump light was collected together with the frequency-doubled light by a second lensed fiber. After being separated from its second harmonic by a 780/1550 wavelength division multiplexer, the telecom pump light was directed to an InGaAs detector for characterization, while the generated visible light was sent to a spectrometer for detection. A fiber polarization controller was used for optimal coupling from the input lensed fiber to the desired waveguide mode, and variable optical attenuators were employed to study the power dependence of SHG. The spectrometer was cooled by liquid nitrogen for high sensitivity. \subsection{SHG Spectrum Measurement} After aligning the lensed fibers to the waveguide for optimal coupling, we scanned the telecom-band pump laser, with the spectrometer recording the generated second-harmonic light during the whole laser scan. This process was repeated at different temperatures, which were controlled by the thermoelectric cooler under the device chip, to obtain the temperature dependence of the SHG spectrum.
\section*{Funding} This work was supported in part by the National Science Foundation under Grant No.~ECCS-1641099 and ECCS-1509749, and by the Defense Advanced Research Projects Agency SCOUT program through Grant No.~W31P4Q-15-1-0007 from the U.S. Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the U.S. Army, or the U.S. Government. \section*{Acknowledgment} The authors thank Chengyu Liu at Cornell University for helpful discussions on fabrication. This work was performed in part at the Cornell NanoScale Facility, a member of the National Nanotechnology Coordinated Infrastructure (National Science Foundation, ECCS-1542081), and at the Cornell Center for Materials Research (National Science Foundation, DMR-1719875).
\section{Introduction} The system of charged quantum particles interacting with a constant magnetic field continues to attract intensive study and is without a doubt one of the most investigated quantum systems, mainly motivated by condensed matter physics and quantum optics. A review article devoted to this quantum system and its related kinds of coherent states (CSs) was recently elaborated by Dodonov (see \cite{dodonov} and the complete reference list therein). The concept of what is now called coherent states has been of great interest to the scientific community since the work of Schr\"{o}dinger in 1926 \cite{schroedinger} on the quantum harmonic oscillator (HO), where he introduced a specific quantum state whose dynamical behavior is closest to that of the classical HO. The conditions any state must fulfill to be coherent were formulated by Klauder as follows: continuity in the complex label, normalization, nonorthogonality, resolution of the unity operator with a unique positive weight function for the integration measure, temporal stability, and action identity \cite{klauderCS}. More details on CSs and their various generalizations can be found in the literature \cite{klauder-skagerstam}-\cite{ali-antoine-gazeau}; the list is of course not exhaustive. In his study \cite{landau}, Landau found that the electronic motion in a static uniform magnetic field can be mapped in two dimensions onto a harmonic oscillator, with an energy structure of equidistant discrete levels separated by $\hbar \omega_c$ ($\omega_c$ being the cyclotron frequency), each level being highly degenerate. Such a system, more often named the Landau model, also provides a natural description for other well-known significant phenomena, the so-called integer and fractional quantum Hall effects.
In recent years, in the quest to understand the main features of the fractional quantum Hall effect (FQHE) \cite{pasquier, prange-girvin}, much effort has been devoted in the literature to finding a wave function which minimizes the energy of a two-dimensional system of electrons subjected to a strong constant magnetic field applied perpendicularly to the sample, independently of the electron density. In \cite{bagarello}, a system of electrons, essentially a two-dimensional crystal, has been considered, and the introduced wave function has been modified to lower the energy in order to explain the experimental data. From an appropriate quantization of the classical variables of the system Hamiltonian, Bagarello {\it et al.} (see \cite{bagarello}, \cite{antoine-bagarello} and references therein) have modified the single-electron wave function in view of studying localization properties. A similar quantization has also been used to investigate the Aharonov-Bohm effect (\cite{harms-micu, omer-jellal} and references therein), emphasizing the fact that it is not the electric and magnetic fields but the electromagnetic potentials which are the fundamental quantities in quantum mechanics. In a previous work \cite{ab1}, a connection has been established between the quantum Hall effect and vector coherent states (VCSs) \cite{thirulogasanthar-ali, ali-englis-gazeau} by applying the various construction methods developed in the literature. In the same way, the motion of an electron in a noncommutative $xy$ plane, in a constant magnetic field background coupled with a harmonic potential, was examined, with the relevant VCSs constructed and discussed \cite{aremuajnmp}. The Barut-Girardello CSs have been built for Landau levels of a gas of spinless charged particles subject to a perpendicular magnetic field and confined in a harmonic potential, and their thermodynamical and statistical properties have been investigated \cite{aremuaromp}.
See also \cite{dodonov} and references quoted therein. Recently \cite{aremuaastp}, from a matrix (operator) formulation of the Landau problem and the corresponding Hilbert space, an analysis of various VCSs extended to diagonal matrix domains has been performed on the basis of Landau levels. The construction of CSs for a continuous spectrum was first proposed for the Gazeau-Klauder CSs in \cite{gazeau-klauder} and later in \cite{inomata, klauder2, popov}. In the present work, we follow the method developed in \cite{gazeau-klauder}, considering Landau levels, to build various classes of CSs as in \cite{ab1, gazeau-novaes, gouba}, arising from a physical Hamiltonian describing a charged particle in an electromagnetic field, by introducing additional parameters useful for handling discrete and continuous spectra of the Hamiltonian. The eigenvalue problem is presented and the quantum Hamiltonian spectra are provided for the two possible orientations of the magnetic field, taking into account the infinite degeneracies of the Landau levels. The CSs are constructed, with relevant properties discussed, both for mixed continuous and discrete spectra and for a purely discrete spectrum. The paper is organized as follows. In Section \ref{sec2}, we revisit the model of an electron moving on a plane, where the eigenvalue problems are explicitly set and solved. The position and momentum operators, satisfying canonical commutation relations, established for the considered Hamiltonians are also defined. Section \ref{sec3} is devoted to the construction of CSs for the quantum Hamiltonian possessing both continuous and discrete spectra, following the method developed in \cite{gazeau-klauder, ab1}. Concluding remarks are given in Section \ref{sec4}. \section{Electron moving on a plane revisited}\label{sec2} In this section, we revisit the system of an electron moving on a plane as in \cite{omer-jellal}, where we consider different scenarios for the symmetric gauge and the scalar potential.
Consider an electron moving in the $xy$ plane in the uniform external electric field $\overrightarrow{E} =-\overrightarrow{\nabla}\Phi(x,y)$ and the uniform external magnetic field $\overrightarrow{B}$ perpendicular to the plane, described by the Hamiltonian \begin{eqnarray}{\label{es0}} H = \frac{1}{2m}\left(\overrightarrow{p} + \frac{e}{c}\overrightarrow{A}\right)^{2} - e\Phi. \end{eqnarray} \subsection{Case of the symmetric gauge} We first choose the symmetric gauge \begin{eqnarray}{\label{es01}} \overrightarrow{A} = \left(\frac{B}{2}y, -\frac{B}{2}x \right). \end{eqnarray} Experimentally, the electric field $\overrightarrow{E}$ is oriented along one of the two possible directions of the plane. Suppose the scalar potential is defined as \begin{eqnarray}{\label{es02}} \Phi(x,y) = -Ey. \end{eqnarray} Substituting the relations (\ref{es01}) and (\ref{es02}) in (\ref{es0}), the corresponding classical Hamiltonian, denoted by $H_{1}$, reads \begin{eqnarray}{\label{es1}} H_{1}(x,y,p_x,p_y) = \frac{1}{2m}\left[\left(p_{x} + \frac{eB}{2c}y\right)^{2} + \left(p_{y} - \frac{eB}{2c}x\right)^{2} \right] + eEy. \end{eqnarray} A canonical quantization of this system is obtained by promoting the classical variables $x,y,p_x,p_y$ to the operators $X,Y,P_x,P_y$, which satisfy the nonvanishing canonical commutation relations \begin{eqnarray} [X, P_x] = i\hbar = [Y, P_y]. \end{eqnarray} The Hamiltonian operator is derived from (\ref{es1}) as follows \begin{eqnarray} \hat H_1(X,Y,P_x,P_y) = \frac{1}{2m}\left[\left(P_{x} + \frac{eB}{2c}Y\right)^{2} + \left(P_{y} - \frac{eB}{2c}X\right)^{2}\right] + eEY.
\end{eqnarray} In order to solve the eigenvalue problem \begin{eqnarray} \hat H_1\Psi = \mathcal{E}\Psi, \end{eqnarray} it is convenient to perform the change of variables \begin{eqnarray}{\label{complxop}} Z = X + i Y, \;\;\; P_{z} = \frac{1}{2}(P_{x} - i P_{y}), \end{eqnarray} satisfying the nonvanishing commutation relations \begin{eqnarray} [Z, P_z] = i\hbar = [\bar Z, P_{\bar z}], \end{eqnarray} and to define two sets of annihilation and creation operators $b, b^{\dag}$ and $d, d^{\dag}$ given by \begin{eqnarray}{\label{es2}} b = 2P_{\bar{z}} - i \frac{eB}{2c}Z + \lambda, \qquad b^{\dag} = 2P_{z} + i \frac{eB}{2c}\bar{Z} + \lambda, \end{eqnarray} \begin{eqnarray}{\label{es3}} d = 2P_{\bar{z}} + i \frac{eB}{2c}Z, \qquad d^{\dag} = 2P_{z} - i \frac{eB}{2c}\bar{Z}, \end{eqnarray} with $\lambda = \frac{mcE}{B}$. These two sets of operators commute with each other and satisfy the commutation relations \begin{eqnarray}{\label{es4}} [b, b^{\dag}] = 2 m \hbar \omega_{c} {1 \! \! {\rm I}}, \quad [d^{\dag}, d] = 2 m \hbar \omega_{c} {1 \! \! {\rm I}}, \end{eqnarray} where $\omega_{c} = \frac{eB}{mc}$ is known as the cyclotron frequency and ${1 \! \! {\rm I}}$ is the unit operator. The Hamiltonian $\hat H_1$ can then be re-expressed as follows: \begin{eqnarray}\label{es7} \hat H_1 = \frac{1}{4m}\left(b^{\dag}b + bb^{\dag}\right) - \frac{\lambda}{2m}\left(d^{\dag} + d\right) - \frac{\lambda^{2}}{2m}. \end{eqnarray} In order to compute the eigenvalues $\mathcal{E}$ and eigenvectors $\Psi$, we split $\hat H_1$ in (\ref{es7}) into two commuting parts in the following manner: \begin{eqnarray} \hat H_{1} = \hat H_{1_{OSC}} - \hat T_1, \end{eqnarray} where $\hat H_{1_{OSC}}$ denotes the harmonic oscillator part \begin{eqnarray} \hat H_{1_{OSC}} = \frac{1}{4m}(b^{\dag}b + bb^{\dag}), \end{eqnarray} while the part linear in $d$ and $d^{\dag}$ is given by \begin{eqnarray} \label{tfunc1} \hat T_1 = \frac{\lambda}{2m}(d^{\dag} + d) + \frac{\lambda^{2}}{2m}.
\end{eqnarray} The annihilation and creation operators $b$ and $b^{\dag}$ can also be rewritten as follows: \begin{eqnarray} b = \sqrt{2m\hbar \omega_{c}}b',\quad b^{\dag} = \sqrt{2m\hbar \omega_{c}}b'^{\dag},\quad [b',b'^{\dag}] = 1 \! \! {\rm I}, \end{eqnarray} with \begin{eqnarray}{\label{es11}} b' = \sqrt{\frac{2m\omega_{c}}{\hbar}}\left(\frac{ P_{\bar{z}}}{m\omega_{c}} - i \frac{Z}{4} + \frac{\lambda}{2m\omega_{c}}\right), \quad b'^{\dag} = \sqrt{\frac{2m\omega_{c}}{\hbar}}\left(\frac{ P_{z}}{m\omega_{c}} + i \frac{\bar{Z}}{4} + \frac{\lambda}{2m\omega_{c}}\right). \end{eqnarray} Then, one has \begin{eqnarray} b|0\rangle = 0, \quad b|n\rangle = \sqrt{n}\sqrt{2m \omega_{c} \hbar}|n-1\rangle, \quad b^{\dag}|n\rangle = \sqrt{n+1}\sqrt{2m \omega_{c} \hbar}|n+1\rangle \end{eqnarray} leading to \begin{eqnarray} |n+1\rangle = \frac{1}{\sqrt{2m \omega_{c} \hbar (n+1)}}b^{\dag}|n\rangle, \end{eqnarray} and, recursively, to \begin{eqnarray}{\label{es13}} \Phi_n \equiv |n\rangle = \frac{1}{\sqrt{(2m \omega_{c} \hbar)^{n} n !}}(b^{\dag})^{n}|0\rangle. \end{eqnarray} The harmonic oscillator Hamiltonian $\hat H_{1_{OSC}}$ reduces to \begin{eqnarray}\label{oscil1} \hat H_{1_{OSC}} = \frac{\hbar \omega_{c}}{2}(2 N' + 1 \! \! {\rm I}), \; N'= b'^\dag b' \end{eqnarray} with eigenvalues $\mathcal E_{n_{1_{OSC}}}$ given by \begin{eqnarray} \mathcal E_{n_{1_{OSC}}} = \hbar \omega_{c}(n + \frac{1}{2}), \quad n = 0, 1, 2, \ldots, \end{eqnarray} corresponding to the eigenvectors defined by (\ref{es13}). The eigenvalue equation $\hat T_1\phi = \mathcal E \phi$ can be reduced to \begin{eqnarray} \left(-i \frac{\partial}{\partial x} - \frac{m \omega_{c}}{2 \hbar}y \right)\phi - \frac{m \mathcal E}{\hbar \lambda}\phi = 0.
\end{eqnarray} Setting $\alpha = \frac{m \mathcal E}{\hbar \lambda}$, it becomes \begin{eqnarray} -i \frac{\partial}{\partial x}\phi = \left(\frac{m \omega_{c}}{2 \hbar}y + \alpha\right)\phi, \end{eqnarray} whose solution is readily found to be \begin{eqnarray}{\label{es15}} \phi_{\alpha} \equiv \phi_{\alpha}(x,y) = e^{i (\alpha x + \frac{m \omega_{c}}{2 \hbar}xy)}, \;\;\; \alpha \in \mathbb R. \end{eqnarray} Then, the eigenvalues of the operator $\hat T_1$, corresponding to the eigenfunctions (\ref{es15}), are given by \begin{eqnarray}{\label{es16}} \mathcal E_{\alpha} = \frac{\hbar \lambda}{m}\alpha + \frac{\lambda^{2}}{2m}, \;\;\; \alpha \in \mathbb R, \end{eqnarray} indicating that this spectrum, labeled by $\alpha$, is continuous. Therefore, to sum up, the eigenvectors and the energy spectrum of the Hamiltonian $\hat H_{1}$ are determined by the following formulas: \begin{eqnarray}{\label{es17}} \Psi_{(n, \alpha)} &=& \Phi_{n} \otimes \phi_{\alpha} \equiv |n,\alpha\rangle, \cr \cr \mathcal E_{(n,\alpha)} &=& \frac{\hbar \omega_{c}}{2}(2n + 1) - \frac{\hbar \lambda}{m}\alpha - \frac{\lambda^{2}}{2m}, \quad n= 0, 1, 2, \ldots \end{eqnarray} \subsection{Case of the second possible symmetric gauge} We now consider the symmetric gauge \begin{eqnarray}{\label{ei18}} \overrightarrow{A} = \left(-\frac{B}{2}y, \frac{B}{2}x \right), \end{eqnarray} with the scalar potential given by \begin{eqnarray} \Phi(x,y) = -E x. \end{eqnarray} The classical Hamiltonian $H$ in equation (\ref{es0}) becomes \begin{eqnarray}{\label{ei19}} H_{2}(x,y,p_x,p_y) = \frac{1}{2m}\left[\left(p_{x} - \frac{eB}{2c}y\right)^{2} + \left(p_{y} + \frac{eB}{2c}x\right)^{2}\right] + eEx.
\end{eqnarray} By means of canonical quantization and proceeding as in the previous subsection, we define the two sets of annihilation and creation operators \begin{eqnarray}{\label{ei20}} \mathfrak b^{\dag} = -2i P_{\bar{z}} + \frac{eB}{2c}Z + \lambda, \qquad \mathfrak b = 2 i P_{z} + \frac{eB}{2c}\bar{Z} + \lambda, \end{eqnarray} \begin{eqnarray}{\label{ei21}} \mathfrak d = 2i P_{z} - \frac{eB}{2c}\bar{Z},\qquad \mathfrak d^{\dag} = -2i P_{\bar{z}} - \frac{eB}{2c}Z, \end{eqnarray} with $\lambda$ as defined previously. They also commute with each other and satisfy the commutation relations (\ref{es4}). The corresponding Hamiltonian operator $\hat H_{2}$ can then be written as \begin{eqnarray}{\label{ei24}} \hat H_{2} = \frac{1}{4m}(\mathfrak b^{\dag}\mathfrak b + \mathfrak b\mathfrak b^{\dag}) - \frac{\lambda}{2m}(\mathfrak d^{\dag} + \mathfrak d) - \frac{\lambda^{2}}{2m}, \end{eqnarray} where the relation \begin{eqnarray}{\label{ei27}} \mathfrak d^{\dag} + \mathfrak d = 2P_{y} - \frac{eB}{c}X \end{eqnarray} holds. Here, the harmonic oscillator part is given by \begin{eqnarray} \hat H_{2_{OSC}} = \frac{1}{4m}(\mathfrak b^{\dag}\mathfrak b + \mathfrak b\mathfrak b^{\dag}) \end{eqnarray} and the linear part by \begin{eqnarray} \hat T_{2} = \frac{\lambda}{2m}(\mathfrak d^{\dag} + \mathfrak d) + \frac{\lambda^{2}}{2m}. \end{eqnarray} The annihilation and creation operators $\mathfrak b$ and $\mathfrak b^{\dag}$ become here \begin{eqnarray} \mathfrak b = \sqrt{2m\hbar \omega_{c}}\mathfrak b',\quad \mathfrak b^{\dag} = \sqrt{2m\hbar \omega_{c}}\mathfrak b'^{\dag}, \quad [\mathfrak b',\mathfrak b'^{\dag}] = 1 \! \!
{\rm I}, \end{eqnarray} with \begin{eqnarray}{\label{boson1}} \mathfrak b' = \sqrt{\frac{2m\omega_{c}}{\hbar}}\left(\frac{i P_{z}}{m\omega_{c}} + \frac{\bar{Z}}{4} + \frac{\lambda}{2m\omega_{c}}\right), \quad \mathfrak b'^{\dag} = \sqrt{\frac{2m\omega_{c}}{\hbar}}\left(-\frac{i P_{\bar{z}}}{m\omega_{c}} + \frac{Z}{4} + \frac{\lambda}{2m\omega_{c}}\right). \end{eqnarray} From (\ref{ei27}), it follows that \begin{eqnarray}{\label{ei28}} \frac{\lambda}{2m}(\mathfrak d^{\dag} + \mathfrak d) = \frac{\hbar \lambda}{m}\left(\frac{P_{y}}{\hbar} - \frac{1}{2}\frac{m \omega_{c}}{\hbar} X \right) = \frac{\hbar \lambda}{m} \left(-i\frac{\partial}{\partial y} - \frac{m \omega_{c}}{2 \hbar}X \right). \end{eqnarray} Then, the eigenvalue equation $\hat T_2\phi = \mathcal E \phi$ is equivalent in this case to \begin{eqnarray} \frac{\hbar \lambda}{m}\left(-i \frac{\partial}{\partial y} - \frac{m \omega_{c}}{2 \hbar}X \right)\phi = \mathcal E \phi, \end{eqnarray} which leads to \begin{eqnarray} \left(-i \frac{\partial}{\partial y} - \frac{m \omega_{c}}{2 \hbar}x \right)\phi - \frac{m \mathcal E}{\hbar \lambda}\phi = 0. \end{eqnarray} Taking again $\alpha = \frac{m \mathcal E}{\hbar \lambda}$, we obtain the equation \begin{eqnarray} -i \frac{\partial}{\partial y}\phi = \left(\frac{m \omega_{c}}{2 \hbar}x + \alpha\right)\phi, \end{eqnarray} which can be solved to give the eigenfunctions \begin{eqnarray}{\label{vep2}} \phi_{\alpha} \equiv \phi_{\alpha}(x,y) = e^{i (\alpha y + \frac{m \omega_{c}}{2 \hbar}xy)}, \;\;\; \alpha \in \mathbb R, \end{eqnarray} of the operator $\hat T_2$, corresponding to eigenvalues expressed as in (\ref{es16}).
Therefore, the eigenvectors and eigenvalues of the Hamiltonian $\hat H_{2}$, as previously determined for $\hat H_{1}$, are obtained as \begin{eqnarray}{\label{eig003}} \Psi_{(l, \alpha)} &=& \Phi_{l} \otimes \phi_{\alpha} \equiv |l,\alpha\rangle, \cr \cr \mathcal E_{(l,\alpha)} &=& \frac{\hbar \omega_{c}}{2}(2l + 1) - \frac{\hbar\lambda}{m}\alpha - \frac{\lambda^{2}}{2m} \;\;\; \;\; l= 0, 1, 2, \ldots \end{eqnarray} Introduce the position and momentum operators obtained from the annihilation and creation operators (\ref{es2}) and (\ref{ei20}) as \begin{eqnarray} \hat Q_1 &=& \frac{1}{2\sqrt{m\omega_c \hbar}}(b^{\dag}+ b), \quad \hat P_1 = \frac{i}{2\sqrt{m\omega_c \hbar}}(b^{\dag}- b),\cr \hat Q_2 &=& \frac{1}{2\sqrt{m\omega_c \hbar}}(\mathfrak b^{\dag}+ \mathfrak b), \quad \hat P_2 = \frac{i}{2\sqrt{m\omega_c \hbar}}(\mathfrak b^{\dag}- \mathfrak b), \end{eqnarray} respectively, where the following commutation relations \begin{eqnarray} [b, \mathfrak{b}^{\dag}] &=& 0 = [b^{\dag}, \mathfrak{b}], \quad [b, \mathfrak{b}] = 0 = [b^{\dag}, \mathfrak{b}^{\dag}], \cr [\hat Q_{1},\hat P_{2}] &=& 0= [\hat Q_{2}, \hat P_{1}],\quad [\hat Q_{1}, \hat Q_{2}] = 0 = [\hat P_{1}, \hat P_{2}] \end{eqnarray} are satisfied. Then, we respectively have in the gauges $\overrightarrow{A} = \left(\frac{B}{2}y, -\frac{B}{2}x \right)$ and $\overrightarrow{A} = \left(-\frac{B}{2}y, \frac{B}{2}x \right)$ \begin{eqnarray}\label{hacom00} \hat H_{1_{OSC}} = \frac{\hbar \omega_{c}}{2}[{Q}^{2}_{1} + {P}^{2}_{1}], \quad \hat H_{2_{OSC}} = \frac{\hbar \omega_{c}}{2}[{Q}^{2}_{2} + {P}^{2}_{2}], \quad [\hat H_{1_{OSC}}, \hat H_{2_{OSC}}] = 0. 
\end{eqnarray} Thus, from (\ref{es17}), (\ref{eig003}) and (\ref{hacom00}), the eigenvectors denoted $|\Psi_{nl}\rangle := |n, l\rangle = |n\rangle \otimes |l\rangle$ of $\hat H_{1_{OSC}} $ can be so chosen that they are also the eigenvectors of $\hat H_{2_{OSC}}$ as follows: \begin{eqnarray}{\label{equa37}} \hat H_{1_{OSC}}|\Psi_{nl}\rangle = \hbar \omega_{c} \left(n + \frac{1}{2}\right) |\Psi_{nl}\rangle, \,\, \hat H_{2_{OSC}}|\Psi_{nl}\rangle = \hbar \omega_{c} \left(l + \frac{1}{2}\right) |\Psi_{nl}\rangle, \; n, l = 0,1,2,\dots, \infty \end{eqnarray} so that $\hat H_{2_{OSC}}$ lifts the degeneracy of $\hat H_{1_{OSC}}$ and vice versa. From (\ref{es16}), consider the shifted eigenvalues \begin{eqnarray}\label{contvapshif} \mathcal E'_{\alpha} := \mathcal E_{\alpha} - \frac{\lambda^{2}}{2m} = \frac{\hbar \lambda}{m}\alpha, \end{eqnarray} where the states $|\epsilon_{\alpha}\rangle$ are delta-normalized and form the orthonormal basis $\{|\epsilon_{\alpha}\rangle, \alpha \in \mathbb R \}$. They satisfy the eigenvalue equation \begin{eqnarray}\label{teignefunc} \left(\hat T_1 - \frac{\lambda^{2}}{2m} I_{\mathfrak H_{C}}\right)|\epsilon_{\alpha}\rangle = \mathcal E'_{\alpha}|\epsilon_{\alpha}\rangle\;, \end{eqnarray} which is the same equation as for the operator $\hat T_2$. \section{Construction of coherent states}\label{sec3} In this section, CSs are constructed, considering the two possible orientations of the magnetic field as in \cite{ab1} as well as additional parameters, originating from discrete and continuous aspects of the Hamiltonian spectrum in line with \cite{gazeau-klauder}. For comparison, we first replace the original Hamiltonian operators by their corresponding shifted counterparts, as done in \cite{gazeau-klauder}. Then, we investigate the full operators and analyze the results.
\subsection{Case of the shifted quantum Hamiltonian}\label{firstconst} Let $\mathfrak{H}_{D+C}:=\mathfrak{H}_D \oplus \mathfrak{H}_C$ be the Hilbert space associated to the operator $\mathcal{H}_{D} \oplus \mathcal{H}_{C}$, where $\mathfrak{H}_D$ and $\mathfrak{H}_C$ are associated to discrete and continuous spectra, respectively. Let us consider the discrete shifted Hamiltonian ${\mathcal H}_{D} := {H}_{1_{osc}} - \frac{\hbar \omega_{c}}{2} 1 \! \! {\rm I}_{\mathfrak H_{D}}$ and the continuous shifted Hamiltonian ${\mathcal H}_{C} := T_{1} - \frac{\lambda^{2}}{2m} 1 \! \! {\rm I}_{\mathfrak H_{C}}$, where $1 \! \! {\rm I}_{\mathfrak H_{D}}$ and $1 \! \! {\rm I}_{\mathfrak H_{C}}$ denote the identity operators on $\mathfrak{H}_D$ and $\mathfrak{H}_C$, respectively. Let $\mathfrak{H}_D$ be spanned by the eigenvectors $|\Psi_{nl}\rangle \equiv |n,l\rangle$ of $H_{1_{OSC}}$ and $H_{2_{OSC}}$ provided by (\ref{equa37}). Besides, let $\mathfrak{H}_C$ be the Hilbert space associated to the continuous spectrum spanned by the eigenvectors of the operator $T_1$ denoted $|\epsilon_{\alpha}\rangle$ in equation (\ref{teignefunc}). The shifted Hamiltonian $\left({H}_{1_{osc}} - \frac{\hbar \omega_{c}}{2} 1 \! \! {\rm I}_{\mathfrak H_{D}} \right)- \left(T_{1} - \frac{\lambda^{2}}{2m} 1 \! \! {\rm I}_{\mathfrak H_{C}}\right)$ possesses a spectrum which is discrete and degenerate according to (\ref{es17}); the Landau levels are infinitely degenerate and given by $\{\mathcal E'_{n_{osc}} = \hbar \omega_{c} n, n = 0, 1, 2, \dots\}$ while the continuous spectrum is furnished by $\{ \mathcal E'_{\alpha}, \alpha \in \mathbb R\}$.
So, from (\ref{tfunc1}) and (\ref{oscil1}), the positive eigenvalues are \begin{eqnarray}\label{tfunc00} \mathcal E'_{n,\alpha} = \mathcal E'_{n} - \mathcal E'_{\alpha} = \hbar \omega_{c} \left(n - \frac{\lambda}{m\omega_{c}}\alpha \right) = \hbar \omega_{c} \left(n - \epsilon_{\alpha}\right), \; \epsilon_{\alpha} = \frac{\lambda}{m\omega_{c}}\alpha, \end{eqnarray} such that, for all $n \in \mathbb N^{*}$, \, $\alpha \leq \frac{m \omega_{c}}{\lambda}$. For the continuous spectrum, one also requires the condition $\mathcal E'_{\alpha} = -\hbar \omega_{c} \epsilon_{\alpha} \geq 0$, which implies $\alpha \leq 0$. Therefore, the energy positivity condition should be: $\alpha \leq 0$. Provided the positivity of the eigenvalues, as required \cite{ab1} for the operator $T_{1}$ (respectively $T_{2}$), the CSs related to ${\mathcal H}_{D} \oplus {\mathcal H}_{C}$ are given by the {\it unnormalized} states \cite{gazeau-klauder} \begin{eqnarray}{\label{el1}} |J,\gamma;J',\gamma';l;K,\theta;\beta\rangle &=& f(K,\theta)|J,\gamma;J',\gamma';l\rangle + e^{-i \beta}g(J, \gamma,J',\gamma') |K, \theta\rangle \cr &=& f(K,\theta)\left[\mathcal N(J) \mathcal N(J')\right]^{-1/2}J'^{l/2}e^{i l \gamma'}\sum^{\infty}_{n=0}\frac{J^{n/2}e^{-i n \gamma}}{\sqrt{n !l !}}|\Psi_{nl}\rangle \cr && + e^{-i \beta}g(J,\gamma,J',\gamma')\mathcal N_{\rho}(K)^{-1/2}\int^{\infty}_{0}\frac{K^{\epsilon^{-}_{\alpha}/2}e^{i \epsilon_{\alpha}\theta}}{\sqrt{\rho(\epsilon^{-}_{\alpha})}} |\epsilon^{-}_{\alpha}\rangle d\epsilon^{-}_{\alpha}, \end{eqnarray} with $\epsilon^{-}_{\alpha} := - \epsilon_{\alpha} \geq 0$. The labeling parameters are chosen such that: $ 0\leq J,J',K \leq \infty,\; -\infty < \gamma, \gamma', \theta < \infty $ and $0 \leq \beta < 2\pi$.
$f$ and $g$ are scalar functions, and the normalization constants are determined as follows: $|J,\gamma;J',\gamma';l\rangle \in \mathfrak{H}_D$ with \begin{eqnarray}{\label{el6}} \sum^{\infty}_{l=0}\langle J,\gamma;J',\gamma';l|J,\gamma;J',\gamma';l\rangle = \frac{1}{\mathcal N(J)}\sum^{\infty}_{n=0}\frac{J^{n}}{n !}\frac{1}{\mathcal N(J')} \sum^{\infty}_{l=0}\frac{J'^{l}}{l !} = 1. \end{eqnarray} Besides, $|K,\theta\rangle \in \mathfrak{H}_C$ and \begin{eqnarray}{\label{el7}} \langle K,\theta|K,\theta\rangle &=& 1 \Rightarrow \mathcal N_{\rho}(K)^{-1}\int^{\infty}_{0} \frac{K^{\epsilon^{-}_{\alpha}}}{\rho(\epsilon^{-}_{\alpha})}d\epsilon^{-}_{\alpha} \langle \epsilon^{-}_{\alpha}|\epsilon^{-}_{\alpha}\rangle = \mathcal N_{\rho}(K)^{-1}\int^{\infty}_{0}\frac{K^{\epsilon^{-}_{\alpha}}}{\rho(\epsilon^{-}_{\alpha})}d\epsilon^{-}_{\alpha} \cr \cr && \Rightarrow \mathcal N_{\rho}(K) = \int^{\infty}_{0}\frac{K^{\epsilon^{-}_{\alpha}}}{\rho(\epsilon^{-}_{\alpha})} d\epsilon^{-}_{\alpha}. \end{eqnarray} The continuity of the combined CSs follows from the continuity of the separate states and of the functions $f$ and $g$, which is assumed.
Indeed, from the definition, we have \begin{eqnarray} &&||\,\, |J,\gamma;J',\gamma';l;K,\theta;\beta\rangle- |\tilde J,\tilde \gamma;\tilde{J'}, \tilde{\gamma'};l;\tilde K, \tilde \theta;\beta\rangle\, \, ||^2 \cr &=& |f(K,\theta)|^2 \langle J,\gamma;J',\gamma';l|J,\gamma;J',\gamma';l \rangle + |g(J,\gamma;J',\gamma')|^2 \langle K,\theta|K,\theta \rangle \cr &&+|f(\tilde K,\tilde \theta)|^2 \langle \tilde J,\tilde \gamma;J',\tilde \gamma';l|\tilde J,\tilde \gamma;\tilde J',\tilde \gamma';l \rangle + |g(J,\gamma;J',\gamma')|^2 \langle \tilde K,\tilde \theta|\tilde K,\tilde \theta \rangle\cr &&+ f(K,\theta)f(\tilde K,\tilde \theta)^{*} \langle \tilde J,\tilde \gamma;\tilde J',\tilde \gamma';l |J,\gamma;J',\gamma';l \rangle + f(K,\theta)^{*}f(\tilde K,\tilde \theta) \langle J,\gamma;J',\gamma';l|\tilde J,\tilde \gamma;\tilde J',\tilde \gamma';l \rangle\cr &&+g(J,\gamma;J',\gamma')g(\tilde J,\tilde \gamma;\tilde J',\tilde \gamma')^{*}\langle \tilde K,\tilde \theta|K,\theta \rangle + g(J,\gamma;J',\gamma')^{*}g(\tilde J,\tilde \gamma;\tilde J',\tilde \gamma')\langle K,\theta|\tilde K,\tilde \theta\rangle \end{eqnarray} such that \begin{eqnarray} \lim_{(J,\gamma;J',\gamma';K,\theta) \rightarrow (\tilde J,\tilde \gamma;\tilde J',\tilde \gamma';\tilde K,\tilde \theta)}||\, \, |J,\gamma;J',\gamma';l;K,\theta;\beta\rangle - |\tilde J,\tilde \gamma;\tilde{J'}, \tilde{\gamma'};l;\tilde K, \tilde \theta;\beta\rangle \,\, ||^2 =0. \end{eqnarray} Now, let us investigate the resolution of the identity or the completeness relation which is expressed in terms of the projectors onto the states $|J,\gamma;J',\gamma';l;K,\theta;\beta\rangle$. 
\begin{pro}\label{prop1} The CSs (\ref{el1}) satisfy, on $\mathfrak H_{D+C}$, the resolution of the identity \begin{eqnarray}{\label{el3}} &&\int^{\infty}_{0}\int^{\infty}_{0}\int^{\infty}_{0}\int_{\mathbb R} \int_{\mathbb R}\int_{\mathbb R}\int^{2\pi}_{0}|J,\gamma;J',\gamma';l;K,\theta;\beta\rangle \langle J,\gamma;J',\gamma';l;K,\theta;\beta| \cr && d\mu_{B}(\gamma)d\mu_{B}(\gamma')\frac{d\theta}{2\pi}\frac{d\beta}{2\pi}\mathcal N(J) \mathcal N(J')\mathcal N_{\rho}(K)d\nu(J)d\nu(J')d\lambda(K) = 1 \! \! {\rm I}_{{\mathfrak H}^l_{D}} + 1 \! \! {\rm I}_{\mathfrak H_{C}} \end{eqnarray} where $1 \! \! {\rm I}_{\mathfrak H^l_D}, 1 \! \! {\rm I}_{\mathfrak H^n_D}$ are the identity operators on the subspaces $\mathfrak H^n_D, \mathfrak H^l_D$ of $\mathfrak{H}_D$ such that \begin{eqnarray}\label{identsubs} \sum_{n=0}^{\infty}|\Psi_{nl}\rangle \langle \Psi_{nl}| = 1 \! \! {\rm I}_{\mathfrak H^l_D}, \quad \sum_{l=0}^{\infty}|\Psi_{nl}\rangle \langle \Psi_{nl}| = 1 \! \! {\rm I}_{\mathfrak H^n_D}. \end{eqnarray} $d\mu_{B}$ refers to the {\it Bohr measure} \cite{ab1} provided as follows \begin{eqnarray} \langle f|g\rangle_{ns} = \lim_{T \to \infty}\frac{1}{2T}\int_{-T}^{T} \overline{f(\gamma)}g(\gamma)d\gamma:= \int_{\mathbb R}\overline{f(\gamma)}g(\gamma)d\mu_{B}(\gamma) \end{eqnarray} given on the Hilbert space $\mathfrak H_{ns}$ of functions $f : \mathbb R \rightarrow \mathbb C$, which is complete with respect to the scalar product $\langle .|.\rangle_{ns}$. $d\lambda(K) = \sigma(K)dK$, and $\sigma(K)$ is a non-negative weight function $\sigma(K) \geq 0$ such that \begin{eqnarray} \int_{0}^{\infty}K^{\epsilon^{-}_{\alpha}} \sigma(K) dK \equiv \rho(\epsilon^{-}_{\alpha}). 
\end{eqnarray} On the Hilbert spaces ${\mathfrak H}_{D}$, ${\mathfrak H}_{C}$ and ${\mathfrak H}_{D+C}$, the following essential relations need to be satisfied: \begin{eqnarray}\label{identrq000} &&\int_{\mathbb R}\int_{\mathbb R}\int_{\mathbb R}\int^{\infty}_{0}\int^{\infty}_{0}\int^{\infty}_{0} \int^{2\pi}_{0}|f(K,\theta)|^{2}|J,\gamma;J',\gamma';l \rangle \langle J,\gamma;J',\gamma';l| \cr &&d\mu_{D}(J,\gamma,J',\gamma')d\mu_{C}(K,\theta)\frac{d\beta}{2\pi} = I_{\mathfrak H^l_{D}}, \cr \cr &&\int_{\mathbb R}\int_{\mathbb R}\int_{\mathbb R}\int^{\infty}_{0}\int^{\infty}_{0}\int^{\infty}_{0} \int^{2\pi}_{0}|g(J,\gamma,J',\gamma')|^{2}|K,\theta\rangle \langle K,\theta| \cr &&d\mu_{D}(J,\gamma,J',\gamma')d\mu_{C}(K,\theta)\frac{d\beta}{2\pi} = 1 \! \! {\rm I}_{{\mathfrak H}_{C}}, \cr \cr &&\int^{2\pi}_{0}e^{i \beta}\frac{d\beta}{2\pi}\int_{\mathbb R}\int_{\mathbb R}\int_{\mathbb R}\int^{\infty}_{0}\int^{\infty}_{0} \int^{\infty}_{0}g(J,\gamma,J',\gamma')^{*}f(K,\theta) |J,\gamma;J',\gamma';l \rangle \langle K,\theta| \cr &&d\mu_{D}(J,\gamma,J',\gamma')d\mu_{C}(K,\theta) = 0, \end{eqnarray} where $d\mu_{D}$ and $d\mu_{C}$ are the measures associated with the labeling parameters $\{J,\gamma,J',\gamma'\}$ of the discrete-spectrum CSs and $\{K,\theta\}$ of the continuous-spectrum CSs, respectively. The identity operator $1 \! \! {\rm I}_{\mathfrak H_{D+C}}$ is the direct sum of the identity operators $1 \! \! {\rm I}_{{\mathfrak H}_{D}}$ and $1 \! \! {\rm I}_{{\mathfrak H}_{C}}$ which act on the complementary subspaces ${\mathfrak H}_{D}$ and ${\mathfrak H}_{C}$, respectively, corresponding to discrete and continuous spectra.
\end{pro} Noting that the integration over $\beta, \, 0 \leq \beta < 2\pi$ eliminates the third relation above, which is related to the off-diagonal terms, the three conditions (\ref{identrq000}) are reduced to \begin{eqnarray}{\label{el4}} &&\int_{\mathbb R}\int^{\infty}_{0} |f(K,\theta)|^{2}d\mu_{C}(K,\theta) = 1, \cr &&\int_{\mathbb R}\int_{\mathbb R}\int^{\infty}_{0}\int^{\infty}_{0}|g(J,\gamma,J',\gamma')|^{2} d\mu_{D}(J,\gamma,J',\gamma') = 1. \end{eqnarray} In order to obtain the resolution of the identity, let us take the functions $f$ and $g$ as in \cite{gazeau-klauder}, such that \begin{eqnarray} f(K,\theta) = \mathcal N_{f} \, e^{-\frac{K^{2} + \theta^{2}}{2}}, \;\;\; g(J,\gamma,J',\gamma') = \mathcal N_{g} \, e^{-\frac{J^{2} + J'^{2}}{2}}, \end{eqnarray} where the factors $\mathcal N_{g}$ and $\mathcal N_{f}$ are chosen so that \begin{eqnarray} \mathcal N^{2}_{f}\int_{\mathbb R}\int_{0}^{\infty} e^{-(K^{2} + \theta^{2})} d\mu_{C}(K,\theta) = 1, \cr \cr \mathcal N^{2}_{g}\int_{\mathbb R}\int_{\mathbb R}\int_{0}^{\infty}\int_{0}^{\infty}e^{-(J^{2} + J'^{2})} d\mu_{D}(J,\gamma,J',\gamma') = 1. \end{eqnarray} {\bf Proof.} See Appendix \ref{app000}. $\hfill{\square}$ \begin{pro}\label{prop2} The property of temporal stability can be obtained here by postulating similar assumptions as in \cite{gazeau-klauder}, such that $0 \leq \mathcal H_{D} \leq \Omega$ and $\Omega < \mathcal H_{C}$, i.e. the Hamiltonians are adjusted so that $0< \mathcal H_{C} - \Omega$. By taking into account the phase factor $e^{-i \beta }$, we obtain the following relation \begin{eqnarray} e^{-i \mathcal H t}|J,\gamma;J',\gamma';l;K,\theta;\beta\rangle &=& f(K,\theta)| J,\gamma + \omega_{c} t;J',\gamma';l\rangle \cr &&+ e^{-i (\beta + \Omega t)} g(J,\gamma,J',\gamma')|K, \theta + \omega_{c} t\rangle \cr &=& |J,\gamma + \omega_{c} t;J',\gamma';l;K,\theta + \omega_{c} t;\beta + \Omega t\rangle, \end{eqnarray} with $\mathcal H = \mathcal H_{D} + (\mathcal H_{C} - \Omega)$.
\end{pro} {\bf Proof.} See Appendix \ref{app000}. $\hfill{\square}$ The action identity as noticed in \cite{gazeau-klauder} is difficult to obtain with the combined CSs given in (\ref{el1}). \subsection{Case of the Hamiltonian ${H}_{2_{osc}} - T_{2}$} By analogy with the setting in Section \ref{firstconst}, we study the shifted Hamiltonian $\left({H}_{2_{osc}} - \frac{\hbar \omega_{c}}{2} 1 \! \! {\rm I}_{\mathfrak H_{D}} \right)- \left(T_{2} - \frac{\lambda^{2}}{2m} 1 \! \! {\rm I}_{\mathfrak H_{C}}\right)$. The related CSs are here given on $\mathfrak H_{D+C}$ by \begin{eqnarray}{\label{el5}} |J,\gamma;J',\gamma';n;K,\theta;\beta\rangle &=& f(K,\theta)|J,\gamma;J',\gamma';n\rangle + e^{-i \beta}g(J, \gamma,J',\gamma') |K, \theta\rangle \cr &=&f(K,\theta)\left[\mathcal N(J) \mathcal N(J')\right]^{-1/2}J^{n/2}e^{-i n \gamma}\sum^{\infty}_{l=0}\frac{J'^{l/2}e^{i l \gamma'}}{\sqrt{n !l !}}|\Psi_{nl}\rangle \cr && + e^{-i \beta}g(J,\gamma,J',\gamma')\mathcal N_{\rho}(K)^{-1/2}\int^{\infty}_{0}\frac{K^{\epsilon^{-}_{\alpha}/2}e^{i \epsilon_{\alpha}\theta}}{\sqrt{\rho(\epsilon^{-}_{\alpha})}}|\epsilon^{-}_{\alpha}\rangle d\epsilon^{-}_{\alpha}, \end{eqnarray} where the normalization constants are given as in (\ref{el6}) with the relation (\ref{el7}) also satisfied. \begin{pro} The CSs satisfy, on $\mathfrak H_{D+C}$, the resolution of the identity \begin{eqnarray}{\label{el9}} &&\int^{\infty}_{0}\int^{\infty}_{0}\int^{\infty}_{0}\int_{\mathbb R} \int_{\mathbb R}\int_{\mathbb R}\int^{2\pi}_{0}|J,\gamma;J',\gamma';n;K,\theta;\beta\rangle \langle J,\gamma;J',\gamma';n;K,\theta;\beta| \cr && d\mu_{B}(\gamma)d\mu_{B}(\gamma')\frac{d\theta}{2\pi}\frac{d\beta}{2\pi}\mathcal N(J) \mathcal N(J')\mathcal N_{\rho}(K)d\nu(J)d\nu(J')d\lambda(K) = 1 \! \! {\rm I}_{{\mathfrak H}^n_{D}} + 1 \! \! {\rm I}_{\mathfrak H_{C}}. \end{eqnarray} \end{pro} {\bf Proof.} See that of Proposition \ref{prop1}.
$\hfill{\square}$ \begin{pro} The temporal stability property is given by \begin{eqnarray} e^{-i \mathcal H t}|J,\gamma;J',\gamma';n;K,\theta;\beta\rangle &=& f(K,\theta)| J,\gamma;J',\gamma' + \omega_{c} t;n\rangle \cr &&+ e^{-i (\beta + \Omega t)} g(J,\gamma,J',\gamma')|K, \theta + \omega_{c} t\rangle \cr &=& |J,\gamma;J',\gamma' + \omega_{c} t;n;K,\theta + \omega_{c} t;\beta + \Omega t\rangle. \end{eqnarray} \end{pro} {\bf Proof.} See that of Proposition \ref{prop2}. $\hfill{\square}$ \subsection{Case of the unshifted Hamiltonians $H_{1}$ and $H_{2}$} The eigenvalues $\mathcal E_{n,\alpha}$ of the Hamiltonian operators $H_{1}$ and $H_{2}$, given respectively in Eq.(\ref{es17}) and (\ref{eig003}), can be rewritten as $\mathcal E_{n,\alpha} = \mathcal E_{n} + \mathcal E_{\alpha}$, where \begin{eqnarray} \mathcal E_{n} = \hbar \omega_{c} \left(n+ \frac{1}{2}\right), \; \mathcal E_{\alpha} = -\hbar \omega_{c} \epsilon_{\alpha}, \; \epsilon_{\alpha} = \frac{\lambda}{m \omega_{c}}\alpha + \frac{\lambda^{2}}{2m\hbar \omega_{c}}. \end{eqnarray} The required conditions $\mathcal E_{n,\alpha} \geq 0$ for all $n \in \mathbb N$ and $\mathcal E_{\alpha} \geq 0$ lead to the relations \begin{eqnarray} \alpha \leq \frac{ m\omega_{c}}{2\lambda} - \frac{\lambda}{2\hbar}, \; \alpha \leq -\frac{\lambda}{2\hbar}, \end{eqnarray} giving $\alpha \leq -\frac{\lambda}{2\hbar}$. Setting \begin{eqnarray} \rho(n) = \mathcal E_{1}\mathcal E_{2}\dots \mathcal E_{n} \end{eqnarray} such that \begin{eqnarray}\label{conststruc} \rho(n) = \prod_{k=1}^{n}\hbar \omega_{c}\left(k+\frac{1}{2}\right) = (\kappa)^{n}\left(\frac{3}{2} \right)_{n}, \; \kappa = \hbar \omega_{c} \end{eqnarray} where $(\frac{3}{2})_{n}$ stands for the Pochhammer symbol \cite{erdelyi-ismail}. 
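The closed form (\ref{conststruc}) for $\rho(n)$ can be checked symbolically at any fixed order. The following is a small SymPy sketch of ours (the symbol names are our own):

```python
import sympy as sp

kappa = sp.Symbol('kappa', positive=True)
n = 6  # any fixed order works

# rho(n) = prod_{k=1}^{n} kappa * (k + 1/2), built term by term.
product_form = sp.Integer(1)
for k in range(1, n + 1):
    product_form *= kappa * (k + sp.Rational(1, 2))

# Closed form: kappa^n times the Pochhammer symbol (3/2)_n.
closed_form = kappa**n * sp.RisingFactorial(sp.Rational(3, 2), n)

assert sp.simplify(product_form - closed_form) == 0
```

The identity rests on $\prod_{k=1}^{n}(k+\tfrac{1}{2}) = \tfrac{3}{2}\cdot\tfrac{5}{2}\cdots\tfrac{2n+1}{2} = (\tfrac{3}{2})_{n}$.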
The CSs related to $H_{1}$, defined in line with (\ref{el1}) where the function $\rho(n)= n !$ is replaced by the one in (\ref{conststruc}), are now given by \begin{eqnarray}{\label{es03}} |J,\gamma;J',\gamma';l;K,\theta;\beta\rangle &=& f(K,\theta)\left[\mathcal N(J) \mathcal N(J')\right]^{-1/2}J'^{l/2}e^{i \mathcal E_{l} \gamma'}\sum^{\infty}_{n=0}\frac{J^{n/2}e^{-i \mathcal E_{n} \gamma}}{\sqrt{\rho(n)\rho(l)}}|\Psi_{nl}\rangle \cr && + e^{-i \beta}g(J,\gamma,J',\gamma')\mathcal N_{\rho}(K)^{-1/2}\int^{\infty}_{0}\frac{K^{\epsilon^{-}_{\alpha}/2}e^{i \epsilon_{\alpha}\theta}}{\sqrt{\rho(\epsilon^{-}_{\alpha})}} |\epsilon^{-}_{\alpha}\rangle d\epsilon^{-}_{\alpha}, \end{eqnarray} with the normalization constants \begin{eqnarray} \mathcal N(J) &=& \sum_{n=0}^{\infty}\frac{J^{n}}{\rho(n)} = \sum_{n=0}^{\infty}\frac{J^{n}}{\kappa^{n}(\frac{3}{2})_{n}} = \, _{1}F_{1}\left(1;\frac{3}{2};\frac{J}{\kappa}\right), \cr \cr \mathcal N(J') &=& \sum_{l=0}^{\infty}\frac{J'^{l}}{\rho(l)} = \sum_{l=0}^{\infty}\frac{J'^{l}}{\kappa^{l}(\frac{3}{2})_{l}} = \, _{1}F_{1}\left(1;\frac{3}{2};\frac{J'}{\kappa}\right), \end{eqnarray} with the relation (\ref{el7}) also remaining valid here.
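Numerically, the series defining $\mathcal N(J)$ indeed agrees with the confluent hypergeometric function $_{1}F_{1}(1;\frac{3}{2};J/\kappa)$. The following mpmath sketch (ours, at the sample value $J/\kappa = 0.7$) illustrates this:

```python
import mpmath as mp

x = mp.mpf('0.7')  # sample value of J / kappa

# N(J) = sum_{n>=0} (J/kappa)^n / (3/2)_n, truncated where the terms are negligible.
series = sum(x**n / mp.rf(mp.mpf('1.5'), n) for n in range(50))

# Compare with the closed form 1F1(1; 3/2; J/kappa).
hyp = mp.hyp1f1(1, mp.mpf('1.5'), x)

assert mp.almosteq(series, hyp)
```

The agreement follows from $(1)_{n} = n!$, so that $_{1}F_{1}(1;\frac{3}{2};x) = \sum_{n\geq 0} x^{n}/(\frac{3}{2})_{n}$.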
\begin{pro} The CSs (\ref{es03}) satisfy, on $\mathfrak H_{D+C}$, a resolution of the identity given in (\ref{el3}), where the measures $d\nu(J)$ and $d\nu(J')$ are now given by \begin{eqnarray} d\nu(J) &=& \frac{\Gamma^{2}(n+1)}{\kappa^{\eta}\Gamma(\eta)} \, _{1}F_{1}\left(1;\eta;\frac{J}{\kappa}\right) \frac{e^{-J/\kappa}J^{-n + \eta - 1}}{(\frac{1}{\kappa} + \mu - \sigma)^{n}} L^{\eta - 1}_{n}[(\mu - \sigma)J]dJ, \cr \cr d\nu(J')&=& \frac{\Gamma^{2}(l+1)}{\kappa^{\eta}\Gamma(\eta)} \, _{1}F_{1}\left(1;\eta;\frac{J'}{\kappa}\right) \frac{e^{-J'/\kappa}J'^{-l + \eta - 1}}{(\frac{1}{\kappa} + \mu - \sigma)^{l}} L^{\eta - 1}_{l}[(\mu - \sigma)J']dJ', \, \eta = \frac{3}{2}, \end{eqnarray} where the quantities $L^{\eta - 1}_{n}[(\mu - \sigma)J], L^{\eta - 1}_{l}[(\mu - \sigma)J']$ are Laguerre polynomials, leading to the identities \cite{erdelyi-ismail} \begin{eqnarray} n !\int_{0}^{\infty}t^{\nu-n}e^{\mu t} L_{n}^{\nu-n}[(\mu - \sigma)t]e^{-st}dt =\Gamma(\nu+1)(s-\sigma)^{n}(s-\mu)^{-\nu-1}, \; \Re{(\nu)} > n-1 \crcr l !\int_{0}^{\infty}t^{\nu-l}e^{\mu t} L_{l}^{\nu-l}[(\mu - \sigma)t]e^{-st}dt =\Gamma(\nu+1)(s-\sigma)^{l}(s-\mu)^{-\nu-1}, \; \Re{(\nu)} > l-1 \end{eqnarray} with $\nu = n+\eta-1$ (resp. $\nu = l + \eta-1$) and $\frac{1}{\kappa} = s-\mu$. \end{pro} The CSs for the Hamiltonian $H_{2}$, similar to the ones in (\ref{es03}), can be constructed in the same way, with the labeling parameters $J,\gamma$ playing the role of $J',\gamma'$ and vice versa. \section{Concluding remarks}\label{sec4} Coherent states have been constructed for Hamiltonians with both discrete and continuous spectra, in the context of the motion of an electron in an electromagnetic field, arising in the quantum Hall effect by considering shifted and unshifted spectra, respectively. These coherent states satisfy the Gazeau-Klauder coherent state criteria, namely continuity in the labels, resolution of the identity and temporal stability.
The action identity property remains difficult to obtain in the combined coherent states as noticed in \cite{gazeau-klauder}. An extension of this work that is currently under investigation is the construction of coherent states for a Hamiltonian in the case of an electric field depending simultaneously on both $x$ and $y$ directions, and for Hamiltonian operators admitting discrete eigenvalues and eigenfunctions in an appropriate Hilbert space \cite{aremuaetal}.
\section{Conclusion and Future Work} \label{sec:conclusion} In this paper, we propose $(P^{3}FitRec)$, a Multi-layer MLP and Multi-layer Bi-LSTM based framework that provides personalized fitness recommendations with privacy preservation. Our $(P^{3}FitRec)$ employs Tensor Decomposition to infer entity embeddings from historical workout data and has achieved satisfactory results on predicting workout distance, workout speed sequence, and workout heart rate sequence. We demonstrate that personalized fitness recommendations can be achieved using minimum identity information from the users. For further studies, we propose extending the work in three aspects. First, the cold-start problem is an interesting topic in building recommender systems, which in our case concerns new sport types and new users. Users playing new sports will have to exercise without personalized recommendations and contribute an adequate amount of workout data before a model can be trained to cover these sport types. Likewise, for new users who have absolutely no workout history, it is a challenge to learn user embeddings using the Tensor Decomposition method. Second, our $(P^{3}FitRec)$ is built as a two-model pipeline, instead of an end-to-end model structure, to realize the two related tasks respectively. Though the models share the same contextual input features and the prediction of the first model feeds into the second model, one can argue that the accuracy of the application can be further improved if an end-to-end structure is applied, such as an encoder-decoder structure. Lastly, the Transformer architecture \cite{transformer} has become dominant in sequence prediction tasks, especially in the field of natural language processing. Although our experiment on the attention mechanism shows little performance enhancement, it is possible to improve the performance by trying the Transformer architecture.
\section{Discussion} \label{sec:discussion} Our $(P^{3}FitRec)$ can provide personalized fitness recommendations by applying the models mentioned in section \ref{sec:method}. The system allows users to specify the type of sport (run, bike, or mountain bike) and select a specific route, and then input target caloric expenditure. According to these inputs, our $(P^{3}FitRec)$ will provide various recommendations in the aspects of distance as well as speed and heart rate at each time step. Table \ref{table:workout_profile_table} shows a typical running workout from the dataset, whose original record has 6.2 km of distance, 592 kcal of caloric consumption, and average speed and heart rate of 8.8 km/h and 149 beats per minute respectively. Given the same caloric input, our $(P^{3}FitRec)$ predicts similar distance, speed, and heart rate. Furthermore, for higher target caloric inputs, we observe that greater distance, speed, and heart rate are predicted, and vice versa. Moreover, Fig. \ref{fig:speed_hr_profile} shows the change in speed and heart rate with respect to time stamps. The blue line represents the original workout record, while the dotted lines represent the recommended speed and heart rate according to various input target caloric expenditures. We observe that the model can properly predict the fluctuation of speed and heart rate with relatively low MAE. By comparing three different predicted workout speed and heart rate sequences, the system can properly predict the correlation between distance, calories, and heart rate.
\begin{table}[t] \centering \begin{tabular}{llll} \toprule \multicolumn{4}{c}{\textbf{Original workout record}} \\ \midrule \textbf{Calories} & \textbf{Distance} & \textbf{Speed AVG.} & \textbf{Heart Rate AVG.} \\ \midrule 592 kcal & 6.2 km & 8.8 km/h & 149 bpm \\ \midrule \multicolumn{4}{c}{\textbf{Recommended workout}} \\ \midrule \textbf{Calories} & \textbf{Distance} & \textbf{Speed AVG.} & \textbf{Heart Rate AVG.} \\ \midrule 474 kcal & 5.97 km & 8.46 km/h & 136 bpm \\ \textbf{592 kcal} & \textbf{6.18 km} & \textbf{8.64 km/h} & \textbf{142 bpm} \\ 651 kcal & 6.28 km & 8.69 km/h & 145 bpm \\ \bottomrule \end{tabular} \caption{Workout profile table} \label{table:workout_profile_table} \end{table} Our $(P^{3}FitRec)$ has a high degree of freedom in choosing sport type, route, and target calories, and the personalized recommendations of speed and heart rate change dynamically according to the workout route. Therefore, the system provides users with greater flexibility to plan and predict exercises, and to modify the speed and pace during exercises. The proposed models, datasets and experimental data will be made available at the project repository for further extension and reproducibility studies.\protect\footnote{\url{https://github.com/BasemSuleiman/Personalized\_Intelligent\_Fitness\_Recommender}} \begin{figure}[t] \centering \includegraphics[width=12cm]{Figures/WorkoutProfile.jpg} \caption{Speed and Heart Rate Profile} \label{fig:speed_hr_profile} \end{figure} \section{Experiments and Results} \label{sec:evaluation} \subsection{Dataset} \label{sec:dataset} We use the open-source \textbf{Endomondo} dataset \cite{10.1145/3308558.3313643} of 167,373 workout records from 956 users. Each workout record consists of both sequential data (i.e., \textit{heart rate, speed, time elapsed, distance, altitude, latitude, longitude}) and contextual data (i.e., \textit{gender, sport, userID, URL}).
For each workout record, the sequential data contains 500 data points, with sampling intervals ranging from seconds to minutes. In addition, the total \textit{caloric expenditure} of each workout record can be queried on endomondo.com, if the user is not deactivated. We have conducted data cleaning prior to the experiments. For instance, the original dataset has a high rate of abnormal measurements such as running speeds exceeding 50 km/h or abnormal average altitudes over 8000 meters. Moreover, 97$\%$ of the workout records belong to the types of running, cycling, and mountain biking, while there are fewer than 50 records for most other sport types in the dataset. Hence, we decide to only focus on running, cycling, and mountain biking in our study to ensure there is enough data for each sport type. About 65,500 workout records are retained after cleaning up and deleting the records that no longer provide \textit{caloric expenditure} due to users' discontinuation. Data augmentation is done to facilitate the modeling of workout distance according to different input variables. For instance, we expect our model to output a shorter distance given a smaller caloric input. Specifically, we extend a workout route by appending a small random sub-sequence extracted from the start of the sequence to the end of the sequence if the user has returned to the starting location at the end of the exercise. Given a sequence $X = [x_0, x_1, x_2, x_3, \cdots, x_n, x_0]$ that returns to the starting location $x_0$, the extended sequence can be represented as $X' = [x_0, x_1, x_2, x_3, \cdots, x_n, x_0, x_1, \cdots, x_t]$ where $t < n$. For the workout distance prediction model, the original workout distance $L_X$ is used as the ground truth while the caloric expenditure, the extended workout distance $L_{X'}$, the entity embeddings, and the other contextual features are model inputs.
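The route-extension step can be sketched as follows. This is our own minimal illustration; the helper name \texttt{extend\_route} and the closed-route check are assumptions, not the authors' exact pre-processing code:

```python
import random

def extend_route(seq, min_ext=1):
    """Extend a closed workout route X = [x0, x1, ..., xn, x0] into
    X' = [x0, x1, ..., xn, x0, x1, ..., xt] with t < n, by re-appending
    a random prefix of the route after its closing point."""
    # Only routes that return to the starting location are extended.
    if len(seq) < 4 or seq[-1] != seq[0]:
        return list(seq)
    n = len(seq) - 2                    # index of the last distinct point xn
    t = random.randint(min_ext, n - 1)  # guarantees t < n
    return list(seq) + seq[1:t + 1]

route = [0, 1, 2, 3, 0]         # a route that returns to its start x0 = 0
extended = extend_route(route)  # e.g. [0, 1, 2, 3, 0, 1] or [0, 1, 2, 3, 0, 1, 2]
```

The original distance $L_X$ of the unextended route would then serve as the ground truth, while the extended distance $L_{X'}$ enters the model as an input.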
The resulting dataset after our pre-processing will be made available from the project repository for further experimentation and reproducibility.\protect\footnote{\url{https://github.com/BasemSuleiman/Personalized\_Intelligent\_Fitness\_Recommender}} \subsection{Training Procedure} \label{sec:training_procedure} We first perform CP tensor decomposition to obtain user embeddings and workout route embeddings. Core consistency scores of ranks 2 to 20 are computed to choose the most appropriate rank of the decomposed tensors. Then we pick the entity embeddings of high scores to train the workout distance prediction model and the speed \& heart rate sequences prediction model respectively. The workout distance prediction model is trained using Adam \cite{adam} with weight decay of $1e^{-7}$ and the speed \& heart rate prediction model is trained using Adagrad \cite{adagrad}. For both models, we perform hyperparameter tuning with random search and/or grid search on the validation set. The best parameters are shown in Table \ref{table:parameter_settings}.
\begin{table}[t] \centering \begin{tabular}{llll} \toprule & \makecell[l]{\textbf{Workout Distance} \\ \textbf{Model}} & & \makecell[l]{\textbf{Speed \& HeartRate} \\ \textbf{Model}} \\ \midrule 2 Layer MLP & \begin{tabular}{@{}l@{}} Learning Rate: $1e^{-3}$ \\ Hidden Dimension: $64$ \\ Dropout: $0.2$ \end{tabular} & 1 Layer LSTM & \begin{tabular}{@{}l@{}} Learning Rate: $5e^{-3}$ \\ Hidden Dimension: $64$ \\ Dropout: $0.2$ \end{tabular} \\ \midrule 3 Layer MLP & \begin{tabular}{@{}l@{}} Learning Rate: $1e^{-3}$ \\ Hidden Dimension 1: $64$ \\ Hidden Dimension 2: $64$ \\ Dropout: $0.2$ \end{tabular} & 2 Layer (Bi) LSTM & \begin{tabular}{@{}l@{}} Learning Rate: $5e^{-3}$ \\ Hidden Dimension 1: $128$ \\ Hidden Dimension 2: $64$ \\ Dropout: $0.2$ \end{tabular} \\ \bottomrule \end{tabular} \caption{Parameter settings} \label{table:parameter_settings} \vspace{-8mm} \end{table} \subsection{Evaluation Metrics} \label{sec:evaluation_metrics} We report the results of our experiments through Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) for the workout distance prediction task and speed \& heart rate sequences prediction task respectively. \begin{equation} \text{RMSE}=\sqrt{\frac{1}{N_\text{test}} \sum_{y \in \mathcal{T}_{\text {test }}}\left(y_{t}-\hat{y}_{t}\right)^{2}} \end{equation} \begin{equation} \text{MAE}=\frac{1}{N_\text{test}} \sum_{y \in \mathcal{T}_{\text {test }}} \frac{1}{L} \sum_{t=1}^{L}\left|y_{t}-\hat{y}_{t}\right| \end{equation} where $N_\text{test}$ is the number of workout records in the test set $\mathcal{T}_{\text {test }}$ and $L$ is the number of time steps in each workout record. \subsection{Entity Embedding with Tensor Decomposition} \label{sec:tensor_decomposition_eval} As discussed in section \ref{sec:method}, we adopt the CP tensor decomposition method to generate entity embeddings and perform core consistency diagnostic to find the best rank of the decomposed tensors. 
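Returning to the evaluation metrics of Section \ref{sec:evaluation_metrics}, both can be implemented directly. The following is a minimal sketch of ours (the function names \texttt{rmse} and \texttt{mae\_sequences} are our own):

```python
import math

def rmse(y_true, y_pred):
    """RMSE over per-workout scalar predictions (e.g. workout distance)."""
    n = len(y_true)
    return math.sqrt(sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)) / n)

def mae_sequences(true_seqs, pred_seqs):
    """MAE averaged over the L time steps of each workout, then over the
    workouts in the test set (e.g. speed or heart-rate sequences)."""
    per_workout = [
        sum(abs(yt - yp) for yt, yp in zip(ts, ps)) / len(ts)
        for ts, ps in zip(true_seqs, pred_seqs)
    ]
    return sum(per_workout) / len(per_workout)
```

Note that RMSE aggregates one scalar per workout, while MAE first averages the absolute errors over the $L$ time steps of each sequence.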
Using Formula \ref{eq:cp decomposition 5} to compute core consistency, the closer the value is to 100, the more appropriate the rank is. Generally speaking, as the rank increases, the core consistency score tends to decrease due to increasing decomposition noise and other non-trilinear variation \cite{080bc1d7971d4328add2c543579ec1f4}. \begin{figure}[t] \centering \includegraphics[width=12cm]{Figures/rank_vs_cc_score.png} \caption{CP Decomposition rank vs Core Consistency} \label{fig:CP Decomposition rank vs Core Consistency} \end{figure} As Figure \ref{fig:CP Decomposition rank vs Core Consistency} illustrates, the core consistency score is relatively high starting at a CP Decomposition rank of 2. As the rank increases, the score fluctuates and peaks at rank 13, after which a downward trend can be observed. The core consistency diagnostic provides an intrinsic evaluation of the CP Decomposition rank. However, it does not guarantee that the rank selected with this method will perform best at the final regression task. Therefore, we select the decomposed entity embedding tensors of ranks 2, 11, and 13, and use the final models to evaluate them further. The rank 13 tensors have the highest core consistency score, but their relatively high dimension may make convergence of the final models harder. In contrast, the rank 2 tensors have a relatively high core consistency score and a low dimension. We also pick rank 11 to balance the core consistency score and complexity. Inspired by word2vec \cite{word2vec}, we evaluate the trained entity embeddings with cosine similarity: $\operatorname{similarity}(A, B)=\frac{A \cdot B}{\|A\| \times\|B\|}=\frac{\sum_{i=1}^{n} A_{i} \times B_{i}}{\sqrt{\sum_{i=1}^{n} A_{i}^{2}} \times \sqrt{\sum_{i=1}^{n} B_{i}^{2}}}$.
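This cosine similarity computation can be sketched as follows (a minimal NumPy illustration; the example vectors are made up and are not actual learned embeddings):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical rank-13 user embeddings
user_a = np.linspace(0.1, 1.3, 13)
user_b = user_a + 0.05   # a very similar user: similarity close to 1
user_c = -user_a         # an "opposite" user: similarity close to -1
```

Users whose embeddings point in similar directions (e.g., users of the same sport type) obtain scores close to 1, while unrelated users score near 0.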
Taking user embeddings of size 13 as an example, the cosine similarity between randomly picked user A (192 running records) and user B (129 running records) is 0.78660, while that between user A and user C (132 mountain biking and 1 biking records) is -0.00489. Similarly, the cosine similarity between user D (104 biking and 70 running records) and user A is 0.57067, and that between user D and user C is 0.18126. These findings suggest that sport type has a large impact on the trained user embeddings; that is, users playing similar sports tend to have closer embeddings. In addition, we use T-SNE to plot the entity embeddings on a 2D plane, as shown in Fig. \ref{fig:user_embed_example} and Fig. \ref{fig:route_embed_example}. To compare the results intuitively, three user embedding scatterplots are drawn, color-coded by the average workout calories, average workout speed, and average workout distance respectively. Similarly, we plot three figures for the workout route embeddings, color-coded by the type of sport. The size of each data point in the figures is proportional to the magnitude of the average workout calories, average workout speed, and average workout distance respectively. For the user embeddings, we observe that users with similar average workout speeds and average workout distances tend to be clustered together. Although the patterns are not as obvious for the workout route embeddings, possibly due to a lack of data points, the points of the same sports seem to be clustered together for running and mountain biking. More T-SNE plots of the entity embeddings of other ranks can be found in Appendix \ref{sec:appendices}.
\begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Figures/user_cp_13.png} \caption{Rank 13 User Embedding T-SNE Plot} \label{fig:user_embed_example} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Figures/route_cp_13.png} \caption{Rank 13 Workout Route Embedding T-SNE Plot} \label{fig:route_embed_example} \end{figure} \subsection{Workout Distance Prediction Model} For the distance prediction sub-model, we ran ablations training MLP models with different numbers of hidden layers and different entity embedding dimensions as input features. The former heuristically seeks the optimal number of hidden layers for the best model performance, while the latter is an extrinsic evaluation of the pre-trained entity embeddings on one of our final tasks. \begin{table}[t] \centering \begin{tabular}{lc} \toprule \textbf{Models}&\textbf{Distance RMSE (KM)} \\ \midrule 2 Layer MLP & \\ $\;$ + Embedding Size 2 & 0.1423 \\ $\;$ + Embedding Size 11 & 0.1394 \\ $\;$ + Embedding Size 13 & \textbf{0.1387} \\ 3 Layer MLP & \\ $\;$ + Embedding Size 2 & 0.1422 \\ $\;$ + Embedding Size 11 & 0.1420 \\ $\;$ + Embedding Size 13 & 0.1433 \\ \bottomrule \end{tabular} \caption{Ablation study of MLP layers and entity embedding dimensions} \label{table:ablation_study_mlp} \vspace{-4mm} \end{table} As shown in Table \ref{table:ablation_study_mlp}, the model that produces the best performance is the 2 Layer MLP (1 hidden layer) with entity embeddings of size 13. Since the performance of the 3 Layer MLP models (2 hidden layers) does not exceed that of the 2 Layer MLP models, we choose not to experiment further with more hidden layers. The task of predicting workout distance produces a single scalar, which is relatively simple; hence, a 2 Layer MLP model is adequate, whereas adding hidden layers is likely to lead to over-fitting.
Moreover, the result of the extrinsic evaluation of entity embeddings is broadly consistent with the intrinsic evaluation result in section \ref{sec:tensor_decomposition_eval}: the best overall model is achieved with entity embeddings of size 13, which also obtained the highest core consistency score in the intrinsic evaluation. This supports the hypothesis behind core consistency, that is, the higher the core consistency value, the more appropriate the CP decomposition result is. Although we cannot compare the result of this model with FitRec \cite{10.1145/3308558.3313643}, as it does not include the task of predicting workout distance, our best model achieves an RMSE of 0.1387 (km), which is reasonably good for predicting workout distance in practice. \subsection{Speed \& Heart Rate Prediction Model} For the speed \& heart rate prediction sub-model, we first ran ablations on different entity embedding inputs based on a 1 Layer LSTM network structure, which serves as an extrinsic evaluation of the pre-trained entity embeddings. From Table \ref{table:ablation_study_lstm}, we observe that MAE drops as the embedding size increases. The 1 Layer LSTM model with an entity embedding size of 13 yields the best performance, decreasing the MAE of the baseline (1 Layer LSTM without entity embedding) by 5.8$\%$ and 12.5$\%$ on speed and heart rate respectively. This result is consistent with the core consistency diagnostic result in section \ref{sec:tensor_decomposition_eval} that the embedding size of 13 is most appropriate.
\begin{table}[t] \centering \begin{tabular}{lcc} \toprule \makecell[l]{\textbf{Speed \& HeartRate} \\ \textbf{Model}} & \makecell[c]{\textbf{Speed MAE} \\ \textbf{(KMPH)}} & \makecell[c]{\textbf{Heart Rate MAE} \\ \textbf{(BPM)}} \\ \midrule 1 Layer LSTM & & \\ $\;$ + No Embedding & 2.92 & 13.01 \\ $\;$ + Embedding Size 2 & 2.90 & 13.01 \\ $\;$ + Embedding Size 11 & 2.80 & 11.57 \\ $\;$ + Embedding Size 13 & \textbf{2.75} & \textbf{11.38}\\ \midrule Embedding Size 13 & & \\ $\;$ + 2 Layer Stacked LSTM & 2.51 & 11.376 \\ $\;$ + 2 Layer Stacked Bi-LSTM & \textbf{2.4} & \textbf{11.304} \\ $\;$ + 2 Layer Stacked Bi-LSTM + Attention & 3.2 & 13.92 \\ \bottomrule \end{tabular} \caption{Ablation study of LSTM structure and entity embedding dimensions} \label{table:ablation_study_lstm} \vspace{-8mm} \end{table} Then we ran ablations on different LSTM structures with the entity embedding size set to 13. As shown in Table \ref{table:ablation_study_lstm}, model performance is improved by applying another layer of LSTM and the bidirectional LSTM structure. The best model is the 2 Layer Stacked Bi-LSTM with embedding size 13, whose MAEs on speed and heart rate are 2.4 km/h and 11.3 beats/minute respectively. Compared with the baseline model (1 Layer LSTM without entity embedding), its MAEs are 17.8\% and 13.1\% lower for speed and heart rate predictions respectively. However, the addition of the attention mechanism does not improve model performance. It is likely that speed and heart rate mostly depend on neighboring steps rather than distant steps in the sequence. In this case, the attention mechanism is of little help, since it primarily addresses information loss problems for long sequences \cite{attention}. \subsection{Comparison with FitRec} To further evaluate our result, we compare our $(P^{3}FitRec)$ with FitRec \cite{10.1145/3308558.3313643}.
As shown in Table \ref{table:comparison_with_FitRec}, our $(P^{3}FitRec)$ and FitRec use the same dataset, and both aim to provide fitness recommendations by predicting speed and heart rate sequences. The comparison is conducted in the following aspects. First, in pre-processing, FitRec keeps all sports, while we only keep running, cycling, and mountain biking, which account for 97\% of the total workouts, considering that each sport needs sufficient samples to train a decent deep learning model. In addition, for each workout, FitRec only extracts the first 450 out of the 500 timestamps to reduce noise in the dataset, while we keep the entire 500 timestamps so that the model is more practical in real life. Furthermore, both studies share similar input features, including historical user information, route information, sport type, and gender. However, FitRec uses the time sequence and distance sequence as inputs to predict the speed sequence, while we remove the time sequence to avoid data leakage ($\text{speed} = \frac{\text{distance}}{\text{time}}$), which adds a significant challenge to the regression task. Considering the practical application of the model, we also add target caloric consumption as one of the input features. Moreover, both systems derive user embeddings to extract the latent characteristics of the users from historical workout data. However, FitRec only uses a user's latest exercise record to derive user embeddings through an LSTM network. In contrast, we train user embeddings and workout route embeddings through tensor decomposition with all historical workout records. Using more records enables us to capture richer information about the entities.
\begin{table}[ht] \centering \begin{tabular}{lll} \toprule & \textbf{$(P^{3}FitRec)$} & \textbf{FitRec} \\ \midrule \textbf{Pre-processing} & \parbox{.35\textwidth}{ \begin{itemize} \item 3 sport types (Account for 97\% of data) \item Entire sequence of each workout \end{itemize} } & \parbox{.35\textwidth}{ \begin{itemize} \item All sport types \item First 450 of 500 timestamps for each workout \end{itemize} } \\ \midrule \textbf{Input and Output} & \begin{tabular}[t]{@{}l@{}} Input: \\ \parbox{.35\textwidth}{ \begin{itemize} \item Sport type \item Gender \item Altitude sequence \item Distance sequence \item \textbf{Calories} \item User embedding \textbf{(Tensor decomposition)} \item Route embedding \textbf{(Tensor decomposition)} \end{itemize} } \\ Output: \\ \parbox{.35\textwidth}{ \begin{itemize} \item Speed sequence \item Heart rate sequence \end{itemize} } \end{tabular} & \begin{tabular}[t]{@{}l@{}} Input: \\ \parbox{.35\textwidth}{ \begin{itemize} \item Sport type \item Gender \item Altitude sequence \item Distance sequence \item \textbf{Time sequence} \item User embedding \textbf{(LSTM)} \end{itemize} } \\ Output: \\ \parbox{.35\textwidth}{ \begin{itemize} \item Speed sequence \item Heart rate sequence \end{itemize} } \end{tabular} \\ \midrule \textbf{Embedding method} & \parbox{.35\textwidth}{ \begin{itemize} \item Trained on all historical records \item Tensor decomposition \end{itemize} } & \parbox{.35\textwidth}{ \begin{itemize} \item Trained on the user's most recent record \item LSTM \end{itemize} } \\ \midrule \textbf{Best Model} & \begin{tabular}[c]{@{}l@{}} 2-layer stacked Bi-LSTM \\ (Speed \& Heart Rate \\ by single model) \end{tabular} & \begin{tabular}[c]{@{}l@{}} 2-layer stacked LSTM \\ (Speed \& Heart Rate \\ by 2 models) \end{tabular} \\ \midrule \textbf{Results} & \begin{tabular}[c]{@{}l@{}} Speed MAE: 2.4 KM/H; \\ Heart Rate MAE: \\ 11.304 BPM \end{tabular} & \begin{tabular}[c]{@{}l@{}} Speed MAE: 2.384 KM/H; \\ Heart Rate MAE: \\ 12.847 BPM
\end{tabular} \\ \bottomrule \end{tabular} \caption{Comparison between $(P^{3}FitRec)$ and FitRec} \label{table:comparison_with_FitRec} \vspace{-4mm} \end{table} Next, our speed and heart rate prediction model has a bidirectional LSTM structure, while FitRec implements a unidirectional LSTM. Finally, FitRec and our model achieve similar performance in predicting speed and heart rate. More specifically, FitRec performs slightly better in speed prediction, while our result in heart rate prediction is better. This does not mean our model is inferior in predicting speed: FitRec uses distance and elapsed time as inputs to predict speed, which might lead to data leakage to a certain extent. The removal of the elapsed time feature in our implementation adds challenges to the speed prediction task. Moreover, we also add target caloric expenditure as an input feature to our model to influence the magnitudes of the predicted speed sequence, which potentially further elevates the difficulty of the task. Despite these challenges, we still achieve a comparable result in speed prediction and a superior result in heart rate prediction. Furthermore, the implementation of tensor decomposition provides a more flexible and lighter method for extracting user and route information. The overall better result can be due to many factors, such as a better strategy for training entity embeddings, a different model design, and less variance of sport types in the dataset. The proposed models, datasets and experimental data will be made available in the project repository for further extension and reproducibility studies. \protect\footnote{\url{https://github.com/BasemSuleiman/Personalized\_Intelligent\_Fitness\_Recommender}.} \section{Introduction} \label{sec:introduction} \raggedbottom With the advent of artificial intelligence, many innovative enterprises have provided various personalized products and services to individuals.
Personalization is generally made possible by mining multidimensional data from individuals so that products and services can be tailored to people’s inclinations. In particular, research on personalization for improving individuals’ well-being has attracted a lot of attention. For instance, smartphones and smartwatches can sense dynamic changes in users, such as heart rate, blood pressure, and sleep patterns, to realize real-time and non-invasive monitoring of human health \cite{s18061714}. A study found that smartwatches can effectively monitor instantaneous changes in the user's heart rate and can accurately help diagnose atrial fibrillation associated with ischemic stroke (asymptomatic or paroxysmal) \cite{raja_elsakr_roman_cave_pour-ghaz_nanda_maturana_khouzam_2019}. On the other hand, other products focus on providing personalized suggestions to users, such as customizing exercise plans, understanding users' reactions to the plans, and analyzing exercise results to continuously improve user experiences. For example, the PRO-Fit (Personalized Recommender and Organizer Fitness assistant) framework uses collaborative filtering on user profile and activity data to generate personalized fitness schedules according to user availability and fitness goals \cite{10.1007/s00779-017-1039-8}. Traditionally, people can achieve comparable goals by hiring a human trainer, but machine learning-based applications have certain advantages in terms of quality and affordability. More specifically, machine learning algorithms can integrate information from thousands or even millions of users to develop products or services, which is far beyond the knowledge range of a single human trainer. In addition, recommender systems as software products usually have a lower marginal cost when providing services to new users.
However, despite the convenience brought by technology, societies are paying more attention to security and privacy issues in data mining and prediction. In 2018, the European Union issued the General Data Protection Regulation (GDPR) to regulate the protection of citizens' personal data and privacy. In this context, Tan et al. \cite{5772388} and Bouhenguel et al. \cite{4810232} discussed the Bluetooth security threats of wearable devices and shared insights on how to prevent the devices, and the networks they connect to, from being attacked. Other researchers, such as Cyr et al. \cite{Cyr2014SecurityAO}, analyzed the potential security problems of Fitbit devices when collecting and utilizing users' data, such as unnecessarily collecting information from nearby devices or withholding collected data from the device owners. Unfortunately, most service or product providers are inherently motivated to collect as much data as possible, especially personal data, to train machine or deep learning models that enhance user experience. Although most of the existing work in the literature focuses on building accurate artificial intelligence models to make personalized fitness recommendations, little consideration is given to protecting the privacy of users. For instance, Dharia et al. \cite{10.1007/s00779-017-1039-8} proposed the PRO-Fit framework to proactively push notifications recommending fitness-related activities to users, based on multivariate data including their fitness preferences, calendar data, and social network data. Although the model achieved good performance, the framework inevitably collects an intensive amount of sensitive data from users.
Likewise, the “TweetFit” framework proposed in \cite{Farseev_Chua_2017} profiles people’s wellness by taking advantage of sensor data, such as speed or heart rate measurements during exercises, and multiple social media sources, such as Twitter tweets and Instagram image captions and comments. On the other hand, with specific attention to GDPR compliance, Sanchez et al. \cite{Sanchez2020} presented a fitness data privacy model that learns people’s privacy preferences for the sharing and processing of fitness data collected by Internet of Things (IoT) devices. They studied user privacy permission settings of fitness trackers such as Fitbit and smartwatches, supplemented by a questionnaire, to learn users’ privacy profiles by applying machine learning modeling. They further developed a rule-based personal data manager (PDM) framework to provide privacy advice to users based on their machine learning models. However, this work mainly focuses on the general privacy settings of IoT devices, not their product functions. Loepp et al. \cite{ubo_mods_00115757} proposed a prototypical smartphone app that recommends personalized running routes by analyzing multivariate data of the workout routes users have run, e.g., the length, uniqueness, shape, and lighting of the route. The framework makes personalized recommendations on running routes following a rule-based approach and user preferences set manually by the users. Compared with the rule-based approach, machine or deep learning-based recommender systems have the advantage of learning user attributes intelligently and require minimal manual input from the users. In this paper, we address the above challenges by proposing a novel privacy-aware personalized fitness recommender system.
In our proposed approach, we introduce a multi-level deep learning framework that learns important features from a large-scale real fitness dataset collected from wearable IoT devices to derive intelligent fitness recommendations. Unlike most existing approaches, our approach achieves personalization by inferring the fitness characteristics of users from sensory data, thus minimizing the need to explicitly collect user identity or biometric information, such as name, age, height, and weight. Our proposed approach consists of two key components. First, we build a model that learns user embeddings and workout route embeddings from a real-world Fitbit dataset. The user and workout route embeddings are then fed as input features to our proposed deep learning models to create personalized recommendations. Second, we develop a workout profile prediction model that suggests personalized workout recommendations, predicting the workout distance and the speed sequence that can guide the user based on their choice of route, sport type, and target calories to consume. The goal of our exercise distance prediction is to provide personalized guidance for users to achieve the target caloric expenditure. Similarly, our predicted speed sequence aims to guide the user to adjust the exercise speed according to the selected target caloric expenditure and the selected exercise route. Meanwhile, our predicted heart rate sequence aims to provide the user with an important indicator of the expected health status in the upcoming exercise. In our approach, we also propose a three-dimensional tensor of "users – workout routes – contextual features" based on the historical workout data. The goal of this tensor is to capture the underlying structures inherent in users and workout routes using the Tensor Decomposition method CANDECOMP/PARAFAC (CP) \cite{kolda2009tensor}.
The two resultant matrices capturing the latent characteristics of the users and workout routes are then combined with other contextual features (choice of sport, target calories, etc.) as input to two models. The first model is a Multi-Layer Perceptron (MLP), which is used to predict the total distance of a future workout. The second model is a Long Short-Term Memory (LSTM) network, which is used to predict the speed sequence and heart rate sequence of a future workout. The main contributions of our proposed approach are threefold: \begin{enumerate} \item An approach to building privacy-aware personalized fitness recommendation systems by inferring the fitness characteristics of users from sensory data instead of collecting multidimensional private data from users. This is complementary to personalized fitness recommendation systems that do not consider privacy preservation, and to privacy preservation approaches running on independent encryption protocols. \item A model that learns user embeddings and workout route embeddings from real workout datasets collected from Fitbit devices. This includes gender, sport type, calories, workout duration, workout distance, workout speed, workout heart rate, and workout route geographical data. Our user and workout route embeddings are further utilized as input features for our proposed privacy-aware and personalized fitness recommender. \item A workout profile prediction model that suggests personalized recommendations of the required workout distance, the rhythm of speed, and the change of heart rate of a future exercise, based on the user’s choice of workout route, sport type, and target calories to consume. \end{enumerate} The rest of this paper is structured as follows. In section \ref{sec:related work}, we present background information relevant to the topics of our proposed approach. We also discuss various related studies in the literature and practice.
The details of our proposed approach, including our methods and algorithms, are introduced in section \ref{sec:method}. The experimental evaluation and result analysis are discussed in section \ref{sec:evaluation}. In section \ref{sec:discussion}, we discuss the results in the context of our research goals and contributions. We draw key conclusions and outline future work in section \ref{sec:conclusion}. \section{Appendices} \label{sec:appendices} Entity embedding T-SNE plots: \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Figures/user_cp_2.png} \caption{Rank 2 User Embedding} \label{fig:rank_2_user_embedding} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Figures/user_cp_11.png} \caption{Rank 11 User Embedding} \label{fig:rank_11_user_embedding} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Figures/user_cp_13.png} \caption{Rank 13 User Embedding} \label{fig:rank_13_user_embedding} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Figures/user_cp_16.png} \caption{Rank 16 User Embedding} \label{fig:rank_16_user_embedding} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Figures/route_cp_2.png} \caption{Rank 2 Route Embedding} \label{fig:rank_2_route_embedding} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Figures/route_cp_11.png} \caption{Rank 11 Route Embedding} \label{fig:rank_11_route_embedding} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Figures/route_cp_13.png} \caption{Rank 13 Route Embedding} \label{fig:rank_13_route_embedding} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{Figures/route_cp_16.png} \caption{Rank 16 Route Embedding} \label{fig:rank_16_route_embedding} \end{figure} \end{document} \section{Methodology - \texorpdfstring{$(P^{3}FitRec)$}{(P3FitRec)}} \label{sec:method} In this
section, we first give an overview of our proposed framework ($P^{3}FitRec$) and then describe each of its components in detail. Our $(P^{3}FitRec)$ is composed of three-dimensional tensor data analysis and two deep learning models (an MLP and a multi-layer Bi-LSTM), which are used to predict the total workout distance and the speed and heart rate sequences respectively. The main inputs of the framework are \textit{Target Caloric Expenditure, Sport Type, User ID}, and \textit{Workout Route ID}. Based on the \textit{User ID}, the \textit{User Embedding} is obtained by looking up the user embedding tensor pre-trained with the Tensor Decomposition method. Similarly, based on the \textit{Workout Route ID}, the \textit{Workout Route Embedding} is obtained by looking up the pre-trained workout route embedding tensor, together with the \textit{Total Workout Route Distance, Altitude Sequence} and \textit{Distance Sequence} associated with the chosen route. In summary, the contextual input features consist of \textit{Target Caloric Expenditure, Sport Type, User Embedding, Workout Route Embedding}, and \textit{Total Workout Route Distance}. The sequential input features encompass the \textit{Altitude Sequence} and \textit{Distance Sequence}. \subsection{Model Structure} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{Figures/Structure.png} \caption{Overview of our proposed framework $(P^{3}FitRec)$} \label{fig:galaxy} \end{figure} Firstly, the MLP model takes the contextual features as input to predict the required \textit{Total Workout Distance}. Subsequently, the predicted \textit{Total Workout Distance} is concatenated with the other contextual input features and the sequential input features to form the input at each time step of the Multi-layer Bi-LSTM model.
The Multi-layer Bi-LSTM model predicts the \textit{Heart Rate Sequence} through a fully connected layer at the output of the first LSTM layer and predicts the \textit{Speed Sequence} through another fully connected layer at the output of the second LSTM layer. \subsection{Entity Embedding with Tensor Decomposition} One of the main objectives of this study is to learn the latent characteristics of users from historical workout records without using their private data. To achieve this goal, we propose a collaborative filtering method based on Tensor Decomposition, a generalization of the conventional matrix decomposition method to higher-dimensional spaces. For the tensor analysis, a three-dimensional "user-workout route-contexts" tensor of size $N_\text{user} \times N_\text{route} \times N_\text{contexts}$ is constructed based on historical workout records. However, each workout record in the dataset contains a unique workout route, because it has a unique sequence of altitudes, longitudes, and latitudes. To reduce the dimension of the three-dimensional tensor, we first cluster the workout routes into a smaller number of categories $N'_\text{route}$, where $N'_\text{route} < N_\text{route}$, to represent the workout routes in the three-dimensional tensor. In the third dimension, the contexts encompass several computed features, such as the \textit{user’s gender, sport type, user’s workout frequency, user’s average workout duration, user’s average workout distance, user’s average workout speed, user’s average workout heart rate}, etc.
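The construction of this three-dimensional tensor can be sketched as follows (a minimal NumPy illustration; the record format, context feature names, and per-cell averaging are simplifying assumptions, and the route clustering step is assumed to have already been applied):

```python
import numpy as np

# Hypothetical pre-processed records: (user index, route-cluster index, context features)
CONTEXTS = ["avg_duration", "avg_distance", "avg_speed", "avg_heart_rate"]
records = [
    (0, 1, {"avg_duration": 45.0, "avg_distance": 8.2, "avg_speed": 10.9, "avg_heart_rate": 142.0}),
    (0, 1, {"avg_duration": 50.0, "avg_distance": 9.0, "avg_speed": 10.8, "avg_heart_rate": 145.0}),
    (1, 0, {"avg_duration": 30.0, "avg_distance": 5.1, "avg_speed": 10.2, "avg_heart_rate": 150.0}),
]

def build_tensor(records, n_users, n_route_clusters):
    """Average each user's context features per route cluster into a 3-D tensor."""
    X = np.zeros((n_users, n_route_clusters, len(CONTEXTS)))
    counts = np.zeros((n_users, n_route_clusters, 1))
    for user, route, feats in records:
        X[user, route] += [feats[c] for c in CONTEXTS]
        counts[user, route] += 1
    # Average observed cells; cells with no records stay zero
    return np.divide(X, counts, out=np.zeros_like(X), where=counts > 0)

X = build_tensor(records, n_users=2, n_route_clusters=2)
```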
\begin{figure}[t] \centering \includegraphics[width=12cm]{Figures/Tensor_Decomposition.png} \caption{"User-workout route-contexts" Tensor Decomposition} \label{fig:Tensor_Decomposition} \end{figure} Several tensor decomposition methods have been proposed in the literature, among which two approaches are most widely used: CANDECOMP/PARAFAC (CP) and Tucker decomposition \cite{kolda2009tensor}. This paper implements tensor decomposition using the CP approach, since it is more popular than Tucker decomposition and is easier to interpret. Given a tensor $X \in \Re^{I \times J \times K}$, the main goal of CP decomposition is to minimize the squared error between the model and the given tensor $X$: \begin{eqnarray}\label{cp} X \approx \sum_{r=1}^R \lambda_r \ A_{r} \circ B_{r} \circ C_{r} \equiv[ \lambda; A,B,C] \end{eqnarray} where "$\circ$" is the vector outer product, $R$ is the number of latent components, $A_{r}, B_{r}$ and $C_{r}$ are the $r$-th columns of the component matrices $A \in \Re^{I \times R}$, $B \in \Re^{J \times R}$ and $C \in \Re^{K \times R}$, and $\lambda$ holds the weights used to normalize the columns of $A, B,$ and $C$. In this sense, the CP method decomposes $X$ into three matrices $A$, $B$ and $C$, as shown in Fig. \ref{fig:Tensor_Decomposition}. Matrix $A$ represents the user mode, $B$ represents the workout route mode, and $C$ represents the context mode. This can be solved by minimizing the sum of squared errors \begin{equation} \label{eq:als} \min _{A, B, C}\left\|X-\sum_{r=1}^{R} \lambda_{r} A_{r} \circ B_{r} \circ C_{r}\right\|_{F}^{2} \end{equation} At first sight, the problem in Equation \ref{eq:als} is non-convex, because it jointly optimizes three factor matrices. However, by fixing two factor matrices and solving only for the third one, the problem reduces to a linear least squares problem.
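One sweep of this alternating least-squares scheme can be sketched as follows (a minimal NumPy illustration on a synthetic noiseless tensor; the normalization weights $\lambda$ and stopping criteria are omitted):

```python
import numpy as np

def als_step(X, A, B, C):
    """One ALS sweep: update each factor matrix with the other two fixed."""
    def solve(mttkrp, M1, M2):
        # Least-squares update: factor = MTTKRP @ pinv of the Hadamard gram product
        return mttkrp @ np.linalg.pinv((M1.T @ M1) * (M2.T @ M2))
    A = solve(np.einsum('ijk,jr,kr->ir', X, B, C), B, C)
    B = solve(np.einsum('ijk,ir,kr->jr', X, A, C), A, C)
    C = solve(np.einsum('ijk,ir,jr->kr', X, A, B), A, B)
    return A, B, C

rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 2
# Synthetic noiseless rank-R tensor
A0 = rng.standard_normal((I, R))
B0 = rng.standard_normal((J, R))
C0 = rng.standard_normal((K, R))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

# Random initialization, then iterate ALS sweeps
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
err0 = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
for _ in range(100):
    A, B, C = als_step(X, A, B, C)
err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
```

Since each update solves its least-squares subproblem exactly, the reconstruction error is non-increasing across sweeps.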
Following this approach, the Alternating Least Squares (ALS) technique can be employed, which repeatedly solves for each component matrix while fixing all the others until convergence \cite{anaissi2018regularized}. For instance, the update of $A$ follows from the mode-1 unfolded form \begin{equation} \label{eq:cp decomposition 2} X_{(1)}=A \Lambda(C \odot B)^{T}+E \end{equation} where $X_{(1)}$ is the mode-1 unfolding of $X$, $\odot$ denotes the Khatri--Rao product, $\Lambda = \operatorname{diag}(\lambda)$, and $E$ is the residual error. Assume we have completed the CP Decomposition with a selected rank and learned the matrices $A$, $B$, and $C$. Then we fit the full Tucker3 model to the data using the CP Decomposition matrices $A$, $B$, and $C$ by minimizing \begin{equation} \label{eq:cp decomposition 3} \sigma(\mathbf{G})=\left\|\mathbf{X}-\mathbf{A} \mathbf{G}(\mathbf{C} \otimes \mathbf{B})^{\mathrm{T}}\right\|_{\mathrm{F}}^{2} \end{equation} where $\otimes$ denotes the Kronecker product. The optimal $\mathbf{G}$ in Equation \ref{eq:cp decomposition 3} can be determined as \begin{equation} \label{eq:cp decomposition 4} \operatorname{vec} \mathbf{G}=(\mathbf{C} \otimes \mathbf{B} \otimes \mathbf{A})^{+} \operatorname{vec} \mathbf{X} \end{equation} when the residual decomposition error $\sigma(\mathbf{G})$ is minimized. The underlying idea is to measure the similarity between $\mathcal{L}$ and $\mathcal{G}$, where $\mathcal{L}$ is a superdiagonal core tensor whose superdiagonal values are all 1 and whose off-superdiagonal values are all 0. To compare them, we examine the distribution of the superdiagonal and off-superdiagonal elements of $\mathcal{G}$. If the superdiagonal elements of $\mathcal{G}$ are all close to the corresponding elements of $\mathcal{L}$, which are 1, and the off-superdiagonal elements of $\mathcal{G}$ are all close to the corresponding elements of $\mathcal{L}$, which are 0, then the CP Decomposition result is appropriate.
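This diagnostic can be sketched numerically as follows (a minimal NumPy illustration on a synthetic tensor that exactly follows the CP model with unit weights, so the recovered core $\mathbf{G}$ should match the superdiagonal reference $\mathcal{L}$; real data would show deviations):

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K, R = 5, 4, 3, 2
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
# Synthetic tensor exactly following the CP model with unit weights
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# vec G = (C kron B kron A)^+ vec X, using column-major (Fortran) vectorization
kron = np.kron(C, np.kron(B, A))
g = np.linalg.pinv(kron) @ X.flatten(order='F')
G = g.reshape(R, R, R, order='F')

# Superdiagonal reference core L: ones at (r, r, r), zeros elsewhere
L = np.zeros((R, R, R))
for r in range(R):
    L[r, r, r] = 1.0
```

Because the synthetic tensor is noiseless and exactly trilinear, the least-squares core recovers $\mathcal{L}$ up to numerical precision.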
Formally, the similarity between the two tensors, or core consistency, can be written as \begin{equation} \label{eq:cp decomposition 5} c c=100\left(1-\frac{\sum_{l=1}^{R} \sum_{m=1}^{R} \sum_{n=1}^{R}\left(g_{l m n}-\lambda_{l m n}\right)^{2}}{R}\right) \end{equation} where the closer the cc score is to 100, the better the CP model describes the data. \subsection{Workout Distance Prediction} Our distance prediction model is based on the MLP architecture, i.e., a fully connected neural network containing one or more hidden layers. As the MLP is the basic form of neural network for classification and regression tasks, we adopt it to build a model that predicts workout distance from the contextual input features $X_\text{context} = [x_\text{user\_embed}, x_\text{route\_embed}, x_\text{context\_1}, x_\text{context\_2}, \cdots, x_\text{context\_n}]$. For the MLP model shown in Fig. \ref{fig:MLP_Model_Structure}, we feed the contextual input features $X_\text{context}$ into the network, and the pre-activation output of the hidden layer $\text{net}_j$ is calculated by \begin{equation} \label{eq:mlp 1} \text{net}_j=W_j X_\text{context} + B_j \end{equation} The output of the hidden layer after activation is $y_j = \text{ReLU}(\text{net}_j)$ where \begin{equation} \label{eq:mlp 2} \text{ReLU}(x) = \text{max}(0, x) \end{equation} Similarly, we compute the pre-activation output at the output layer by \begin{equation} \label{eq:mlp 3} \text { net }_k=W_k y_{j} + B_k \end{equation} Finally, the model predicts a normalized workout distance by \begin{equation} \label{eq:mlp 4} y_k = \text{Sigmoid}(\text { net }_k) \end{equation} where \begin{equation} \label{eq:mlp 5} \text{Sigmoid}(x) = \frac{1}{1+e^{-x}} \end{equation} \begin{figure}[t] \centering \includegraphics[width=12cm]{Figures/MLP_Model_Structure.png} \caption{Our Workout Distance Prediction Model with MLP} \label{fig:MLP_Model_Structure} \end{figure} \subsection{Speed and Heart Rate Sequence Prediction} We propose a Multi-layer
Bi-LSTM model to predict speed and heart rate sequences, since both inputs and outputs are sequential. LSTM-based models are an extension of RNNs that implement memory states to store information and gate mechanisms to control information flow, alleviating the vanishing gradient problem \cite{9005997}. More specifically, they use a cell state and a hidden state to carry information. A forget gate controls the preservation and removal of information passed from the last time step; an input gate decides what new information to store; and an output gate specifies what information contributes to the output at the current time step. Firstly, the output of the forget gate $f_{t}$ is computed as: \begin{equation} \label{eq:lstm 1} f_{t}=\sigma\left(W_{f}\left[h_{t-1}, x_{t}\right]+b_{f}\right) \end{equation} where \begin{list}{$\circ$}{} \item $h_t$, $h_{t-1}$ are the hidden states of the LSTM at time steps $t$ and $t-1$ respectively \item $x_t$ is the input at time step $t$ \item $\sigma$ denotes the sigmoid activation function \end{list} Then the candidate produced through the input gate $\tilde{C}_t$ is computed as: \begin{equation}\label{eq:lstm 2} \begin{cases} i_{t}=\sigma\left(W_{i}\left[h_{t-1}, x_{t}\right]+b_{i}\right)\\ \tilde{C}_t=\tanh\left(W_{C}\left[h_{t-1}, x_{t}\right]+b_{C}\right) \end{cases} \end{equation} where $\tanh$ denotes the hyperbolic tangent activation function. Meanwhile, the cell state from the previous time step $C_{t-1}$ is updated by: \begin{equation} \label{eq:lstm 3} C_{t}=f_t * C_{t-1} + i_t * \tilde{C}_t \end{equation} Lastly, the output at the current time step $h_{t}$ is computed as: \begin{equation}\label{eq:lstm 4} \begin{cases} o_{t}=\sigma\left(W_{o}\left[h_{t-1}, x_{t}\right]+b_{o}\right)\\ h_{t}=o_t * \tanh (C_t) \end{cases} \end{equation} \newline Fig. \ref{fig:Speed_HeartRate_Model} shows the detailed structure of the 2-layer Bi-LSTM model.
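Putting the gate equations together, a single LSTM cell step can be sketched in NumPy as below (weight names follow the equations; note that in the standard formulation the hidden state is the elementwise product $h_t = o_t * \tanh(C_t)$):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, C_prev, W_f, b_f, W_i, b_i, W_C, b_C, W_o, b_o):
    """One LSTM step; each weight matrix acts on the concatenation [h_{t-1}, x_t]."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W_f @ z + b_f)           # forget gate
    i_t = sigmoid(W_i @ z + b_i)           # input gate
    C_tilde = np.tanh(W_C @ z + b_C)       # candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde     # cell state update
    o_t = sigmoid(W_o @ z + b_o)           # output gate
    h_t = o_t * np.tanh(C_t)               # hidden state (elementwise product)
    return h_t, C_t

# With zero weights and zero initial states, all gates open halfway and h_t stays 0.
d, h = 3, 2
Z = np.zeros
h_t, C_t = lstm_step(np.ones(d), Z(h), Z(h),
                     Z((h, h + d)), Z(h), Z((h, h + d)), Z(h),
                     Z((h, h + d)), Z(h), Z((h, h + d)), Z(h))
```

A bidirectional layer runs this recurrence once forward and once backward over the sequence and concatenates the two hidden states at each step.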
At step $t$, with input $X_t = [X_\text{context}, x_{\text{altitude}\_t}, x_{\text{distance}\_t}]$, heart rate $y_{\text{heart\_rate}\_t}$ is predicted through the hidden state of the first Bi-LSTM layer and speed $y_{\text{speed}\_t}$ is predicted through the hidden state of the second Bi-LSTM layer. We propose predicting the heart rate and speed at two different stages, instead of predicting both at the second stage or using a single LSTM stage. \begin{figure}[t] \centering \includegraphics[width=12cm]{Figures/Bilstm.png} \caption{Speed and HeartRate Model} \label{fig:Speed_HeartRate_Model} \end{figure} When training the models, we find that the predicted heart rate sequence is always positively correlated with the input calories. However, in some cases, the predicted speed might be negatively correlated with the calorie input during inference, which is counter-intuitive. Therefore, we design the model structure to predict the heart rate at the first layer and then use its hidden states as the input to the second LSTM layer to predict the speed. This approach encourages the model to learn the correct correlation between speed and input calories, alleviating the counter-intuitive problem. Formally, if we denote the output of the LSTM as $h^{(i)}$, where $i$ refers to the $i^{th}$ layer, the predicted heart rate and speed are computed as: \begin{equation}\label{eq:lstm 5} y^{(i)} = \text{SELU}(W^{(i)} h^{(i)} + b^{(i)}) \end{equation} where SELU denotes the scaled exponential linear unit activation function. Inspired by \cite{NIPS2017_5d44ee6f} and \cite{10.1145/3308558.3313643}, our sequence model adopts SELU as the activation function, because it induces self-normalizing properties such as variance stabilization, which mitigate vanishing and exploding gradients.
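SELU itself is simple to state; the sketch below uses the fixed constants from the self-normalizing networks paper \cite{NIPS2017_5d44ee6f}:

```python
import numpy as np

# Fixed constants derived in the self-normalizing networks paper.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """SELU(x) = scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise."""
    x = np.asarray(x, dtype=float)
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))
```

Unlike ReLU, SELU is negative for negative inputs and saturates at $-\lambda\alpha$, which is what pushes activations toward zero mean and unit variance across layers.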
Furthermore, the use of the bidirectional mechanism helps the model learn the underlying context more effectively by traversing the sequential input features twice, i.e., both forward and backward \cite{9005997}. For instance, understanding whether the workout route is going uphill or downhill facilitates a better prediction of speed and heart rate. \section{Related Work} \label{sec:related work} In this section, we first briefly review technologies for sequential data modeling and for building recommender systems. Then we discuss related studies on different fitness recommendation topics, e.g., Fitness Activity Detection and Recommendation, Sequential Fitness Profile Recommendation, and Privacy Preservation in Fitness Recommendation. \textbf{Recurrent Neural Networks (RNNs).} In recent years, RNNs, and particularly LSTM networks, have been widely used for processing time-series data in sequential modeling tasks, for example, speech recognition \cite{Jorge_2019}, machine translation \cite{NIPS2014_a14ac55a}, image captioning \cite{Chu2020}, etc. Sutskever et al. proposed an end-to-end approach to the sequence-to-sequence mapping problem by constructing a multi-layer LSTM network, which maps the input sequence to a fixed-dimensional vector through an LSTM and then uses another LSTM to output the target sequence \cite{NIPS2014_a14ac55a}. They put forward several innovative techniques to improve the model, such as using a stacked LSTM structure and reversing the input vectors. On English-French translation tasks, their model outperforms single forward LSTM models. Moreover, the Bi-directional LSTM is an extension of traditional LSTMs that improves model performance on many sequential modeling tasks. This is because a unidirectional LSTM network only propagates information from past to future, while a Bi-directional LSTM network processes inputs in two directions, one from past to future and one from future to past \cite{650093}.
Therefore, it consolidates the context of each step in the input sequence. Similarly, in our work, we consider a Bi-directional stacked LSTM architecture to solve the sequence-to-sequence modeling task. \textbf{Technologies of Recommender Systems.} Recommender systems generally follow two basic approaches: collaborative filtering or content-based filtering \cite{10.1145/2481244.2481250}. Collaborative filtering assumes that users who liked something in the past will continue to like similar things in the future. The K-Nearest Neighbours (KNN) method is among the most favored approaches to collaborative filtering: it finds user profiles similar to a given user's in order to estimate that user's likes and dislikes for an item \cite{10.1145/371920.372071}. In contrast, content-based filtering methods are useful in situations where user information is sufficient but item information is not. They work as a classifier that models the likes and dislikes of the users to evaluate an item \cite{10.1145/371920.372071}. In collaborative filtering-based recommender systems, a two-dimensional data matrix associated with users and items is usually constructed. The 2-D matrix can be factorized into two matrices, namely a user matrix and an item matrix, that contain the latent characteristics describing the user preferences and item profiles \cite{10.1145/1864708.1864727}. One disadvantage of the matrix factorization approach is that the interaction between a user and an item is captured only by a scalar, the user's rating of the item. To overcome this shortcoming, Karatzoglou et al. discussed a multidimensional equivalent of the 2-D matrix factorization approach, named Tensor Decomposition \cite{10.1145/1864708.1864727}. Contrary to the single rating feature in the matrix decomposition method, a multidimensional contextual feature vector is established between each user and item.
This method allows flexible integration of context information when learning entity embeddings to provide context-aware recommendations. In our work, we establish a rich context feature vector between each user and workout route to derive user embeddings and workout route embeddings. \textbf{Fitness Activity Detection \& Recommendation.} Guo et al. proposed FitCoach, a virtual fitness coach built upon data sensed by IoT devices \cite{8057208}. It aims at detecting people's workout statistics, such as exercise types, with a lightweight support vector machine (SVM) classifier and providing fine-grained feedback on exercise form scores, i.e., motion strength and performing period, to assist users in maintaining proper exercise postures and avoiding injuries. Similarly, Zhao et al. introduced a fitness recommender system designed to generate personalized and gamified content to promote daily physical activities \cite{pmid33200994}. They collected various types of user data and built separate sub-models for user profile prediction with a non-machine learning approach. The sub-models work individually, but their results are jointly input into a decision tree-based recommendation engine to create personalized recommendations, such as extending an existing exercise or suggesting a different type of activity. Yong et al. proposed an IoT-based intelligent fitness system that monitors people's health with data collected by IoT devices, recognizes people's actions using a convolutional neural network (CNN) based model, and provides fitness-related recommendations, such as reminding users to attend fitness courses or go to gyms based on user predefined exercise plans \cite{10.1016/j.jpdc.2017.05.006}. They explicitly collected users' scores on the exercise items to build a collaborative filtering based recommender system to realize personalization. Similarly, Dharia et al.
presented the PRO-Fit framework that collects users' multivariate data, including fitness preferences, calendar data, and social network data \cite{10.1007/s00779-017-1039-8}. It applies machine learning algorithms to classify users' activities into specific types, which are then used to establish user profiles reflecting their current lifestyle (sedentary vs. active). These user profiles are further fed into a collaborative filtering-based recommender system for personalized fitness activity or fitness partner recommendations. Unlike most studies that target the general public, Mogaveera et al. introduced a machine learning based health monitoring and fitness recommendation system targeting patients \cite{9358605}. The system collects data from both patients (body details, disease \& health records) and doctors (disease categories) to monitor patients' condition based on some predefined rules and to provide personalized recommendations of diet and exercise plans through a decision tree based machine learning model. Compared with the existing research, our focus is on recommending the dynamic changes of workout speed and heart rate during an exercise, which are complementary dimensions in the field of fitness recommendation. In addition, the construction of these exercise activity recommendation systems usually boils down to recommending some predefined fitness categories. In contrast, we propose a deep learning framework to solve a rarely studied sequence-to-sequence regression task that models workout speed and heart rate. Moreover, as discussed above, most researchers have achieved personalization with rule-based or collaborative filtering-based methods. In contrast, we propose learning user profiles based on tensor decomposition. Compared with the rule-based method, this approach is more scalable, and compared with the collaborative filtering method, it can realize context-aware user profiles \cite{10.1145/1864708.1864727}.
\textbf{Sequential Fitness Profile Recommendation.} Berndsen et al. proposed a recommender system that predicts the target finish time of a marathon with an XGBoost model, followed by a collaborative filtering \& K-Nearest-Neighbours (KNN) based framework to generate pacing recommendations for runners to achieve the target finish time \cite{9356261}. More specifically, historical training data of athletes, such as GPS data, completion time, and pacing information, are used to predict the target completion time. Meanwhile, a runner's user profile can be inferred by applying collaborative filtering and further used to find the successful marathon finishers with the most similar user profiles by applying the KNN algorithm. Finally, the pacing strategy recommendation is derived from the pacing strategies of these successful marathon finishers with similar user profiles. Compared with their work, the exercises we focus on, such as jogging and biking, are less competitive than marathons. However, our task of predicting workout distance is similar to the prediction of target finish time in \cite{9356261}, except that we propose a multi-layer perceptron model to solve this task. In addition, their approach to workout pacing recommendation looks up historical exercises and follows existing pacing records, whereas we propose an LSTM based model to generate personalized new pacing sequences. Ni et al. proposed a personalized fitness recommendation system named FitRec to solve two tasks: predicting the heart rate and speed at all time steps of a future workout, and short-term prediction of heart rate and speed at a specific time step during an ongoing workout \cite{10.1145/3308558.3313643}.
The former predicts a future workout profile to show users the anticipated performance in terms of heart rate and speed, while the latter predicts transient heart rate and speed during an ongoing exercise, so as to facilitate tasks like anomaly detection or real-time decision-making. FitRec was primarily designed around LSTM modules. More specifically, they used LSTMs to project the sequence of the most recent workout measurements of the user into a dense vector, which forms the user embedding vector. Then, the user embedding vector is concatenated with other input attributes of the new workout sequence as the combined contextual sequence input for two different LSTM networks corresponding to the two tasks, which adopt a 2-layer stacked LSTM architecture and an LSTM-based encoder-decoder architecture, respectively. Although we study the same dataset as FitRec, and its task of predicting a future workout profile has a similar purpose to ours, our work differs in several ways. First, we add caloric expenditure to the input features, so that our system can provide different recommendations according to users' inputs of target caloric expenditure, which enhances the practical value of our research. Furthermore, FitRec predicts the speed of a workout using distance and time as inputs, which arguably may be seen as a trivial problem solvable by simple math. Therefore, we propose removing the time sequence feature from the inputs to avoid potential information leakage. Besides, we improve the derivation of entity embeddings by learning them from all historical workout records of the users, instead of only from the latest one. Our tensor decomposition method for learning entity embeddings also differs from theirs.
\textbf{Privacy Preservation in Fitness Recommendation.} In privacy-aware recommendation systems, privacy preservation is usually achieved through independent encryption protocols in the process of data collection or data exchange \cite{Sanchez2020}. \cite{BEG2021102874} proposed a privacy-preserving data collection protocol for mobile app recommendation systems based on a reversible data transform algorithm. Under this protocol, a user's data is sent through a user group with encryption, which avoids direct communication between a user and a data collector. Likewise, Ukil addressed the privacy preservation problem through a random security key pre-distribution method \cite{5628748}. Under the proposed scheme, private data can be collected from various sources and aggregated by the service provider or the server securely. \cite{4664769} studies the challenge of privacy preservation in a more specific scenario, where the recommendation systems of two independent business entities are merged. In this scenario, both a homomorphic encryption approach and a scalar product approach are proposed to encrypt raw data before data exchange takes place between the two systems. Badsha et al. also proposed a homomorphic encryption approach to enforce privacy preservation in building a recommender system \cite{Badsha2016}. More specifically, they collected encrypted ratings on items from users and sent the ciphertexts to recommender servers to calculate the similarity among the rated items homomorphically. The similarity scores were subsequently decrypted by the users without revealing any private information and then used to build a recommender system using content-based filtering and collaborative filtering methods. In contrast, our work focuses on developing the recommendation algorithm itself with minimum private data, which is complementary to the data encryption approaches discussed above. Sanchez et al.
conducted a study on users' preferences for privacy permissions in the fitness domain and recommended a series of strategies for users to set permissions according to the collected and shared data \cite{Sanchez2020}. They found that users have the highest acceptance rates for privacy permissions concerning gender and fitness types, while they are reluctant to share their height, weight, age, and social network information. The results of this study confirm our hypothesis about privacy protection and are consistent with our attempt to construct a personalized fitness recommendation system using minimum private information from the users. To the best of our knowledge, most existing work in the fitness recommendation domain attempts to collect multidimensional demographic parameters from the users to improve the performance of their recommendation algorithms. In this paper, we demonstrate that a recommendation algorithm can be developed using minimum private data from users.
\section{Introduction} Task-oriented dialogue has a wide range of applications for handling everyday tasks such as booking hotels, movie tickets, restaurants, etc. Systems supporting these tasks mostly adopt a pipelined modular architecture, which usually contains a natural language understanding (NLU) module for recognizing users' intent, a dialogue state tracking (DST) module for extracting and tracking the dialogue states, a policy (POL) module for deciding the system action, and a natural language generation (NLG) module for generating the response according to the system action. The DST module is the core component of a task-oriented dialogue system, since the system response depends on its result. An excellent DST can improve the user experience by reducing the number of interactions. The challenge in DSTC 9 Track 2 \citep{gunasekara2020overview} is to build a cross-lingual multi-domain DST that can track the dialogue state in a resource-poor language whose original dataset is in a resource-rich language. The dataset of this challenge is based on MultiWOZ 2.1 \citep{eric2019multiwoz} and CrossWOZ \cite{zhu2020crosswoz}. Competitors should track the dialogue state in Chinese with the MultiWOZ 2.1 dataset and in English with the CrossWOZ dataset, respectively. To solve the above tasks, we propose a multi-task model to predict the dialogue state and the state operation at each turn. The main contributions are as follows: \begin{quote} \begin{itemize} \item To simplify the problem, we use machine translation systems such as Google Translate and Baidu Translate to translate the training dataset from the resource-rich language to the resource-poor language, so that the task can be treated as a traditional DST problem whose training and test data are in the same language. \item The dialogue state is extracted from the dialogue history, whose text often exceeds 512 tokens, while the Transformer encoder stack usually has a length limit (512 tokens).
We propose a general method to fuse the features from any number of historical turns by defining different masked self-attention structures in the Transformer network. Furthermore, we use this feature fusion method to extract the global context information and local context information, respectively, and merge those two representations to predict the dialogue state. \item We adjust the model architecture proposed in \citet{shan2020contextual} by adding a masked hierarchical Transformer module, because its local information module only contains the features from a single turn. The added module can merge multi-turn features into the local representations, so that the local representations can be used for multi-class prediction such as state operation prediction. We use a three-class state operation prediction as the auxiliary task, whereas it is a bi-class classification task in \citet{shan2020contextual}, because the local representation module in their method only contains information from the current turn. \item Exploring data augmentation to improve the model performance. \end{itemize} \end{quote} \begin{table*}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{l} \hline \begin{tabular}[c]{@{}l@{}}\textbf{User}: I want to find a cheap restaurant in the north part of town. What is the address and food ?\\ \textbf{State}: \textit{price range=cheap}\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}\textbf{Sys}: There is \textbf{\underline{Royal Spice which serves Indian food }}and \textbf{\underline{Da Vinci Pizzeria which serves Italian food}}. Would you like me to book you a table at either of these restaurants ?\\ \textbf{User:} \textbf{\underline{Royal Spice }}sounds good.
Can you reserve a table for 8 at 15:45 on Saturday ?\\ \textbf{State: }\textit{price range=cheap; restaurant-name=Royal Spice; \underline{\textbf{food=Indian}}}\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}\textbf{Sys}: Unfortunately , i was not able to book for that specific time and day . Can i try a different day or time for you ?\\ \textbf{User:} How about at 14:45 ?\\ \textbf{State}: \textit{price range=cheap; restaurant-name=Royal Spice; food=Indian}\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}\textbf{Sys}: Sorry, the booking was unsuccessful. would you like for me to find another restaurant ?\\ \textbf{User:} Please tell me \textbf{\underline{the address of Da Vinci Pizzeria}}.\\ \textbf{State}: \textit{price range=cheap; restaurant-name=Da Vinci Pizzeria; \underline{\textbf{food=Italian}}}\end{tabular} \\ \hline \end{tabular}} \caption{An example dialogue. At the last turn (the \textbf{$4$-th} turn), the underlined value of slot ``food'' is corrected by the information at the \textbf{$2$-nd} turn.} \label{example} \end{table*} \section{Related Work} Traditional DST methods can be divided into two major types: open-vocabulary \citep{le2020non,goel2019hyst, wu2019transferable} and predefined-ontology \citep{lee2019sumbt,shan2020contextual}. The former generates a slot value at each turn with a generative model, such as the decoder stack of an RNN or Transformer, while the latter predefines the dialogue ontology and reduces DST to a classification problem. Open-vocabulary methods can partly track unseen slot values but usually have lower performance than predefined-ontology methods. Since the ontology in the \textit{ninth DSTC Track 2 Cross-lingual Dialog State Tracking Task} is predefined, we adopt predefined-ontology methods to achieve better performance. On the other hand, traditional DST models \citep{henderson2014word,chao2019bert} usually neglect the dialogue history and consider only the utterances at the current turn.
To avoid the problem of lacking historical context, recent researchers employ autoregressive models to extract historical information. Some use a low-level network such as an RNN or GRU to model interactions between context and slots \citep{lee2019sumbt,goel2019hyst}, while others use partial context only \citep{kim2019efficient,sharma2019improving}. These methods cannot extract the relevant context effectively. Since the Transformer network was proposed in 2017 \citep{vaswani2017attention}, large-scale pre-trained models such as BERT \citep{devlin2018bert} and RoBERTa \citep{liu2019roberta} have demonstrated strong performance on NLP tasks. However, due to the maximum sequence length limit (e.g., 512 for BERT-Base), these models are unable to handle sequences composed of thousands of tokens. We apply the feature fusion method to the dialogue history; the method can fuse any partial history information through a predefined mask in the Transformer network. Furthermore, we treat the historical dialogue information as global information, and the utterance at the current turn together with its adjacent dialogue history as local information, in contrast to most existing DST methods, which depend on either local or global information only. The local feature aims to predict state operations (UPDATE, CARRYOVER, DONTCARE), while the global feature exploits relevant context from the dialogue history. To this end, we formulate a two-branch architecture, with one branch for learning localized state operations and the other for learning slot value extraction. The two branches are not independent but are learned jointly, so as to discover and optimize correlated complementary feature selections in the local and global representations.
\section{Approach} \subsection{Problem definition} We assume a dialogue with $T$ turns $D=\left\{ \left( A_{1},U_{1}\right) ,...,\left( A_{T},U_{T}\right) \right\}$, where $A_{t}$ denotes the agent response and $U_{t}$ the user utterance at turn $t$, and the predefined ontology $O=\left\{ \left( s,v_{s}\right) ,s\in S,v_{s}\subset V\right\}$, where $S$ is the set of slot names; here a slot name is denoted as domain-slot, for example, ``restaurant-name''. $V$ is the total set of slot values and $v_{s}$ is the set of slot values belonging to slot $s$, i.e. $v_{s}\subseteq V$. We define $B_{t}=\left\{ \left( s^{j}_{t},v^{j}_{s,t}\right) ,1 \leqslant j \leqslant J\right\} $ as the belief state at each turn $t$, where $J$ is the total number of slots. We assign ``none'' to slots without a value at the current turn. \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{network_paper} \caption{The architecture of our model. We utilize the Masked Hierarchical Transformer to encode the global and local information, respectively. We combine the global and local information to predict slot values and use the local information to predict the state transition at turn $t$. Finally, we train both tasks jointly.} \label{fig1} \end{figure*} \subsection{Joint Learning Multi Loss } Figure \ref{fig1} shows the design of the proposed model architecture. The joint learning model consists of a two-branch Transformer network: (1) a local branch that learns the state transition of each slot; (2) a global branch responsible for learning the slot value label of each slot at each turn. To discover correlated complementary information between local and global feature selections, the joint learning scheme follows two principles: \begin{quote} \begin{itemize} \item Shared low-level features. We construct the two types of branches on a shared lower BERT model.
The intuition is that low-level features, such as word and phrase representations, are common to both branches. Since the local and global feature learning branches are two related learning tasks, sharing the low-level BERT model reduces the risk of overfitting. \item Multi-task independent learning. To maximize the complementary discriminative features learned from local and global representations, the remaining layers of the two branches are learned independently, which aims to preserve both local saliency in state operation prediction and global robustness in dialogue history representation. \end{itemize} \end{quote} \subsection{Feature Fusion } We encode each turn-level sentence with the BERT-base model, since its length is usually less than 128 tokens. On the other hand, since the context-level sequence length usually exceeds 512, we use the masked hierarchical Transformer to fuse the features from each turn. In addition, we fuse the entire historical context into a global representation that contains all dialogue information up to the current turn, and the $n$-history context into a local representation at the current turn. The local representation is also used to predict the state operation at the current turn (e.g. \textit{carryover, update, dontcare}). Since the state operation cannot be decided from a single turn, we use the $n$-history context as the local representation ($n\geq1$). We also adjust the model architecture of \citet{shan2020contextual} by adding a masked hierarchical Transformer module after the BERT encoder. \subsection{Network Construction} As shown in Figure 1, the slot names and slot values are encoded by the same BERT with fixed weights. The dialogue is first encoded by a trainable BERT at the turn level, and the turn-level features are then fused into global and local representations, respectively. Moreover, the global and the local representations are merged by a gate mechanism.
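The gate mechanism is not spelled out in this section; a common sigmoid-gate fusion (our assumption, in the spirit of context-gating designs, with hypothetical names) blends the global and local vectors as a convex combination:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_fuse(c_global, c_local, W_g, b_g):
    """Elementwise gate g in (0,1) decides how much of each representation to keep."""
    g = sigmoid(W_g @ np.concatenate([c_global, c_local]) + b_g)
    return g * c_global + (1.0 - g) * c_local

# With zero gate parameters, g = 0.5 and the fusion is a plain average.
d = 4
out = gate_fuse(np.ones(d), np.zeros(d), np.zeros((d, 2 * d)), np.zeros(d))
```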
The vectors from this gate mechanism are then used to compute the distance to the slot value labels (similar to \citet{shan2020contextual}). Furthermore, the local representations are used for predicting state operations as an auxiliary task. To align the text with Figure 1, we describe each module in more detail. \subsubsection{Dialogue History Encode} As the belief state depends on the historical dialogue, \citet{shan2020contextual} use a masked hierarchical Transformer to encode the dialogue context. We extend this method to encode the context into global and local representations with two different mask matrices. An example of Masked Self-Attention is shown in Figure \ref{fig2}. The information of the utterance at each turn is aggregated by a trainable BERT, and the utterance at turn $t$ consists of the user utterance $U_t$ and the agent response $A_t$ \citep{lee2019sumbt}. We denote the turn input as $D_{t}=\left[ CLS\right] \oplus A_{t}\oplus \left[ SEP\right] \oplus U_{t}\oplus \left[ SEP\right] $ and the turn-level representation $h_t$ is encoded by BERT as follows:\\ \begin{equation} h_{t}=\text{BERT}_{\text{uttr}}\left( D_{t}\right) \label{utterance_encoder} \end{equation} \noindent The slot name $s$ and the slot value $v$ in \citet{lee2019sumbt} are encoded by a BERT with fixed weights. The sequences of the slot name and slot value are denoted as $q_{s}=\left[ CLS\right] \oplus s\oplus \left[ SEP\right]$ and $q_{v}=\left[ CLS\right] \oplus v\oplus \left[ SEP\right] $, and the outputs at the $\left[ CLS\right] $ token of $q_s$ and $q_v$ are used to represent the slot name and slot value, respectively.
\begin{equation} h_{s}=\text{BERT}_{\text{slot}}\left( q_{s}\right) \label{slotname_encoder} \end{equation} \begin{equation} h_{v}=\text{BERT}_{\text{slot}}\left( q_{v}\right) \label{slotvalue_encoder} \end{equation} \noindent In general, the slot value at a turn cannot be determined from the current utterance alone; it also depends on previous turns, as shown in Table \ref{example}. The local feature with $n$-history context $c^{loc}_{s,t}$ at turn $t$ is defined as a multi-head attention between the slot name and the $n$-history context, where $n$-history denotes the $n$ turns of dialogue history before the current turn $t$, i.e., $\left\{ uttr_{t-n},uttr_{t-n+1},...,uttr_{t}\right\} $. Formally, $c^{loc}_{s,t}$ is given by: \begin{equation} c^{loc}_{s,t}=\text{MultiHead}\left( h_{s},c_{s,t-n\leqslant i\leqslant t},c_{s,t-n\leqslant i\leqslant t}\right) \label{local_information_encode} \end{equation} \noindent where \textbf{$c_{s,t-n\leqslant i\leqslant t}$} is the output of the masked hierarchical encoder. Figure \ref{fig3} shows what the $n$-history mask matrix looks like, and \textbf{$c_{s,t-n\leqslant i\leqslant t}$} is computed as follows: \begin{gather} m^{0}=\left[ c^{word}_{s,1},c^{word}_{s,2},...,c^{word}_{s,t}\right] \ +\ \left[ PE\left( 1\right) ,PE\left( 2\right) ,...,PE\left( t\right) \right],\nonumber \\ m^{N}=\ \text{MaskedTransformer}\left( m^{N-1},m^{N-1},m^{N-1}\right), \nonumber \\ c_{s,t-n\leqslant i\leqslant t} = m^{N} \end{gather} \noindent where $m^{N}$ is the output of the $N$-layer MaskedTransformer.
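To make the masking scheme concrete, the $n$-history mask of Figure \ref{fig3} can be sketched as a turn-level matrix in which each row (the current turn) assigns score $0$ to the attendable turns and $-\infty$ elsewhere, following the masked self-attention convention of Figure \ref{fig2}; the row-per-query layout and the function name here are illustrative assumptions, not our exact implementation.

```python
import numpy as np

def n_history_mask(num_turns, n):
    """Turn-level attention mask: row i (the current turn) assigns score 0
    to the attendable turns max(0, i - n) .. i and -inf everywhere else,
    so that softmax gives the masked positions zero weight.
    The global mask is the special case n = num_turns - 1."""
    mask = np.full((num_turns, num_turns), -np.inf)
    for i in range(num_turns):
        mask[i, max(0, i - n): i + 1] = 0.0
    return mask

local_mask = n_history_mask(num_turns=4, n=1)   # 1-history local mask
global_mask = n_history_mask(num_turns=4, n=3)  # global mask: all history
```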
$PE\left( \cdot \right)$ denotes the positional encoding defined in \citet{devlin2018bert}, and $c_{s,t}^{word}$ is the multi-head attention between the \textit{slot name} and the \textit{utterance tokens} at turn $t$, defined as follows: \begin{equation} c_{s,t}^{word} = \text{MultiHead}(h_s,h_t,h_t) \label{word_information_encode} \end{equation} \noindent We use \textbf{$c_{s,0\leqslant i\leqslant t}$} and \textbf{$c_{s,t-n\leqslant i\leqslant t}$} as the global and local contextual information, respectively. The feature fusion method is shown in Figure \ref{fig1}; we expect the fused feature to better represent the dialogue by balancing the information from the global and local context. Finally, we follow \citet{shan2020contextual} to compute the loss for slot-value prediction; the probability distribution of slot value $p\left(v_{t}\mid U_{\leq t},A_{\leq t},s\right) $ at turn $t$ and its loss are defined as follows: \begin{gather} p\left( v_{t}\mid U_{\leq t},A_{\leq t},s\right) \ =\ \frac{exp\left( -\| d_{s,t}-h_{v}\| \right) }{\sum_{{v^\prime} \in V_{s}} exp\left( -\| d_{s,t}-h_{v^{\prime }}\| \right) } \nonumber \\ L_{sv}=\sum_{s\in S} \sum^{T}_{t=1} -\text{log}\left( p\left( v_{t}\mid U_{\leq t},A_{\leq t},s\right) \right) \end{gather} \noindent where $d_{s,t}$ denotes the fused representation for slot name $s$; it is matched against each slot value representation $h_v$ belonging to $s$. \subsubsection{State Operation Decoder} We define \textit{\textbf{O}}=\{\textit{CARRYOVER, DONTCARE, UPDATE}\} as the state operation category set. An operation $r_t^j$ of slot $j$ is classified by the state operation predictor; the meaning of each category is as follows: \begin{quote} \begin{itemize} \item[] \textit{\textbf{CARRYOVER}}: the slot value is unchanged. \item[] \textit{\textbf{UPDATE}}: the slot value is changed to a new value. \item[] \textit{\textbf{DONTCARE}}: the slot value does not need to be considered or tracked in the dialogue.
\end{itemize} \end{quote} \subsubsection{Input Representation for State Operation} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{mask_self_attention} \caption{Masked Transformer. Positions with \textbf{-inf} in the mask matrix are masked out, since the softmax of \textbf{-inf} is \textbf{zero}. } \label{fig2} \end{figure} As the conversation progresses, the state operation at each turn is determined by the previous dialogue state and the current dialogue turn. This flow of state can be modeled by an autoregressive decoder. We therefore use an RNN as our decoder, with $c_{s,t}^{loc}$ and $h_{s,t-1}^{loc}$ as inputs: \begin{equation} h^{loc}_{s,t} = \text{RNN}(c_{s,t}^{loc},h_{s,t-1}^{loc} ) \end{equation} \noindent where $h_{s,t}^{loc}$ is the representation of the state operation at turn $t$. The probability distribution over state operations $p_{s,t}^{sop}$ and its loss are defined as follows: \begin{gather} p^{sop}_{s,t}=\text{softmax}\left( W_{sop}h^{loc}_{s,t}\right), \nonumber \\ L_{sop}\ =\ \sum_{s\in S} \sum^{T}_{t=1} -\left( Y_{s,t}^{sop}\right)^{T} \text{log}\left( p^{sop}_{s,t}\right) \end{gather} \noindent where $W_{sop}$ is a linear projection that yields the operation probability distribution $p^{sop}_{s,t}$, and $Y^{sop}_{s,t}$ is the operation label of slot $s$ at turn $t$. We take the sum of the losses mentioned above as the final joint loss $L_{joint}$: \begin{equation} L_{joint}=\ L_{sv}+L_{sop} \end{equation} \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{different_mask} \caption{$n$-history mask. Blocks with score 0 in the mask matrix mean the corresponding utterance is attendable.
} \label{fig3} \end{figure*} \section{Experiment} \subsection{Baseline Models} We compare the performance of the proposed method with the following models: \begin{quote} \begin{itemize} \item[] \textbf{SUMBT}: Uses a trainable BERT to encode system and user utterances and a BERT with fixed weights to encode slot-type and slot-value information, then predicts the slot-value label based on a distance metric \citep{lee2019sumbt}. \item[] \textbf{CHAN}: Employs a contextual hierarchical network to fuse contextual information and uses the same prediction method as SUMBT \citep{shan2020contextual}. \item[] \textbf{NADST}: Generates the dialogue state at each turn with a non-autoregressive decoder \citep{le2020non}. \item[] \textbf{TRADE}: Uses an encoder-decoder model to generate slot-value labels \citep{wu2019transferable}. \end{itemize} \end{quote} \begin{table}[] \resizebox{\columnwidth}{!}{ \begin{tabular}{l|c|c|c} \toprule[2pt] \textbf{ \multirow{2}{*}{Model}} & \textbf{\multirow{2}{*}{Ontology}} & \textbf{MultiWOZ(en $\rightarrow$ zh) }& \textbf{CrossWOZ(zh $\rightarrow$ en) }\\ \cline{3-4} & & \textbf{Joint(\%) } & \textbf{Joint(\%) } \\ \hline TRADE & $\times$ & 29.63 & 7.9 \\ NADST & $\times$ & 31.21 & 8.3 \\ SUMBT & $\checkmark$ & 49.4 & 10.6 \\ CHAN & $\checkmark$ & 49.19 & 11.3 \\ \hline \textbf{CHAN + 4-class STP(Ours)} & \textbf{$\checkmark$} & \textbf{50.16 } & \textbf{11.87} \\ \bottomrule[2pt] \end{tabular}} \caption{ Joint accuracy on the human val sets of MultiWOZ(en $\rightarrow$ zh) and CrossWOZ(zh $\rightarrow$ en), respectively. The ontology column indicates whether the model is based on a predefined ontology.} \label{baseline_model_result} \end{table} \noindent Table \ref{baseline_model_result} shows the joint accuracy of the baseline models and our model on the MultiWOZ (en $\rightarrow$ zh) and CrossWOZ (zh $\rightarrow$ en) human val datasets. Our model achieves 50.16\% and 11.87\%, improvements of 0.96\% and 0.57\%, respectively.
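Returning to the slot-value scoring defined in the previous section, the distance-based probability $p(v)\propto \exp\left(-\| d_{s,t}-h_{v}\|\right)$ can be sketched numerically as follows; the vectors here are illustrative placeholders, not actual model outputs.

```python
import numpy as np

def slot_value_probs(d, value_embs):
    """Distance-based value scoring: p(v) is proportional to
    exp(-||d - h_v||), so the candidate value whose fixed BERT embedding
    h_v is closest to the fused slot representation d scores highest."""
    dists = np.array([np.linalg.norm(d - h_v) for h_v in value_embs])
    scores = np.exp(-dists)
    return scores / scores.sum()
```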
\subsection{Data Augmentation} We found that the human evaluation dataset is generated from real-life conversations, while the training data is generated by Google Translate and is therefore not as natural as human language. In this work, we use data augmentation to improve model performance. We use translation services provided by Tencent and Baidu, as well as our own translation model, to translate the source language into the target language. We also found that MultiWOZ(en $\rightarrow$ zh) and CrossWOZ(zh $\rightarrow$ en) are provided by different organizers and both contain some annotation errors, so we use different methods to correct the slot value labels. For the MultiWOZ(en $\rightarrow$ zh) dataset, we use the labels in MultiWOZ\_2.2 as the correct labels to fix the older ones; the changes for each slot on MultiWOZ(en $\rightarrow$ zh) are shown in Table \ref{slotvalue_modified}. \begin{table}[] \centering \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{cccc} \toprule[2pt] \textbf{\textit{Slot name}} & \textbf{\textit{\# of total slot values}} & \textbf{\textit{\# modified slot values}} & \textbf{\textit{\% of slot values modified}} \\ \hline attraction-type &10525 & 845 &8 \\ \hline restaurant-food &16095 & 502 &3.1 \\ \hline attraction-name &5843 & 465 &7.9 \\ \hline restaurant-name &7293 & 336 &4.6 \\ \hline train-leave at &7563 & 331 &4.38 \\ \hline hotel-name &8621 & 296 &3.43 \\ \hline taxi-departure &4037 & 254 &6.29 \\ \hline taxi-destination &4108 & 204 &4.97 \\ \hline train-arrive by &7488 & 167 &2.23 \\ \hline restaurant-book-time &8958 & 156 &1.74 \\ \bottomrule[2pt] \end{tabular}} \caption{The \textbf{Top-10} slots by number of modified slot values.} \label{slotvalue_modified} \end{table} For the CrossWOZ(zh $\rightarrow$ en) dataset, we found that the belief states at some turns do not inherit from their previous turns.
We consider these to be labeling errors that need to be corrected, so we use the concept of state transitions (carryover, update, dontcare, delete) to correct the belief state at each turn. We evaluate the effectiveness of the back-translation data augmentation and the state transition prediction task. The joint accuracy drops by $0.58\%$ when the state operation prediction task is removed and by $8.4\%$ without data augmentation. Moreover, the performance drops by $9.37\%$ when both are removed. Table \ref{ablation} demonstrates that data augmentation and the state transition prediction task are crucial for DST. \begin{table}[] \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{lll} \toprule[2pt] & \textbf{MultiWOZ(en $\rightarrow$ zh)} & \textbf{CrossWOZ(zh $\rightarrow$ en)} \\ \hline Final Model & 58.56 & 16.81 \\ \hline remove state transition prediction & 57.98(-0.58) & 16.29(-0.52) \\ \hline remove data augmentation & 50.16(-8.4) & 11.87(-4.94) \\ \hline remove above two (only CHAN) & 49.19(-9.37) & 11.3(-5.51) \\ \bottomrule[2pt] \end{tabular}} \caption{Ablation study of the state transition prediction task and the data augmentation on MultiWOZ(en $\rightarrow$ zh) and CrossWOZ(zh $\rightarrow$ en), respectively.
} \label{ablation} \end{table} \begin{table}[] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{cccccc} \toprule[2pt] \textbf{Team} & \textbf{Joint(\%)} & \textbf{Slot(\%) } & \textbf{Slot P/R/F1} & \textbf{Joint(pub/pri)} & \textbf{Rank} \\ \hline \textbf{1(ours)} & \textbf{62.37} & \textbf{98.09 }& \textbf{92.15/94.02/93.07} & \textbf{62.70/62.03} & \textbf{1} \\ 2 & 62.08 & 98.10 & 90.61/96.20/93.32 & 63.25/60.91 & 2 \\ 3 & 30.13 & 94.40 & 87.07/74.67/80.40 & 30.53/29.72 & 3 \\ Baseline & 55.56 & 97.68 & 92.02/91.10/91.56 & 55.81/55.31 & N/A \\ \bottomrule[2pt] \end{tabular} } \caption{MultiWOZ Leaderboard (Best Submissions).} \label{multiwoz_leaderboard} \end{table} \begin{table}[] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{cccccc} \toprule[2pt] \textbf{Team} & \textbf{Joint(\%)} & \textbf{Slot(\%)} & \textbf{Slot P/R/F1} & \textbf{Joint(pub/pri)} & \textbf{Rank} \\ \hline 3 & 16.86 & 89.11 & 68.26/62.85/65.45 & 16.82/16.89 & 1 \\ \textbf{ 1(ours) } & \textbf{15.28} & \textbf{90.37 }& \textbf{65.94/78.87/71.82} & \textbf{15.19/15.37 } & \textbf{2} \\ 2 & 13.99 & 91.92 & 72.63/78.90/75.64 & 14.41/13.58 & 3 \\ Baseline & 7.21 & 85.13 & 55.27/46.15/50.30 & 7.41/7.00 & N/A \\ \bottomrule[2pt] \end{tabular} } \caption{CrossWOZ Leaderboard (Best Submissions).} \label{crosswoz_leaderboard_ori} \end{table} \begin{table}[] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{cccccc} \toprule[2pt] \textbf{Team} & \textbf{Joint(\%) } & \textbf{Slot(\%)} & \textbf{Slot P/R/F1} & \textbf{Joint(pub/pri)} & \textbf{Rank} \\ \hline 2 & 32.30 & 94.35 & 81.39/82.25/81.82 & 32.70/31.89 & 1 \\ \textbf{1(ours)} & \textbf{23.96} & \textbf{92.94} & \textbf{74.96/83.41/78.96} & \textbf{23.45/24.47} & \textbf{2} \\ 3 & 15.31 & 89.70 & 74.78/64.06/69.01 & 14.25/16.37 & 3 \\ Baseline & 13.02 & 87.97 & 67.18/52.18/58.74 & 13.30/12.74 & N/A \\ \bottomrule[2pt] \end{tabular} } \caption{CrossWOZ Leaderboard (Updated Evaluation, Best Submissions).} 
\label{crosswoz_leaderboard_update} \end{table} \subsection{Overall Results} We train the model with different $n$-history lengths as the local information and finally choose $1$-history as the best length for joint learning. With the above improvements, we achieve joint accuracies of $\textbf{62.37\%}$ and $\textbf{23.96\%}$ on the MultiWOZ(en $\rightarrow$ zh) and CrossWOZ(zh $\rightarrow$ en) datasets, respectively. With this end-to-end model, we rank \textbf{Top 1} on the MultiWOZ(en $\rightarrow$ zh) dataset and \textbf{Top 2} on the CrossWOZ(zh $\rightarrow$ en) dataset. The results for the MultiWOZ(en $\rightarrow$ zh) and CrossWOZ(zh $\rightarrow$ en) tasks are shown in Tables \ref{multiwoz_leaderboard} and \ref{crosswoz_leaderboard_ori}, respectively. The organizers found that the CrossWOZ(zh $\rightarrow$ en) test data is missing ``name'' labels when the user accepts the attraction/hotel/restaurant recommended by the system \citep{gunasekara2020overview}. Table \ref{crosswoz_leaderboard_update} shows the updated leaderboard for CrossWOZ. Although the evaluation was updated for CrossWOZ(zh $\rightarrow$ en), our algorithm still ranks second on the updated CrossWOZ leaderboard, which shows that our method generalizes well. \section{Conclusion} In this paper, we introduce a general feature fusion method as our solution for the DSTC 9 Track 2 competition; it can merge arbitrary parts of the context features from the dialogue history. We also construct a multi-task network to improve the feature representation ability. Our proposed model ranks first on MultiWOZ (en $\rightarrow$ zh) and second on CrossWOZ (zh $\rightarrow$ en). The proposed model is based on a predefined ontology, and we will investigate an open-vocabulary model in the future. \nocite{*}
\section*{Preliminary geometry of elliptic trajectories} \noindent Let $S$ and $S'$, separated by $SS'=2c$, be the foci of an ellipse $(E)$, described by a celestial body $P$ (a planet) that is impelled by a force tending toward a center of force $S$ (a star); see Figure~\ref{geometry}. Draw the principal circle $(C_p)$ of center $O$ and radius $OA=a$, where $a$ is the length of the semi-major axis of $(E)$. Draw the director circle $(C_d)$ of center $S'$ and radius $S'M=2a$. Produce $S'P$ to meet the director circle $(C_d)$ in $M$, let fall the perpendicular $PH$ to $SM$, then produce $HP$ to meet the principal circle $(C_p)$ in $Z$. Produce $MS$ to meet $(C_p)$ in $R$. \begin{figure}[H] \centering \begin{minipage}{.8\linewidth} \includegraphics[]{alameh1.eps} \end{minipage} \caption{Geometry of elliptic orbits} \label{geometry} \end{figure} It is now required to prove that the radius vector $\bm{r}=\bm{SP}$ is parallel to $\bm{RZ}$\,. For that purpose, we start from an alternative definition of the ellipse~\cite{ellipse}, which states that an ellipse is the locus of the centers of circles passing through one focus $S$ and internally tangent to the director circle of radius $2a$ centered at the other focus $S'$. This being so, it is easy to infer that $\Delta\, PSM$ is isosceles and that $H$ is the midpoint of $SM$. Let $PL$ be the bisector of the angle $\widehat{SPS'}$. We now seek to prove that $PL$ is parallel to $SM$. Evidently $\widehat{MPH}=\widehat{ZPS'}$, as they are vertically opposite angles, and $\widehat{MPH}=\widehat{SPH}$, being corresponding parts of the congruent triangles $MPH$ and $SPH$; hence $\widehat{ZPS'}=\widehat{HPS}$, and it is readily inferred that $\widehat{HPL}=90^\circ$.
Hence $PL$ is parallel to $SM$, and correlatively $PH$ is tangent to the ellipse, since it is perpendicular to $PL$, the internal bisector of angle $\widehat{SPS'}$\,.\\ \noindent Now, in $\Delta\, S'SM$, the points $O$ and $H$ are the midpoints of $S'S$ and $SM$ respectively, so $OH=\displaystyle\frac{S'M}{2}=a$; therefore $H$ belongs to the principal circle $(C_p)$, and since the angle $\widehat{RHZ}=90^\circ$, the chord $ZR$ is a diameter of the principal circle.\\ A few more steps are still needed to achieve the required proof; we proceed by observing that $\widehat{PSH}=\widehat{PMH}$, since $\Delta\, PSM$ is isosceles. Furthermore, $\Delta\, ORH$ is also isosceles, for $OR=OH$ are radii of the same circle; hence $\widehat{ORH}=\widehat{OHR}$\,. But $\widehat{OHR}=\widehat{PMH}$, since $OH$ is parallel to $S'M$; consequently $\widehat{ORH}=\widehat{PMH}$, and thus $\widehat{PSH}=\widehat{ORH}$. Correlatively, $SP$ is parallel to $RZ$\,, and therefore $\widehat{PSW}=\widehat{ZOS}=\theta$.
\section*{The geometric mean theorem} \begin{theorem} Let $f\colon x\rightarrow f(x)$ be a continuous function that does not vanish anywhere on the interval $[a,b]$ and is differentiable within it, with $f^{\prime}(x)\neq 0$ inside the interval; then there exists a point of abscissa $x=c$, with $a<c<b$, such that \begin{equation}f^{2}(c)=f(a)\cdot f(b)\label{h}\end{equation} \end{theorem} \begin{Proof} Let us construct the auxiliary function $\beta(x)$ defined on the interval $[a,b]$ by \begin{equation}\beta(x)=\displaystyle\frac{f(x)}{f(a)}+\displaystyle\frac{f(b)}{f(x)}\end{equation} Now \begin{equation}\beta(a)=1+\displaystyle\frac{f(b)}{f(a)}\end{equation} And \begin{equation}\beta(b)=\displaystyle\frac{f(b)}{f(a)}+1\end{equation} Therefore \begin{equation}\beta(a)=\beta(b)\end{equation} And so, by {{\it Rolle's theorem}}~\cite{Rolle} there exists a point of abscissa $x=c$\,, such that $\beta^{\prime}(c)=0$ \begin{equation}\beta^{\prime}(c)=\displaystyle\frac{f^{\prime}(c)}{f(a)}-\displaystyle\frac{f(b)\cdot f^{\prime}(c)}{f^{2}(c)}=0 \end{equation} And by rearranging and canceling $f^{\prime}(c)$, which is nonzero by hypothesis, we get \begin{equation}f^{2}(c)=f(a)\cdot f(b)\end{equation} And the theorem is proved. \end{Proof} \section*{Probing into Kepler elliptic trajectories} \noindent The radius vector $\bm{r}$ of a planet moving on an elliptic orbit is a continuous function of the angle $\theta$ that it makes with the major axis on the perihelion side. The angle $\theta$, in turn, also changes with time. The modulus of $\bm{r}$ takes a minimum value $r_p=a-c$ at $\theta=0$, when the planet passes through the perihelion, and a maximum value $r_a=a+c$ at $\theta=\pi$, in its passage through the aphelion; then, according to theorem (1), there exists a value $\theta_b$ of $\theta$ such that $r^2(\theta_b)=r(0) r(\pi)$, that is, $r^2(\theta_b)=a^2- c^2$. But in the case of an ellipse, the length of the semi-minor axis is given by the relation $b^2=a^2-c^2$.
Therefore the modulus of the radius vector takes a value equal to the length of the semi-minor axis of the ellipse at a well-specified angle, which I call $\theta_b$. \noindent Returning to the geometric features of figure~\ref{geometry}, the intersecting chords relation in the principal circle gives$\colon$\\ \begin{equation}SH\times SR=SA\times SB \end{equation} that is \begin{equation}SH\times SR=(a-c)(a+c)=b^2~\label{principalcircle1}\end{equation} The position vector of the planet expressed in polar coordinates has the form $\bm{r}=r\,{\bm{\hat{e}_r}}$ and its velocity vector is $\bm{V}=\displaystyle\frac{d\bm{r}}{dt}=\dot{r}\,{\bm{\hat{e}_r}}+r\dot\theta\,{\bm{\hat{e}_\theta}}$\,. Now, since the planet is urged by a centripetal force toward the star, the applied external torque on the planet is zero; hence its angular momentum $\bm{L}=\bm{r}\times m\bm{V}$ is conserved~\cite{Goldestein}. The magnitude of the angular momentum is $L=mV_\theta r$, where $V_\theta=r\dot\theta$ is the transverse component of the velocity vector. The magnitude of $L$ may also be expressed as $L=m\times V\times SH$, owing to the fact that $SH=r\,\sin\theta$.\\ So we can write$\colon$ \begin{equation} L=mV_\theta r=m V\times SH \end{equation} Canceling $m$\,, we get the expression of $h$, the angular momentum per unit mass$\colon$ \begin{equation}h=rV_\theta=SH\times V=r^2\dot\theta ~\label{momentum}\end{equation} In the course of motion, the radius vector should, at a certain moment, take the value $r=b$ at the angular position $\theta=\theta_b$; accordingly, $h$ being constant, it can be expressed as $h=b^2\dot\theta_b$\,, where $\dot\theta_b$ is the angular velocity at $\theta=\theta_b$.
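The existence of the angle $\theta_b$ can be checked numerically from the focal polar equation of the ellipse (recovered later in the text); the orbital parameters below are illustrative.

```python
import math

a, c = 1.5, 0.6               # illustrative semi-major axis and focal distance
b = math.sqrt(a * a - c * c)  # semi-minor axis, b^2 = a^2 - c^2
eps = c / a

def r(theta):
    """Focal polar equation of the ellipse."""
    return a * (1 - eps**2) / (1 + eps * math.cos(theta))

# Theorem 1 premise: r(0) * r(pi) = (a - c)(a + c) = b^2
assert abs(r(0.0) * r(math.pi) - b * b) < 1e-12

# locate theta_b (where r = b) by bisection; r is increasing on (0, pi)
lo, hi = 0.0, math.pi
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if r(mid) < b:
        lo = mid
    else:
        hi = mid
theta_b = 0.5 * (lo + hi)
```

At $\theta_b$ the radius indeed equals the semi-minor axis, consistent with $\cos\theta_b=(b-a)/c$ obtained from $r=b^2/(a+c\cos\theta)$.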
So, from equation~(\ref{momentum}) we can say that \begin{equation} SH\times V=b^2\,\dot\theta_b ~\label{beem}\end{equation} and \begin{equation}r\,V_\theta=b^2\,\dot\theta_b ~\label{beemm}\end{equation} Now, multiplying equation~(\ref{principalcircle1}) by $\dot\theta_b$ we get \begin{equation}SH\times SR\times\dot\theta_b=b^2\dot\theta_b~\label{principalcircle2}\end{equation} By comparing equations~(\ref{beem}) and~(\ref{principalcircle2}) we get $V=SR\times \dot\theta_b$\,.\\ But $SR=ZS'$, since $\Delta\, OS'Z$ is congruent to $\Delta \,OSR$, for $OS=OS'=c$, $OZ=OR=a$, and $\widehat{S'OZ}=\widehat{SOR}$ are vertically opposite angles; hence \begin{equation} V=ZS'\times \dot\theta_b \end{equation} \noindent Therefore \begin{equation} \bm{V}=ZS'\times\dot\theta_b \,\bm{\hat{e}_t}~\label{ange}\end{equation} where $\bm{\hat{e}_t}$ is a unit vector tangent to the trajectory in the same direction as $\bm{V}$.\\ \emph{In the language of mathematics, the velocity vector $\bm{V}$ is the image of $\bm{ZS'}$ by a direct similitude of ratio $\dot\theta_b>0$ and of angle $(\bm{ZS'},\bm{V})=-\displaystyle\frac{\pi}{2} +2k\pi$.} \noindent It remains only to put equation~(\ref{ange}) in a more explicit form, by finding the expressions of $S'Z$ and $\bm{\hat{e}_t}$ in terms of the dynamic parameters of the motion. For that purpose, returning to figure~\ref{geometry}, we notice that $\bm{S'Z}=\bm{S'O} +\bm{OZ}$, i.e. $\bm{S'Z}=c\,\bm{\hat{e}_x}+ a\,\bm{\hat{e}_r}$, since $\bm{OZ}$ is parallel to $\bm{r}$ as proved before. But $\bm{\hat{e}_r}=\cos\theta \,\bm{\hat{e}_x} + \sin\theta\,\bm{\hat{e}_y}$, therefore $\bm{S'Z}=(c+a\cos\theta)\,\bm{\hat{e}_x}+a\sin\theta\,\bm{\hat{e}_y}$\,.
Then, expressing it in polar coordinates by the well-known transformation relation$\colon$ \begin{equation}\left(\begin{array}{c}\bm{\hat{e}_x}\\\bm{\hat{e}_y}\\ \end{array}\right)=\left(\begin{array}{lr}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\\\end{array}\right) \left(\begin{array}{c}\bm{\hat{e}_r} \\ \bm{\hat{e}_\theta} \end{array}\right)~\label{matrix}\end{equation} \noindent we get $\bm{S'Z}$ to be$\colon$ \begin{equation}\bm{S'Z}=(a+c\cos\theta)\,\bm{\hat{e}_r}-c\sin\theta\, \bm{\hat{e}_\theta}\end{equation} Thus the modulus of $\bm{S'Z}$ has the expression \begin{equation} [S'Z]=\sqrt{a^2+c^2+2ac\cos\theta}~\label{moduluss}\end{equation} and a unit vector $\bm{\hat{e}_n}$ along the normal to the trajectory should be $\bm{\hat{e}_n}=-\displaystyle\frac{\bm{S'Z}}{[S'Z]}$, so \begin{equation}\bm{\hat{e}_n}=-\displaystyle\frac{a+c\,\cos\theta}{\sqrt{a^2+c^2+2ac\,\cos\theta}}\,\bm{\hat{e}_r}+\displaystyle\frac{c\,\sin\theta} {\sqrt{a^2+c^2+2ac\,\cos\theta}}\,\bm{\hat{e}_\theta}\end{equation} The unit vector $\bm{\hat{e}_t}$ in the direction of the velocity vector $\bm{V}$ is perpendicular to $\bm{\hat{e}_n}$ and its expression is \begin{equation} \bm{\hat{e}_t}=+\displaystyle\frac{c\,\sin\theta}{\sqrt{a^2+c^2+2ac\,\cos\theta}}\,\bm{\hat{e}_r}+\displaystyle\frac{a+c\,\cos\theta}{\sqrt{a^2+c^2+2ac\,\cos\theta}}\, \bm{\hat{e}_\theta}~\label{tangent1}\end{equation} \noindent Finally, by substituting~(\ref{tangent1}) and (\ref{moduluss}) in~(\ref{ange}) we obtain the expression of the velocity vector \begin{equation}\bm{V}= c\,\dot\theta_b\,\sin\theta\,\bm{\hat{e}_r}\,+\,(a+c\,\cos\theta)\,\dot\theta_b\,\bm{\hat{e}_\theta} ~\label{velocity}\end{equation} \noindent Accordingly, we pursue our search to retrieve the equation of the elliptic trajectory from what preceded.
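Writing $\bm{\hat{e}_t}$ of equation~(\ref{tangent1}) in Cartesian components gives a direction proportional to $(-a\sin\theta,\; c+a\cos\theta)$, which must be parallel to the trajectory itself; a short numerical check with illustrative parameters:

```python
import math

a, c = 1.5, 0.6                      # illustrative orbit parameters
eps = c / a

def pos(theta):
    """Cartesian position on the ellipse, focus S at the origin."""
    rr = a * (1 - eps**2) / (1 + eps * math.cos(theta))
    return (rr * math.cos(theta), rr * math.sin(theta))

theta = 0.8
# Cartesian direction implied by e_t: (-a sin(t), c + a cos(t))
tx, ty = -a * math.sin(theta), c + a * math.cos(theta)

# finite-difference tangent of the trajectory at the same point
h = 1e-6
(x1, y1), (x2, y2) = pos(theta - h), pos(theta + h)
dx, dy = (x2 - x1) / (2 * h), (y2 - y1) / (2 * h)

# parallel vectors have a vanishing cross product
cross = tx * dy - ty * dx
```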
It is to be noticed in this context that the components of the velocity vector in the polar system are given by equation~(\ref{velocity}) as$\colon$ \begin{equation}V_r=c\,\dot\theta_b\sin\theta~\label{vr}\end{equation} \begin{equation}V_\theta=(a+c\cos\theta)\,\dot\theta_b~\label{vtheta}\end{equation} Substituting $V_\theta$ as given by equation~(\ref{vtheta}) in equation~(\ref{beemm}) and canceling $\dot\theta_b$ gives rise to the equation of the ellipse in a harmonious fashion. \begin{equation} r=\displaystyle\frac{b^2}{a+c\cos\theta}=\displaystyle\frac{a(1-\epsilon^2)}{1+\epsilon\cos\theta}\end{equation} where $\epsilon=\displaystyle\frac{c}{a}$ is the eccentricity of the ellipse.\\ Further, the expression of $\bm{V}$ can also be transformed into the Cartesian system by the use of equation~(\ref{matrix}) in inverted form, thus$\colon$ \begin{equation}\bm{V}=-a\,\dot\theta_b\,\sin\theta \, \bm{\hat{e}_x} +(c+a\,\cos\theta)\,\dot\theta_b \, \bm{\hat{e}_y} ~\label{velocitycartesian}\end{equation} and its modulus is \begin{equation}V=\dot\theta_b\,\sqrt{a^2+c^2+2ac\,\cos\theta}~\label{modulus}\end{equation} By squaring equation~(\ref{modulus}) we get \begin{equation} V^2\,=\,(a\,\dot\theta_b)^2\,+\,2(a\,c)\,\dot\theta_b^2\,\cos\theta \,+\,(c\,\dot\theta_b)^2\end{equation} then, calling $V_c=a\,\dot\theta_b$ and $V_0=c\,\dot\theta_b$, we obtain \begin{equation} V^2\,=\,V_c^2\,+2\,V_c\,V_0\,\cos\theta\,+\,V_0^2 ~\label{modulus1}\end{equation} It must be pointed out that equation~(\ref{modulus1}) is of major importance, in that it provides the value of $V$ in terms of the angle $\theta$ that the planet makes with the perihelion at any moment.
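Combined with the value of $\dot\theta_b$ obtained further below (equation~\ref{thetadot}), equation~(\ref{modulus1}) reproduces the classical vis-viva relation $V^2=GM\left(\frac{2}{r}-\frac{1}{a}\right)$; a numerical check with illustrative parameters:

```python
import math

GM = 1.0                  # gravitational parameter, arbitrary units
a, c = 1.5, 0.6           # illustrative orbit
eps = c / a
theta_b_dot = math.sqrt(GM / (a**3 * (1 - eps**2)))
V_c, V_0 = a * theta_b_dot, c * theta_b_dot

def speed_sq(theta):
    """V^2 from the off-center-circle form: Vc^2 + 2 Vc V0 cos(t) + V0^2."""
    return V_c**2 + 2 * V_c * V_0 * math.cos(theta) + V_0**2

def vis_viva(theta):
    """Classical vis-viva speed: V^2 = GM (2/r - 1/a)."""
    rr = a * (1 - eps**2) / (1 + eps * math.cos(theta))
    return GM * (2.0 / rr - 1.0 / a)
```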
\noindent One can also note a peculiar merit of this approach, among others yet to come, namely that equation~(\ref{ange}) introduces the second focus of the ellipse into the scene.\\ Moreover, it is to be noticed that if we plug $\theta=0$ at the perihelion and $\theta=\pi$ at the aphelion into equations~(\ref{vr}) and (\ref{vtheta}), one obtains that in both cases the radial velocity is zero and that the transverse components of the velocity vector $\bm{V}$ take respectively the values $V_{pe}=(a+c)\,\dot\theta_b$ and $V_{ap}=(a-c)\,\dot\theta_b$. But $V_{pe}=(a-c)\,\dot\theta_{pe}$ and $V_{ap}=(a+c)\,\dot\theta_{ap}$\,. Then, by matching the two expressions of $V_{pe}$ and $V_{ap}$, one obtains$\colon$ \begin{equation}\dot\theta_{pe}=\displaystyle\frac{a+c}{a-c}\,\dot\theta_b\,\,\,\,\,\,\,\,\mathrm{and} \,\,\,\,\,\,\,\, \dot\theta_{ap}=\displaystyle\frac{a-c}{a+c}\,\dot\theta_b ~\label{ceta}\end{equation} Finally, by multiplying the last two expressions we get \begin{equation}\dot\theta_b^2=\dot\theta_{pe}\,\dot\theta_{ap}~\label{mean}\end{equation} and it turns out that $\dot\theta_b$ is nothing but the geometric mean of $\dot\theta_{pe}$ and $\dot\theta_{ap}$\,.\\ We now turn our attention to deriving the law of force, so we proceed by rearranging equation~(\ref{velocity}) to the form \begin{equation}\bm{V}=c\,\dot\theta_b\,(\sin\theta\,\bm{\hat{e}_r}+\cos\theta\,\bm{\hat{e}_\theta})+a\,\dot\theta_b\, \bm{\hat{e}_\theta} \end{equation} and, knowing from equation~(\ref{matrix}) that $\bm{\hat{e}_y}=\sin\theta\,\bm{\hat{e}_r}+\cos\theta\,\bm{\hat{e}_\theta}$, we obtain \begin{equation}\bm{V}=c\,\dot\theta_b\,\bm{\hat{e}_y}+a\,\dot\theta_b \,\bm{\hat{e}_\theta}~\label{velocityy}\end{equation} We then differentiate equation~(\ref{velocityy}) with respect to time; knowing that $\displaystyle\frac{d\bm{\hat{e}_y}}{dt}=\bm{0}$\,\\ and $\displaystyle\frac{d\bm{\hat{e}_\theta}}{dt}=-\dot\theta\,\bm{\hat{e}_r}$\,, we thus obtain the expression of the
acceleration vector \begin{equation} \bm{\mathcal{A}}=-\,a \,\dot\theta_b\, \dot\theta \,\bm{\hat{e}_r}~\label{acc}\end{equation} and, since $r^2\,\dot\theta=b^2\,\dot\theta_b$, the angular velocity $\dot\theta$ can be expressed as $\dot\theta=\displaystyle\frac{b^2\,\dot\theta_b}{r^2}$, which, when substituted in equation~(\ref{acc}), gives the expression of the acceleration vector as \begin{equation}\bm{\mathcal{A}}=-\displaystyle\frac{a\,b^2\,\dot\theta_b^2}{r^2}\,\bm{\hat{e}_r}\end{equation} On the basis of Newton's second law $\bm{\mathcal{F}}\, =\,m\,\bm{\mathcal{A}}$\,, one obtains \begin{equation}\bm{\mathcal{F}}=-\displaystyle\frac{m\,a\,b^2\,\dot\theta_b^2}{r^2}\,\bm{\hat{e}_r}\end{equation} and, knowing that $b=a\sqrt{1-\epsilon^2}$, we get \begin{equation}\bm{\mathcal{F}}=-\displaystyle\frac{m\,a^3\,(1-\epsilon^2)\,\dot\theta_b^2}{r^2}\,\bm{\hat{e}_r}~\label{Newton}\end{equation} In Newton's law of universal gravitation, the force is given as$\colon$ \begin{equation} \bm{\mathcal{F}}=-\displaystyle\frac{GmM}{r^2}\,\bm{\hat{e}_r}~\label{Newton1}\end{equation} and by comparing equations~(\ref{Newton}) and~(\ref{Newton1}) one obtains the value of $\dot\theta_b$ to be \begin{equation}\dot\theta_b=\sqrt{\displaystyle\frac{GM}{a^3(1-\epsilon^2)}}~\label{thetadot}\end{equation} Moreover, upon closer inspection of equation~(\ref{thetadot}), and given that the star permanently converts its mass into the energy of electromagnetic radiation and other emitted particles, one might infer that it had at an earlier stage a mass \begin{equation}M'=\displaystyle\frac{M}{1-\epsilon^2}~\label{mass}\end{equation} and that the angular velocity of the planet would then have been $\dot\theta_b=\sqrt{\displaystyle\frac{GM'}{a^3}}$, which is that of a uniform circular motion. Accordingly, one could infer that the planet should have been revolving at that stage in a uniform circular motion, and hence that its orbit is becoming more and more elliptic with time.
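The geometric-mean property of equation~(\ref{mean}) also follows directly from the conservation law $r^2\dot\theta=b^2\dot\theta_b$ of equation~(\ref{momentum}), as this numerical sketch (with illustrative parameters) confirms:

```python
import math

a, c = 1.5, 0.6           # illustrative orbit
b = math.sqrt(a * a - c * c)
theta_b_dot = 2.0         # illustrative angular velocity at theta = theta_b

h = b * b * theta_b_dot   # conserved angular momentum per unit mass
theta_pe_dot = h / (a - c)**2   # perihelion: theta_dot = h / r^2, r = a - c
theta_ap_dot = h / (a + c)**2   # aphelion:  r = a + c
```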
\noindent Furthermore, equation~(\ref{mass}) gives rise to the law that governs the variation of the eccentricity of the elliptic orbit with time, as follows \begin{equation} \epsilon(t)=\sqrt{1-\displaystyle\frac{M(t)}{M'}}~\label{eccentricity}\end{equation} \noindent A knowledge of the actual power of the star, along with the use of Einstein's mass-energy relation $E=mc^2$, would not be enough to find exactly the relation between the current mass $M$ of the star and its earlier mass $M'$ when the planet was orbiting it in uniform circular motion, because a part of the mass of the star flees it randomly through stellar winds. Despite these difficulties, an interesting feature may be extracted from equation~(\ref{eccentricity}), namely the effect of the mass on the geometry: it indicates that the geometry of the orbit changes from circular to elliptic as the mass of the star decreases with time. In this respect, one should also notice that these spontaneous modifications in the geometry of the orbit occur in such a sense as to change the orbit from the most ordered shape (the circle) to a less ordered one (the ellipse). \\ Another implication of Newton's law of gravitational interaction expressed in the form of equation~(\ref{Newton}) may be noticed when it comes to a moon of mass $m$ orbiting a planet of presumably constant mass $M$ on an elliptic orbit: one could predict that another moon of mass \mbox{$m'=\,m\,(1-\epsilon^2)$}\,, released with the same initial conditions as $m$\,, would orbit the planet in a uniform circular motion, because its angular velocity would then have been $$\dot\theta_b=\sqrt{\displaystyle\frac{GM}{a^3}}$$ \begin{consequence} A planet orbiting a star in an elliptic orbit should possess, at two specific instants during each complete revolution around the star, an angular velocity equal to the angular velocity it would have had, had it been rotating in a uniform circular motion at an earlier stage of the life of the star.
\end{consequence} \noindent Before we proceed to extract the information contained in equation~(\ref{modulus1}), a little digression into defining the hodograph is needed. The hodograph is the curve generated by the tip of a vector equipollent to the velocity vector, whose tail lies at the origin of the velocity space. The velocity vector of a moving body is permanently tangent to the trajectory described by that body at any instant. Except for uniform circular motion, in which the modulus of the velocity vector remains constant, all other sorts of curvilinear motion are characterized by a velocity vector that changes in both modulus and direction. Nonetheless, we still need to construct the equation of an off-center circle in polar coordinates, and for that sake we introduce theorem (2). \begin{theorem} The modulus of the radius vector of a point $M$ moving on a circle of radius $\rho$ centered at $(r_0,\phi_0)$, with parameter $\theta$, satisfies$\colon$ \begin{equation}r^2=\rho^2 +2\rho \,r_0\cos\theta+r_0^2\end{equation} \begin{Proof} Let us consider a circle $\Omega$ (figure~\ref{Relations}) of radius $\rho$ and center $C(r_0,\phi_0)$\,. A point $M$ on the circle is located by its radius $r$ and by the azimuthal angle $\phi$\,.
\begin{figure}[H] \centering \begin{minipage}{.8\linewidth} \includegraphics[]{alameh2.eps} \end{minipage} \caption{Relations satisfied by off-center circles in polar coordinates } \label{Relations} \end{figure} Now \begin{equation}\bm{r}=\bm{r_0}\,+\,\bm{\rho}\label{circle}\end{equation} Projecting equation~(\ref{circle}) successively on $\bm{\hat{e}_x}$ and $\bm{\hat{e}_y}$ we get \begin{equation} r\cos\phi=r_0\cos\phi_0+\rho\cos(\theta+\phi_0)\label{circle1}\end{equation} and \begin{equation}r\sin\phi=r_0\sin\phi_0+\rho\sin(\theta+\phi_0)\label{circle2}\end{equation} In fact, equations~(\ref{circle1}) and (\ref{circle2}) are the parametric equations of the circle $\Omega$ \begin{equation}\left\{\begin{array}{lr}x=r_0\cos\phi_0 +\rho\cos(\theta+\phi_0)& ~\\ y=r_0\sin\phi_0 + \rho\sin(\theta +\phi_0)& ~\\ \end{array}\right. ~\label{equcircle}\end{equation} \noindent Then, by squaring and adding equations~(\ref{circle1}) and~(\ref{circle2}) we get \begin{equation}r^2=r_0^2+\rho^2+2r_0\rho[\cos(\theta+\phi_0)\cos\phi_0 +\sin(\theta+\phi_0)\sin\phi_0]\end{equation} But \begin{equation} \cos(m-n)=\cos m \cos n + \sin m \sin n \end{equation} Therefore \begin{equation}r^2=\rho^2+2\rho\,r_0\cos\theta + r_0^2\label{circle3}\end{equation} And the theorem is proved. \end{Proof} \end{theorem} It is obvious that equation~(\ref{modulus1}) is the analogue of equation~(\ref{circle3}), hence, on the basis of theorem (2), the hodograph is a circle of parametric equations \begin{equation}\left\{\begin{array}{lr} V_x=-a\dot\theta_b\,\sin\theta&~\\ V_y=(c+a\cos\theta)\,\dot\theta_b&~\\ \end{array}\right.~\label{parametric}\end{equation} So, in accordance with equation~(\ref{modulus1}), the hodograph of the motion (figure~\ref{Hodo}) is an off-center circle of radius $V_c=a\,\dot\theta_b$ and center $(V_0=c\,\dot\theta_b,\phi_0=\displaystyle\frac{\pi}{2})$\,, traced in the velocity space by making use of its parametric equations~(\ref{parametric})\,.
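The identity of theorem (2), and hence the off-center-circle form of the hodograph, can be spot-checked numerically. The sketch below is a minimal illustration (the function names are ours, not part of the paper): it evaluates the parametric equations~(\ref{equcircle}) and compares $x^2+y^2$ with the closed form of equation~(\ref{circle3}).

```python
import math

def off_center_point(rho, r0, phi0, theta):
    """Parametric equations of a circle of radius rho centered at (r0, phi0)."""
    x = r0 * math.cos(phi0) + rho * math.cos(theta + phi0)
    y = r0 * math.sin(phi0) + rho * math.sin(theta + phi0)
    return x, y

def radius_squared(rho, r0, theta):
    """Closed form of theorem (2): r^2 = rho^2 + 2*rho*r0*cos(theta) + r0^2."""
    return rho ** 2 + 2.0 * rho * r0 * math.cos(theta) + r0 ** 2

# spot-check the identity on a grid of parameter values
for theta in [0.0, 0.7, 2.1, 4.4, 5.9]:
    x, y = off_center_point(2.0, 0.5, math.pi / 2, theta)
    assert abs(x * x + y * y - radius_squared(2.0, 0.5, theta)) < 1e-12
```

With $\rho=a\dot\theta_b$, $r_0=c\dot\theta_b$ and $\phi_0=\pi/2$, the same parametrization reproduces $V_x=-a\dot\theta_b\sin\theta$ and $V_y=(c+a\cos\theta)\,\dot\theta_b$ of equation~(\ref{parametric}).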
\begin{figure}[H] \centering \begin{minipage}{.4\linewidth} \includegraphics[]{alameh3.eps} \end{minipage} \caption{Hodograph of an elliptic trajectory} \label{Hodo} \label{Fig} \end{figure} The hodograph of a body in uniform rectilinear motion is a fixed point in the velocity space. Accordingly, the existence of the hodograph curve is a manifestation of departure from uniform rectilinear motion. If two or more motions present the same hodograph, then these motions undergo the same deviation from uniform rectilinear motion. It appears from equation~(\ref{velocityy}) that the motion of planets in elliptic orbits is a combination of a uniform rectilinear part represented by the component $c\,\dot\theta_b\,\bm{\hat{e}_y}$ and a uniform circular part represented by the component $a\,\dot\theta_b\,\bm{\hat{e}_\theta}$\,. It is evident that the deviation from uniform rectilinear motion that the planet undergoes in elliptic motion is restricted to the uniform circular part, and that this deviation is exactly the same as the one it would undergo had its motion been uniform circular at an earlier stage. Hence, one can speak of the invariance of the hodograph of planets vis-\`a-vis the changes in the eccentricity of their elliptic trajectories induced by the decrease in the mass of the star about which they revolve. In a paper~\cite{adel} published in 2019, I proved that a uniform circular motion of a spaceship around a planet consists of an infinite number of successive infinitesimal free falls, a fact that explains the absence of the sensation of gravity aboard a spaceship revolving around a planet in uniform circular motion. The same reasoning applies here, so one can attribute the absence of the sensation of the gravity of stars on planets to the sameness of the deviation from uniform rectilinear motion for elliptic and circular trajectories.\\ To recapitulate, the hodograph of a planet orbiting a star is invariant under mass dissipations occurring in the star.
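The mass dissipation discussed above enters through the eccentricity law of equation~(\ref{eccentricity}). The following minimal numerical sketch (ours, purely illustrative) shows how the orbit departs from circularity as the star loses mass:

```python
import math

def eccentricity(M_t, M_prime):
    """Eccentricity acquired once the star's mass has dropped from its
    earlier value M' to the current value M(t), per eq. (eccentricity)."""
    if not 0.0 < M_t <= M_prime:
        raise ValueError("require 0 < M(t) <= M'")
    return math.sqrt(1.0 - M_t / M_prime)

# no mass loss: the orbit stays circular
assert eccentricity(1.0, 1.0) == 0.0
# a 25% mass loss already yields a markedly elliptic orbit
assert abs(eccentricity(0.75, 1.0) - 0.5) < 1e-12
```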
\vspace{-.25cm} \section*{Conclusion} \vspace{-.25cm} \noindent As a matter of fact, all credit goes to Newton, who was the first to allude to relation~(\ref{Newton}) in Principia Mathematica by saying literally~\cite{Newtonn}$\colon$ \begin{quote}``If a body $P$, by means of a centripetal force tending to any given point $R$, move in the perimeter of any given conic section whose center is $C$; and the law of centripetal force is required$\colon$ draw $CG$ parallel to the radius $RP$\,, and meeting the tangent $PG$ of the orbit in $G$; and the force required (by Cor.1, and Schol. X, and Cor.3, Prop.VII) will be as $\displaystyle\frac{CG^3}{RP^2}$\,.'' \end{quote} Equation~(\ref{Newton}), giving the expression of the central force acting on a body in elliptic motion around a center of force, is in complete agreement with what Newton predicted. Nevertheless, it constitutes a step forward: the point $G$ to which Newton referred, which is called $Z$ in figure \ref{geometry}, belongs to the principal circle, and as such we recognize that his $CG$ is nothing but the length of the semi-major axis $OZ=a$; furthermore, it provides an explicit formula for the value of the centripetal force in terms of the geometric parameters of the trajectory and the mass of the planet, i.e., an equality and not a proportionality. In other words, the missing constant in Newton's prediction turned out to be $m\,(1-\epsilon^2)\,\dot\theta_b^2$\,.\\
\section*{} \vspace{-1cm} The concept of the ``empty liquid state'' was formulated over a decade ago \cite{Bianchi2006} and since then a substantial amount of work has been published to identify and rationalize this phenomenon in the framework of patchy colloidal models \cite{Tavares2009,Russo2011a,Russo2011b,Heras2011,Kalyuzhnyi2013,Rovigatti2013,Sokolowski2014,Tavares2017}. According to Bianchi et al. \cite{Bianchi2006}, a decrease of the valency of the patchy colloids (the number of singly bondable patches) causes the liquid-gas phase diagram to shrink and disappear in the limit of two patches per particle. Upon approaching this limit the critical temperature decreases, the liquid branch of the phase diagram moves towards the gas branch, and the liquid phase density can take arbitrarily small values. This state of the liquid phase was named ``empty'' \cite{Bianchi2006}. To maintain a continuous change of the valency, the authors consider a two-component mixture of patchy hard spheres with two and three bonding sites and assume that the phase behavior of this mixture can be described by a pseudo one-component fluid with an effective valency, which is defined by the relative concentration of the components. Subsequently, it was realized that the empty liquid state can be achieved by assuming the bonding sites to be of different types. Due to the competition between the formation of bonds connecting different patches, the particles can self-assemble into structures of different types, e.g., chains and branched structures. An appropriate choice of the energy parameters describing the interaction between different patches makes chain formation preferable at low temperatures and branching preferable at higher temperatures; as a result, the empty liquid state can be achieved \cite{Russo2011a,Russo2011b,Heras2011,Kalyuzhnyi2013,Rovigatti2013,Sokolowski2014,Tavares2017}.
The corresponding phase diagrams are re-entrant, which is a rather unusual feature for one-component fluids and makes them different from those observed earlier \cite{Bianchi2006}. The ability of the liquid to be stabilized at vanishingly low density offers the possibility to establish an equilibrium colloidal gel state \cite{Sciortino2017}. In this state the system is thermodynamically stable; in contrast to standard colloidal gels, the equilibrium gel is reversible and does not age. Recently, the empty liquid state was observed experimentally for several systems, including suspensions of laponite \cite{Ruzicka2011a,Ruzicka2011b} and montmorillonite \cite{Pujala2015,Pajula2019} clay particles (see \cite{Sciortino2017} and references therein). In this Letter we study the phase behavior of patchy colloidal particles with several equivalent singly bondable patches, confined in an attractive porous medium. According to our study, confinement gives rise to a re-entrant phase diagram with arbitrarily low density of the liquid phase. \begin{figure}[!h] \centering \includegraphics[clip,height=0.25\textwidth,angle=0]{fig1a.pdf}\\ \includegraphics[clip,height=0.25\textwidth,angle=0]{fig1b.pdf} \caption{\label{fig1} Liquid-gas phase diagrams in the $T^*$ vs $\rho_1^*$ coordinate frame for the three-patch (top panel) and four-patch (lower panel) colloidal model confined in the Yukawa hard-sphere matrix with $\eta_0=0.1$ and for $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.0$ (black lines), $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.1$ (red lines), $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.16$ (green lines), $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.22$ (blue lines). Here black dashed lines denote the phase diagram at $\eta_0=0$ and $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.0$. } \end{figure} {\it Model.} -- We consider a one-component patchy hard-sphere fluid confined in a random porous medium.
The medium is represented by a matrix of Yukawa hard spheres quenched at hard-sphere fluid equilibrium. Each particle of the fluid has $n_s$ square-well bonding sites (patches) located on its surface. The interactions between the fluid particles and between the matrix and fluid particles are described by the following pair potentials: \be U_{11}(12)=U_{11}^{(hs)}(r)+\sum_{KL}U_{KL}^{(as)}(12) \label{U11} \end{equation} and \be U_{01}(r)=U_{10}(r)=U_{01}^{(hs)}(r)-{\epsilon_{01}^{(Y)}\sigma_{01}\over r} \exp{\left[-\alpha\left(r-\sigma_{01}\right)\right]}, \label{U01} \end{equation} respectively. Here \be U^{(as)}_{KL}(12)=U^{(as)}_{11}(z_{12})=\left\{ \begin{array}{rl} -\epsilon^{(as)}_{11}, & {\rm for}\;z_{12}\le\omega\\ 0, & {\rm otherwise} \end{array} \right., \label{UKL} \end{equation} the lower indices 0 and 1 denote the particles of the matrix and fluid, respectively; the arguments 1 and 2 denote the position and orientation of the corresponding particles; and the lower indices $K$ and $L$ take the values $A,B,C,\ldots$ and denote bonding sites. $U^{(hs)}_{11}(r)$ and $U^{(hs)}_{01}(r)$ are the fluid-fluid and fluid-matrix hard-sphere potentials, respectively, $\epsilon^{(Y)}_{01}$ is the strength of the Yukawa interaction, $z_{12}$ denotes the distance between the corresponding bonding sites, while $\epsilon_{11}^{(as)}$ and $\omega$ are the depth and width of the site-site square-well interaction. The hard-sphere sizes of the fluid and matrix particles are $\sigma_1$ and $\sigma_0$, respectively, $\sigma_{01}=(\sigma_1+\sigma_0)/2$, and we consider the system at temperature $T$, with fluid and matrix number densities $\rho_1$ and $\rho_0$. {\it Theory.} -- The theoretical description of the model is carried out by combining Wertheim's thermodynamic perturbation theory \cite{Wertheim1987} (TPT), scaled particle theory \cite{Holovko2009,Patsahan2011,Holovko2013} (SPT) and the replica Ornstein-Zernike (ROZ) equation \cite{Given1992,Given1993}.
According to Wertheim's TPT \cite{Wertheim1987}, the Helmholtz free energy of the system is \be A=A_{hs}+\Delta A_{Y}+\Delta A_{as}, \label{TPT} \end{equation} where $A_{hs}$ is the Helmholtz free energy of the hard-sphere fluid confined in the hard-sphere matrix, while $\Delta A_{Y}$ and $\Delta A_{as}$ are the contributions to the free energy due to the Yukawa and bonding interactions, respectively. $A_{hs}$ is calculated using an extension of the SPT, which provides analytical expressions for the Helmholtz free energy, chemical potential and pressure of the hard-sphere fluid adsorbed in the hard-sphere matrix \cite{Holovko2009,Patsahan2011,Holovko2013,Kalyuzhnyi2014} (see Appendix A for details). $\Delta A_{Y}$ is calculated using the numerical solution of the ROZ equation \cite{Given1992,Given1993} for the hard-sphere fluid in the Yukawa hard-sphere matrix. The corresponding thermodynamic properties are calculated using the energy route. The solution of the ROZ equation is obtained combining the hypernetted chain (HNC) and mean spherical (MSA) approximations (see Appendix B for details). For $\Delta A_{as}$ we have \cite{Wertheim1987} \be {\beta\Delta A_{as}\over V}=n_s\rho_1\left(\ln{X}-{1\over 2}X+{1\over 2}\right), \label{Aas} \end{equation} where $X=\left(\sqrt{1+2\Delta}-1\right)/\Delta$, with $ \Delta=8\pi n_s\rho_1g_{11}(\sigma_{1})\int_{\sigma_1}^{\sigma_1+\omega}{\bar f}(r)r^2dr, $ and \be {\bar f}(r)=\left(e^{\beta\epsilon_{11}^{(as)}}-1\right)\left(\omega+\sigma_1-r\right)^2 \left(2\omega-\sigma_1+r\right)/\left(6\sigma_1^2r\right) \label{fbar} \end{equation} and $g_{11}(r)$ is the radial distribution function (RDF) of the hard-sphere fluid confined in the Yukawa hard-sphere matrix. This RDF follows from the solution of the ROZ equation. {\it Results.} -- The phase behavior of the model in question was investigated for three- and four-patch colloids ($n_s=3,4$) trapped in the Yukawa hard-sphere matrix. The fluid and matrix particles had equal diameters, i.e., $\sigma_0=\sigma_1=\sigma$.
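The closed form for $X$ quoted after equation~(\ref{Aas}) is the root in $(0,1]$ of the mass-action quadratic $X+\frac{\Delta}{2}X^2=1$. A short numerical check (our illustration, not part of the theory's implementation):

```python
import math

def unbonded_fraction(Delta):
    """Fraction X of unbonded sites, X = (sqrt(1 + 2*Delta) - 1) / Delta."""
    return (math.sqrt(1.0 + 2.0 * Delta) - 1.0) / Delta

# X solves X + (Delta/2) * X^2 = 1 and lies in (0, 1) for Delta > 0
for Delta in [0.1, 1.0, 25.0, 400.0]:
    X = unbonded_fraction(Delta)
    assert 0.0 < X < 1.0
    assert abs(X + 0.5 * Delta * X * X - 1.0) < 1e-9
```

Strong bonding ($\Delta\to\infty$, i.e.\ low temperature) drives $X\to 0$, so that almost all sites are bonded, which is the regime discussed in the Results below.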
The matrix packing fraction was $\eta_0=\pi\rho_0\sigma_0^3/6=0.1$, the width of the square-well site-site potential was $\omega=0.119\sigma$, and we examined four different values for the strength of the Yukawa interaction: $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.0, 0.1, 0.16$, and $0.22$. \begin{figure}[!h] \centering \includegraphics[clip,height=0.25\textwidth,angle=0]{fig2.pdf} \caption{\label{fig2} Fractions of free $x_0$ (red and pink lines) and $3$-times bonded (blue and green lines) particles along the binodals for the model with three patches and with $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.16$ (lower set of the curves) and $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.22$ (upper set of the curves). Here dashed lines denote the gas phase and solid lines denote the liquid phase. } \end{figure} Our results for the phase behavior of the model system with $n_s=3$ and $n_s=4$ are shown in Fig. \ref{fig1}. For reference, we also include results for the phase diagrams in the case when no obstacles are present ($\eta_0=0$, black dashed curves). The phase diagrams of the model liquids confined in a purely repulsive hard-sphere matrix ($\epsilon_{01}^{(Y)}=0$, black solid curves) are almost twice as narrow as in the $\eta_0=0$ case, while their critical points move toward lower densities and temperatures. For sufficiently low temperature the density of the liquid phase becomes temperature-independent, reaching a certain limiting value. At this temperature almost all bonding sites of the fluid particles are occupied, and a further decrease of the temperature does not change the internal structure of the formed network much. Such features of the phase diagram have already been observed before \cite{Bianchi2006,Kalyuzhnyi2014}. A weak Yukawa attraction ($\epsilon_{01}^{(Y)}/\epsilon^{(as)}_{11}=0.1$) enhances the bonding and the phase diagrams become wider than at $\epsilon_{01}^{(Y)}=0.0$.
At the same time, the critical points move towards higher temperatures and densities (note the red curves on this Figure). Such behavior is consistent with that observed in the case of a simple fluid confined in an attractive matrix \cite{Nelson2020}. However, the overall shape of the phase diagram shown in Fig. 1 is different from that observed before; at low temperature the liquid branch moves towards lower densities, clearly demonstrating re-entrant behavior. Moreover, for even stronger Yukawa attraction ($\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.16$ and 0.22; green and blue curves, respectively), the phase diagrams show re-entrant behavior at much higher temperatures. At the same time, the width of the phase coexistence regions reduces substantially, especially at higher values of $\epsilon_{01}^{(Y)}$ ($\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.22$, blue curves). While for $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.16$ and $n_s=3$ (green curves) the phase diagram in the temperature range $0.065\leq T^* \leq 0.087$ is wider than the corresponding phase diagram for the model with a hard-sphere matrix ($\epsilon_{01}^{(Y)}=0$), the phase diagram for $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.22$ and $n_s=3$ (blue curve) is much narrower and located completely inside the phase diagram for the model with $\epsilon_{01}^{(Y)}=0$. Similar behavior is also observed for the four-patch version of the model. Thus, at sufficiently high values of the Yukawa attraction, the phase diagrams for the models examined here exhibit re-entrant behavior, eventually reaching the empty liquid state. The shape of the phase diagrams shown in Fig. 1 is similar to that observed under bulk conditions for the models with patches of different types \cite{Russo2011a,Russo2011b,Kalyuzhnyi2013,Rovigatti2013,Sokolowski2014}. However, the underlying physics of this behavior is different.
In the studies mentioned above, the re-entrant phase diagram is a consequence of the competition between chain formation and branching. At higher temperatures (but still below the critical point) the particles are connected by a 3D network of bonds, which results in phase separation. Upon the temperature decrease, the network is destroyed in favor of chains, and at zero temperature the phase separation is suppressed. \begin{figure}[!h] \centering \includegraphics[clip,height=0.3\textwidth,angle=0]{fig3a.pdf}\\ \includegraphics[clip,height=0.3\textwidth,angle=0]{fig3b.pdf} \caption{\label{fig3} Radial distribution functions $g_{01}(r)$ (top panel) and $g_{11}(r)$ (lower panel) for the hard-sphere fluid confined in the Yukawa hard-sphere matrix with $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.16$ and $\eta_0=0.1$ along the liquid branch of the phase diagram at $\rho_1^*=0.22,\;T^*=0.07$ (black lines), $\rho_1^*=0.06,\;T^*=0.049$ (red lines) and $\rho_1^*=0.027,\;T^*=0.04$ (blue lines). } \end{figure} In contrast to this, the 3D network of the model particles does not experience restructuring of the type mentioned above. Upon lowering the temperature the network becomes more connected; i.e., more particles become fully bonded. In Fig. \ref{fig2} we present our results for the fractions of free and 3-times bonded particles, $x_0$ and $x_3$, respectively, along the binodals at different strengths of the Yukawa interaction. They are calculated using the following relation \cite{Wertheim1987} \be x_j=X^{n_s-j}(1-X)^j, \label{fraction} \end{equation} where $x_j$ is the fraction of $j$-times bonded particles. Upon the temperature decrease, $x_3$ and $x_0$ monotonically approach their limiting values $x_3\rightarrow 1$ and $x_0\rightarrow 0$ in the liquid phase and $x_3\rightarrow 0$ and $x_0\rightarrow 1$ in the gas phase, regardless of the number of patches and the strength of the Yukawa attraction. In Fig.
\ref{fig3} we show our results for the RDFs $g_{01}(r)$ and $g_{11}(r)$ of the reference system. In the present case this is the hard-sphere (not the patchy hard-sphere) fluid confined in the Yukawa hard-sphere matrix. The RDFs are calculated at selected state points along the liquid phase binodal for the case with $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.16$. Due to the Yukawa attraction, the contact value of the RDF $g_{01}(r)$ increases as the temperature decreases, and as a consequence $g_{11}(\sigma_1)$ increases too. The corresponding changes in the shape of the RDF $g_{11}(r)$ reflect the fact that the fluid particles distribute themselves around the matrix obstacles close to their surfaces. Note again that the RDFs shown here do not include the effects of the site-site square-well interaction between the fluid particles. Accounting for the presence of the square-well site-site bonding interaction between the fluid particles would not change this tendency; it would merely contribute a high and narrow peak to $g_{11}(r)$ at contact. On the one hand, the temperature decrease makes the network formed in the liquid phase more connected, while on the other hand it makes the fluid particles distribute themselves non-uniformly. Most of the fluid particles are located close to the obstacles, and their global distribution is defined by the distribution of the matrix particles. Finally, in Fig. \ref{fig4} we present our results for the difference in the entropy per particle $\Delta S$ and the internal energy per particle $\Delta U$ of the coexisting phases along the binodals for the 3- and 4-patch versions of the model at $\epsilon_{01}^{(Y)}/\epsilon_{11}^{(as)}=0.22$. As in previous calculations \cite{Russo2011b}, the curves for these differences follow each other closely, with minor differences between the models. They indicate that the decrease in the energy of the liquid phase is compensated by the corresponding decrease in the entropy at a given temperature.
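The limiting behavior of the bonding fractions along the binodals can be read off directly from equation~(\ref{fraction}). A small sketch (ours, following the text's formula for the $n_s=3$ model):

```python
def bonded_fraction(j, X, n_s=3):
    """Fraction of j-times bonded particles, per eq. (fraction) of the text."""
    return X ** (n_s - j) * (1.0 - X) ** j

# liquid phase at low temperature: X -> 0, so x_3 -> 1 and x_0 -> 0
assert abs(bonded_fraction(3, 1e-9) - 1.0) < 1e-6
assert bonded_fraction(0, 1e-9) < 1e-6
# gas phase at low temperature: X -> 1, so x_0 -> 1 and x_3 -> 0
assert abs(bonded_fraction(0, 1.0 - 1e-9) - 1.0) < 1e-6
assert bonded_fraction(3, 1.0 - 1e-9) < 1e-6
```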
Such a behavior is typical for usual colloids but has been observed also for patchy colloidal fluids \cite{Russo2011b,Rovigatti2013}. \begin{figure}[!h] \centering \includegraphics[clip,height=0.25\textwidth,angle=0]{fig4.pdf} \caption{\label{fig4} Differences in the internal energies $\Delta U/(N\epsilon_{11}^{(as)})$ (solid lines) and entropies $T^*\Delta S/(Nk_B)$ (dashed lines) of the liquid and gas phases along the binodals for the models with three (red lines) and four (blue lines) patches at $\epsilon^{(Y)}_{01}/\epsilon^{(as)}_{11}=0.22$. } \end{figure} {\it Conclusions.} -- We have shown that patchy colloidal models with three and four equivalent patches confined in an attractive random porous medium undergo re-entrant gas-liquid phase separation, with the possibility for the liquid phase to have vanishingly low density. This behavior is caused by an interplay between the strong fluid-fluid bonding interaction and the relatively weak fluid-matrix interaction. At high temperatures the shape of the phase diagram is defined by the patch-patch interaction between the fluid particles, while the weak Yukawa attraction only slightly enhances the fluid-fluid bonding. At low temperature almost all the fluid particles are fully ($n_s$-times) bonded. Under such conditions a network of the fluid particles is formed and the shape of the phase diagram is defined by the Yukawa fluid-obstacle attraction. Due to the fluid-matrix interaction, a layer of mutually bonded particles is formed around the obstacles and the corresponding network becomes strongly nonuniform. The distribution of the fluid particles in the network is defined by the distribution of the obstacles. In such a situation all the particles in the gas phase are free, while those in the liquid phase are fully bonded. To maintain the mutual compensation of the energy and entropy changes upon phase separation, the liquid binodal moves towards lower densities, making the phase diagram re-entrant.
Our results provide a further example of the peculiar phase behavior of patchy colloidal fluids, which is currently a hot topic in colloid science. In particular, protein self-assembly and phase separation in the crowded cell environment can be described using patchy colloid models confined in porous media \cite{Hvozd2020}. Our study may shed some light on the very complicated phase behavior of biological macromolecules under crowded conditions. The nonuniform equilibrium gelation is of relevance for various biological studies \cite{Cai2017,Cai2018}. To quote \cite{Madl2017}: ``The proteins interact as patchy colloids and form an ordered fluid''. We hope that this and previous relevant studies will open new possibilities for making equilibrium gels with a predefined nonuniform distribution of particles.\\ {\bf Acknowledgment:} YVK and TH gratefully acknowledge financial support from the National Research Foundation of Ukraine (project No.2020.02/0317). V.V. was supported by the Slovenian Research Agency fund (ARRS) through the Program 0103--0201 and the project J1--1708. \begin{appendix} \setcounter{equation}{0} \renewcommand{\theequation}{A\arabic{equation}} \section{Expressions for Helmholtz free energy, chemical potential and pressure of the hard-sphere fluid confined in the hard-sphere matrix} According to SPT~\cite{Holovko2009,Patsahan2011,Holovko2013} \begin{equation} {\beta\Delta A_{hs}\over N_{hs}}=\beta\Delta\mu_{hs}-{\beta \Delta P_{hs}\over\rho_{hs}} \label{Ahs} \end{equation} and \begin{equation} g_{hs}={1\over \phi_0-\eta}+{3(\eta+\eta_0\tau)\over 2(\phi_0-\eta_0)^2}, \label{g} \end{equation} where $N_{hs}$ is the number of hard-sphere particles, i.e.
$N_{hs}=7N$, $N$ is the number of the antibody molecules in the system, $$ {\beta\Delta P_{hs}\over\rho_{hs}}={1\over 1-\eta/\phi_0}{\phi_0\over\phi} +\left({\phi_0\over\phi}-1\right){\phi_0\over\eta}\ln\left(1-{\eta\over\phi_0}\right) $$ \begin{equation} +{a\over 2}{\eta/\phi_0\over(1-\eta/\phi_0)^2}+ {2b\over 3}{(\eta/\phi_0)^2\over(1-\eta/\phi_0)^3}-1, \label{P} \end{equation} $$ \beta\Delta\mu_{hs}= \beta\mu^{(ex)}_{1} -\ln\left(1-{\eta\over\phi_0}\right) + {\eta(\phi_0-\phi)\over\phi_0\phi(1-\eta/\phi_0)} +\left(1+a\right){\eta/\phi_0\over(1-\eta/\phi_0)} $$ \begin{equation} +{(a+2b)\over 2}{(\eta/\phi_0)^2\over(1-\eta/\phi_0)^2} +{2b\over 3}{(\eta/\phi_0)^3\over(1-\eta/\phi_0)^3}, \label{mu} \end{equation} and $\eta_0=\pi\rho_0\sigma_0^3/6$, $\phi_0=1-\eta_0$, $\eta=\pi\rho_{hs}\sigma_{hs}^3/6$ and $\phi=\exp{(-\beta\mu^{(ex)}_{1})}$. \noindent Here \begin{equation} a=6+{3\eta_0\tau\left(\tau+4\right)\over 1-\eta_0}+ {9\eta_0^2\tau^2\over(1-\eta_0)^2}, \;\;\;\;\;\;\;\;\;\;\;\; b={9\over 2}\left(1+{\tau\eta_0\over 1-\eta_0}\right)^2, \label{b1} \end{equation} $$ \beta\mu_{1}^{(ex)}=-\ln{(1-\eta_0)} $$ $$ +{9\eta_0^2\over 2(1-\eta_0)^2}-\eta_0Z_0 +\left[3\eta_0Z_0-{3\eta_0(2+\eta_0)\over(1-\eta_0)^2}\right](1+\tau) $$ \begin{equation} -\left[3\eta_0Z_0-{3\eta_0(2+\eta_0)\over 2(1-\eta_0)^2}\right](1+\tau)^2 +\eta_0Z_0(1+\tau)^3, \label{mu1} \end{equation} $Z_0=(1+\eta_0+\eta_0^2)/(1-\eta_0)^3$ and $\tau=\sigma_{hs}/\sigma_0$.\\ \section{Replica Ornstein-Zernike equations and their closures} The set of the ROZ equations enables one to calculate the structure and thermodynamic properties of the fluid adsorbed into the disorder porous media \cite{Given1992,Given1993}. For hard-sphere fluid confined in the Yukawa hard-sphere matrix the theory is represented by the OZ equation for the direct $c_{00}(r)$ and total $h_{00}(r)$ correlation functions, describing the structure of the matrix, i.e. 
\be h_{00}-c_{00}=\rho_0c_{00}\otimes h_{00}, \label{MM} \end{equation} and a set of three equations, which include the direct $c_{01}(r),c_{11}(r),c_{11(1)}(r)$ and total $h_{01}(r),h_{11}(r),h_{11(1)}(r)$ matrix-fluid (with the lower indices 01) and fluid-fluid (with the lower indices 11 and 11(1)) correlation functions, \be h_{01}-c_{01}=\rho_0c_{01}\otimes h_{00}+\rho_1c_{11(1)}\otimes h_{10}, \label{01} \end{equation} $$ h_{11}-c_{11}=\rho_0c_{10}\otimes h_{01}+\rho_1c_{11(1)}\otimes h_{11} $$ \be +\rho_1\left[c_{11}-c_{11(1)}\right]\otimes h_{11(1)}, \label{11} \end{equation} \be h_{11(1)}-c_{11(1)}=\rho_1c_{11(1)}\otimes h_{11(1)}, \label{111} \end{equation} where the lower index $11(1)$ denotes connectedness correlation functions and the symbol $\otimes$ denotes convolution. This set of equations has to be supplemented by the corresponding closure relations. We use here the hypernetted chain (HNC) approximation for the OZ equation (\ref{MM}) and the mean spherical approximation (MSA) for the set of equations (\ref{01}), (\ref{11}) and (\ref{111}), i.e. \be h_{00}(r)+1=e^{-\beta U_{hs}(r)+h_{00}(r)-c_{00}(r)} \label{PY} \end{equation} and \be c_{11}(r)=\left[t_{11}(r)+1\right]f_{11}^{(hs)}(r) \label{MSA1} \end{equation} \be c_{01}(r)=-\beta U^{(Y)}_{01}(r)e_{01}^{(hs)}(r)+\left[t_{01}(r)+1\right]f_{01}^{(hs)}(r) \label{MSA0} \end{equation} \be c_{11(1)}(r)=\left[t_{11}(r)+1\right]f_{11}^{(hs)}(r) \label{MSA2} \end{equation} The solution of the set of equations (\ref{MM})--(\ref{111}) was obtained numerically via the direct iteration method. The Yukawa contribution to the Helmholtz free energy, $\Delta A_Y$, was calculated using the energy route, i.e.
\be \beta\Delta A_Y= \int_0^\beta E_Y\,d\beta', \label{AY1} \end{equation} where for the excess internal energy $E_Y$ we have \be {\beta E_Y\over V}=2\pi\beta\rho_1\rho_0\int_0^\infty r^2 U^{(Y)}_{01}(r)g_{01}(r)dr. \label{EY} \end{equation} The corresponding contributions to the chemical potential and pressure are calculated using the standard thermodynamic relations. \end{appendix}
\section{Introduction} Given a finite set $A\subset{\mathbb{N}}$, one can define the \textit{sum set}, and respectively the \textit{product set}, by $$A+A:=\{a+b:a,b\in{A}\}$$ and $$AA:=\{ab:a,b\in{A}\}.$$ The Erd\H{o}s-Szemer\'{e}di \cite{ES} conjecture states that, for all $\epsilon>0$, $$\max{\{|A+A|,|AA|\}}\gg{|A|^{2-\epsilon}},$$ and it is natural to extend this conjecture to other fields, particularly the real numbers. In this direction, the current state-of-the-art bound, due to Solymosi \cite{solymosi}, states that for any $A\subset{\mathbb{R}}$ \begin{equation} \max{\{|A+A|,|AA|\}}\gg{\frac{|A|^{4/3}}{(\log{|A|})^{1/3}}}. \label{soly1} \end{equation} When looking to construct a set $A$ which generates a very small sum set $A+A$, one needs to impose an additive structure on $A$, and an additive progression is an example of a highly additively structured set. Similarly, if $A$ has a very small product set, it must be to some extent multiplicatively structured. Loosely speaking, the Erd\H{o}s-Szemer\'{e}di conjecture reflects the intuitive observation that a set of integers, or indeed real numbers, cannot be highly structured in both a multiplicative and additive sense. In this paper, we consider other ways to quantify this observation. In particular, one would expect that a set will grow considerably under a combination of additive and multiplicative operations. Consider the set $$A(A+A):=\{a(b+c):a,b,c\in{A}\}.$$ The same heuristic argument as above leads us to expect that this set will always be large. Indeed, any progress towards the Erd\H{o}s-Szemer\'{e}di conjecture immediately yields a lower bound for the quantity $|A(A+A)|$. To see this, let us assume for simplicity that $0,1\in{A}$. This implies that $AA$ and $A+A$ are subsets of $A(A+A)$, and therefore Solymosi's result \eqref{soly1} implies that \begin{equation} |A(A+A)|\gg{\frac{|A|^{4/3}}{(\log{|A|})^{1/3}}}.
\label{soly2} \end{equation} The expectation that $|A(A+A)|$ is always large was formalised by Balog\footnote{This conjecture was made as part of a talk at the conference ``Additive Combinatorics in Paris". A similar conjecture was made in \cite{balog} for the set $A+AA$.} \cite{balog}, who conjectured that, for all $\epsilon>0$, $$|A(A+A)|\gg{|A|^{2-\epsilon}}.$$ Note that if $A=\{1,2,\cdots,N\}$, then $$A(A+A)\subset{\{nm:n,m\in{[2N]}\}}.$$ This set obviously has cardinality $O(N^2)$, and in fact it is known that the product set determined by the first $N$ integers has cardinality $o(N^2)$.\footnote{See Ford \cite{ford} for a precise statement concerning the size of this product set.} Therefore, we cannot expect to prove anything stronger than this conjecture. It is worth pointing out that Balog's conjecture is also close to being sharp in the dual case where $A$ is a geometric progression. Indeed, $A(A+A)\subset{AA+AA}$, and if $AA$ has cardinality $O(|A|)$, then $|AA+AA|=O(|A|^2)$. By attacking the problem of establishing lower bounds on $|A(A+A)|$ directly (as opposed to applying Solymosi's sum-product estimate rather crudely), it is possible to obtain quantitatively improved results. Using a straightforward application of the Szemer\'{e}di-Trotter theorem\footnote{To the best of our knowledge, a proof of this does not appear in the existing literature. Exercise 8.3.3 in Tao-Vu \cite{TV} observes that $|AA+A|\gg{|A|^{3/2}}$, and this proof can easily be adapted to show that $|A(A+A)|\gg{|A|^{3/2}}$. These simple proofs are similar to those of the earlier sum-product estimates of Elekes \cite{elekes}.}, one can show that \begin{equation} |A(A+A)|\gg{|A|^{3/2}}. \label{trivial} \end{equation} The original aim here was to improve on this lower bound, which we do by proving\footnote{The rough inequality $\gtrapprox$ is used here to suppress logarithmic and constant factors. 
See the forthcoming notation section for a precise definition of the meaning of this symbol.} that \begin{equation} |A(A+A)|\gtrapprox{|A|^{\frac{3}{2}+\frac{1}{178}}}. \label{main} \end{equation} Although the method leads only to a small improvement for this problem, it turns out to be much more effective when more variables are involved. To this end we prove the following result: \begin{equation} |A(A+A+A+A)|\gg{\frac{|A|^2}{\log{|A|}}}. \label{A(A+A+A+A)} \end{equation} Observe that this bound is tight, up to logarithmic factors, in the case when $A$ is an arithmetic progression. Indeed, the aforementioned work of Ford tells us that some logarithmic factor is necessary here. The set $A(A+A+A+A)$ has similar characteristics to $A(A+A)$, and inequality \eqref{A(A+A+A+A)} proves a weak version of Balog's conjecture. The main tool in this paper is the Szemer\'{e}di-Trotter theorem, although its application is a little more involved than the straightforward application which gives the bound \sloppy{${|A(A+A)|\gg{|A|^{3/2}}}$}. To be more precise, we use an application of the Szemer\'{e}di-Trotter theorem to establish our main lemma, which bounds the cardinality of $A(A+A)$ in terms of the \textit{multiplicative energy} of $A$. The multiplicative energy, denoted $\E^*(A)$, is the number of solutions to the equation \begin{equation} a_1a_2=a_3a_4, \label{energydefn} \end{equation} such that $a_1,a_2,a_3,a_4\in{A}$. This quantity has been an important feature in some of the existing bounds for the sum-product problem (see \cite{solymosi} and \cite{solymosi2}). Of particular importance in this paper is the forthcoming Lemma \ref{mainlemma}, which gives an improvement to \eqref{trivial} unless the multiplicative energy is almost as large as possible. 
However, in the case where the multiplicative energy is very large, the Balog-Szemer\'{e}di-Gowers Theorem implies the existence of a large subset $A'\subset{A}$ with the property that the ratio set\footnote{The ratio set $A:A$ determined by $A$ is the set of all pairwise ratios; that is the set $\{a/b:a,b\in{A}\}$.} $A':A'$ is small. We can then use a sum-product estimate from \cite{LiORN2} to get an improvement to \eqref{trivial}. This gives a sketch of the proof of \eqref{main}. Another variation of the sum-product problem is to consider product sets of additive shifts, which we might expect to be large. It was shown by Garaev and Shen \cite{GS} that for a finite set $A\subset{\mathbb{R}}$, one has $|A(A+1)|\gg{|A|^{5/4}}$, and this bound was improved slightly in \cite{TimORN}. Note that the value $1$ is not important here, and these results hold if $1$ is replaced in the statement by any non-zero value. The problem of determining the best possible lower bound for the size of $A(A+1)$ remains open. We will prove several bounds which relate to this problem, as well as the problem of finding better lower bounds for $|A(A+A)|$. For example, in the forthcoming Theorem \ref{translates1}, it will be established that, for at least half of the elements $a\in{A}$, we have \begin{equation} |A(A+a)|\gg{|A|^{3/2}}. \label{example} \end{equation} Note that this result reproves the bound \eqref{trivial}, but using two variables as opposed to three. \subsection*{Structure of this paper} The rest of this paper is structured as follows. We conclude this introductory section by explaining some of the notation that will be used. In Section 2, we give a full list of the new results in this paper. Section 3 gives proofs of the main preliminary results, all of which follow from the Szemer\'{e}di-Trotter theorem. Section 4 provides proofs of the main results, including \eqref{main} and \eqref{A(A+A+A+A)}.
In Section 5, we prove several other results concerning growth of sets under additive and multiplicative operations; this includes \eqref{example} and several results in a similar spirit. It will be necessary to call upon some results from earlier works, such as the Szemer\'{e}di-Trotter Theorem and the Balog-Szemer\'{e}di-Gowers Theorem; any such results will be stated as and when they are needed. \subsection*{Notation} Throughout the paper, the standard notation $\ll,\gg$ and, respectively, $O,\Omega$ is applied to positive quantities in the usual way. Saying $X\gg Y$ or $X=\Omega(Y)$ means that $X\geq cY$ for some absolute constant $c>0$. We write $X\approx{Y}$ if both $X\gg{Y}$ and $X\ll{Y}$. The notation $\gtrapprox$ is occasionally used to suppress both constant and logarithmic factors. To be more precise, we write $X\gtrapprox{Y}$ if there exist positive constants $C$ and $c$ such that $X\geq{c\frac{Y}{(\log{X})^C}}$. All logarithms in this paper are to base 2. Let $A,B\subset{\mathbb{R}\setminus{\{0\}}}$ be finite sets\footnote{Note that the assumption that $0\notin{A}$ is merely added to avoid the inconvenience of the possibility of dividing by zero, and simplifies matters slightly. All of the bounds in this paper are unaffected; we may simply start all proofs by deleting zero and apply the analysis to $A':=A\setminus{\{0\}}$, with only the implied constants being affected.}. We have already defined the sum set $A+B$ and the product set $AB$. The \emph{difference set $A-B$}\/ and the \emph{ratio set $A:B$}\/ are defined by \[ A-B=\{a-b\colon a\in A,b\in B\}\quad\mbox{and}\quad A:B=\{a/b\colon a\in A,b\in B\}. \] Given $x\in{\mathbb{R}}$, we use the notation $r_{A+B}(x)$ to denote the number of representations of $x$ as an element of $A+B$.
To be precise $$r_{A+B}(x):=|\{(a,b)\in{A\times{B}}:a+b=x\}|.$$ This notation will be used flexibly throughout the paper to define the number of representations of $x$ as an element of a given set described in the subscript. For example, $$r_{A(B+C)}(x):=|\{(a,b,c)\in{A\times{B}\times{C}}:a(b+c)=x\}|.$$ In a slight generalisation of the earlier definition, the \textit{multiplicative energy} of $A$ and $B$, denoted $\E^*(A,B) = \E^*_2 (A,B)$, is defined to be the number of solutions to the equation $$a_1b_1=a_2b_2,$$ such that $a_i\in{A}$ and $b_i\in{B}$. This quantity is also the number of solutions to $$\frac{a_1}{a_2}=\frac{b_2}{b_1}$$ and $$\frac{a_1}{b_2}=\frac{a_2}{b_1}.$$ Observe that $\E^*(A,B)$ can also be defined in terms of the representation function $r$ as follows: \begin{align*} \E^*(A,B)&=\sum_{x}r_{A:B}^2(x)\\ &=\sum_xr_{A:A}(x)r_{B:B}(x)\\ &=\sum_xr_{AB}^2(x). \end{align*} We use $\E^*(A)$ as a shorthand for $\E^*(A,A)$. One of the basic properties of the multiplicative energy is the following well-known lower bound: \begin{equation} \E^*(A,B)\geq{\frac{|A|^2|B|^2}{|AB|}}. \label{CS} \end{equation} The proof is short and straightforward, arising from a single application of the Cauchy-Schwarz inequality. The full details can be seen in Chapter 2 of \cite{TV}. The above definitions can all be extended in the obvious way to define the \textit{additive energy} of $A$ and $B$, denoted $\E^+(A,B)$. So, $$\E^+(A,B):=\sum_xr_{A-B}^2(x).$$ The \textit{third moment multiplicative energy} is the quantity $$\E^*_3(A):=\sum_xr_{A:A}^3(x),$$ and similarly, the \textit{third moment additive energy} is defined by $$\E^+_3(A):=\sum_xr_{A-A}^3(x).$$ In recent years, third moment energy has played an important role in quantitative progress on various problems in arithmetic combinatorics. See for example \cite{TimORN}, \cite{LiORN2}, \cite{SS1},\cite{SS2},\cite{SS3}, \cite{SV} and \cite{TV}.
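The equivalent formulations of $\E^*(A,B)$ above, the Cauchy-Schwarz bound \eqref{CS}, and the basic relations between the second and third moment energies can all be confirmed by brute force on small sets. The following Python sketch (purely illustrative, with arbitrarily chosen small sets; it plays no role in any argument) performs these checks using exact rational arithmetic, so that ratio sets are computed without floating-point error.

```python
from fractions import Fraction
from itertools import product
from collections import Counter

A = [Fraction(n) for n in (1, 2, 3, 4, 6)]
B = [Fraction(n) for n in (1, 2, 5)]

# E*(A,B): number of solutions of a1*b1 = a2*b2 with a_i in A, b_i in B
E = sum(1 for a1, b1, a2, b2 in product(A, B, A, B) if a1 * b1 == a2 * b2)

r_AB_ratio = Counter(a / b for a, b in product(A, B))    # r_{A:B}
r_AA = Counter(a1 / a2 for a1, a2 in product(A, A))      # r_{A:A}
r_BB = Counter(b1 / b2 for b1, b2 in product(B, B))      # r_{B:B}
r_AB = Counter(a * b for a, b in product(A, B))          # r_{AB}

# The three equivalent expressions for E*(A,B)
assert E == sum(c * c for c in r_AB_ratio.values())
assert E == sum(r_AA[x] * r_BB[x] for x in r_AA)
assert E == sum(c * c for c in r_AB.values())

# Cauchy-Schwarz lower bound (CS): E*(A,B) * |AB| >= |A|^2 |B|^2
assert E * len(r_AB) >= len(A) ** 2 * len(B) ** 2

# Moment relations: r_{A:A}(x) <= |A| pointwise gives E*_3(A) <= |A| E*(A),
# while Cauchy-Schwarz gives E*(A)^2 <= E*_3(A) |A|^2
E2 = sum(c ** 2 for c in r_AA.values())
E3 = sum(c ** 3 for c in r_AA.values())
assert E3 <= len(A) * E2
assert E2 ** 2 <= E3 * len(A) ** 2
```

The same checks pass for any other choice of finite sets avoiding zero, which is a convenient sanity test when experimenting with these quantities.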
We will use the Katz--Koester trick \cite{kk}, which is the observation that $$ |(A + A) \cap (A + A - s)| \ge |A + A_{s}| \,, $$ and $$ |(A - A) \cap (A - A - s)| \ge |A - (A\cap (A+s))| \,, $$ where $A_s = A\cap (A-s)$. We also need the following identity (see \cite{SV}, Corollary 2.5) \begin{equation}\label{f:A^2-D(A)} \sum_s |A \pm A_s| = |A^2 \pm \Delta(A)| \,, \end{equation} where $$ \Delta(A) = \{ (a,a) ~:~ a\in A \} \,. $$ \section{Statement of results} \subsection{Preliminary Results - Applications of the Szemer\'{e}di-Trotter Theorem} The most important ingredient for the sum-product type results in this paper is the Szemer\'{e}di-Trotter Theorem \cite{ST}: \begin{theorem} \label{ST1} Let $P\subset{\mathbb{R}^2}$ be a finite set of points and let $L$ be a collection of lines in the real plane. Then $$I(P,L):=|\{(p,l)\in{P\times{L}}:p\in{l}\}|\ll{|P|^{2/3}|L|^{2/3}+|L|+|P|}.$$ \end{theorem} Here by $I(P,L)$ we denote the number of incidences between a set of points $P$ and a set of lines $L$. Given a set of lines $L$, we call a point that is incident to at least $t$ lines of $L$ a \emph{$t$-rich point}, and we let $P_t$ denote the set of all $t$-rich points of $L$. The Szemer\'edi-Trotter theorem implies a bound on the number of $t$-rich points: \begin{corollary} \label{STcor} Let $L$ be a collection of lines in $\mathbb{R}^2$, let $t\geq{2}$ be a parameter and let $P_t$ be the set of all $t$-rich points of $L$. Then $$|P_t|\ll{\frac{|L|^2}{t^3}+\frac{|L|}{t}}.$$ Further, if no point of $P_t$ is incident to more than $|L|^{1/2}$ lines, then \[ |P_t|\ll\frac{|L|^2}{t^3}. \] \end{corollary} This result is used to prove the main preliminary results in this paper, which give us information about various kinds of energies. \begin{lemma} \label{sum+} Let $A,B$ and $X$ be finite subsets of $\mathbb{R}$ such that $|X|\leq |A||B|$. Then \[ \sum_{x\in X}\E^+(A,xB)\ll |A|^{3/2}|B|^{3/2}|X|^{1/2}. 
\] \end{lemma} Note that $\E^+(A,xB)\geq |A||B|$ for all $x$, so the condition $|X|\leq|A||B|$ is necessary. Bourgain formulated a similar theorem (``Theorem C'' of \cite{bourgain}) for subsets of fields with prime cardinality. Bourgain's theorem is closely related to the Szemer\'edi-Trotter theorem for finite fields \cite{dvir,HelfgottRudnev}. This result works in the same way with the roles of addition and multiplication reversed. \begin{lemma} \label{sum*} Let $A,B$ and $X$ be finite subsets of $\mathbb{R}$ such that $|X|\leq{|A||B|}$. Then $$\sum_{x\in{X}}\E^*(A,x+B)\ll{|A|^{3/2}|B|^{3/2}|X|^{1/2}}.$$ \end{lemma} A similar method is used to establish the following important lemma, which will be applied several times in this paper. \begin{lemma}\label{mainlemma} For any finite sets $A,B,C\subset{\mathbb{R}}$, we have \[\E^*_2(A)|A(B+C)|^2\gg{\frac{|A|^4|B||C|}{\log{|A|}}}.\] \end{lemma} We remark that Lemma \ref{mainlemma} is optimal, up to logarithmic factors, in the case when $A=B=C=\{1,\cdots,N\}$. \subsection{Main Results} The next two theorems represent the main results in this paper. Although they were mentioned in the introduction, they are restated here for the completeness of this section. \begin{theorem} \label{main1} Let $A\subset{\mathbb{R}}$ be a finite set. Then $$|A(A+A)|\gtrapprox{|A|^{\frac{3}{2}+\frac{1}{178}}}.$$ \end{theorem} \begin{theorem} \label{main2} Let $A\subset{\mathbb{R}}$ be a finite set. Then $$|A(A+A+A+A)|\gg{\frac{|A|^2}{\log{|A|}}}.$$ \end{theorem} We also prove the following suboptimal result, which is closely related to Theorems \ref{main1} and \ref{main2}: \begin{theorem} \label{main3} Let $A\subset{\mathbb{R}}$ be a finite set. Then \begin{equation} |A(A+A+A)|\gtrapprox{|A|^{\frac{7}{4}+\frac{1}{284}}}. \label{A(A+A+A)} \end{equation} \end{theorem} \subsection{Products of Additive Shifts} We will prove a family of results bounding from below the product set of translates of a set $A$. 
One may observe a familiar gradient in this sequence of results: the bounds improve as we introduce more variables and more translates. It was proven in \cite{TimORN} that, for any finite set $A\subset{\mathbb{R}}$ and any value $x\in{\mathbb{R}\setminus{\{0\}}}$, \begin{equation} |A(A+x)|\gg{\frac{|A|^{24/19}}{(\log{|A|})^{2/19}}}. \label{old} \end{equation} As mentioned in the introduction, we will prove the following theorem, which shows that we can usually improve on \eqref{old} in the case when $x\in{A}$. \begin{theorem} \label{translates1} Let $A\subset{\mathbb{R}}$ be a finite set. Then there exists a subset $A'\subset{A}$, such that $|A'|\geq{\frac{|A|}{2}}$, and for all $a\in{A'}$, $$|A(A+a)|\gg{|A|^{3/2}}.$$ \end{theorem} Adding more variables to our set leads to better lower bounds: \begin{theorem} \label{translates2} Let $A\subset\mathbb{R}$ be a finite set. Then there exists a subset $A'\subset{A}$ with cardinality $|A'|\geq{\frac{|A|}{2}}$, such that for all $a\in{A'}$, $$|(A+A)(A+a)|\gg{\frac{|A|^{5/3}}{(\log{|A|})^{1/3}}}.$$ \end{theorem} Theorem \ref{translates2} is similar to the result of Theorem \ref{main1}, especially if we think of the set $A(A+A)$ as the set $(A+0)(A+A)$. This result tells us that we can usually do better than Theorem \ref{main1} if $0$ is replaced by an element of $A$. The next theorem is quantitatively worse than Theorem \ref{translates2}, but is more general, since it applies not only to most $a\in{A}$, but to all real numbers except for a single problematic value. \begin{theorem} \label{translates3} Let $A\subset\mathbb{R}$ be a finite set. Then, for all but at most one value $x\in{\mathbb{R}}$, \begin{equation} |(A+A)(A+x)|\gg{\frac{|A|^{11/7}}{(\log{|A|})^{3/7}}}. \label{annoying} \end{equation} \end{theorem} Unfortunately, this does not lead to an improvement to Theorem \ref{main1}, since the single bad $x$ that violates \eqref{annoying} may be equal to zero.
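Results such as Theorem \ref{translates1} are easy to explore numerically before turning to their proofs. The following Python sketch (purely illustrative; the concrete choice $A=\{1,\dots,10\}$ and the implied constant $1$ in the threshold are arbitrary, and this is not part of any proof) computes $|A(A+a)|$ for each $a\in A$ and counts how many shifts already clear the $|A|^{3/2}$ threshold.

```python
from itertools import product

A = list(range(1, 11))  # the test set A = {1, ..., 10}

def shifted_product_set(a):
    # A(A + a) = {x * (y + a) : x, y in A}
    return {x * (y + a) for x, y in product(A, A)}

sizes = {a: len(shifted_product_set(a)) for a in A}

# Since 1 lies in A, the translate A + a = 1 * (A + a) sits inside A(A + a),
# so every shifted product set has at least |A| elements.
assert all(size >= len(A) for size in sizes.values())

# Count the shifts a with |A(A+a)|^2 >= |A|^3, i.e. |A(A+a)| >= |A|^{3/2}
# with implied constant 1; Theorem translates1 guarantees this, up to
# constants, for at least half of the elements of A.
rich = sum(1 for a in A if sizes[a] ** 2 >= len(A) ** 3)
assert 2 * rich >= len(A)
```

For this particular set the threshold is cleared by far more than half of the shifts, consistent with the expectation that arithmetic progressions are far from extremal for this problem.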
\subsection{Further results} Finally, we formulate a theorem of a slightly different nature. \begin{theorem} Let $A,B\subseteq \R$ be finite sets. Then \begin{equation}\label{f:main_intr_2_new} |A+B|^3 \gg \frac{|B| \E^{*} (A)}{\log |A|} \ge \frac{|A|^4 |B|}{|AA^{\pm1}| \log |A|} \,, \end{equation} and \begin{equation}\label{f:main_intr_2'-_new} |B+AA|^3 \gg \frac{|B|^{} |A|^{12}}{(\E^*_3 (A))^2 |AA^{-1}| \log |A|} \,. \end{equation} \label{t:main_intr_II} \end{theorem} Let us say a little about the meaning of these two bounds. If we fix $A=B$, then \eqref{f:main_intr_2_new} tells us that $|AA|$ is very large if $|A+A|$ is very small. Similar results are already known; for example, a quantitatively improved version of this statement is a consequence of Solymosi's sum-product estimate in \cite{solymosi}. The benefit of \eqref{f:main_intr_2_new} is that it also works for a mixed sum set $A+B$. One of the main objectives of this paper is to study the set $A(A+A)$, and inequality \eqref{f:main_intr_2'-_new} considers the dual problem of the set $A+AA$. As stated earlier, it is easy to show that $|A+AA|\gg{|A|^{3/2}}$. If we fix $A=B$ in \eqref{f:main_intr_2'-_new}, then this bound gives an improvement in the case when $\E_3^*(A)$ is small. We hope to carry out a more detailed study of the set $A+AA$ in a forthcoming paper. \section{Proofs of Preliminary Results} \subsection*{Proof of Lemma \ref{sum+}} Recall that Lemma \ref{sum+} states that for $|X|\leq|A||B|$, \[ \sum_{x\in X}\E^+(A,xB)\ll |A|^{3/2}|B|^{3/2}|X|^{1/2}. \] Note that \begin{equation} \sum_{x\in{X}}\E^+(A,xB)=\sum_{x\in{X}}\sum_yr_{A+xB}^2(y). \label{firstsum} \end{equation} We will interpret $r_{A+xB}(y)$ geometrically and use corollary \ref{STcor} to show that there are not too many pairs $(x,y)$ for which the quantity $r_{A+xB}(y)$ is large. \begin{claim*} Let $R_t=\{(x,y):r_{A+xB}(y)\geq{t}\}$. Then for any integer $t\geq 2$, \begin{equation} \label{A+xA} |R_t|\ll{\frac{|A|^2|B|^2}{t^3}}. 
\end{equation} \end{claim*} \begin{proof}[Proof of Claim] Define a collection of lines $$L:=\{l_{a,b}:(a,b)\in{A\times{B}}\},$$ where $l_{a,b}$ is the line with equation $y=bx+a$. Clearly, $|L|=|A||B|$. Since $r_{A+xB}(y)$ counts the number of pairs $(a,b)\in{A\times{B}}$ with $a+bx=y$, we see that $r_{A+xB}(y)$ is the number of lines of $L$ that are incident to the point $(x,y)$. Thus every pair $(x,y)$ in $R_t$ is a $t$-rich point of $L$. Further, because \[r_{A+xB}(y)\leq{\min{\{|A|,|B|\}}}\leq{(|A||B|)^{1/2}}\] there are no pairs $(x,y)$ such that $r_{A+xB}(y) > (|A||B|)^{1/2}$; that is, there are no points incident to more than $|L|^{1/2}$ lines of $L$. It follows from Corollary \ref{STcor} that \[ |R_t|\leq |P_t|\ll\frac{|L|^2}{t^3}=\frac{|A|^2|B|^2}{t^3}, \] which proves the claim. \end{proof} Now we will interpolate between \eqref{A+xA} and a trivial bound. Let $\triangle\geq{1}$ be an integer to be specified later. The sum in \eqref{firstsum} can be divided up as follows: \begin{align} \sum_{x\in{X}}\E^+(A,xB)&=\sum_{x\in{X}}\sum_yr_{A+xB}^2(y) \\&\leq{\sum_{x\in{X}}\sum_{y\,:\,r_{A+xB}(y) \leq \triangle}r_{A+xB}^2(y)+\sum_{(x,y)\,:\,r_{A+xB}(y)>{\triangle}}r_{A+xB}^2(y)}. \label{eq1} \end{align} To bound the first term in \eqref{eq1}, observe that \begin{align} \sum_{x\in{X}}\sum_{y\,:\,r_{A+xB}(y)\leq \triangle}r_{A+xB}^2(y)&\leq{\triangle\sum_{x\in{X}}\sum_{y}r_{A+xB}(y)} \\&=\triangle|A||B|\sum_{x\in{X}}1 \\&=|A||B|\triangle|X|. \label{firstterm} \end{align} To bound the second term in \eqref{eq1}, we decompose dyadically and then apply \eqref{A+xA} to bound the size of the dyadic sets we are summing over: \begin{align} \sum_{(x,y)\,:\,r_{A+xB}(y)>{\triangle}}r_{A+xB}^2(y)&=\sum_{j\geq{1}}\, \sum_{(x,y)\,:\,\triangle2^{j-1}<{r_{A+xB}(y)}\leq\triangle2^j}r_{A+xB}^2(y) \\&\ll{\sum_{j\geq{1}}\frac{|A|^2|B|^2}{(\triangle2^j)^3}(\triangle2^j)^2} \\&=\frac{|A|^2|B|^2}{\triangle}\sum_{j\geq{1}}\frac{1}{2^j} \\&=\frac{|A|^2|B|^2}{\triangle}.
\label{secondterm} \end{align} For an optimal choice, set the parameter $\triangle=\left\lceil\frac{|A|^{1/2}|B|^{1/2}}{|X|^{1/2}}\right\rceil\approx{\frac{|A|^{1/2}|B|^{1/2}}{|X|^{1/2}}}\geq{1}$. The approximate equality here is a consequence of the assumption $\frac{|A|^{1/2}|B|^{1/2}}{|X|^{1/2}}\geq{1}$. Combining the bounds from \eqref{firstterm} and \eqref{secondterm} with \eqref{eq1}, it follows that $$\sum_{x\in{X}}\E^+(A,xB)\ll{|A|^{3/2}|B|^{3/2}|X|^{1/2}},$$ as required. This completes the proof of Lemma \ref{sum+}. \begin{flushright} \qedsymbol \end{flushright} The proof of Lemma \ref{sum*} is essentially the same, with the roles of addition and multiplication reversed. For completeness, a full proof is provided. \subsection*{Proof of Lemma \ref{sum*}} Recall that Lemma \ref{sum*} states that for $|X|\leq |A||B|$, \[ \sum_{x\in X}\E^*(A,B+x)\ll |A|^{3/2}|B|^{3/2}|X|^{1/2}. \] Define a set of lines $L:=\{l_{a,b}:(a,b)\in{A\times{B}}\}$, where $l_{a,b}$ now represents the line with equation $y=a(b+x)$. These lines are all distinct and so $|L|=|A||B|$. Since $r_{A(B+x)}(y)$ is the number of such lines incident to a point $(x,y)$, we can apply Corollary \ref{STcor} and argue as before to show that \begin{equation} |\{(x,y):r_{A(B+x)}(y)\geq{t}\}|\ll{\frac{|A|^2|B|^2}{t^3}}, \label{A(x+A)} \end{equation} for any integer $t\geq{2}$. Next, we use the bound \eqref{A(x+A)} in the following calculation, which holds for any integer $\triangle\geq{1}$: \begin{align*} \sum_{x\in{X}} \E^*(A,B+x)&=\sum_{x\in{X}}\sum_{y}r_{A(B+x)}^2(y) \\&\leq{\sum_{x\in{X}}\sum_{y\,:\,r_{A(B+x)}(y)\le \triangle}r_{A(B+x)}^2(y)+\sum_{(x,y)\,:\,r_{A(B+x)}(y)>{\triangle}}r_{A(B+x)}^2(y)} \\&\le \sum_{x\in{X}}\triangle\sum_{y}r_{A(B+x)}(y)+\sum_{j\geq{1}}\sum_{(x,y)\,:\,\triangle2^{j-1}<{r_{A(B+x)}(y)}\le \triangle2^j}r_{A(B+x)}^2(y) \\&\ll{|A||B||X|\triangle+\sum_{j\geq{1}}(2^j\triangle)^2\frac{|A|^2|B|^2}{(2^j\triangle)^3}} \\&=|A||B||X|\triangle+\frac{|A|^2|B|^2}{\triangle}.
\end{align*} If we set $\triangle:=\left\lceil\frac{|A|^{1/2}|B|^{1/2}}{|X|^{1/2}}\right\rceil\approx{\frac{|A|^{1/2}|B|^{1/2}}{|X|^{1/2}}}\geq{1}$, the proof is complete. \begin{flushright} \qedsymbol \end{flushright} We observe the following corollary of Lemmas \ref{sum+} and \ref{sum*}. Equation \eqref{f:reformulation_1} is sharp when $A$ is an arithmetic progression, which shows that Lemma \ref{sum+} is sharp when $A$ and $B$ are the same arithmetic progression, for a suitable choice of $X$. \begin{corollary} For any $A\subset {\mathbb{R}}$, we have \begin{equation}\label{f:reformulation_1} \left| \frac{A-A}{A-A} \right| \gg |A|^2 \,. \end{equation} \begin{equation}\label{f:reformulation_2} \left| \left\{\frac{a_2b_2-a_1b_1}{a_1-a_2}:a_1,a_2\in A,\, b_1,b_2\in A \right\} \right| \gg |A|^2 \,. \end{equation} \begin{equation}\label{f:reformulation_3} |A-A|^3 \cdot \left| \frac{A-A}{A-A} \right|^{1/2} \gg |A^2 - \Delta(A)|^2 \,. \end{equation} \label{c:reformulation} \end{corollary} \begin{proof} Let $X(u)$ denote the indicator function on $X$. The statements of Lemma \ref{sum+} and Lemma \ref{sum*} can be written as \begin{equation}\label{tmp:13.10.2013_1} \sum_{x,y} r_{A-A} (x) r_{B-B} (y) X(x/y) \ll |A|^{3/2} |B|^{3/2} |X|^{1/2} \end{equation} and \begin{equation}\label{tmp:13.10.2013_2} \sum_{a_1,a_2\in A,\, b_1,b_2\in B} X\left( \frac{a_2 b_2 - a_1 b_1}{a_1-a_2} \right) \ll |A|^{3/2} |B|^{3/2} |X|^{1/2} \end{equation} respectively, provided that $|X| \le |A||B|$. Putting $B=A$ and $X=(A-A)/(A-A)$ into \eqref{tmp:13.10.2013_1} proves \eqref{f:reformulation_1}. Similarly, putting $B=A$ and \[X=\left\{\frac{a_2 b_2 - a_1 b_1}{a_1-a_2}:a_1,a_2\in A,\, b_1,b_2\in B \right\}\] into \eqref{tmp:13.10.2013_2}, we obtain (\ref{f:reformulation_2}).
Let $D=A-A$. Taking $A=B= D$, $X=D/D$, summing just over $x,y\in D$ in (\ref{tmp:13.10.2013_1}), and using the Katz--Koester trick as well as identity (\ref{f:A^2-D(A)}), we get $$ |A-A|^3 \cdot \left| \frac{A-A}{A-A} \right|^{1/2} \gg \left( \sum_{x\in D} r_{D-D} (x) \right)^2 \ge \left( \sum_{x\in D} |A-A_x| \right)^2 = |A^2 - \Delta(A)|^2 $$ which coincides with (\ref{f:reformulation_3}). \end{proof} Inequality (\ref{f:reformulation_1}) can also be deduced from Beck's Theorem, which states that a set of $N$ points in the plane which does not have a single very rich line will determine $\Omega(N^2)$ distinct lines. See Exercise 8.3.2 in \cite{TV}. A geometric result of Ungar \cite{ungar}, concerning the number of different directions determined by a set of points in the plane, also yields \eqref{f:reformulation_1} as a corollary. Although the result here is not new, it has been stated in order to illustrate the sharpness of Lemma \ref{sum+}. Similar results to \eqref{f:reformulation_2} were established in \cite{tim}; it seems likely that \eqref{f:reformulation_2} is suboptimal. \subsection*{Proof of Lemma \ref{mainlemma}} Recall that Lemma \ref{mainlemma} states that $$\E^*(A)|A(B+C)|^2\gg{\frac{|A|^4|B||C|}{\log{|A|}}}.$$ Let $S^\star$ denote the number of solutions to the equation \begin{equation} a_1(b_1+c_1)=a_2(b_2+c_2)\not=0, \label{solutions} \end{equation} such that $a_i\in{A}$, $b_i\in{B}$ and $c_i\in{C}$. This proof uses a familiar strategy: in order to show that a given set is large, show that there cannot be too many solutions to a particular equation. The easy part is to bound $S^\star$ from below, using an elementary application of the Cauchy-Schwarz inequality. First note that \[ \sum_{x\in A(B+C)}r_{A(B+C)}(x)=|A||B||C|. \] Since there are at most $|A||B\cap -C|+|B||C|$ solutions to $a(b+c)=0$, we have \[ \sum_{x\in A(B+C)\setminus\{0\}}r_{A(B+C)}(x)\geq\frac 12|A||B||C|.
\] Now we apply the Cauchy-Schwarz inequality: \begin{align} \frac 14(|A||B||C|)^2&\leq\left(\sum_{x\in{A(B+C)\setminus\{0\}}}r_{A(B+C)}(x)\right)^2 \\&\leq{|A(B+C)|\sum_{x\not=0}r_{A(B+C)}^2(x)} \\&=|A(B+C)|S^\star. \label{triv} \end{align} The rest of the proof is concerned with finding a satisfactory upper bound for the quantity $S^\star$. We will eventually conclude that \begin{equation} \label{aim} S^\star\ll{\E^*(A)^{1/2}|B|^{3/2}|C|^{3/2}(\log{|A|})^{1/2}}. \end{equation} If this is proven to be true, one can combine the upper and lower bounds on $S^\star$ from \eqref{aim} and \eqref{triv} respectively, and then a simple rearrangement completes the proof of the lemma. It remains to prove \eqref{aim}. To do this, first observe that \eqref{solutions} can be rewritten in the form \[ \frac{a_1}{a_2}=z=\frac{b_2+c_2}{b_1+c_1}. \] Note that we can divide by $b_1+c_1$ because we excluded $0$ in \eqref{solutions}. If we set $Q=(B+C)/(B+C)$ and \[r_Q(z)=|\{(b_1,b_2,c_1,c_2)\in{B\times{B}\times{C}\times{C}}: z=(b_2+c_2)/(b_1+c_1)\}|,\] then \[ S^\star=\sum_{z\in (A:A)\cap Q} r_{A:A}(z)r_Q(z). \] Applying Cauchy-Schwarz, we have \begin{equation} \label{CS 2} S^\star\leq \left(\sum_{z\in A:A} r_{A:A}^2(z)\right)^{1/2}\left(\sum_{z\in A:A} r_Q^2(z)\right)^{1/2}=\E_2^*(A)^{1/2}\left(\sum_{z\in A:A} r_Q^2(z)\right)^{1/2}. \end{equation} We will bound the RHS of \eqref{CS 2} using the following distributional estimate on $r_Q(z)$: \begin{claim*} Let $Z_t=\{z\colon r_Q(z)\geq t\}$. Then for all $t\geq 1$, \[ |Z_t|\ll\frac{|B|^3|C|^3}{t^2}. \] \end{claim*} If we assume this claim, then by dyadic decomposition: \begin{align*} \sum_{z\in A:A}r_Q^2(z)&\approx \sum_{j=1}^{\lceil\log|A:A|\rceil}2^{2j}|Z_{2^j}|\\ &\ll \sum_{j=1}^{\lceil\log|A:A|\rceil}2^{2j}\frac{|B|^3|C|^3}{2^{2j}}\leq 2|B|^3|C|^3\log|A|. \end{align*} Combining this with \eqref{CS 2} yields the desired bound on $S^\star$: \[ S^\star\ll \E_2^*(A)^{1/2}|B|^{3/2}|C|^{3/2}(\log|A|)^{1/2}.
\] This concludes the proof of Lemma \ref{mainlemma}, pending the claim. \begin{flushright} \qedsymbol \end{flushright} Now we will prove the claimed estimate for the distribution of $r_Q(z)$. \begin{proof}[Proof of Claim] First we will get an easy estimate for $|Z_t|$ from Markov's inequality. Since\footnote{$r_Q(z)$ is supported on $Q$, so if $t\geq 1$ we have $Z_t\subseteq Q$.} \[ t|Z_t|\leq\sum_{z\in Z_t}r_Q(z)\leq\sum_{z\in Q}r_Q(z)=|B|^2|C|^2, \] we have \begin{equation} \label{r_Q Markov} |Z_t|\leq\frac{|B|^2|C|^2}t. \end{equation} Note that if $|Z_t|\geq |B||C|$, then it follows from \eqref{r_Q Markov} that $t\leq |B||C|$. But then \[ \frac{|B|^2|C|^2}t\leq\frac{|B|^3|C|^3}{t^2}, \] so we have proved the claim in the case $|Z_t|\geq |B||C|$. Now we will prove the claim when $|Z_t|\leq |B||C|$ using Lemma \ref{sum+}. To do this we make a key observation, which is inspired by the Elekes-Sharir set-up from \cite{rectangles}: every solution of the equation \[z=\frac{b_2+c_2}{b_1+c_1}\] is a solution to the equation \[ b_2-zc_1=zb_1-c_2=y \] for some $y$. Thus \[ r_Q(z)\leq\sum_y r_{zB-C}(y)r_{B-zC}(y). \] By the arithmetic-geometric mean inequality \[r_{zB-C}(y)r_{B-zC}(y)\leq\frac{r_{zB-C}^2(y)+r_{B-zC}^2(y)}2,\] so \[ r_Q(z)\leq \frac{\E^+(zB,-C)+\E^+(B,-zC)}2. \] Now if $|Z_t|\leq |B||C|$, we can sum over $Z_t$ and apply Lemma \ref{sum+}: \[ t|Z_t|\leq\sum_{z\in Z_t}r_Q(z)\leq\frac 12\sum_{z\in Z_t}\E^+(zB,-C)+\frac 12\sum_{z\in Z_t}\E^+(B,-zC)\ll |B|^{3/2}|C|^{3/2}|Z_t|^{1/2}. \] Rearranging yields the estimate \[ |Z_t|\ll\frac{|B|^3|C|^3}{t^2}, \] as claimed. \end{proof} We remark here that this is not the only proof we have found of Lemma \ref{mainlemma} during the process of writing this paper. 
In particular, it is possible to write a ``shorter'' proof which is a relatively straightforward application of an upper bound from \cite{rectangles} on the number of solutions to the equation $$(a_1-b_1)(c_1-d_1)=(a_2-b_2)(c_2-d_2),$$ such that $a_i\in{A},\cdots,d_i\in{D}$. Although this proof may appear to be shorter, it relies on the bounds from \cite{rectangles}, which in turn rely on the deeper concepts used by Guth and Katz \cite{GK} in their work on the Erd\H{o}s distinct distance problem. For this reason, we believe that the proof given above is the more straightforward option. On the other hand, the latter approach leads to better logarithmic factors and works over the complex numbers (see the discussion at the end of the paper). The following corollary gives an analogous result for the third moment multiplicative energy; however, unlike Lemma \ref{mainlemma}, this result does not appear to be optimal. \begin{corollary}\label{E3} For any finite sets $A,B,C\subset{\mathbb{R}}$, we have $$\E^*_3(A)|A(B+C)|^4\gg{\frac{|A|^{6}|B|^2|C|^2}{(\log{|A|})^2}}.$$ \end{corollary} \begin{proof} By the Cauchy-Schwarz inequality, \begin{align*} \sum_xr_{A:A}^2(x)&=\sum_xr_{A:A}^{3/2}(x)r_{A:A}^{1/2}(x) \\&\leq{\left(\sum_xr_{A:A}^3(x)\right)^{1/2}\left(\sum_xr_{A:A}(x)\right)^{1/2}} \\&=\left(\E_3^*(A)\right)^{1/2}|A|, \end{align*} so that $(\E^*(A))^2\leq{\E^*_3(A)|A|^2}$. Meanwhile, Lemma \ref{mainlemma} gives $(\E^*(A))^2|A(B+C)|^4\gg{\frac{|A|^8|B|^2|C|^2}{(\log{|A|})^2}}$. Comparing these two bounds gives the desired result. \end{proof} \section{Proofs of Main Results} The first task in this section is to prove Theorem \ref{main1}. This will require an application of the Balog-Szemer\'{e}di-Gowers Theorem.
Following conventional notation, $\Gr$ represents a group, whose operation here is written additively\footnote{For our purposes the role of $\Gr$ will usually be played by the set of non-zero real numbers under multiplication.}, and $\E^+(A)$ has the same meaning as was given in Section 1. We will need the following result. \begin{theorem} Let $A\subseteq \Gr$ be a set, $K\geq{1}$ and $\E^+(A)\geq{\frac{|A|^{3}}{K}}$. Then there is $A'\subseteq A$ such that $$ |A'| \gtrapprox{\frac{|A|}{K}}\,, $$ and $$ |A'-A'| \lessapprox K^4 \frac{|A'|^3}{|A|^2} \,. $$ \label{BSG} \end{theorem} We remark that the first preprint of this paper used a different version of the Balog-Szemer\'{e}di-Gowers Theorem, due to Schoen \cite{schoen_BSzG}. Shortly after uploading this, we were informed by M.~Z.~Garaev of a quantitatively improved version of the Balog-Szemer\'{e}di-Gowers Theorem, in the form of Theorem \ref{BSG}. This leads to a small improvement in the statement of Theorem \ref{main1}, since our earlier result had an exponent of $\frac{3}{2}+\frac{1}{234}$. The proof of Theorem \ref{BSG} is short, arising from an application of Lemmas 2.2 and 2.4 in \cite{BG}. It is possible that further small improvements can be made to Theorem \ref{main1} by combining more suitable versions of the Balog-Szemer\'{e}di-Gowers Theorem with our approach. We will also need a sum-product estimate which is effective in the case when the product set or ratio set is relatively small. The best bound for our purposes is the following\footnote{In the notation of \cite{LiORN2}, we apply this bound with $C=-f(A)$ and $f(x):=\log{x}$.} (see \cite{LiORN2}, Theorem 1.2): \begin{theorem} \label{convex} Let $A\subset{\mathbb{R}}$. Then $$|A:A|^{10}|A+A|^9\gtrapprox{|A|^{24}}.$$ \end{theorem} \subsection*{Proof of Theorem \ref{main1}} Recall that Theorem \ref{main1} states that $$|A(A+A)|\gtrapprox{|A|^{\frac{3}{2}+\frac{1}{178}}}.$$ Write $\E^*(A)=\frac{|A|^3}{K}$.
Applying Lemma \ref{mainlemma} with $A=B=C$, it follows that $$\frac{|A|^3}{K}|A(A+A)|^2\gtrapprox{|A|^6},$$ and so \begin{equation} |A(A+A)|\gtrapprox{|A|^{3/2}K^{1/2}}. \label{lb1} \end{equation} On the other hand, by Lemma \ref{BSG}, there exists a subset $A'\subset{A}$ such that \begin{equation} |A'|\gg{\frac{|A|}{K}} \label{subset1} \end{equation} and \begin{equation} |A':A'|\lessapprox{K^4\frac{|A'|^3}{|A|^2}}. \label{subset2} \end{equation} Now, Theorem \ref{convex} can be applied, and this states that $$|A':A'|^{10}|A'+A'|^9\gtrapprox{|A'|^{24}}.$$ Applying \eqref{subset2}, it follows that $$\frac{|A'|^{30}}{|A|^{20}}K^{40}|A'+A'|^9\gtrapprox{|A'|^{24}},$$ so that after rearranging, and applying the crude bound $|A'|\leq{|A|}$, we obtain $$K^{40}|A'+A'|^9\gtrapprox{\frac{|A|^{20}}{|A'|^{6}}}\geq{|A|^{14}}.$$ Using another crude bound, \begin{equation}\label{f:crude} |A(A+A)| \geq |A+A| \geq |A'+A'|, \end{equation} yields \begin{equation} |A(A+A)|\gtrapprox{\frac{|A|^{14/9}}{K^{40/9}}}. \label{lb2} \end{equation} Finally, we note that the worst case occurs when $K\approx{|A|^{\frac{1}{89}}}$. If $K\geq{|A|^{\frac{1}{89}}}$, then \eqref{lb1} implies that $$|A(A+A)|\gtrapprox{|A|^{3/2}K^{1/2}}\geq{|A|^{\frac{3}{2}+\frac{1}{178}}},$$ whereas, if $K\leq{|A|^{\frac{1}{89}}}$, one can check that \eqref{lb2} tells us $$|A(A+A)|\gtrapprox{\frac{|A|^{14/9}}{K^{40/9}}}\geq{|A|^{\frac{3}{2}+\frac{1}{178}}}.$$ We have checked that $|A(A+A)|\gtrapprox{|A|^{\frac{3}{2}+\frac{1}{178}}}$ holds in all cases, and so the proof of Theorem \ref{main1} is complete. \begin{flushright} \qedsymbol \end{flushright} Let us show that the main result can be refined to obtain \begin{equation}\label{f:main_pred_1_new} |A(A+A)| \gg |A|^{\frac{3}{2}+\frac{1}{178}+\eps_0} \,, \end{equation} where $\eps_0>0$ is an absolute constant. To do this we need an asymmetric version of the Balog--Szemer\'{e}di--Gowers theorem, see \cite{TV}, Theorem 2.35.
\begin{theorem} Let $A,B\subseteq \Gr$ be two sets, $|B| \le |A|$, and $M\ge 1$ be a real number. Let also $L=|A|/|B|$ and $\eps \in (0,1]$ be a real parameter. Suppose that \begin{equation}\label{cond:BSzG_as} \E (A,B) \ge \frac{|A| |B|^2}{M} \,. \end{equation} Then there are two sets $H\subseteq \Gr$, $\L \subseteq \Gr$ and $z\in \Gr$ such that \begin{equation}\label{f:BSzG_as_1} |(H+z) \cap B| \gg_\eps M^{-O_\eps (1)} L^{-\eps} |B| \,, \quad \quad |\L| \ll_{\eps} M^{O_\eps (1)} L^\eps \frac{|A|}{|H|} \,, \end{equation} \begin{equation}\label{f:BSzG_as_2} |H - H| \ll_{\eps} M^{O_\eps (1)} L^\eps \cdot |H| \,, \end{equation} and \begin{equation}\label{f:BSzG_as_3} |A\cap (H+\L)| \gg_{\eps} M^{-O_\eps (1)} L^{-\eps} |A| \,. \end{equation} \label{t:BSzG_as} \end{theorem} {\it Proof of inequality (\ref{f:main_pred_1_new}).} To get $\eps_0$ we need to improve (\ref{f:crude}), that is, to show $|A(A+A)| \ge |A+A|^{1+\eps}$, where $\eps >0$ is some (other) absolute constant. Suppose not; then $\E^{*} (A,A+A) \gg_M |A|^2 |A+A|$, where $M=|A|^{\eps}$. Using Theorem \ref{t:BSzG_as} with $B=A$, $A=A+A$, we find, in particular, $H\subseteq A$ such that $|H| \gg_{M} |A|$ and $|H H^{-1}| \ll_M |H|$. Applying Theorem \ref{convex}, we see that $$ |A(A+A)| \ge |A+A| \ge |H+H| \gg_M |A|^{14/9} \,. $$ This completes the proof. $\hfill\Box$ As one can see, the number $\eps_0$ from (\ref{f:main_pred_1_new}) is a result of using the asymmetric version of the Balog--Szemer\'{e}di--Gowers Theorem, and is thus rather small. Note that the sum-product estimates in \cite{LiORN2} are quantitatively better when the sum set is replaced by the difference set $A-A$. To be precise, it is proven in \cite{LiORN2} that $$|A:A|^6|A-A|^5\gg{\frac{|A|^{14}}{(\log{|A|})^2}}.$$ Therefore, the argument in the proof of Theorem \ref{main1} gives a slightly better bound for the set $A(A-A)$. One can check that \begin{equation} |A(A-A)|\gtrapprox{|A|^{\frac{3}{2}+\frac{1}{106}}}.
\label{A(A-A)} \end{equation} Again, the asymmetric version of the Balog-Szemer\'{e}di-Gowers Theorem can then be used as above to prove that $$|A(A-A)|\gg{|A|^{\frac{3}{2}+\frac{1}{106}+\eps_0}},$$ where $\eps_0>0$ is an absolute constant. \subsection*{Proof of Theorem \ref{main2}} Recall that Theorem \ref{main2} states that $$|A(A+A+A+A)|\gg{\frac{|A|^2}{\log{|A|}}}.$$ The essential step in Solymosi's \cite{solymosi} work on the sum-product problem was to obtain an upper bound on the multiplicative energy in terms of the sum set, as follows: \begin{equation} \E^*(A)\ll{|A+A|^2\log{|A|}}. \label{soly3} \end{equation} We mention this bound explicitly because it will now be used in the proof of Theorem \ref{main2}. Apply Lemma \ref{mainlemma} with $B=C=A+A$. This implies that $$\E^*(A)|A(A+A+A+A)|^2\gg{\frac{|A+A|^2|A|^4}{\log{|A|}}}.$$ Applying the upper bound \eqref{soly3} on $\E^*(A)$ and then rearranging yields $$|A(A+A+A+A)|\gg{\frac{|A|^2}{\log{|A|}}}.$$ \begin{flushright} \qedsymbol \end{flushright} \subsection*{Proof of Theorem \ref{main3}} Recall that Theorem \ref{main3} states that $$|A(A+A+A)|\gtrapprox{|A|^{\frac{7}{4}+\frac{1}{284}}}.$$ For the reader's convenience, we begin by writing down a short proof of the fact that \begin{equation} |A(A+A+A)|\gtrapprox{\frac{|A|^{7/4}}{(\log{|A|})^{3/4}}}. \label{crudeaim} \end{equation} First note that, since $r_{A:A}(x)\leq{|A|}$ for any $x$, \begin{equation}\label{f:main3_E_E_3} \E^*_3(A)=\sum_{x\in{A:A}}r_{A:A}^3(x)\leq{|A|\sum_{x\in{A:A}}r_{A:A}^2(x)}=|A|\E^*(A) \,, \end{equation} so that \eqref{soly3} yields \begin{equation} \E^*_3(A)\ll{|A||A+A|^2\log{|A|}}. \label{soly4} \end{equation} Now, apply Corollary \ref{E3}, with $B=A$ and $C=A+A$. We obtain $$\E^*_3(A)|A(A+A+A)|^4\gg{\frac{|A|^8|A+A|^2}{(\log{|A|})^2}}.$$ Combining this with the upper bound on $\E^*_3(A)$ from \eqref{soly4}, it follows that $$|A(A+A+A)|\gg{\frac{|A|^{7/4}}{(\log{|A|})^{3/4}}},$$ which proves \eqref{crudeaim}.
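The exponent bookkeeping in these arguments is easy to get wrong, but it can be verified mechanically with exact rational arithmetic. The following snippet (a sanity check only, not part of any proof; variable names are ours) confirms the worst-case balancing $K\approx|A|^{1/89}$ from the proof of Theorem \ref{main1}, and the exponents in \eqref{crudeaim}:

```python
from fractions import Fraction as F

# Proof of Theorem main1: balance the bounds |A|^{3/2} K^{1/2} (lb1)
# and |A|^{14/9} K^{-40/9} (lb2).  Writing K = |A|^k, they coincide when
#   3/2 + k/2 = 14/9 - (40/9) k.
k = (F(14, 9) - F(3, 2)) / (F(1, 2) + F(40, 9))
assert k == F(1, 89)                  # worst case: K ~ |A|^{1/89}
exponent = F(3, 2) + k / 2
assert exponent == F(3, 2) + F(1, 178)

# Crude bound (crudeaim): E_3^*(A) |A(A+A+A)|^4 >> |A|^8 |A+A|^2 / log^2,
# combined with E_3^*(A) << |A| |A+A|^2 log, gives |A(A+A+A)|^4 >> |A|^7 / log^3.
assert F(8 - 1, 4) == F(7, 4)         # exponent of |A| is 7/4
assert F(2 + 1, 4) == F(3, 4)         # exponent of log|A| is 3/4
```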
Now, we will show how a slightly more subtle argument can lead to a small improvement in this exponent. Apply \eqref{soly3} and Lemma \ref{mainlemma}, with $B=A$ and $C=A+A$, so that \begin{equation}\label{tmp:29.10.2013_1} |A|^5|A+A|\lessapprox{\E^*(A)|A(A+A+A)|^2}\lessapprox{|A+A|^2|A(A+A+A)|^2} \,, \end{equation} and thus \begin{equation} |A+A||A(A+A+A)|^2\gtrapprox{|A|^5}. \label{first} \end{equation} Write $\E^*(A)=\frac{|A|^3}{K}$, for some value $K\geq{1}$. By the first inequality from (\ref{tmp:29.10.2013_1}), it follows that \begin{equation} |A(A+A+A)|\gtrapprox{|A|K^{1/2}|A+A|^{1/2}} \,. \label{second} \end{equation} Applying Solymosi's bound for the multiplicative energy then yields \begin{equation} |A(A+A+A)|\gtrapprox{|A|^{7/4}K^{1/4}}. \label{second2} \end{equation} Now, by Theorem \ref{BSG} there exists a subset $A'\subset{A}$ such that \begin{equation} |A'|\gtrapprox{\frac{|A|}{K}} \label{BSG1} \end{equation} and \begin{equation} |A':A'|\lessapprox{K^4\frac{|A'|^3}{|A|^2}}. \label{BSG2} \end{equation} By Theorem \ref{convex} and \eqref{BSG2}, \begin{align*} |A'|^{24}&\lessapprox{|A'+A'|^{9}|A':A'|^{10}} \\&\ll{|A+A|^9K^{40}\frac{|A'|^{30}}{|A|^{20}}}, \end{align*} and then $$|A+A|^9\gtrapprox{\frac{|A|^{20}}{|A'|^6K^{40}}}\geq{\frac{|A|^{14}}{K^{40}}}.$$ From the latter inequality we now have $|A+A|\gtrapprox{\frac{|A|^{14/9}}{K^{40/9}}}$. Comparing this with \eqref{second} leads to the following bound: \begin{equation} |A(A+A+A)|\gtrapprox{\frac{|A|^{16/9}}{K^{31/18}}}. \label{fourth} \end{equation} The worst case occurs when $K\approx{|A|^{1/71}}$. It can be verified that if $K<|A|^{1/71}$, then $$|A(A+A+A)|\gtrapprox{|A|^{\frac{7}{4}+\frac{1}{284}}},$$ by inequality \eqref{fourth}. On the other hand, if $K\geq{|A|^{1/71}}$, then it follows from inequality \eqref{second2} that $$|A(A+A+A)|\gtrapprox{|A|^{\frac{7}{4}+\frac{1}{284}}}.$$ Therefore, we have proved that \eqref{A(A+A+A)} holds for all $K$ (i.e. 
for all possible values of $\E^*(A)$), which concludes the proof. \begin{flushright} \qedsymbol \end{flushright} \section{Proofs of Results on Products of Translates} We record a short lemma which will be used in the proofs of Theorems \ref{translates2} and \ref{translates3}. \begin{lemma} \label{handylemma} Let $A\subset{\mathbb{R}}$ be a finite set. Then, for any $x\in{\mathbb{R}}$, $$|(A+x)(A+A)||A+A|\gg{\frac{|A|^3}{\log{|A|}}}.$$ \end{lemma} \begin{proof} Note that for any $x\in{\mathbb{R}}$ \begin{align} |A+A|^2&=|(A+x)+(A+x)|^2 \\&\gg{\frac{\E^*(A+x)}{\log{|A|}}} \label{al1} \\&\gg{\frac{|A|^6}{|(A+x)(A+A)|^2(\log{|A|})^2}}, \label{al2} \end{align} where \eqref{al1} is an application of Solymosi's bound \eqref{soly3}, and \eqref{al2} comes from Lemma \ref{mainlemma}. The lemma follows after rearranging this expression. \end{proof} \subsection*{Proof of Theorem \ref{translates1}} Recall that Theorem \ref{translates1} states that $$|A(A+a)|\gg{|A|^{3/2}}$$ holds for at least half of the elements $a$ belonging to $A$. Lemma \ref{sum*} tells us that, for some fixed constant $C$, $$\sum_{a\in{A}}\E^*(A,a+A)\leq{C|A|^{7/2}}.$$ Let $A'\subset{A}$ be the set $$A':=\{a\in{A} ~:~ \E^*(A,a+A)\leq{2C|A|^{5/2}}\},$$ and observe that $$2C|A|^{5/2}|A\setminus{A'}|\leq{\sum_{a\in{A\setminus{A'}}}\E^*(A,a+A)}\leq{C|A|^{7/2}},$$ which implies that $$|A\setminus{A'}|\leq{\frac{|A|}{2}}.$$ This implies that $|A'|\geq{\frac{|A|}{2}}$. To complete the proof, we will show that for every $a\in{A'}$ we have $|A(A+a)|\gg{|A|^{3/2}}$. To see this, simply observe that, for any $a\in{A'}$, $$\frac{|A|^4}{|A(A+a)|}\leq{\E^*(A,A+a)}\ll{|A|^{5/2}}.$$ The lower bound here comes from \eqref{CS}, whilst the upper bound comes from the definition of $A'$. Rearranging this inequality gives $$|A(A+a)|\gg{|A|^{3/2}},$$ as required.
\begin{flushright} \qedsymbol \end{flushright} We remark that it is straightforward to adapt this argument slightly---switching the roles of addition and multiplication and using Lemma \ref{sum+} in place of Lemma \ref{sum*}---in order to show that there exists a subset $A'\subset{A}$, such that $|A'|\geq{\frac{|A|}{2}}$, with the property that $$|A+aA|\gg{|A|^{3/2}},$$ for any $a\in{A'}$. It is also easy to adapt the proof of Theorem \ref{translates1} in order to show that, for any $0<\epsilon<1$ and any $A\subset{\mathbb{R}}$, there exists a subset $A'\subset{A}$ such that $|A'|\geq{(1-\epsilon)|A|}$, and for all $a\in{A'}$, $$|A(A+a)|\gg_{\epsilon}{|A|^{3/2}}.$$ In other words, the set $A(A+a)$ is large for all but a small positive proportion of elements $a\in{A}$. The analogous statement for $A+aA$ is also true. \subsection*{Proof of Theorem \ref{translates2}} Recall that Theorem \ref{translates2} states that $$|(A+a)(A+A)|\gg{\frac{|A|^{5/3}}{(\log{|A|})^{1/3}}}$$ holds for at least half of the elements $a$ belonging to $A$. This proof is similar to the proof of Theorem \ref{translates1}. Again, Lemma \ref{sum*} tells us that for a fixed constant $C$, we have $$\sum_{a\in{A}} \E^*(A+A,a+A)\leq{C|A|^2|A+A|^{3/2}}.$$ Define $A'\subset{A}$ to be the set $$A':=\{a\in{A} ~:~ \E^*(A+A,a+A)\leq{2C|A||A+A|^{3/2}} \},$$ and observe that $$2C|A||A+A|^{3/2}|A\setminus{A'}|\leq{\sum_{a\in{A\setminus{A'}}} \E^*(A+A,a+A)}\leq{C|A|^2|A+A|^{3/2}}.$$ This implies that $|A\setminus{A'}|\leq{\frac{|A|}{2}}$, and so $$|A'|\geq{\frac{|A|}{2}}.$$ Next, observe that, for any $a\in{A'}$, $$\frac{|A|^2|A+A|^2}{|(A+a)(A+A)|}\leq{\E^*(A+A,A+a)}\ll{|A||A+A|^{3/2}}.$$ The lower bound here comes from \eqref{CS}, whilst the upper bound comes from the definition of $A'$. After rearranging, we have \begin{equation} |(A+a)(A+A)|\gg{|A||A+A|^{1/2}}, \label{nearly} \end{equation} for any $a\in{A'}$. To complete the proof we need a useful lower bound on $|A+A|$. 
This comes from Lemma \ref{handylemma}, which tells us that for any $a\in{\mathbb{R}}$, and so certainly any $a\in{A}$, $$|A+A|^{1/2}\gg{\frac{|A|^{3/2}}{(\log{|A|})^{1/2}|(A+a)(A+A)|^{1/2}}}.$$ Finally, this bound can be combined with \eqref{nearly}, to conclude that $$|(A+a)(A+A)|\gg{\frac{|A|^{5/3}}{(\log{|A|})^{1/3}}},$$ as required. \begin{flushright} \qedsymbol \end{flushright} \subsection*{Another upper bound on the multiplicative energy} \newcommand{\differentfont}[1]{\ensuremath{#1}} Before proceeding to the proof of Theorem \ref{translates3}, it is necessary to establish another upper bound on the multiplicative energy. This is essentially a calculation, based on earlier work from \cite{GS} and \cite{TimORN}. We will need the following lemma: \begin{lemma} \label{multiplicity} Suppose that $\differentfont{A},\differentfont{B}$ and $\differentfont{C}$ are finite subsets of $\mathbb{R}$\/ such that $0\not\in A,B$, and $\alpha\in{\mathbb{R}\setminus{\{0\}}}$. Then, for any integer $t\geq{1}$, $$|\{s:r_{\differentfont{A}\differentfont{B}}(s)\geq{t}\}|\ll{\frac{|(\differentfont{A}+\alpha)\cdot{\differentfont{C}}|^2|\differentfont{B}|^2}{|\differentfont{C}|t^3}}.$$ \end{lemma} This statement is a slight generalisation of Lemma 3.2 in \cite{TimORN}. We give the proof here for completeness. \begin{proof} For some values $p$ and $b$, define the line $l_{p,b}$ to be the set $\{(x,y):y=(px-\alpha)b\}$. Let $\differentfont{L}$ be the family of lines $$\differentfont{L}:=\{l_{p,b}:p\in{(\differentfont{A}+\alpha)\differentfont{C}},b\in{\differentfont{B}}\}.$$ Observe that, since $\alpha$ is non-zero, $|\differentfont{L}|=|(\differentfont{A}+\alpha)\differentfont{C}||\differentfont{B}|.$\footnote{Note that it is not true in general that $|L|=|(A+\alpha)C||B|$. Indeed, if $0\in{B}$, then $l_{p,0}=l_{p',0}$ for $p\neq{p'}$, and so the lines may not all be distinct. However, we may assume again that zero does not cause us any problems. 
To be more precise, we assume that $0\notin{\differentfont{B}}$, as otherwise $0$ can be deleted, and this will only slightly change the implied constants in the statement of the lemma. If $0\notin{B}$, then the statement that $|L|=|(A+\alpha)C||B|$ is true.} Let $P_t$ denote the set of all $t$-rich points in the plane. By Corollary \ref{STcor}, for $t\geq{2}$, \begin{equation} |P_t|\ll{\frac{|\differentfont{B}|^2|(\differentfont{A}+\alpha)\differentfont{C}|^2}{t^3}+\frac{|\differentfont{B}||(\differentfont{A}+\alpha)\differentfont{C}|}{t}}, \label{ST3} \end{equation} and it can once again be simply assumed that \begin{equation} |P_t|\ll{\frac{|\differentfont{B}|^2|(\differentfont{A}+\alpha)\differentfont{C}|^2}{t^3}}. \label{ST4} \end{equation} This is because, if the second term from \eqref{ST3} is dominant, it must be the case that $$t>|(\differentfont{A}+\alpha)\differentfont{C}|^{1/2}|\differentfont{B}|^{1/2}\geq{\min{\{|\differentfont{A}|,|\differentfont{B}|\}}}.$$ However, in such a large range, $|\{s:r_{\differentfont{AB}}(s)\geq{t}\}|=0$, and so the statement of the lemma is trivially true. Next, it will be shown that for every $s\in{\{s:r_{\differentfont{AB}}(s)\geq{t}\}}$, and for every element $c\in{\differentfont{C}}$, \begin{equation} \left(\frac{1}{c},s\right)\in{P_t}. \label{finally} \end{equation} Once \eqref{finally} has been established, it follows that $|P_t|\geq{|\differentfont{C}||\{s:r_{\differentfont{AB}}(s)\geq{t}\}|}$. Combining this with \eqref{ST4}, it follows that \begin{equation} |\{s:r_{\differentfont{AB}}(s)\geq{t}\}|\ll{\frac{|\differentfont{B}|^2|(\differentfont{A}+\alpha)\cdot{\differentfont{C}}|^2}{|\differentfont{C}|t^3}}, \label{SThere} \end{equation} for all $t\geq{2}$. We can then check that \eqref{SThere} is also true in the case when $t=1$, since $$\frac{|B|^2|(A+\alpha)C|^2}{1^3|C|}\geq{|B|^2|(A+\alpha)C|}\geq{|AB|}=|\{s:r_{AB}(s)\geq{1}\}|.$$ It remains to establish \eqref{finally}.
To do so, fix $s$ with $r_{\differentfont{AB}}(s)\geq{t}$ and $c\in{\differentfont{C}}$. The element $s$ can be written in the form $s=a_1b_1=\cdots=a_tb_t$. For every $1\leq{i}\leq{t}$ we have \begin{align*} s&=a_ib_i \\&=(a_i+\alpha-\alpha)b_i \\&=\left(\frac{(a_i+\alpha)c}{c}-\alpha\right)b_i, \end{align*} which means that $\left(\frac{1}{c},s\right)$ belongs to the line $l_{(a_i+\alpha)c,b_i}$. This holds for every $1\leq{i}\leq{t}$, and the lines $l_{(a_i+\alpha)c,b_i}$ are pairwise distinct. Therefore, $\left(\frac{1}{c},s\right)\in{P_t}$, as claimed. This concludes the proof. \end{proof} We use this to prove another lemma: \begin{lemma} \label{etranslates} For any finite sets $\differentfont{A}$ and $\differentfont{C}$ in $\mathbb{R}$, and any $\alpha\in{\mathbb{R}\setminus{\{0\}}}$, $$\E^*(\differentfont{A})\ll{\frac{|(\differentfont{A}+\alpha)\differentfont{C}||\differentfont{A}|^2}{|\differentfont{C}|^{1/2}}}.$$ \end{lemma} \begin{proof} Let $\triangle\geq{1}$ be a fixed integer to be specified later.
Observe that $$\E^*(\differentfont{A})=\sum_{x}r_{\differentfont{A}:\differentfont{A}}^2(x)=\sum_{x\,:\,r_{\differentfont{A}:\differentfont{A}}(x) \le \triangle}r_{\differentfont{A}:\differentfont{A}}^2(x)+\sum_{x\,:\,r_{\differentfont{A}:\differentfont{A}}(x) > {\triangle}}r_{\differentfont{A}:\differentfont{A}}^2(x).$$ The first term is bounded by $$\sum_{x\,:\,r_{\differentfont{A}:\differentfont{A}}(x) \le \triangle}r_{\differentfont{A}:\differentfont{A}}^2(x)\leq{\triangle\sum_{x}r_{\differentfont{A}:\differentfont{A}}(x)}=\triangle|\differentfont{A}|^2.$$ For the second term, apply a dyadic decomposition and use Lemma \ref{multiplicity} as follows: \begin{align*} \sum_{x\,:\,r_{\differentfont{A}:\differentfont{A}}(x)>{\triangle}}r_{\differentfont{A}:\differentfont{A}}^2(x)&=\sum_{j}\sum_{x\,:\,2^{j-1}\triangle<{r_{\differentfont{A}:\differentfont{A}}(x)} \le 2^j\triangle}r_{\differentfont{A}:\differentfont{A}}^2(x) \\&\ll{\sum_{j}(2^j\triangle)^2\frac{|(\differentfont{A}+\alpha)\differentfont{C}|^2|\differentfont{A}|^2}{|\differentfont{C}|(2^j\triangle)^3}} \\&\ll\frac{|(\differentfont{A}+\alpha)\differentfont{C}|^2|\differentfont{A}|^2}{|\differentfont{C}|\triangle}. \end{align*} This shows that $$\E^*(\differentfont{A})\ll{\triangle|\differentfont{A}|^2+\frac{|(\differentfont{A}+\alpha)\differentfont{C}|^2|\differentfont{A}|^2}{|\differentfont{C}|\triangle}},$$ and we optimise by setting $\triangle=\left\lceil\frac{|(\differentfont{A}+\alpha)\differentfont{C}|}{|\differentfont{C}|^{1/2}}\right\rceil\approx{\frac{|(A+\alpha)C|}{|C|^{1/2}}}$. This gives $$\E^*(\differentfont{A})\ll{\frac{|(\differentfont{A}+\alpha)\differentfont{C}||\differentfont{A}|^2}{|\differentfont{C}|^{1/2}}},$$ as required. \end{proof} \subsection*{Proof of Theorem \ref{translates3}} Let $a$ and $b$ be distinct real numbers. We will show that \begin{equation} |(A+a)(A+A)|^5|(A+b)(A+A)|^2\gg{\frac{|A|^{11}}{(\log{|A|})^3}}.
\label{aim2} \end{equation} Once we have established \eqref{aim2}, the theorem follows, since this implies that for any $a,b\in{\mathbb{R}}$ with $a\neq{b}$, we have \begin{equation} \max{\{|(A+a)(A+A)|,|(A+b)(A+A)|\}}\gg{\frac{|A|^{11/7}}{(\log{|A|})^{3/7}}}, \label{aim3} \end{equation} and therefore, there may exist at most one value $x\in{\mathbb{R}}$ which violates the inequality $$|(A+x)(A+A)|\gg{\frac{|A|^{11/7}}{(\log{|A|})^{3/7}}}.$$ It remains to prove \eqref{aim2}. First, apply Lemma \ref{etranslates} with $A=A+a$, $C=A+A$ and $\alpha=b-a\neq{0}$. This yields \begin{align} \E^*(A+a)&\ll{\frac{|(A+a+(b-a))(A+A)||A|^2}{|A+A|^{1/2}}} \\&=\frac{|(A+b)(A+A)||A|^2}{|A+A|^{1/2}}. \label{upper} \end{align} Meanwhile, Lemma \ref{mainlemma} informs us that \begin{equation} \E^*(A+a)\gg{\frac{|A|^6}{|(A+a)(A+A)|^2\log{|A|}}}, \label{lower} \end{equation} and combining \eqref{upper} and \eqref{lower}, we have \begin{equation} |(A+a)(A+A)|^2|(A+b)(A+A)|\gg{\frac{|A|^4|A+A|^{1/2}}{\log{|A|}}}. \label{nearly2} \end{equation} Finally, we apply Lemma \ref{handylemma} which tells us that $$|A+A|^{1/2}\gg{\frac{|A|^{3/2}}{(\log{|A|})^{1/2}|(A+a)(A+A)|^{1/2}}}.$$ Plugging this bound into \eqref{nearly2} and rearranging, it follows that $$|(A+a)(A+A)|^5|(A+b)(A+A)|^2\gg{\frac{|A|^{11}}{(\log{|A|})^3}}.$$ Thus we have established \eqref{aim2}, which completes the proof. \begin{flushright} \qedsymbol \end{flushright} \subsection*{Proof of Theorem \ref{t:main_intr_II}} \label{sec:further} Before we prove Theorem \ref{t:main_intr_II}, we need some auxiliary statements. First we note a corollary of the proof of Lemma \ref{mainlemma}. \begin{corollary} \label{mainlemmacor} Let $A,B,$ and $C$ be finite subsets of $\mathbb{R}$, and let \[ S^\star=|\{(a,b,c,a',b',c')\in(A\times B\times C)^2\colon a(b+c)=a'(b'+c')\not=0\}| \] Then \begin{equation} S^\star= \sum_{x\not=0}r_{A(B+C)}^2(x)\ll \E_2^*(A)^{1/2}|B|^{3/2}|C|^{3/2}(\log|A|)^{1/2}. 
\end{equation} \end{corollary} Now recall a lemma from \cite{SS3}. \begin{lemma} \label{corpop} Let $A$ be a subset of an abelian group, $P_* \subseteq A-A$ and $\sum_{s\in P_*} |A_s| = \eta |A|^2$, $\eta \in (0,1]$. Then \begin{equation*} \sum_{s\in P_*} |A\pm A_s| \geq \frac{\eta^2 |A|^6}{\E_3(A) } \,. \end{equation*} \end{lemma} \begin{corollary} Let $A$ be a subset of an abelian group. Then \begin{equation}\label{f:E(A+A)_1} \E(A+A) \ge |A-A|^{-1} |A\m A + \D(A)|^2 \ge |A|^2 \max\{ |A-A|, |A+A| \} \,, \end{equation} \begin{equation}\label{f:E(A+A)_1.5} \E(A-A) \ge |A-A|^{-1} |A\m A - \D(A)|^2 \ge |A|^2 |A-A| \,, \end{equation} and \begin{equation}\label{f:E(A+A)_2} \E(A \pm A) \ge \frac{|A|^{12}}{\E^2_3 (A) |A-A|} \,. \end{equation} \label{c:E(A+A)} \end{corollary} \begin{proof} We prove the statement for sums; the result for differences can be obtained similarly. Put $S=A+A$ and $D=A-A$. By the Katz--Koester trick \cite{kk}, we get $$ |(A + A) \cap (A + A - s)| \ge |A + A_{s}| \,, $$ and \[ |(A - A) \cap (A - A - s)| \ge |A - (A\cap (A+s))| \,. \] Thus by the Cauchy--Schwarz inequality \begin{align*} \E(S) &\ge \sum_{s\in D} r_{S-S}^2 (s) \ge \sum_{s\in D} |A + A_s|^2 \ge |D|^{-1} \left( \sum_{s\in D} |A+A_s| \right)^2\\ &=|D|^{-1} |A \m A + \D(A)|^2\ge |A|^2 |A-A| \,, \end{align*} and, similarly, \begin{align*} \E(S) &\ge |D|^{-1} |A \m A + \D(A)|^2 \ge |D|^{-1} \left( \sum_{x\in S} |A+(A\cap (x-A))| \right) \left( \sum_{s\in D} |A+A_s| \right)\\ &\ge |A|^2 |A+A| \end{align*} as required. Here we have used the fact that $$ |A \m A + \D(A)| = \sum_{s\in D} |A+A_s| = \sum_{x\in S} |A+(A\cap (x-A))| \,, $$ which follows from the consideration of the projections of the set $A \m A + \D(A)$. More precisely, one has $A \m A + \D(A) = \{ (a_1+a,a_2+a) ~:~ a,a_1,a_2\in A \}$. Whence, writing $s=(a_1+a) - (a_2+a) = a_1-a_2 \in D$, we get $a_2 \in A_s$, $a+a_2 \in A+A_s$ and vice versa.
Similarly, putting $x=a_1+a_2 \in S$, one gets $a_2 \in A\cap (x-A)$, $a+a_2 \in A+(A\cap (x-A))$ and vice versa. Further, by Lemma \ref{corpop} $$ |A|^6 \le \E_3 (A) \sum_x D(x) r_{S-S} (x) \,. $$ Applying the Cauchy--Schwarz inequality, we get $$ |A|^{12} \le \E^2_3 (A) \E (S) |D| $$ and formula (\ref{f:E(A+A)_2}) follows. The result for the set $D$ is similar. \end{proof} Finally, we can prove Theorem \ref{t:main_intr_II}: \begin{proof}[Proof of Theorem \ref{t:main_intr_II}] We begin with the first formula of the result. Take $C=A-B$ in Corollary \ref{mainlemmacor}. Note that $r_{(A-B)+B}(a)\geq |B|$ for all $a\in A$, which implies that $r_{A(B+C)}(x)\geq r_{AA}(x)|B|$. Thus by Corollary \ref{mainlemmacor} we have\footnote{Note that $r_{AA}(x)=0$ for $x=0$ since we have assumed that $0\not\in A$.} \[ |B|^2\E^*_2(A)\leq\sum_{x\not=0}r_{A(B+C)}^2(x)\ll \E^*_2(A)^{1/2}|B|^{3/2}|A-B|^{3/2}(\log|A|)^{1/2}. \] Rearranging and applying the Cauchy-Schwarz lower bound for $\E_2^*(A)$ yields \[ \frac{|A|^4|B|}{|AA^{\pm 1}|}\leq |B|\E_2^*(A)\ll |A-B|^3\log|A|, \] as required. Combining (\ref{f:main_intr_2_new}) with Corollary \ref{c:E(A+A)}, we obtain (\ref{f:main_intr_2'-_new}). This completes the proof. \end{proof} \section*{Concluding remarks - the complex case} We conclude by pointing out that almost all of the results in this paper also hold in the more general case whereby $A$ is a finite set of complex numbers, since the tools we have made use of can all be extended in this direction. Indeed, the Szemer\'{e}di-Trotter Theorem was extended to points and lines in $\mathbb{C}^2$ by Toth \cite{toth}. More modern proofs have recently appeared due to Zahl \cite{zahl}, and Solymosi-Tao \cite{solytao}, although the latter of these results has exponents which are infinitesimally worse. The other main tool which has been imported to this paper is Solymosi's \cite{solymosi} bound on the multiplicative energy (which we earlier labelled \eqref{soly3}).
This result was recently extended to the case when $A\subset{\mathbb{C}}$ by Konyagin and Rudnev \cite{KR}. \section*{Acknowledgements} We would like to thank Antal Balog and Tomasz Schoen for several helpful conversations, and Misha Rudnev for helping to significantly simplify the proof of Lemma \ref{mainlemma}. We are grateful to M. Z. Garaev for informing us about Theorem \ref{BSG}.
\section{Stopping rule for \\anonymous workers} \label{sec:unweighted} We start with a simpler case when workers are anonymous, in the sense that there is no prior information on which workers are better than others. Absent such information, we treat all workers equally: essentially, we give each worker's vote the same weight. \OMIT{ Further, we rely on an intuition that the requester is willing to tolerate a higher error rate for HITs with a very small bias, in order to improve the error rate vs. average cost trade-off for the entire workload. } \subsection{Algorithm} For simplicity, let us assume there are only two answers: $A$ and $B$. In each round $t$, let $V_{A,t}$ and $V_{B,t}$ be the number of workers that vote for $A$ and $B$, respectively. Note that $t=V_{A,t}+V_{B,t}$. Our stopping rule is as follows: \begin{align}\label{eq:stopping-rule-unweighted} \text{Stop if}\; |V_{A,t} - V_{B,t}| \geq C\sqrt{t} - \eps t. \end{align} Here $\eps \geq 0$ and $C \geq 0$ are parameters that need to be chosen in advance. After the algorithm stops, the selection rule is simply to select the most frequent answer. Note that the right-hand side need not be an integer, so we can randomly round it to one of the two closest integers in a way that is proportional to the fractional part. \subsubsection{Discussion} Our intuition is that each worker's reply is drawn IID from some fixed distribution over answers; recall that the bias of a HIT is the difference between the top two probabilities in this distribution. For two answers: $$\bias = |\Pr[A]-\Pr[B]|$$ Informally, the meaning of parameter $\eps$ is that we are willing to tolerate a higher error rate for HITs with $\bias \leq \eps$, in order to improve the error-cost trade-off for the entire workload. We find in our simulations that a small value of $\eps$ performs better than $\eps=0$. Parameter $C$ controls the error-cost trade-off: increasing it increases the average cost and decreases the error rate.
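A minimal sketch of stopping rule \eqref{eq:stopping-rule-unweighted} for the two-answer case (function and variable names are ours, and the randomized rounding of the non-integer threshold is omitted for brevity):

```python
import math
import random

def run_hit(ask_worker, C=3.0, eps=0.1, max_rounds=200):
    """Ask workers one at a time; stop once |V_A - V_B| >= C*sqrt(t) - eps*t.

    ask_worker() returns 'A' or 'B'.  Returns (selected answer, cost t).
    """
    votes = {'A': 0, 'B': 0}
    for t in range(1, max_rounds + 1):
        votes[ask_worker()] += 1
        if abs(votes['A'] - votes['B']) >= C * math.sqrt(t) - eps * t:
            break
    # Selection rule: the most frequent answer.
    return ('A' if votes['A'] >= votes['B'] else 'B'), t

# Example: a HIT with bias 0.4 towards the correct answer 'A',
# i.e. Pr[A] = 0.7 and Pr[B] = 0.3.
worker = lambda: 'A' if random.random() < 0.7 else 'B'
answer, cost = run_hit(worker)
```

With unanimous workers the rule stops quickly (after 8 votes for $C=3$, $\eps=0.1$), while a HIT with small bias keeps collecting votes until the gap between the vote counts overtakes the threshold $C\sqrt{t}-\eps t$.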
In practice, the parameters $(C,\eps)$ should be adjusted to typical workloads to obtain the desirable error-cost trade-off. \subsubsection{Analysis} For a formal analysis, the idea is that our algorithm returns a correct answer with high probability if $\bias \geq \eps$. We consider the following two hypotheses: \begin{description} \item[(H1)] The correct answer is A and $\bias \geq \epsilon$, \item[(H2)] The correct answer is B and $\bias \geq \epsilon$. \end{description} Effectively, if one hypothesis is right, our algorithm rejects the other with high probability. With $\eps=0$, the expected cost (stopping time) is on the order of $\bias^{-2}$, in line with standard results on biased coin tossing. Using $\eps>0$ relaxes this to $(\eps+\bias)^{-2}$. For the sake of analysis, we allow the parameter $C$ to (logarithmically) depend on the time $t$. \begin{lemma}\label{lm:error-rate} Fix $\delta>0$. Consider the algorithm~\eqref{eq:stopping-rule-unweighted} with parameters $\eps>0$ and $C = \sqrt{\log (t^2/\delta)}$. Suppose this algorithm is applied to a HIT with $\bias = \eps_0$. \begin{description} \item[(a)] If $\eps_0\geq \eps$ then the algorithm returns a correct answer with probability at least $1-O(\delta)$. \item[(b)] The expected cost (stopping time) is at most $O\left( \rho^{-2}\, \log \tfrac{1}{\delta \rho} \right)$, where $\rho = \eps+\eps_0$. \end{description} \end{lemma} \begin{proof}[Sketch] Suppose $\Pr[A]\geq \Pr[B]$. Consider the difference $Z_t = V_{A,t} - V_{B,t}-\eps_0 t$, where $t$ ranges over rounds. Denote $C_t = \sqrt{\log (t^2/\delta)}$. Note that $Z_t$ is a random walk, so with high probability $|Z_t| \leq C_t\sqrt{t}$ (where the parameter $C_t$ controls how high this probability is). For part (a), assume that hypothesis (H1) holds, i.e. that $\eps_0\geq \eps$, but the algorithm returns an incorrect answer, i.e. stops at some round $t$ so that answer $B$ is chosen.
Then \begin{align*} V_{B,t} - V_{A,t} &> C_t\sqrt{t} - \eps t \\ Z_t &< (\eps-\eps_0) t-C_t\sqrt{t} \\ Z_t &< -C_t\sqrt{t}. \end{align*} Now, $Z_t<-C_t\sqrt{t}$ is a low-probability event: in fact, one can show that this event happens with probability at most $O(\delta)$. For part (b), we note that $Z_t>-C_t \sqrt{t}$ and $t\geq (2C_t/\rho)^2$ together imply $V_{A,t} - V_{B,t} > C_t\sqrt{t} - \eps t $, so that the algorithm stops and returns the correct answer. Since $Z_t>-C_t \sqrt{t}$ is a high-probability event, we conclude that the expected stopping time is as claimed. It is easy to make the above arguments formal via a standard application of the \emph{Azuma-Hoeffding Inequality}. \end{proof} \OMIT{Parameter $\eps$ represents an estimated guarantee on the bias (drift) of the opposite hypothesis that we are rejecting. In a way, if the bias (drift) is smaller than $\eps$ then this allows an incorrect answer that will increase the error rate. } \subsubsection{Extension to multiple answers} One can extend the stopping rule \eqref{eq:stopping-rule-unweighted} to more than two answers in an obvious way. Let $A$ and $B$ be the answers with the largest and second-largest number of votes, respectively. The stopping rule is \begin{align}\label{eq:stopping-rule-unweighted-multiple} \text{Stop if}\; V_{A,t} - V_{B,t} \geq C\sqrt{t} - \eps t. \end{align} The selection rule is to select the most frequent answer. Lemma~\ref{lm:error-rate} easily carries over to multiple answers (we omit the details from this version). \subsection{Experimental Results} We used a simulated workload, consisting of 10,000 HITs, each with two answers. For each HIT, the bias towards the correct answer (the difference between the probabilities of the two answers) was chosen uniformly at random in the interval $[0.1, 0.6]$.
This closely matches an empirical distribution of biases, as we have found in the previous experiments. For each worker answering this HIT, the answer was chosen independently at random with the corresponding bias. For each pair $(\eps,C)$ of parameters, running our algorithm on a single HIT gives a two-fold outcome: the cost and whether the correct answer was chosen. Thus, running our algorithm on all HITs in our workload results in two numbers: average cost and error rate (over all HITs). We plot these pairs of numbers on a coordinate plane (where the axes are average cost and error rate). Thus, fixing $\eps$ and varying $C$ we obtain a curve on this plane, which we call the \emph{varying-$C$ curve}. We consider several values for $\eps$, ranging from $0$ to $1$. For each value of $\eps$, we plot the corresponding varying-$C$ curve (Figure~\ref{fig:chart1}). Surprisingly, we find that, up to some minor noise, for any two varying-$C$ curves it holds that one lies below the other. This did not have to be the case, as two curves could criss-cross. If one varying-$C$ curve lies below another varying-$C$ curve, this means that the $\eps$ parameter for the former curve is always better: for any $C$, it gives better average cost for the same error rate. Thus, we find that for any two significantly different values of parameter $\eps$, one value is better than another, regardless of the value of $C$. From Figure~\ref{fig:chart1}, we find that the most promising range for $\eps$ is $[.05, .2]$. We zoom in on this range in Figure~\ref{fig:chart2}.
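The simulated experiment can be reproduced along the following lines (a sketch with our own function names, not the original experimental harness; the workload parameters follow the description above):

```python
import math
import random

def simulate_workload(C, eps, n_hits=10_000, max_rounds=200, seed=0):
    """Return (average cost, error rate) for one (eps, C) parameter pair."""
    rng = random.Random(seed)
    total_cost = errors = 0
    for _ in range(n_hits):
        bias = rng.uniform(0.1, 0.6)    # bias towards the correct answer
        p_correct = 0.5 + bias / 2      # Pr[correct] - Pr[wrong] = bias
        v_correct = v_wrong = 0
        for t in range(1, max_rounds + 1):
            if rng.random() < p_correct:
                v_correct += 1
            else:
                v_wrong += 1
            if abs(v_correct - v_wrong) >= C * math.sqrt(t) - eps * t:
                break
        total_cost += t
        errors += v_wrong >= v_correct  # count ties as errors, conservatively
    return total_cost / n_hits, errors / n_hits

# One point on the varying-C curve for eps = 0.1; sweeping C traces the curve.
avg_cost, err_rate = simulate_workload(C=3.0, eps=0.1)
```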
\begin{figure}[h] \centering \input{chart1.tex} \caption{Cost-quality trade-off for various $\eps$ values.} \label{fig:chart1} \end{figure} \begin{figure}[h] \centering \input{chart2.tex} \caption{Cost-quality trade-off: zooming in on the most promising $\eps$ values.} \label{fig:chart2} \end{figure} \section{Discussion} \label{discussion} From a practical perspective, we have identified several rules-of-thumb that in our experience one should follow when designing a crowdsourcing task. We list them here in the hope that they will prove useful to practitioners of the field: \begin{OneLiners} \item Do use gold HITs. \item Do answer some of the HITs yourself. This will often reveal problems in the task design, such as ambiguous questions or cases not covered by the possible answers. The solved HITs can also be used as gold HITs. \item Use adaptive stopping rules rather than a fixed number of workers per HIT. \item Experiment with a value of $C$ that will give the desired error rate. We have found in practice that the relationship between $C$ and error rate does not significantly depend on the task at hand. So one can use the $C$ value from another task as an initial estimate for $C$. \item Pick a value for $\eps$ from the range $[.05, .2]$. \item If worker reputation scores are available, do use a weighted stopping rule and do update the worker weights after every answer. \item Increase the weights of the better workers, and decrease the weights of the worse workers. Update the weights by $5\%-10\%$ after every answer. \end{OneLiners} \vspace{0.5cm} In practice, our algorithms can be implemented either by a crowdsourcing platform, as a service to requesters, or by individual requesters themselves. 
In the latter case, the platform needs to provide requesters with a capability to adaptively decide when to stop processing a given HIT.\footnote{Much of the work on algorithms for crowdsourcing requires some platform support even if algorithms are mostly implemented requester-side. See \cite{Crowdsourcing-PositionPaper13} for a related discussion of modeling choices for the platform and the requesters.} To implement our algorithm for scalable gold HIT creation, requesters could use a capability to choose which worker to ask, if one is provided by the platform, or run batches and use the platform's API to pre-qualify workers for further iterations. \section{Conclusions} \label{conclusions} Information retrieval relies on assessments and high-quality labels for measuring performance. As new and different types of media sources, such as micro-blogs and social media, become available, the need emerges to build search and content organization infrastructure for them. Gathering high-quality labels thus becomes an essential part of building this infrastructure. Ground truth generation and adaptive adjustment of the number of workers thus apply to a wide range of assessment tasks. In this paper, we mainly focus on the issue of deciding how many workers to ask for a given HIT. The number of workers asked defines a trade-off between the cost of the HIT and the error rate of the final answer. We propose an adaptive stopping rule which, every time a worker is asked, decides whether to stop or to ask another worker. The stopping rule takes into account the differences in the workers' answers and the uncertainty from the limited number of these answers. This allows asking few workers for easy HITs, where their answers are mostly identical, and thus incurring low cost. On the other hand, for harder HITs, more workers are asked in order to maintain a low error rate.
A simpler scheme that uses a fixed number of workers per HIT wastes answers on the easy HITs while collecting too few on the harder HITs. When workers' skill levels are approximately known from their past performance, we can improve the stopping rule to take the skill levels into account. The difficulty of a new HIT is, as before, assumed to be unknown. From our data analysis we know that all workers tend to perform well on easy HITs, whereas on harder HITs the skill level of the workers tends to make a big difference. We can thus estimate the HIT difficulty by the number of answers collected when the stopping rule decided to stop. We use this information to re-weight the answers of the workers according to their known skill, so that for harder HITs we rely more on the better workers. One can envision an approach where the worker skill is not known beforehand but can be learned algorithmically. For example, after the stopping rule decides to stop and produce a final answer for the HIT, we could compare the worker's answer to the final answer. If the answers match, we can assume the worker gave a correct answer. This approach is particularly suitable for the problem of scalable gold HIT creation. However, further research is required to establish whether this can produce accurate results in practice or whether it leads to ``self-fulfilling loops'' where the workers who are considered skilled provide the same wrong answer. Such an answer is then interpreted as the ``correct'' answer by the system, which in turn reinforces the belief that these workers are highly skilled. While our stopping rules return a single answer for a given HIT, they can be extended to HITs with \emph{several} correct answers. For example, if the vote difference is small between the top two answers, but large between the second and the third answer, then we could stop and output the top two answers as both being correct.
With similarly simple modifications, the rules can be expanded to deal with HITs in which the answers correspond to specific numerical values. In that case, it is not only the vote difference that matters but also the difference between the corresponding numerical values. These extensions are the subject of future research. \section{Scalable Gold HIT creation} \label{gold-hit} We turn to the problem of scalable gold HIT creation, as described in the Introduction. We consider a stylized model with heterogeneity in worker quality (but not in HIT difficulty). The system processes a stream of HITs, possibly in parallel. Each HIT is assigned to workers, sequentially and adaptively, at unit cost per worker, until the gold HIT answer is generated with sufficient confidence or the system gives up. Worker skill levels are initially not known to the algorithm, but can be estimated over time based on past performance. The goal is to minimize the total cost while ensuring a low error rate. \subsection{Algorithm} We adopt the following idea from prior work on multi-armed bandits: for each worker, combine exploration and exploitation in a single numerical score, updated over time, and at each decision point choose a worker with the highest current score~\cite{Thompson-1933,Gittins-index-79,bandits-ucb1}. This score, traditionally called an \emph{index}, takes into account both the average skill observed so far (to promote exploitation) and the uncertainty from insufficient sampling (to promote exploration). Over time, the algorithm zooms in on more skilled workers. We use a simple algorithm that builds on~\cite{bandits-ucb1,BanditSurveys-colt13}. For each worker $i$, let $t_i$ be the number of HITs that worker $i$ has performed and for which the algorithm has generated a gold HIT answer, and let $t^+_i$ be the number of those HITs where the worker's answer coincides with the gold HIT answer.
If $t_i\geq 1$, we define this worker's index as \begin{align*} \ensuremath{\mathtt{Index}}_i = \frac{t^+_i}{t_i} + \frac{1}{\sqrt{t_i}}. \end{align*} Note that $\ensuremath{\mathtt{Index}}_i \leq 2$. For initialization, we set $\ensuremath{\mathtt{Index}}_i=2$. Having defined $\ensuremath{\mathtt{Index}}_i$, the algorithm is very simple: \begin{itemize} \item At each time step, pick a worker with the highest index, breaking ties arbitrarily. \item For each HIT, use the unweighted stopping rule \eqref{eq:stopping-rule-unweighted-multiple} to decide whether to stop processing this HIT. Then the gold HIT answer is defined as the majority answer. \end{itemize} \noindent (If an exogenous reputation system is available, one can use the weighted stopping rules developed in Section~\ref{sec:weighted}.) \subsection{Experimental Results} To study the empirical performance of our index-based algorithm, we use a simulation parameterized by real data as follows. We focus on HITs with binary answers. We have $1,000$ workers and each worker generates a correct answer for each HIT independently, with some fixed probability (\emph{success rate}) which reflects her skill level. The success rate of each worker is drawn independently from a realistic ``quality distribution'' \ensuremath{\mathcal{D}_{\mathtt{qty}}}. We determined $\ensuremath{\mathcal{D}_{\mathtt{qty}}}$ by examining a large set ($>1,500$) of real workers from our internal platform (cf. Section \ref{data_analysis}), and computing their average success rates over several months. Thus we obtained an empirical quality distribution, which we approximate by a low-degree polynomial (see Figure~\ref{fig:quality}).
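As a concrete illustration, the index computation and worker selection described above can be sketched as follows (a minimal Python rendition; the stopping rule and HIT processing are elided, and the recorded worker statistics below are hypothetical).

```python
import math

class WorkerIndex:
    """UCB-style index from the text: the mean success rate on past
    gold HITs plus a 1/sqrt(t_i) exploration bonus.  The index is at
    most 2, and unexplored workers are initialized to 2."""

    def __init__(self, n_workers):
        self.t = [0] * n_workers       # gold HITs the worker took part in
        self.t_plus = [0] * n_workers  # ... where she matched the gold answer

    def index(self, i):
        if self.t[i] == 0:
            return 2.0  # optimistic initialization
        return self.t_plus[i] / self.t[i] + 1.0 / math.sqrt(self.t[i])

    def pick_worker(self):
        # Highest index wins; Python's max breaks ties by worker id.
        return max(range(len(self.t)), key=self.index)

    def record(self, i, matched_gold):
        self.t[i] += 1
        self.t_plus[i] += int(matched_gold)

idx = WorkerIndex(3)
idx.record(0, True); idx.record(0, False)  # worker 0: 1 of 2 matched
idx.record(1, True); idx.record(1, True)   # worker 1: 2 of 2 matched
# Worker 2 is unexplored, so her index is still 2 and she is picked next.
picked = idx.pick_worker()
```

Note how the exploration bonus keeps the unexplored worker ahead even of a worker with a perfect track record, until enough evidence accumulates.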
\begin{figure}[h] \centering \input{chart_worker_pdf.tex} \caption{Worker quality distribution \ensuremath{\mathcal{D}_{\mathtt{qty}}}.} \label{fig:quality} \end{figure} We compare our index-based algorithm to a naive algorithm, called \texttt{Random}, which assigns each HIT to a random worker. Both algorithms use the same unweighted stopping rule \eqref{eq:stopping-rule-unweighted-multiple}. In our simulation, each algorithm processes HITs one by one (but in practice the HITs could be processed in parallel). Recall that the stopping rule comes with two parameters, $\eps$ and $C$. We consider three different values of $\eps$, namely $\eps=0$, $\eps=0.05$ and $\eps=0.1$. (Recall that according to our simulations in Section~\ref{sec:unweighted}, $[0.05,0.2]$ is the most promising range for $\eps$.) For each algorithm and each value of $\eps$, we vary the parameter $C$ to obtain different cost vs. quality trade-offs. For each value of $C$, we compute $5K$ gold HITs using each algorithm. Thus, for each algorithm and each value of $\eps$ we obtain a varying-$C$ curve\xspace. The simulation results are summarized in Figure~\ref{fig:main-experiment}. The main finding is that our index-based algorithm reduces the per-HIT average cost by 35\% to 50\%, compared to \texttt{Random} with the same error rate. Recall that the ``cost'' here refers to the number of workers, which in practical terms translates to both time and money. Thus, we suggest adaptive exploration, and particularly index-based algorithms, as a very promising approach for automated gold HIT creation. \begin{figure}[h] \centering \input{chart_5K_cost_vs_error.tex} \caption{Simulation results.} \label{fig:main-experiment} \end{figure} \section{Introduction} Crowdsourcing has become a central tool for improving the quality of search engines and many other large-scale on-line services that require high-quality data, assessments or labels.
In this usage of crowdsourcing, a task or parts thereof are broadcast to multiple independent, relatively inexpensive workers, and their answers are aggregated. Automating and optimizing this process at a large scale makes it possible to significantly reduce the costs associated with setting up, running, and analyzing experiments that contain such tasks. In a typical industrial scenario that we consider in this paper, a \emph{requester} has a collection of human intelligence tasks (HITs), where each HIT has a specific, simple structure and involves only a small amount of work. We focus on multiple-choice HITs, that is, HITs that contain a question with several possible answers. The goal of the requester is to learn the preference of the crowd on each of the HITs. For example, if a HIT asks whether a particular URL should be labeled as spam and most workers believe it should, then the requester would like to learn this. This abstract scenario with multiple-choice HITs covers important industrial applications such as relevance assessment and other optimizations in a web search engine and construction of training sets for machine learning algorithms. Obtaining high-quality labels is important not only for model training and development but also for quality evaluation. The requester has two goals: extract high-quality information from the crowd (i.e., reduce the error rate), and minimize costs (e.g., in terms of money and time spent). There is a tension between these two goals; we will refer to it as the \emph{quality-cost trade-off}. In practice, it is assumed that there is some noise from the crowd, so the requester defines in advance how many workers are needed per assignment for the whole task. This approach may not always be the right thing to do. For example, assessing the relevance of the query-URL pair (\texttt{\small facebook}, \texttt{\small www.facebook.com}) should need no more than one or two workers for such a popular destination.
In contrast, the pair (\texttt{\small solar storms}, \texttt{\small solarstorms.org}) would require more workers, as the topic may not be familiar to some. Using a fixed number of workers may thus waste resources on cases where they are not needed, or fail to pool enough answers on assignments that require more. Wouldn't it be useful if there were a flexible mechanism for adjusting the number of workers? For cost-efficiency, one needs to take into account the heterogeneity in task difficulty and worker skills: some tasks are harder than others, and some workers are more skilled than others. Further, workers' relative skill level may vary from one task to another. In general, it is desirable to (1) use less aggregation for easier tasks, and (2) use more skilled workers. The system initially has a very limited knowledge of task difficulty, and possibly also of worker skills, but both can, in principle, be learned over time. \\ A common application that stems from the assessment scenario is the generation of ground truth or gold standard, usually called \emph{gold} HITs or \emph{honey pots}. These gold HITs are a set of HITs whose answers are known in advance. They can be a very effective mechanism for measuring worker performance and data quality. Gold HITs are usually generated manually, typically by hired domain experts. This approach is not scalable: it is expensive, time-consuming and error-prone. We believe that much more automated systems should be available, whereby a requester starts with a relatively small gold HIT set for bootstrapping, and uses the crowd to generate arbitrarily larger gold HIT sets of high quality. A central challenge in designing a mechanism for automated gold HIT creation is cost-efficient quality control. With error-prone workers, one needs to aggregate the answers of several workers to obtain a statistically robust answer for a gold HIT.
\OMIT{ We provide a data analysis of a real crowdsourcing system and design new stopping rule algorithms based on our conclusions. There could be other ways a quality score can be used, but we argue that there is good reason to consider systems that only use them to optimize the stopping rule. In particular, we assume that each worker gets paid the same payment for each HIT she answers and we assume that the platform cannot choose which workers show up and answer the tasks. These assumptions match the current behavior of many industrial crowdsourcing platforms. Moreover adding differential payments and/or differential task assignment introduces a wide range of complexities. We believe that modifying these assumptions may raise some interesting research questions that are beyond the scope of this paper. } \OMIT{ Gathering relevance labels at scale is a difficult problem that requires an iterative and time consuming process. Our goal is to help the requester by providing an algorithmic solution for collecting high quality labels for any type of multiple-choice tasks. } \xhdr{Logged data analysis.} We collected and analyzed a real-world data set from logs of \emph{UHRS}, a large in-house crowdsourcing platform operated by Microsoft. We note that our data set cannot be easily replicated on a publicly accessible crowdsourcing platform such as Amazon Mechanical Turk. Indeed, this is a much larger data set (250,000 total answers) than one could realistically collect via targeted experiments (i.e., without access to platform's logs) because of budget and time limitations. Moreover, using realistic HITs in an open experiment tends to be difficult because of trade secrets. We make two empirical observations. First, we find that the difficulty of a random HIT is distributed near-uniformly across a wide range. 
Second, we investigate the interplay between HIT difficulty and worker quality, and we find that the high-quality workers are significantly better than the low-quality workers for the relatively harder tasks, whereas there is very little difference between all workers for the relatively easy tasks. These observations motivate our algorithms and allow us to construct realistic simulated workloads. The above observations are based on a large-scale data analysis, which makes them valuable even if they may seem intuitive to one's common sense (albeit perhaps counterintuitive to someone else's). UHRS, Amazon Mechanical Turk, CrowdFlower, and others have similar architectural characteristics (e.g., HITs, task templates, payment system), so our data should be comparable to other platforms. Due to the proprietary nature of UHRS, this is the best information that we can share with the community. \xhdr{Stopping rules for a single HIT.} We consider obtaining a high-quality answer for a single HIT. We investigate a natural \emph{adaptive} approach in which the platform adaptively decides how many workers to use before stopping and choosing the answer. The core algorithmic question here is how to design a \emph{stopping rule}: an algorithm that at each round decides whether to stop or to ask one more worker. An obvious quality-cost trade-off is that using more workers naturally increases both costs and quality. In view of our empirical observations, we do not optimize for a particular difficulty level, but instead design \emph{robust} algorithms that provide a competitive cost-quality trade-off for the entire range of difficulty. As a baseline, we consider a scenario where workers are ``anonymous'', in the sense that the stopping rule cannot tell them apart. We design and analyze a simple stopping rule for this scenario, and optimize its parameters using a realistic simulated workload.
As workers vary in skill and expertise, one can assign quality scores to workers based on their past performance (typically, as measured on gold HITs). We investigate how these quality scores can help in building better stopping rules. While an obvious approach is to assign a fixed ``voting weight'' to each worker depending on the quality score, we find that more nuanced approaches perform even better. Given our empirical observations, we would like to utilize all workers for easy tasks, while giving more weight to better workers on harder tasks. As the task difficulty is not known a priori, we use the stopping time as a proxy: we start out believing that the task is easy, and change the belief in the ``harder'' direction over time as we ask more workers. We conduct simulations based on the real workload, and conclude that this approach performs better than the ``fixed-weight'' approach. We focus on the workers' quality scores that are given externally. This is for a practical reason: it is extremely difficult to design the entire crowdsourcing platform as a single algorithm that controls everything. Instead, one is typically forced to design the system in a modular way. In particular, while different requesters may want to have their own stopping rules, the system may have a separate ``module'' that maintains workers' quality scores over different requesters. \OMIT{ Accordingly, we start with all workers having equal weights, and gradually modify the weights over time: increase weights for better workers and/or decrease weights for worse workers. We consider several algorithms based on this weight-modifying approach, and compare them to more obvious algorithms that do not change weights over time. } \xhdr{Scalable gold HIT creation.} Creating gold HITs presents additional challenges compared to the normal HITs. 
As the quality of the entire application hinges on the correctness of gold HITs, it is feasible and in fact desirable to route gold HITs to more reliable workers on the crowdsourcing platform. Note that the other workers would not be starved as they can work on other HITs. However, worker quality is typically estimated via performance on the gold HITs that are already present in the system, so the estimates may be very imprecise initially, and gradually improve over time as more gold HITs are added. To find answers for individual HITs in a cost-efficient manner, one can use stopping rules as described above. We tackle these challenges using ideas from \emph{multi-armed bandits}, a problem space focused on sequentially choosing between a fixed and known set of alternatives with the goal of increasing the cumulative reward and/or converging on the best alternative. A multi-armed bandit algorithm needs to trade off \emph{exploration}, trying out various alternatives in order to gather information, possibly at the expense of short-term gains, and \emph{exploitation}, choosing alternatives that perform well based on the information collected so far. We consider a stylized model in which HITs arrive one by one, and the system sequentially assigns workers to a given HIT until it concludes that the answer is known with sufficient confidence. In particular, such a system needs to ``explore'' the available workers in order to estimate their quality. We incorporate an insight from multi-armed bandits called \emph{adaptive exploration}: not only the exploitation decisions, but also the exploration schedule itself can be adapted to the data points collected so far (e.g., we can give up early on low-performing alternatives). To implement adaptive exploration, we take a well-known approach from prior work on multi-armed bandits and tailor it to our setting, connecting it with the stopping rules described above.
We evaluate our algorithm on a simulation parameterized by a realistic workload, and find that it performs significantly better than the baseline uniform assignment of workers. \xhdr{Sanitization.} Since our data set is somewhat sensitive, we, unfortunately, have been required to sanitize our results. In particular, for algorithm evaluation we used simulated workloads (parameterized by the properties of the collected real-life data). One advantage of using simulated workloads is that one can replicate our algorithm evaluation (after choosing some values for the first column in Table~\ref{tab:worker-HIT-groups-diff}). Also, we have been able to generate as much simulated data as needed for the experiments, whereas the available number of workers in the original data set was insufficient for some HITs. \xhdr{A note on scope.} We do not consider the issue of worker availability. While some workers may be more available than others, it is not clear how to incorporate that into algorithm evaluation with our data set. Likewise, handling ``spammers'' is outside of our scope. \xhdr{Organization of the paper.} The paper is organized as follows. Section \ref{related_work} summarizes the related work in this area. We describe preliminary background in Section \ref{preliminaries}. We provide an analysis using data from an industrial platform in Section \ref{data_analysis}. The design of a stopping rule for anonymous workers and its evaluation are described in Section \ref{sec:unweighted}. Similarly, the case of non-anonymous workers is described in Section \ref{sec:weighted}. The gold HIT creation method is described in Section \ref{gold-hit}. In Section~\ref{discussion} we list a number of practical recommendations based on our findings. Finally, conclusions and future work are outlined in Section \ref{conclusions}. \vspace{0.3cm} \section{Related work} \label{related_work} For general background on crowdsourcing and human computation, refer to Law and von Ahn~\cite{Law11}.
The use of crowdsourcing as a cheap, fast and reliable mechanism for gathering labels was demonstrated in the areas of natural language processing~\cite{Snow08}, machine translation~\cite{Callison-Burch09} and information retrieval~\cite{AlonsoM12} by running HITs on Amazon Mechanical Turk or CrowdFlower and comparing the results against an existing ground truth. While early publications have shown that majority voting is a reasonable approach to achieve good results, new strategies have emerged in the last few years. Jointly with that, several papers consider \emph{task allocation}, the problem of allocating tasks to workers (or vice versa). Below we discuss the most related papers, mainly pointing out the differences between them and us. Due to the sheer volume of related work, a more thorough review is out of our scope. Oleson et al.~\cite{Oleson} propose to use the notion of \emph{programmatic gold}, a technique that employs manual spot checking and detection of bad work, in order to reduce the amount of manual work. Ground truth creation is a problem for new evaluation campaigns when no gold standard is available. Blanco et al.~\cite{Blanco11} rely on manual creation of gold answers (positive and negative) for monitoring worker quality in a semantic search task. Scholer et al.~\cite{Scholer11} study the feasibility of using duplicate documents as ground truth in test collections. Sheng et al.~\cite{Sheng08} design an algorithm that adaptively decides how many labels to use on a given HIT based on the distribution of all previously encountered HITs. Crucially, they assume that all HITs have the same difficulty for a given worker. However, our empirical evidence shows that HITs have widely varying difficulty levels; our algorithms are tailored to deal with this heterogeneity. Also, they optimize the quality of an overall classification objective, rather than the error rate. 
Vox Populi~\cite{Dekel09} is a data cleaning algorithm that prunes low-quality workers with the goal of improving a training set. The technique uses the aggregate label as an approximate ground truth and eliminates the workers that tend to provide incorrect answers. Karger et al.~\cite{KOS11} optimize task allocation given budgets. Unlike ours, their solution is non-adaptive, in the sense that the task allocation is not adapted to the answers received so far. Further, \cite{KOS11} assume a known Bayesian prior on both tasks and judges, whereas we do not. From a methodology perspective, CrowdSynth~\cite{Kamar12} focuses on addressing consensus tasks by leveraging supervised learning. Parameswaran et al.~\cite{CrowdScreen-sigmod12} consider a setting similar to our stopping rules for HITs with two possible answers. Unlike us, they assume that all HITs have the same difficulty level, and that the (two-sided) error probabilities are known to the algorithm. They focus on designing algorithms for computing an optimal stopping rule. Settings similar to stopping rules for anonymous workers, but incomparable on a technical level, were considered in prior work, e.g. \cite{Bechhofer59}, \cite{Ramey79}, \cite{Bechhofer85}, \cite{Dagum-sicomp00}, \cite{Mnih-icml08}, \cite{BanditSurveys-colt13}. \xhdr{Scalable gold HIT creation.} Our model emphasizes explore-exploit trade-off, and as such is related to multi-armed bandits; see \cite{CesaBL-book,Bubeck-survey12} for background on bandits and \cite{Crowdsourcing-PositionPaper13} for a discussion of explore-exploit problems that arise in crowdsourcing markets. Our algorithm builds on a bandit algorithm from Auer et al.~\cite{bandits-ucb1}. Ho et al.~\cite{Jenn-icml13}, Abraham et al.~\cite{BanditSurveys-colt13} and Chen et al.~\cite{Chen-icml13} consider models for adaptive task assignment with heterogeneity in task difficulty levels and worker skill that are technically different from ours.
In~\cite{Jenn-icml13}, the interaction protocol is ``inverted'': workers arrive one by one, and the algorithm sequentially and adaptively assigns tasks to each worker before irrevocably moving on to the next one. (The exploration schedule in \cite{Jenn-icml13} is non-adaptive, unlike ours, in the sense that it does not depend on the observations already collected.) The solution in~\cite{BanditSurveys-colt13} focuses on a single HIT. The algorithm in~\cite{Chen-icml13} develops an approach based on Bayesian bandits that requires exact knowledge of the experimentation budget and the Bayesian priors. \section{Preliminaries} \label{preliminaries} A HIT is a question with a set $S$ of possible answers. For each HIT we assume that there exists one answer which is the correct answer (more on this below under ``probabilistic assumptions''). A requester has a collection of HITs, which we call a \emph{workload}. The goal of the requester is to learn the correct answer for each HIT. The requester has access to a crowdsourcing system. We model a stylized crowdsourcing system that operates in rounds. In each round, the crowdsourcing system chooses one HIT from the workload, and a worker arrives, receives the HIT, submits her answer and gets paid a fixed amount for her work. The crowdsourcing system needs to output an answer for each HIT in the workload. The algorithm can adaptively decide for each HIT how many workers to ask for answers. We mostly focus on a single HIT. In each round, a worker arrives and submits an answer to this HIT. The algorithm needs to decide whether to stop (stopping rule) and if so, which answer to choose (selection rule). There are two measures to be minimized by such an algorithm: (1) the \emph{error rate} for this workload (the percentage of HITs for which the algorithm outputs the wrong answer), and (2) the \emph{average cost} for this workload (the average cost per HIT paid to the workers by the algorithm).
Formally this is a bi-criteria optimization problem. If all workers are paid equally, the average cost is simply the average number of rounds. \xhdr{Probabilistic Assumptions.} \label{sec:probab-assumptions} We model each worker's answer as a random variable over $S$, and assume that these random variables are mutually independent. We assume that the most probable answer is the same for each worker. For the purposes of this paper, the ``correct answer'' is just the most probable answer, and this is the answer that we strive to learn.\footnote{While the most probable answer may actually be false, we do not attempt to learn which answer is \emph{really} correct. Besides, it is not clear how to learn this from workers' responses on a particular HIT.} The difference between the probability of the most probable answer and that of the second most probable answer is called the \emph{bias} of a given worker on a given HIT. This quantity, averaged over all workers, is the \emph{bias} of a given HIT. A large bias (close to 1) matches our intuition that the HIT is very easy: the error rate is very small. A small bias (close to 0) implies that the HIT is hard, in the sense that it is very difficult to distinguish between the two most probable options. Here, our notion of easy/hard HITs is objective (reflecting agreement with the majority), rather than subjective (reflecting workers' sentiments). Hereafter we use the bias of a HIT as a measure of its hardness. In particular, we say that HIT A is \emph{harder} than HIT B if the bias of A is smaller than the bias of B. \OMIT{ \subsection{Adaptive Exploration} Any good solution should involve \emph{adaptive HIT assignment}, the assignment of HITs to workers which changes based on previous observations.
This raises a natural trade-off between \emph{exploration} (experimentation to learn more about worker skill and task difficulty) and \emph{exploitation} (making optimal decisions based on the experimentation results available so far). This trade-off occurs in many different scenarios, and is well-studied in Machine Learning and Operations Research (see~\cite{CesaBL-book,Bubeck-survey12,Gittins-book11} for background). In many settings, the best algorithms for explore-exploit trade-off involve \emph{adaptive exploration}, where not only the exploitation decisions, but the exploration schedule itself is adapted to the previous observations. For example, once we are sufficiently confident that a given alternative is bad, we can give up on it early and focus our exploration budget on more promising alternatives. } \section{Stopping rule for \\non-anonymous workers} \label{sec:weighted} Depending on the task and the qualifications required, some workers may be better than others, and one can often estimate who is better by looking at past performance. We assume workers have a one-dimensional personal measure of {\em expertise} or skill level, which influences their error rate on HITs. Further, we assume we have access to a {\em reputation system} which can (approximately and coarsely) rank workers by their expertise level. We develop a weighted version of the stopping rule from Section~\ref{sec:unweighted} that is geared to take advantage of such a reputation system. We begin by describing a general weighted stopping rule, then detail how we use it. \subsection{Algorithm} In each round $t$, the worker is assigned weight $w_t$. In general, the weights may depend on the available information about the worker and the task. Also, the stopping rule can update the next worker's weight depending on the round number $t$ itself. For now, we do not specify \emph{how} the weights are assigned.
Absent any prior information on the workers, all weights are 1. Such stopping rules will be called \emph{unweighted}; we have discussed them in Section~\ref{sec:unweighted}. \OMIT{A particularly simple class of \emph{weighted} stopping rules is the one with $w_t\in \{0,1\}$; in other words, each worker $t$ is either given full weight ($w_t=1$) or completely ignored ($w_t=0$); we call this a \emph{0-1 weight} stopping rule.} Fix some round $t$. The weighted vote $V_{A,t}$ for a given answer $A$ is defined as the total weight of all workers that arrived up to (and including) round $t$ and chose answer $A$. For simplicity, assume there are only two answers: $A$ and $B$. Our stopping rule is as follows: \begin{align}\label{eq:stopping-rule-weighted} \text{Stop if}\; |V_{A,t} - V_{B,t}| \geq C\,\sqrt{\sum_{s=1}^t w_s^2} - \eps \sum_{s=1}^t w_s. \end{align} Here $C>0$ and $\eps\in[0,1)$ are parameters that need to be chosen in advance. Note that in the unweighted case ($w_t\equiv 1$), this reduces to \refeq{eq:stopping-rule-unweighted}. Our default selection rule is to choose the answer with the largest weighted vote. We call this the \emph{deterministic} selection rule. \subsubsection{Discussion} The goal for the weighted stopping rule is identical to the unweighted case: among the two hypotheses (H1) and (H2), reject the one that is wrong. Letting $W_{t,q} = (\sum_{s=1}^t w_s^q)^{1/q}$, we can re-write the stopping rule \eqref{eq:stopping-rule-weighted} more compactly as \begin{align}\label{eq:stopping-rule-weighted-W} \text{Stop if}\; |V_{A,t} - V_{B,t}| \geq C\,W_{t,2} - \eps W_{t,1}. \end{align} The meaning of the right-hand side is as follows. Recall that $Z_t = V_{A,t} - V_{B,t}$ can be viewed as a biased random walk. First, assuming either (H1) or (H2) holds, the expected drift of this random walk, defined as $\E[Z_t]$, has an absolute value of at least $\eps\, W_{t,1}$.
Second, the standard deviation $\sigma(Z_t)$ is at most $W_{t,2}$, and in fact $W_{t,2}$ is the best available upper bound on $\sigma(Z_t)$. \subsubsection{Extension to multiple answers} It is easy to extend the stopping rule \eqref{eq:stopping-rule-weighted} to more than two answers. Let $A$ and $B$ be the answers with the largest and second-largest weighted vote, respectively. The stopping rule is \begin{align}\label{eq:stopping-rule-weighted-multiple} \text{Stop if}\; V_{A,t} - V_{B,t} \geq C\,W_{t,2} - \eps W_{t,1}. \end{align} \newcommand{\ensuremath{\mathtt{qty}}}{\ensuremath{\mathtt{qty}}} \newcommand{\ensuremath{\mathtt{good}}}{\ensuremath{\mathtt{good}}} \newcommand{\ensuremath{\mathtt{average}}}{\ensuremath{\mathtt{average}}} \newcommand{\ensuremath{\mathtt{bad}}}{\ensuremath{\mathtt{bad}}} \subsubsection{Defining the weights} We restrict our attention to \emph{coarse} quality scores. This is because a reputation system is likely to be imprecise in practice, especially in relation to a specific HIT, so more fine-grained quality scores, especially continuous ones, are not likely to be meaningful. Suppose each worker is assigned a coarse quality score $\ensuremath{\mathtt{qty}}$, e.g., $\ensuremath{\mathtt{qty}} \in \{ \ensuremath{\mathtt{good}}, \ensuremath{\mathtt{average}}, \ensuremath{\mathtt{bad}} \}$. Our general approach, which we call \emph{reputation-dependent exponentiation}, is as follows. For each possible quality score $\ensuremath{\mathtt{qty}}$ we have an initial weight $\lambda_\ensuremath{\mathtt{qty}}$ and a multiplier $\gamma_{\ensuremath{\mathtt{qty}}}$. If in round $t$ a worker with quality score $\ensuremath{\mathtt{qty}}$ is asked, then her weight is $$w_t = \lambda_\ensuremath{\mathtt{qty}}\, \gamma_{\ensuremath{\mathtt{qty}}}^{t-1}.$$ A notable special case is \emph{time-invariant weights}: $\gamma_\ensuremath{\mathtt{qty}}=1$.
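The stopping rule \eqref{eq:stopping-rule-weighted-W} and the reputation-dependent exponentiation scheme can be sketched in a few lines. This is an illustrative sketch, not the paper's actual implementation; the initial weights and multipliers below are hypothetical parameter choices.

```python
import math

# Hypothetical initial weights lambda_qty and multipliers gamma_qty for each
# coarse quality score; the paper leaves these as tunable parameters.
LAM = {"good": 1.0, "average": 1.0, "bad": 1.0}
GAM = {"good": 1.1, "average": 1.0, "bad": 0.9}

def weight(qty, t):
    """Reputation-dependent exponentiation: w_t = lambda_qty * gamma_qty^(t-1)."""
    return LAM[qty] * GAM[qty] ** (t - 1)

def should_stop(v_a, v_b, weights, C, eps):
    """Stop when |V_A - V_B| >= C * W_{t,2} - eps * W_{t,1}."""
    w1 = sum(weights)                            # W_{t,1}
    w2 = math.sqrt(sum(w * w for w in weights))  # W_{t,2}
    return abs(v_a - v_b) >= C * w2 - eps * w1
```

For instance, with nine unit-weight workers, $C=2$ and $\eps=0$, the threshold is $2\sqrt{9}=6$: the rule stops at a vote split of 8--1 but not at 7--2.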
The intuition is that we want to gradually increase the weight of the better workers, and gradually decrease the weight of the worse workers. The gradual increase/decrease may be desirable because of the following heuristic argument. As we found empirically (see Table~\ref{tab:worker-HIT-groups-diff}), the difference in performance between good and bad workers is more significant for hard HITs than for easy HITs, whereas for very easy HITs all workers tend to perform equally well. Therefore we want to make the difference in \emph{weights} between the good and bad workers more significant for harder HITs. While we do not know a priori how difficult a given HIT is, we can estimate its difficulty as we get more answers. One simple estimate is the number of answers so far: if we asked many workers and still did not stop, this indicates that the HIT is probably hard. Thus, we increase/decrease weights gradually over time. \subsection{Experimental Results} \subsubsection{Simulated workload} We use the 9-by-9 table of error rates for different worker and HIT groups, derived from our real data, to generate a simulated workload with 10,000 HITs, all with two answers, and 1,000 workers that answer all these HITs. We split workers uniformly across worker groups, and split HITs uniformly among HIT groups. For each worker and each HIT, the correct answer is chosen with the probability given by the corresponding cell in the table. We define a coarse quality score depending on the worker group: the best three worker groups were designated \emph{good}, the middle three \emph{average}, and the last three \emph{bad}.
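The workload generation described above can be sketched as follows. Since the real 9-by-9 error-rate table is proprietary, the table here is an illustrative stand-in with three groups per dimension; the function names are ours.

```python
import random

# Illustrative stand-in for the (proprietary) error-rate table:
# rows = worker groups (good..bad), columns = HIT groups (easy..hard).
ERROR_RATE = [
    [0.02, 0.15, 0.30],
    [0.05, 0.25, 0.40],
    [0.10, 0.35, 0.48],
]

def assign_groups(n_items, n_groups):
    """Split items (workers or HITs) uniformly across groups."""
    return [i % n_groups for i in range(n_items)]

def answer_is_correct(worker_group, hit_group, rng):
    """One worker's answer: correct with probability 1 - error_rate."""
    return rng.random() >= ERROR_RATE[worker_group][hit_group]
```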
This quality score is given to the algorithm.% \footnote{The algorithm is not given the HIT group, because we believe that in practice the difficulty of a given HIT is essentially not known in advance, whereas the skill level of the workers can often be (coarsely) estimated from their previous performance.} \subsubsection{Algorithms tested} We tested several ``reputation-dependent exponentiation'' algorithms. Recall that the weights in each such algorithm are defined by the initial weights $\lambda_\ensuremath{\mathtt{qty}}$ and the multipliers $\gamma_\ensuremath{\mathtt{qty}}$ for each quality score $\ensuremath{\mathtt{qty}} \in \{ \ensuremath{\mathtt{good}}, \ensuremath{\mathtt{average}}, \ensuremath{\mathtt{bad}} \}$. For convenience, we denote the initial weights $\vec{\lambda} = (\lambda_\ensuremath{\mathtt{good}},\, \lambda_\ensuremath{\mathtt{average}},\, \lambda_\ensuremath{\mathtt{bad}} )$ and likewise the multipliers $\vec{\gamma} = (\gamma_\ensuremath{\mathtt{good}},\, \gamma_\ensuremath{\mathtt{average}},\, \gamma_\ensuremath{\mathtt{bad}} )$. We experimented with many assignments for $(\vec{\lambda}, \vec{\gamma})$. Below we report on several paradigmatic versions: \begin{itemize} \item Time-invariant weights ($\gamma_\ensuremath{\mathtt{qty}}\equiv 1$). We consider different initial weights $\vec{\lambda}$: \begin{OneLiners} \item[(V1)] $\vec{\lambda} = (1, 1, 1)$. \item[(V2)] $\vec{\lambda} = (1.2, 1, 0.8)$. \end{OneLiners} \item Time-varying weights, equal start ($\lambda_\ensuremath{\mathtt{qty}} \equiv 1$). We consider different multipliers $\vec{\gamma}$, so that weights change as follows. \begin{OneLiners} \item[(V3)] for all workers: $\vec{\gamma} = (1.05, 1, 0.95)$. \item[(V4)] for all workers (but faster): $\vec{\gamma} = (1.1, 1, 0.9)$. \item[(V5)] only for good workers: $\vec{\gamma} = (1.1, 1, 1)$. \item[(V6)] only for bad workers: $\vec{\gamma} = (1, 1, 0.9)$. 
\end{OneLiners} \end{itemize} \noindent For each assignment of $(\vec{\lambda}, \vec{\gamma})$, we consider several values for the parameter $\eps$, and for each $\eps$ we plot a varying-$C$ curve\xspace in the error rate vs.\ expected cost plane. To showcase our findings, some representative choices are shown in Figure~\ref{fig:chart3}. \begin{figure}[t] \centering \input{chart3.tex} \caption{Cost-error trade-off for weighted stopping rules. (For all varying-$C$ curves\xspace, $\eps = 0.3$.)} \label{fig:chart3} \end{figure} \subsubsection{Findings} As in the previous section, we find that (up to some minor noise) for any two varying-$C$ curves\xspace, one lies below the other. This enables comparisons between different algorithms that are valid for all choices of the parameter $C$. We conclude the following: \begin{OneLiners} \item In general, it is better to update the weights over time, rather than keep them fixed. \item It is better to use $\eps>0$. The best value for $\eps$ is usually in the range $[.05, .2]$, and the effect of changing $\eps$ within this range is usually very small. \end{OneLiners} We experimented with various combinations of weight-updating schemes and multiplier values. We tried updating the weights every four rounds rather than every round (updating by $\gamma^4_\ensuremath{\mathtt{qty}}$, accordingly, for each quality score $\ensuremath{\mathtt{qty}}$), and we found that updating every round performs better. Further, we investigated the effect of the magnitude of the multipliers $\vec{\gamma}$. We tried the previously mentioned versions V4--V6 with multipliers that modify the worker weights by 5\%, 10\%, 20\%, and 30\% (for example, $\vec{\gamma} = (1.3, 1, 0.7)$ for V4 with 30\% weight updates). We found the differences to be very small, with updates of 5\% and 10\% being very slightly better. \section{Logged Data Analysis} \label{data_analysis} We worked with UHRS, a large in-house crowdsourcing platform operated by Microsoft.
UHRS is used by many different groups across Microsoft for evaluation, label collection and ML training. The tasks range from TREC-like evaluations to domain-specific labeling/experimentation. In particular, UHRS is used to gather training and evaluation data for various aspects of Bing. Using the logs of UHRS, we collected a data set from a variety of tasks and workers. In that data set, we selected all tasks that contained at least 50 HITs, and all HITs with at least 50 answers. (These HITs have been used for training and/or quality control, which explains the unusually large number of answers per HIT. This large number has been essential for our purposes.) We considered all HITs in all these tasks. This gave us a data set containing 20 tasks, 3,000 workers, 2,700 HITs, and 250,000 total answers. For each HIT we computed the majority answer, which we considered to be the ``correct'' answer. Details of the different types of HITs and other specific metrics are left out due to proprietary information. \subsection{Empirical Biases of HITs} Workers' replies to a given HIT are, to a first approximation, IID samples from some fixed distribution $\mD$. A crucial property of $\mD$ is the difference between the top two probabilities, which we call the \emph{bias} of this HIT; note that the bias completely determines $\mD$ if there are only two answers. Informally, a larger bias corresponds to an easier HIT. We study the distribution over biases in our workload. For each HIT, we consider the empirical frequencies of answers, and define the ``empirical bias'' as the difference between the top two frequencies. We plot the CDF of empirical biases in Figure~\ref{fig:CDF-bias}. \begin{figure}[h] \centering \input{chart_gap_distribution_manyOptions.tex} \caption{CDF for the empirical bias of HITs.} \label{fig:CDF-bias} \end{figure} We conclude that HITs have a wide range of biases: some are significantly more difficult than others.
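Concretely, the empirical bias of a HIT is computed from its multiset of answers; a small illustrative helper (not the platform's code):

```python
from collections import Counter

def empirical_bias(answers):
    """Difference between the top two empirical answer frequencies of one HIT.

    If only one distinct answer occurs, the second frequency is taken to be 0.
    """
    if not answers:
        raise ValueError("need at least one answer")
    counts = [c for _, c in Counter(answers).most_common(2)]
    top = counts[0]
    second = counts[1] if len(counts) > 1 else 0
    return (top - second) / len(answers)
```

For example, a HIT answered ``a'' 30 times and ``b'' 20 times has empirical bias $(30-20)/50 = 0.2$.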
In particular, tailoring a decision rule to HITs with a specific narrow range of biases is impractical. Further, we observe that the empirical distribution is, roughly, near-uniform. We use this observation to generate the simulated workload in the next section. \subsection{Error Rates} For each worker, we compute the average error rate across all HITs that she answered. According to that, we split all workers into 9 equally sized groups, from best-performing ($W_0$) to worst-performing ($W_8$). Similarly, for each HIT we compute the average error rate across all workers that answered it. We split all HITs into 9~equally-sized groups, from easiest ($H_0$) to most difficult ($H_8$). Let $\error(W_i,H_j)$ be the average error rate of the workers in the worker group $W_i$ when answering the HITs in the HIT group $H_j$. To make our main finding clearer, and also because our data set is somewhat sensitive, we report a 9-by-8 table (Table~\ref{tab:worker-HIT-groups-diff}): for each HIT group $H_j$, $j=0,\ldots,8$, and each worker group $W_i$, $i = 1,\ldots,8$, the corresponding cell contains the difference \begin{align} \error(W_i,H_j)-\error(W_0,H_j). \label{eq:table-cell} \end{align} The table is also visualized as a heat map in Figure \ref{fig:heatmap}. \begin{table}[h] \begin{center} \begin{tabular}{c|cccccccc} \% & $W_1$& $W_2$ & $W_3$ & $W_4$ & $W_5$ & $W_6$ & $W_7$ & $W_8$ \\ \hline $H_0$& 0 & 0 & 0 & 1 & 1 & 1 & 2 & 4 \\ $H_1$& 1 & 1 & 2 & 2 & 3 & 4 & 6 & 15 \\ $H_2$& 1 & 3 & 3 & 4 & 6 & 8 & 11 & 20 \\ $H_3$& 1 & 4 & 4 & 7 & 7 & 11 & 16 & 27 \\ $H_4$& 4 & 7 & 8 & 12 & 13 & 17 & 23 & 36 \\ $H_5$& 5 & 9 & 11 & 14 & 18 & 20 & 26 & 43 \\ $H_6$& 7 & 11 & 15 & 18 & 22 & 25 & 30 & 47 \\ $H_7$& 11 & 14 & 19 & 21 & 25 & 26 & 33 & 48 \\ $H_8$& 19 & 24 & 27 & 29 & 31 & 35 & 39 & 50 \end{tabular} \end{center} \caption{Error rates for different worker/HIT groups.
The cell $(W_i,H_j)$ contains the difference \eqref{eq:table-cell}, in percentage points.} \label{tab:worker-HIT-groups-diff} \end{table} \begin{figure}[h] \includegraphics[width=8cm]{heatmap} \caption{Error rates for different worker/HIT groups.} \label{fig:heatmap} \end{figure} \vspace{0.5cm} \xhdr{Findings.} From Table~\ref{tab:worker-HIT-groups-diff}, we make the following observations. For difficult tasks ($H_6 \ldots H_8$) the set of good judges ($W_0\ldots W_2$) is significantly better (has a lower error rate) than the set of bad judges ($W_6\ldots W_8$). For easy tasks ($H_0 \ldots H_2$) there is very little difference between \emph{all} judges (except perhaps for the very worst judges). These observations are robust to changing the number of HIT and worker groups (from 5 to 9). To summarize, {\em the difference in performance between good and bad workers is much more significant for harder HITs than for easier HITs.} Accordingly, we devise algorithms that tend to use all workers for easier HITs, and favor better workers for more difficult HITs.
\section{Introduction} Global surfaces of section are an important tool to reduce the dynamics of flows on 3-manifolds to the dynamics of surface diffeomorphisms. According to Ghys \cite{Ghys-09} global surfaces of section in their simplest form are a paradise for dynamicists. In fact, just the existence of global surfaces of section gives information about the flow; for example, having a global surface of section without boundary for a flow on a $3$-manifold $M$ implies that $M$ fibers over the circle. Since many $3$-manifolds, such as for example $S^3$, do not fiber over the circle, any global surface of section for such a manifold must have boundary, and hence periodic orbits. On the other hand, in \cite{Kuperberg94} Kuperberg has shown that there are flows on $S^3$ without any periodic orbits. It is hence natural to look for global surfaces of section in a more restricted class of vector fields. One candidate is the class of Hamiltonian vector fields, but without further restrictions, there are still difficulties. For example, the horocycle flow on the unit cotangent bundle $ST^*\Sigma_g$ of a higher genus surface provides an example of a Hamiltonian flow on a compact manifold without any periodic orbits, see \cite{Asselle}. Since $ST^*\Sigma_g$ does not fiber over the circle, it does not admit a global surface of section. In both of the above examples, the main obstruction is the existence of periodic orbits. In the case of Reeb flows, this obstruction vanishes. In \cite{Taubes} Taubes has proved the so-called Weinstein conjecture for contact $3$-manifolds, which asserts the existence of periodic Reeb orbits on compact contact manifolds. This takes care of this particular obstruction, and there are indeed various results known on the existence of global surfaces of section for contact $3$-manifolds. 
One of the first results is due to Hofer, Wysocki and Zehnder, \cite{HWZ}, who have shown that Reeb flows on a dynamically convex 3-sphere must have disk-like global surfaces of section. The condition of dynamical convexity means, roughly speaking, that the winding number of the linearized flow is sufficiently large. Hryniewicz and Salom\~ao \cite{Hryniewicz-Salomao} have extended this result by weakening the condition of dynamical convexity. The global surfaces of section in these results have the simplest possible topology, and this can be useful in the analysis of the return map. In this paper, we will construct Reeb flows for which any global surface of section must have a much more ``complicated'' topology. This extends the result of \cite{vK19}, in which it was shown that there are Reeb flows without a disk-like global surface of section. The precise statement of our first result is as follows: \begin{thm} \label{thm1} Let $M$ be an integral homology 3-sphere with a contact structure $\xi$. Then for any integer $n>1$, there exists a contact form $\alpha$ for $\xi$ whose Reeb flow does not admit global surfaces of section with fewer than $n$ boundary components. \end{thm} Now one may also ask whether these Reeb flows with ``complicated'' global surfaces of section are, in some sense, ``generic''. We answer this question positively by showing that our construction of the Reeb flow is stable under $C^{\infty}$-small perturbations. To keep the statement simple, we remind the reader that the Reeb flow from Theorem~\ref{thm1} is a Hamiltonian dynamical system on $(\mathbb{R}_{>0} \times M,d(\rho \alpha))$ with Hamiltonian $H=\rho$. \begin{thm} \label{thm2} Let $M$ be an integral homology $3$-sphere with a contact structure $\xi$, and let $\alpha$ denote the contact form obtained in the proof of Theorem~\ref{thm1}.
Then there is a deformation $\bar H$ of the Hamiltonian $H=\rho$ on the symplectic manifold $(\mathbb{R}_{>0} \times M,d(\rho \alpha))$ with the following property: For any $C^{4+\epsilon}$-small perturbation $\bar H_\delta$ of $\bar H$, the dynamics on the level set $\bar H_\delta=1$ do not admit global surfaces of section with fewer than $n$ boundary components. \end{thm} Complementary to the negative results here, one can also investigate which topologies are possible for a given Reeb flow. For progress on this question, we mention the work of Albach and Geiges, and that of Albers, Geiges and Zehmisch \cite{AG,AGZ}. To construct the Reeb flow for Theorem~\ref{thm1}, we will use open book decompositions and the book-connected sum operation. Since a connected sum with the standard $3$-sphere does not change the contact structure on $M$, performing book-connected sums with annuli will complicate the dynamics of the Reeb flow, while leaving the contact structure on $M$ unchanged. This is discussed in more depth in Section~\ref{sec:general_theory}, together with an analysis of the invariant sets for the Reeb flow after the book-connected sum. In Section~\ref{secpf1}, we generalize the result from \cite{vK19} by constructing a Reeb flow originating from an open book decomposition of $M$ which has many periodic orbits that do not link with each other. A linking number argument will then show that a global surface of section for the Reeb flow must have many boundary components. Theorem~\ref{thm2} will be proved in Section~\ref{sec:stability}. To see that we still have enough invariant sets to run a linking argument, we apply KAM theory. This complicates the original construction somewhat, and makes the initial perturbation necessary. Since the perturbation is in particular $C^2$-small, we can still reconstruct the open book decomposition, and hence a global surface of section, on the perturbed level set.
The key point, stability near the binding, is explained in Appendix~\ref{appendix:stab_ob}; this only requires the usual implicit function theorem. The global surface of section simplifies the analysis of the dynamics. \section{General Theory} \label{sec:general_theory} \subsection{Global Surfaces of Section} \noindent Let $M$ be an oriented $3$-manifold with a flow $\phi_t$ generated by a vector field $X$ on $M$. \begin{defn} A $\textit{global surface of section}$ for $(M,\phi_t)$ is a connected, compact surface $S$ embedded in $M$ such that the following conditions hold: \begin{enumerate} \item $X$ is positively transverse to the interior of $S$. \item For every $p\in M$, there exists some $t^+>0$ and $t^-<0$ such that $\phi_{t^+}(p),\:\phi_{t^-}(p)\in S$. \item The boundary of $S$ consists of periodic orbits of $\phi_t$. \end{enumerate} \end{defn} As indicated in the introduction, an important dynamical significance of this concept is that it allows one to convert problems about flows on $3$-manifolds into problems about surface diffeomorphisms. We also get an immediate topological consequence from the definition of a global surface of section. Namely, each orbit of $\phi_t$ either lies in the boundary of $S$, or positively intersects $S$. This leads us to consider the linking number of orbits with the boundary of $S$. As a reminder to the reader, the linking number of two oriented knots $k, \ell$, or more generally oriented links, in an oriented integral homology sphere $M$ is defined as the signed count of intersections between $\ell$ and a Seifert surface $F_k$ for $k$. This number is well-defined, i.e.~it is independent of the choice of Seifert surface. We explain this in Appendix~\ref{app:seifert}. We will also need some additional facts. First of all, linking numbers are symmetric. \begin{lemma} For two knots $\ell_1, \ell_2$ in an (oriented) integral homology 3-sphere $M$, the linking number is symmetric, so $\lk(\ell_1, \ell_2)=\lk(\ell_2, \ell_1)$.
\end{lemma} \begin{proof} We first recall that any oriented 3-manifold is the oriented boundary of an oriented 4-manifold. Indeed, by the Lickorish-Wallace theorem, every oriented 3-manifold can be obtained by surgery on a framed link $L$ in $S^3$. Now identify $S^3$ with the boundary of $D^4$, and attach 2-handles with the framing given by the framed link $L$ to obtain the oriented $4$-manifold $N$ with oriented boundary $M$. We can take a collar neighborhood $N_M$ of $M$ in $N$. Also, let $F_1, F_2$ be Seifert surfaces in $M$ that have boundaries $\ell_1, \ell_2$. We can push the interior of $F_2$ into $N_M$ to obtain a surface $F_2'$ in $N_M$ such that $F_2'\cap M$ is $\ell_2$. Then the intersections of $F_1$ and $F_2'$ can only happen in $M$, and therefore $F_1\cap F_2'=F_1\cap \ell_2$. Furthermore, the orientations match as well. Now observe that for the maps $\imath:D^2\to F_2$, $\imath':D^2\to F_2'$, $\imath$ and $\imath'$ are homotopic. Therefore, the intersection number of $F_1, F_2$ is equal to the intersection number of $F_1, F_2'$. Thus, when we define another $F_1'$ in the same way, we have \begin{equation*} \lk(\ell_1,\ell_2)=\langle [F_1]\cdot [F_2'], [N]\rangle=\langle [F_2]\cdot [F_1'], [N]\rangle= \lk(\ell_2,\ell_1). \end{equation*} \end{proof} We will often make use of the following observation when dealing with linking numbers of links. \begin{thm} Let $F$ be a surface in $M$ with boundary components $L_1,\cdots,L_k$. Assume that $\gamma$ is a link that does not intersect $L_1,\cdots,L_k$. If $\:\lk(\gamma,L_i)=0$ for all $i$, then $\gamma$ also has algebraic intersection number 0 with $F$. \end{thm} \begin{proof} Recall that $H_1(M\setminus L_i)\cong \mathbb{Z}$, since $M$ is a homology sphere. Therefore we can compute the linking number of $\gamma$ with $L_i$ as the number $n$ that satisfies $[\gamma]=n\cdot \alpha$ for a preferred (oriented) generator $\alpha$ of $H_1(M\setminus L_i)$.
Since $\lk(\gamma,L_i)=0$ for all $i$, we see that $[\gamma]=0$ in each $H_1(M\setminus L_i)$. Another way to define the linking number is through Poincar\'e duality: first take Seifert surfaces $K_i$ for each link component $L_i$, whose existence will be reviewed in Appendix~\ref{app:seifert}. We can take the fundamental class $[K_i]\in H_2(K_i,L_i)$, and map it with the natural map $H_2(K_i,L_i)\to H_2(M,L_i)\cong H^1(M\setminus L_i)$ to $D([K_i])$. We can define the linking number of $\gamma$ and $L_i$ to be $D([K_i])([\gamma])$, where $[\gamma]\in H_1(M\setminus L_i)$ is the homology class represented by $\gamma$. It can be shown that these two definitions are equivalent: see the book of Gompf and Stipsicz \cite{Gompf}. Consider the intersection number $[F] \cdot [\gamma]$ in $H^*(M\setminus L)$. The same argument shows that we can compute $[F] \cdot [\gamma]$ as $D([F])([\gamma])$ for $[\gamma]\in H_1(M\setminus L)$. Since we have shown that the homology class represented by $\gamma$ is zero in each $H_1(M\setminus L_i)$, the class $[\gamma]\in H_1(M\setminus L)$ is also zero. Therefore, we conclude $[F] \cdot [\gamma]=0$. \end{proof} Now let us consider a global surface of section $S$ for a flow $\phi_t$. Let $\gamma$ be a periodic orbit of $\phi_t$ which is not a cover of any of the boundary orbits of $S$. By Theorem 2.3, either $\gamma$ has algebraic intersection number $0$ with $S$, or some $\lk(\gamma,L_i)$ is nonzero. The definition of a global surface of section excludes the first case, so we conclude: \begin{lemma} \label{link} Let $S$ be a global surface of section for a Reeb flow $\phi_t$ on an integral homology sphere $M$, and denote by $L_1,\cdots,L_k$ the boundary components of $S$. If $\gamma$ is a periodic orbit of $\phi_t$ which is not any of the $L_i$, then the linking number $\lk(\gamma,L_i)$ is nonzero for at least one $i$ in $1,\ldots,k$.
\end{lemma} \subsection{Open book decompositions} We will make use of so-called open books to prove our theorems. Here is the definition. \begin{defn} An \textit{open book decomposition} for an oriented manifold $M$ is a pair $(B,\pi)$ that satisfies \begin{enumerate} \item $B$ is an oriented link in $M$. \item $\pi:M\setminus B\to S^1$ is a (smooth) fiber bundle such that the closure of each fiber $F_{\theta}=\pi^{-1}(\theta)$ has boundary $B$. \end{enumerate} \end{defn} We call $B$ the binding, and the closure of each $F_{\theta}$ a page of the open book. We will sketch briefly how this purely topological concept is related to global surfaces of section. Suppose that $S$ is a global surface of section for an oriented $3$-manifold $M$ with a smooth flow $\phi_t$. Put $B:=\partial S$. This is an oriented link. For each $x\in M\setminus B$ we define the minimal forward return time as $$ \tau_+(x):=\inf_{t>0, \phi_t(x)\in S} t $$ and the minimal backward return time as $$ \tau_-(x):=\inf_{t>0, \phi_{-t}(x)\in S} t. $$ Since the flow is smooth, we see that $\tau_\pm$ are smooth functions on $M\setminus S$. Define the map \[ \pi: M\setminus B \longrightarrow S^1 =\mathbb{R} /\mathbb{Z},\quad x \longmapsto \begin{cases} [\frac{\tau_-(x)}{\tau_+(x)+\tau_-(x)}] & \text{if }x \notin S,\\ [0] & x\in S. \end{cases} \] We note that this map is continuous, but it is not smooth in general. This means that we have constructed a continuous open book for which the global surface of section $S$ is a single page. The map $\pi$ can be smoothed, but we won't need this construction, so we will not go into the details. \begin{remark} \label{rem:OB_implies_GSS} There is a converse statement: an open book \emph{together with a vector field $X$} that is transverse to the pages gives rise to a global surface of section. Namely, each page of such an open book will be a global surface of section for the flow of $X$.
\end{remark} There is an alternative description of an open book, which is more convenient for constructions. It is the following. \begin{defn} Define an $\textit{abstract open book}$ as a pair $(\Sigma,\phi)$ such that \begin{enumerate} \item $\Sigma$ is an oriented, compact surface with boundary. \item The \textit{monodromy} $\phi:\Sigma\to\Sigma$ is a diffeomorphism restricting to the identity near the boundary. \end{enumerate} \end{defn} There is a correspondence between open book decompositions and abstract open books. To go from an abstract open book to an open book decomposition, we do the following. \begin{itemize} \item construct the mapping torus $$ M(\Sigma,\phi):=\Sigma \times \mathbb{R} /(x,\theta) \sim (\phi(x),\theta-2\pi) $$ \item put $B:=\partial \Sigma$, and set $M:=B\times D^2 \cup_\partial M(\Sigma,\phi)$ \item define \[ \pi: M\setminus B \longrightarrow S^1=\mathbb{R} /2\pi \mathbb{Z}, \quad x \longmapsto \begin{cases} [\theta] & \text{if }x=(b;r,\theta)\in B\times D^2,\\ [\theta] & \text{if }x=[(s,\theta)]\in M(\Sigma,\phi). \end{cases} \] The map $\pi$ is a well-defined, smooth map. \end{itemize} As a result, we obtain a smooth manifold $M$ together with an open book decomposition. Conversely, given an open book decomposition $(B,\pi)$ on $M$, we define $\Sigma$ to be a page of the open book, $\Sigma := \overline{\pi^{-1}([0])}$. The monodromy $\phi$ can be constructed by choosing a symplectic connection that is standard near the binding. Details of this monodromy construction can be found in \cite{vK17}. \subsection{Contact open books} \label{sec:ob_form} We now adapt this open book setup to the setting of contact manifolds and Reeb flows. We will essentially copy the above construction for going from an abstract open book to an open book decomposition, but now in the presence of a geometric structure. We will follow the Giroux construction. First we strengthen the requirements in order to obtain a contact structure.
\begin{itemize} \item We require the compact, oriented surface $\Sigma$ with boundary to be a so-called Liouville domain. This means that $\Sigma$ comes equipped with a Liouville form $\lambda$. This is a $1$-form $\lambda$ such that $d\lambda$ is an area form which induces the given orientation on $\Sigma$, and such that $\lambda$ induces the natural orientation\footnote{as defined by the outward pointing normal} on the boundary $\partial \Sigma$. \item We require the monodromy $\phi$ to be an exact symplectomorphism that is the identity near the boundary of $\Sigma$. This means that $\phi$ is an area-preserving diffeomorphism that satisfies $\phi^*\lambda=\lambda - dT$. \end{itemize} We can and will assume that $T<0$. We slightly modify the definition of the mapping torus. We will define $M(\Sigma, \phi)$ as the quotient of $\Sigma\times\mathbb{R}$ by the relation $\sim$, where $(p,\theta)\sim(\phi(p),\theta+T(p))$. This mapping torus is diffeomorphic to the one we defined earlier. The reason to deform the mapping torus like this comes from the following observation. The contact form $\mu=d\theta+\lambda$ descends to our deformed $M(\Sigma,\phi)$. The Reeb vector field for this contact form $\mu$ is given by $\partial_\theta$. In particular, we see that periodic Reeb orbits in the mapping torus correspond to periodic points of the monodromy $\phi$. As before, we define a neighborhood of the binding of the open book decomposition as $B(\Sigma)= \partial\Sigma\times D^2$. We need to describe the gluing between a neighborhood of the binding and the mapping torus. Given an annulus $R(\frac{1}{2},1]$ of inner and outer radii $\frac{1}{2},1$, including the outer circle of radius 1, take \begin{align*} \Phi:\partial\Sigma\times R(\frac{1}{2},1]&\to (-\frac{1}{2},0]\times\partial \Sigma \times S^1\subset M(\Sigma,\phi) \\ (p,r,\theta)&\mapsto (\frac{1}{2}-r,p,-\frac{\theta T(p)}{2\pi}).
\end{align*} Because $\phi$ is the identity near the boundary of $\Sigma$, the function $T$ is locally constant near the boundary. Hence we see that $\Phi$ is a well-defined map from $\partial\Sigma\times R(\frac{1}{2},1]$ in $B(\Sigma)$ to $(-\frac{1}{2},0]\times\partial \Sigma\times S^1$ in $M(\Sigma,\phi)$. Define $$ M:=B(\Sigma)\amalg M(\Sigma,\phi) / \Phi, $$ and write $B=\partial\Sigma\times\{0\}$ in $B(\Sigma)$. As before it is possible to construct a map $\pi$ making this into an open book decomposition on $M$, but we won't use this. We already have a contact form $\mu=d\theta+\lambda$ on $M(\Sigma,\phi)$, which we will now extend to the whole of $M$. Since $\lambda$ is a Liouville form, we can write $\mu$ near the boundary as $d\theta+e^R\lambda\vert_{\partial\Sigma}$, where $R$ is the collar parameter near the boundary. This form is pulled back by $\Phi$ to $\Phi^*\mu=e^{\frac{1}{2}-r}\lambda\vert_{\partial\Sigma}-\frac{T(p)}{2\pi}d\theta$ on $B\times R(\frac{1}{2},1]$. To extend this form to all of $B(\Sigma)$, we will take smooth functions $f_1,f_2$ such that $\alpha=f_1\lambda\vert_{\partial\Sigma}+f_2\,d\theta$. We choose these functions $f_1,f_2$ to depend only on $r$, and to satisfy the following conditions: \begin{enumerate} \item For $\frac{1}{2}\leq r \leq 1$, $f_1(r)=e^{\frac{1}{2}-r}$, $f_2(r) = -\frac{T(p)}{2\pi}$. The latter is a constant depending on the component of $B(\Sigma)$. \item For $r$ near 0, $f_1(r)=2-a r^2, f_2(r)=r^2$, where $a$ is a positive, irrational number to be determined later. \item $f_1f_2'-f_2f_1'>0$ for all $r>0$. \end{enumerate} The second condition ensures that $\alpha$ is a smooth $1$-form; the numerical constants are chosen to produce a suitable Reeb flow near the binding for the construction in Section~\ref{sec:stability}. Together with the third condition, this ensures that $\alpha$ is a positive contact form. We sketch the graphs of $f_1,f_2$ in Figure~\ref{plotfg}.
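The contact condition is a short computation in these coordinates: since $\partial\Sigma$ is one-dimensional, $d(\lambda\vert_{\partial\Sigma})=0$, and hence
\begin{align*}
\alpha\wedge d\alpha
&= \left(f_1\,\lambda\vert_{\partial\Sigma}+f_2\,d\theta\right)\wedge
   \left(f_1'\,dr\wedge\lambda\vert_{\partial\Sigma}+f_2'\,dr\wedge d\theta\right)\\
&= \left(f_1f_2'-f_2f_1'\right)\lambda\vert_{\partial\Sigma}\wedge dr\wedge d\theta,
\end{align*}
so $\alpha$ is a contact form on $B(\Sigma)$ away from the binding precisely when $f_1f_2'-f_2f_1'$ does not vanish.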
\begin{figure} \centering \plotf \plotg \caption{The functions $f_1,f_2$ depending on $r$. \\$T(p)$ depends on the component of $B(\Sigma)$.} \label{plotfg} \end{figure} The upshot is that the manifold $M$ comes equipped with a contact form, which we will denote by $\alpha$. We will denote the contact manifold constructed from the abstract open book $(\Sigma, \phi)$ by $\mathcal{OB}(\Sigma,\phi)$, and call it a contact open book. \subsection{Reeb flow on a contact open book} Near the binding of a contact open book $\mathcal{OB}(\Sigma,\phi)$ with contact form $\alpha$, the Reeb vector field has a particularly simple form. Indeed, \begin{equation} \label{eq:Reeb_vf} R_\alpha=\frac{1}{f_1f_2'-f_2f_1'}(f_2'R_\lambda-f_1'\partial_\theta), \end{equation} where $R_\lambda$ is the Reeb vector field of the contact form $\lambda\vert_{\partial\Sigma}$. In our case, where $\Sigma$ is just a surface, the vector field $R_\lambda$ is simply the (positively oriented) unit tangent vector field to the boundary of $\Sigma$. The flow of $R_\alpha$ has no component in the $\partial_r$ direction, so it preserves the $r$-coordinate in $\partial\Sigma\times D^2$. Hence we obtain the following invariant sets of this flow. \begin{enumerate} \item The sets $\partial\Sigma\times S^1$ with fixed radius $r$ are invariant. \item The binding $B$ is an invariant set of the Reeb flow. \end{enumerate} By Remark~\ref{rem:OB_implies_GSS}, each page $F_\theta$ is a global surface of section for the Reeb flow. Since every contact $3$-manifold admits a supporting open book by a result of Giroux \cite{Giroux03}, it follows that given any contact $3$-manifold $(Y,\xi)$ there is some Reeb flow that admits a global surface of section. \begin{observation} Every compact contact $3$-manifold $(Y,\xi)$ has some Reeb flow that admits a global surface of section. \end{observation} The stronger question, whether \emph{every} Reeb flow admits a global surface of section, cannot, of course, be settled with the above techniques.
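One can verify Equation~\eqref{eq:Reeb_vf} directly; we include the short computation for convenience. On $\partial\Sigma\times D^2$ we have $d\alpha=f_1'\,dr\wedge\lambda\vert_{\partial\Sigma}+f_2'\,dr\wedge d\theta$, so, abbreviating $D=f_1f_2'-f_2f_1'$ and using $\lambda\vert_{\partial\Sigma}(R_\lambda)=1$,
$$
\alpha(R_\alpha)=\frac{f_1f_2'-f_2f_1'}{D}=1,
\qquad
\iota_{R_\alpha}d\alpha=\frac{1}{D}\left(-f_1'f_2'+f_2'f_1'\right)dr=0,
$$
so $R_\alpha$ is indeed the Reeb vector field of $\alpha$.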
The following example of an open book decomposition will be crucial for our constructions in the subsequent sections. \begin{lemma} \label{s3} Let $A=[-1,1] \times S^1$ be an annulus equipped with the Liouville form $r d\theta$, and let $\phi:A\rightarrow A$ be a positive Dehn twist, defined as $\phi(x,\theta)=(x,\theta+\sigma(x))$ where $\sigma(x)=\pi(1-x)$. Then the manifold $\mathcal{OB}(A,\phi)$ is $S^3$ with its standard contact structure $\xi_0$. \end{lemma} We review another operation on open books that will simplify our arguments later. \begin{defn} Let $(\Sigma_1,\phi_1)$, $(\Sigma_2,\phi_2)$ be two abstract open books, let $c_1$, $c_2$ be arcs properly embedded in $\Sigma_1$, $\Sigma_2$, respectively, and take rectangular neighborhoods $c_1\times [-1,1]$, $c_2\times[-1,1]$ in $\Sigma_1$, $\Sigma_2$. The \textit{Murasugi sum} $\mathcal{OB}(\Sigma_1,\phi_1)*\mathcal{OB}(\Sigma_2,\phi_2)$ is defined as the manifold constructed from the abstract open book with page $\Sigma_1 \natural \Sigma_2$, where we identify $c_1\times\{-1,1\}$ with $\partial c_2\times[-1,1]$. The monodromy is defined as the composition $\tilde \phi_1\circ \tilde \phi_2$, where $\tilde \phi_1$ and $\tilde \phi_2$ are the extensions of $\phi_1$ and $\phi_2$ by the identity to the boundary connected sum $\Sigma_1 \natural \Sigma_2$. \end{defn} A result of Torisu \cite{Torisu} shows that $\mathcal{OB}(\Sigma_1,\phi_1)*\mathcal{OB}(\Sigma_2,\phi_2)$ is contactomorphic to $\mathcal{OB}(\Sigma_1, \phi_1)\#\mathcal{OB}(\Sigma_2,\phi_2)$. Note that the Murasugi sum reduces the number of boundary components of an open book. Thus, given any abstract open book for a manifold $M$, we may repeat this construction to obtain an abstract open book for $M$ with only one boundary component. \subsection{Invariant Sets for the book-connected sum construction} We now review another operation on abstract open books, called the \textit{book-connected sum}.
Let $(W_i,\psi_i)$, $i=1,2$, be two abstract open books with their induced contact structures. Then we can define a new abstract open book $(W_1\natural_{B_1,B_2}W_2,\psi_1\natural_{B_1,B_2}\psi_2)$ along specified boundary components $B_i\subseteq \partial W_i$. We will write $\natural$ for the boundary connected sum, an operation that can also be seen as $1$-handle attachment. The symbol $\#$ stands for connected sum. We use this notation both for operations on manifolds and for gluing maps together (silently extending a map by the identity if necessary). The subscripts clarify where these operations are performed. We omit subscripts whenever their meaning is clear from the context. For completeness, here are our definitions. The page $W_1\natural_{B_1,B_2}W_2$ is formed by attaching a Weinstein $1$-handle $H$ to the disjoint union $W_1\coprod W_2$ along two Darboux balls in the boundary, one in $B_1$ and one in $B_2$. The symplectomorphism $\psi_1\natural \psi_2$ is given by $\psi_i$ on the copy of $W_i$ in $W_1\coprod W_2$, and by the identity on the handle. Note that we have the contact structure and Reeb vector field induced from the open book construction. Since the book-connected sum is a special case of the Murasugi sum we have explained earlier, we can apply the result of Torisu to show \begin{lemma} \label{sums} The book-connected sum $\mathcal{OB} (W_1\natural_{B_1,B_2}W_2,\psi_1\natural_{B_1,B_2}\psi_2)$ is contactomorphic to the contact connected sum $\mathcal{OB}(W_1,\psi_1)\#\mathcal{OB}(W_2,\psi_2)$. \end{lemma} The following description of the invariant sets will be crucial for the discussion that follows. \begin{lemma} \label{inv} The induced Reeb flow decomposes the book-connected sum into four invariant sets: the handle orbits, the neighborhood $N(B_1\#B_2):=(B_1\#B_2)\times D^2$ of the boundary component used for the book-sum, and the remaining disjoint union of two ``page sets'' $P_i$ described in the proof.
\end{lemma} See Figure~\ref{booksum} for an example of such a decomposition. \begin{proof} The Reeb flow preserves the $r$-coordinate of the disk in the solid tori $\partial W\times D^2$, so $N(B_1\#B_2)$ is actually foliated by invariant tori. Since the monodromy is the identity on the handle, the orbits of $h\in H$ are the circles $\{h\}\times S^1$. Also, the mapping tori $M(W_i,\psi_i)$ viewed as subspaces of $M(W_1\natural W_2,\psi_1\natural \psi_2)$, are clearly disjoint invariant sets as the flows generated by each $\psi_i$ cannot cross into each other. Finally, the remaining sets are the neighborhoods $N(B)$ of the boundary components $B$ that are not the connect-summed knot $B_1\#B_2$, and thus are each foliated into invariant sets under the flow as well. Defining $P_i$ as the union $$ P_i:=M(W_i,\psi_i) \cup \bigcup_{B\text{ component of }\partial W_i \setminus B_i}N(B), $$ the classification is complete. \end{proof} \begin{figure}[h] \centering \begin{tikzpicture} \draw [fill=lightgray] (0,0) circle (2 and 1.5); \path [fill=white] (0,0.1) -- (0,-0.2) arc [start angle=270, end angle=311, x radius=1, y radius=0.8] (0,-0.2) arc [start angle=270, end angle=229, x radius=1, y radius=0.8] -- (0,0.1); \path [fill=white] (0,0) -- (0,0.2) arc [start angle=90, end angle=49, x radius=1, y radius=0.8] (0,0.2) arc [start angle=90, end angle=131, x radius=1, y radius=0.8] -- (0,0); \path [fill=lightgray] (1.8,0) -- (2.1,0.7) arc [start angle=260, end angle=230, radius=1.21]; \path [draw=white, fill=white] (1.8,0) arc [start angle=180, end angle=90, x radius=0.3, y radius=0.7] (2.1,0.7) arc [start angle=260, end angle=230, radius=1.21]; \path [fill=lightgray] (1.8,0) -- (2.1,-0.7) arc [start angle=100, end angle=130, radius=1.21]; \path [draw=white, fill=white] (1.8,0) arc [start angle=180, end angle=270, x radius=0.3, y radius=0.7] (2.1,-0.7) arc [start angle=100, end angle=130, radius=1.21]; \draw (0,-0.2) arc [start angle=270, end angle=330, x radius=1, y 
radius=0.8]; \draw (0,-0.2) arc [start angle=270, end angle=210, x radius=1, y radius=0.8]; \draw (0,0.2) arc [start angle=90, end angle=49, x radius=1, y radius=0.8]; \draw (0,0.2) arc [start angle=90, end angle=131, x radius=1, y radius=0.8]; \path [draw=none, fill=gray] (2.1,0.7) arc [start angle=260, end angle=250, radius=1.21] arc [start angle=100, end angle=260, x radius=0.38, y radius=0.77]; \path [draw=none, fill=gray] (2.1,-0.7) arc [start angle=100, end angle=110, radius=1.21] -- (1.8,0); \draw (1.89,0.757) arc [start angle=101, end angle=259, x radius=0.38, y radius=0.77]; \draw [draw=none, fill=gray] (2.1,0) circle (0.3 and 0.7); \draw [fill=gray, rotate around={310:(4.7,0)}] (4.7,0) circle (2.1 and 1.7); \draw [draw=none, fill=lightgray!50] (2.1,-0.2) rectangle (3.2,0.2); \draw [draw=none, fill=lightgray] (1.995,-0.664) arc [start angle=-70, end angle=70, x radius=0.3, y radius=0.7]; \draw [draw=none, fill=lightgray] (2.005,0.664) arc [start angle=110, end angle=250, x radius=0.3, y radius=0.7]; \draw (1.8,0) arc [start angle=180, end angle=30, x radius=0.3, y radius=0.7]; \draw (1.8,0) arc [start angle=180, end angle=330, x radius=0.3, y radius=0.7]; \draw [draw=none, fill=gray] (2.19,0.2) rectangle (2.9,0.4); \draw [draw=none, fill=gray] (2.3,-0.4) rectangle (3,-0.2); \draw (2.1,0.7) arc [start angle=260, end angle=230, radius=1.21]; \draw (2.1,-0.7) arc [start angle=100, end angle=130, radius=1.21]; \draw (2.2,0) arc [start angle=0, end angle=70, x radius=0.3, y radius=0.7]; \draw (2.2,0) arc [start angle=0, end angle=-70, x radius=0.3, y radius=0.7]; \draw (2.343,0.4) -- (2.83,0.4); \draw (2.35,-0.4) -- (2.98,-0.4); \draw (2.19,0.2) -- (3.03,0.2); \draw (2.19,-0.2) -- (3.12,-0.2); \draw [fill=lightgray, rotate around={310:(4.7,0)}] (4.7,0) circle (1.9 and 1.5); \draw [fill=white] (4.3,0.6) circle (0.4); \draw [fill=white] (5.1,-0.6) circle (0.4); \draw (0,-0.8) node {$P_1$}; \draw (4,-0.5) node {$P_2$}; \draw (2.63,0) node {$H$}; \draw (2.45,1.5) 
node {$B_1\#B_2$}; \end{tikzpicture} \caption{Invariant sets of a book-connected sum: the page sets $P_i$, boundary neighborhood $N(B_1\#B_2)$ (dark grey), and the handle $H$.} \label{booksum} \end{figure} A similar result clearly holds for multiple book-connected sums: each handle, each boundary neighborhood, and each page set is an invariant set, where the page set $P_i$ is now defined as the union of $M(W_i,\psi_i)$ and the sets $N(B)$ for all boundary components $B$ of $\partial W_i$ not modified in the connected sum operations. This description of the invariant sets implies the following corollary: \begin{corollary} \label{cor:orbit} Let $p_1$, $p_2$ be periodic orbits in $P_1$, $P_2$, and let $h$ be a periodic orbit in the handle set $H$. Then $\lk(p_1,p_2)=\lk(p_1,h)=\lk(p_2,h)=0$. \end{corollary} \begin{proof} We argue by finding a Seifert surface for $p_1$. By Lemma~\ref{sums}, $\mathcal{OB} (W_1\natural W_2,\psi_1\natural \psi_2)$ can be seen as a connected sum of the $\mathcal{OB} (W_i,\psi_i)$, and we can assume that the balls used in the sum were contained in the solid tori $N(B_i)$. Since $P_1$ is unaffected by the connected sum operation, we may consider $p_1$ as an orbit in $\mathcal{OB} (W_1,\psi_1)=P_1\cup N(B_1)$, and choose a Seifert surface $S\subset \mathcal{OB} (W_1,\psi_1)$ for $p_1$. Now we isotope $S$ so that $S$ intersects $N(B_1)$ only in disks of the form $\{v\}\times D^2$ for finitely many $v\in \partial W_1$. We further modify $S$ so that none of the $v$ lie in the ball used for the connected sum. Thus, we may conversely view $S$ as a surface in $\mathcal{OB} (W_1\natural W_2,\psi_1\natural \psi_2)$, so that $S\subset P_1\cup N(B_1\# B_2)$. Finally, by the classification in Lemma~\ref{inv}, the surface $S$ cannot intersect either $p_2$ or $h$, so the corresponding linking numbers must be zero. \end{proof} \section{Proof of Theorem~\ref{thm1}} \label{secpf1} Let $(M,\xi)$ be an oriented integral homology sphere with a contact structure, and fix $n\geq 1$.
Our objective is to construct a Reeb flow on $M$ which does not admit a global surface of section with $n$ or fewer boundary components. By Giroux's theorem, we can find an abstract open book $(W,\psi)$ that supports $\xi$, and we further assume $W$ to have a unique boundary component $B$. Let $\mathcal{OB}(A_i,\phi_i)$, $i=1,\cdots,n+1$, be copies of the open book of Lemma \ref{s3}, with $U_i=\{1\}\times S^1$ and $L_i=\{-1\}\times S^1\subset \partial A_i$. Consider the following book-connected sum: \begin{align*} X&=\mathcal{OB}(W\natural_{B,L_1}A_1\natural_{U_1,L_2}\cdots\natural_{U_{n},L_{n+1}}A_{n+1},\psi\natural_{B,L_1}\phi_1\natural_{U_1,L_2}\cdots\natural_{U_{n},L_{n+1}}\phi_{n+1})\\ &\cong\mathcal{OB}(W,\psi)\#_{B,L_1}\mathcal{OB}(A_1,\phi_1)\#_{U_1,L_2}\mathcal{OB}(A_2,\phi_2)\#_{U_2,L_3}\cdots\#_{U_{n},L_{n+1}}\mathcal{OB}(A_{n+1},\phi_{n+1})\\ &=\vcentcolon\mathcal{OB}_0\#\mathcal{OB}_1\#\cdots\#\mathcal{OB}_{n+1}\quad \text{in shorthand.} \end{align*} \begin{figure}[h!] \centering \begin{tikzpicture} \draw [fill=lightgray] (-2.5,0) circle (2); \draw [fill=lightgray] (2.5,0) circle (2); \draw [draw=none, fill=lightgray] (-0.8,-0.5) rectangle (0.8,0.5); \draw [fill=white] (2.5,0) circle (1.5); \draw (-0.57,0.5) -- (0.57,0.5); \draw (-0.57,-0.5) -- (0.57,-0.5); \draw [fill=lightgray] (2.5,0) circle (1); \draw [fill=white] (2.5,0) circle (0.5); \draw [draw=none, fill=lightgray] (3.2,-0.25) rectangle (4.2,0.25); \draw (3.46,-0.25) -- (3.985,-0.25); \draw (3.46,0.25) -- (3.985,0.25); \draw (-2.5,0) node {$W$}; \draw (2.6,0.2) node {$U_2$}; \draw (0,1) node {$B\#L_1$}; \draw (4.5,2) node {$U_1\#L_2$}; \draw (4,1.8) -- (3.45,1.17); \draw (4.7,1.8) -- (3.3,0.6); \end{tikzpicture} \caption{A page of $X$ for $n=1$.} \label{pf1} \end{figure} The page of $X$ for $n=1$ and $W=D^2$ is depicted in Figure~\ref{pf1}.
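For later reference, let us record the count of boundary components of the page of $X$ (a quick check, using that each $1$-handle attachment merges two boundary circles into one). The surface $W$ contributes one boundary component and each annulus $A_i$ contributes two, so before summing there are $1+2(n+1)=2n+3$ boundary circles; the $n+1$ handles reduce this number to
$$
(2n+3)-(n+1)=n+2
$$
components, namely $B\#L_1$, $U_1\#L_2,\ldots,U_n\#L_{n+1}$, and $U_{n+1}$.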
Since the book-connected sum with the standard open book for $(S^3,\xi_0)$ does not change the contact structure, the open book $X$ is contactomorphic to the original homology sphere $(M,\xi)$. By viewing the annuli $A_i$ as rectangular strips with the ends glued together, we may draw an equivalent diagram which is convenient for the following argument: see Figure~\ref{pf2}. The 1-handles have been drawn at the same height for simplicity. We use the same description to illustrate the invariant sets in Figure~\ref{pf3}. Different colors each correspond to the page sets $P_i$, handle sets $H_i$, and boundary neighborhoods $N_i=N(U_i\#L_{i+1})$ and $N_0=N(B\#L_1)$. \begin{figure} \centering \begin{tikzpicture} \draw [fill=lightgray] (-0.17,0) circle (1.5); \draw [draw=none, fill=lightgray] (2.8,-2) rectangle (4,2); \draw [draw=none, fill=lightgray] (5.6,-2) rectangle (6.8,2); \draw [draw=none, fill=lightgray] (1.1,-0.6) rectangle (2.9,0.6); \draw [draw=none, fill=lightgray] (3.9,-0.6) rectangle (5.7,0.6); \draw (2.8,-2) -- (2.8,-0.6); \draw (2.8,0.6) -- (2.8,2); \draw (4,-2) -- (4,-0.6); \draw (4,0.6) -- (4,2); \draw (5.6,-2) -- (5.6,-0.6); \draw (5.6,0.6) -- (5.6,2); \draw (6.8,-2) -- (6.8,2); \draw [loosely dashed] (2,2) -- (7.6,2); \draw [loosely dashed] (2,-2) -- (7.6,-2); \draw (1.2,-0.6) -- (2.8,-0.6); \draw (1.2,0.6) -- (2.8,0.6); \draw (4,-0.6) -- (5.6,-0.6); \draw (4,0.6) -- (5.6,0.6); \draw (1.2,2.4) node {$B$}; \draw (2,2.4) node {$\#$}; \draw (2.8,2.4) node {$L_1$}; \draw (4,2.4) node {$U_1$}; \draw (4.8,2.4) node {$\#$}; \draw (5.6,2.4) node {$L_2$}; \draw (6.8,2.4) node {$U_2$}; \draw (-0.17,0) node {$W$}; \draw (3.4,0) node {$A_1$}; \draw (6.2,0) node {$A_2$}; \end{tikzpicture} \caption{An equivalent diagram for the page; the dashed lines are identified.} \label{pf2} \end{figure} \begin{figure}[h] \centering \begin{tikzpicture} \draw [draw=none, fill=lightgray] (1.33,-1.99) rectangle (10.395,1.99); \draw [draw=none, fill=white] (1.2,-0.4) rectangle 
(2.8,0.4); \draw [draw=none, fill=lightgray!50] (1.2,-0.4) rectangle (2.8,0.4); \draw [draw=none, fill=gray] (-0.17,0) circle (1.5); \draw [fill=lightgray] (-0.17,0) circle (1.3); \path [fill=lightgray] (1.33,0) arc [start angle=0, end angle=15.5, radius=1.5] -- (0,0.4) -- (0,-0.1) -- (1.33,0) arc [start angle=0, end angle=-15.5, radius=1.5] -- (0,-0.4) -- (0,-0.1); \draw (1.33,0) arc [start angle=0, end angle=20, radius=1.5]; \draw (1.33,0) arc [start angle=0, end angle=-20, radius=1.5]; \draw [draw=none, fill=gray] (1.2,0.4) rectangle (3,2); \draw [draw=none, fill=gray] (1.2,-0.4) rectangle (3,-2); \draw [draw=none, fill=gray] (3.8,0.4) rectangle (5.8,2); \draw [draw=none, fill=gray] (3.8,-0.4) rectangle (5.8,-2); \draw [draw=none, fill=gray] (6.6,0.4) rectangle (9.4,2); \draw [draw=none, fill=gray] (6.6,-0.4) rectangle (9.4,-2); \draw [draw=none, fill=white] (1.19,0.6) rectangle (2.8,2.1); \draw [draw=none, fill=white] (1.19,-0.6) rectangle (2.8,-2.1); \draw [draw=none, fill=white] (4,0.6) rectangle (5.6,2.1); \draw [draw=none, fill=white] (4,-0.6) rectangle (5.6,-2.1); \draw [draw=none, fill=white] (6.8,0.6) rectangle (9.2,2.1); \draw [draw=none, fill=white] (6.8,-0.6) rectangle (9.2,-2.1); \draw (-1.67,0) arc [start angle=180, end angle=23.5, radius=1.5]; \draw (-1.67,0) arc [start angle=180, end angle=336.5, radius=1.5]; \draw [draw=none, fill=white] (4,-0.4) rectangle (5.6,0.4); \draw [draw=none, fill=lightgray!50] (4,-0.4) rectangle (5.6,0.4); \draw [draw=none, fill=white] (6.8,-0.4) rectangle (9.2,0.4); \draw [draw=none, fill=lightgray!50] (6.8,-0.4) rectangle (9.2,0.4); \draw [draw=none, fill=white] (7.6,-1) rectangle (8.4,1); \draw(1.06,-0.4) -- (3,-0.4); \draw(1.06,0.4) -- (3,0.4); \draw (2.8,-2) -- (2.8,-0.6); \draw (2.8,0.6) -- (2.8,2); \draw(3,-2) -- (3,-0.4); \draw(3,0.4) -- (3,2); \draw (4,-2) -- (4,-0.6); \draw (4,0.6) -- (4,2); \draw(3.8,-2) -- (3.8,-0.4); \draw(3.8,0.4) -- (3.8,2); \draw(3.8,-0.4) -- (5.8,-0.4); \draw(3.8,0.4) -- (5.8,0.4); 
\draw (5.6,-2) -- (5.6,-0.6); \draw (5.6,0.6) -- (5.6,2); \draw(5.8,-2) -- (5.8,-0.4); \draw(5.8,0.4) -- (5.8,2); \draw (6.8,-2) -- (6.8,-0.6); \draw (6.8,0.6) -- (6.8,2); \draw(6.6,-2) -- (6.6,-0.4); \draw(6.6,0.4) -- (6.6,2); \draw (1.2,-0.6) -- (2.8,-0.6); \draw (1.2,0.6) -- (2.8,0.6); \draw (4,-0.6) -- (5.6,-0.6); \draw (4,0.6) -- (5.6,0.6); \draw (6.8,-0.6) -- (7.6,-0.6); \draw (6.8,0.6) -- (7.6,0.6); \draw(6.6,-0.4) -- (7.6,-0.4); \draw(6.6,0.4) -- (7.6,0.4); \draw (8,0) node {$\mathbf{\cdots}$}; \draw (8.4,-0.6) -- (9.2,-0.6); \draw (8.4,0.6) -- (9.2,0.6); \draw(8.4,-0.4) -- (9.4,-0.4); \draw(8.4,0.4) -- (9.4,0.4); \draw (9.2,-2) -- (9.2,-0.6); \draw (9.2,0.6) -- (9.2,2); \draw(9.4,-2) -- (9.4,-0.4); \draw(9.4,0.4) -- (9.4,2); \draw (10.4,-2) -- (10.4,2); \draw(2.8,-0.4) -- (2.8,0.4); \draw(4,-0.4) -- (4,0.4); \draw(5.6,-0.4) -- (5.6,0.4); \draw(6.8,-0.4) -- (6.8,0.4); \draw(9.2,-0.4) -- (9.2,0.4); \draw [loosely dashed] (2,2) -- (11.2,2); \draw [loosely dashed] (2,-2) -- (11.2,-2); \draw (2,1.2) node {$N_0$}; \draw (4.8,1.2) node {$N_1$}; \draw (11,1.2) node {$U_{n+1}$}; \draw (-0.17,0) node {$P_0$}; \draw (3.4,0) node {$P_1$}; \draw (6.2,0) node {$P_2$}; \draw (9.8,0) node {$P_{n+1}$}; \draw (2,0) node {$H_0$}; \draw (4.8,0) node {$H_1$}; \end{tikzpicture} \caption{The invariant sets of a page of $X$ for general $n$.} \label{pf3} \end{figure} The argument will be as follows: we will take a periodic orbit $h_i$ from each handle $H_i$. By Lemma~\ref{link}, each $h_i$ will either be the boundary of the global surface of section $S$, or will have positive linking number with the boundary components of $S$. However, Corollary~\ref{cor:orbit} strongly restricts the periodic orbits of nonzero linking number with $h_i$. We will use a counting argument to show that $\partial S$ should have more than $n$ components to satisfy the linking number condition. 
\par Now assume that the induced flow on $X$ admits a global surface of section with $m\ (\leq n)$ boundary components $K_1,\cdots,K_m$. Since the handles consist of periodic orbits, we can find orbits $h_i$ in $H_i$ that are not equal to any of the $K_1,\cdots,K_m$. By Lemma~\ref{link}, $\lk(h_0,K_i)$ is nonzero for some $i$; without loss of generality, let $\lk(h_0,K_1)\neq 0$. \par In view of the sum $(\mathcal{OB}_0)\#(\mathcal{OB}_1\#\cdots)$, Corollary~\ref{cor:orbit} implies that only orbits in $H_0$ or $N_0$ can have nonzero linking number with $h_0$. In either case, $K_1$ is contained in the ``left'' part $P_0\cup H_0\cup N_0\cup P_1$ of the sum $(\mathcal{OB}_0\#\mathcal{OB}_1)\#(\mathcal{OB}_2\#\cdots)$. We apply Corollary~\ref{cor:orbit} again to obtain $\lk(h_1,K_1)=0$. Therefore, after renumbering, $\lk(h_1,K_2)\neq 0$ for some boundary component $K_2$. \par We repeat this argument inductively. At the $i$-th step, we have $\lk(h_j,K_{j+1})\neq 0$ for all $0\leq j\leq i$, so that $K_{j+1}\subset H_j$ or $N_j$. Therefore the knots $K_1,\cdots,K_{i+1}$ are all contained in the left-hand part of $(\mathcal{OB}_0\#\cdots\#\mathcal{OB}_{i+1})\#(\mathcal{OB}_{i+2}\#\cdots)$, so that none of these link with $h_{i+1}$. Accordingly, there must exist a boundary component $K_{i+2}$ such that $\lk(h_{i+1},K_{i+2})\neq 0$. However, at the $(m-1)$-st step, we cannot find a new boundary component $K_{m+1}$ with nonzero linking number with $h_m$. Therefore, we obtain the desired contradiction. \qed \section{Stability} \label{sec:stability} We now investigate whether this boundary component condition for global surfaces of section is stable under perturbation, specifically under $C^\infty$-small deformations of the Hamiltonian on the symplectization $\mathbb{R}_{>0} \times \mathcal{OB}(W,\phi)$. In this section, we review Kolmogorov--Arnold--Moser (KAM) theory in order to study invariant sets of the perturbed Hamiltonian. \par We first state the Arnold-Liouville theorem.
Recall that two functions $H_1, H_2$ on a symplectic manifold $(M,\omega)$ are said to be in involution if their Poisson bracket $\{H_1, H_2\}$ vanishes. Given enough integrals in involution, the Arnold-Liouville theorem describes the invariant sets of the Hamiltonian motion. \begin{thm} Let $(M^{2n},\omega)$ be a symplectic manifold of dimension $2n$. Assume the functions $H_1,\cdots,H_n$ are in involution. Suppose that $L$ is a compact, connected component of a regular level set of the map $H=(H_1,\cdots,H_n): M \to \mathbb{R}^n$. Then we have the following: \begin{itemize} \item The set $L$ is a Lagrangian torus. \item There is a neighborhood $\nu_M(L)$ of $L$ that is diffeomorphic to $T^n\times D^n$ via a diffeomorphism $$ \Phi:T^n\times D^n \longrightarrow \nu_M(L). $$ \item There are \emph{action-angle} coordinates $I,\theta$ on $T^n\times D^n$ with the following properties: \begin{enumerate} \item The symplectic form is standard: $$ \Phi^*\omega=\sum_j dI_j\wedge d\theta^j. $$ \item The coordinates $I_j$ depend only on the integrals $\{ H_k \}_{k=1}^n$. \item The flow $\phi_{H_j}^t$ is linear in these coordinates, i.e.\ $\Phi^{-1}\circ \phi_{H_j}^t \circ \Phi(\theta,I)=(\theta+t \Omega_j(I),I)$. \end{enumerate} \end{itemize} \end{thm} A proof can be found in Arnold \cite[Chapter 49]{Arnold} and in Moser--Zehnder \cite[Section 3.1]{Moser_Zehnder}. We will call these invariant tori \textit{Liouville tori} for the given Hamiltonian. We will look at the Hamiltonian flow of the first Hamiltonian $H_1$ in these coordinates. The above theorem tells us that the flow is linear on each Liouville torus, with frequency vector $\Omega_1$. We will simply write $\Omega= \Omega_1$ and call it the frequency of a Liouville torus. The theory developed by Kolmogorov, Arnold, and Moser (KAM) tells us that some of these Liouville tori survive perturbations.
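A simple model example (included only for illustration) is $M=T^n\times D^n$ with the standard symplectic form and $H_1(I,\theta)=\frac{1}{2}\lVert I\rVert^2$. Here every torus $T^n\times\{I\}$ is a Liouville torus, and
$$
\phi^t_{H_1}(\theta,I)=(\theta+tI,\ I),\qquad \Omega(I)=I,
$$
so the frequency of a torus is just its action coordinate.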
We will review the statements from KAM theory that we will use, and refer to \cite[Sections 1, 2]{Treschev} for proofs of these statements. \begin{defn} Let $(M^{2n},\omega)$ be a symplectic manifold equipped with a family of Hamiltonians $H=(H_1,\ldots, H_n)$ in involution. Assume that $T$ is a connected, compact, regular level set of $H$; then $T$ is a Liouville torus for $H$. We will call this torus \textit{non-resonant} if its frequency $\Omega$ satisfies $k\cdot\Omega\neq 0$ for all $k\in\mathbb{Z}^n\setminus \{0\}$. We call the first Hamiltonian $H_1$ \emph{non-degenerate} if the $n\times n$-matrix $$ \left(\frac{\partial^2 H_1}{\partial I_i\, \partial I_j}\right)_{i,j} $$ is invertible. \end{defn} In short, an integrable Hamiltonian $H_1$ is non-degenerate if the Jacobian of the frequency map is invertible. For non-degenerate (integrable) Hamiltonians, most Liouville tori are non-resonant. \begin{thm} Let $(M^{2n},\omega)$ be a symplectic manifold with a non-degenerate integrable Hamiltonian $H=(H_1,\ldots,H_n)$. Then the non-resonant tori form a dense subset of the phase space. \end{thm} Now consider the case where we perturb the Hamiltonian. Given action-angle variables $I_1,\cdots,I_n$, $\theta^1,\cdots,\theta^n$ and a smooth function $F(I,\theta,\epsilon)$, we define a perturbation of the Hamiltonian to be $H(I,\theta, \epsilon)=H_0(I)+\epsilon F(I,\theta,\epsilon)$. To guarantee that these Liouville tori survive the perturbation, we will have to require a condition stronger than non-resonance. \begin{defn} A non-resonant Liouville torus with action-angle variables $I,\theta$ is said to be \textit{Diophantine} if the following inequality holds for some $c, \gamma>0$: \begin{equation*} \forall\: \kappa\in\mathbb{Z}^n\setminus\{0\}, \quad \vert\langle \kappa, \Omega\rangle\vert\geq \frac{1}{c\lVert \kappa\rVert^\gamma}.
\end{equation*} \end{defn} To indicate our strategy we first give a rough, imprecise version of the KAM theorem. We will state the more technical version, Theorem~\ref{thm:twist_map}, that we actually use later; see \cite[Theorem~2.1]{Treschev}. \begin{protothm} \label{thm:KAM} Let $(M^{2n},\omega)$ be a symplectic manifold with non-degenerate integrable Hamiltonian $H_0$. Let $I,\theta$ be the action-angle variables for $H_0$, and $f(I,\theta,\epsilon)$ a smooth function of sufficiently high regularity. Define a small perturbation of the Hamiltonian $H(I,\theta,\epsilon)=H_0(I)+\epsilon f(I,\theta,\epsilon)$. Then the following assertion holds: \begin{itemize} \item A Diophantine torus in $M$ with respect to $H_0$ will survive a sufficiently small perturbation as a Diophantine torus with respect to $H$. \end{itemize} \end{protothm} \subsection{Adaptation to our construction} \label{sec:non-deg_Ham} Let us outline how to apply this type of result to our setting of the symplectization $\mathbb{R}_{>0} \times \mathcal{OB}(\Sigma,\phi)$. The main point of this section is to explain how we can ensure non-degeneracy of the Hamiltonian. In general, we do not have an integrable system near the entire level set $\{ 1 \}\times M$, since the contact structure or the topology of $M$ may obstruct the existence of integrals. However, due to the special form of the Reeb flow near the binding, namely the one from Equation~\eqref{eq:Reeb_vf}, we always have an integrable system near the binding, and all pieces of the book-connected sum also have such a structure. With this in mind, let us start the construction. Let $\alpha$ be the contact form constructed in the proof of Theorem~\ref{thm1}. Since the $1$-form $\lambda$ is just the standard angular form on the circle, we write $$ \alpha=f_1(r)d\theta^1+f_2(r) d\theta^2. $$ Let $H_0=\rho$ denote the Hamiltonian on $(\mathbb{R}_{>0} \times M, d(\rho \alpha) )$ which generates the Reeb flow.
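To see that $H_0=\rho$ indeed generates the Reeb flow (with the sign convention $\iota_{X_H}\omega=-dH$), note that
$$
\iota_{R_\alpha}\,d(\rho\alpha)=\iota_{R_\alpha}\bigl(d\rho\wedge\alpha+\rho\,d\alpha\bigr)=-\alpha(R_\alpha)\,d\rho=-d\rho,
$$
using $d\rho(R_\alpha)=0$ and $\iota_{R_\alpha}d\alpha=0$; hence $X_{H_0}=R_\alpha$ on the whole symplectization, and in particular on $\{\rho=1\}$.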
Near the binding (but away from the binding itself), action-angle coordinates are given by $$ I_1=\rho f_1(r),\quad I_2 =\rho f_2(r),\quad \theta^1,\ \theta^2. $$ Indeed, computing the Jacobian of $(I_1,I_2)$ with respect to $(\rho,r)$ shows that this is a proper coordinate transformation for $\rho,r>0$. We have $$ \omega=d (\rho \alpha) =d I_1 \wedge d \theta^1 + d I_2 \wedge d \theta^2. $$ With the special form of $f_1$ and $f_2$ chosen in Section~\ref{sec:ob_form}, we find that near the binding we have $$ H_0 = \rho=\frac{1}{2}(I_1 +a I_2). $$ This Hamiltonian is obviously degenerate, with constant frequency $(\frac{1}{2},\frac{a}{2})$, but we can correct this by reparametrization and by adding an integrable perturbation. More explicitly, we take the modified Hamiltonian $$ \bar H_0= \frac{1}{4}(I_1 +a I_2)^2+g(r(I_1,I_2) ), \text{ where } r(I_1,I_2)=\sqrt{ \frac{2I_2}{I_1+a I_2} }, $$ where $g$ is a smooth function of $r$ that vanishes in a neighborhood of $r=0$; here we used that $I_1+aI_2=2\rho$ and $2I_2/(I_1+aI_2)=r^2$ near the binding. The new frequency vector is given by $$ \Omega= \left( \frac{1}{2}(I_1+a I_2) +g' \frac{\partial r}{\partial I_1}, \quad \frac{a}{2}(I_1+a I_2) +g' \frac{\partial r}{\partial I_2} \right). $$ Inspecting this expression, we can verify that we can make the new Hamiltonian $\bar H_0$ non-degenerate on the set $\rho=1$ and on an open interval of $r$-values by choosing the function $g$ sufficiently generically. Furthermore, we can ensure that many Diophantine frequencies are attained. For example, we can pass through the slope $\sqrt2$, which has continued fraction expansion $1+\frac{1}{2+\frac{1}{2+\cdots}}$ and is Diophantine. Below, we will apply Theorem~\ref{thm:KAM}. \subsection{Stability of the global surface of section} In this section, we will reconstruct a global surface of section for $C^{4+\epsilon}$-small perturbations of the Hamiltonian we have constructed above. This stability of the global surface of section will be used to prove stability of certain periodic orbits in the following section.
\par Consider the contact manifold $(M,\alpha)$ defined by the book-connected sum $ X=\mathcal{OB}(W,\psi)\# \mathcal{OB}_1 \# \ldots \# \mathcal{OB}_{n+1}$ as in Section~\ref{secpf1}. The Reeb dynamics of this contact form correspond to the Hamiltonian dynamics of $H_0=\rho$ on the symplectization $(\mathbb{R}_{>0} \times X, d (\rho \alpha)\, )$. In general, this is not an integrable system, since the monodromy of the open book on the homology sphere may obstruct the existence of first integrals. We need to set up some notation to deal with this issue. We will write the space $X$ as $M \# M_0$, where $M_0 =\mathcal{OB}_1 \# \ldots \# \mathcal{OB}_{n+1}$ is contactomorphic to the tight contact $3$-sphere. Here we keep in mind that a connected sum comes with the decomposition $$ M \# M_0 = (M\setminus B) \cup_\partial (M_0 \setminus B_0), $$ where $B$ is a Darboux ball in $M$ and $B_0$ is a Darboux ball in $M_0$. Since the construction in the proof of Theorem~\ref{thm1} involves the book-connected sum, both $B$ and $B_0$ lie in a neighborhood of the binding, where first integrals do exist. \par To ``separate'' the non-integrable part of the dynamics, we define a cutoff function $\chi$ with the following properties: \begin{itemize} \item $\chi(x)=1$ for all $x\in X$ with $x\in M_0 \setminus B_0 \subset X$; \item $\chi(x)=0$ for $x\in X$ with $x\in M(W,\psi) \subset M \setminus B \subset X$; in words, the function $\chi$ vanishes in the mapping torus region of $M$; \item on the set where $\chi$ is not determined by the above, we write $x=(\theta^1,r,\theta^2)\in \nu(B)=S^1\times D^2$ with angle coordinates $\theta^1$ and $\theta^2$, and choose $\chi$ to be a decreasing function of $r$ that equals $1$ for small $r$ (so as to be compatible with the first condition) and vanishes for large $r$ (so as to be compatible with the second condition). \end{itemize} Now define $$ M_1:=\{x \in M ~|~\chi(x) =1 \} .
$$ By restricting the Hamiltonian $H_0$ to a neighborhood of $\{ 1 \} \times M_1$ we obtain a completely integrable system with a complete flow. \begin{figure}[H] \centering \begin{tikzpicture} \draw [fill=lightgray] (0,0) circle (2 and 1.5); \path [fill=white] (0,0.1) -- (0,-0.2) arc [start angle=270, end angle=311, x radius=1, y radius=0.8] (0,-0.2) arc [start angle=270, end angle=229, x radius=1, y radius=0.8] -- (0,0.1); \path [fill=white] (0,0) -- (0,0.2) arc [start angle=90, end angle=49, x radius=1, y radius=0.8] (0,0.2) arc [start angle=90, end angle=131, x radius=1, y radius=0.8] -- (0,0); \path [fill=lightgray] (2.8,0) -- (3.1,0.7) arc [start angle=265, end angle=244, radius=5.72]; \path [draw=white, fill=white] (2.8,0) arc [start angle=180, end angle=90, x radius=0.3, y radius=0.7] (3.1,0.7) arc [start angle=260,
end angle=230, radius=1.21]; \path [fill=lightgray] (2.8,0) -- (3.1,-0.7) arc [start angle=95, end angle=116, radius=5.72]; \path [draw=white, fill=white] (2.8,0) arc [start angle=180, end angle=270, x radius=0.3, y radius=0.7] (3.1,-0.7) arc [start angle=100, end angle=130, radius=1.21]; \draw (0,-0.2) arc [start angle=270, end angle=330, x radius=1, y radius=0.8]; \draw (0,-0.2) arc [start angle=270, end angle=210, x radius=1, y radius=0.8]; \draw (0,0.2) arc [start angle=90, end angle=49, x radius=1, y radius=0.8]; \draw (0,0.2) arc [start angle=90, end angle=131, x radius=1, y radius=0.8]; \path [draw=none, fill=lightgray] (1.6,1) -- (3,0) -- (1.6,-1); \path [draw=none, fill=gray] (3.1,0.7) arc [start angle=265, end angle=262.5, radius=5.72] arc [start angle=107, end angle=253, x radius=0.38, y radius=0.76]; \path [draw=none, fill=gray] (3.1,-0.7) arc [start angle=95, end angle=97.5, radius=5.72] -- (2.8,0); \draw (2.85,0.73) arc [start angle=108, end angle=252, x radius=0.38, y radius=0.765]; \draw (2.3,0.83) arc [start angle=112, end angle=247, x radius=0.45, y radius=0.9]; \draw (1.7,1) arc [start angle=105, end angle=255, x radius=0.53, y radius=1.035]; \draw [draw=none, fill=gray] (3.1,0) circle (0.3 and 0.7); \draw [draw=none, fill=lightgray] (2.995,-0.664) arc [start angle=-70, end angle=70, x radius=0.3, y radius=0.7]; \draw [draw=none, fill=lightgray] (3.005,0.664) arc [start angle=110, end angle=250, x radius=0.3, y radius=0.7]; \draw (2.8,0) arc [start angle=180, end angle=30, x radius=0.3, y radius=0.7]; \draw (2.8,0) arc [start angle=180, end angle=330, x radius=0.3, y radius=0.7]; \draw (3.1,0.7) arc [start angle=265, end angle=244, radius=5.72]; \draw (3.1,-0.7) arc [start angle=95, end angle=116, radius=5.72]; \draw (3.2,0) arc [start angle=0, end angle=70, x radius=0.3, y radius=0.7]; \draw (3.2,0) arc [start angle=0, end angle=-70, x radius=0.3, y radius=0.7]; \draw [fill=gray] (5.5,0) circle (1.7); \draw [fill=lightgray] (5.5,0) circle (1.5); 
\draw [fill=white] (5.5,0) circle (0.7); \draw [draw=none, fill=gray] (3.2,0.2) rectangle (3.9,0.4); \draw [draw=none, fill=gray] (3.2,-0.2) rectangle (3.9,-0.4); \draw [draw=none, fill=lightgray] (3,-0.2) rectangle (4.5,0.2); \draw (3.343,0.4) -- (3.85,0.4); \draw (3.19,0.2) -- (4.01,0.2); \draw (3.343,-0.4) -- (3.85,-0.4); \draw (3.19,-0.2) -- (4.01,-0.2); \draw (1.7,-1.35) node {$T_1$}; \draw (2.3,-1.2) node {$T_2$}; \draw (3,1) node {$B$}; \draw (0,-0.8) node {$X_1$}; \draw (1.65,0) node {$X_2$}; \draw [decorate,decoration={brace,mirror}] (2.3,-2) -- (7.2,-2) node[midway, below, yshift=-3pt]{$X_3$}; \end{tikzpicture} \caption{Diophantine tori $T_1,T_2$ near the boundary component $B$ split the manifold into spaces $X_1,\:X_2,\:X_3$ with disjoint dynamics. The dark region represents the transit orbits between the homology sphere and the standard 3-sphere region. The global surface of section can be reconstructed for $C^{4+\epsilon}$-small perturbations of $\bar H_0$ in the $X_3$ region.} \label{Separate} \end{figure} We now consider a $C^{4+\epsilon}$-small perturbation $\bar H_\delta:=\bar H_0 +\delta h$. Since $\partial_\rho \bar H_0\neq 0$, we can apply the implicit function theorem and conclude that $\bar H_\delta^{-1}(1)$ is diffeomorphic to $\bar H_0^{-1}(1)$. In fact, we see that $\bar H_\delta^{-1}(1)$ is a graph over $\{1\} \times M$, so we can still use $\chi$ to decompose this level set. In particular, we see that $\bar H_\delta$ defines a $C^{4+\epsilon}$-small perturbation of an integrable system near $\{1 \} \times M_1$. Furthermore, $\bar H_\delta^{-1}(1)$ is contactomorphic to $\bar H_0^{-1}(1)$ by Gray stability. \par We now apply Proposition~\ref{prop:stab_gss} from Appendix~\ref{appendix:stab_ob} to see that the level set $M_\delta:= \bar H_\delta^{-1}(1)$ admits a global surface of section $\Sigma_\delta$ for the Reeb flow on that level set. The situation is depicted in Figure~\ref{Separate}. 
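For concreteness, the graph description of the perturbed level set used above can be sketched as follows (a sketch, under the assumption that $\partial_\rho \bar H_\delta>0$ near $\{1\}\times M$, which holds for small $\delta$ since $\partial_\rho \bar H_0\neq 0$): the implicit function theorem yields a unique function $g_\delta\colon M\to\mathbb{R}_{>0}$, $C^{4+\epsilon}$-close to the constant function $1$, such that
\begin{equation*}
\bar H_\delta\big(g_\delta(x),x\big)=1, \qquad \bar H_\delta^{-1}(1)=\{(g_\delta(x),x)\mid x\in M\}.
\end{equation*}
In particular, the cutoff function $\chi$ transports to $\bar H_\delta^{-1}(1)$ through this graph parametrization.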
\subsection{Invariant orbits under small perturbations of $\bar H_0$} We will now apply perturbation theory and KAM theory to identify two families of invariant sets. These invariant sets are the handle orbits corresponding to the critical points of the Morse Hamiltonian constructed in Appendix~\ref{app:morse_ham}, and the Diophantine tori in the annulus regions. We first introduce a theorem from perturbation theory: the following statement is proved in the remarks following \cite[Theorem 2.2]{Moser_Zehnder}. \begin{thm} \label{moser} Consider an autonomous vector field $\dot{X}=f(X;\epsilon)$ on a smooth manifold $M$. Suppose that for $\epsilon=0$ there exists a periodic orbit $p(t;0)$, $p(0;0)=p$, of period $T>0$. Let $\varphi^T$ denote the time-$T$ flow of the system. If 1 is a simple eigenvalue of $d\varphi_p^T:T_pM\rightarrow T_pM$, then for small $\epsilon$ there exists a periodic orbit $p(t;\epsilon)$ with period $T(\epsilon)$, such that $p(t;\epsilon)\rightarrow p(t;0)$ and $T(\epsilon)\rightarrow T$ as $\epsilon \rightarrow 0$. This orbit is unique up to a time shift. In addition, suppose $\Sigma$ is a hypersurface transverse to the flow for small $\epsilon$, and let $\psi$ be the local return map on $\Sigma$ induced by the flow for $\epsilon=0$. Then the map $d\varphi_p^T$ has the following matrix representation: \begin{gather*} d\varphi_p^T= \begin{pmatrix*} 1 & \cdots \\ 0 & d\psi_p \end{pmatrix*}. 
\end{gather*} \end{thm} \begin{figure}[h] \centering \begin{tikzpicture} \draw [draw=none, fill=lightgray] (0.6,-2) rectangle (6.05,2); \draw [draw=none, fill=white] (1.5,1.5) arc [start angle=180, end angle=360, radius=0.5]; \draw [draw=none, fill=white] (1.5,-1.5) arc [start angle=180, end angle=0, radius=0.5]; \draw [draw=none, fill=white] (4,1.5) arc [start angle=180, end angle=360, radius=0.5]; \draw [draw=none, fill=white] (4,-1.5) arc [start angle=180, end angle=0, radius=0.5]; \draw [draw=none, fill=white] (1.5,2.01) rectangle (2.5,1.49); \draw [draw=none, fill=white] (1.5,-2.01) rectangle (2.5,-1.49); \draw [draw=none, fill=white] (4,2.01) rectangle (5,1.49); \draw [draw=none, fill=white] (4,-2.01) rectangle (5,-1.49); \draw (1.5,-2) -- (1.5,-1.5); \draw (1.5,2) -- (1.5,1.5); \draw (1.5,1.5) arc [start angle=180, end angle=360, radius=0.5]; \draw (1.5,-1.5) arc [start angle=180, end angle=0, radius=0.5]; \draw (2.5,-2) -- (2.5,-1.5); \draw (2.5,2) -- (2.5,1.5); \draw (4,-2) -- (4,-1.5); \draw (4,2) -- (4,1.5); \draw (4,1.5) arc [start angle=180, end angle=360, radius=0.5]; \draw (4,-1.5) arc [start angle=180, end angle=0, radius=0.5]; \draw (5,-2) -- (5,-1.5); \draw (5,2) -- (5,1.5); \draw (0.6,-2) -- (0.6,2); \draw (0.9,-2) -- (0.9,2); \draw (3.1,-2) -- (3.1,2); \draw (3.4,-2) -- (3.4,2); \draw plot [smooth] coordinates {(1.2,-2) (1.3,-0.9) (2,0) (2.7,0.9) (2.8,2)}; \draw plot [smooth] coordinates {(1.2,2) (1.3,0.9) (2,0) (2.7,-0.9) (2.8,-2)}; \draw plot [smooth] coordinates {(3.7,-2) (3.8,-0.9) (4.5,0) (5.2,0.9) (5.3,2)}; \draw plot [smooth] coordinates {(3.7,2) (3.8,0.9) (4.5,0) (5.2,-0.9) (5.3,-2)}; \draw (5.6,2) -- (5.6,-2); \draw (5.9,2) -- (5.9,-2); \draw (6.5,0) node {$\cdots$}; \draw [loosely dashed] (0.1,2) -- (6.5,2); \draw [loosely dashed] (0.1,-2) -- (6.5,-2); \end{tikzpicture} \caption{Level sets of the Morse Hamiltonian. 
Note the critical points in each handle set, and the continuum of Liouville tori in each annulus set.} \label{level} \end{figure} Now recall that the Morse Hamiltonian as constructed in Appendix~\ref{app:morse_ham} has a critical point at each handle set: the level sets are depicted in Figure~\ref{level}. We will apply the above theorem to show that these critical points correspond to periodic orbits that persist under the perturbation of $\bar H_0$. \begin{cor} \label{handle} After perturbation, there exists a periodic orbit $h_i$ on each handle connecting $A_i$ and $A_{i+1}$, near the original hyperbolic orbit, for $i=1,\ldots, 3n+1$. \end{cor} \begin{proof} Take the hypersurface $\Sigma$ as a page of the open book before perturbation, and $p(t;0)$ as the original handle orbit. Let $p=p(0;0)=p(t;0)\cap \Sigma$. Since the return map $\psi$ is locally hyperbolic near the fixed point, $d\psi_p$ has no eigenvalue of modulus one. More precisely, near the center of the handle the Hamiltonian vector field constructed in Appendix~\ref{app:morse_ham} is, up to rescaling, $-2\pi (y \partial_x+x\partial_y)$, whose flow contracts along $y=x$ and expands along $y=-x$ at the rate $e^{2\pi t}$. The eigenvalues of $d\psi_p$ are therefore $e^{\pm 2\pi}$. Thus 1 is a simple eigenvalue of $d\varphi_p^T$ and the theorem applies. \end{proof} Clearly the orbits $h_i$ are unaffected by the handle attachment $Y=\mathcal{OB}_0 \# Y^0$, so we may view them as orbits in $Y$ with respect to the induced flow. \par In the proof of Theorem~\ref{thm1}, the handle sets functioned as `blockages' separating the page sets, so that page orbits cannot link with one another, and long orbits linking with multiple handle orbits cannot exist. Since we can now only ensure the existence of one orbit on each handle, this feature is lost. However, we are still able to obtain similar results by finding invariant tori which partition the manifold into regions with separate dynamics. 
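As a sanity check, the eigenvalue computation in the proof of Corollary~\ref{handle} can be made explicit. In the local model (up to rescaling, and normalizing the period to $1$) the linearized flow is
\begin{equation*}
\frac{d}{dt}\begin{pmatrix} x\\ y\end{pmatrix}=-2\pi\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\begin{pmatrix} x\\ y\end{pmatrix},
\qquad
d\psi_p=\exp\!\left(-2\pi\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}\right),
\end{equation*}
and since the matrix $\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ has eigenvalues $\pm 1$, the return map $d\psi_p$ has eigenvalues $e^{\mp 2\pi}$, neither of which has modulus one.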
The following invariant curve theorem is originally due to Moser \cite[Theorem 2.11]{Moser}, and its strengthening for lower regularity is due to Salamon \cite{Salamon}. \begin{thm} \label{thm:twist_map} Let $[a,b]\times S^1$ be an annulus with a twist mapping $(r,\theta)\mapsto (r,\theta+\sigma(r))$, such that $\sigma\in C^k$ for some $k>3$ and $|\sigma'|$ is bounded below by some positive constant. Then for any $\epsilon>0$, there exists $\delta>0$ such that all area-preserving mappings of the annulus into $\mathbb{R}^2$ of the form $$(r,\theta)\mapsto (f_1(r,\theta),\theta+f_2(r,\theta)),\quad \left\Vert f_1-r\right\Vert_{C^k}+\left\Vert f_2-\sigma \right\Vert_{C^k}< \delta$$ have an invariant curve of the following form, parametrized by $\gamma$: $$ r=r_0+g_1(\gamma),\quad\theta=\gamma+g_2(\gamma)$$ where $g_1,g_2\in C^1$ and $\left\Vert g_1 \right\Vert_{C^1}+\left\Vert g_2 \right\Vert_{C^1}<\epsilon$. The induced mapping on the curve is given by $\gamma\mapsto \gamma+\kappa$, for some $\kappa$ incommensurable with $2\pi$. Furthermore, for any choice of $\kappa \in\mathrm{im}\,\sigma$ satisfying the conditions $$\left\vert \frac{\kappa}{2\pi}-\frac{p}{q}\right\vert \geq \alpha q^{-\beta},\quad \forall p,q\in\mathbb{Z},\;q>0$$ for some positive $\alpha,\beta$, there exists an invariant curve corresponding to $\kappa$ in the above sense. \end{thm} We now apply this theorem to the set $[-1/2,1/2]\times S^1$ in each annulus set $A_i$. Since the Hamiltonian vector field on this set is given by $\pi(r+1)\partial_\theta$, the twist condition is satisfied. We perturb the Hamiltonian $\bar H_0$ by a $C^{4+\epsilon}$-small perturbation, so the Hamiltonian vector field is $C^{3+\epsilon}$-close to the unperturbed vector field, as is the return map. To guarantee that we obtain an annulus map, we multiply the perturbation by a cutoff function that vanishes near the boundary of the annulus and equals $1$ on a smaller annulus. 
After that, the changes in the $r,\theta$ coordinates satisfy the conditions of the Moser twist theorem. We can now choose $\kappa$ such that the invariant curve lies in the region where the cutoff function is $1$. Therefore we can conclude that \begin{cor} After perturbation, there is still an invariant curve $c_i$ on each annulus $A_i$, $i=1,\ldots,3n+2$, away from the upper and lower boundary circles. \end{cor} We will denote by $T_i$ the invariant torus obtained by following the Reeb flow through $c_i$. \section{Proof of Theorem~\ref{thm2}} We have identified invariant sets in both the handle and annulus sets in $X=\mathcal{OB}(W,\psi)\# \mathcal{OB}_1 \# \ldots \# \mathcal{OB}_{3n+2}$. We will first use some of the invariant tori to separate the dynamics of the homology sphere $M$ from the $M_0$ region. \par We label the invariant torus in each annulus $A_i$ as $T_i$, and the invariant orbit in each handle set $H_i$ as $h_i$. Denote by $V_i$ the invariant set between $T_i$ and $T_{i+1}$: the situation is depicted in Figure~\ref{perturbpic}. 
\begin{figure}[h] \centering \begin{tikzpicture} \draw [draw=none, fill=lightgray] (1.3,-2) rectangle (10.4,2); \draw [draw=none, fill=gray] (-0.17,0) circle (1.5); \draw [fill=lightgray] (-0.17,0) circle (1.3); \path [fill=lightgray] (1.33,0) arc [start angle=0, end angle=15.5, radius=1.5] -- (0,0.4) -- (0,-0.1) -- (1.33,0) arc [start angle=0, end angle=-15.5, radius=1.5] -- (0,-0.4) -- (0,-0.1); \draw [draw=none, fill=gray] (1.2,0.4) rectangle (2,2); \draw [draw=none, fill=gray] (1.2,-0.4) rectangle (2,-2); \draw [draw=none, fill=gray] (2.8,0.4) rectangle (4.2,2); \draw [draw=none, fill=gray] (2.8,-0.4) rectangle (4.2,-2); \draw [draw=none, fill=gray] (5,0.4) rectangle (7.2,2); \draw [draw=none, fill=gray] (5,-0.4) rectangle (7.2,-2); \draw [draw=none, fill=gray] (8,0.4) rectangle (9.4,2); \draw [draw=none, fill=gray] (8,-0.4) rectangle (9.4,-2); \draw [draw=none, fill=white] (1.19,0.6) rectangle (1.8,2.1); \draw [draw=none, fill=white] (1.19,-0.6) rectangle (1.8,-2.1); \draw [draw=none, fill=white] (3,0.6) rectangle (4,2.1); \draw [draw=none, fill=white] (3,-0.6) rectangle (4,-2.1); \draw [draw=none, fill=white] (5.2,0.6) rectangle (7,2.1); \draw [draw=none, fill=white] (5.2,-0.6) rectangle (7,-2.1); \draw [draw=none, fill=white] (8.2,0.6) rectangle (9.2,2.1); \draw [draw=none, fill=white] (8.2,-0.6) rectangle (9.2,-2.1); \draw[draw=none, fill=lightgray] (1,-0.4) rectangle (2,0.4); \draw (-1.67,0) arc [start angle=180, end angle=23.5, radius=1.5]; \draw (-1.67,0) arc [start angle=180, end angle=336.5, radius=1.5]; \draw(1.06,-0.4) -- (2,-0.4); \draw(1.06,0.4) -- (2,0.4); \draw (1.8,-2) -- (1.8,-0.6); \draw (1.8,0.6) -- (1.8,2); \draw(2,-2) -- (2,-0.4); \draw(2,0.4) -- (2,2); \draw (3,-2) -- (3,-0.6); \draw (3,0.6) -- (3,2); \draw(2.8,-2) -- (2.8,-0.4); \draw(2.8,0.4) -- (2.8,2); \draw(2.8,-0.4) -- (4.2,-0.4); \draw(2.8,0.4) -- (4.2,0.4); \draw (4,-2) -- (4,-0.6); \draw (4,0.6) -- (4,2); \draw(4.2,-2) -- (4.2,-0.4); \draw(4.2,0.4) -- (4.2,2); \draw (5.2,-2) -- 
(5.2,-0.6); \draw (5.2,0.6) -- (5.2,2); \draw(5,-2) -- (5,-0.4); \draw(5,0.4) -- (5,2); \draw (1.2,-0.6) -- (1.8,-0.6); \draw (1.2,0.6) -- (1.8,0.6); \draw (3,-0.6) -- (4,-0.6); \draw (3,0.6) -- (4,0.6); \draw (4.2,-0.6) -- (4.2,-0.6); \draw (4.2,0.6) -- (4.2,0.6); \draw(5,-0.4) -- (7.2,-0.4); \draw(5,0.4) -- (7.2,0.4); \draw (5.2,0.6) -- (7,0.6); \draw (5.2,-0.6) -- (7,-0.6); \draw (7,0.6) -- (7,2); \draw (7,-0.6) -- (7,-2); \draw (7.2,0.4) -- (7.2,2); \draw (7.2,-0.4) -- (7.2,-2); \draw (8.2,0.6) -- (8.2,2); \draw (8.2,-0.6) -- (8.2,-2); \draw (8,0.4) -- (8,2); \draw (8,-0.4) -- (8,-2); \draw (8.2,-0.6) -- (9.2,-0.6); \draw (8.2,0.6) -- (9.2,0.6); \draw(8,-0.4) -- (9.4,-0.4); \draw(8,0.4) -- (9.4,0.4); \draw (9.2,-2) -- (9.2,-0.6); \draw (9.2,0.6) -- (9.2,2); \draw(9.4,-2) -- (9.4,-0.4); \draw(9.4,0.4) -- (9.4,2); \draw (10.4,-2) -- (10.4,2); \draw (3.2,0) node {$h_1$}; \draw [fill=black] (3.5,0) circle (0.04); \draw (5.4,0) node {$h_2$}; \draw [fill=black] (5.7,0) circle (0.04); \draw (9.3,0) node {$h_{3n+1}$}; \draw [fill=black] (8.7,0) circle (0.04); \draw (2.4,-2) -- (2.4,2); \draw (4.6,-2) -- (4.6,2); \draw (7.6,-2) -- (7.6,2); \draw (-0.17,-2.4) node {$\mathcal{OB}_0$}; \draw (2.4,-2.4) node {$\mathcal{OB}_1$}; \draw (4.6,-2.4) node {$\mathcal{OB}_2$}; \draw (7.6,-2.4) node {$\mathcal{OB}_{3n+1}$}; \draw (9.8,-2.4) node {$\mathcal{OB}_{3n+2}$}; \draw (2.4,2.5) node {$\downarrow$};\draw (4.6,2.5) node {$\downarrow$}; \draw (7.6,2.5) node {$\downarrow$}; \draw (2.4,2.9) node {$T_1$};\draw (4.6,2.9) node {$T_2$};\draw (7.6,2.9) node {$T_{3n+1}$}; \path[draw,decorate,decoration={brace}] (2.45,2.2) -- (4.55,2.2); \path[draw,decorate,decoration=brace] (-1.6,2.2) -- (2.35,2.2); \path[draw,decorate,decoration=brace] (4.65,2.2) -- (7.55,2.2); \path[draw,decorate,decoration=brace] (7.65,2.2) -- (10.4,2.2); \draw (3.5,2.55) node {$V_1$}; \draw (0.375,2.55) node {$V_0$}; \draw (9.025,2.55) node {$V_{3n+1}$}; \draw [draw=none, fill=white] (5.9,-1) rectangle (6.7,2.7); 
\draw (6.3,0) node {$\mathbf{\cdots}$}; \draw [loosely dashed] (1,2) -- (11.2,2); \draw [loosely dashed] (1,-2) -- (11.2,-2); \end{tikzpicture} \caption{Invariant sets of a page of the perturbed flow.} \label{perturbpic} \end{figure} First, we will find two Diophantine tori $T_1,~T_1'$ in the annuli adjacent to the homology sphere. As in Figure~\ref{Separate}, these Diophantine tori prevent the existence of orbits in the homology sphere that link with orbits in $\mathcal{OB}_2 \# \ldots \# \mathcal{OB}_{3n+2}$. Therefore, we can use a similar linking argument in the connected-sum region of annuli to provide a lower bound for the number of boundary components of the global surface of section. \par The key observation we make is the following: \begin{proposition} \label{link2} Orbits in $V_i$ and $V_j$ cannot link with each other if $|i-j|\geq 2$. \end{proposition} \begin{proof} Consider the manifolds $Y^r=\mathcal{OB}_0\#\ldots\#\mathcal{OB}_{i+1}$ and $Y^{\ell}=\mathcal{OB}_{i+3}\#\ldots \# \mathcal{OB}_{3n+1}$. We may assume that the attachment of the $(i+1)$th handle was performed using a ball in $Y^r$ to the right of $T_{i+1}$ and a ball in $Y^{\ell}$ to the left of $T_{i+2}$. Thus, orbits in $V_i$ and $V_j$ are unaffected by the connected sum, and may be viewed as sitting in $Y^r$ and $Y^{\ell}$, respectively. Take a Seifert surface for an orbit $k$ contained in $V_i$. We can perform the book-connected sum along a Darboux ball that does not intersect the Seifert surface in $V_i$, which ensures that the linking number with any other orbit contained in $V_j$ is zero. \end{proof} Now we can carry out the same linking number argument for the book-connected sum of $M$ with $3n+2$ copies of $S^3$ to conclude the proof. Assume that after a $C^{4+\epsilon}$-small perturbation $h$, there exists a global surface of section on $\{1\}\times M$ with fewer than $n$ boundary components $K_1,\ldots,K_m$, for $m< n$. 
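Here and below, linking numbers in the integral homology sphere are computed via Seifert surfaces, which exist by Appendix~\ref{app:seifert}: for disjoint oriented knots $k$ and $k'$ we set
\begin{equation*}
\mathrm{lk}(k,k'):=[k']\bullet S\in\mathbb{Z},
\end{equation*}
where $S$ is any Seifert surface for $k$ and $\bullet$ denotes the transverse intersection number. In particular, if $k'$ is disjoint from some Seifert surface for $k$, then $\mathrm{lk}(k,k')=0$; this is exactly the mechanism used in the proof of Proposition~\ref{link2}.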
We first look at the handle orbits: each handle orbit $h_i$ should have positive linking number with some boundary component. Assume that $K_1$ has positive linking number with $h_2$. Then by Proposition~\ref{link2}, the knot $K_1$ cannot link with $h_l$ for $l\geq4$. We can repeat this process for each handle orbit to show that the global surface of section must have at least $n$ distinct boundary components, which yields a contradiction. \begin{appendices} \section{Seifert Surfaces for Integral Homology Spheres} \label{app:seifert} In this appendix, we will show that Seifert surfaces exist for any knot in an integral homology 3-sphere. Recall that a Seifert surface for an oriented link $k$ in an integral homology sphere $M$ is a connected oriented compact surface $S$ embedded in $M$ such that the oriented boundary $\partial S$ is equal to the link $k$. \begin{thm} Let $M$ be an integral homology 3-sphere, so $H_*(M;\mathbb{Z})\cong H_*(S^3;\mathbb{Z})$. For any oriented knot $k$ in $M$, there exists a Seifert surface $S$ for $k$ embedded in $M$. \end{thm} \begin{proof} Take a tubular neighborhood $N_k$ of $k$ in $M$, and let the boundary of $N_k$ be $K$. Define $X:=M\setminus N_k$. We are going to construct the Seifert surface $S$ by first defining a map $f:K\to S^1$ and then extending it to $\Tilde{f}:X\to S^1$. By the transversality theorem, we can assume that both $f$ and $\Tilde{f}$ are transverse to some $p\in S^1$. Then $\Tilde{f}^{-1}(p)$ defines a surface $T$ with boundary in $K$. We then connect the boundary of $T$ in $K$ to $k$ to obtain a surface with boundary $k$. \par First, we will define a map from $K$ to $S^1$. To this end, we need to identify $K$ with $S^1\times S^1$. There is a natural choice for a meridian $m$: it is a generator for $H_1(X)\cong \mathbb{Z}$. Then define a longitude for $K$ by taking a knot $l\subset K$ such that the intersection number in $K$ satisfies $ l\bullet m=1$ and $[l]=0\in H_1(X)$. 
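The isomorphism $H_1(X)\cong\mathbb{Z}$ used here can be justified by a Mayer-Vietoris computation (a sketch, for the decomposition $M=X\cup N_k$ with $X\cap N_k=K\cong T^2$):
\begin{equation*}
0=H_2(M)\longrightarrow H_1(K)\longrightarrow H_1(X)\oplus H_1(N_k)\longrightarrow H_1(M)=0,
\end{equation*}
so $\mathbb{Z}^2\cong H_1(X)\oplus\mathbb{Z}$, which forces $H_1(X)\cong\mathbb{Z}$. Since the meridian $m$ bounds a disk in $N_k$, its class maps to $([m],0)$, and one checks that $[m]$ generates $H_1(X)$.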
With these two knots, we can identify $K$ with $S^1\times S^1$. The map $f:K\to S^1$ will be defined as the projection to the $S^1$ factor described by $m$. \par Now take the map $f$ defined above. Since $S^1$ is an Eilenberg--MacLane space $K(\mathbb{Z},1)$, we can use a cohomology class $[f]$ in $H^1(K;\mathbb{Z})$ to represent $f:K\to S^1$. We want to know whether this $f$ extends to all of $X$. For this, consider the inclusion $\iota:K\to X$. The extension of $f$ to a map $\tilde f:X \to S^1$ is equivalent to finding $[\Tilde{f}]$ in $H^1(X;\mathbb{Z})$ that maps to $[f]\in H^1(K)$ under the induced map $\iota^*:H^1(X)\to H^1(K)$. From the exact sequence of the pair, we can find such $[\Tilde{f}]$ if and only if $\partial^*[f]\in H^2(X,K)$ is zero. \begin{equation*} \begin{tikzcd} H^1(X) \arrow[r,"\iota^*"] &H^1(K) \arrow[r,"\partial^*"] &H^2(X,K) \end{tikzcd} \end{equation*} To show that $\partial^*[f]=0$, we will consider the Poincar\'e dual of $[f]$ in $H_1(K)$. For a generator $\alpha$ of $H^1(S^1)$, the class $[f]$ is defined to be $f^*\alpha$. Its Poincar\'e dual is $[l]\in H_1(K)$. Under Poincar\'e--Lefschetz duality, the vanishing of $\partial^*[f]$ is equivalent to the vanishing of $\iota_*[l]\in H_1(X)$, where $\iota:K\to X$ is the inclusion above. This holds because $[l]=0$ in $H_1(X)$. We conclude that there is an extension $\Tilde{f}:X\to S^1$. \par By the transversality theorem, we may assume that $p\in S^1$ is a regular value of both $f$ and $\Tilde{f}$, so that both maps are transverse to $p$. Therefore the preimage $T=\Tilde{f}^{-1}(p)$ is a surface in $X$ with boundary in $K$. Now look at the tubular neighborhood $N_k$ of $k$. The boundary of $T$ lies in $K=\partial N_k$, so we can extend $T$ to a surface $S$ with boundary $k$ by connecting the boundary of $T$ to $k$ in $N_k$. This $S$ will be our required Seifert surface for $k$. 
\end{proof} Another fact that we have implicitly used is that Liouville tori inside integral homology spheres divide the manifold into two connected components; we provide a short proof using the Mayer-Vietoris sequence. \par Assume $T$ is a surface in $M$ homeomorphic to the two-torus $\mathbb{T}^2$. Let $N$ be a tubular neighborhood of $T$ in $M$, with a homeomorphism $\phi:(-\epsilon,\epsilon)\times T\to N$. Then we can form the two sets $A=M\setminus T$ and $B=N$. We remark that $A\cap B$ can be identified with $\big((-\epsilon,0)\cup(0,\epsilon)\big)\times T$ using the map $\phi$. The Mayer-Vietoris sequence for the pair $(A,\:B)$ gives \begin{equation*} \begin{tikzcd} H_1(M) \arrow[r] &H_0(T\times\{0,\:1\}) \arrow[r] &H_0(A)\oplus H_0(B) \arrow[r] &H_0(M) \arrow[r] &0 \end{tikzcd} \end{equation*} Since $M$ is an integral homology sphere we have $H_1(M)=0$, and moreover $H_0(T\times\{0,\:1\})\cong\mathbb{Z}^2$, $H_0(B)\cong\mathbb{Z}$, $H_0(M)\cong\mathbb{Z}$. Exactness then shows that $H_0(A)$ has rank 2, and therefore $M\setminus T$ has exactly 2 connected components. \section{Construction of the Morse Hamiltonian} \label{app:morse_ham} In this appendix we will give an explicit construction of the Morse Hamiltonian $H$ on the page $Y^0$. The induced Hamiltonian flow will also have the following properties, which we use in the proof of Theorem~\ref{thm2}: \begin{enumerate} \item $H$ has a critical point at each handle set, which corresponds to a hyperbolic periodic orbit for the induced Hamiltonian flow. \item The level sets of $H$ on the annulus part form a continuum of Liouville tori, some of which are Diophantine tori. \item The monodromy induced by the Hamiltonian flow is isotopic to the return map of $Y^0$, and is the identity near the boundary. \end{enumerate} We first present the construction for the union of an annulus and a handle set. Consider a standard model for such a set given as the domain $D=[-1,1]\times S^1\cup_\partial [\frac{1}{2},\frac{5}{2}]\times[-\frac{\pi}{4},\frac{\pi}{4}]$, with the Liouville form $rd\theta$. 
The two sets $[-1,1]\times S^1$, $[\frac{1}{2},\frac{5}{2}]\times[-\frac{\pi}{4},\frac{\pi}{4}]$ each correspond to the annulus and handle sets. We assign to each domain a Hamiltonian $H_1, H_2$ given by \begin{enumerate} \item $H_1$: $[-1,1]\times S^1\to \mathbb{R}: (r,\theta)\mapsto -\frac{\pi}{2}r^2+\pi r$, \item $H_2$: $[\frac{1}{2},\frac{5}{2}]\times[-\frac{\pi}{4},\frac{\pi}{4}]\to \mathbb{R}: (r,\theta)\mapsto -C(1-(r-\frac{3}{2})^2+(\frac{4}{\pi}\theta)^2)$, \end{enumerate} where the constant $C>0$ will be determined later in the proof. \begin{figure} \centering \begin{tikzpicture} \draw (-2,-2) rectangle (0,2); \draw [loosely dashed] (-3,-2) -- (1,-2); \draw [loosely dashed] (-3,2) -- (1,2); \fill[gray] (-2,-2) rectangle (0,2); \fill[white] (-1,-1) rectangle (0,1); \draw (0,-1) rectangle (1,1); \fill[lightgray] (0,-1) rectangle (1,1); \draw [white, line width=2pt] (0,-1) -- (0,1); \draw [line width=2pt, line cap=round] (0,-1) -- (-1,-1) -- (-1,1) -- (0,1); \draw [line cap=round] plot [smooth] coordinates {(0,0.9) (-0.8,0.8) (-0.9,0) (-0.8,-0.8) (0,-0.9)}; \draw [line cap=round] plot [smooth] coordinates {(0,0.85) (-0.6,0.6) (-0.7,0) (-0.6,-0.6) (0,-0.85)}; \draw [line cap=round] plot [smooth] coordinates {(0,0.8) (-0.4,0.5) (-0.5,0) (-0.4,-0.5) (0,-0.8)}; \draw [line cap=round] plot [smooth] coordinates {(0,0.7) (-0.2,0.4) (-0.25,0) (-0.2,-0.4) (0,-0.7)}; \draw [line width=2pt, line cap=round] (0,-0.65) -- (0,0.65); \draw (-2.7,0) node {$\rho=1$}; \draw (1.7,0) node {$\rho=0$}; \end{tikzpicture} \caption{Level sets of the cutoff function $\rho$ (not to scale). The left rectangle corresponds to the annulus set, and the right square corresponds to the handle set.} \label{cutoff} \end{figure} We now define a cutoff function to connect the level sets on the handle and the annulus. 
Take $\rho$ to be a smooth function from $[\frac{1}{2},1]\times[-\frac{\pi}{4},\frac{\pi}{4}]$ to $[0,1]$ such that $\rho=1$ on $\{\frac{1}{2}\}\times [-\frac{\pi}{4},\frac{\pi}{4}]\cup [\frac{1}{2},1]\times \{\pm \frac{\pi}{4}\}$ and $\rho=0$ on $\{1\}\times [-\frac{\pi}{4},\frac{\pi}{4}]$. We extend $\rho$ to the whole set $D$ by assigning the constant values $1$ and $0$. The level sets of $\rho$ are sketched in Figure \ref{cutoff}. Note that the extended cutoff function $\rho$ has discontinuities in the region where the annulus and handle sets attach. \par Now define the Hamiltonian $H_0=\rho H_1+(1-\rho)H_2$. The level sets of $H_0$ and the flow of the corresponding Hamiltonian vector field are depicted in Figure~\ref{morse}. The constant $C$ is chosen such that the level sets match as in Figure~\ref{morse}. This construction is designed so that the boundary can be trimmed to a smooth submanifold, so we must check that the level sets of this Hamiltonian behave as in Figure~\ref{morse}. We will use Morse theory arguments to determine the topology of the level sets. Since the only critical point of $H_0$ is contained in the handle set, the homotopy type of $H_0^{-1}(x)$ only changes when $x=-C$. Since the level set $H_0^{-1}(-C)$ behaves as in Figure~\ref{morse}, for small $\epsilon>0$ the level set $H_0^{-1}(-C+\epsilon)$ can be used to ``trim off'' the boundary to a smooth set. If we choose $\epsilon$ small enough, we can also ensure that the level set $H_0^{-1}(-C+\epsilon)$ does not meet any discontinuities of $\rho$. Therefore, the restriction of $H_0$ to the region bounded by $H_0^{-1}(-C+\epsilon)$ is a smooth function. \par We now check whether the constructed Hamiltonian satisfies our proposed conditions. The Hamiltonian vector fields for $H_1, H_2$ can be computed to be $X_{H_1}=\pi(-r+1)\partial_\theta$, $X_{H_2}=-2C(\frac{4}{\pi}\theta\partial_r+(r-\frac{3}{2})\partial_\theta)$. 
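For instance, the formula for $X_{H_1}$ can be verified directly; here we assume the symplectic form $\omega=d(r\,d\theta)=dr\wedge d\theta$ and the sign convention $\iota_{X_H}\omega=-dH$, the convention consistent with the displayed formula for $X_{H_1}$:
\begin{equation*}
dH_1=\pi(1-r)\,dr,\qquad \iota_{b\,\partial_\theta}(dr\wedge d\theta)=-b\,dr,
\end{equation*}
so equating $-b\,dr=-dH_1$ gives $b=\pi(1-r)$, that is, $X_{H_1}=\pi(-r+1)\,\partial_\theta$.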
Since the Hamiltonian vector field on the annulus part generates a positive Dehn twist, we can ensure that the contact manifold generated by the return map is a tight $S^3$. Therefore, the return map is isotopic to the return map of the book-connected sum. \par To check conditions (1),~(2), we will look at the level sets of $H_0$. On the handle region, the Hamiltonian $H_2$ has a hyperbolic critical point at $(r,\theta)=(\frac{3}{2},0)$, which corresponds to the hyperbolic periodic orbit. In the annulus region, the Hamiltonian vector field $X_{H_1}=\pi(-r+1)\partial_\theta$ generates Liouville tori for $-\frac{1}{2}\leq r\leq \frac{1}{2}$. Therefore, we have checked that conditions (1), (2) are satisfied. \par \tikzset{->-/.style={decoration={ markings, mark=at position 0.5*\pgfdecoratedpathlength+.5*2mm with {\arrow{Latex[length=2mm]}}},postaction={decorate}}} \begin{figure} \centering \begin{tikzpicture} \draw [fill=lightgray!50, draw=none] (-6,-3) rectangle (-2.2,3); \draw [fill=lightgray!50, draw=none] (-4,-2) rectangle (4,2); \draw [fill=lightgray!50, draw=none] (2.2,-3) rectangle (6,3); \draw [fill=white, draw=none] (-1.94,2) ellipse (0.26 and 0.515); \draw [fill=white, draw=none] (1.94,2) ellipse (0.26 and 0.515); \draw [fill=white, draw=none] (-1.94,-2) ellipse (0.26 and 0.515); \draw [fill=white, draw=none] (1.94,-2) ellipse (0.26 and 0.515); \draw [fill=white, draw=none] plot [smooth] coordinates {(-2,1.5) (0,1) (2,1.5) (2,2)}; \draw [fill=white, draw=none] plot [smooth] coordinates {(-2,-1.5) (0,-1) (2,-1.5) (2,-2)}; \draw [fill=white, draw=none] plot (-2,2) -- (-2,1.48) -- (2.5,2); \draw [fill=white, draw=none] plot (-2,-2) -- (-2,-1.48) -- (2.5,-2); \draw [line width=1pt] (-4,-2) rectangle (4,2); \draw [dashed] (-4,-2) -- (-2,-1); \draw [dashed] (-4,2) -- (-2,1); \draw [line width=1pt] (-2,-3) -- (-2,3); \draw [line width=1pt] (-6,-3) -- (-6,3); \draw [line width=1pt] (2,-3) -- (2,3); \draw [line width=1pt] (6,-3) -- (6,3); \draw [loosely dashed] (-7,-3) 
-- (-1,-3); \draw [loosely dashed] (-7,3) -- (-1,3); \draw [loosely dashed] (1,-3) -- (7,-3); \draw [loosely dashed] (1,3) -- (7,3); \draw[->-] (-4,-3) -- (-4,3); \draw[->-] (-4.5,-3) -- (-4.5,3); \draw[->-] (-5,-3) -- (-5,3); \draw (-5.5,0) node {$\cdots$}; \draw[->-] (4,3) -- (4,-3); \draw[->-] (4.5,3) -- (4.5,-3); \draw[->-] (5,3) -- (5,-3); \draw (5.5,0) node {$\cdots$}; \draw (-3.6,-3) -- (-3.6,-2); \draw (3.6,-3) -- (3.6,-2); \draw (-3.6,2) -- (-3.6,3); \draw (3.6,2) -- (3.6,3); \draw [line cap=round] plot [smooth] coordinates {(-3.6,2) (-3.4,1.5) (-2,0.5) (-1.6,0) (-2,-0.5) (-3.4,-1.5) (-3.6,-2)}; \draw [line cap=round] plot [smooth] coordinates {(3.6,-2) (3.4,-1.5) (2,-0.5) (1.6,0) (2,0.5) (3.4,1.5) (3.6,2)}; \draw (-3.8,-3) -- (-3.8,-2); \draw (-3.8,2) -- (-3.8,3); \draw (3.8,-3) -- (3.8,-2); \draw (3.8,2) -- (3.8,3); \draw [line cap=round] plot [smooth] coordinates {(-3.8,2) (-3.7,1.5) (-3.3,0.5) (-3.2,0) (-3.3,-0.5) (-3.7,-1.5) (-3.8,-2)}; \draw [line cap=round] plot [smooth] coordinates {(3.8,-2) (3.7,-1.5) (3.3,-0.5) (3.2,0) (3.3,0.5) (3.7,1.5) (3.8,2)}; \draw (-3.1,-3) -- (-3.1,-2); \draw (-3.1,2) -- (-3.1,3); \draw (3.1,-3) -- (3.1,-2); \draw (3.1,2) -- (3.1,3); \draw (-2,-1) -- (2,1); \draw (-2,1) -- (2,-1); \draw [line cap=round] plot [smooth] coordinates {(-3.1,2) (-3.03,1.7) (-2.7,1.4) (-2,1)}; \draw [line cap=round] plot [smooth] coordinates {(3.1,2) (3.03,1.7) (2.7,1.4) (2,1)}; \draw [line cap=round] plot [smooth] coordinates {(-3.1,-2) (-3.03,-1.7) (-2.7,-1.4) (-2,-1)}; \draw [line cap=round] plot [smooth] coordinates {(3.1,-2) (3.03,-1.7) (2.7,-1.4) (2,-1)}; \draw (-2.2,-3) -- (-2.2,-2); \draw (-2.2,2) -- (-2.2,3); \draw (2.2,-3) -- (2.2,-2); \draw (2.2,2) -- (2.2,3); \draw [line cap=round] (-2.2,2) arc[start angle=180, end angle=255, x radius=0.26, y radius=0.515]; \draw [line cap=round] plot [smooth] coordinates {(-2,1.5) (0,1) (2,1.5)}; \draw [line cap=round] (2,1.5) arc[start angle=285, end angle=360, x radius=0.26, y radius=0.515]; \draw 
[line cap=round] (-2.2,-2) arc[start angle=180, end angle=105, x radius=0.26, y radius=0.515]; \draw [line cap=round] (2.2,-2) arc[start angle=15, end angle=90, x radius=0.26, y radius=0.715]; \draw [line cap=round] plot [smooth] coordinates {(-2,-1.5) (0,-1) (2,-1.5)}; \draw [fill=black] (-2.04,1.7) rectangle (-1.96,2); \draw [fill=black] (-2.04,-1.7) rectangle (-1.96,-2); \draw [fill=black] (2.04,1.7) rectangle (1.96,2); \draw [fill=black] (2.04,-1.7) rectangle (1.96,-2); \draw (-6,-3.3) node {$\downarrow$}; \draw (-6,-3.7) node {$r=-1$}; \draw (-4,-3.3) node {$\downarrow$}; \draw (-4,-3.7) node {$1/2$}; \draw (-2,-3.3) node {$\downarrow$}; \draw (-2,-3.7) node {$1$}; \draw (6,-3.3) node {$\downarrow$}; \draw (6,-3.7) node {$r=-1$}; \draw (4,-3.3) node {$\downarrow$}; \draw (4,-3.7) node {$1/2$}; \draw (2,-3.3) node {$\downarrow$}; \draw (2,-3.7) node {$1$}; \draw (-6.5,0) node {$\Big\uparrow$}; \draw (-7,0) node {$\theta$}; \draw (6.5,0) node {$\Big\uparrow$}; \draw (7,0) node {$\theta$}; \draw (-2.2,3.3) node {$\uparrow$}; \draw (-1.5,3.7) node {$H_0=-C+\epsilon$}; \draw (-3.1,3.3) node {$\uparrow$}; \draw (-3.4,3.7) node {$H_0=-C$}; \draw[->-] [draw=none] (-1.6,-1) -- (-1.6,1); \draw[->-] [draw=none] (1.6,1) -- (1.6,-1); \draw[->-] [draw=none] (-3.2,-1) -- (-3.2,1); \draw[->-] [draw=none] (3.2,1) -- (3.2,-1); \draw[->-] [draw=none] (2,1) -- (0,0); \draw[->-] [draw=none] (-2,-1) -- (0,0); \draw[->-] [draw=none] (0,0) -- (-2,1); \draw[->-] [draw=none] (0,0) -- (2,-1); \draw[->-] [draw=none] (-1,-1) -- (1,-1); \draw[->-] [draw=none] (1,1) -- (-1,1); \end{tikzpicture} \caption{The Hamiltonian flow generated by the Morse Hamiltonian (not to scale). The parallel dashed lines glue to form an annulus. The thickened part shows the discontinuities of the cutoff function $\rho$, which we trim back by the level set $H_0^{-1}(-C+\epsilon)$.} \label{morse} \end{figure} \par Now we consider the union of both annuli and the handle sets. 
The Hamiltonian is defined as before using cutoff functions. We remark that the right annulus set as depicted in Figure~\ref{morse} has the opposite orientation of $r$, so the Hamiltonian flow is also a positive Dehn twist in the annulus set. \par We choose small $\epsilon$ and ``trim'' the domain by the level set $H_0^{-1}(-C+\epsilon)$. Since the Hamiltonian $H_0$ in this model only has a critical point in the handle set, the same argument as above shows that we can choose $\epsilon$ such that $H_0^{-1}(-C+\epsilon)$ has smooth boundary, and $H_0$ restricted to this domain is a smooth function. As for the return map near the boundary, recall that the size of the Hamiltonian vector field only depends on the slope of the Hamiltonian. Therefore we increase the slope near the boundary such that the return map near the boundary is the identity. Since this perturbation does not generate new critical points, the Hamiltonian is still Morse, and it makes the return map the identity near the boundary. \par We conclude that the final Hamiltonian $H$ constructed by this process is a Morse Hamiltonian satisfying properties (1),~(2),~(3) above. \section{Stability of global surfaces of section} \label{appendix:stab_ob} In this appendix, we will prove the following proposition. \begin{proposition} \label{prop:stab_gss} Suppose that $(M,\omega)$ is a symplectic manifold with a Hamiltonian $H$. Assume that the level set $Y=H^{-1}(0)$ is of contact type and admits a global surface of section $\Sigma$ with possibly disconnected binding $B$. Also assume that the periodic orbits of $H$ associated with $B$ are non-degenerate. Then for a $C^2$-small perturbation $H_\delta$ of $H$, there is an embedded surface $\tilde \Sigma$ that is a global surface of section for the Hamiltonian flow of $H_\delta$.
\end{proposition} \begin{remark} Although we have used the words Hamiltonian and contact type to stay in line with the rest of the paper, the statement holds for any $C^1$-small perturbation of a smooth vector field $X$ that admits a global surface of section with non-degenerate binding orbits. \end{remark} \begin{proof} We first outline the argument before going into the details. Since the global surface of section $\Sigma$ is by definition transverse to the flow on the interior of $\Sigma$, the transversality part is clear for a $C^1$-small perturbation of the vector field generating the flow as long as we stay away from the binding. We use the linearized flow in order to see that the binding orbits survive a perturbation, and also to see that there is still a strong twist around the binding. Let us now look at some details. We consider a perturbation $H+\delta h$ for a $C^2$-small $h$. Except for a neighborhood of the binding, the original open book satisfies the transversality condition for the perturbed Reeb vector field. Now consider a small neighborhood $N$ of the binding. Since the complement of $N$ is a compact region, we can take $h$ small enough that the perturbed Reeb vector field is transverse to the pages of the original open book. Therefore, we only need to consider the inside of the binding neighborhood to show that an open book still exists for the perturbed Hamiltonian. We identify the binding neighborhood with $S^1\times D^2$ by introducing coordinates $(x, y, t)$ centered at the original binding. We first look at the Reeb vector field $R_\delta$ for the perturbed Hamiltonian $H_\delta=H+\delta h$. We will actually look at the normalized vector field $\tilde{R}_\delta$, whose $t$-component is identically $1$. The flow of the normalized vector field is a reparametrization of the flow of the original vector field.
We now apply an implicit function argument, which shows that for small $\delta$ there exists a periodic orbit $\gamma_\delta$ near the original periodic orbit. Define $X(x,\delta)$ to be the Hamiltonian vector field of the perturbed Hamiltonian $H+\delta h$. We can take a point $p$ on the binding. Take a local surface of section $\Sigma$ transverse to the binding at $p$. We define the return map of the Hamiltonian vector field at perturbation $\delta$ to be $\varphi(x,\delta)$. Since the binding orbit is non-degenerate, the linearization of $\varphi(x,\delta)$ at $(p,0)$ does not have $1$ as an eigenvalue. Now consider the map $\Sigma\times[0,\delta_1]\to\Sigma$, where $[0,\delta_1]$ is the domain of the parameter $\delta$. From the discussion above, we have that the matrix $\frac{\partial}{\partial x}\varphi-I$ is non-singular at $(p,0)$. Therefore from the implicit function theorem, we can find a point $\gamma_\delta(0)$ in $\Sigma$ such that $\varphi(\gamma_\delta(0),\delta)=\gamma_\delta(0)$ for small $\delta$. This implies that the orbit $\gamma_\delta(t)$ through this point is a smooth periodic orbit for the perturbed Hamiltonian $H+\delta h$. We denote the coordinates of this orbit by $(\gamma_{\delta,x}, \gamma_{\delta,y},\gamma_{\delta,t})$. Then we introduce new variables $\tilde u, \tilde v$ by putting $\tilde u=x-\gamma_{\delta,x}(z), \tilde v=y-\gamma_{\delta,y}(z)$, where $z=\gamma_{\delta,t}^{-1}(t)$ is a reparametrization of the $t$ coordinate. These variables measure the distance from the perturbed orbit $\gamma_\delta$. The coordinates $(\tilde u,\tilde v,z)$ are defined in a tubular neighborhood $\nu_Y(\gamma_0)$ of the unperturbed periodic orbit $\gamma_0$, but their behavior on the boundary of $\nu_Y(\gamma_0)$ depends on $\delta$. To fix this, we use a cutoff function $\rho$, which equals $1$ on $\gamma_\delta$ and vanishes on a neighborhood of $\partial \nu_Y(\gamma_0)$.
Then we put $u_\delta=x-\rho\gamma_{\delta,x}(z), v_\delta=y-\rho\gamma_{\delta,y}(z)$. Define the modified ``open book'' projection $\theta_\delta=\frac{(u_\delta, v_\delta)}{\sqrt{u_\delta^2+v_\delta^2}}\in S^1$. This map coincides with the original open book projection near $\partial \nu_Y(\gamma_0)$ and can hence be extended smoothly to $Y$ by using the original open book projection. By cutting out the global surface of section, we can lift this map to $\mathbb{R}$, which we will do to have a convenient description of the derivative; we will continue to write $\theta_\delta$ for this lifted map. Since the unperturbed Reeb vector field $R_0$ is transverse to the interior of the global surface of section, we can find $C>0$ such that $R_0(\theta_0)>2C$ on $Y \setminus \nu_Y(\gamma_0)$ (away from all binding orbits). As $R_\delta$ is $C^1$-close to $R_0$, we still have $R_\delta(\theta_\delta)>C$ if we choose $\delta_1$ sufficiently small. It hence suffices to show that we have transversality on a neighborhood of the binding orbit, $\nu_Y(\gamma_0)$. To analyze this, consider the smooth $1$-form $$ \Omega_\delta=u_\delta dv_\delta-v_\delta du_\delta. $$ We observe that $$ d \theta_\delta =\frac{\Omega_\delta}{u_\delta^2 +v_\delta^2}, $$ so $R_\delta(\theta_\delta)>0$ is equivalent to $\Omega_\delta(R_\delta)>0$. Since $\Omega_\delta (R_\delta)$ is a smooth function of $p=(u,v,t)$ and $\delta$, we consider a Taylor expansion in the $u,v$-coordinates and $\delta$. The $0$-th order term in $\delta$ of this expansion is \begin{equation*} \Omega_0(R_0)(u,v,t) = C_t(u^2+v^2)+o(u^2 + v^2). \end{equation*} This can be seen most easily from the explicit form of the Reeb vector field~\eqref{eq:Reeb_vf}, but below we shall see that such an expression follows for all small $\delta$ by analyzing the linearized flow. We make the following two claims.~\\ \noindent {\bf Claim 1: } $\Omega_0(R_0)\geq 0$ and $\Omega_0(R_0)$ vanishes only along $\gamma_0$.
~\\ \noindent {\bf Claim 2: } there is a uniform (i.e.~independent of $\delta$) neighborhood $N$ of $\gamma_0$, and a constant $\delta_2>0$ such that for $\delta \in [0,\delta_2]$ the following hold. \begin{itemize} \item $\gamma_\delta\subset N$ \item for all $p\in N$ we have $\Omega_\delta(R_\delta)(p) \geq 0$ and $\Omega_\delta(R_\delta)(p)=0$ if and only if $p\in \gamma_\delta$. \end{itemize} The first claim is clear. We verify the second claim by analyzing the linearized flow. Put $P=(U,V,Z)$, and set $p =\gamma_\delta+\epsilon P$. The flow equation for $p$ is $$ \frac{d p}{dt} =\tilde R_\delta(p), $$ and by expanding in $\epsilon$ we obtain the linearized equation $\frac{d P}{dt} = \nabla_P \tilde R_\delta +o(1)$. Since we only need the component normal to $\gamma_\delta$, we will use the following matrix representation for the normal component of the linearized flow. $$ \left( \begin{array}{c} \dot U \\ \dot V \end{array} \right) = A_\delta \left( \begin{array}{c} U \\ V \end{array} \right) , $$ where $A_\delta$ is a time-dependent matrix. We now compute the value of $\Omega_\delta(\tilde R_\delta)$ at $p$ using the above definitions. \begin{align*} \Omega_\delta(R_\delta) (p)&= (u_\delta dv_\delta-v_\delta du_\delta)(R_\delta)\\ &= \epsilon^2(U \dot V - V \dot U) + o(\epsilon^2)\\ &= \left(\begin{smallmatrix} \epsilon U \\ \epsilon V \end{smallmatrix}\right)^t \left(\begin{smallmatrix} 0 & 1\\ -1 & 0 \end{smallmatrix}\right) A_{\delta} \left(\begin{smallmatrix} \epsilon U \\ \epsilon V \end{smallmatrix}\right)+o(\epsilon^2). \end{align*} For fixed $t$, we know from the above that $\left(\begin{smallmatrix} 0 & 1\\ -1 & 0 \end{smallmatrix}\right)A_0$ is positive definite, so $\left(\begin{smallmatrix} 0 & 1\\ -1 & 0 \end{smallmatrix}\right) A_\delta$ is too, for sufficiently small $\delta$. This settles the second claim. To complete the proof, we argue by contradiction.
Suppose that for all $\delta>0$, there is a point $p_\delta \notin \gamma_\delta$ such that $\Omega_\delta(\tilde R_\delta)(p_\delta)=0$. We obtain a sequence $\delta_n$ converging to $0$ and, by compactness, a convergent sequence $p_n$ with limit $p_\infty$, such that $\Omega_{\delta_n}(\tilde R_{\delta_n})(p_n)=0$ for all $n$. By Claim 1, we see that $p_\infty\in \gamma_0$. But this means that $p_n$ lies in $N$ for sufficiently large $n$, contradicting Claim 2. This completes the proof. \end{proof} \end{appendices}
\section{Introduction} Many online companies earn money from auctions, selling advertisement space or other items. One widely used auction paradigm is second-price auctions with reserve \citep{easley}. In this paradigm, the company sets a {\em reserve price}, the minimal price at which they are willing to sell, before potential buyers cast their bids. If the highest bid is smaller than the reserve price then there is no transaction; the company does not earn money. If any bid is larger than the reserve price then the highest bidding buyer wins the auction, and the buyer pays the larger of the second highest bid and the reserve price. To maximize their profit from a specific auction, the host company wants to set the reserve price as close as possible to the (future, unknown) highest bid, but no higher. Imagine a company which hosts second-price auctions with reserve to sell baseball cards. This auction mechanism is designed to be incentive compatible \citep{bar2002incentive}, which means that it is advantageous for baseball enthusiasts to bid exactly what they are willing to pay for the Stanley Kofax baseball card they are eager to own\footnote{In contrast, the auction mechanism used on eBay is not incentive compatible since the bids are not sealed. As a result, experienced bidders refrain from bidding the true amount they are willing to pay until seconds before the auction ends to keep sale prices low.}. Before each auction starts the company has to set the reserve price. When companies run millions of auctions of similar items, they have the opportunity to learn how to opportunistically set the reserve price from their historical data. In other words, they can try to learn their users' value of different items, and take advantage of this knowledge to maximize profit. This is the problem that we address in this paper. We develop a probabilistic model that predicts a good reserve price from prior features of an auction. 
These features might be properties of the product, such as the placement of the advertisement, properties of the potential buyers, such as each one's average past bids, or other external features, such as time of day of the auction. Given a data set of auction features and bids, our method learns a predictor of reserve price that maximizes the profit of future auctions. \begin{figure}[t] \centering \subcaptionbox {Revenue function of $4$ auctions from the eBay data set as a function of reserve price. In second-price auctions with reserve the revenue depends on the highest and the second highest bid (dashed lines).\label{revFun}}[0.49\textwidth] {\includegraphics[width=0.49\textwidth]{revenues.eps}} \subcaptionbox {The effect of smoothing on the revenue function of an auction from the eBay data set. The smaller $\sigma$, the closer the smoothed revenue approximates the actual revenue function.\label{fig:smooth} }[0.49\textwidth] {\includegraphics[width=0.49\textwidth]{smooth.eps}} \caption{The revenue (a) and smoothed revenue (b) for example auctions from the eBay data set.} \end{figure} A typical solution to such real-valued prediction problems is linear regression. However, the solution to this problem is more delicate. The reason is that the revenue function for each auction---the amount of money that we make as a function of the reserve price $y$---is asymmetric. It remains constant at the second-highest bid $b$ while the reserve price is below $b$, increases linearly up to the highest bid $B$, and is zero beyond the highest bid. Formally, \begin{align} \label{eq:revenue} R(y, B,b) = \begin{cases} b & \text{ if } y <b\\ y & \text{ if } b\leq y \leq B\\ 0 & \text{ otherwise} \end{cases}. \end{align} Fig. \ref{revFun} illustrates this function for four auctions of sports collectibles from eBay. This figure puts the delicacy into relief. The best reserve price, in retrospect, is the highest bid $B$.
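As a minimal illustrative sketch (not part of the paper's implementation), the case split of Eq.~\ref{eq:revenue} translates directly into code; the bids below are taken from one of the example auctions in Fig.~\ref{revFun}:

```python
def revenue(y, B, b):
    """Revenue of a second-price auction with reserve price y,
    highest bid B, and second-highest bid b."""
    if y < b:
        return b      # reserve below both bids: winner pays the second bid
    if y <= B:
        return y      # reserve between the bids: winner pays the reserve
    return 0.0        # reserve above the highest bid: no sale

# The asymmetry around the highest bid B = 43.03:
print(revenue(40.00, 43.03, 17.50))   # 40.0  (slight underestimate)
print(revenue(44.00, 43.03, 17.50))   # 0.0   (slight overestimate)
```

Overshooting the highest bid by a dollar forfeits the entire sale, while undershooting only forfeits the gap; this asymmetry is what motivates the construction that follows.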
But using a regression to predict the reserve price, e.g., by using the highest bid as the response variable, neglects the important fact that overestimating the reserve price is much worse than underestimating it. For example, consider the top left panel in Fig. \ref{revFun}, which might be the price of a Stanley Kofax baseball card. (Our data are anonymized, but we use this example for concreteness.) The best reserve price in retrospect is \$43.03. A linear regressor is just as likely to overestimate as to underestimate and hence fails to reflect that setting the price in advance to \$44.00 would yield zero earnings while setting it to \$40.00 would yield the full \$40.00. To solve this problem we develop a new idea, \textit{the objective variable}. Objective variables use the machinery of probabilistic models to reason about difficult prediction problems, such as one that seeks to optimize Eq.~\ref{eq:revenue}. Specifically, objective variables enable us to formulate probabilistic models for which MAP estimation directly uncovers profitable decision-making strategies. We develop and study this technique to set the reserve price in second-price auctions. In more detail, our aim is to find a parameterized mechanism $f(x_i; w)$ to set the reserve price from the auction features $x_i$. In our study, we will consider a linear predictor, kernelized regression, and a neural network. We observe a historical data set of $N$ auctions that contains features $x_i$ and each auction's two highest bids $B_i$ and $b_i$; we would like to learn a good mechanism by optimizing the parameter $w$ to maximize the total (retrospective) revenue $\sum_{i=1}^N R(f(x_i; w), B_i, b_i)$. We solve this optimization problem by turning it into a maximum a posteriori (MAP) problem. For each auction we define new binary variables---these are the objective variables---that are conditional on a reserve price.
The probability of the objective variable being on (i.e., equal to one) is related to the revenue obtained from the reserve price; it is more likely on if the auction produces more revenue. We then set up a model that first assumes each reserve price is drawn from the parameterized mechanism $f(x_i; w)$ and then draws the corresponding objective variable. Note that this model is defined conditioned on our data, the features and the bids. It is a model of the objective variables. With the model defined, we now imagine a ``data set'' where all of the objective variables are on, and then fit the parameters $w$ subject to these data. Because of how we defined the objective variables, the model will prefer more profitable settings of the parameters. With this setup, fitting the parameters by MAP estimation is equivalent to finding the parameters that maximize revenue. The spirit of this technique is that the objective variables are likely to be on when we make good decisions, that is, when we profit from our setting of the reserve price. When we imagine that they are all on, we are imagining that we made good decisions (in retrospect). When we fit the parameters to these data, we are using MAP estimation to find a mechanism that helps us make such decisions. We first derive our method for linear predictors of reserve price and show how to use the expectation-maximization algorithm~\citep{dempster1977maximum} to solve our MAP problem. We then show how to generalize the approach to nonlinear predictors, such as kernel regression and neural networks. Finally, on simulated data and real-world data from eBay, we show that this approach outperforms the existing methods for setting the reserve price. It is both more profitable and scales more easily to larger data sets. \parhead{Related work.} Second-price auctions with reserve were first introduced in \cite{easley}.
Ref.~\cite{ostrovsky2011reserve} empirically demonstrates the importance of optimizing reserve prices; their study quantifies the positive impact it had on Yahoo!'s revenue. However, most previous work on optimizing the reserve price is limited in that it does not consider features of the auction~\cite{ostrovsky2011reserve,cesa2013regret}. Our work builds on the ideas in Ref.~\cite{mohri2014learning}. This research shows how to learn a linear mapping from auction features to reserve prices, and demonstrates that we can increase profit when we incorporate features into the reserve-price setting mechanism. We take a probabilistic perspective on this problem, and show how to incorporate nonlinear predictors. We show in Sec. \ref{sec:experiments} that our algorithms scale better and perform better than these approaches. The objective variable framework also relates to recent ideas from reinforcement learning to solve partially observable Markov decision processes (POMDPs) \cite{toussaint2006probabilistic,toussaint2008hierarchical}. Solving a POMDP amounts to finding an action policy that maximizes expected future return. Refs.~\cite{toussaint2006probabilistic,toussaint2008hierarchical} introduce a binary reward variable (similar to an objective variable) and use maximum likelihood estimation to find such a policy. Our work solves a different problem with similar ideas, but there are also differences between the methods. In one way, the problem in reinforcement learning is more difficult because the reward is itself a function of the learned policy; in auctions, the revenue function is known and fixed. In addition, the work in reinforcement learning focuses on simple discrete policies while we show how to use these ideas for continuously parameterized predictors. \section{Objective Variables for Second-Price Auctions with Reserve}\label{sec:SPAWR} We first describe the problem setting and the objective. Our data come from previous auctions.
For each auction, we observe features $x_i$, the highest bid $B_i$, and the second highest bid $b_i$. The features represent various characteristics of the auction, such as the date, time of day, or properties of the item. For example, one of the auctions in the eBay sport collectibles data set might be for a Stanley Kofax baseball card; its features include the date of the auction and various aspects of the item, such as its condition and the average price of such cards on the open market. When we execute an auction we set a reserve price before seeing the bids; this determines the revenue we receive after the bids are in. The revenue function (Eq. \ref{eq:revenue}), which is indexed by the bids, determines how much money we make as a function of the chosen reserve price. We illustrate this function for 4 auctions from eBay in Fig. \ref{revFun}. Our goal is to use the historical data to learn how to profitably set the reserve price from auction features, that is, before we see the bids. For now we will use a linear function to map auction features to a good reserve price. Given the feature vector $x_i$, we set the reserve price with $f(x_i;w) = w^\top x_i$. (In Sec. \ref{sec:nonlin} we consider nonlinear alternatives.) We fit the coefficients $w$ from data, seeking $w$ that maximizes the regularized revenue \begin{align} \label{eq:regularized-revenue} w^* = \argmax_w \sum_{i=1}^N R(f(x_i;w), B_i, b_i) - (\lambda/2)w^\top w. \end{align} We have chosen an $L_2$ regularization controlled by parameter $\lambda$; other regularizers are also possible. Before we discuss our solution to this optimization, we make two related notes. First, the previous reserve prices are \textit{not} included in the data. Rather, our data tell us about the relationship between features and bids. All the information about how much we might profit from the auction is in the revenue function; the way previous sellers set the reserve prices is not relevant.
Second, our goal is not the same as learning a mapping from features to the highest bid. Not all auctions are created equal: Consider the top left auction in Fig. \ref{revFun} with highest and second highest bid $B_1 = \$43.03$ and $b_1 = \$17.5$ compared to the bottom left auction in Fig. \ref{revFun} with both highest and second highest bids almost identical at $B_3 = \$39.83$ and $b_3=\$39.17$. The profit margin in the first auction is much larger, so predicting the reserve price for this auction well is much more important than when the two highest bids are close to each other. We account for this by directly maximizing revenue, rather than by modeling the highest bid. \subsection{The smoothed revenue} The optimization problem in Eq. \ref{eq:regularized-revenue} is difficult to solve because $R(\cdot)$ is discontinuous (and thus non-convex). Previous work~\cite{mohri2014learning} addresses this problem by iteratively fitting differences of convex (DC) surrogate functions and solving the resulting DC-program \cite{tao1998dc}. We define an objective function related to the revenue, but one that smooths out the troublesome discontinuity. In the next section we show how to optimize this objective with an expectation-maximization algorithm. We first place a Gaussian distribution on the reserve price centered around the linear mapping, $y_i \sim \mathcal{N}(f(x_i; w),\sigma^2)$. We define the smoothed regularized revenue to be \begin{align} \label{eq:smoothed_revenue} \mathcal{L}(w) = \sum_{i=1}^N \log \mathbb{E}_{y_i}\left[\exp\left\{R(y_i, B_i, b_i)\right\}\right] - (\lambda / 2) w^\top w. \end{align} Figure~\ref{fig:smooth} shows one term from Eq. \ref{eq:smoothed_revenue} and how -- for a specific auction -- the smoothed revenue becomes closer to the original revenue function as $\sigma^2$ decreases.
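A single smoothed term, $\log \mathbb{E}_{y}[\exp\{R(y, B, b)\}]$ with $y \sim \mathcal{N}(\mu, \sigma^2)$, can be approximated by Monte Carlo; the following rough sketch (our illustration, with an arbitrary mean and sample count) also shows the limiting behavior as $\sigma \to 0$:

```python
import numpy as np

def smoothed_term(mean, sigma, B, b, n_samples=100_000, seed=0):
    """Monte Carlo estimate of log E_{y ~ N(mean, sigma^2)}[exp(R(y, B, b))]
    for a single auction; as sigma -> 0 it approaches R(mean, B, b)."""
    rng = np.random.default_rng(seed)
    y = rng.normal(mean, sigma, n_samples)
    # Piecewise revenue R(y, B, b), vectorized over the samples.
    R = np.where(y < b, b, np.where(y <= B, y, 0.0))
    m = R.max()                       # stabilized log-mean-exp
    return m + np.log(np.mean(np.exp(R - m)))

# With a small sigma the smoothed term is close to the true revenue:
# for a reserve between the two bids, R(30, 43.03, 17.50) = 30.
est = smoothed_term(mean=30.0, sigma=0.1, B=43.03, b=17.50)
```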
This approach was inspired by probit regression, where a Gaussian expectation is introduced to smooth the discontinuous 0-1 loss~\citep{albert1993bayesian,holmes2006bayesian}. We now have a well-defined and continuous objective function; in principle, we can use gradient methods to fit the parameters. However, we will fit them by recasting the problem as a regularized likelihood under a latent variable model and then using the expectation-maximization (EM) algorithm~\citep{dempster1977maximum}. This leads to closed-form updates in both the E and M steps, and facilitates replacing linear regression with a nonlinear predictor. \subsection{Objective variables} \label{sec:TOV} To reformulate our optimization problem, we introduce the idea of the \textit{objective variable}. Objective variables are part of a probabilistic model for which MAP estimation recovers the parameter $w$ that maximizes the smoothed revenue in Eq. \ref{eq:smoothed_revenue}. Specifically, we define binary variables $z_i$ for each auction, each conditioned on the reserve price $y_i$, the highest bid $B_i$, and the second highest bid $b_i$. We can interpret these variables to indicate ``Is the auction host satisfied with the outcome?'' Concretely, the likelihood of satisfaction is related to how profitable the auction was relative to the maximum profit, $p(z_i=1 \, | \, y_i, B_i, b_i) = \pi(y_i, B_i, b_i)$ where \begin{align} \label{eq:objective_probability} \pi(y_i, B_i, b_i) = \exp \left\{- (B_i - R(y_i, B_i, b_i))\right\}. \end{align} The revenue function $R(\cdot)$ is in Eq. \ref{eq:revenue}. The revenue is bounded by $B_i$; thus the probability is in $(0,1]$. What we will do is set up a probability model around the objective variables, assume that they are all ``observed'' to be equal to one (i.e., we are satisfied with all of our auction outcomes), and then fit the parameter $w$ to maximize the posterior conditioned on this ``hallucinated data''. Fig.
\ref{fig:posterior} provides visual intuition for why the modes of the posterior are profitable. For fixed $w$ the posterior of $y_i$ is proportional to the product of its prior centered at $f(x_i;w)$ and the likelihood of the objective variable (Eq. \ref{eq:objective_probability}), which captures the profitability of each possible reserve price prediction. Consider the following model, \begin{align}\label{eqn:generative} w & \sim \mathcal{N}(0, \lambda^{-1} I) \\ y_i \, | \, w, x_i & \sim \mathcal{N}(f(x_i;w), \sigma^2) \quad i \in \{1, \ldots, N\} \\ z_i \, | \, y_i, B_i, b_i & \sim \mathrm{Bernoulli}(\pi(y_i, B_i, b_i)) \end{align} where $f(x_i;w)=x_i^\top w$ is a linear map (for now). This is illustrated as a graphical model in Fig. \ref{fig:gm}. Now consider a data set $\mathbf{z}$ where all of the objective variables $z_i$ are equal to one. Conditional on these data, the log posterior of $w$ marginalizes out the latent reserve prices $y_i$, \begin{align} \log p(w \, | \, \mathbf{z}, \mathbf{x}, \mathbf{B}, \mathbf{b}) = \log p(w \, | \, \lambda) + \sum_{i=1}^{N} \left(\log \mathbb{E}\left[\exp\{R(y_i, B_i, b_i)\}\right] - B_i\right) - C, \end{align} where $C$ is the normalizer. This is the smoothed revenue of Eq. \ref{eq:smoothed_revenue} plus a constant involving the top bids $B_i$ in Eq. \ref{eq:objective_probability}, constant components of the prior on $w$, and the normalizer. Thus, we can optimize the smoothed revenue by taking MAP estimates of $w$. \begin{figure}[tp] \centering \subcaptionbox {The objective variable model (OV model).
The objective variable is shaded with diagonal lines to distinguish that its value is not observed but rather set to our desired value.\label{fig:gm} }[0.5\textwidth] { \begin{tikzpicture} \tikzstyle{main}=[circle, minimum size = 6mm, thick, draw =black!80, node distance = 8mm and 16mm] \tikzstyle{connect}=[-latex, thick] \tikzstyle{box}=[rectangle, draw=black!100] \node[main, fill = blue!20,pattern=north east lines] (z) [label=above:$z_i$] { }; \node[main, fill = white!10] (y) [left=of z,label=above:$y_i$] {}; \node[main] (w) [left=of y,label=below:$w$] { }; \node[main, fill = black!40] (b) [below=of z,label=below:$B_i b_i$] {}; \node[main, fill = black!40] (x) [left=of b,label=below:$x_i$] { }; \path (x) edge [connect] (y) (w) edge [connect] (y) (b) edge [connect] (z) (y) edge [connect] (z); \node[rectangle, inner sep=0.6mm, fit= (b),label=right:$N$, yshift=-8mm, xshift=1mm] {}; \node[rectangle, inner sep=8mm, draw=black!100, fit =(b) (x) (y) (z)] {}; \end{tikzpicture} } \subcaptionbox {For fixed w the posterior of the latent reserve price (red) is proportional to the prior (blue) times the likelihood of the objective (green). MAP estimation uncovers profitable modes of the posterior.\label{fig:posterior}}[0.49\textwidth] {\includegraphics[width=0.45\textwidth]{priors.eps}} \caption{The objective variable framework transforms the revenue maximization task into a MAP estimation task. The model and the hallucinated data are designed such that the modes of the model's posterior are the local maxima of the smoothed revenue in Eq. \ref{eq:smoothed_revenue}}. \end{figure} As we mentioned above, we have defined variables corresponding to the auction host's satisfaction. With historical data of auction attributes and bids, we imagine that the host was satisfied with every auction. When we fit $w$, we ask for the reserve-price-setting mechanism that leads to such an outcome. 
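Sampling the model of Eq.~\ref{eqn:generative} forward makes the construction concrete. A small sketch under made-up toy auctions (the bid values are those of the figure examples; the features and hyperparameters are arbitrary):

```python
import numpy as np

def revenue(y, B, b):
    # Piecewise revenue of a second-price auction with reserve y (vectorized).
    return np.where(y < b, b, np.where(y <= B, y, 0.0))

def sample_ov_model(X, B, b, lam=1.0, sigma=1.0, seed=0):
    """One joint draw (w, y, z) from the OV model: Gaussian weights,
    Gaussian reserve prices around X @ w, and Bernoulli objective
    variables with success probability pi = exp(-(B - R))."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(0.0, 1.0 / np.sqrt(lam), d)   # w ~ N(0, lam^{-1} I)
    y = rng.normal(X @ w, sigma)                 # y_i ~ N(f(x_i; w), sigma^2)
    pi = np.exp(-(B - revenue(y, B, b)))         # in (0, 1] since R <= B
    z = rng.random(n) < pi                       # z_i ~ Bernoulli(pi)
    return w, y, z, pi

# Toy auctions with made-up features and the example bids.
X = np.array([[1.0, 0.5], [0.3, 2.0]])
B = np.array([43.03, 39.83])
b = np.array([17.50, 39.17])
w, y, z, pi = sample_ov_model(X, B, b)
```

Reserve prices closer to the profitable region push $\pi$ toward $1$, so conditioning on all $z_i = 1$ favors profitable parameter settings.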
\subsection{MAP estimation with expectation-maximization} \label{sec:EM} The EM algorithm is a technique for maximum likelihood estimation in the face of hidden variables~\citep{dempster1977maximum}. (When there are regularizers, it is a technique for MAP estimation.) In the E-step, we compute the posterior distribution of the hidden variables given the current model settings; in the M-step, we maximize the expected complete regularized log likelihood, where the expectation is taken with respect to the previously computed posterior. In the OV model, the latent variables are the reserve prices $\mathbf{y}$; the observations are the objective variables $\mathbf{z}$; and the model parameters are the coefficients $w$. We compute the posterior expectation of the latent reserve prices in the E-step and fit the model parameters in the M-step. This is a coordinate ascent algorithm on the expected complete regularized log likelihood of the model and the data. Each E-step tightens the bound on the likelihood and the new bound is then optimized in the M-step. \parhead{E-step.} At iteration $t$, the E-step computes the conditional distribution of the latent reserve prices $y_i$ given the objective variables $z_i = 1$ and the parameters $w^{(t-1)}$ of the previous iteration. It is \begin{align} \label{qYi} p(y_i|z_i=1,w^{(t-1)}) & \propto p(z_i=1|y_i) p(y_i|w^{(t-1)}) \\ & \propto \exp\left\{-(B_i-R(y_i, B_i, b_i))\right\} \phi\left(\frac{y_i-f(x_i; w^{(t-1)})}{\sigma}\right), \end{align} where $\phi(\cdot)$ is the pdf of the standard normal distribution. The normalizing constant is in the appendix in Eq. \ref{normalizing}; we compute it by integrating Eq. \ref{qYi} over the real line. We can then compute the posterior expectation $\mathbb{E}\left[y_i \, | \, z_i, w^{(t-1)}\right]$ by using the moment generating function. (See Eq. \ref{EY}, Sec. \ref{sec:updateEY}.) \parhead{M-step.} The M-step maximizes the expected complete regularized log likelihood with respect to the model parameters $w$.
When we use a linear predictor to set the reserve prices, i.e., $f(x_i;w) = x_i^\top w$, the M-step has a closed-form update, which amounts to ridge regression against the response variables $\mathbb{E}\left[y_i \, | \, z_i, w^{(t-1)}\right]$ (Eq. \ref{EY}) computed in the E-step. The update is \begin{align} w^{(t)} = \left(\lambda I + \frac{1}{\sigma^2}\mathbf{x}^\top \mathbf{x}\right)^{-1} \frac{1}{\sigma^2} \mathbf{x}^\top \mathbb{E}\left[\mathbf{y} \, | \, \mathbf{z}, w^{(t-1)}\right] \label{wls} \end{align} where $\mathbb{E}\left[\mathbf{y} \, | \, \mathbf{z}, w^{(t-1)}\right]$ denotes the vector with $i^{\text{th}}$ entry $\mathbb{E}\left[y_i \, | \, \mathbf{z}, w^{(t-1)}\right]$ and, similarly, $\mathbf{x}$ is the matrix of all feature vectors $x_i$. \parhead{Algorithm details.} To initialize, we set the expected reserve prices to the highest bids $\mathbb{E}[y_i \, | \, z_i] = B_i$ and run an M-step. The algorithm then alternates between updating the weights using Eq. \ref{wls} in the M-step and integrating out the latent reserve prices in the E-step. The algorithm terminates when the change in revenue on a validation set falls below a threshold. (We use $10^{-5}$.) The E-step is linear in the number of auctions $N$ and can be parallelized, since the expected reserve prices are conditionally independent in our model. The least squares update has asymptotic complexity $O(d^2N)$, where $d$ is the number of features. \subsection{Nonlinear Objective Variable Models} \label{sec:nonlin} One of the advantages of our EM algorithm is that we can change the parameterized prediction technique $f(x_i; w)$ that maps auction features to the mean of the reserve price. So far we have only considered linear predictors; here we show how to adapt the algorithm to nonlinear predictors. As we will see in Sec. \ref{sec:experiments}, these nonlinear predictors outperform the linear predictors. In our framework, much of the model in Fig. 
\ref{fig:gm} and the corresponding algorithm remain the same even when considering nonlinear predictors. The distribution of the objective variables is unchanged (Eq. \ref{eq:objective_probability}), as is the E-step update in the EM algorithm (Eq. \ref{EY}). All of the changes are in the M-step. \parhead{Kernel regression.} Kernel regression~\cite{aizerman1964theoretical} maps the features $x_i$ into a higher dimensional space through a feature map $\psi(\cdot)$; the mechanism for setting the reserve price becomes $f(x_i;w) = \psi(x_i)^Tw$. In kernel regression we work with the $N \times N$ Gram matrix $K$ of inner products, where $K_{ij} = \psi(x_i)^T \psi(x_j)$. In this work we use a polynomial kernel of degree $D$, and thus compute the Gram matrix without evaluating the feature map $\psi(\cdot)$ explicitly, $K = (\mathbf{x}^\top \mathbf{x} + 1)^D$. Rather than learning the weights directly, kernel methods operate in the dual space $\alpha \in \mathbb{R}^N$. If $K_i$ is the $i^{\text{th}}$ column of the Gram matrix, then the mean of the reserve price is \begin{align} f(x_i;w) = \psi(x_i)^Tw=K_i^T\alpha. \end{align} The corresponding M-step in the algorithm becomes \begin{align} \label{eqn:kls} \alpha^{(t)} = \left(\frac{1}{\sigma^2} K +\lambda I_N\right)^{-1}\frac{1}{\sigma^2}\mathbb{E}[\mathbf{y} \, | \, \mathbf{z}, \alpha^{(t-1)}]. \end{align} See \cite{bishop2006pattern} for technical details on kernel regression. We will demonstrate in Sec. \ref{sec:experiments} that replacing linear regression with kernel regression can lead to better reserve price predictions. However, working with Gram matrices comes at a computational cost, and we consider neural networks as a scalable alternative for infusing nonlinearity into the model. \parhead{Neural networks.} We also explore an objective variable model that uses a neural network~\cite{bishop1995neural} to set the mean reserve prices. 
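Concretely, the kernelized M-step in Eq.~\ref{eqn:kls} is a single regularized linear solve in the dual space. A minimal sketch, with synthetic stand-ins for the E-step expectations (sizes, degree, and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, D = 50, 3, 2
sigma, lam = 0.5, 0.1

X = rng.normal(size=(N, d))    # auction features (rows are x_i)
Ey = rng.normal(size=N)        # stand-in for E[y | z, alpha] from the E-step

# Polynomial Gram matrix K_ij = (x_i . x_j + 1)^D, without the explicit feature map.
K = (X @ X.T + 1.0) ** D

# M-step of Eq. (kls): alpha = (K/sigma^2 + lam I_N)^{-1} (1/sigma^2) E[y].
alpha = np.linalg.solve(K / sigma**2 + lam * np.eye(N), Ey / sigma**2)

# Predicted mean reserve price for each auction: f(x_i) = K_i^T alpha.
f_hat = K @ alpha
```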
We use a network with one hidden layer of $H$ units and activation function $\tanh(\cdot)$. The parameters of the neural net are the weights of the first layer and the second layer: $w = \{w^{(1)} \in \mathbb{R}^{H \times d}, w^{(2)} \in \mathbb{R}^{1\times H}\}$. The mean of the reserve price is \begin{align}\label{eqn:nn} f(x_i;w) = w^{(2)}(\tanh(w^{(1)}x_i)). \end{align} The M-step is no longer analytic; instead, the network is trained using stochastic gradient methods. \section{Empirical Study} \label{sec:experiments} \begin{table}[t] \centering \footnotesize \caption{The performance of the EM algorithms from Sec. \ref{sec:SPAWR} (OV Regression, OV Kernel Regression with degree $2$ and $4$, OV Neural Networks) against the current state of the art (DC~\cite{mohri2014learning} and NoF~\cite{cesa2013regret}). We report results in terms of percentage of maximum possible revenue (computed by an oracle that knows the highest bid in advance). For each data set, we report mean and standard error aggregated from ten train/validation/test splits. Our methods outperform the existing methods on all data.} \label{tab:results} \begin{tabular}{| c || c | c | c | c || c | c | } \hline &OV Reg & OV Kern (2) & OV Kern (4) & OV NN &DC~\cite{mohri2014learning} & NoF~\cite{cesa2013regret} \\ \hline\hline Linear Sim. &$\mathbf{81.4\pm0.2}$&$81.2\pm0.2$&$78.2\pm0.6$&$72.2\pm1.5$&$80.3\pm0.3$&$49.9\pm0.1$\\ \hline Nonlinear Sim. &$50.3\pm0.3$&$66.2\pm0.4$&$\mathbf{70.1\pm0.6}$&$63.7\pm2.9$&$59.4\pm2.0$&$49.9\pm0.2$\\ \hline eBay (s) &$61.0\pm0.7$&$63.7\pm3.0$&$63.4\pm2.8$&$\mathbf{74.4\pm1.1}$&$59.5\pm1.1$&$55.8\pm0.3$\\ \hline eBay (L) &$62.4\pm0.2$& - & - &$\mathbf{84.0\pm0.2}$& - &$56.0\pm0.1$\\ \hline \end{tabular} \end{table} We studied our algorithms with two simulated data sets and a large collection of real-world auction data from eBay. 
In each study, we fit a model on a subset of the data (using a validation set to set hyperparameters) and then test how profitable we would be if we used the fitted model to set reserve prices on a held-out set. Our objective variable methods outperformed the existing state of the art. \parhead{Data sets and replications.} We evaluated our method on both simulated data and real-world data. \begin{itemize}[leftmargin=*] \item \textit{Linear simulated data.} Our simplest simulated data contains $d=5$ auction features. We drew features $x_i \sim \mathcal{N}(0,I) \in \mathbb{R}^d$ for 2,000 auctions; we drew a ground truth weight vector $\hat{w} \sim \mathcal{N}(0,I) \in \mathbb{R}^{d}$ and an intercept $\alpha \sim \mathcal{N}(0,1)$; we drew the highest bids for each auction from the regression $B_i \sim \mathcal{N}(\hat{w}^\top x_i + \alpha, 0.1)$ and set the second bids $b_i = B_i / 2$. (Data for which $B_i$ is negative are discarded and re-drawn.) We split into $N_{\text{train}} = 1000$ and $N_{\text{valid}} = N_{\text{test}}=500$. \item \textit{Nonlinear simulated data.} These data contain features $x_i$, true coefficients $\hat{w}$, and intercept $\alpha$ generated as for the linear data. We generate the highest bids by taking the absolute value of those generated by the regression and the second highest bids by halving them, as above. Taking the absolute value introduces a nonlinear relationship between features and bids. \item \textit{Data from eBay.} Our real-world data are auctions of sports collectibles from eBay.\footnote{This data set comes from http://cims.nyu.edu/~munoz/data/index.html} There are $d=74$ features. All covariates are centered and rescaled to have mean zero and standard deviation one. We analyze two data sets from eBay, one small and one large. On the small data set, the total number of auctions is $6,000$, split into $N_{\text{train}} = N_{\text{valid}} = N_{\text{test}}=2,000$. 
On the large data set the total number is 70,000, split into $N_{\text{train}} = 50,000$, and $N_{\text{valid}} =N_{\text{test}} = 10,000$. \end{itemize} In our study, we fit each method on the training data, use the validation set to decide on hyperparameters, and then evaluate the fitted predictor on the test set, i.e., compute how much revenue we make when we use it to set reserve prices. For each data set, we replicate each study ten times, each time randomly creating the training set, test set, and validation set. \parhead{Algorithms.} We describe the objective variable algorithms from Sec. \ref{sec:SPAWR}, all of which we implemented in Theano~\citep{bergstra2010scipy,bastien2012theano}, as well as the two previous methods we compare against. \begin{itemize}[leftmargin=*] \item \textit{OV Regression}. OV Regression learns a linear predictor $w$ for reserve prices using the algorithm in Sec. \ref{sec:EM}. We find a good setting for the smoothing parameter $\sigma$ and regularization parameter $\lambda$ using grid search. \item \textit{OV Kernel Regression.} OV Kernel Regression uses a polynomial kernel to predict the mean of the reserve price; we study polynomial kernels of degree 2 and 4. \item \textit{OV Neural Network.} OV Neural Network fits a neural net for predicting the reserve prices. As we discussed in Sec. \ref{sec:nonlin}, the M-step uses gradient optimization; we used stochastic gradient ascent with a constant learning rate and early stopping~\cite{prechelt2012early}. Further, we used a warm-start approach, where the next M-step is initialized with the results of the previous M-step. We set the number of hidden units to $H = 5$ for the simulated data and $H=100$ for the eBay data. We use grid search to set the smoothing parameter $\sigma$, the regularization parameters, the learning rate, the batch size, and the number of passes over the data for each M-step. 
\item \textit{Difference of Convex Functions (DC)~\cite{mohri2014learning}.} The DC algorithm finds a linear predictor of the reserve price with an iterative procedure based on DC-programming~\cite{tao1998dc}. Grid search is used on the regularization parameter as well as on the margin that selects the surrogates for the auction loss. \item \textit{No Features (NoF)~\citep{cesa2013regret}.} This is the state-of-the-art approach to setting reserve prices when auction features are not considered. The algorithm iterates over the highest bids in the training set and evaluates, on the training set, the profitability of setting all reserve prices to each such value. Ref.~\cite{mohri2014learning} gives a more efficient algorithm based on sorting. \end{itemize} \parhead{Results.} Tab. \ref{tab:results} gives the results of our study. The metric is the percentage of the highest possible revenue, where an oracle anticipates the bids and sets the reserve price to the highest bid. A trivial strategy (not reported) sets all reserve prices to zero, and thus earns the second highest bid on each auction. The algorithm using no features~\cite{cesa2013regret} does slightly better than this but not as well as the algorithms that use features. OV Regression [this paper] and DC~\cite{mohri2014learning} both fit linear mappings and exhibit similar performance. However, the DC algorithm does not scale to the large eBay data set. The nonlinear OV algorithms (OV Kernel Regression and OV Neural Networks) outperform the linear models on the nonlinear simulated data and the real-world data. Note that the kernel algorithms do not scale to the large eBay data set because working with the Gram matrix becomes infeasible as the training set gets large. OV Neural Networks significantly outperforms the existing methods on the real-world data. This is a viable solution for maximizing profit from historical auction data. 
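For reference, the NoF baseline can be stated in a few lines. The naive version below is quadratic in the number of auctions (the sorting-based variant of Ref.~\cite{mohri2014learning} is faster), and the second-price revenue function is our assumed form:

```python
import numpy as np

def revenue(y, B, b):
    """Assumed second-price revenue with reserve y: b if y<=b, y if b<y<=B, else 0."""
    return np.where(y <= b, b, np.where(y <= B, y, 0.0))

def no_features_reserve(B, b):
    """NoF baseline: pick the single reserve price, among observed highest bids,
    that maximizes total training revenue."""
    candidates = np.unique(B)
    totals = [revenue(y, B, b).sum() for y in candidates]
    i = int(np.argmax(totals))
    return candidates[i], totals[i]

B = np.array([3.0, 2.0, 5.0])   # highest bids
b = np.array([1.0, 1.0, 3.0])   # second-highest bids
y_star, rev = no_features_reserve(B, b)
```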
\section{Summary and Discussion} We developed the objective variable framework for combining probabilistic modeling with optimal decision making. We used this method to solve the problem of how to set the reserve price in second-price auctions. Our algorithms scaled better and outperformed the current state of the art on both simulated and real-world data.
\section{Introduction} \label{sec:introduction-2} At the end of the last century, astronomical observations of high-redshift type Ia supernovae (SNIa) indicated that our universe is not only expanding but also accelerating, which conflicts with our deepest intuition of gravity. Together with other observations, such as the cosmic microwave background radiation (CMBR), baryon acoustic oscillations (BAO) and large-scale structure (LSS), these results led physicists to propose a new standard cosmological model, $\Lambda$CDM, which reintroduces the cosmological constant. Although this unknown energy component accounts for 73\% of the energy density of the universe, its measured value is too small to be explained by any current fundamental theory.\cite{Peebles}-\cite{Hao} If one tries to resolve this problem phenomenologically by setting the cosmological constant to a particular value, the so-called fine-tuning problem arises, which is a basic problem that almost any cosmological model encounters. A good model should restrict the fine-tuning as much as possible. To alleviate this problem, various alternative theories have been proposed and developed in recent years, such as dynamical dark energy, modified gravity theories and even inhomogeneous universes. Recently, a new attempt, called torsion cosmology, has attracted researchers' attention; it introduces dynamical torsion to mimic the contribution of the cosmological constant. It seems more natural to use a pure geometric quantity to account for the cosmic acceleration than to introduce an exotic energy component. Torsion cosmology can be traced back to the 1970s, and the early work mainly focused on issues of the early universe, such as singularity avoidance and the origin of inflation. In some recent work, researchers have attempted to extend the investigation to the current evolution of the universe and found that torsion might account for the cosmic acceleration. 
Among these models, Poincar\'e gauge theory (PGT) cosmology is the one that has been investigated most widely. This model is based on PGT, which is inspired by Einstein's special relativity and the localization of the global Poincar\'e symmetry\cite{PGT}. Goenner \textit{et al.} made a comprehensive survey of torsion cosmology and developed the equations for all the PGT cases.\cite{Goenner} Based on Goenner's work, Nester and his collaborators\cite{Nester} found that dynamical scalar torsion could be a possible origin of the accelerating expansion. Li \textit{et al.}\cite{Sun} extended the investigation to the late-time evolution, which shows us the fate of our universe. Besides PGT cosmology, there is another torsion cosmology, de Sitter gauge theory (dSGT) cosmology, which may also provide a possible explanation of the accelerating expansion. This cosmological model is based on de Sitter gauge theory, in which gravity is introduced as a gauge field from de Sitter invariant special relativity (dSSR), via the localization of the de Sitter symmetry.\cite{Guo} dSSR is a special relativity theory on de Sitter space rather than on the conventional Minkowski spacetime; de Sitter space is another maximally symmetric spacetime, with a uniform curvature characterized by the radius $R$. The full symmetry group of this space is the de Sitter group, which unifies the Lorentz group and the translation group, putting the spacetime symmetries in an alternatively interesting way. In the limit $R\rightarrow \infty$, the de Sitter group degenerates to the Poincar\'e group. Localizing this global de Sitter symmetry requires us to introduce certain gauge potentials, which are found to represent the gravitational interaction. The gauge potential of de Sitter gauge theory is the de Sitter connection, which combines the Lorentz connection and the orthonormal tetrad, valued in the $\mathfrak{so}(1,4)$ algebra. The gravitational action of dSGT takes the form of a Yang-Mills gauge theory. 
Varying the action with respect to the orthonormal tetrad and the Lorentz connection, one obtains the Einstein-like equations and the gauge-like equations, respectively. These equations comprise a set of complicated nonlinear equations, which are difficult to tackle. Nevertheless, if we apply them to the homogeneous and isotropic universe, they become much simpler and tractable. Based on these equations, one could construct an alternative cosmological model with torsion. Analogous to PGT, dSGT has also been applied to cosmology recently to explain the accelerating expansion.\cite{Chaoguang} The main motivations of this paper are to investigate (i) whether the cosmological model based on de Sitter gauge theory can explain the cosmic acceleration; (ii) where we are going, i.e., what the fate of our universe is; and (iii) the constraints imposed on the parameters of the model by comparison with observational data. By some analytical and numerical calculations, we find that, for a wide range of initial values, this model can account for the current status of the universe, an accelerating expansion, and that the universe will enter an exponential expansion phase in the end. This paper is organized as follows: First, we summarize de Sitter gauge theory briefly in Sec.~\ref{sec:de-sitter-gauge}, and then present the cosmological model based on de Sitter gauge theory in Sec.~\ref{sec:cosm-evol-equat}. Second, we rewrite the dynamical equations as an autonomous system and carry out dynamical analysis and numerical discussions of this system in Secs.~\ref{sec:autonomous-system} and~\ref{sec:numer-demonstr}. Next, in Sec.~\ref{sec:supern-data-fitt}, we compare the cosmological solutions to the SNIa data and constrain the parameters. Finally, we discuss and summarize the implications of our findings in Sec.~\ref{sec:summary-conclusion}. 
\section{de Sitter Gauge Theory of Gravitation} \label{sec:de-sitter-gauge} In dSGT, the de Sitter connection is introduced as the gauge potential, which takes the form \begin{equation} (\check {B}^{AB}_{\ \ \ {\mu}})=\left( \begin{array}{cc} B^{ab}_{~~{\mu}} & R^{-1} e^a_\mu\\[0.1cm] -R^{-1}e^b_\mu &0 \end{array} \right ) \in \mathfrak{so}(1,4), \end{equation} where $\check{B}^{AB}_{\ \ \ \mu}=\eta^{BC}\check{B}^A_{~C\mu}$ and $\eta^{AB}=\rm{diag}(1,-1,-1,-1,-1)$; it combines the Lorentz connection and the orthonormal tetrad \footnote{In this paper, the Greek indices, $\mu,\nu,...,$ are 4D coordinate indices, whereas the capital Latin indices $A,B,C,...,$ and the lowercase Latin indices, $a,b,...,$ denote 5D and 4D orthonormal tetrad indices, respectively.} . The associated field strength is the curvature of this connection, which is defined as \begin{eqnarray} \label{eq:curvature} {\check {\cal F}}_{\mu\nu}= ( \check{\cal F}^{AB}_{~~~\mu\nu}) =\left( \begin{array}{cc} F^{ab}_{~~\mu\nu} + R^{-2}e^{ab}_{~~ \mu\nu} & R^{-1} T^a_{~\mu\nu}\\[0.1cm] -R^{-1}T^b_{~\mu\nu} &0 \end{array} \right )\in \mathfrak{so}(1,4), \end{eqnarray} where $e^{ab}_{~~\mu\nu}=e^a_\mu e^b_\nu-e^a_\nu e^b_\mu$, $e_{a\mu}=\eta_{ab}e^b_\mu$, $R$ is the de Sitter radius, and $ F^{ab}_{~~ \mu\nu}$ and $ T^a_{~\mu\nu}$ are the curvature and torsion of the Lorentz connection, \begin{eqnarray} T^a_{~\mu\nu}&=&\partial_\mu e^a_\nu-\partial_\nu e^a_ \mu+B^a_{~c \mu}e^c_\nu-B^a_{~c \nu}e^c_\mu,\\ F^a_{~b \mu\nu}&=&\partial_\mu B^a_{~b\nu} -\partial_ \nu B^a_{~b\mu}+B^a_{~c\mu}B^c_{~b \nu}-B^a_{~c\nu}B^c_{~b\mu}, \end{eqnarray} which also satisfy the respective Bianchi identities. 
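For completeness, these identities can be stated explicitly. Writing the coframe 1-forms as $e^a = e^a_{\mu}\,\mathrm{d}x^\mu$ and letting $\mathrm{D}$ denote the exterior covariant derivative with respect to the Lorentz connection, they take the standard Cartan form
\begin{eqnarray}
\mathrm{D}T^a &\equiv& \mathrm{d}T^a + B^a_{~b}\wedge T^b = F^a_{~b}\wedge e^b, \nonumber\\
\mathrm{D}F^a_{~b} &\equiv& \mathrm{d}F^a_{~b} + B^a_{~c}\wedge F^c_{~b} - F^a_{~c}\wedge B^c_{~b} = 0,
\end{eqnarray}
where $T^a=\frac{1}{2}T^a_{~\mu\nu}\,\mathrm{d}x^\mu\wedge\mathrm{d}x^\nu$ and $F^a_{~b}=\frac{1}{2}F^a_{~b\mu\nu}\,\mathrm{d}x^\mu\wedge\mathrm{d}x^\nu$.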
The gauge-like action of gravitational fields in dSGT takes the form, \cite{Chaoguang} \begin{eqnarray} S_{\rm G}&=&\frac{\hbar}{4g^2}\int_{\cal M}d^4 x e {\bf Tr}_{dS}(\check{\cal F}_{\mu\nu}\check{\cal F}^{\mu\nu})\\ &=& -\int_{\cal M}d^4 x e \left[ \frac{\hbar}{4g^{2}} F^{ab}_{~~\mu\nu}F_{ab}^{~~\mu\nu}- \chi \left (F-\frac{6}{R^{2}}\right) - \frac{\chi}{2} T^a_{~\mu\nu}T_a^{~\mu\nu} \right].\label{GYM} \end{eqnarray} Here, $e=\det(e^a_\mu)$, $g$ is a dimensionless constant describing the self-interaction of the gauge field, $\chi$ is a dimensional coupling constant related to $g$ and $R$, and $F=-\frac{1}{2}F^{ab}_{\ \mu\nu}e_{ab}^{\ \mu\nu}$ is the scalar curvature of the Cartan connection. In order to be consistent with Einstein-Cartan theory, we take $\chi=1/(16\pi G)$ and $\hbar g^{-2}=3\chi \Lambda^{-1}$, where $\Lambda=3/R^{2}$. Assuming that the matter is minimally coupled to gravitational fields, the total action of dSGT could be written as: \begin{equation}\label{totalaction} S_T=S_{G}+S_M, \end{equation} where $S_{M}$ denotes the action of matter, namely the gravitational source. 
Now we can obtain the field equations via the variational principle with respect to $e^{a}_{\mu}, B^{ab}_{\ \ \mu}$, \begin{eqnarray}\label{FEQ1}% &&\nabla_{\nu}T_{a}^{~\mu\nu } -F_{~a}^\mu+\frac{1}{2}F e_a^\mu -\Lambda e^{\mu}_{a}- \frac{8\pi G\hbar}{g^{2}}\left(e_{a}^\kappa {\rm Tr}(F^{\mu \lambda}F_{\kappa \lambda})-\frac{1}{4}e_a^\mu {\rm Tr}(F^{\lambda \sigma} F_{\lambda \sigma})\right)\nonumber\\ && -16\pi G \chi\left(e_a^\kappa T_b^{~\mu\lambda}T^{b}_{~\kappa\lambda}-\frac{1}{4}e_a^\mu T_b^{~\lambda\sigma}T^b_{~\lambda\sigma}\right)=8\pi GT_{Ma}^{~\mu}\\[0.2cm] \label{FEQ2}% &&\nabla_{\nu}F_{ab}^{~~\mu\nu}-R^{-2}\left(Y^\mu_{~\,\lambda\nu} e_{ab}^{~~\lambda\nu}+Y ^\nu_{~\, \lambda\nu } e_{ab}^{~~\mu\lambda} +2T_{[a}^{~\mu\lambda} e_{b]\lambda}\right) = 16\pi GR^{-2}S^{\quad \mu}_{{\rm M}ab},% \end{eqnarray} where\begin{eqnarray} T_{M a}^\mu :=-\frac{1}{e}\frac{\delta S_{M}}{\delta e^{a}_{\mu}}, \qquad S_{M ab}^{\mu} :=\frac{1}{2\sqrt{-g}} \frac{\delta S_{M}}{\delta B^{ab}_{\mu}}, \end{eqnarray} represent the effective energy-momentum density and spin density of the source, respectively, and \begin{eqnarray} Y^{\lambda}_{~\,\mu\nu} := \frac{1}{2} (T^\lambda _{\ \,\nu\mu}+T^{\ \lambda} _{\mu \ \,\nu}+T^{\ \lambda} _{\nu \ \,\mu}), \end{eqnarray} is the contorsion. It is worth noting that the Nabla operator in Eqs.~(\ref{FEQ1}) and (\ref{FEQ2}) is the covariant derivative compatible with the Christoffel symbols $\{^{\mu}_{\nu\kappa}\}$ for coordinate indices, and with the Lorentz connection $B_{b\mu}^a$ for orthonormal tetrad indices. Readers are referred to Ref.~\cite{Chaoguang} for more details on dSGT. 
\section{The Cosmological Evolution Equations} \label{sec:cosm-evol-equat} Since current observations favor a homogeneous, isotropic universe, we work here with a Robertson-Walker (RW) metric \begin{equation} \label{eq:RW} \mathrm{d}s^{2}=\mathrm{d}t^{2}-a (t)^{2} \left[ \frac{\mathrm{d}r^{2}}{1-kr^{2}}+r^{2}(\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2})\right]. \end{equation} For the RW metric, the nonvanishing torsion tensor components take the form \footnote{Here, the Latin indices i, j, k..., are 3D orthonormal tetrad indices with range 1, 2, 3.}, \begin{eqnarray} \label{eq:torsion-components} \label{torsion} T^{i}_{j0}(t)=T_{+}(t)~\delta^{i}_{j}, \quad T^{i}_{jk}(t)=T_{-}(t)~\epsilon^{i}_{~jk}, \end{eqnarray} where $T_{+}$ denotes the vector piece of torsion, namely, in components, the trace of the torsion, and $T_{-}$ denotes the axial-vector piece of torsion, which corresponds in components to the totally antisymmetric part of the torsion. $T_{+}$ and $T_{-}$ are both functions of time $t$, and their subscripts, $+$ and $-$, denote even and odd parity, respectively. The nonvanishing torsion 2-forms in this case are \begin{eqnarray} {\bf T}^0 &=& 0 \nonumber \\ {\bf T}^1 &=& {T_+}\, {\vartheta}^0\wedge {\vartheta}^1 + {T_-}\, {\vartheta}^2\wedge {\vartheta}^3\nonumber \\ {\bf T}^2 &=& {T_+}\, {\vartheta}^0\wedge {\vartheta}^2 + {T_-}\, {\vartheta}^3\wedge {\vartheta}^1 \\ {\bf T}^3 &=& {T_+}\, {\vartheta}^0\wedge {\vartheta}^3 + {T_-}\, {\vartheta}^1\wedge {\vartheta}^2 , \nonumber \end{eqnarray} where $\vartheta^{0}=\mathrm{d} t,\ \vartheta^{1}=\frac{a(t)\mathrm{d} r}{\sqrt{1-k r^2}},\ \vartheta^{2}=a(t)r\mathrm{d} \theta$ and $\vartheta^{3}=a(t)r\sin\theta\, \mathrm{d} \phi $. 
According to the RW metric Eq.\eqref{eq:RW} and the torsion Eq.~\eqref{torsion}, the field equations could be reduced to \begin{eqnarray} \label{el-00}% && - \frac {\ddot a^2} {a^2} - \left(\dot T_++ 2\frac{\dot a}{ a} T_+ -2\frac {\ddot a}{a} \right)\dot T_+ + \frac1 4 \left(\dot T_-+2\frac{\dot a} aT_- \right)\dot T_- + T_+^4-\frac {3}{2} T_+^2 T_-^2+ \frac{1} {16} T_-^4 + \left(5 \frac{\dot a^2} {a^2}\right. \nonumber \\ && \quad \left. + 2\frac{ k} {a^2}-\frac{3}{R^2}\right) T_+^2-\frac{1}{2} \left(\frac{5}{2}\frac{\dot a^2} {a^2} + \frac{k}{a^2} -\frac{3}{R^2}\right) T_-^2 + 2 \frac {\dot a} a \left(\frac{\ddot a} {a} - 2 \frac{\dot a^2} {a^2}-2\frac{ k} {a^2} +\frac{3}{R^2}\right)T_+ - \frac{\dot a} {a} (4 T_+^2 \nonumber\\ &&\quad - 3 T_-^2)T_+ +\frac{\dot a^2}{a^2}\left( \frac {\dot a^2}{a^2} + 2 \frac{k} {a^2}- \frac{2}{R^2}\right) +\frac{k^2}{a^4} - \frac{2} {R^2} \frac{k}{a^2} +\frac{ 2}{R^4}=-\frac{16\pi G\rho}{3 R^2}, \\[0.3cm] \label{el-11}% &&\frac{\ddot a^2} {a^2} + \left(\dot T_+ + 2\frac{\dot a} a T_+ - 2\frac{\ddot a} a + \frac{6}{R^2}\right)\dot T_+ -\frac 1 4 \left(\dot T_- + 2 \frac {\dot a} a T_-\right)\dot T_- - T_+^4 + \frac 3 2 T_+^2 T_-^2 - \frac1 {16} T_-^4 \nonumber\\ &&\quad+ \frac {\dot a} a(4 T_+^2 - 3 T_-^2)T_+ - \left(5\frac{\dot a^2} {a^2} + 2 \frac k {a^2} + \frac3 {R^2}\right) T_+^2+ \frac 1 2 \left(\frac 5 2\frac{\dot a^2} {a^2} + \frac k {a^2} + \frac 3 {R^2}\right) T_-^2- 2\frac{\dot a} a \left(\frac{\ddot a } {a}- 2\frac{\dot a^2} {a^2 }\right.\nonumber \\[0.2cm] &&\quad \left. 
- 2 \frac k {a^2}- \frac6 {R^2}\right)T_+ - \frac 4 {R^2} \frac{\ddot a} a -\frac{\dot a^2} {a^2} \left(\frac{\dot a^2}{a^2} +2\frac k {a^2}\right )+ \frac2 {R^2} -\frac{k^2}{a^4} - \frac2 {R^2}\frac k {a^2} +\frac6 {R^4} = -\frac{16\pi G p}{R^2}, \\[0.3cm] \label{yang1} % &&\ddot T_- + 3 \frac{\dot a} a \dot T_- + \left( \frac 1 2 T_-^2 - 6 T_+^2 + 12 \frac {\dot a} a T_+ +\frac{\ddot a} a - 5\frac{\dot a^2}{a^2} - 2\frac k {a^2}+ \frac 6 {R^2}\right) T_-=0, \\[0.3cm] \label{yang2}% && \ddot T_+ + 3 \frac{\dot a} a \dot T_+ -\left( 2 T_+^2 -\frac 3 2 T_-^2 - 6\frac{\dot a} a T_+ -\frac {\ddot a} a + 5 \frac {\dot a^2} {a^2} + 2 \frac k {a^2}- \frac 3 {R^2}\right) T_+ - \frac 3 2 \frac{\dot a} a T_-^2-\frac{\dddot a} a - \frac{\dot a\ddot a}{a^2} \nonumber\\ && \quad+ 2\frac {\dot a^3} {a^3} + 2\frac{\dot a} a \frac k {a^2} =0, % \end{eqnarray} where Eqs.~\eqref{el-00} and \eqref{el-11} are the $(t,t)$ and $(r,r)$ components of the Einstein-like equations, respectively, and Eqs.~\eqref{yang1} and \eqref{yang2} are two independent Yang-like equations, which are derived from the $(r,\theta,\phi)$ and $(t,r,r)$ components of the Lorentz connection. The spin density of the present universe is generally thought to be very small and can be neglected; we therefore assume here that the spin density vanishes. The Bianchi identities ensure that the energy-momentum tensor is conserved, which leads to the continuity equation: \begin{equation} \label{eq:continuity} \dot{\rho}=-\frac{3\dot{a}}{a}(\rho+p). \end{equation} Equation \eqref{eq:continuity} can also be derived from Eqs.~\eqref{el-00}--\eqref{yang2}, which means that only four of Eqs.~\eqref{el-00}--\eqref{eq:continuity} are independent. Together with the equation of state (EoS) of the matter content, these four equations comprise a complete system of equations for the five variables $a(t)$, $T_+(t)$, $T_-(t)$, $\rho(t)$ and $p(t)$. 
After some algebraic and differential manipulations, these five equations can be simplified to: \begin{eqnarray} \label{eq:Hubble} \dot{H}&=&-2H^{2}-\frac{k}{a^{2}}+\frac{2}{R^{2}}+\frac{4\pi G}{3}(\rho+3p)+\frac{3}{2}\left(\dot{T}_++3H T_{+}-T_{+}^{2}+\frac{T_-^2}{2}\right),\\ \ddot{T}_+&=&-3\left(H +\frac 3 2 T_+\right)\dot{T}_+ -3T_{-}\dot{T}_- -\frac{8\pi G}{3}\frac{\mathrm{d}}{\mathrm{d}t}(\rho+3p) -\frac{3}{2}H T_{-}^{2}+\left[\frac {13} 2 ({T_+}-3 H){T_+}\right.\nonumber \\ && \left.+ 6H^2+\frac{3k}{a^2}+\frac{9T_-^2}{4}-\frac 8 {R^2}-\frac{28\pi G}{3}(\rho+3p)\right]T_+,\\ \ddot{T}_- &=&-3H\dot{T}_- -\left[-\frac{15}{2}T_{+}^{2}+\frac{33H T_{+}}{2}-6H^{2}-\frac{3k}{a^2}+\frac{8}{R^{2}}+\frac{5}{4}T^{2}_{-}+\frac{3}{2}\dot{T}_+ \right.\nonumber\\ &&\left. + \frac{4\pi G}{3}(\rho+3p)\right]T_{-},\\ \dot{\rho}&=&-3H(\rho+p),\\ \label{eq:EoS} w&=&\frac{p}{\rho}, \end{eqnarray} where $H=\dot{a}/a$ is the Hubble parameter. \section{Autonomous System} \label{sec:autonomous-system} If we rescale the variables and parameters as \begin{eqnarray} &&t\rightarrow t/l_0;\quad H\rightarrow l_0 H;\quad k\rightarrow l_0^{2}k;\quad R\rightarrow R/l_0;\nonumber \\ &&T_{\pm}\rightarrow l_0 T_{\pm};\quad \rho \rightarrow \frac{4\pi G l_{0}^2}{3 }\rho;\quad p \rightarrow \frac{4\pi G l_{0}^2}{3 }p,\label{transformation} \end{eqnarray} where $l_0=1/H_0$ is the Hubble radius in natural units, these variables and parameters become dimensionless. Under this transformation, Eqs.~\eqref{eq:Hubble}-\eqref{eq:EoS} remain unchanged except for the terms including $4\pi G\rho/3$ and $4\pi G p/3$, which change into $\rho$ and $p$, respectively. The contributions of radiation and spatial curvature in the current universe are so small that they can be neglected, so we consider here a spatially flat dust universe, whose EoS parameter is zero. 
With some further manipulation, these equations can be transformed into a set of six first-order ordinary differential equations, which form a six-dimensional autonomous system: \begin{eqnarray}\label{dia} \dot{H}&=&-2H^{2}+\frac{2}{R^{2}}+\frac{3}{2}\left(P+3H T_{+}-T_{+}^{2}+\frac{T_-^2}{2}\right)+\rho,\\ \dot P &=& -3\left(H +\frac 3 2 T_+\right)P -3T_{-}Q-\frac{3}{2}H T_{-}^{2}+ \left[\frac {13} 2 ({T_+}-3 H){T_+}+ 6H^2\right. \nonumber \\ && \left.+\frac{9T_{-}^{2}}{4}-\frac 8 {R^2} - 7\rho\right]T_+ + 6H\rho ,\\[0.2cm] \dot{T_{+}}&=&P,\\ \dot Q&=&-3H Q -\left(-\frac{15}{2}T_{+}^{2}+\frac{33H T_{+}}{2}-6H^{2}+\frac{8}{R^{2}}+\frac{5}{4}T^{2}_{-}+\frac{3}{2}P + \rho \right)T_{-},\\ \dot{T_{-}}&=&Q,\\ \dot{\rho}&=&-3H\rho. \label{rho} \end{eqnarray} For such an autonomous system, we can use dynamical analysis to investigate its qualitative properties. Critical points are exact constant solutions of the autonomous system, which indicate the asymptotic behavior of the evolution. For example, some solutions, such as heteroclinic orbits, connect two different critical points, and some others, such as homoclinic orbits, form a closed loop starting from and returning to the same critical point. In the dynamical analysis of cosmology, heteroclinic orbits are the more interesting case.\cite{Zhao} Thus, critical points can be treated as the basic tool of dynamical analysis, from which one can read off the qualitative properties of the autonomous system. By some algebraic calculation, we find all nine critical points ($H_{c},\ P_{c},\ Q_{c},\ T_{+c}, \ T_{-c}, \rho_{c}$) of this system, as shown in Table~\ref{tab:critical-points}. 
\begin{table}[t] \centering \begin{tabular}{| p{0.8cm} p{3.5cm} p{7.2cm} | }\hline &Critical Points & Eigenvalues \\ \hline &&\\[-0.3cm] (i)&$(\frac{1}{R},0,0,0,0,0)$& $-\frac{1}{R},-\frac{1}{R},-\frac{2}{R},-\frac{2}{R},-\frac{3}{R},-\frac{4}{R}$ \\[0.12cm] (ii)&$(-\frac{1}{R},0,0,0,0,0)$& $\frac{1}{R},\frac{1}{R},\frac{2}{R},\frac{2}{R},\frac{3}{R},\frac{4}{R}$ \\[0.12cm] (iii)&$(-\frac{1}{2R},0,0,-\frac{2}{R},0,0)$& $-\frac{2}{R},\frac{2}{R},\frac{3}{2R},-\frac{5}{2R},\frac{7}{2 R},\frac{4}{R}$ \\[0.12cm] (iv)&$(\frac{1}{2R},0,0,\frac{2}{R},0,0)$& $-\frac{2}{R},\frac{2}{R},\frac{5}{2R},-\frac{3}{2R},-\frac{4}{R},-\frac{7}{2R}$ \\[0.12cm] (v)&$(-\frac{1}{2R},0,0,\frac{1}{2R},0,0)$& $\frac{1}{2R},\frac{1}{R},-\frac{1}{R},\frac{2}{R},\frac{3}{2R},\frac{5}{2R}$ \\[0.12cm] (vi)&$(\frac{1}{2R},0,0,-\frac{1}{2R},0,0)$& $-\frac{1}{2R},\frac{1}{R},-\frac{1}{R},-\frac{3}{2R},-\frac{5}{2R},-\frac{2}{R}$ \\[0.12cm] (vii)&$(0,0,0,-\frac{\sqrt{3/2}}{R},0,\frac{1}{4R^{2}})$&$-\frac{\sqrt{3}}{\sqrt{2}R}, \frac{\sqrt{3}}{\sqrt{2}R}, -\frac{\sqrt{3}}{R}, \frac{\sqrt{3}}{R}, -\frac{\sqrt{6}}{R}, \frac{\sqrt{6}}{R} $ \\[0.12cm] (viii)&$(0,0,0,\frac{\sqrt{3/2}}{R},0,\frac{1}{4R^{2}})$&$-\frac{\sqrt{3}}{\sqrt{2}R}, \frac{\sqrt{3}}{\sqrt{2}R}, -\frac{\sqrt{3}}{R}, \frac{\sqrt{3}}{R}, -\frac{\sqrt{6}}{R}, \frac{\sqrt{6}}{R} $ \\[0.2cm] (ix)&$(0,0,0,0,0,\frac{-2}{R^{2}})$ & $-\frac{\sqrt{3}-3\rm{i}}{\sqrt{2}R}, \frac{\sqrt{3}-3\rm{i}}{\sqrt{2}R}, -\frac{\sqrt{3}+3\rm{i}}{\sqrt{2}R}, \frac{\sqrt{3}+3\rm{i}}{\sqrt{2}R}, -\frac{\rm{i}\sqrt{6}}{R}, \frac{\rm{i}\sqrt{6}}{R} $ \\[0.10cm] \hline \end{tabular} \caption{ \label{tab:critical-points}The critical points and their corresponding eigenvalues. Point (ix) is not physically acceptable because of its negative energy density.} \end{table} Furthermore, we analyze the stabilities of these critical points by means of first-order perturbations. 
Substituting these linear perturbations into the dynamical equations, we obtain the perturbation equations around the critical points, i.e. \begin{align} \delta \dot{\boldsymbol{x}} = A\, \delta\boldsymbol{x}, & \quad A = \frac{\partial \boldsymbol{f}}{\partial \boldsymbol {x}}\Big|_{\boldsymbol{x}=\boldsymbol{x}_c}, \end{align} where $\boldsymbol{x}$ denotes the six variables of this autonomous system and $\boldsymbol{f}$ denotes the corresponding vector function on the right-hand side of Eqs.~\eqref{dia}-\eqref{rho}. Using the eigenvalues of the coefficient matrix $A$, we can analyze the stabilities of these critical points. The classification of these critical points is shown in Table~\ref{tab:stabilities}. Among these critical points, there is only one positive attractor, point (i), whose eigenvalues are all negative, and only one negative attractor, point (ii), whose eigenvalues are all positive. The negative attractor works as a source, from which the phase orbits start off, whereas the positive attractor works as a sink, which the orbits finally approach. It is the heteroclinic line that connects the positive attractor and the negative attractor, as shown in Fig.~\ref{fig:heteroclinic}. Positive attractors are stable exact solutions, describing the infinite-future behavior of the evolution, while the unstable negative attractors depict the infinite past. Therefore the positive attractor, point (i), shows us the picture of the late-time universe, where all quantities tend to zero, except the Hubble parameter, which approaches a finite value. At that time, the whole universe is entering an exponential expansion phase, just like in the $\Lambda$CDM model.
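As an independent check of the eigenvalues listed in Table~\ref{tab:critical-points}, the coefficient matrix $A$ can be built numerically by finite differences and diagonalized. The sketch below (Python; the choice $R=1$ and the variable ordering $(H,P,T_+,Q,T_-,\rho)$ are illustrative assumptions) evaluates the Jacobian of the right-hand side of Eqs.~\eqref{dia}-\eqref{rho} at critical point (i):

```python
import numpy as np

R = 1.0  # de Sitter radius (illustrative choice)

def f(x):
    """Right-hand side of the autonomous system; x = (H, P, T+, Q, T-, rho)."""
    H, P, Tp, Q, Tm, rho = x
    dH = -2*H**2 + 2/R**2 + 1.5*(P + 3*H*Tp - Tp**2 + Tm**2/2) + rho
    dP = (-3*(H + 1.5*Tp)*P - 3*Tm*Q - 1.5*H*Tm**2
          + (6.5*(Tp - 3*H)*Tp + 6*H**2 + 2.25*Tm**2 - 8/R**2 - 7*rho)*Tp
          + 6*H*rho)
    dQ = -3*H*Q - (-7.5*Tp**2 + 16.5*H*Tp - 6*H**2 + 8/R**2
                   + 1.25*Tm**2 + 1.5*P + rho)*Tm
    return np.array([dH, dP, P, dQ, Q, -3*H*rho])

def jacobian(f, x, eps=1e-6):
    """Central finite-difference Jacobian A = df/dx evaluated at x."""
    A = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        e = np.zeros(len(x)); e[j] = eps
        A[:, j] = (f(x + e) - f(x - e)) / (2*eps)
    return A

xc = np.array([1/R, 0, 0, 0, 0, 0])        # critical point (i)
assert np.allclose(f(xc), 0.0)             # verify that it is a fixed point
eigs = np.sort(np.linalg.eigvals(jacobian(f, xc)).real)
# for R = 1 this reproduces the entry for point (i): -4, -3, -2, -2, -1, -1
```

Since all six eigenvalues are negative, point (i) is confirmed as the stable positive attractor.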
\begin{table} \centering \begin{tabular}{|p{2.5cm}p{3.5cm}p{1.5cm}|}\hline Critical Points& Property& Stability \\ \hline (i)& Positive-Attractor& Stable\\[0.05cm] (ii)& Negative-Attractor& Unstable\\[0.05cm] (iii)& Saddle & Unstable\\[0.05cm] (iv)& Saddle & Unstable\\[0.05cm] (v)& Saddle & Unstable\\[0.05cm] (vi)& Saddle & Unstable\\[0.05cm] (vii)& Saddle& Unstable\\[0.05cm] (viii)& Saddle& Unstable\\[0.05cm] (ix)& Spiral-Saddle & Unstable\\[0.05cm] \hline \end{tabular} \caption{ \label{tab:stabilities}The stability properties of the critical points} \end{table} \begin{figure} \centering \includegraphics[width=8cm,height=6.5cm]{heteroclinic.eps} \caption{ \label{fig:heteroclinic}The $(H,T_+,\rho)$ section of the phase diagram with $R=4/3$. The heteroclinic orbits connect the critical points (i) and (ii).} \end{figure} \section{Numerical Demonstration} \label{sec:numer-demonstr} In order to confirm these qualitative results derived from the dynamics analysis and to better understand the global properties of this model, we explore the autonomous system by numerical methods. To solve Eqs.~\eqref{dia}-\eqref{rho} numerically, we choose some generic initial conditions and parameters, as shown in Table \ref{tab:numerical-demonstrations}. \begin{table}[h] \centering \begin{tabular}{|p{1.2cm} p{1.cm} p{1.cm}p{1.cm}p{1.cm}p{1.cm}p{1.cm}p{0.5cm}|}\hline Case& $R$ &$H_{0}$ &$P_{0}$ &$Q_{0}$&$T_{+0}$&$T_{-0}$ &$\rho_{0}$\\ \hline &&&&&&&\\[-0.25cm] (a.1)& 1.5 &1&0 &0&0&0&0.5\\ (a.2)& 1.5 &1&0 &-0.5&-0.5&0&1\\ (a.3)& 1.5 &1&-0.75 &-1&2&1.2&0.7\\ \hline &&&&&&&\\[-0.25cm] (b.1)&0.4 &1&0 &0&-1.5&0&0.8\\ (b.2)&0.6 &1&0 &0&-1&0&1\\ (b.3)&1.5 &1&0 &0&0&0&0.5\\ \hline \end{tabular} \caption{ \label{tab:numerical-demonstrations}The values of initial conditions and parameters for the evolution curves in Fig.~\ref{fig:vtvr}. } \end{table} First, we vary the initial conditions $(P_{0} , Q_{0},T_{-0},T_{+0},\rho_{0})$ with a fixed de Sitter radius, and the results are shown in Fig.~\ref{fig:vtvr}(a).
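These evolution curves can be reproduced with a simple fixed-step integrator. The sketch below (Python; the right-hand side $f$ is repeated so the snippet is self-contained, and the step size and integration time are illustrative choices) integrates case (a.1) of Table~\ref{tab:numerical-demonstrations} and shows $H$ approaching the attractor value $1/R$:

```python
import numpy as np

R = 1.5  # de Sitter radius, case (a.1)

def f(x):
    """RHS of the autonomous system; x = (H, P, T+, Q, T-, rho)."""
    H, P, Tp, Q, Tm, rho = x
    return np.array([
        -2*H**2 + 2/R**2 + 1.5*(P + 3*H*Tp - Tp**2 + Tm**2/2) + rho,
        (-3*(H + 1.5*Tp)*P - 3*Tm*Q - 1.5*H*Tm**2
         + (6.5*(Tp - 3*H)*Tp + 6*H**2 + 2.25*Tm**2 - 8/R**2 - 7*rho)*Tp
         + 6*H*rho),
        P,
        -3*H*Q - (-7.5*Tp**2 + 16.5*H*Tp - 6*H**2 + 8/R**2
                  + 1.25*Tm**2 + 1.5*P + rho)*Tm,
        Q,
        -3*H*rho,
    ])

def rk4(x, dt, steps):
    """Classical fixed-step fourth-order Runge-Kutta integrator."""
    for _ in range(steps):
        k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
        x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return x

# case (a.1): H0 = 1, rho0 = 0.5, all torsion variables initially zero
x = rk4(np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.5]), dt=0.005, steps=16000)
# H = x[0] tends to 1/R = 2/3, the de Sitter attractor (i); all other
# variables decay to zero
```

This reproduces the qualitative behavior of the evolution curves for case (a.1).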
Then we change the de Sitter radius, and show the results in Fig.~\ref{fig:vtvr}(b). Because of the rescaling transformations \eqref{transformation}, the current Hubble parameter here must be 1. From these numerical results, it is easy to see that the Hubble parameter of all the solutions approaches a particular finite value in the infinite future, whatever the initial conditions are, and this value depends only on the de Sitter radius $R$. These results confirm the dynamics analysis of the previous section. We also find that this positive attractor covers a wide range of initial conditions, and therefore the troublesome fine-tuning problem is alleviated. In comparison with the results of PGT, we find the cosmology based on de Sitter gauge theory quite different from that of the Poincar\'e gauge theory, where the expansion asymptotically comes to a halt. It is the existence of the de Sitter radius that causes this discrepancy. If we let $R\rightarrow\infty$, the de Sitter gauge theory degenerates to the PGT. \begin{figure}[t] \centering \includegraphics[width=7.2cm,height=5.5cm]{vt.eps} \includegraphics[width=7.2cm,height=5.5cm]{vr.eps} \caption{ \label{fig:vtvr}The evolution of the Hubble parameter $H$ with respect to some initial values and parameter choices $(R,H_0,P_0,Q_0,T_{+0},$ $T_{-0},\rho_0)$. According to the transformations (\ref{transformation}), the unit of time here is the Hubble time. In Fig.~\ref{fig:vtvr}(a), we fixed $R$ and changed $T_\pm$, while in Fig.~\ref{fig:vtvr}(b), we changed $R$.} \end{figure} \section{Supernovae Data Fitting} \label{sec:supern-data-fitt} A basic approach to testing a cosmological model is supernova fitting through its description of the expansion history of the universe. In this section we fit the initial conditions and model parameters to current type Ia supernova data. The maximum likelihood technique is used here, which determines the best-fit values of the parameters and initial conditions and the goodness of fit of this model.
The supernova data comprise the distance modulus $\mu_{obs}$, which is equal to the difference between the apparent magnitude $m_{i}$ and absolute magnitude $M_{i}$, and the redshifts $z_{i}$ of supernovae with their corresponding errors $\sigma_{i}$. Note that the errors here are assumed to be normally distributed and independent. The theoretical distance modulus is related to the luminosity distance $d_{L}$ by \begin{eqnarray} \label{eq:modulus} \mu_{th}(z_{i})&=&5 \log_{10}\left(\frac{d_{L}(z_{i})}{\mathrm{Mpc}}\right)+25 \nonumber\\ &=&5 \log_{10}D_{L}(z_{i})-5\log_{10}\left( \frac{c H_{0}^{-1}}{\mathrm{Mpc}} \right)+25 \nonumber\\[0.12cm] &=& 5 \log_{10}D_{L}(z_{i})-5\log_{10}h +42.38, \end{eqnarray} where $D_{L}(z)$ is the dimensionless ``Hubble-constant free'' luminosity distance defined by $D_{L}(z)=H_{0}d_{L}(z)/c$. For a spatially flat cosmological model, which we consider here, the luminosity distance can be expressed in terms of the Hubble parameter $H(z)$, as follows, \begin{eqnarray} \label{eq:dL} D_{L}(z)&=& (1+z)\int^{z}_{0} \mathrm{d}z' \frac{1}{H(z'; a_{1},...a_{n})}, \end{eqnarray} where the Hubble parameter $H(z'; a_{1},...a_{n})$ here is the dimensionless Hubble parameter under the rescaling transformation Eq.~\eqref{transformation}. Due to the normal distribution of the errors, we can use the $\chi^{2}$ statistic as the maximum likelihood estimator to determine the best-fit values of the parameters and initial conditions ($R,P_{0},Q_{0},T_{+0},T_{-0},\rho_0$) of the model. The $\chi^{2}$ here for the SNIa data is \begin{eqnarray} \label{eq:chi2} \chi^{2}(\theta)&=&\sum^{N}_{i} \frac{[\mu_{obs}(z_{i})-\mu_{th}(z_{i})]^{2}}{\sigma_{i}^{2}},\nonumber\\ &=&\sum^{N}_{i} \frac{[\mu_{obs}(z_{i})-5\log_{10}D_{L}(z_{i};\theta)-\mu_{0}]^{2}}{\sigma_{i}^{2}}, \end{eqnarray} where $\mu_{0}=-5\log_{10}h + 42.38$ , $\theta$ denotes all the parameters and initial conditions, and $\sigma_{i}$ are the statistical errors of the SNIa.
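As a concrete implementation sketch, the luminosity distance integral \eqref{eq:dL} and the $\chi^{2}$ estimator can be coded in a few lines. In the snippet below (Python) the trapezoidal rule, the mock residuals, and the notation $E(z)=H(z)/H_{0}$ are illustrative assumptions; the analytic elimination of the nuisance offset $\mu_{0}$ uses the standard identity $\min_{\mu_0}\sum_i w_i(\Delta_i-\mu_0)^2 = A - B^2/C$ with $w_i=1/\sigma_i^2$:

```python
import numpy as np

def D_L(z, E, n=2000):
    """Dimensionless luminosity distance D_L = (1+z) int_0^z dz'/E(z'),
    with E(z) = H(z)/H_0; simple trapezoidal rule."""
    zp = np.linspace(0.0, z, n)
    y = 1.0 / E(zp)
    h = zp[1] - zp[0]
    return (1 + z) * h * (y[0]/2 + y[1:-1].sum() + y[-1]/2)

def chi2_marg(delta, sigma):
    """chi^2 marginalized analytically over the offset mu_0, assuming a
    diagonal covariance; delta_i = mu_obs(z_i) - 5 log10 D_L(z_i)."""
    w = 1.0 / sigma**2
    A = np.sum(w * delta**2); B = np.sum(w * delta); C = np.sum(w)
    return A - B**2 / C + np.log(C / (2*np.pi))

# check D_L against the Einstein-de Sitter closed form E(z) = (1+z)^{3/2},
# for which D_L(z) = 2 (1+z) (1 - 1/sqrt(1+z))
E_EdS = lambda z: (1 + z)**1.5
assert abs(D_L(1.0, E_EdS) - 2*2*(1 - 1/np.sqrt(2))) < 1e-6

# check that A - B^2/C equals the minimum over mu_0 of sum_i w_i (delta_i - mu_0)^2
rng = np.random.default_rng(0)
delta = rng.normal(0.1, 0.2, 50); sigma = rng.uniform(0.1, 0.3, 50)
w = 1.0 / sigma**2
mu0_best = np.sum(w*delta) / np.sum(w)        # analytic minimizer B/C
assert abs(chi2_marg(delta, sigma)
           - (np.sum(w*(delta - mu0_best)**2)
              + np.log(np.sum(w)/(2*np.pi)))) < 1e-8
```

The closed-form Einstein-de Sitter check also gives a quick estimate of the quadrature accuracy of the distance integral.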
If we want to include the systematic errors, which are comparable to the statistical errors and should be taken into account seriously, we can resort to the covariance matrix $C_{SN}$, and Eq.~\eqref{eq:chi2} turns out to take the form \begin{eqnarray} \label{eq:chi22} \chi^{2}(\theta)&=&\sum^{N}_{i,j}\left[\mu_{obs}(z_{i})-\mu_{th}(z_{i})\right] (C_{SN}^{-1})_{i j}[\mu_{obs}(z_{j})-\mu_{th}(z_{j})],\\ &=&\sum^{N}_{i,j}\left[\mu_{obs}(z_{i})-5\log_{10}D_{L}(z_{i};\theta)-\mu_{0}\right] (C_{SN}^{-1})_{i j}[\mu_{obs}(z_{j})-5\log_{10}D_{L}(z_{j};\theta)-\mu_{0}]. \end{eqnarray} The parameter $\mu_{0}$ here is a nuisance parameter, whose contribution we are not interested in. So we marginalize over this parameter $\mu_{0}$, thus obtaining a new $\chi^2$, \begin{eqnarray} \label{eq:chi23} \chi^{2}(\theta)=A(\theta)-\frac{B(\theta)^{2}}{C}+\ln\left(\frac{C}{2\pi}\right), \end{eqnarray} where \begin{eqnarray} \label{eq:marginalization} A(\theta)&=&\sum^{N}_{i,j}\left[\mu_{obs}(z_{i})-5\log_{10}D_{L}(z_{i};\theta)\right] (C_{SN}^{-1})_{i j}[\mu_{obs}(z_{j})-5\log_{10}D_{L}(z_{j};\theta)],\\ B(\theta)&=&\sum^{N}_{i,j} (C_{SN}^{-1})_{i j}[\mu_{obs}(z_{j})-5\log_{10}D_{L}(z_{j};\theta)],\\ C&=&\sum^{N}_{i,j}(C_{SN}^{-1})_{ij}. \end{eqnarray} Now we try to constrain our model parameters and initial values by this maximum likelihood estimator. The dataset we use here is the ``Union2'' SNIa dataset (N=557), the most comprehensive one to date, which combines all the previous SNIa datasets in a homogeneous manner. By minimizing the $\chi^{2}$, we find the best-fit parameters of the dSGT model, as shown in Table~\ref{tab:best-fit}. \begin{table}[h] \centering \begin{tabular}{|p{0.8cm}p{1cm} p{1cm} p{1.cm}p{1.cm}p{1.cm}p{1.cm}p{1.2cm}|}\hline Case& $R$ &$P_{0}$ &$Q_{0}$&$T_{+0}$&$T_{-0}$ &$\rho_{0}$&$\chi^{2}$\\ \hline &&&&&&&\\[-0.25cm] I.
& 1.1005 & 0 & 0 & 0 & 0 & 0.0431 &535.3384\\ \hline \end{tabular} \caption{\label{tab:best-fit}The best-fit initial data and parameters } \end{table} Based on the current observations, the present density of torsion in our universe is very small, so it is reasonable to assume that the initial values of all torsions and their first-order derivatives are zero at $z=0$. But their second-order derivatives do not vanish, which has a significant impact on the history and future of the evolution of our universe. In this case, the number of parameters and initial values is reduced, and the remaining ones are just $R$ and $\rho_{0}$. It is easy to find the best fit of these 2 parameters, shown in Table \ref{tab:best-fit}. The minimal $\chi^{2}$ is $535.3384$, whereas the value for $\Lambda$CDM is $536.634$, with $\Omega_{m}=0.27, \Omega_{\Lambda}=0.73$. In Fig.~\ref{fig:chi2-distribution} we show the $\chi^{2}$ distribution with respect to $R$ and $\rho$ compared to the $\Lambda$CDM model, where the plane $\chi^{2}=536.634$ corresponds to the value of $\Lambda$CDM. Furthermore, we plot the contours of some particular confidence levels, as shown in Fig.~\ref{fig:contour}. From these figures, we find that the evolution of our universe is insensitive to the initial values, which alleviates the fine-tuning problem. \begin{figure}[t] \centering \includegraphics[width=17cm,height=9cm]{surf.eps} \caption{ \label{fig:chi2-distribution}The $\chi^{2}$ distribution with respect to $R$ and $\rho$, compared to the $\Lambda$CDM value, the plane $\chi^{2}=536.634$. Here we assume that all the torsions and their first-order derivatives vanish at the present time.} \end{figure} \begin{figure} \centering \includegraphics[width=15.cm,height=7cm]{contour.eps} \caption{ \label{fig:contour}The 68.3\%, 95.4\% and 99.7\% $\chi^{2}$ confidence contours of dSGT with respect to $R$ and $\rho$, using the Union 2 dataset.
Here we also assume that the current-time values of all the torsions and their first-order derivatives are zero. The yellow point is the best-fit point. } \end{figure} \section{Summary and Conclusion} \label{sec:summary-conclusion} The astronomical observations imply that our universe is accelerating towards a de Sitter spacetime. This gives us a strong motivation to consider the cosmic evolution based on the de Sitter gauge theory instead of other gravity theories. The localization of de Sitter symmetry requires us to introduce curvature and torsion. So in de Sitter gauge theory the torsion is an indispensable quantity, through which the effect of spin density was first incorporated into gravity theory. But now this essential quantity might account for the acceleration of our universe, if we apply dSGT to cosmology. We found that the cosmological equations for a dust universe in dSGT form an autonomous system under certain transformations, in which the evolution of the universe is described in terms of orbits in phase space. Therefore, by dynamics analysis of the dSGT cosmology, one can study the qualitative properties of this phase space. We found all 9 critical points, as shown in Table~\ref{tab:critical-points}. We also analyzed the stabilities of these critical points, and found that among them there is only one positive attractor, which is stable. The positive attractor alleviates the fine-tuning problem and implies that the universe will expand exponentially in the end, whereas all other physical quantities will vanish. In this sense, dSGT cosmology looks more like the $\Lambda$CDM than the PGT cosmology. We also conducted concrete numerical calculations of the destiny of the universe in this model, which confirm the conclusions from the dynamics analysis. Finally, in order to find the best-fit values and constraints of the model parameters and initial conditions, we fitted them to the Union 2 SNIa dataset.
The maximum likelihood estimator we used here is the $\chi^{2}$ estimate. By minimizing the $\chi^{2}$, we found the best-fit parameters $R=1.135,~\rho=0.274$ and the corresponding $\chi^{2}=535.3384$, while the value for $\Lambda$CDM is $536.634$, with $\Omega_{m}=0.27, \Omega_{\Lambda}=0.73$. Note that we set all the initial values of torsions and their first-order derivatives to zero at $t=t_{0}$, since the contribution of torsion to the current universe is almost negligible. We also plotted the confidence contours, Fig.~\ref{fig:contour}, with respect to $R$ and $\rho$, from which it is easy to see that the fine-tuning problem is alleviated and the evolution is not so sensitive to the initial values and model parameters. If we want to go deeper into cosmology based on de Sitter gauge theory, there is still much work to be done. We should fit this model to other observations, such as BAO and LSS, to constrain the parameters better. We could also study the perturbations in the early universe, and compare the results to CMBR data. These issues will be considered in upcoming papers. \section*{Acknowledgments}This work is supported by SRFDP under Grant No. 200931271104 and Shanghai Natural Science Foundation, China, under Grant No. 10ZR1422000.
\section{INTRODUCTION} \label{sec:introduction} The world population is aging fast. As a consequence, age-related spending is projected to rise dramatically in the coming decades in all developed countries. Increasingly, governments in the developed world realize they cannot afford to pay sufficient public pensions and are looking for innovations in the retirement income product market. In this paper we consider a variable annuity contract with Guaranteed Minimum Withdrawal Benefit (GMWB) with an option to surrender the contract before maturity. This contract promises to return the entire initial investment through cash withdrawals during the policy life plus the remaining account balance at maturity, regardless of the portfolio performance. Thus even when the account of the policyholder falls to zero before maturity, the GMWB feature will continue to provide the guaranteed cashflows. In addition, we allow the option to surrender the contract before maturity, which is a standard feature of real products on the market. GMWB allows the policyholder to withdraw funds below or at the contractual rate without penalty and above the contractual rate with some penalty. If the policyholder behaves passively and the withdrawal amount at each withdrawal date is predetermined at the beginning of the contract, then the behavior of the policyholder is called ``static''. In this case the paths of the account can be simulated and a standard Monte Carlo simulation method can be used to price the GMWB. On the other hand, if the policyholder optimally decides the withdrawal amount at each withdrawal date, then the behavior of the policyholder is called ``dynamic''. Under the optimal withdrawal strategy of a policyholder, the pricing of variable annuities with GMWB becomes an optimal stochastic control problem; adding the surrender feature makes it also an optimal stopping problem.
Variable annuities with the GMWB feature under dynamic and static withdrawal strategies have been considered in a number of papers over the last decade, e.g. \citet{milevsky2006financial}, \citet{bauer2008universal}, \citet{dai2008guaranteed}, \citet{Huang2012, Huang2014}, \citet{bacinello2011unifying}. Recently, \citet{Azimzadeh2014} prove the existence of an optimal \emph{bang-bang} control for a Guaranteed Lifelong Withdrawal Benefits (GLWB) contract. In particular, they find that the holder of a GLWB can maximize the contract writer's loss by only ever performing non-withdrawal, withdrawal at exactly the contract rate, or full surrender. This dramatically reduces the optimal strategy space. However, they also demonstrate that the related GMWB contract is not convexity preserving, and hence does not satisfy the bang-bang principle other than in certain degenerate cases. For GMWB under the optimal withdrawal assumption, the numerical algorithms developed by \citet{dai2008guaranteed} and \citet{Forsyth2008} appear to be the only ones found in the literature, and both are based on solving the corresponding partial differential equation (PDE) via the finite difference method. In the case when the transition density of the underlying wealth process between withdrawal dates, or its moments, are known in closed form, it can often be more convenient and more efficient to utilize direct integration methods to calculate the required annuity value expectations in a backward time-stepping procedure. Such an algorithm was developed in \citet{LuoShevchenkoGHQC2014, LuoShevchenkoGMWB2015} for solving the optimal stochastic control problem in pricing GMWB variable annuities. This allows one to obtain virtually instant results for typical GMWB annuity prices on a standard desktop PC. In this paper we adopt this algorithm to price variable annuities with GMWB with surrender option under static, dynamic, and simplified bang-bang withdrawal strategies.
To the best of our knowledge, there are no publications presenting results for GMWB with both optimal withdrawal and surrender features. In the next section we describe the GMWB product with discrete withdrawals, the underlying stochastic model and the optimization problem. Section \ref{algorithm_sec} describes the numerical algorithm utilized for pricing. In Section \ref{NumericalResults_sec}, numerical results for the fair fees under a series of GMWB contract conditions are presented. Concluding remarks are given in Section \ref{conclusion_sec}. \section{ Model}\label{model_sec} We assume that the market is complete in financial risk and that there is no mortality risk (in the event of the policyholder's death, the contract is maintained by the beneficiary); thus the annuity price can be expressed as an expectation under the risk-neutral process of the underlying asset. Let $S(t)$ denote the value of the reference portfolio of assets (mutual fund index, etc.) underlying the variable annuity policy at time $t$ that under the no-arbitrage condition follows the risk-neutral stochastic process \begin{equation}\label{referenceportfolio_eq} dS(t)=r(t) S(t) dt+\sigma(t) S(t) dB(t), \end{equation} where $B(t)$ is the standard Wiener process, $r(t)$ is the risk-free interest rate and $\sigma(t)$ is the volatility. For simplicity, hereafter we assume that the model parameters are piecewise constant functions of time for the time discretization $0=t_0<t_1<\cdots<t_N=T$, where $t_0=0$ is today and $T$ is the annuity contract maturity. Denote the corresponding asset values as $S(t_0),\ldots,S(t_N)$; and the risk-free interest rate and volatility as $r_1,\ldots,r_N$ and $\sigma_1,\ldots,\sigma_N$ respectively. That is, $r_1$ is the interest rate for the time period $(t_0,t_1]$; $r_2$ is for $(t_1,t_2]$, etc., and similarly for the volatility. The premium paid by the policyholder upfront at $t_0$ is invested into the reference portfolio of risky assets $S(t)$.
Denote the value of this variable annuity account (hereafter referred to as \emph{wealth account}) at time $t$ as $W(t)$, i.e. the upfront premium paid by policyholder is $W(0)$. GMWB guarantees the return of the premium via withdrawals $\gamma_n\ge 0$ allowed at times $t_n$, $n=1,\ldots,N$. Let $N_w$ denote the number of withdrawals in a year (e.g. $N_w=12$ for a monthly withdrawal), then the total number of withdrawals $N=\lceil\; N_w\times T \;\rceil$. The total of withdrawals cannot exceed the guarantee $W(0)$ and withdrawals can be different from contractual (guaranteed) withdrawal $G_n=W(0)(t_n-t_{n-1})/T$, with penalties imposed if $\gamma_n>G_n$. Denote the annual contractual rate as $g=1/T$. Denote the value of the guarantee at time $t$ as $A(t)$, hereafter referred to as \emph{guarantee account}. Obviously, $A(0)=W(0)$. For clarity of notation, denote the time immediately before $t$ (i.e. before withdrawal) as $t^-$, and immediately after $t$ (i.e. after withdrawal) as $t^+$. Then the guarantee balance evolves as \begin{equation}\label{accountbalance_eq} A(t_n^+)=A(t_n^{-})-\gamma_n=A(t^+_{n-1})-\gamma_n,\;\; n=1,2,\ldots,N \end{equation} with $A(T^+)=0$, i.e. $W(0)=A(0) \ge \gamma_1+\cdots+\gamma_N$ and $A(t_{n-1}^{+})\ge \sum_{k=n}^N\gamma_{k}$. The account balance $A(t)$ remains unchanged within the interval $(t_{n-1},\;t_n), \;n=1,2,\ldots,N$. In the case of reference portfolio process (\ref{referenceportfolio_eq}), the wealth account $W(t)$ evolves as \begin{eqnarray}\label{eq_Wt} W(t_n^-)&=&\frac{W(t_{n-1}^+)}{S(t_{n-1})}S(t_n) e^{-\alpha dt_n}= W(t_{n-1}^+)e^{(r_n-\alpha-\frac{1}{2}\sigma^2_n)dt_n+\sigma_n \sqrt{dt_n} z_n},\\ W(t_n^+)&=&\max\left(W(t_n^-)-\gamma_n,0\right),\;\; n=1,2,\ldots,N, \end{eqnarray} where $dt_n=t_n-t_{n-1}$, $z_n$ are iid standard Normal random variables and $\alpha$ is the annual fee charged by the insurance company. If the account balance becomes zero or negative, then it will stay zero till maturity. 
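The wealth account recursion above is straightforward to simulate. The sketch below (Python, with illustrative parameter values) draws one step of the evolution of $W(t)$ in (\ref{eq_Wt}) and verifies the risk-neutral drift $\mathrm{E}[W(t_n^-)]=W(t_{n-1}^+)e^{(r_n-\alpha)dt_n}$ by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(42)
r, sigma, alpha, dt = 0.05, 0.2, 0.01, 0.25   # illustrative parameters
W0, n_paths = 100.0, 200_000

# one step of the wealth account between withdrawal dates:
# W(t_n^-) = W(t_{n-1}^+) exp((r - alpha - sigma^2/2) dt + sigma sqrt(dt) z)
z = rng.standard_normal(n_paths)
W1 = W0 * np.exp((r - alpha - 0.5*sigma**2)*dt + sigma*np.sqrt(dt)*z)

# risk-neutral drift check: E[W(t_n^-)] = W0 exp((r - alpha) dt)
assert abs(W1.mean() - W0*np.exp((r - alpha)*dt)) < 0.15
```

Under a static strategy, repeating this step and subtracting the fixed withdrawals reproduces the standard Monte Carlo pricing approach mentioned in the introduction.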
The cashflow received by the policyholder at withdrawal time $t_n$ is given by \begin{equation} C_n(\gamma_n)=\left\{\begin{array}{ll} \gamma_n, & \mbox{if}\; 0\le \gamma_n\le G_n, \\ G_n+(1-\beta)(\gamma_n-G_n), & \mbox{if}\; \gamma_n>G_n, \end{array} \right. \end{equation} where $G_n$ is the contractual withdrawal. That is, a penalty is applied if the withdrawal $\gamma_n$ exceeds $G_n$, i.e. $\beta\in [0,1]$ is the penalty applied to the portion of the withdrawal above $G_n$. If the policyholder decides to surrender at time slice $\tau\in\{1,\ldots,{N-1}\}$, then the policyholder receives the cashflow $D_\tau(W(t_\tau),A(t_\tau))$ and the contract stops. For the numerical example we assume that \begin{equation}\label{surrendercashflow_eq} D_\tau(W(t_\tau),A(t_\tau)):=C_\tau(\max(W(t_\tau),A(t_\tau))); \end{equation} other standard surrender conditions can easily be implemented. Denote the value of the variable annuity at time $t$ as $Q_t(W(t),A(t))$, i.e. it is determined by the values of the wealth and guarantee accounts $W(t)$ and $A(t)$. At maturity, if not surrendered earlier, the policyholder takes the maximum between the remaining guarantee withdrawal net of penalty charge and the remaining balance of the personal account, i.e. the final payoff is \begin{equation} Q_{t_N^-}(W(T^-),A(T^-)):=h_N(W(T^-),A(T^-))=\max\left(W(T^-),C_N(A(T^-))\right). \end{equation} Under the above assumptions/conditions, the fair no-arbitrage value of the annuity at time $t_0$ is \begin{eqnarray}\label{GMWB_general_eq} &&\hspace{-1cm}Q_{t_0}\left ( W(t_0),A(t_0)\right)=\max_{\tau,\gamma_{1},\ldots,\gamma_{\widetilde{N}}}\mathrm{E}_{t_0}\bigg[B(0,\tau)D_\tau(W(t_\tau^-),A(t_\tau^-))\mathbb{I}_{\{t_\tau<T\}}\nonumber\\ &&\hspace{0cm}+B(0,N)h_N(W(T^-),A(T^-))(1-\mathbb{I}_{\{t_\tau<T\}})+\sum_{j=1}^{\widetilde{N}} B(0,j)C_j(\gamma_j)\bigg], \;\;\widetilde{N}=\min(\tau,N)-1, \end{eqnarray} where $B(0,n)=\exp(-\int_{0}^{t_n} r(\tau)d\tau)$ is the discount factor and $\mathbb{I}_{\{\cdot\}}$ is the indicator function.
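The cashflow function $C_n$ is piecewise linear in the withdrawal amount; a minimal sketch (Python, with illustrative arguments in the checks):

```python
def cashflow(gamma, G, beta):
    """Cashflow C_n received for withdrawal gamma, contractual amount G,
    and penalty rate beta applied only to the excess above G."""
    return gamma if gamma <= G else G + (1 - beta) * (gamma - G)

assert cashflow(5.0, 10.0, 0.1) == 5.0                # at/below contract rate: no penalty
assert cashflow(15.0, 10.0, 0.1) == 10.0 + 0.9*5.0    # excess of 5 penalized at 10%
```

The surrender cashflow (\ref{surrendercashflow_eq}) reuses this function with $\gamma=\max(W,A)$.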
Note that today's value of the annuity policy $Q_0(W(0),A(0))$ is a function of the policy fee $\alpha$. Here, $\tau$ is a stopping time and $\gamma_{1},\ldots,\gamma_{N-1}$ are the control variables chosen to maximize the expected value of the discounted cashflows, and the expectation $\mathrm{E}_{t_0}[\cdot]$ is taken under the risk-neutral process conditional on $W(0)$ and $A(0)$. The fair fee value of $\alpha$ corresponds to $Q_0\left(W(0),A(0)\right)=W(0)$. It is important to note that the control variables and stopping time can be different for different realizations of the underlying process; moreover, the control variable $\gamma_n$ affects the transition law of the underlying wealth process from $t_n$ to $t_{n+1}$. Overall, evaluating GMWB with the surrender feature means solving an optimal stochastic control problem with optimal stopping. Denote the state vector at time $t_n$ as $X_n=(W(t_n^-),A(t_n^-))$. Given that $\bm{X}=(X_1,\ldots,X_N)$ is a Markov process, it is easy to recognize that the annuity valuation under the optimal withdrawal strategy (\ref{GMWB_general_eq}) is an optimal stochastic control problem for a Markov process that can be solved recursively to find the annuity value $Q_{t_n}(x)$ at $t_n$, $n=N-1,\ldots,0$ via backward induction \begin{equation} Q_{t_n}(x)=\max\left(\sup_{0\le\gamma_n\le A(t_{n}^-)}\left(C_n(\gamma_n(X_n))+ e^{-r_{n+1}dt_{n+1}}\int Q_{t_{n+1}}(x^\prime)K_{t_n}(dx^\prime|x,\gamma_n) \right),D_n(x)\right) \end{equation} starting from the final condition $Q_T(x)=\max\left(W(T^-),C_N(A(T^-))\right)$. Here $K_{t_n}(dx^\prime|x,\gamma_n)$ is the stochastic kernel representing the probability to reach a state in $dx^\prime$ at time $t_{n+1}$ if the withdrawal (\emph{action}) $\gamma_n$ is applied in the state $x$ at time $t_n$. For a good textbook treatment of stochastic control problems in finance, see \cite{bauerle2011markov}. Explicitly, this backward recursion can be solved as follows. The annuity price at any time $t$ for a fixed $A(t)$ is a function of $W$ only.
Note that $ A(t_{n-1}^+)= A(t_{n}^-)=A$ is constant over the period $(t_{n-1}^+,t_n^-)$. Thus in a backward time-stepping setting (similar to a finite difference scheme) the annuity value at time $t=t_{n-1}^+$ can be evaluated as the following expectation \begin{equation}\label{eq_expS} Q_{t^+_{n-1}}\left(W(t_{n-1}^+), A\right)=\mathrm{E}_{t_{n-1}}\left[e^{-r_n dt_n} Q_{t_n^{-}}\left(W(t_n^-),A\right)|W(t_{n-1}^+),A\right]. \end{equation} Assuming the conditional probability density of $W(t_n^-)$ given $W(t_{n-1}^+)$ is known as $p_n(w(t_n)|w(t_{n-1}))$, the above expectation can be evaluated by \begin{equation}\label{eq_intS} Q_{t_{n-1}^+}\left(W(t_{n-1}^+), A\right)=\int_0^{+\infty} e^{-r_n dt_n} p_n(w|W(t_{n-1}^+)) Q_{t_n^-}(w,A)dw. \end{equation} In the case of the wealth process (\ref{eq_Wt}) the transition density $p_n(w(t_n)|w(t_{n-1}))$ is known in closed form and we will use Gauss-Hermite quadrature for the evaluation of the above integral over an infinite domain. The required continuous function $Q_t(W,A)$ will be approximated by cubic spline interpolation on a discretized grid in the $W$ space. Any change of $A(t)$ occurs only at the withdrawal dates. After the amount $\gamma_n$ is withdrawn at $t_n$, the wealth account reduces from $W(t_n^-)$ to $W(t^+_n) = \max (W(t_n^-) -\gamma_n,0)$, and the guarantee balance drops from $A(t_n^-)$ to $A(t_n^+)=A(t_n^-) - \gamma_n$. Thus the jump condition of $Q_t(W,A)$ across $t_{n}$ is given by \begin{eqnarray}\label{eqn_jump} &&\hspace{-1cm}Q_{t_{n}^-}(W(t_{n}^-),A(t_{n}^-))\nonumber\\ &&\hspace{-1cm}=\max\left(\max_{0 \leq \gamma_n\leq A(t_{n}^-) } [Q_{t_n^+}(\max(W(t_{n}^-)-\gamma_n,0), A(t_{n}^-)-\gamma_n)+C_n(\gamma_n)],D_n(W(t_{n}^-),A(t_{n}^-))\right). \end{eqnarray} For the optimal strategy, we choose a value of $\gamma_n$ under the restriction $0 \leq \gamma_n\leq A(t_n^-) $ that maximizes the function value $Q_{t_n^-}(W,A)$ in (\ref{eqn_jump}).
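A minimal sketch of this jump-condition maximization (Python), assuming withdrawals are restricted to differences of guarantee grid values and using nearest-grid lookup in $W$ instead of cubic spline interpolation (both simplifications relative to the full algorithm):

```python
import numpy as np

def apply_jump(Q_plus, Wgrid, Agrid, G, beta, D):
    """Jump condition across a withdrawal date: Q_plus[m, j] holds
    Q_{t_n^+}(W_m, A_j); returns Q_{t_n^-} on the same grid.
    D[m, j] is the surrender cashflow D_n(W_m, A_j)."""
    M, J = len(Wgrid), len(Agrid)
    Q_minus = np.empty((M, J))
    for m in range(M):
        for j in range(J):
            best = D[m, j]                          # option to surrender
            for k in range(j + 1):                  # withdraw gamma = A_j - A_k
                gamma = Agrid[j] - Agrid[k]
                Wp = max(Wgrid[m] - gamma, 0.0)
                mp = np.argmin(np.abs(Wgrid - Wp))  # nearest W grid point
                cf = gamma if gamma <= G else G + (1 - beta)*(gamma - G)
                best = max(best, Q_plus[mp, k] + cf)
            Q_minus[m, j] = best
    return Q_minus

# tiny usage example: Q_{t^+}(W, A) = W, zero surrender value, G = 2, beta = 0.1
Wgrid = np.arange(0.0, 11.0)
Agrid = np.array([0.0, 2.0])
Q_plus = np.tile(Wgrid[:, None], (1, 2))
Qm = apply_jump(Q_plus, Wgrid, Agrid, G=2.0, beta=0.1, D=np.zeros((11, 2)))
# e.g. at W = 0, A = 2 it is optimal to withdraw the full guarantee: Qm[0, 1] = 2
```

In the example, when the wealth account is exhausted the guarantee is still worth its remaining balance, illustrating how the GMWB protection enters the backward recursion.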
Repeatedly applying (\ref{eq_intS}) and (\ref{eqn_jump}) backwards in time starting from \begin{equation} Q_{t^-_N}(W(T^-),A(T^-))=\max\left(W(T^-),C_N(A(T^-))\right) \end{equation} gives us the annuity value at $t=0$. In addition to the dynamic and static strategies, in this paper we also consider the \emph{bang-bang} strategy, a simplified suboptimal strategy in which the policyholder at each $t_n$ can either make a withdrawal at the contractual rate $G_n$, make no withdrawal, or surrender. \section{Numerical algorithm}\label{algorithm_sec} A very detailed description of the algorithm that we adapt for pricing GMWB with surrender can be found in \citet{LuoShevchenkoGMWB2015}. Below we outline the main steps. We discretize the asset domain $[W_{\min}, W_{\max}] $ by $W_{\min} =W_0 < W_1<\cdots<W_M=W_{\max}$, where $W_{\min}$ and $W_{\max}$ are the lower and upper boundaries, respectively. The idea is to find annuity values at all these grid points at each time step from $t_n^-$ to $t_{n-1}^+$ through the integration (\ref{eq_intS}), starting at maturity $t=t_N^-=T^-$. At each time step we evaluate the integral (\ref{eq_intS}) for every grid point by a high-accuracy Gauss-Hermite numerical quadrature; it can also be accomplished by solving the corresponding PDE using a finite difference method, which we implemented for benchmarking. At the time step $t_n^- \rightarrow t_{n-1}^+$, the annuity value at $t=t_n^-$ is known only at the grid points $W_m$, $m=0,1,\ldots,M$. In order to approximate the continuous function $Q_t(W,A)$ from the values at the discrete grid points, we use cubic spline interpolation, which is smooth in the first derivative and continuous in the second derivative. For the guarantee account balance variable $A$, we introduce an auxiliary finite grid $0 = A_1 < \cdots < A_J = W(0)$ to track the remaining guarantee balance $A$, where $J$ is the total number of nodes in the guarantee balance coordinate. For each $A_j $, we associate a continuous solution $Q_t(W,A_j)$.
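For the lognormal transition of the wealth process, the expectation step (\ref{eq_intS}) reduces to a one-dimensional Gauss-Hermite sum; a minimal sketch (Python, with illustrative parameters), omitting the discounting factor for clarity:

```python
import numpy as np

def expect_next(g, W, r, sigma, alpha, dt, n=9):
    """Gauss-Hermite approximation of E[g(W(t_n^-)) | W(t_{n-1}^+) = W]
    for the lognormal wealth transition of Eq. (eq_Wt)."""
    x, h = np.polynomial.hermite.hermgauss(n)
    z = np.sqrt(2.0) * x                      # change of variables: z ~ N(0, 1)
    Wn = W * np.exp((r - alpha - 0.5*sigma**2)*dt + sigma*np.sqrt(dt)*z)
    return np.sum(h * g(Wn)) / np.sqrt(np.pi)

# martingale-style check: E[W(t_n^-)] = W exp((r - alpha) dt)
r, sigma, alpha, dt, W = 0.05, 0.2, 0.01, 0.25, 100.0
assert abs(expect_next(lambda w_: w_, W, r, sigma, alpha, dt)
           - W*np.exp((r - alpha)*dt)) < 1e-8
```

In the full algorithm, $g$ would be the cubic-spline interpolant of $Q_{t_n^-}(\cdot,A_j)$ evaluated at the quadrature nodes.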
At every jump we let $A$ be one of the grid points $A_j ,\;1 \le j \le J$. For any $W=W_m$, $ m=0,1,\ldots, M$ and $A=A_j$, $ j=1,\ldots, J$, given that the withdrawal amount can only take the pre-defined values $\gamma=A_j-A_k$, $k=1,2,\ldots,j$, irrespective of time $t_n$ and account value $W_m$, the jump condition (\ref{eqn_jump}) takes the following form for this specific numerical setting \begin{equation}\label{eqn_jump2} Q_{t_n^-}(W_m,A_j)=\max\left(\max_{1\leq k \leq j} [Q_{t_n^+}(\max(W_m-A_j+A_k,0), A_k)+C_n(A_j-A_k)],D_n(W_m,A_j)\right). \end{equation} Overall we have $J$ numerical solutions (obtained through integration) to track, corresponding to each of the $A_j$ values, $1\leq j \leq J$. Stepping backward in time, we find $Q_0(W(0),A(0))$, which depends on the policy fee $\alpha$. Finally, we calculate the fair fee value of $\alpha$ corresponding to $Q_0(W(0),A(0))=W(0)$, which obviously requires an iterative process. \section{Numerical Results}\label{NumericalResults_sec} Below we present numerical results for the fair fee of GMWB with surrender option under optimal and suboptimal bang-bang withdrawal strategies. For convenience, we denote results for the optimal withdrawal strategy without surrender option as GMWB, and with surrender option as GMWB-S. As discussed in \citet{LuoShevchenkoGMWB2015}, only very few results for GMWB under dynamic policyholder behavior can be found in the literature, and these results are for GMWB {\it without} the surrender option. For validation purposes, perhaps the most accurate results are those in \citet{Forsyth2008}, which were obtained with a very fine mesh in a detailed convergence study. As shown in Table \ref{tab_g10}, our GMWB results for the fair fee compare very well with those of \citet{Forsyth2008}. The maximum absolute difference in the fair fee rate between the two numerical studies is only $0.3$ basis points (a basis point is $0.01\%$ of a rate).
Table \ref{tab_g10} shows an interesting comparison among the GMWB, GMWB-S and bang-bang results. At volatility $\sigma=0.2$, the fair fee for GMWB-S is virtually the same as for GMWB, meaning the surrender option adds little value to the optimal strategy; at high volatility $\sigma=0.3$, fees for GMWB-S are significantly higher than for GMWB, up to $50\%$ higher at the half-yearly frequency. This may suggest that at high volatility it is optimal to surrender at high values of the account balance or guarantee level. A higher withdrawal frequency also adds more value to the surrender option. Comparing bang-bang with GMWB-S, the fees are below those of the optimal strategy, as expected, but not very significantly so at either volatility value: they are at most about $10\%$ lower. Figure \ref{fig_fee1} shows curves of the fee as a function of the contractual annual withdrawal rate, given $\sigma=0.2$, $r=0.05$ and $\beta=0.1$. It compares four cases: static (without surrender), GMWB, GMWB-S and bang-bang, all at quarterly withdrawal frequency with a $10\%$ penalty charge, i.e. $\beta=0.1$. This comparison also shows that GMWB-S and GMWB have virtually the same fees at $\sigma=0.2$, and bang-bang is only slightly below GMWB-S, confirming the results in Table \ref{tab_g10}. However, at the same volatility $\sigma=0.2$, new features emerge if we reduce the penalty charge from $\beta=0.1$ to $\beta=0.05$, as shown in Figure \ref{fig_fee2}. When the penalty charge is reduced and all other parameters are unchanged, the surrender option adds much more significant value to GMWB: the fees are more than doubled at low to moderate contractual withdrawal rates (or, equivalently, long to moderate maturities), i.e. fees for GMWB-S are more than twice those for GMWB. With the reduced penalty, fees for bang-bang are still close to those of the optimal strategy with surrender option, GMWB-S.
We also performed calculations for static withdrawal with surrender option, which is the same as bang-bang minus the ``no-withdrawal" choice. We find the fee for such a contract is less than $1\%$ smaller than for the bang-bang strategy, meaning the ``no-withdrawal" option adds little value to the contract. Finally, different penalty functions can be applied to the surrender (i.e. the surrender cashflow can be different from (\ref{surrendercashflow_eq})). For example, instead of penalizing only the amount exceeding the contractual withdrawal rate, we can penalize the entire termination amount. In this case we find that both GMWB-S and bang-bang yield only slightly lower fees for a given $\beta$; this is perhaps not surprising since, when it is optimal to surrender, the amount must be much higher than the contractual rate, so penalizing the entire amount is not much more severe than penalizing only the excess. \begin{table}[!htbp] \begin{center} {{\begin{tabular*}{0.75\textwidth}{cccccc} \hline frequency & volatility & Chen \& Forsyth & GMWB & GMWB-S & Bang Bang \\ \hline yearly & 0.2 & 129.1 & 129.1 & 129.2 & 123.9 \\ half-yearly & 0.2 & 133.5 & 133.7 & 134.0 & 125.6 \\ yearly & 0.3 & 293.3 & 293.5 & 418.4 & 392.9 \\ half-yearly & 0.3 & 302.4 & 302.7 & 456.5 & 410.7 \\ \hline \end{tabular*} }}\end{center} \caption{Comparison of the fair fee $\alpha$ in basis points (a basis point is 0.01\%) between the GMWB, GMWB-S and bang-bang results. Results under ``Chen \& Forsyth" are for GMWB. The input parameters are $g=10\%$, $\beta=10\%$ and $r=5\%$; the withdrawal frequency and volatility are as indicated in the table. } \label{tab_g10} \end{table} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.55]{fee1} \vspace{0cm} \caption{Fair fee $\alpha$ as a function of the annual guarantee rate $g$ for static, GMWB, GMWB-S and bang-bang strategies at quarterly withdrawal frequency.
The fixed input parameters are $\beta=10\%$, $r=5\%$ and $\sigma=0.2$.}\label{fig_fee1} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[scale=0.55]{fee2} \vspace{0cm} \caption{Fair fee $\alpha$ as a function of the annual guarantee rate $g$ for static, GMWB, GMWB-S and bang-bang strategies at quarterly withdrawal frequency. The fixed input parameters are $\beta=5\%$, $r=5\%$ and $\sigma=0.2$.}\label{fig_fee2} \end{center} \end{figure} \newpage \section{Conclusions}\label{conclusion_sec} In this paper we have developed a numerical valuation of variable annuities with GMWB and surrender features under static, dynamic (optimal) and bang-bang policyholder strategies. The results indicate that following a simple bang-bang strategy does not lead to a significant reduction in the price or, equivalently, in the fee. We also observed that the extra value added by the surrender option depends strongly on the volatility and the penalty charge, among other factors such as the contractual rate and maturity. At high volatility or at low penalty charge, the surrender feature adds very significant value to the GMWB contract, more than doubling the fee in some cases, highlighting the importance of accounting for the surrender feature in the pricing of real products. We have assumed that the policyholder will always live beyond the maturity date or that there is always someone to make optimal withdrawal decisions for the entire duration of the contract. It is not difficult to add death benefits on top of GMWB, i.e. to combine GMWB with some kind of life insurance, as is done in our recent paper \citet{LuoShevchenkoGMWDB2015}, which considers both the market and death processes. Further work includes admitting other stochastic risk factors such as stochastic interest rate or volatility.
\section*{Acknowledgement} We gratefully acknowledge financial support by the CSIRO-Monash Superannuation Research Cluster, a collaboration among CSIRO, Monash University, Griffith University, the University of Western Australia, the University of Warwick, and stakeholders of the retirement system in the interest of better outcomes for all.
\section{Introduction} The idea that star formation may be self--propagating or self--triggering is an old one, going back at least to \cite{1977ApJ...214..725E}. There are several other mechanisms acting on a range of scales which may trigger star formation, e.g. the mergers of galaxies \citep[e.g.][]{1992ApJ...400..153M}, the passage of gas through galactic spiral arms \citep{1969ApJ...158..123R} or collisions of molecular clouds \citep[e.g.][]{1979ApJ...229..578S}, but in this series of papers we focus on the idea that feedback from O--type stars may trigger the formation of stars. Photoionization, winds and supernovae have all been invoked as triggering mechanisms.\\ \indent The collect--and--collapse model, in which a feedback source generates a gravitationally--unstable shell by sweeping up a uniform cloud, has been extensively studied. The model is attractive for its simplicity and has been treated analytically and numerically by several authors \citep{1994A&A...290..421W,2001A&A...374..746W,2007MNRAS.375.1291D}. There is a rapidly--growing body of observational evidence of the collect--and--collapse process at work, e.g. \cite{2006A&A...446..171Z,2006A&A...458..191D,2008A&A...482..585D,2010A&A...518L.101Z}, and a consonance between observations and theory is emerging.\\ \indent Star formation in a cloud can also be triggered by the action of stars outside the cloud. In the radiation--driven implosion model, the skin of a molecular cloud is heated by the photoionizing radiation from a nearby OB star and evaporates, driving a reverse shock by the rocket effect \citep{1982ApJ...260..183S,1989ApJ...346..735B}. An otherwise stable clump of gas thus collapses and forms stars.
This process has been observed by \cite{1995A&A...301..522L} and by \cite{2003MNRAS.338..545K}, \cite{2009MNRAS.393...21G} and \cite{2010arXiv1007.2727B}.\\ \indent Most systems where triggered star formation has been reported have the complex morphologies expected from the interaction of radiation with a turbulent or inhomogeneous ISM. This problem was first addressed by \cite{1995ApJ...451..675E} and has been studied more recently by \cite{2007MNRAS.377..535D}, \cite{2009ApJ...694L..26G} and \cite{2010arXiv1009.0011G}. Systems of this nature are extremely complicated and thus difficult to interpret observationally. Most authors rely on the geometrical coincidence of young stellar objects (YSOs) with features such as ionization fronts, bright--rimmed clouds or swept--up gas to detect triggering. \cite{2003ApJ...595..900K} and \cite{2008ApJ...688.1142K} studied the W5 HII region at a variety of wavelengths and assessed the degree of triggering by comparing the locations of YSOs to the ionization sources and the walls of the cavities evacuated by HII regions. In their study of NGC 2467, \cite{2009ApJ...700..506S} show that a much higher fraction of YSOs lie within a projected distance of 0.5 pc from an ionization front than would result from random alignment, and conclude that a substantial fraction of the star formation in this region has been triggered. \cite{2009A&A...503..107P} infer triggering from the overlap of many YSOs with dense molecular gas at the borders of the HII region Sh2-254, although they also find the puzzling result that many young objects are projected to lie within Dolidze 25, the ionizing cluster of this region. \cite{2009A&A...497..789U} observe 45 bright--rimmed clouds and classify them as triggered or untriggered on the basis of whether or not they are being photoionized. While all the criteria used by these authors are certainly suggestive of triggered star formation, the evidence presented in all cases is somewhat circumstantial.
The papers in this series are an effort to improve the theoretical understanding of triggered star formation and to place such observational interpretations on firmer ground.\\ \indent In a previous paper \citep{2007MNRAS.377..535D} we distinguished weak triggering (`accelerated star formation'), in which stars that would have formed in the absence of feedback are caused to form earlier, and strong triggering in which feedback causes the birth of stars that would not otherwise exist. We consider strong triggering to be more interesting, since it increases the star formation efficiency. Observationally, it is very difficult to distinguish between these possibilities -- even in the case of the collect--and--collapse process, it is not easy to tell whether a given density enhancement has formed by fragmentation of the shell, or is a pre--existing object being overrun. We demonstrated that external irradiation could produce strong triggering by comparison with the evolution of a cloud evolving in the absence of feedback. However, we found that it was difficult to distinguish the triggered and spontaneous objects simply by observing the cloud. Triggering increased the star formation efficiency in our model cloud both by causing the formation of extra stars and by increasing the masses of spontaneously--forming objects. We therefore speculated (as have several other authors, e.g. \cite{1994A&A...290..421W}) that triggering may have some observable effect on the stellar IMF which could be used by observers. However, we were only able to form a small number of objects and our resolution was insufficient to follow the formation of individual stars.\\ \indent In this paper, we perform similar calculations, but we simulate a lower--mass cloud at higher resolution, so that we can construct stellar IMFs from our results and see if they are affected by triggering.
Once again, we determine the impact of triggering counterfactually by allowing an identical copy of the same cloud to evolve in the absence of ionizing sources so that we may directly compare the evolution of the same gas and stars. In Sections 2 and 3 we discuss our numerical methods and initial conditions. Section 4 contains our results and Sections 5 and 6 contain our discussion and conclusions respectively.\\ \section{Numerical Methods} We make use of a hybrid Smoothed Particle Hydrodynamics (SPH)/N--body code in which gas is represented by discrete particles and hydrodynamical forces are computed using the SPH formalism \citep{1992ARA&A..30..543M}, stars are represented by sink particles \citep{1995MNRAS.277..362B} and gravitational forces are computed using a binary tree (in the case of the gas particles) or by direct summation (for the sink particles). Formation of sink particles and the subsequent accretion of gas onto them is modelled in the manner described in \cite{1995MNRAS.277..362B}. We use the standard artificial viscosity prescription with $(\alpha, \beta)=(1,2)$.\\ \indent We use a modified Larson equation of state, given by \begin{equation} P = k \rho^{\gamma} \end{equation} where \begin{equation} \gamma = \left\{ \begin{array}{ll} 0.75, & \rho \le \rho_1 \\ 1.0, & \rho_1 < \rho \le \rho_2 \\ 1.4, & \rho_2 < \rho \le \rho_3 \\ 1.0, & \rho > \rho_3, \end{array} \right. \end{equation} and $\rho_1= 5.5 \times 10^{-19} {\rm g\ cm}^{-3} , \rho_2=5.5 \times10^{-15} {\rm g\ cm}^{-3} , \rho_3=2 \times 10^{-13} {\rm g\ cm}^{-3}$. The effective cooling at low density mimics line cooling and ensures that the Jeans mass at the point of fragmentation is approximately the characteristic stellar mass of $0.5$M$_{\odot}$ \citep{2005A&A...435..611J,2006MNRAS.368.1296B}.
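The piecewise equation of state above can be sketched directly from the quoted transition densities. In the sketch below the normalisation $k_1$ is a free parameter (its physical value is not given in the text); the remaining segment constants are chosen so that the pressure is continuous across the transitions.

```python
RHO1, RHO2, RHO3 = 5.5e-19, 5.5e-15, 2.0e-13  # transition densities, g cm^-3

def gamma_of(rho):
    """Effective polytropic exponent of the modified Larson equation of state."""
    if rho <= RHO1:
        return 0.75
    if rho <= RHO2:
        return 1.0
    if rho <= RHO3:
        return 1.4
    return 1.0

def pressure(rho, k1=1.0):
    """P = k rho^gamma, with the segment constants k2..k4 fixed by requiring
    continuity of P at rho_1, rho_2 and rho_3 (k1 is an arbitrary overall
    normalisation, not specified in the text)."""
    k2 = k1 * RHO1 ** (0.75 - 1.0)
    k3 = k2 * RHO2 ** (1.0 - 1.4)
    k4 = k3 * RHO3 ** (1.4 - 1.0)
    if rho <= RHO1:
        return k1 * rho ** 0.75
    if rho <= RHO2:
        return k2 * rho
    if rho <= RHO3:
        return k3 * rho ** 1.4
    return k4 * rho
```

Matching the segments in this way is the standard treatment for piecewise polytropes; only the exponents and transition densities come from the text.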
The $\gamma=1.0$ segment approximates the dust cooling whilst the $\gamma=1.4$ segment mimics the regime in which the collapsing core becomes optically thick and behaves adiabatically. The final isothermal segment allows extremely high density gas to form sink particles, if it has not done so already.\\ \indent We use the method described in \cite{2007MNRAS.382.1759D} to model photoionizing radiation from point sources. We take the source to emit ionizing photons isotropically and use a Str\"omgren integral technique to compute the flux of ionizing photons received by a given SPH particle in a given time interval, allowing us to take into account the time required to ionize the particle. We make use of the on--the--spot approximation, neglecting the diffuse ionizing field. In these simulations, the ionizing source is not a sink particle but an artificial source with a fixed luminosity at a constant position outside the molecular cloud under study.\\ \indent The problem we wish to simulate is that of a turbulent molecular cloud illuminated by an external ionizing source. The problem of the irradiation of initially uniform clouds was studied by \cite{1994A&A...289..559L} and later by \cite{1996ApJ...458..222B}. \cite{1996ApJ...458..222B} considered the effects of the photodissociating as well as the ionizing radiation. They derived conditions under which the photodissociation front is able to overtake the ionization front and propagate into the shocked gas. If this occurs, the evolution of the cloud becomes rather more complicated. The conditions for the dissociation front to outrun the ionization front depend on the ratio of Lyman--continuum to far--UV photons emitted by the source (approximately unity, in the case of the source assumed here) and on the optical depth presented to the Lyman photons by the photoevaporation flow, resulting in a range of optical depths for which photodissociation should not be ignored. 
In our simulations, we find that the optical depth of the photoevaporation flow is $<0.1$, so that the photodissociation front cannot outrun the ionization front. Checking the validity of these assumptions is an important area of further work, but is beyond the scope of this paper. \cite{1994A&A...289..559L} divide the parameter space of target clouds into five regions. Our simulated cloud falls into their region V, in which an initially R--type front transitions to a D--type front and proceeds through the cloud. This is indeed what we observe.\\ \begin{figure*} \centering \subfigure[Column density map viewed along the $y$--axis of the bottom run. The ionizing source is marked by a green cross.]{\includegraphics[width=0.31\textwidth]{figure1a.eps}} \hspace{.1in} \subfigure[Column density map viewed along the $y$--axis of the control run.]{\includegraphics[width=0.31\textwidth]{figure1b.eps}} \hspace{.1in} \subfigure[Column density map viewed along the $y$--axis of the top run. The ionizing source is marked by a green cross.]{\includegraphics[width=0.31\textwidth]{figure1c.eps}} \caption{Comparison of the morphologies of the clouds in the bottom (left panel), control (middle panel) and top (right panel) runs at $\sim0.67$ Myr. Actual times were chosen so that all runs have the same number of stars.} \label{fig:snaps} \end{figure*} \section{Initial conditions} We construct a model of a molecular cloud of mass 2.3$\times10^{3}$ M$_{\odot}$. The cloud is modelled with $\sim1.65\times10^{6}$ particles, with a smooth gradient in particle mass of a factor of 3 in the $y$--direction, so that the positive--$y$ regions of the cloud are somewhat more bound than the negative--$y$ regions. The resulting minimum self--gravitating mass that can be resolved is $\approx0.05-0.15$M$_{\odot}$.
The sinks have accretion radii of 0.002pc, corresponding to a density of $\sim3\times10^{-16}$g cm$^{-3}$, and we also smooth the sink--sink gravitational interactions at this scale. With this sink formation density, only the first segment of the equation of state given above is relevant and the sink particles formed should strictly be regarded as star--forming cores since their final stages are not followed, although we will continue to refer to them as `stars'. The cloud is ellipsoidal in shape, with the long axis being the $y$-axis and initial dimensions of $2\times2\times4$ pc. The density is initially uniform but the cloud is seeded with a Kolmogorov turbulent velocity field, so that it rapidly develops a complex density structure. The cloud is gravitationally bound overall, with a virial ratio of unity. We make three copies of the cloud. One, which we refer to as the `control' run is allowed to evolve undisturbed by ionizing radiation. A second (the `top' run) is illuminated from near its upper (more bound) end, with the ionizing source located at (-2.0, 2.5, 0.0)pc, while the third is illuminated from near the bottom (less bound) end from (2.0, -2.5, 0.0) pc. We will refer to the top and bottom runs collectively as `feedback' runs. In both cases, the radiation source has an ionizing photon luminosity of $10^{49}$s$^{-1}$, roughly equivalent to a single $60$M$_{\odot}$ O--star or a small cluster of O--stars such as the Trapezium. The luminosity of the source and its proximity to the edge of the cloud (initially $\sim1$pc) results in a photon flux of $\sim8\times10^{10}$cm$^{-2}$s$^{-1}$. These parameters are intended to be realistic. The photon luminosity is equivalent to a lone high--mass O--star or to a low--mass (a few $\times10^{3}$M$_{\odot}$) cluster hosting several lower--mass O--stars. A radius of order 1pc is representative for a cluster of such a size \citep[e.g.][]{2009ApJ...691..946M}.
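The quoted flux follows from the inverse-square law applied to the source luminosity and the initial source-to-cloud-edge distance; a quick consistency check (variable names illustrative):

```python
import math

Q_ION = 1.0e49          # ionizing photon luminosity, s^-1 (from the text)
PC_CM = 3.086e18        # one parsec in cm
D_CM = 1.0 * PC_CM      # initial distance from source to cloud edge, ~1 pc

# Photon flux at the cloud edge: F = Q / (4 pi d^2)
flux = Q_ION / (4.0 * math.pi * D_CM ** 2)
# flux comes out near 8e10 photons cm^-2 s^-1, matching the value in the text
```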
If, as is more likely, the ionizing source is regarded as a small stellar cluster, it cannot be placed much closer to the model cloud (particularly as we are ignoring any gravitational interaction between the source of radiation and the clouds). We therefore consider our chosen flux to be on the high side of what is astrophysically realistic given the size and mass of our target cloud. \cite{2010arXiv1007.2727B} consider the problem of the irradiation of clouds modelled as Bonnor--Ebert spheres by ionizing sources of various luminosities sited 3pc from the initial cloud edge. Our chosen flux lies in the middle of the range of fluxes considered by these authors and inside the range where the ionization triggers star formation, rather than destroying their model clouds.\\ \section{Results} \subsection{Cloud morphology} In our previous calculations \citep{2007MNRAS.377..535D}, we found that ionizing radiation had a profound effect on the cloud's morphology during the $\sim0.5$ freefall times for which we ran the simulation. In this paper, we consider lower--mass clouds of smaller spatial extent and higher density. We find that the effect of the irradiation on the cloud morphology is less pronounced because the gas in our new calculations is denser, so that the ionization initially evaporates less material before the ionization front transitions from R--type to D--type, and erosion of the cloud proceeds more slowly in comparison to the cloud freefall time, which is shorter in our new simulations.\\ \indent In Figure \ref{fig:snaps}, we show column--density plots of the bottom, control and top runs at comparable times. We terminated the simulations after $\sim0.7$Myr because the most massive stars in all runs are approaching 10M$_{\odot}$ by this time and hence are almost massive enough to be ionizing sources themselves.
We did not wish to complicate our simulations by attempting to model the effects of external and internal feedback, so we stop at this point. The colours in Figure \ref{fig:snaps} denote gas column density and white dots are sink particles. Ionizing sources are marked by green crosses where appropriate. In the control run, the gas is distributed in a network of dense filaments, and it is here that most of the star formation takes place, with the majority being in the largest filament running down most of the length of the cloud. In the bottom run, there is a considerable quantity of molecular gas between the source and the main filament, which partially prevents the feedback from strongly influencing the evolution of the gas in this region. Instead, the observable effects of photoionization in this run are largely to compress the bottom right portion of the cloud and drive the material there towards the cloud core. Conversely, in the top run the ionizing radiation impinges almost directly on the main star--forming filament and smears it out to some extent along an axis pointing away from the source.\\ \begin{figure} \includegraphics[width=0.45\textwidth]{figure2.eps} \caption{Total mass in stars as a function of time in the control run (black line), the top run (blue line) and the bottom run (red line).} \label{fig:msinkt} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{figure3.eps} \caption{Total number of stars as a function of time in the control run (black line), the top run (blue line) and the bottom run (red line).} \label{fig:nsinkt} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{figure4.eps} \caption{Probability density functions for the gas density in the control run (black line), the top run (blue line) and the bottom run (red line).} \label{fig:dens} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{figure5.eps} \caption{Cumulative mass functions of the control run (black line), the top run (blue
line) and the bottom run (red line) at times when each run contains 213 stars.} \label{fig:cum_mf} \end{figure} \subsection{Star formation rate and efficiency} In Figures \ref{fig:msinkt} and \ref{fig:nsinkt}, we plot the total stellar mass and the total numbers of stars as functions of time in the three runs. These figures show that external irradiation has little effect on either the total quantity of mass involved in star formation (i.e. the star formation efficiency) or the total numbers of stars formed. In Figure \ref{fig:dens}, we plot the probability density function of the gas density in the control run and the feedback runs. The functions are shifted to higher densities in the feedback runs, but not by a large factor, indicating that the external irradiation has a rather modest effect on the density structure of the cloud, and hence has little influence on the rate of star formation. \subsection{Stellar mass functions} In comparing the mass functions of the three simulations, it is the shape of the function that concerns us. Since the simulations form stars at different rates, comparing them at given times is misleading, since then the mass functions contain different numbers of objects. In Figure \ref{fig:cum_mf}, we plot cumulative mass functions from all three simulations at times chosen so that all contain 213 stars. We see that the mass functions are very similar, although the bottom--triggered run shows a deficit of objects around 1 M$_{\odot}$, and is overall somewhat steeper. We perform Kolmogorov--Smirnov tests on these mass functions to determine whether they are drawn from the same underlying distribution. We find that the probability that the control and top--triggered mass functions are drawn from the same distribution is $18\%$; for the control and bottom--triggered mass functions it is $74\%$; and for the two triggered mass functions it is $24\%$. These differences are not statistically significant.
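The two-sample Kolmogorov--Smirnov comparison of cumulative mass functions reduces to computing the statistic $D=\sup_t |F_1(t)-F_2(t)|$ over the two empirical CDFs; the quoted probabilities then follow from the Kolmogorov distribution (omitted here). A minimal sketch of the statistic, with illustrative names:

```python
from bisect import bisect_right

def ks_two_sample(xs, ys):
    """Two-sample Kolmogorov--Smirnov statistic D = sup_t |F_x(t) - F_y(t)|.
    The supremum of the difference of two empirical CDFs (right-continuous
    step functions) is attained at a sample point, so scanning the pooled
    sample values suffices."""
    xs, ys = sorted(xs), sorted(ys)
    return max(abs(bisect_right(xs, t) / len(xs) - bisect_right(ys, t) / len(ys))
               for t in xs + ys)
```

In practice one would feed in the two lists of sink masses and convert $D$ to a p-value (e.g. via a standard statistics library).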
\\ \subsection{Triggered stars} In \citet{2007MNRAS.377..535D} we defined strong triggering to mean the formation through the action of feedback of a star or star--forming core which failed to form in a control run evolving in the absence of feedback from the same initial conditions. Since SPH is a Lagrangian technique, stars or cores can be unambiguously identified between simulations by the gas particles from which they formed. Conversely, abortion was defined as the formation in the control run of a star/core which failed to form in the feedback run. We traced the histories of the $\sim100$ seed gas particles from which sinks initially formed in the absence of feedback and which were induced to form or prevented from forming in the feedback run. This technique, which we call the `same--seed' method, determines whether the \emph{same star formation event} occurs in two simulations starting from the same initial conditions and worked well in our previous calculations, where star formation occurred in a small number of isolated regions within a large globally--unbound cloud. However, star formation in the simulations presented in this work is much more vigorous, since the clouds are bound, and occurs largely in the same crowded region of all three copies of the parent cloud. Applying the same--seed method revealed that it was exceedingly rare (at the $\sim$1\% level) for the same $\sim$100 particles to form a seed in more than one simulation. Since very few sink--seeds in any run have counterparts in another run, almost all stars in the feedback runs would be triggered, and almost all in the control run aborted by this definition.
Since the numbers, mass function and spatial distribution of stars in all three calculations presented here are very similar, this result is highly counterintuitive.\\ \indent We improved upon the same--seed method by tracing instead \emph{all} the particles going to form a given sink -- the seed particles from which the sink formed \emph{and} the particles accreted afterwards. If more than half of the particles forming a sink in one run also form a sink in another run, the \emph{same star} can be said to form in both calculations and we refer to this as the `same--star' method. Applying this criterion here, we find that the same star forms in two or more runs on only $\sim$20\% of occasions, even though the general morphology of star formation is very similar in all three calculations. This technique would also label most stars as triggered or aborted. The reason for these two results is that in all calculations most of the stars form in a small fraction of the cloud's volume. Even small perturbations to the gas density and velocity fields due to ionization can therefore affect exactly which gas particles are involved in forming a given object, making it unlikely that the same seed or the same star will form more than once.\\ \indent We illustrate this point in Figure \ref{fig:trace_control} where we show traces (i.e. the historical tracks) in the ionized runs of gas particles involved in star formation in the control run. Particles which are also involved in star formation in each ionized run are plotted in black, whereas particles only involved in star formation in the control run are plotted in red (only one tenth of the particles are plotted for reasons of clarity). The counterpart to Figure \ref{fig:trace_control} is Figure \ref{fig:trace_bound} where we plot the traces in the control run of particles from which stars form in the ionized runs. 
Particles which are also involved in star formation in the control run are plotted in black whereas those which are involved in star formation only in each ionized run are plotted in red. In Figure \ref{fig:trace_bound}, regions of the clouds close to the ionizing sources (the top right in the top--triggered run and the bottom left in the bottom--triggered run) contain large quantities of gas which only forms stars in the feedback runs. In the left panel of Figure \ref{fig:trace_control} by contrast, we see that some of the gas involved in star formation in the control run is actually ionized and evaporated off the cloud in the top--triggered run. However, the volume from which most of the star--forming gas is drawn in all runs is very similar, while exactly which gas particles form stars clearly changes significantly between runs, as shown by the complex admixture of red and black traces over most of the central regions of the clouds. In Figures \ref{fig:trig_sink1} and \ref{fig:trig_sink2} we repeat this exercise at the level of individual sinks. The left panels of both figures show the traces in the top run of gas particles which form particular sinks. The right panels show the traces of the same gas particles in the control run, with particles that are involved in star formation in the control run plotted in black, and those not involved plotted in red. Figure \ref{fig:trig_sink1} shows an object for which \emph{none} of the corresponding gas in the control run is involved in star formation while Figure \ref{fig:trig_sink2} shows an object for which only a small fraction ($\sim30\%$) of the corresponding gas in the control run is involved in star formation.\\ \indent We therefore define an even more conservative criterion for determining whether a star is triggered or aborted, i.e.
one which will report the fewest such events: if less than half the material forming a given star in one of the triggered simulations is \emph{involved in star formation} in the control run, the star is counted as triggered, otherwise the star is counted as spontaneous. Conversely, if less than half the material forming a given star in the control run is involved in star formation in one of the ionized runs, that star is counted as aborted in that run. We term this the `involvement' method. The same--seed and same--star techniques focus largely on whether or not particular identifiable objects form, whereas this method is more general and aims to identify whether particular collections of gas particles form stars in different runs without inquiring which, or how many, stars they contribute to. Both the objects shown in Figures \ref{fig:trig_sink1} and \ref{fig:trig_sink2} are defined as triggered by this criterion. In Figure \ref{fig:trig}, we show the results of applying this analysis to the top and bottom runs, where triggered stars are marked in red and spontaneously--formed stars in black.\\ \begin{figure*} \centering \subfigure{\includegraphics[width=0.45\textwidth]{figure6a.eps}} \hspace{.1in} \subfigure{\includegraphics[width=0.45\textwidth]{figure6b.eps}} \caption{Traces in the top--triggered (left panel) and bottom--triggered (right panel) simulations of gas particles involved in star formation in the control run. Particles traced in black are those that are also involved in star formation in each ionized run, whereas those marked in red are involved in star formation only in the control run.
For clarity, only one particle in ten is plotted.} \label{fig:trace_control} \end{figure*} \begin{figure*} \centering \subfigure{\includegraphics[width=0.45\textwidth]{figure7a.eps}} \hspace{.1in} \subfigure{\includegraphics[width=0.45\textwidth]{figure7b.eps}} \caption{Traces in the control run of gas particles involved in star formation in the top--triggered (left panel) and bottom--triggered (right panel) simulations. Particles traced in black are those that are also involved in star formation in the control run, whereas those marked in red are involved in star formation only in each ionized run. For clarity, only one particle in ten is plotted.} \label{fig:trace_bound} \end{figure*} \indent In Figures \ref{fig:trig_sink1} and \ref{fig:trig_sink2} we show examples, both drawn from the top and control runs, of sinks whose formation we regard as triggered.\\ \begin{figure} \centering \subfigure{\includegraphics[width=0.23\textwidth]{figure8a.eps}} \hspace{.01in} \subfigure{\includegraphics[width=0.23\textwidth]{figure8b.eps}} \caption{Traces of gas particles involved in the formation in the top--triggered run of a sink particle (left panel) and of the same gas particles in the control run (right panel). Particles traced in black are those that are involved in star formation in the control run, whereas those marked in red are involved in star formation only in the top--triggered run.} \label{fig:trig_sink1} \end{figure} \begin{figure} \centering \subfigure{\includegraphics[width=0.23\textwidth]{figure9a.eps}} \hspace{.01in} \subfigure{\includegraphics[width=0.23\textwidth]{figure9b.eps}} \caption{Traces of gas particles involved in the formation in the top--triggered run of a sink particle (left panel) and of the same gas particles in the control run (right panel). 
Particles traced in black are those that are involved in star formation in the control run, whereas those marked in red are involved in star formation only in the top--triggered run.} \label{fig:trig_sink2} \end{figure} \indent The distribution of stars in both ionized simulations is very similar to that in the control run, with most of the stars forming in a filament lying roughly along the $y$--axis. In both ionized runs, stars in the filament are a mixture of triggered and spontaneous objects using the involvement criterion. This reveals that what is happening in the central filament is not merely that the same gas is forming different permutations of stars in all three runs, but that \emph{different gas} is involved in star formation in the ionized runs. The top run contains 196 stars of which 103 are triggered and 93 are spontaneous, whereas the bottom run contains 206 stars of which 39 are triggered and 167 are spontaneous. The majority of triggered objects are mixed in with the spontaneously--formed ones in the central filament, making the two groups impossible to distinguish spatially. Only in the outlying areas of the triggered clusters -- the top--right of the top run and the bottom--right of the bottom run -- are there any outstanding groups of triggered stars, and their numbers are small. We also compared the velocities of the triggered and spontaneous populations parallel and perpendicular to a line--of--sight along the $z$--axis, since these are quantities an observer could measure. We find that there is nothing in these quantities to distinguish the triggered and spontaneous populations.\\ \indent We also applied the involvement criterion to identify aborted stars. In Figure \ref{fig:abort}, we plot the locations of stars in the control run which also form in the top (left panel) and bottom (right panel) simulations as black dots, whereas those that are aborted in each run are plotted as red dots. 
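In code, the involvement criterion might be sketched as follows, representing each star by the set of IDs of the gas particles that formed it; the helper names here are hypothetical, not drawn from the simulation pipeline:

```python
# Sketch of the 'involvement' criterion: a star in an ionized run counts as
# spontaneous if at least half of its constituent gas particles are also
# involved in star formation in the control run, and as triggered otherwise.
# (The aborted classification is the symmetric test, swapping the runs.)

def forms_in_other_run(star_particles, star_forming_other):
    """True if >= half of this star's particles form stars in the other run."""
    overlap = len(star_particles & star_forming_other)
    return overlap >= 0.5 * len(star_particles)

def classify_run(stars_ionized, star_forming_control):
    triggered, spontaneous = [], []
    for star in stars_ionized:
        if forms_in_other_run(star, star_forming_control):
            spontaneous.append(star)   # most of its gas forms stars anyway
        else:
            triggered.append(star)     # most of its gas is inert in control
    return triggered, spontaneous

# Toy example: the control run forms stars only from particles {1..4}.
control = {1, 2, 3, 4}
stars = [{1, 2, 5}, {7, 8, 9}]
trig, spon = classify_run(stars, control)
```

The same routine applied with the runs exchanged identifies aborted stars in the control run.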
We see, again, that the objects whose fate has been changed by feedback are spatially mixed with those which form regardless. We conclude from the distributions of triggered, aborted and spontaneously--formed stars in the central filament that feedback agitates the gas in this region so that different parcels of material go to form stars in each run, although most of the star--forming gas is drawn from roughly the same volume, as shown by Figures \ref{fig:trace_control} and \ref{fig:trace_bound}. However, triggering and abortion in this region roughly cancel each other out, so that the total stellar mass and total number of stars formed are nearly the same in all three simulations. The greater number of triggered and aborted objects in the top run is a result of the greater agitation of the gas in the central star--forming filament, due to the source in this run being closer to the filament and to the lack of intervening gas that could shield the filament from the radiation or the shocks driven by photoevaporation. Although it is possible to identify triggering and abortion of stars in the filament in the simulations, since these processes cancel each other out and the result is very nearly the same total stellar mass, number of stars and stellar mass function, the results of this analysis are of little importance from the point of view of the global properties of the cloud and its star cluster.\\ \indent It might be expected that stars whose formation has been triggered should be located close to the ionization front and moving along with it or not far behind it. In the top run, the ionization front rapidly reaches the central star--forming filament, after which point, almost all stars in the cloud are perforce located near to the front. However, the ionization front also moves into gas in the top right of the cloud, above the dense filament, triggering the formation of several stars. 
In Figure \ref{fig:trigtopright}, we show a time series of column--density maps with the positions and velocities of the stars overlaid as white arrows, and the velocities of randomly--chosen gas particles overlaid as blue arrows. In the earliest frame, the velocities of the stars are clearly correlated with the motion of the gas and with the direction to the radiation source -- the stars were formed in, and are moving along with, the dense gas behind the ionization front, which is travelling from left to right. This is still largely true in the middle panel, but the situation is now complicated, particularly by the stars that have formed in the dense filament, whose formation and velocities are mostly unconnected with the ionization front. In addition, even amongst the stars outside the filament, most of which are triggered, dynamical interactions have begun to erase the stars' memories of their velocities at formation and the correlation with the gas velocity field is weaker. The correlation has been further eroded in the right panel of Figure \ref{fig:trigtopright}. In Figure \ref{fig:dcomp_top}, we plot the density PDFs of the top and control runs, confined to the region shown in Figure \ref{fig:trigtopright}. The result is very similar to Figure \ref{fig:dens}, demonstrating that, although some triggering is taking place in this region, the evolution is still largely dominated by the dense filament formed by the turbulence.\\ \indent We observe a similar phenomenon in the bottom right of the bottom run. In Figure \ref{fig:trigbotright}, we show a second time series of column--density maps from the bottom run where the ionization front is moving through the low--density gas. In the first image, a tight, strongly--bound cluster of six stars is induced to form by the ionization front, all acquiring velocities close to that at which the gas behind the ionization front is moving. 
The second panel shows that two of the stars in the triggered cluster have been ejected by dynamical interactions and are now moving in a direction almost perpendicular to the motion of the ionization front, while a third star is ejected in a direction deeper into the front. These three objects are no longer bound to the small triggered cluster. We again plot in Figure \ref{fig:dcomp_bot} density PDFs from the bottom run (red) and the control run (black) confined to the region shown in Figure \ref{fig:trigbotright}. This time, we see that the density enhancement in this region, far from the densest gas in the simulation, is significantly higher than that seen for the whole cluster in Figure \ref{fig:dens}. Figures \ref{fig:trigtopright} and \ref{fig:dcomp_top} show that proximity to the ionization front does not necessarily imply that stars have been induced to form, since the front may run into a region where stars are forming anyway, and that the evolution of the gas in such regions of the cloud is predetermined by the seed turbulence. Figures \ref{fig:trigtopright} and \ref{fig:trigbotright} show that dynamical interactions between triggered stars may erase their memory of moving along with the ionization front. The ionization front may induce the formation of small, tightly--bound clusters, in which all the stars are moving along with the front, but such clusters may be unstable, ejecting some of their members in directions uncorrelated with the motion of the ionization front. 
However, Figures \ref{fig:trigbotright} and \ref{fig:dcomp_bot} show that external triggering may dominate the evolution of some locations by sweeping up regions of low--density gas in which self--gravity has not yet asserted itself.\\ \begin{figure*} \centering \subfigure{\includegraphics[width=0.33\textwidth]{figure10a.eps}} \subfigure{\includegraphics[width=0.33\textwidth]{figure10b.eps}} \subfigure{\includegraphics[width=0.33\textwidth]{figure10c.eps}} \caption{Locations of triggered (red) and spontaneous (black) stars according to the criterion given in the text in the top--triggered (left panel) and bottom--triggered (right panel) simulations, with all stars in the control run shown in the centre panel for comparison. The top run contains 103 triggered stars and 93 untriggered stars (total, 196 stars), while the bottom run contains 39 triggered stars and 167 untriggered stars (total, 206 stars). The times of the plots are, respectively, 0.69, 0.66 and 0.66 Myr in each run.} \label{fig:trig} \end{figure*} \begin{figure*} \centering \subfigure{\includegraphics[width=0.31\textwidth]{figure11a.eps}} \hspace{.1in} \subfigure{\includegraphics[width=0.31\textwidth]{figure11b.eps}} \caption{Locations of stars in the control run which do form (black) or are aborted (red) in the top--triggered (left panel) and bottom--triggered (right panel) simulations, at a time of 0.66Myr in the control run.} \label{fig:abort} \end{figure*} \begin{figure*} \centering \subfigure[Column density map viewed along the $z$--axis of the top right corner of the top run at 0.499 Myr.]{\includegraphics[width=0.31\textwidth]{figure12a.eps}} \hspace{.1in} \subfigure[Column density map viewed along the $z$--axis of the top right corner of the top run at 0.578 Myr.]{\includegraphics[width=0.31\textwidth]{figure12b.eps}} \hspace{.1in} \subfigure[Column density map viewed along the $z$--axis of the top right corner of the top run at 0.636 Myr.]{\includegraphics[width=0.31\textwidth]{figure12c.eps}} 
\caption{Motions of the stars (white arrows) and randomly--selected gas particles (blue arrows) in the top right corner of the top run at three different epochs.} \label{fig:trigtopright} \end{figure*} \section{Discussion} The purpose of this study was to see if external triggering of star formation in a molecular cloud has any statistical effect on the observable properties of the stars formed, such as their mass function, spatial distribution or velocities. We found that feedback had little effect on the numbers of stars formed, and that the mass functions in our control and triggered runs were statistically indistinguishable. The star formation rate in all three simulations is largely controlled by the interplay between turbulence and gravity, which forms the dense central filament in which most of the stars form and which the ionization fronts driven by the external O--stars struggle to influence. The shocks driven by photoevaporation of the periphery of the cloud are able only to perturb slightly the density and velocity field of the gas within this central region. Although this changes exactly which parcels of gas become involved in star formation and which do not, so that some objects are effectively triggered and others are aborted, these effects cancel out in this region and the total mass of gas involved in star formation is virtually unchanged. Since the gas densities and velocities in the region of the cloud where most of the stars form are largely unaffected by feedback, there is also no change to the mass function of stars produced. This work suggests that it is difficult for an external ionization source to influence the numbers or types of stars that a bound turbulent cloud is going to form. 
Work by \cite{2009MNRAS.397..232B} and \cite{2009MNRAS.398...33P} shows that \emph{internal} feedback from \emph{low--mass} stars has a much stronger effect on fragmentation and on the IMF.\\ \begin{figure*} \centering \subfigure[Column density map viewed along the $z$--axis of the bottom right corner of the bottom run at 0.531 Myr.]{\includegraphics[width=0.31\textwidth]{figure13a.eps}} \hspace{.1in} \subfigure[Column density map viewed along the $z$--axis of the bottom right corner of the bottom run at 0.595 Myr.]{\includegraphics[width=0.31\textwidth]{figure13b.eps}} \hspace{.1in} \subfigure[Column density map viewed along the $z$--axis of the bottom right corner of the bottom run at 0.651 Myr.]{\includegraphics[width=0.31\textwidth]{figure13c.eps}} \caption{Motions of the stars (white arrows) and randomly--selected gas particles (blue arrows) in the bottom right corner of the bottom run at three different epochs.} \label{fig:trigbotright} \end{figure*} \indent Although we can identify triggered and spontaneously--formed stars by reference to the detailed output of our simulations, most of them cannot be identified as such by observing their positions or velocities. In particular, the locations and velocities of those stars that are triggered are not necessarily related to the location and velocity of the ionization front. Overall, we find that the characteristics of the star formation are not strongly influenced by feedback. However, feedback does strongly modify the appearance of the cloud and, as shown in Figures \ref{fig:trigbotright} and \ref{fig:dcomp_bot}, and to a lesser extent, Figure \ref{fig:trigtopright}, may be able to dominate the evolution of lower--density parts of the cloud. 
Even in such regions, the distribution of stars and gas is complicated by dynamical interaction between the stars, which swiftly erases the correlation in position and velocity between triggered stars and the ionization front.\\ \indent The initial conditions used in our simulations are those of a bound turbulent molecular cloud. The control run, in which there is no feedback, is therefore governed by the interplay of the turbulent velocity field and the gas self--gravity. In Figure \ref{fig:vel} we plot the velocity probability distribution functions in the three simulations and compare them to that of the initial conditions. We see that the peak in the distributions has fallen from $\approx2.5$ km s$^{-1}$ initially to $\approx1.5$ km s$^{-1}$, due to the dissipation of turbulent kinetic energy, and that both feedback runs exhibit a broader high--velocity tail and slightly less very low--velocity material. Overall, however, the effect of external feedback on the velocity field is clearly slight and the driving of turbulence by feedback is weak.\\ \indent It is likely that an ionizing source with a significantly higher photon luminosity, or one placed closer to our model clouds, may produce a stronger triggering (or disruption) effect, since a higher photon flux at the cloud surface and a faster photoevaporation rate would result. However, as explained in Section 2, the photon luminosity we have used is appropriate for a small star cluster of mass comparable to the mass of our clouds and likely to have a radius comparable to the chosen initial separation between the source and the cloud edge. We cannot therefore increase the incident photon flux significantly without invoking a much brighter (and, by implication, more massive) radiation source, or moving the source much closer to the target clouds. In either case, it would be unrealistic to ignore the \emph{gravitational} influence such a source should have on the clouds. 
We therefore consider the photon flux resulting from our choice of source and separation to be towards the high end of what is realistic for the irradiation of the clouds considered in this paper.\\ \indent The use of different target clouds may also lead to stronger or weaker effects of feedback. The most important factor restraining the influence of photoionization in these calculations is the high--density gas in the central region of the cloud where most of the star formation takes place regardless of feedback. Clouds in which the gas density is lower are likely to feel the effects of photoionization more strongly. For a cloud with mass $M$ and initial radius $R$, the initial gas density $\rho_{0}\sim M/R^{3}$. If we insist that the virial ratio is constant and assume that the gas thermal energy is negligible in comparison to the turbulent kinetic energy, $M/R\sim v_{\rm RMS}^{2}$, where $v_{\rm RMS}$ is the root--mean--square turbulent velocity. The maximum density $\rho_{\rm MAX}$, which is likely to describe the gas where the stars begin to form, will be generated by shocking in the turbulent flows. If the shocks are approximately isothermal and the typical sound speed in the quiescent gas is $c_{\rm s}$, $\rho_{\rm MAX}\sim\rho_{0}(v_{\rm RMS}/c_{\rm s})^{2}$. An increase in the initial cloud radius of a factor of two would then decrease the initial density by a factor of eight and the maximum density by a factor of sixteen. This may be sufficient to allow feedback to have a greater influence on the cloud. However, it is not clear whether this would lead to more destruction of the cloud, or more triggering. In addition, our simulations in \cite{2007MNRAS.377..535D} had a very much lower gas density than those presented here and we found that the impact of feedback on star formation in this cloud was also rather modest. 
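The factor of eight and factor of sixteen quoted above follow directly from these scalings; holding $M$ and the virial ratio fixed while doubling $R$,
\[
\rho_{0}\sim\frac{M}{R^{3}}\;\Rightarrow\;\rho_{0}\rightarrow\frac{\rho_{0}}{8},
\qquad
v_{\rm RMS}^{2}\sim\frac{M}{R}\;\Rightarrow\;v_{\rm RMS}^{2}\rightarrow\frac{v_{\rm RMS}^{2}}{2},
\]
so that, at fixed sound speed $c_{\rm s}$,
\[
\rho_{\rm MAX}\sim\rho_{0}\left(\frac{v_{\rm RMS}}{c_{\rm s}}\right)^{2}
\;\Rightarrow\;
\rho_{\rm MAX}\rightarrow\frac{1}{8}\cdot\frac{1}{2}\,\rho_{\rm MAX}=\frac{\rho_{\rm MAX}}{16}.
\]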
It is also possible that increasing the velocity of the photoionization--driven shocks relative to the turbulent shocks, either by lowering $v_{\rm RMS}$ or by lowering the gas density so that mass--loading does not slow the feedback--generated shocks so much, would allow feedback to have a greater influence on the cloud but, again, it is not clear whether this influence would be more positive or more negative. A full evaluation of these questions demands a parameter study, which we defer to later work. \begin{figure} \includegraphics[width=0.45\textwidth]{figure14.eps} \caption{Comparison of the density PDFs of the top run (blue) and the control run (black) in the region shown in Figure \ref{fig:trigtopright} at a time of 0.636Myr.} \label{fig:dcomp_top} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{figure15.eps} \caption{Comparison of the density PDFs of the bottom run (red) and the control run (black) in the region shown in Figure \ref{fig:trigbotright} at a time of 0.595Myr.} \label{fig:dcomp_bot} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{figure16.eps} \caption{Velocity probability distribution functions for the initial conditions of the simulations (black dashed line) and the final states of the control run (black solid line), the top run (red solid line) and the bottom run (blue solid line).} \label{fig:vel} \end{figure} \section{Conclusions} We find that the effect of external photoionizing irradiation by an O--star on our bound molecular cloud is modest. Although the evaporation of gas alters the morphology of the gas quite strongly in some regions of the cloud, the effect on the rate and efficiency of star formation is small. The effects of feedback on the stellar mass functions were also statistically negligible. 
We also find that the morphology of the stellar clusters produced in our feedback runs is very similar to that in the control run, and that the majority of stars whose formation was triggered cannot be distinguished from their spontaneously--formed siblings by their positions or velocities with respect to the ionization front. Only in the lower--density peripheral regions of the cloud, near the ionizing sources, do distinct populations of triggered stars form.\\ \indent Feedback has modest effects on the density and velocity probability density functions in our clouds. Instead, the turbulent velocity field and the density field it generates with the assistance of gravity are the dominant agencies controlling the tempo and mode of star formation. This need not always be the case and further studies are required to evaluate the dependence of this conclusion on, for example, the turbulent velocity field, the boundedness of the cloud and the luminosity of the photoionizing radiation source. \section{Acknowledgements} We thank the anonymous referee for insightful suggestions which significantly improved the paper.
\section{Introduction} Consider how a newborn baby learns to walk. Initially, the baby still lacks motor skills to move around. Nevertheless, she can actively move her eyes around to collect abundant visual data, with which she can potentially learn a predictive model that predicts how she should move her eyes from one view to the next view. The predicted eye movements can thus be a useful representation for navigation, and can be bootstrapped to learn walking from one view to the next view in a few shots once she acquires the locomotive capability. Similarly, in the case of visual navigation, an intelligent agent should first learn how to move its sensor around to find a promising direction to explore before executing any locomotive actions. Concretely, given a current image and a goal image, the agent should encode them into a meaningful state representation that makes the downstream motor policy easy to learn. But what is the right feature space for predicting a navigation command? A potentially useful representation can be the relative $2D$ transformation between two views. If we can locate a goal image as a crop centered inside the current field of view, the agent should move forward to get closer. If the crop is to the right of the current field of view, the agent should then turn right, and vice versa. The relative $2D$ location of the two views can thus be parametrized by the location and size of a crop of one view that corresponds to the other view. This parametrization is analogous to PTZ factors a camera uses to pan, tilt and zoom onto the current view in order to focus on the goal part. Such a PTZ vector encodes the relative spatial transformation between two views, and can be used to learn a downstream navigation policy. Since the PTZ encoding is low-dimensional compared to the pixel representation of two views, learning a navigation policy from PTZ should require far less robot interaction data than learning directly from pixel inputs. 
Additionally, an embedding space learned directly from pixels is likely to suffer from distribution shift as we move from training images to testing images, thus adversely affecting the downstream policy’s accuracy. On the contrary, a robot policy that takes the PTZ parametrization as input, which only captures the relative transformation, will be insulated from changes in visual statistics as we shift from one image domain to the other. The goal of this project is to test the hypothesis that a pre-trained PTZ predictor can lead to efficient learning of a navigation policy with little robot interaction data in a never-before-seen environment. Crucially, we train the PTZ module by predicting random crops of training environment images in a self-supervised fashion, so the data involved does not count towards robot interaction data. Our major contribution is that we show that training on random noise images can also produce a sufficiently performing PTZ encoder for downstream navigation tasks, even in photo-realistic testing environments. \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{figs/framework.pdf} \end{center} \caption{The visual navigation frameworks. Top $(a)$: an end-to-end training baseline, where a generic feature space is jointly learned by minimizing cross-entropy loss between $a_t$ and $\hat{a_t}$ for all image-action sequences in $\boldsymbol{D}$. Bottom: (b) the proposed encoder pre-training, where we assume (c) the goal view can be seen as a crop from the current view and use the relative panning, tilting and zooming factors (PTZ) between two (synthetic, e.g., random noise) image crops to perform the pre-training.} \label{fig:framework} \end{figure*} \input{related} \section{Method} Given a robot interaction dataset $\boldsymbol{D}$, we want an agent to navigate to a goal location upon receiving a goal image $x_g$. 
The dataset $\boldsymbol{D}$ consists of image-action tuples $(x_t, a_t, x_{t+1})$, where $x_t$ is the pixel observation at time $t$, $a_t$ the action taken at time $t$, and $x_{t+1}$ the pixel observation after the action. One approach to learn a navigation policy is to train an inverse model on $\boldsymbol{D}$, which predicts an action given a current view and a goal view \cite{agrawal2016learning}. As memory of past states and actions benefits navigation with only partial observations \cite{pathak2018zero}, we divide the inverse model into an encoder $E_\phi$ that encodes image pairs into states and an LSTM policy $\pi_{\theta}$ that predicts actions conditioned on states as in Eq~\eqref{eq:inv} \begin{equation} \hat{s}_t = E_{\phi}(x_t, x_{t+1}) \qquad \hat{a}_t = \pi_{\theta}(\hat{s}_t). \label{eq:inv} \end{equation} We can train $E_{\phi}$ and $\pi_{\theta}$ jointly end-to-end by minimizing cross-entropy loss between $a_t$ and $\hat{a_t}$ for all image-action sequences in $\boldsymbol{D}$ as shown in the top part of Fig~\ref{fig:framework} (a). The benefit of such an approach is that the system can automatically find the right parameters for a generic encoder such that the learned state representation $s_t$ is useful for the downstream policy. The lack of inductive bias, however, requires a lot of interaction data to train the whole pipeline, which can be expensive to collect. While using interaction data to supervise the navigation policy $\pi_{\theta}$ is fair, using it to also train a generic feature encoder $E_{\phi}$ seems wasteful. Specifically, the encoding of an image pair should not depend on the action space, and it should work even when the two images are not associated with a particular action. Additionally, the fact that in navigation the action space is typically low-dimensional also means most data will be used to learn a mapping from high-dimensional pixels to low-dimensional states rather than from states to actions. 
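A minimal sketch of the modular decomposition in Eq~\eqref{eq:inv}: a low-dimensional state (here assumed to be the 3-dim PTZ vector) is fed, together with a one-hot previous action, into an LSTM that emits action logits. The 512-unit hidden size matches the policy described later; the one-hot action encoding is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class LSTMPolicy(nn.Module):
    """pi_theta in Eq (eq:inv): maps state sequences to action logits."""
    def __init__(self, state_dim=3, n_actions=4, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(state_dim + n_actions, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, states, prev_actions):
        # states: (B, T, state_dim), e.g. PTZ factors from a frozen E_phi;
        # prev_actions: (B, T, n_actions) one-hot previous actions
        h, _ = self.lstm(torch.cat([states, prev_actions], dim=-1))
        return self.head(h)  # (B, T, n_actions) logits

policy = LSTMPolicy()
states = torch.zeros(2, 5, 3)       # batch of 2, sub-trajectory length 5
prev = torch.zeros(2, 5, 4)
logits = policy(states, prev)
```

A frozen PTZ encoder would supply `states`, so only the LSTM's parameters consume interaction data during training.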
To focus interaction data on learning the policy $\pi_{\theta}$, we can introduce a pre-trained encoder as a fixed feature extractor to replace the generic encoder. To find out what information the feature encoder should encode, we observe in Fig~\ref{fig:framework} (c) that a goal view can be seen as a crop from the current view. The relative location and size of the crop indicates the relative heading and distance of the agent from the goal location. Thus the relative transformation between the two views can be parametrized by the panning angle $p$, tilting angle $t$, and zooming factor $z$, which a PTZ camera can use to shift its current field of view to a goal field of view. We hypothesize that such a 3-DOF parametrization is sufficient for local navigation where a goal is close to the agent. The low dimensionality of this action space is desirable as the mapping between states and actions can be learned efficiently from a few data points. We will now discuss our implementation of the PTZ encoder that predicts $(p, t, z)$ given two images. \begin{figure}[t] \begin{center} \includegraphics[width=0.8\linewidth]{figs/crop.pdf} \end{center} \caption{Examples of the random crop generation process, where the black boxes are the original images, the red boxes the generated current views, and the green boxes the generated goal views.} \label{fig:crop} \end{figure} $\textbf{PTZ Encoder}$ Similar to how digital PTZ is implemented by cropping the current view to generate the panned, tilted, and zoomed view, we approximate learning a PTZ encoder with learning a random crop predictor. Given a $256\times256$ image, we randomly crop a $128\times128$ pixel patch to form the current view. For the goal view, we randomly sample a scale factor $z$ from $0.5-1$ and $(p, t)$ from $0-1$ relative to the top left corner of the first crop to generate the second crop at the location $(128p, 128t)$. We then resize the second crop to a $128\times128$ pixel patch. 
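This sampling procedure might be sketched as follows; the exact clipping convention used to keep the goal crop inside the image is an assumption, since the text only specifies the sampling ranges:

```python
# Sketch of random-crop sampling for PTZ training pairs.
import numpy as np

rng = np.random.default_rng(0)

def sample_pair(image):
    """Return (current crop, goal crop, PTZ label) from a 256x256 image."""
    H = W = 256
    cur_y, cur_x = rng.integers(0, H - 128 + 1, size=2)
    current = image[cur_y:cur_y + 128, cur_x:cur_x + 128]

    z = rng.uniform(0.5, 1.0)             # zoom: goal crop side = 128 * z
    p, t = rng.uniform(0.0, 1.0, size=2)  # pan/tilt relative to current crop
    side = int(round(128 * z))
    gy = min(cur_y + int(128 * t), H - side)  # clip to stay inside the image
    gx = min(cur_x + int(128 * p), W - side)
    goal = image[gy:gy + side, gx:gx + side]
    # the goal crop would then be resized back to 128x128 before training
    return current, goal, (p, t, z)

img = rng.random((256, 256))
cur, goal, (p, t, z) = sample_pair(img)
```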
Fig~\ref{fig:crop} visualizes the random crop generation process. Additionally, we also generate pairs of crops without any overlap. This helps scenarios where the goal view is not directly observable in the current field of view. We assign PTZ label $(0, 0, 0)$ to such crop pairs corresponding to zero overlap. The ratio between crops that share some overlap and crops that do not overlap at all is 2:1 in our dataset. We will verify later in ablation that including non-overlapping crops prompts the agent to exhibit exploration behavior when it does not see the goal and improves overall navigation performance. We train a ResNet18 network to regress the PTZ factors $(p, t, z)$ on concatenated crop pairs from image sources other than the interaction dataset $\boldsymbol{D}$, such as standalone natural home images (without action labels) or synthetic images, as shown in Fig~\ref{fig:framework} (b). We replace ResNet18's fully-connected layer with a linear layer that projects onto a 3-dimensional space followed by a sigmoid activation, and use L1 loss for training. \begin{figure}[t] \begin{center} \includegraphics[width=0.8\linewidth]{figs/noise.pdf} \end{center} \caption{Examples of random noise types. From left to right: Perlin noise, fractal noise, and random geometric shapes.} \label{fig:noise} \end{figure} $\textbf{Training PTZ Encoding on Random Noise}$ One potential concern in dividing the whole pipeline into parts and training each part separately is that a well-trained feature encoder from one domain may not transfer well to the navigation task domain. A quick fix is that we can collect data to train the PTZ encoder from the same image domain as the navigation task, i.e., natural home images. Crucially, this data is not interaction data, as the agent does not need to execute any navigation actions to collect it. For example, an agent can stand still in the environment and move its sensors around to collect data. 
An interesting finding as we train the PTZ encoder on natural home images is that the performance transfers well from the training domain to the testing domain even when we take images from significantly different-looking environments. This suggests that learning to predict relative $2D$ transformations is not tied to the visual statistics of the images on which the learning occurs. We are thus motivated to ask whether we can train the PTZ encoder entirely on randomly generated noise images and still transfer to natural home images. To answer this question, we first trained our PTZ encoder on Gaussian noise. The resulting poor performance suggests that the particular choice of noise is critical. We hypothesize that patterned noise rather than high-frequency noise should be more useful, as the encoder probably needs some visual cues to find relative transformations. To this end we include Perlin noise, which can be used to simulate cloud formations in the sky, and fractal noise, which can be found in nature \cite{kataoka2020pre}, in the dataset to train the encoder. We further include random geometric shapes as they are found in man-made environments and can help the encoder learn edges and orientations. A sample of these three different kinds of random noise is shown in Fig~\ref{fig:noise}. We follow the same procedure as before to sample random crops on these noise images. Using noise for pre-training completely removes the need to access a testing environment for data. $\textbf{PTZ-enabled Navigation Policy}$ Given a pre-trained PTZ encoder $E_{\phi}$ and the interaction dataset $\boldsymbol{D}$, we can now train the LSTM navigation policy $\pi_{\theta}$. Specifically, we sample sub-trajectories up to a maximum trajectory length from every image-action sequence in $\boldsymbol{D}$. We use $E_{\phi}$ as a fixed feature extractor that encodes image pairs into PTZ factors, which become the inputs to $\pi_{\theta}$. Additionally, the previous action is also given to the policy following \cite{pathak2018zero}. 
We use a single-layer LSTM network with 512 hidden units and ReLU activation to regress the navigation action with L1 loss. During inference, the LSTM can then be queried with a current view and a goal view at each time step, and predicts an action autoregressively until the agent reaches the goal or the episode terminates. Notice that if sub-trajectories of maximum length $1$ are sampled, the LSTM policy is effectively trained as a feed-forward network that does not use memory from previous time steps. $\textbf{End-to-End Baseline Policy}$ We train the previously described $E_{\phi}$ and $\pi_{\theta}$ jointly end-to-end by directly predicting $a_t$ given $(x_t, x_{t+1})$. We use ResNet18 as the generic encoding network and modify its fully-connected layer to project onto a 128-dimensional embedding space. The same LSTM then uses the 128-dimensional state plus the previous action to predict the next action. The whole pipeline is trained only on interaction data using cross-entropy loss. \section{Experiments} \subsection{Interaction Data Collection} We use Habitat-Sim as our simulator to render photo-realistic images from 3D apartment models in the Gibson Dataset and a simulated turtlebot as our agent to collect interaction data. The robot action space consists of four actions: stop, move forward by 0.3m, turn left by 20 degrees, and turn right by 20 degrees. To collect interaction data, the robot explores the environment with the following scheme. The agent is first placed at a random location in the environment. The four robot actions are sampled according to the probabilities $[5\%, 31.67\%, 31.67\%, 31.67\%]$. For each sampled action, the agent repeats it uniformly at random 1-2 times if it is `stop,' and uniformly 1-5 times otherwise. In the event of a collision, the current repeat sequence is aborted, and a turning action is sampled and repeated 5-13 times uniformly at random. Each episode terminates after 50 actions. 
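The exploration scheme above might be sketched as follows; the collision branch is reduced to a comment since the simulator interface is not shown in the text:

```python
# Sketch of the random-exploration scheme used to collect interaction data.
import random

ACTIONS = ["stop", "forward", "left", "right"]
WEIGHTS = [5, 31.67, 31.67, 31.67]  # sampling probabilities (percent)

def explore_episode(max_actions=50, rng=None):
    rng = rng or random.Random(0)
    episode = []
    while len(episode) < max_actions:
        a = rng.choices(ACTIONS, weights=WEIGHTS)[0]
        repeats = rng.randint(1, 2) if a == "stop" else rng.randint(1, 5)
        for _ in range(repeats):
            if len(episode) == max_actions:
                break
            episode.append(a)
            # on collision, one would abort this repeat sequence here and
            # instead repeat a freshly sampled turning action 5-13 times
    return episode

ep = explore_episode()
```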
For training, we choose ten Gibson environments---`Crandon,' `Delton,' `Goffs,' `Oyens,' `Placida,' `Roane,' `Springhill,' `Sumas,' `Superior,' and `Woonsocket.' We create a 20k/10k training/validation set and a 50k/10k training/validation set by sampling 40/20 and 100/20 starting locations, respectively, in each of the ten environments. We also create a small 1k/1k training/validation set by sampling 20 starting locations from `Superior' and 20 starting locations from `Crandon,' respectively. Collectively, we have created three interaction datasets: $\boldsymbol{D}_{2k}$, $\boldsymbol{D}_{30k}$ and $\boldsymbol{D}_{60k}$. \subsection{PTZ Training Data Collection} First, we generate a training set from domains similar to the navigation experiments. Specifically, we sample 6500 photo-realistic home images sourced from 65 Gibson environments rendered by the Habitat-Sim simulator to form the training set, and 2300 home images from 23 other Gibson environments to form the test set. Notice that these images are generated i.i.d. without any action labels. We refer to this dataset as $\boldsymbol{D}_{Habitat}$. Second, we generate Perlin noise and fractal noise using \cite{Vigier2018}. Perlin noise is generated with 2, 4, and 8 periods, and fractal noise is generated with 2, 4, and 8 periods and 1-5 octaves. We generate 10k Perlin noise images, 10k fractal noise images, and 20k random shape images to form a 40k noise dataset $\boldsymbol{D}_{noise}$. However, this particular composition of noise is somewhat arbitrary, as we do not yet know which kind is best for PTZ encoder training. To uncover which noise is the best surrogate data source for natural home images, we also create 40k datasets $\boldsymbol{D}_{Perlin}$, $\boldsymbol{D}_{fractal}$ and $\boldsymbol{D}_{shape}$, each containing only one kind of noise.
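On all of these datasets, PTZ supervision comes from pairs of random crops. A minimal sketch of the sampling and labelling follows; the square-crop parametrization, the normalisation of the labels, and the $(0,0,0)$ sentinel for non-overlapping pairs are our assumptions about details not fully specified in the text:

```python
import random

def sample_crop(img_size, min_size=64):
    """Sample a square crop (cx, cy, s): center and side length."""
    s = random.randint(min_size, img_size // 2)
    cx = random.randint(s // 2, img_size - s // 2)
    cy = random.randint(s // 2, img_size - s // 2)
    return cx, cy, s

def ptz_label(current, goal, img_size):
    """Relative pan/tilt/zoom taking the current crop onto the goal crop.

    Non-overlapping pairs are mapped to the sentinel label (0, 0, 0),
    as described in the text; the normalisation is illustrative.
    """
    (cx1, cy1, s1), (cx2, cy2, s2) = current, goal
    # Overlap test for the two axis-aligned squares.
    if abs(cx1 - cx2) >= (s1 + s2) / 2 or abs(cy1 - cy2) >= (s1 + s2) / 2:
        return (0.0, 0.0, 0.0)
    pan = (cx2 - cx1) / img_size   # horizontal shift
    tilt = (cy2 - cy1) / img_size  # vertical shift
    zoom = s1 / s2                 # >1: the goal view is zoomed in
    return (pan, tilt, zoom)
```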
\subsection{PTZ Pre-training} We train our PTZ encoder with 5 different data sources: $\boldsymbol{D}_{Habitat}$, $\boldsymbol{D}_{noise}$, $\boldsymbol{D}_{Perlin}$, $\boldsymbol{D}_{fractal}$ and $\boldsymbol{D}_{shape}$, test them on the Habitat natural home image test set, and report the results in Tab~\ref{tab:noise_comp}. Although training on $\boldsymbol{D}_{Habitat}$ converges quickly and achieves near-perfect test performance, training on noise images with the same cropping scheme proves slow to converge. One empirical observation is that non-overlapping crops confuse the encoder before it has learned to predict the PTZ between two overlapping crops well. If we first train the encoder only with overlapping crops to convergence before mixing in non-overlapping crops, we obtain high prediction accuracy for both non-overlapping and overlapping crops. We call this indispensable staggered training a curriculum for training with noise. Once we have the pre-trained PTZ encoder $E_{\phi}$, we fix its weights and optimize only the LSTM weights in $\pi_{\theta}$ as we train the navigation system with interaction data $\boldsymbol{D}$. \subsection{Navigation Task} To test our hypothesis that a pre-trained PTZ encoder can improve data efficiency, we consider a local navigation task, where the goal is in the same room as the agent. We choose five environments---`Beach,' `Eastville,' `Hambleton,' `Hometown,' and `Pettigrew'---and evaluate our PTZ-enabled navigation policy on multi-step navigation. Specifically, we sample 30 starting and goal locations in each of the testing environments such that the start and the goal are five forward steps apart. We then randomize the heading such that the goal can be in any direction, including behind the agent. Such a design tests not only moving towards the goal when the goal is visible, but also the exploration capability when the goal is not initially visible.
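The two-stage curriculum for noise pre-training can be sketched as a simple switch on the training pool; the fixed warm-up length below is illustrative, whereas the text trains the first stage to convergence:

```python
def curriculum_pool(overlap_pairs, nonoverlap_pairs, epoch, warmup=10):
    """Two-stage curriculum: only overlapping crop pairs during the
    warm-up stage, then the full mixture of overlapping and
    non-overlapping pairs.
    """
    if epoch < warmup:
        return list(overlap_pairs)
    return list(overlap_pairs) + list(nonoverlap_pairs)
```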
This partial observation of the space prompts the use of an LSTM instead of just a feed-forward network, as the agent needs to remember where it has explored when it has yet to find the goal, in order to avoid back-and-forth actions. Since trajectories in $\boldsymbol{D}$ are $50$ actions long, we also experimented with sampling from them sub-trajectories of different lengths as inputs to the system. We hypothesize that the longer the sub-trajectories we feed into the LSTM when training end-to-end, the better the test performance the system will produce, as it has seen more variations and developed a more nuanced memory of past experience. However, we do not know if this assumption will hold once we introduce the PTZ encoding into the pipeline. We will run inference experiments to clarify these questions. To infer a trajectory, the agent auto-regressively predicts the next action given the current view and goal view until it uses up 50 steps or reaches the goal. To determine whether an agent arrives at an observation that is close enough to the goal image, we use perceptual loss \cite{zhang2018unreasonable} to measure the similarity between the two observations in the eye of a human. If the similarity score exceeds a threshold of 0.6 while the agent is within a 0.5m radius of the goal location, we consider that the agent has successfully reached the target.
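The success criterion combines the perceptual similarity score with the metric distance to the goal; a direct transcription of the rule above:

```python
def reached_goal(perceptual_similarity, distance_m,
                 sim_threshold=0.6, radius_m=0.5):
    """Evaluation-time success test: the current view must look similar
    enough to the goal view AND the agent must be physically within the
    goal radius. Thresholds follow the values stated in the text."""
    return perceptual_similarity > sim_threshold and distance_m <= radius_m
```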
\section{Results} \subsection{PTZ Encoding Evaluation} \begin{table} \begin{center} \begin{tabular}{|l|c|c|} \hline Data & Overlap-IOU & Non-Overlap \\ \hline\hline Shape & 72.1 $\pm$ 0.4\% & 48.2 $\pm$ 1.1\% \\ Perlin & 61.6 $\pm$ 0.4\% & 65.3 $\pm$ 0.6\% \\ Fractal & 87.3 $\pm$ 0.5\% & 80.1 $\pm$ 0.6\% \\ All noise combined & 92.2 $\pm$ 0.1\% & 93.2 $\pm$ 0.5\% \\ Habitat & 97.1 $\pm$ 0.1\% & 98.8 $\pm$ 0.1\% \\ Habitat w/o non-overlap & 96.4 $\pm$ 0.1\% & 1.5 $\pm$ 0.1\% \\ Fractal w/o curriculum & 78.0 $\pm$ 0.4\% & 2.7 $\pm$ 0.3\% \\ \hline \end{tabular} \end{center} \caption{Performance comparison of PTZ encoders on the 2300 natural home image test set. When the given goal view (partially) overlaps with the current view, we use the IOU between the ground truth and predicted bounding boxes of the goal image in the current view as the evaluation metric. When the given goal view does not overlap with the current view, we set the ground truth PTZ label to (0,0,0); success is then defined by whether the encoder outputs values close enough (within a pre-defined tolerance) to (0,0,0). We report the success rates for this non-overlapping case. } \label{tab:noise_comp} \end{table} Let us first examine how well our PTZ encoder trained on noise performs on the 2300 natural home image test set. We evaluate our model by calculating the IOU between the ground truth and predicted bounding boxes of the goal image in the current view when the goal can be at least partially seen (the views partially overlap). As the model predicts more accurate PTZ factors, the predicted bounding box overlaps more with the ground truth bounding box and the IOU approaches 1. If the two images do not overlap, we report the rate at which the model predicts a state close to the ground truth $(0, 0, 0)$, which corresponds to no detection of the goal.
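The overlap metric in Tab~\ref{tab:noise_comp} is a standard intersection-over-union between the predicted and ground-truth boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes, each given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the intersection rectangle (0 if disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```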
For example, if we pre-train the PTZ module with Perlin noise and test with a goal view that does not appear in the current view, then by our definition the PTZ module should output values close to $(0,0,0)$. With a pre-defined tolerance, around $65.3\pm 0.6\%$ of non-overlapping cases successfully give PTZ values close enough to $(0,0,0)$. In Tab~\ref{tab:noise_comp}, we show the mean and standard deviation of the inference performance on both overlapping and non-overlapping image pairs for PTZ encoders trained on different data sources. Training on natural home images $\boldsymbol{D}_{Habitat}$ naturally produces the highest accuracy. However, we observe that training on all three noises combined $\boldsymbol{D}_{noise}$ produces competitive results without seeing a single natural home image. This suggests that the PTZ relationship between two views is independent of the underlying visual statistics and can transfer well from one domain to another. This property allows for stable training of the downstream LSTM policy, as the PTZ representation will be consistent across different visual domains. It also suggests that we do not need to collect new data to fine-tune our navigation policy when we move from one environment to another. We show qualitative inference results of the PTZ encoder in Fig~\ref{fig:inf_traj}, where the green bounding boxes indicate where the PTZ encoder predicts the goal crop in the current view. To understand which noise is the most helpful for pre-training, we train the PTZ encoder on the individual noise datasets $\boldsymbol{D}_{Perlin}$, $\boldsymbol{D}_{fractal}$ and $\boldsymbol{D}_{shape}$. We see in Tab~\ref{tab:noise_comp} that training on fractal noise to convergence outperforms Perlin noise and random shapes and approaches the performance of all noise combined. This result is in line with the finding in \cite{kataoka2020pre} and indicates that natural home images may share more visual statistics with fractal noise than with the other noise types.
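Fractal-like noise of this kind is cheap to generate. Below is a crude value-noise stand-in (sums of coarse random grids over octaves with decaying amplitude), not the Perlin/fractal generators of \cite{Vigier2018}, which use gradient noise and smooth interpolation:

```python
import numpy as np

def value_noise(size, period, rng):
    """Blocky value noise: a coarse period x period random grid
    upsampled (nearest-neighbour) to size x size."""
    grid = rng.random((period, period))
    reps = size // period  # assumes period divides size
    return np.kron(grid, np.ones((reps, reps)))

def fractal_noise(size=256, periods=(2, 4, 8), octaves=5, rng=None):
    """Sum value-noise octaves with halving amplitude, roughly mimicking
    the 1/f structure of fractal noise; normalised to [0, 1]."""
    rng = rng or np.random.default_rng(0)
    img = np.zeros((size, size))
    for base in periods:
        amp = 1.0
        for o in range(octaves):
            p = base * (2 ** o)
            if p > size:
                break
            img += amp * value_noise(size, p, rng)
            amp *= 0.5
    return (img - img.min()) / (img.max() - img.min())
```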
The PTZ encoders trained on noise go through a two-stage training curriculum, where we first train with only overlapping images before adding non-overlapping ones. This curriculum is crucial to the performance at convergence. If we train on noise images, say $\boldsymbol{D}_{fractal}$, with overlapping and non-overlapping crops concurrently from the start, the performance is lower for the overlapping IOU and significantly lower for the non-overlapping success rate, as shown by the results of `Fractal w/o curriculum' in Tab~\ref{tab:noise_comp}. On the one hand, this suggests that prediction on overlapping and non-overlapping images leverages similar features, so the two tasks can enhance each other when trained together in a curriculum. On the other hand, it might be easier to find those features by training on overlapping images alone at first, and then to bootstrap from those features to learn to recognize images that do not overlap at all. One might ask: if non-overlapping images complicate the training, is it necessary to include them? We argue for their necessity, as otherwise the prediction on non-overlapping image pairs would be arbitrary and could lead to erroneous inputs to the LSTM policy. We will show in the subsequent section that training the whole system with a PTZ encoder pre-trained on natural home images but without non-overlapping crops gives inferior results. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{figs/nav_inf1.pdf} \end{center} \caption{Navigation results comparing the data efficiency of the PTZ-enabled model and training from scratch.
With more interaction data (60k), the end-to-end baseline finally performs decently (approaching an 80\% success rate), compared to the over 80\% success rate that our proposal achieves with only 2k interaction data.} \label{fig:nav_inf1} \end{figure} \subsection{Navigation Policy Evaluation} In Fig~\ref{fig:nav_inf1} we show the percentage of successful runs among $30$ rollouts for each environment. We compare results from policies trained on the interaction data $\boldsymbol{D}_{2k}$, $\boldsymbol{D}_{30k}$ and $\boldsymbol{D}_{60k}$, either with the PTZ-enabled LSTM or with a vanilla LSTM trained from scratch, and with maximum trajectory lengths 1, 5 and 15. For a given maximum trajectory length $n$ we sample sub-trajectories of length 1-$n$ from the collected 50-step image-action sequences. When $n=1$ we only feed single-action sequences to the LSTM, effectively training it as a feed-forward network without its recurrent memory. We include experiments on feed-forward networks here because \cite{hasenclever2020comic} reports that feed-forward networks are more reactive to state inputs, while LSTMs tend to blindly mimic the trajectories in the data source. Since the agent collects the interaction data through random exploration, the data is highly sub-optimal. We hope that including feed-forward network experiments will provide insight into how much this sub-optimality affects LSTM policies. We see in Fig~\ref{fig:nav_inf1} that without the PTZ encoding, the success rate of navigating to the goal location increases as we train the whole pipeline with more data. Since the system needs to figure out the right state representation by itself, feeding longer sequences benefits learning, but it also introduces more reliance on the behavior demonstrated by the scripted random exploration in the interaction data. However, once we introduce the PTZ encoding, such support becomes unnecessary.
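The sub-trajectory sampling used in these comparisons can be sketched as follows; the uniform choice of length and start position is our assumption about the exact sampling distribution:

```python
import random

def sample_subtrajectory(observations, actions, max_len, rng=random):
    """Sample a sub-trajectory of length 1..max_len from a 50-step
    episode. max_len = 1 degenerates to single-step pairs, which trains
    the LSTM as a feed-forward policy."""
    n = rng.randint(1, max_len)
    start = rng.randint(0, len(actions) - n)
    obs = observations[start:start + n + 1]  # n actions span n+1 views
    acts = actions[start:start + n]
    return obs, acts
```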
Specifically, a PTZ-enabled policy trained with only 2k interaction data and one-action sequences already outperforms an end-to-end system trained from scratch with 60k data. As we increase the action sequence length used to train the PTZ-enabled system, the performance actually drops. This is potentially due to the fact that the LSTM tries to memorize the action sequences in the interaction data even though they are sub-optimal. So far we have only demonstrated results for a PTZ encoder trained with all three noise types. If we use PTZ encoders pre-trained with individual noise sources as shown in Tab~\ref{tab:noise_comp}, what will the navigation success rate be? We show in Fig~\ref{fig:nav_inf2} that the PTZ encoder trained with fractal noise outperforms those trained with Perlin noise and random shapes. Although the evaluation metrics in Tab~\ref{tab:noise_comp} show that training on fractal noise is inferior to training on all noise combined, the navigation results show that fractal noise alone is sufficient for pre-training to achieve a high success rate on the downstream task. We also show in Fig~\ref{fig:nav_inf2} that using a PTZ encoder pre-trained without non-overlapping images (`2k\_ptz\_lstm\_1\_habitat') indeed gives rise to a poor navigation policy. Thus, pre-training a PTZ encoder on noise for navigation tasks benefits from a curriculum of training first on overlapping crops and then adding non-overlapping crops. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{figs/nav_inf2.pdf} \end{center} \caption{An ablation study. Navigation results comparing the effect of the different noise types used in pre-training. The performance gap between the PTZ encoder trained only with fractal noise (brown bar) and the PTZ encoder trained with all noise types (blue bar) is marginal, demonstrating the sufficiency of learning a PTZ module with fractal noise alone for navigation tasks.} \label{fig:nav_inf2} \end{figure} \section{Conclusion} In this project, we focus on visual pre-training for navigation.
As training in an end-to-end fashion requires a significant amount of data ($\sim$60k as shown in Figure~\ref{fig:nav_inf1}), we break the system into two modules: a feature encoder module (the PTZ module) and an LSTM policy-making module, where the first part can be effectively pre-trained without the use of expensive interaction data. Three popular noise types are included in pre-training the PTZ module and their effectiveness is extensively evaluated. Promising experimental results verify the usefulness of our proposal in reducing the need for interaction data. {\small \bibliographystyle{ieee_fullname} \section{Related Works} Self-supervision has recently emerged as one of the most promising approaches to ease the need for supervision while maintaining high performance. Self-supervision builds on the fact that pretext tasks can be very useful for pre-training networks without the need for expensive manual annotations. With such pre-trained networks, only a modest amount of labelled data is needed to fine-tune for a target task. As early as 1981, the authors of \cite{lucas1981iterative} made attempts to reduce the reliance on abundant image pair comparisons in image registration by using spatial intensity gradient information to direct the search for the position that yields the best match. By taking more information about the images into account, the technique is able to find the best match between two images with far fewer comparisons. Similarly, the early work of \cite{becker1992self} on perceptual learning replaced the external teacher with internally derived teaching signals, generated under the assumption that different parts of the perceptual input have common causes in the external world.
Specifically, modules that look at different modalities, such as vision and touch, or at the same modality at different times, such as consecutive 2D views of a rotating 3D object, or at spatially adjacent parts of the same image, tend to produce outputs that agree with each other. Such resource-efficient learning methods have also been exploited in the recent decade \cite{Agrawal2015Learning,Doersch2015Unsupervised,jenni2018self,kim2018learning,larsson2016learning,mahendran2018cross,misra2016shuffle,pathak2017learning,pathak2016context,wang2015unsupervised,wang2017transitive,zhang2017split,misra2020self,noroozi2016unsupervised,arandjelovic2017look,isola2015learning} with the popularity of deep learning. Among them, the authors of \cite{Agrawal2015Learning} exploit information about egomotion (camera motion), which, unlike knowledge of class labels, is freely available to mobile agents. With the same number of training images, features learned using egomotion as supervision compare favourably to features learned using class labels as supervision on the tasks of scene recognition, object recognition, visual odometry and keypoint matching. In \cite{agrawal2016learning}, the robot gathered over 400 hours of experience by executing more than 100K pokes on different objects to learn an intuitive model of physics, which is shown to be effective for planning actions in real-world robotic manipulation tasks. In \cite{pinto2016supersizing}, the authors aim to learn to predict grasp locations via trial and error. Different from the earlier literature, they present a staged curriculum-based learning algorithm, where they learn how to grasp and use the most recently learned model to collect more data.
Furthermore, agents in \cite{pathak2018zero} explore the environment without any expert supervision and distill this exploration data into goal-directed skills, as opposed to relying on experts to provide multiple demonstrations of tasks at training time in the form of observation-action pairs from the agent's point of view. Interestingly, some pretext tasks are argued to be useful for feature learning in general. For example, \cite{zhang2016colorful} shows that colorization can be a powerful pretext task, as it serves as a cross-channel encoder. Deep features (e.g., ImageNet-trained VGG features) are also demonstrated to be remarkably useful as a training loss for tasks including image synthesis, outperforming all previous metrics by large margins in \cite{zhang2018unreasonable}. In \cite{yen2020learning}, it is found that pre-training on vision tasks (e.g., object detection) significantly improves generalization and sample efficiency for learning to manipulate objects. Consequently, directly transferring model parameters from vision networks to affordance prediction networks can result in successful zero-shot adaptation, where a robot can pick up certain objects with zero robotic experience. A comprehensive study of self-supervision was proposed recently in \cite{YM.2020A}. Specifically, the authors conclude that the weights of the early layers in a deep network contain low-level statistics of natural images, which can be learned decently through self-supervision alone or captured via synthetic transformations instead of using a large image dataset. More closely related are works that exploit the possibility of pre-training without natural images~\cite{kataoka2020pre,nakashima2021can}. Recently, the authors of~\cite{kataoka2020pre} generated image patterns and their category labels to construct FractalDB, a database without natural images, automatically assigning fractal categories based on a natural law existing in the background knowledge of the real world.
\cite{nakashima2021can} further demonstrates the usefulness of FractalDB in pre-training Vision Transformers (ViTs). Beyond fractal noise, \cite{baradad2021learning} provides a comprehensive study on how different noise types affect representation learning.
\section{Introduction} The dynamics of turbulent flows is usually described via macroscopic dynamical laws, the Navier-Stokes equations (NSE), derived from the mesoscopic Boltzmann equation \cite{Esposito1999}. They accurately describe the dynamics of averaged quantities over spatial scales much larger than the mean free path length of fluid molecules \cite{Foias01}. Nevertheless, when considering large-scale turbulent flows, even the most powerful computer on Earth fails to accurately describe their behavior at all relevant scales. Hence, there is a need to develop accurate yet efficient parameterizations for describing the impact of the unresolved scales of motion on those of interest \cite{Berner2017,Ghil2020}. One way to approach the description of fluid flows containing a large number of scales is via statistical analysis, as in the multifractal formalism developed by Parisi and Frisch \cite{Parisi85}. It is based on characterising the high-order statistics of the velocity field increments, thought to be representative of fluctuations at different scales, via a set of {\em scaling exponents} \cite{Benzi84}. Contrasting the usual idea, coming from critical phenomena, that only a countable set of scaling exponents is relevant for a complete characterization of the statistical features of fluid flows \cite{Crisanti93,Ellis99}, Parisi and Frisch \cite{Parisi85} introduced an infinite hierarchy of exponents, each belonging to a given fractal set. These exponents account for all possible (infinite) rescaling symmetries of the NSE, describing the existence of singularities in the energy cascade mechanism of turbulent flows \cite{Benzi84,Dubrulle19}. Since the development of the multifractal theory, experimental measurements of the velocity field in fluids have proved to be compatible with this picture \cite{Muzy91,Benzi91,Biferale04,Boffetta08,Benzi08,Arneodo08}.
However, this approach only provides global information on the scale-dependent properties of fluids via the probability of occurrence of a given scaling exponent. Moreover, a direct computation of the multifractal spectrum from the NSE is not possible~\cite{Lanotte15}, although it would be helpful to explore the local statistics of velocity field fluctuations \cite{Dubrulle19}. A complementary approach to the high-order statistics is provided in the framework of dissipative chaotic dynamical systems \cite{Lorenz63}, exploiting the fact that the concepts of turbulence and chaos are closely connected \cite{Ruelle71}. Indeed, three-dimensional viscous fluids, as described via the NSE, conform to this class of systems, being characterized by strange attractors, i.e., phase-space states toward which the system evolves for a wide range of initial conditions as a result of a series of bifurcations~\cite{Ruelle71}. However, the search for an attractor underlying turbulent flows has only proved partially successful so far \cite{Takens81,Miles84,Crutchfield88,Bohr05}. Indeed, several studies have suggested that the observed dynamical processes can be associated with the existence of non-hyperbolic strange and possibly stochastic attractors having a dimensionality much lower than the number of degrees of freedom of the system \cite{lucarini2016extremes,LucariniGritsun2020}. Non-hyperbolicity manifests itself in the fact that the attractor is heterogeneous in terms of its local properties of persistence and predictability \cite{LucariniGritsun2020,Faranda17}. When considering numerical models, this also has important implications in terms of error dynamics and the efficiency of data assimilation \cite{Vannitsem2016}. In this work, we use a laboratory experiment under high Reynolds number turbulent conditions to explore the active number of degrees of freedom at different scales.
For this purpose we use a time-dependent parameter providing information on the symmetries of the turbulent steady state, thus allowing us to reconstruct the underlying attractor. By combining a decomposition method, to detect scale-dependent components, with concepts from extreme value theory (EVT), to sample local properties of attractors, we trace the evolution of the geometrical and topological properties of these invariant objects across scales for a symmetric and an asymmetric turbulent state. While the former is characterized by a scale-invariant attractor, the latter features an attractor that is scale- and time-dependent, being sensitive to the emergence of an intrinsic timescale solely determined by nonlinear interactions. Furthermore, we also demonstrate that the symmetric turbulent steady state is characterized by a simple phase-space topology and geometry, resembling that of a noisy fixed point. Conversely, the asymmetric turbulent steady state displays scale-dependent non-hyperbolic features, moving from a noisy fixed-point-like structure at small scales (random attractor) towards a two-lobe chaotic attractor at large scales. Thus, because the attractor adapts its geometric and statistical properties dynamically in time with respect to the intrinsic timescale, we call such an attractor a {\em chameleon attractor}. \section{Data} Our data originate from a turbulent von Karman flow, obtained by rapidly stirring water in a vertical cylinder of length $L = 180$~mm and radius $R = 100$~mm. As a result of the forcing, turbulence develops and produces a back-reaction onto the two stirring counter-rotating impellers, measured through two torque-meters located along their common axis. The resulting torques $C_1(t)$ and $C_2(t)$ can be seen as large-scale quantities reflecting the complex behaviour of the fluid \cite{SaintMichel13}.
Similarly, the instantaneous rotation frequencies $f_1(t)$ and $f_2(t)$ of the two impellers provide a global measure of the large-scale circulation that develops under the action of the angular momentum flux \cite{Thalabard14,SaintMichel14}. Although $f_1(t)$ and $f_2(t)$ provide a 1D (time-only) projection of the full 4D (space-time) dynamics of the turbulent flow, they preserve intrinsic properties of the full turbulent system such as intermittency, bi-stability and, for special forcing conditions, a stochastic attractor. Such a situation is observed when $C_1$ and $C_2$ are constant \cite{Faranda17}; as a result, the two frequencies $f_1(t)$ and $f_2(t)$ fluctuate in time, with a typical mean frequency of $f_0 \sim 7$ Hz \cite{Faranda17}. The corresponding turbulent flow is then characterized by a Reynolds number $Re = 2\pi R^2 f_0 \nu^{-1} \sim 3 \times 10^5$, significantly exceeding the estimated critical Reynolds number for turbulence onset, $Re_T \approx 3500$. However, the time fluctuations of $f_1(t)$ and $f_2(t)$ follow an organized pattern determined by a control parameter $\gamma = \langle (C_1(t)-C_2(t))/(C_1(t)+C_2(t)) \rangle$ and traced by an order parameter $\Theta(t) = (f_1(t)-f_2(t))/(f_1(t)+f_2(t))$. When $\gamma = 0$ the turbulent state is statistically symmetric and $\Theta(t)$ fluctuates around zero \cite{SaintMichel13,SaintMichel14}. For $\gamma \ne 0$, the symmetry is broken and $\Theta(t)$ presents large-scale departures from zero. \begin{figure}[h] \centerline{\includegraphics[width=\textwidth]{Fig1}} \caption{The temporal behavior of a sample of $\Theta(t)$ for $\gamma = -0.0081$ (red line) and $\gamma = 0.0631$ (blue line). The horizontal gray line refers to $\Theta(t)=0$.} \label{fig1} \end{figure} Figure~\ref{fig1} reports the time behavior of a sample of $\Theta(t)$ for the two selected values of $\gamma$ used in this study, $\gamma = -0.0081$ (symmetric) and $\gamma = 0.0631$ (asymmetric).
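Both parameters follow directly from the measured torque and frequency series; a minimal sketch of their computation:

```python
import numpy as np

def order_parameter(f1, f2):
    """Theta(t) = (f1 - f2) / (f1 + f2), from the instantaneous
    impeller rotation frequencies."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return (f1 - f2) / (f1 + f2)

def control_parameter(c1, c2):
    """gamma = < (C1 - C2) / (C1 + C2) >, a time average over the
    torque series."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    return np.mean((c1 - c2) / (c1 + c2))
```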
As expected, we note fluctuations around zero for the symmetric case (i.e., $\gamma = -0.0081$), while large-scale transitions featuring intermittent bursts are found for the asymmetric case (i.e., $\gamma = 0.0631$). The difference between the two values of $\gamma$ can also be highlighted by looking at the spectral properties of $\Theta(t)$, as depicted by the variations of the power spectral density (PSD) across frequencies reported in Figure \ref{fig2}. \begin{figure}[h] \centerline{\includegraphics[width=\textwidth]{Fig2}} \caption{Power spectral density versus frequency for the two values $\gamma \in \{-0.0081, 0.0631\}$ as reported by the red and blue lines, respectively. The vertical dotted lines refer to the typical mean propeller frequency $f_0 = 7$ Hz and its harmonic \cite{Faranda17}.} \label{fig2} \end{figure} When $\gamma \sim 0$ the time series resembles an uncorrelated white noise, typically characterized by a flat spectrum over a wide range of scales, while for $\gamma > 0$ a turbulent spectrum emerges. Furthermore, the characteristic frequency $f_0 \sim 7$~Hz associated with the average impeller rotation frequency is also recognizable in the spectrum, together with its harmonic at $2 f_0$ \cite{Faranda17}. The spectrum then saturates to that of a white noise for $f > 20$ Hz. To account for this behavior, in the following we apply a low-pass filtering procedure with a cut-off frequency $f_{cut} \sim 20$ Hz to our time series to reduce high-frequency fluctuations, which also saves computational time in our subsequent calculations \cite{Faranda17}. \section{Methods} \subsection{Attractor reconstruction} As shown in Faranda et al. \cite{Faranda17}, the dynamical behavior of $\Theta(t)$ can be globally described by a stochastic strange attractor whose geometry depends on $\gamma$. Thus, as a first step of our procedure we reconstruct the global attractor via Takens' embedding method~\cite{Takens81}.
This amounts to translating our univariate representation of the system in terms of the time series $\Theta(t)$ into an $m$-dimensional manifold $\mathcal{M}$ via the following diffeomorphism \begin{equation} \Theta(t) \to \Theta_{m, \Delta}(t) = [\Theta(t), \Theta(t-\Delta), \Theta(t-2\Delta), \ldots, \Theta(t-(m-1)\Delta)]^\dagger \end{equation} where $\dagger$ indicates the transposition operator. The two parameters, i.e., the embedding dimension $m$ and the time delay $\Delta$, are selected according to standard criteria: the false nearest neighbor method, suggesting $m = 3$, and the time lag at which the auto-correlation function reduces to 0.5, giving $\Delta = 20$ time steps \cite[see, e.g.,][]{Faranda17}. In this way, we move from a univariate time series $\Theta(t)$ to a 3-D multivariate signal $\Theta_\mu(t) = [\Theta_1(t), \Theta_2(t), \Theta_3(t)]$. The 3-D phase-space for both values of $\gamma$ is reported in Figure \ref{fig3}. \begin{figure}[h] \centerline{\includegraphics[width=\textwidth]{Fig3}} \caption{3-D reconstruction of the full attractor of the system for $\gamma = -0.0081$ (red) and $\gamma = 0.0631$ (blue).} \label{fig3} \end{figure} A clear difference emerges between the two reconstructed attractors: while the symmetric case ($\gamma \sim 0$) is characterized by a noisy fixed-point-like structure, the asymmetric case is clearly characterized by a two-lobe attractor, like those observed for many dissipative chaotic systems \cite{Lorenz63}. However, there is additional complexity hidden in this global attractor, reflecting the scale-dependent properties of turbulence, as we demonstrate in the following. \subsection{Multivariate Empirical Mode Decomposition (MEMD)} To uncover the scale dependence, we first decompose the data into intrinsic modes by using the multivariate empirical mode decomposition \cite[MEMD, ][]{Rehman10}, the multivariate extension of the standard empirical mode decomposition (EMD) \cite{Huang98}.
It is an algorithmic procedure working directly in the data domain to detect patterns embedded in multivariate signals $\Theta_\mu(t)$, in the form of so-called Multivariate Intrinsic Mode Functions (MIMFs) \citep{Rehman10}. These patterns are derived through the {\em sifting process} \cite{Huang98}, slightly modified to implement an appropriate cubic spline procedure for multivariate signals \cite{Rehman10}. It consists of the following steps: \begin{enumerate} \item identify local extrema, i.e., points where the $\mu$-variate derivative of $\Theta_\mu(t)$ is zero; \item use cubic spline interpolation over these points to derive the upper (maxima) and the lower (minima) envelopes $\mathbf{U}_\mu(t)$ and $\mathbf{L}_\mu(t)$, respectively; \item derive the mean envelope $\mathbf{M}_\mu(t) = \frac{\mathbf{U}_\mu(t) + \mathbf{L}_\mu(t)}{2}$ and evaluate the detail $\mathbf{H}_\mu(t) = \Theta_\mu(t) - \mathbf{M}_\mu(t)$. \end{enumerate} These steps are iterated until the detail $\mathbf{H}_\mu(t)$ has the same number of extrema and zero crossings (or numbers differing by at most one) and a zero-average mean envelope $\mathbf{M}_\mu(t)$. At that point $\mathbf{H}_\mu(t)$ can be classified as the first Multivariate Intrinsic Mode Function $\mathbf{C}_{\mu, 1}(t)$ (also called a multivariate empirical mode) \cite{Huang98,Rehman10}. Then, the algorithmic procedure is repeated on the first residue $\mathbf{R}_{\mu, 1}(t) = \Theta_\mu(t) - \mathbf{C}_{\mu, 1}(t)$ until no more MIMFs $\mathbf{C}_{\mu, k}(t)$ can be filtered out from the data, i.e., the final residue $\mathbf{R}_\mu(t)$ is a $\mu$-variate non-oscillating (monotonic) trend \cite{Rehman10}. Hence, we can write \begin{equation} \Theta_\mu(t) = \sum_{k=1}^N {\bf C}_{\mu, k}(t) + {\bf R}_\mu(t).
\label{eq:memd} \end{equation} Each ${\bf C}_{\mu, k}(t)$ is a multivariate pattern representative of a peculiar dynamical feature that evolves on a typical multivariate mean timescale $\tau_k$ defined as \cite{Rehman10} \begin{equation} \tau_k = \frac{1}{N_p \, \Delta t} \int_{0}^{N_p \, \Delta t} t' \langle \mathbf{C}_{\mu, k}(t') \rangle_{\mu} dt', \label{eq:tau} \end{equation} where $N_p$ is the number of data points, $\Delta t$ is the time resolution, and $\langle \cdots \rangle_\mu$ stands for the average over the $\mu$-dimensional space. MIMFs are by construction ordered in terms of decreasing frequency \cite{Rehman10,Huang98}. Although no {\em a priori} decomposition basis is fixed, the derived set $\{{\bf C}_{\mu, k}(t)\}$ is a formal mathematical basis, that is, the MIMFs are empirically and locally orthogonal to each other \cite{Rehman10}. Thus, partial sums of Eq.~(\ref{eq:memd}) can be exploited to provide additional information over specific ranges of scales, so that the multivariate signal $\Theta_\mu(t)$ can be interpreted as a superposition of scale-dependent fluctuations \cite{Alberti20}. This property is used in the following to diagnose the dynamical properties of the instantaneous (in time) and local (in phase-space) states. \subsection{Dynamical system metrics} The dynamical properties of the $\mu$-variate system can be investigated by means of two dynamical-systems metrics \citep{Lucarini12} based on extreme value theory (EVT): the instantaneous dimension ($d$) and the inverse persistence ($\theta$). The former is a measure of the number of active degrees of freedom, while the latter, associated with the extremal index of the generalized extreme value (GEV) distribution of recurrence distances \citep{Moloney19}, is a measure of the short-term stability of the phase-space trajectory.
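For concreteness, one iteration of the sifting procedure enumerated above can be sketched in Python for a univariate signal. This is a minimal, dependency-free illustration: the multivariate algorithm of \cite{Rehman10} builds envelopes along multiple projection directions, and the actual scheme uses cubic splines, replaced here by linear interpolation.

```python
import numpy as np

def sift_once(x, t):
    """One sifting step: interpolate the upper/lower envelopes through
    the local maxima/minima and subtract their mean from the signal."""
    d = np.sign(np.diff(x))
    imax = np.flatnonzero(np.diff(d) < 0) + 1   # local maxima
    imin = np.flatnonzero(np.diff(d) > 0) + 1   # local minima
    upper = np.interp(t, t[imax], x[imax])      # upper envelope U(t)
    lower = np.interp(t, t[imin], x[imin])      # lower envelope L(t)
    return x - 0.5 * (upper + lower)            # detail H(t)

t = np.linspace(0, 20, 2001)
x = np.sin(t) + 2.0        # hypothetical signal with a nonzero offset
h = sift_once(x, t)        # detail: close to sin(t), offset removed
```

Iterating this step until the stopping criterion above is met yields the first intrinsic mode function; repeating it on the residue produces the full decomposition of Eq.~(\ref{eq:memd}).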
These instantaneous metrics are obtained by sampling the recurrences (i.e., close encounters) of some reference state and observing that they are distributed according to EVT \cite{lucarini2016extremes,Lucarini12,Lucarini14}. Formally, let $x(\zeta)$ be the trajectory of the system and let $\zeta^\ast$ be an arbitrary reference state in the phase-space. Let further $g(x(\zeta), \zeta^\ast) = -\log \left[ dist(x(\zeta), \zeta^\ast) \right]$ be the logarithmic return, where $dist(x(\zeta), \zeta^\ast)$ is the Euclidean distance between $x(\zeta)$ and $\zeta^\ast$. If we define exceedances as $X(\zeta^\ast) = g(x(\zeta), \zeta^\ast) - s(q, \zeta^\ast)$, with $s(q, \zeta^\ast)$ being an upper threshold corresponding to the $q$-th empirical quantile of $g(x(\zeta), \zeta^\ast)$, the Freitas-Freitas-Todd theorem modified by Lucarini et al. (2014) \cite{Lucarini14} states that the cumulative distribution $F(X, \zeta^\ast)$ of returning to a sphere of radius $r$ around $\zeta^\ast$ converges to the exponential member of the generalized Pareto family \begin{equation} F(X, \zeta^\ast) \simeq \exp \left[ -\theta(\zeta^\ast) \frac{X(\zeta^\ast)}{d^{-1}(\zeta^\ast)} \right], \end{equation} where $0 \le d < \infty$ is the local dimension and $0 \le \theta \le 1$ is the inverse persistence of the state $\zeta^\ast$. Since each point of the phase-space trajectory corresponds to a time instant of our embedded time series, $d$ and $\theta$ give us a time-dependent view of the properties of the system. However, this only provides information on the full structure of the attractor, without revealing additional features that can be related to processes and mechanisms operating at different scales. For this reason, following Alberti et al. (2020) \cite{Alberti20}, we first use the MIMFs to reconstruct the dynamics over different ranges of frequencies by exploiting partial sums of Eq.~(\ref{eq:memd}) \begin{equation} {\bf \Theta}^f_\mu(t) = \sum_{k | f_k = 1/\tau_k > f^\ast} {\bf C}_{\mu, k}(t), \label{eq:Btau} \end{equation} giving us a description of the dynamical features at frequencies larger than $f^\ast$. Starting from the largest frequency (i.e., $k=1$) and adding lower and lower ones ($k = 2, 3, \ldots, N$), we can introduce, for each frequency $f$, a scale-dependent instantaneous dimension $D(t, f)$ and inverse persistence $\theta(t, f)$ by diagnosing the dynamical properties of ${\bf \Theta}^f_\mu(t)$. \section{Results} Figures~\ref{fig4} and \ref{fig5} show the instantaneous dynamical system metrics for the symmetric case, $\gamma = -0.0081$, and for a case with full symmetry breaking, $\gamma = 0.0631$, respectively. \begin{figure} \centerline{\includegraphics[scale=0.6]{Fig4a}\includegraphics[scale=0.6]{Fig4b}} \caption{The 3-D attractor of the system at different frequencies colored by the scale-dependent instantaneous dimension $D(t, f)$ (left panels) and the scale-dependent inverse persistence $\theta(t, f)$ (right panels) for the symmetric case $\gamma = -0.0081$. Moving from top to bottom we consider high to low cutoff frequencies. The color bar for the left panels is saturated between 2 and 4 for visual purposes.} \label{fig4} \end{figure} \begin{figure} \centerline{\includegraphics[scale=0.6]{Fig5a}\includegraphics[scale=0.6]{Fig5b}} \caption{Same as Figure \ref{fig4} but for the case $\gamma = 0.0631$.} \label{fig5} \end{figure} In the symmetric case (Figure \ref{fig4}), we observe that the geometrical properties of the attractor in phase-space are completely invariant with respect to frequency, suggesting that the properties of the system do not depend on the scale.
The scale-dependent instantaneous dimension $D(t, f)$ depends only slightly on frequency, and the most probable and average values, $\langle D(t, f) \rangle \approx 3$, are about the same for all frequencies. The scale-dependent inverse persistence $\theta(t, f)$ is mostly characterized by values larger than 0.8, with an average value $\langle \theta(t, f) \rangle \approx 1$ for all frequencies, as expected for an unstructured stochastic system \cite{Faranda17}. By contrast, in the non-symmetric case (Figure \ref{fig5}), this scale-invariance is broken. As a result, we observe sudden bursts of scale-dependent instantaneous dimensions $D(t, f) \ge 6$, temporally localized differently at different frequencies. The scale-dependent inverse persistence $\theta(t, f)$ also displays markedly different behavior across frequencies with respect to the symmetric case ($\gamma \sim 0$), with values close to one at high frequencies and less than 0.2 at lower ones. Furthermore, we clearly observe a transition towards a two-lobe attractor at frequencies less than 0.2 Hz, thus matching the expected phase-space geometry of the full attractor as well as the break in the scaling observed in the PSD (see Figure \ref{fig2}). Thus, while in the symmetric case, $\gamma = -0.0081$, we find that $D(t, f)$ and $\theta(t, f)$ are homogeneously distributed across the attractor, implying that its topology is very simple and compatible with a noisy fixed point, the non-symmetric case attractor displays scale-dependent features with a heterogeneous spatial distribution of the two metrics. To further highlight the scale-dependent features, we show in Figure \ref{fig6} the behavior of the average dimension $\langle D(t, f) \rangle$ and persistence $\langle \theta(t, f) \rangle$ in comparison with the PSD.
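The metrics discussed above follow the EVT recipe recalled in the previous section. A minimal sketch of the estimation is given below for a hypothetical trajectory; here $d$ is obtained as the reciprocal mean exceedance (the maximum-likelihood scale of the exponential law), and $\theta$ is computed with a simple runs estimator rather than the S\"uveges estimator commonly adopted in this literature.

```python
import numpy as np

def evt_metrics(traj, ref, q=0.98):
    """EVT estimates of the local dimension d and inverse persistence
    theta of the reference state `ref` along the trajectory `traj`."""
    g = -np.log(np.linalg.norm(traj - ref, axis=1))  # logarithmic returns
    s = np.quantile(g, q)                            # threshold s(q)
    exc = g[g > s] - s                               # exceedances X
    d = 1.0 / exc.mean()                # reciprocal mean exceedance
    hits = np.flatnonzero(g > s)                     # recurrence times
    gaps = np.diff(hits)
    theta = np.mean(gaps > 1) if gaps.size else 1.0  # runs estimator
    return d, theta

# hypothetical 3-D trajectory: an isotropic Gaussian cloud, for which
# d should be close to 3 and theta close to 1 at the origin
rng = np.random.default_rng(0)
traj = rng.standard_normal((20000, 3))
d, theta = evt_metrics(traj, np.zeros(3))
```

Scanning the reference state over all points of the embedded trajectory yields the instantaneous series, and applying the same function to the partial sums ${\bf \Theta}^f_\mu(t)$ gives the scale-dependent versions $D(t,f)$ and $\theta(t,f)$.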
\begin{figure} \centerline{\includegraphics[width=\textwidth]{Fig6a}} \centerline{\includegraphics[width=\textwidth]{Fig6b}} \caption{The behavior of the average dimension $\langle D(t, f) \rangle$ (upper panel, filled circles) and persistence $\langle \theta(t, f) \rangle$ (lower panel, filled circles) in comparison with the PSD (reported as solid lines). Red symbols/lines refer to $\gamma = -0.0081$, while blue symbols/lines refer to $\gamma = 0.0631$.} \label{fig6} \end{figure} We clearly observe that, on average, the symmetric case presents a scale-invariant behavior of the two metrics, with values close to those expected for a stochastic system (i.e., $\langle D(t, f) \rangle = 3$ and $\langle \theta(t, f) \rangle = 1$). Conversely, a scale-dependent behavior is observed in the non-symmetric case, with a transition between $\langle D(t, f) \rangle < 3$ and $\langle D(t, f) \rangle > 3$ occurring around the low-frequency break observed in the PSD. This also corresponds to a change from $\langle \theta(t, f) \rangle < 0.5$ to $\langle \theta(t, f) \rangle \to 1$. Thus, our findings clearly suggest that we are faced with a scale-dependent modification of the geometrical and topological properties of the underlying attractor, depending on the emergence of an intrinsic timescale solely determined by nonlinear interactions. This means that we observe a time behavior mirroring the well-known scaling behavior of a 3-D turbulent flow: at scales larger than the injection scale, the energy transfer is small, and the individual scales are in quasi-equilibrium; at scales smaller than the injection scale, the mean energy transfer is positive, and there is an out-of-equilibrium energy cascade towards smaller scales, following a Kolmogorov spectrum with intermittency corrections \cite{Kolmogorov41}.
In the present case, the low frequencies are associated with low-dimensional dynamics, showing that the statistical equilibrium at large scales is driven by a few degrees of freedom, generating a well-defined low-dimensional attractor. On the other hand, the dynamics at scales smaller than the injection scale effectively plays the role of noise, which restores the broken symmetry and provides the ``statistical temperature'' for the large scales, or the stochasticity of the attractor \cite{Thalabard14}. Finally, since our reconstructed 3-D phase-space defined via $\Theta_\mu(t)$ is just a projection of a higher-dimensional attractor in which other degrees of freedom are lumped into stochastic terms (i.e., at small scales), it is not surprising that we find dimensions larger than 3. As shown in Faranda et al. (2017) \cite{Faranda17}, this points towards the existence of an unstable fixed point associated with abrupt changes and hints at the existence of an underlying stochastic attractor. Our scale-dependent results also suggest that, although the flow dynamics involves a wide range of scales, some of them can be described by stochastic theory \cite{Alberti21}. \section{Conclusions} In this paper we have introduced a new multiscale analysis tool to investigate the time- and scale-dependent properties of a simple system derived from a turbulent flow. By examining two cases with different symmetry properties, we find evidence of a scale-invariant nature of the geometrical properties of the phase-space in the symmetric case. Conversely, in the non-symmetric case the scale-invariance is broken by sudden bursts of elevated scale-dependent instantaneous dimensions $D(t, f)$, temporally localized differently at different frequencies, together with markedly different behavior of the local persistence properties.
We clearly observe a transition towards a two-lobe attractor at low frequencies, matching the expected phase-space geometry of the full attractor as well as the frequency break observed in the spectral properties. Thus, we have demonstrated that the geometrical and topological properties of invariant objects (i.e., attractors) are in fact both scale- and time-dependent, being sensitive to the emergence of an intrinsic timescale solely determined by nonlinear interactions. For this reason, since the studied attractor adapts its geometric and statistical properties dynamically in time with respect to the intrinsic timescale, we suggest calling such an attractor a {\em chameleon attractor}. Furthermore, we also observed that the symmetric case has a very simple phase-space topology and can be associated with a noisy fixed point, while the non-symmetric case attractor displays scale-dependent features with a heterogeneous spatial distribution. Our results demonstrate that we cannot appropriately describe such attractors through global, averaged properties, and that we need refined analysis tools to detect their heterogeneity and the state-dependent properties of the system. Hence, it is apparent that the analysis of multiscale systems requires concepts that allow us to explore the local and instantaneous properties of the system \cite{Faranda17,Alberti21}. Our analysis shows that the highly heterogeneous {\em chameleon attractors} discussed here could be common in high-dimensional dynamical systems such as those encountered in climate sciences. We are confident that follow-up studies will further demonstrate their existence in such systems by exploiting the framework applied in the present work. \section*{Acknowledgements} This work was funded through ANR EXPLOIT, grant agreement no. ANR-16-CE06-0006-01 and ANR TILT grant agreement no. ANR-20-CE30-0035.
VL acknowledges the support received from the EPSRC project EP/T018178/1 and from the EU Horizon 2020 project TiPES (Grant no. 820970). RVD has received funding from the German Federal Ministry for Education and Research via the JPI Climate/JPI Oceans project ROADMAP (grant no. 01LP2002B). \bibliographystyle{elsarticle-num}
\section{Introduction}\label{sec1} Let $K$ be a field complete with respect to a discrete valuation whose residue field $k$ is finite and of characteristic $p$, where $p$ is a fixed prime. In other words, $K$ is a local field, and we denote by $G_K = \operatorname{Gal}(\bar{K}/K)$ the local Galois group. Recall that a $\mathbb{Z}_p$-adic representation of $G_K$ is a $\mathbb{Z}_p$-module of finite rank with a continuous and linear action of $G_K$. For the Witt ring $W(k)$, let $\mathcal{O}_{\mathcal{E}}$ be the $p$-adic completion of $W(k)((u))$ with the field of fractions $\mathcal{E}$. Let $K_{cyc}$ be the cyclotomic $\mathbb{Z}_p$-extension of $K$ in $\bar{K}$ obtained by adjoining the $p^n$-th roots of unity to $K$, $H = \operatorname{Gal}(\bar{K}/K_{cyc})$ and $\Gamma =G_K/H= \operatorname{Gal}(K_{cyc}/K)$. Then there is a natural action of $\Gamma$ and a Frobenius $\varphi$ on $\mathcal{O}_{\mathcal{E}}$. In \cite{Fon}, Fontaine introduced a new technique for understanding the category of $\mathbb{Z}_p$-adic representations of $G_K$ in terms of algebraic objects, namely, the $(\varphi,\Gamma)$-modules. In the equal characteristic case ($(p,p)$ case), he constructed a category of \'{e}tale $\varphi$-modules over $\mathcal{O}_{\mathcal{E}}$ and proved that this category is equivalent to the category of $\mathbb{Z}_p$-adic representations of $G_K$. Recall that an \'{e}tale $\varphi$-module over $\mathcal{O}_{\mathcal{E}}$ is a finite-rank $\mathcal{O}_{\mathcal{E}}$-module with a bijective semi-linear operator $\varphi$. An \'{e}tale $(\varphi,\Gamma)$-module over $\mathcal{O}_{\mathcal{E}}$ is an \'{e}tale $\varphi$-module over $\mathcal{O}_{\mathcal{E}}$ together with a continuous and semi-linear action of $\Gamma$ commuting with the action of $\varphi$. Then using the theory of the field of norms due to Fontaine and Wintenberger \cite{Win}, he deduced the mixed characteristic case ($(0,p)$ case) from the equal characteristic case.
In this case, he decomposed the Galois group $G_K$ along a totally ramified $\mathbb{Z}_p$-extension $K_{cyc}$ of $K$. He showed that the category of $\mathbb{Z}_p$-adic representations of $G_K$ is equivalent to the category of \'{e}tale $(\varphi,\Gamma)$-modules over $\mathcal{O}_{\mathcal{E}}$. This equivalence is a deep result that allows the computation of the Galois cohomology. In \cite{LH1}, Herr devised a technique to calculate the Galois cohomology by introducing a complex, namely, the \emph{Herr complex}. The Herr complex is defined on the category of \'{e}tale $(\varphi,\Gamma)$-modules, and the cohomology groups of this complex turn out to coincide with the Galois cohomology groups on the category of $\mathbb{Z}_p$-adic representations of $G_K$. The results of Fontaine, along with this complex, play a crucial role in all the works pertaining to the computation of the Galois cohomology. In \cite{Flo}, Floric further extended the Herr complex to the False-Tate type curve extensions to include certain non-abelian extensions over the cyclotomic $\mathbb{Z}_p$-extension. Note that the extension $K_{cyc}$ is obtained by adjoining the $p^n$-th roots of unity to $K$, and they are the $p^n$-torsion points of the multiplicative Lubin-Tate formal group $\mathbb{G}_m$ over $\mathbb{Q}_p$ with respect to the uniformizer $p$. Thus the cyclotomic $\mathbb{Z}_p$-extension is the same as the extension associated with the multiplicative Lubin-Tate formal group. It is natural to try to carry out this theory for an arbitrary Lubin-Tate formal group over $K$. In this direction, there has been a lot of activity in recent years to develop the Fontaine theory for Lubin-Tate formal groups \cite{KR}, \cite{Berger2013}, \cite{Four-Xie}, \cite{Berger2016}, \cite{SV}, \cite{Ber-Four}, and \cite{Ber-Sch-Xie}, where the base field is a finite extension $K$ of $\mathbb{Q}_p$ with ring of integers $\mathcal{O}_K$ and uniformizer $\pi$.
In \cite{KR}, Kisin-Ren classified the local Galois representations using the extensions arising from division (torsion) points of the Lubin-Tate formal group defined over $K$. More precisely, consider a Lubin-Tate formal group $\mathcal{F}$ over a finite extension $K/\mathbb{Q}_p$, and for $n\geq 1$, let $K_n\subset \bar{K}$ be the subfield generated over $K$ by the $\pi^n$-torsion points of $\mathcal{F}$, where $\pi$ is a uniformizer of $\mathcal{O}_K$. These fields $K_n$ are usually referred to as \emph{Lubin-Tate extensions} of $K$. Define $K_{\infty}:= \cup_{n\geq1} K_n$ and $\Gamma_{LT}:= \operatorname{Gal}(K_\infty/K)$. Then they obtained a classification of $G_K$-representations on finite $\mathcal{O}_K$-modules via \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules, where \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules are analogues of \'{e}tale $(\varphi,\Gamma)$-modules (\cite[Theorem 1.6]{KR}). More details are given in section \ref{2.2}. This paper depends heavily on the classification of $G_K$-representations given by Kisin and Ren. We show that the theorem of Kisin and Ren allows us to compute the Galois cohomology of representations defined over $\mathcal{O}_K$ (Theorem \ref{lattices}). For this, we first observe that the Kisin-Ren theorem \cite[Theorem 1.6]{KR} holds for ${\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$ the category of discrete $\pi$-primary abelian groups with a continuous and linear action of $G_K$. It is crucial to work with this category as it has enough injectives and is equivalent to the category of injective limits of $\pi$-power torsion objects in the category of \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules over $\mathcal{O}_{\mathcal{E}}$ (Corollary \ref{KR discrete}). Next, we generalize the Herr complex to the Lubin-Tate extensions, and we call it the \emph{Lubin-Tate Herr complex} (see Definition \ref{LTHC}). Then we have the following result.
\begin{thmalph}[=Theorem \ref{G_K-cohomo}]\label{M1} For a discrete $\pi$-primary abelian group $V$ with a continuous and linear action of $G_K$, we have a natural isomorphism \begin{equation*} H^i(G_K,V)\cong \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V)))\quad \text{for} \ i\geq 0. \end{equation*} The cohomology groups on the right-hand side are computed using the Lubin-Tate Herr complex defined for the $(\varphi_q,\Gamma_{LT})$-module corresponding to $V$, while the left-hand side denotes the usual Galois cohomology groups of the representation $V$. \end{thmalph} Moreover, we show that both cohomology functors commute with inverse limits and deduce the above theorem for the case when $V$ is a representation defined over $\mathcal{O}_K$ (Theorem \ref{lattices}). We further extend the equivalence of categories of Kisin and Ren to include certain non-abelian extensions over the Lubin-Tate extension (Theorem \ref{False-Tate equivalence}) and show that the construction of the Lubin-Tate Herr complex for $(\varphi_q,\Gamma_{LT})$-modules can be generalized to $(\varphi_q,\Gamma_{LT,FT})$-modules over non-abelian extensions, and we call it the \emph{False-Tate type Herr complex} (Definition \ref{FTHC}). In this case, we establish the following theorem. \begin{thmalph}[= Theorem \ref{Main4}]\label{M2} For any $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$, we have a natural isomorphism \begin{equation*} H^i(G_K,V)\cong \mathcal{H}^i(\Phi\Gamma_{LT,FT}^{\bullet}(\mathbb{D}_{LT,FT}(V)))\quad \text{for}\ i\geq 0. \end{equation*} In other words, the False-Tate type Herr complex $\Phi\Gamma_{LT,FT}^{\bullet}(\mathbb{D}_{LT,FT}(V))$ computes the Galois cohomology of $G_K$ with coefficients in $V$. \end{thmalph} Next, we define an operator $\psi_q$ acting on \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules, and then we prove the following result. \begin{thmalph}[=Theorem \ref{Main5}]\label{M6} Let $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$.
Then we have a well-defined homomorphism $$\mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V)))\rightarrow \mathcal{H}^i(\Psi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V)))\quad \text{for} \ i\geq0.$$ Further, the homomorphism $$\mathcal{H}^0(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V)))\rightarrow \mathcal{H}^0(\Psi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V)))$$ is injective. \end{thmalph} Moreover, we prove a similar result in the case of False-Tate type extensions (Theorem \ref{Theorem False Tate}). Next, we describe the Iwasawa cohomology in terms of the complex associated with $\psi_q$ (Theorem \ref{Iwasawa cohomology}). \iffalse{ Next, we describe the Iwasawa cohomology in terms of the complex associated with $\psi_q$. We prove the following theorem. \begin{thmalph}[=Theorem \ref{Iwasawa cohomology}]\label{M7} For any $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$, the complex \begin{equation*} \underline{\Psi}^{\bullet}(\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT}))): 0\rightarrow \mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT}))\xrightarrow{\psi-id}\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT}))\rightarrow 0, \end{equation*} where $\psi= \psi_{\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT}))}$, computes $H^i_{Iw}({K_{\infty}/K},V)_{i\geq 1}$ the Iwasawa cohomology groups. \end{thmalph} }\fi In the second part, we use our techniques to deal with the case of coefficient rings. Recall that a coefficient ring is a complete local Noetherian ring with finite residue field. In \cite{Dee}, Dee generalized Fontaine's theory to the case of a general complete Noetherian local ring $R$, whose residue field is a finite extension of $\mathbb{F}_p$. He extended Fontaine's \cite{Fon} results to the category of $R$-modules of finite type with a continuous $R$-linear action of $G_K$.
He constructed a category of \'{e}tale $\varphi$-modules (resp., \'{e}tale $(\varphi,\Gamma)$-modules) over $K$ parameterized by $R$ and proved that this category is equivalent to the category of $R$-linear representations of $G_K$ in the equal characteristic case (resp., mixed characteristic case) (\cite[Theorem 2.1.27 and Theorem 2.3.1]{Dee}). The category of \'{e}tale $\varphi$-modules (resp., \'{e}tale $(\varphi,\Gamma)$-modules) is defined to be a module of finite type over the completed tensor product $\mathcal{O}_{\mathcal{E}}\hat{\otimes}_{\mathbb{Z}_p}R$ with an action of $\varphi$ (resp., $\varphi$ and $\Gamma$) as in the case of Fontaine. The core of the proof consists of Lemmas 2.1.5 and 2.1.6 in \cite{Dee}. Crucially, in the proof of the equivalence of categories stated above, he used the results of Fontaine \cite{Fon} in the case when the representation $V$ has finite length. The general case was then deduced by taking inverse limits. We also extend a result of Kisin and Ren (\cite[Theorem $1.6$]{KR}) to give a classification of the category of $R$-representations of $G_K$. We consider a category of \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules over the completed tensor product $\mathcal{O}_R:=\mathcal{O}_{\mathcal{E}}\hat{\otimes}_{\mathcal{O}_K}R$, where the ring $\mathcal{O}_{\mathcal{E}}$ is constructed using the periods of the Tate module of $\mathcal{F}$. Then we prove that this category is equivalent to the category of $R$-representations of $G_K$. In the equal characteristic case, we show the following result. \begin{thmalph}[=Theorem \ref{Main6.1}]\label{M3} The functor $V\mapsto \mathbb{D}_R(V)$ is an exact equivalence of categories between ${\bf Rep}_R(G_K)$ the category of $R$-representations of $G_K$ and ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\acute{e}t}$ the category of \'{e}tale $\varphi_q$-modules over $\mathcal{O}_R$ with quasi-inverse functor $\mathbb{V}_R$.
\end{thmalph} The construction of these functors is explained in section \ref{sub6.1}. In the case of mixed characteristic, we have the following theorem, which gives a classification of $R$-representations of the local Galois group in terms of \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules over $\mathcal{O}_R$. \begin{thmalph}[=Theorem \ref{Main6.2}]\label{M4} The functor $\mathbb{D}_R$ is an equivalence of categories between ${\bf Rep}_R(G_K)$ the category of $R$-linear representations of $G_K$ and ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\Gamma_{LT},\acute{e}t}$ the category of \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules over $\mathcal{O}_R$. The functor $\mathbb{V}_R$ is a quasi-inverse of the functor $\mathbb{D}_R$. \end{thmalph} We also have a generalization of Theorem \ref{M1}, Theorem \ref{M6} and Theorem \ref{Iwasawa cohomology} to the case of the coefficient ring. The generalization of Theorem \ref{Iwasawa cohomology} to the case of coefficient rings allows us to generalize the dual exponential map \begin{equation*} \text{Exp}^*:H^1_{Iw}({K_{\infty}/K},\mathcal{O}_K(\chi_{cyc}\chi_{LT}^{-1}))\xrightarrow{\sim} \mathbb{D}_{LT}(\mathcal{O}_K)^{\psi_{\mathbb{D}_{LT}(\mathcal{O}_K)}=id} \end{equation*} defined in \cite{SV} over coefficient rings (see Corollary \ref{dual exp.}). It is possible that this leads to the construction of Coates-Wiles homomorphisms for the Galois representations defined over $R$. \subsection*{Organization of the paper} In section \ref{sec2}, we recall some necessary background that will be used in subsequent sections. In section \ref{sec3}, we define the Lubin-Tate Herr complex as a generalization of the Herr complex over Lubin-Tate extensions and compute the Galois cohomology groups of representations defined over $\mathcal{O}_K$. In the next section, we extend the Lubin-Tate Herr complex to include certain non-abelian extensions and prove corresponding results on the computation of Galois cohomology.
In section \ref{section psi}, we define an operator $\psi_q$ acting on the category of \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules and prove results relating the cohomology groups of the Lubin-Tate Herr complex for $\varphi_q$ and $\psi_q$. We also present analogous results relating the False-Tate type Herr complexes for $\varphi_q$ and $\psi_q$. In section \ref{Iwasawa}, we briefly recall the computation of the Iwasawa cohomology due to Schneider and Venjakob for Lubin-Tate extensions in terms of the complex associated with $\psi_q$. Then in section \ref{sec6}, we record some significant results on coefficient rings and generalize a theorem of Kisin and Ren to this setting, which in turn allows us to extend our results to coefficient rings; these results appear in section \ref{sec7}. \subsection*{Acknowledgements} We would like to thank Laurent Berger for going through an earlier draft of the article and suggesting valuable comments regarding the article. \section{Lubin-Tate Extensions and Galois Representations}\label{sec2} \subsection{Background on Lubin-Tate modules}\label{sec2.1} In this section, we recall some basic results on Lubin-Tate modules. We fix a local field $K$ of characteristic $0$ with the ring of integers $\mathcal{O}_K$, maximal ideal $\mathfrak{m}_K$, and residue field $k$ of characteristic $p>0$. Let $\pi$ be a prime element of $\mathcal{O}_K$, $\operatorname{card}(k)=q$ and $q=p^r$ for some fixed $r$. Let $\bar{K}$ be a fixed algebraic closure of $K$ with the ring of integers $\mathcal{O}_{\bar{K}}$ and maximal ideal $\mathfrak{m}_{\bar{K}}$. A \emph{Lubin-Tate module} over $\mathcal{O}_K$, for a prime element $\pi$ of $\mathcal{O}_K$, is a formal $\mathcal{O}_K$-module $\mathcal{F}$ such that $[\pi]_{\mathcal{F}}(X)\equiv X^q\mod\pi$.
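A standard example to keep in mind is the multiplicative formal group: for $K = \mathbb{Q}_p$ and $\pi = p$, take
\begin{equation*}
\mathcal{F}(X,Y) = X + Y + XY, \qquad [a]_{\mathcal{F}}(X) = (1+X)^a - 1 \quad \text{for}\ a \in \mathbb{Z}_p,
\end{equation*}
so that $[p]_{\mathcal{F}}(X) = (1+X)^p - 1 \equiv X^p \mod p$. Its $p^n$-torsion points are the elements $\zeta - 1$ with $\zeta^{p^n} = 1$, and the associated extensions $K_n = \mathbb{Q}_p(\zeta_{p^n})$ recover the cyclotomic tower discussed in the introduction.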
Then the set $\mathfrak{m}_{\bar{K}}$ together with the operations \begin{equation*} x\underset{\mathcal{F}}+y:=\mathcal{F}(x,y) \quad \text{and} \quad a.x:=[a]_{\mathcal{F}}(x) \quad \text{for}\ x,y\in\mathfrak{m}_{\bar{K}} \ \text{and} \ a\in\mathcal{O}_K \end{equation*} gives rise to an $\mathcal{O}_K$-module in the usual sense, which we denote by $\mathcal{F}(\mathfrak{m}_{\bar{K}})$. Now consider \begin{equation*} \mathcal{F}(n):=\{\lambda\in\mathcal{F}(\mathfrak{m}_{\bar{K}})\vert\pi^n.\lambda=0\} =\{\lambda\in\mathcal{F}(\mathfrak{m}_{\bar{K}})\vert[\pi^n]_{\mathcal{F}}(\lambda)=0\}=\operatorname{Ker}([\pi^n]_{\mathcal{F}}) \end{equation*} the group of $\pi^n$-division points. Then $\mathcal{F}(n)$ is a free $\mathcal{O}_K/\pi^n\mathcal{O}_K$-module of rank $1$ \cite[Chapter III, Proposition 7.2]{Neu}. Let $K_n:=K(\mathcal{F}(n))$. Since $\mathcal{F}(n)\subseteq \mathcal{F}(n+1)$, we have a chain of fields $K\subseteq K_1\subseteq K_2\subseteq\ldots\subseteq K_{\infty}= \bigcup_{n=1}^{\infty} K_n$. These field extensions are called \emph{Lubin-Tate extensions}. The extension $K_n/K$ is a totally ramified abelian extension of degree $q^{n-1}(q-1)$ with Galois group $\operatorname{Gal}(K_n/K)\cong \operatorname{Aut}_{\mathcal{O}_K}(\mathcal{F}(n))\cong \mathcal{O}_K^{\times}/\mathcal{O}^{\times(n)}_K$ \cite[Chapter III, Theorem 7.4]{Neu}. Moreover, this isomorphism fits into the following commutative diagram \begin{center} \begin{tikzcd} \operatorname{Gal}(K_{n+1}/K) \arrow{r}{\cong} \arrow[swap]{d}{restriction} & \mathcal{O}_K^{\times}/\mathcal{O}^{\times(n+1)}_K \arrow{d}{projection} \\ \operatorname{Gal}(K_n/K) \arrow{r}[swap]{\cong}& \mathcal{O}_K^{\times}/\mathcal{O}^{\times(n)}_K. \end{tikzcd} \end{center} Now by taking the projective limits, we obtain the isomorphism \begin{equation}\label{Lubin Tate character} \operatorname{Gal}(K_\infty/K)\cong \mathcal{O}_K^\times.
\end{equation} \subsection{Kisin and Ren's equivalence}\label{2.2} Recall that $K$ is a local field of characteristic $0$, i.e., $K$ is a finite extension of $\mathbb{Q}_p$ and it is complete with respect to a discrete valuation with finite residue field $k$ of characteristic $p>0$. Let $G_K:=\operatorname{Gal}(\bar{K}/K)$ be the absolute Galois group of $K$. In this section, we recall the construction of the equivalence of categories of Kisin and Ren \cite[Theorem $1.6$]{KR}. For this, let $W = W(k)$ be the ring of Witt vectors over $k$ and $K_0 = W[\frac{1}{p}]$ be the field of fractions of $W$. Then $K_0$ is the maximal unramified extension of $\mathbb{Q}_p$ contained in $K$. For an $\mathcal{O}_{K_0}$-algebra $A$, we write $A_K = A\otimes_{\mathcal{O}_{K_0}} \mathcal{O}_K$. Let $\mathcal{F}$ be the Lubin-Tate group over $K$ corresponding to the uniformizer $\pi$. As in \cite{KR}, we fix a local co-ordinate $X$ on $\mathcal{F}$ such that the Hopf algebra $\mathcal{O}_{\mathcal{F}}$ may be identified with $\mathcal{O}_K[[X]]$. For any $a \in \mathcal{O}_K$, write $[a]_{\mathcal{F}}\in \mathcal{O}_K[[X]]= \mathcal{O}_{\mathcal{F}}$ for the power series giving the endomorphism of $\mathcal{F}$. Let $K_\infty$ be the Lubin-Tate extension of $K$. Let $H_K= \operatorname{Gal}(\bar{K}/K_{\infty})$ and $\Gamma_{LT}= G_K/H_K= \operatorname{Gal}(K_\infty/K)$. Let $\mathcal{TF}$ be the $p$-adic Tate-module of $\mathcal{F}$. Then $\mathcal{TF}$ is a free $\mathcal{O}_K$-module of rank $1$. The action of $G_K$ on $\mathcal{TF}$ factors through $\Gamma_{LT}$ and induces an isomorphism $\chi_{LT} : \Gamma_{LT} \rightarrow \mathcal{O}_K^\times$. Let $\mathcal{R} = \varprojlim \mathcal{O}_{\bar{K}}/p\mathcal{O}_{\bar{K}}$, where the transition maps are given by the Frobenius $\varphi$. The ring $\mathcal{R}$ can also be identified with $\varprojlim \mathcal{O}_{\bar{K}}/\pi\mathcal{O}_{\bar{K}}$, with the transition maps given by the $q$-Frobenius $\varphi_q = \varphi^r$.
The ring $\mathcal{R}$ is a complete valuation ring, and it is perfect of characteristic $p$. The fraction field $\operatorname{Fr}(\mathcal{R})$ of $\mathcal{R}$ is a complete, algebraically closed non-archimedean perfect field of characteristic $p$. Then we have a map $\iota : \mathcal{TF}\rightarrow \mathcal{R}$, induced by the evaluation of $X$ at $\pi$-torsion points. Let $v = (v_n)_{n \geq 0}\in \mathcal{TF}$ with $v_n \in \mathcal{F}(n)$ and $\pi.v_{n+1} = v_n$; then $\iota(v) = (v^*_n(X)+\pi\mathcal{O}_{\bar{K}})_{n\geq 0}$. Moreover, we have the following lemma, which follows from \cite[Lemma 9.3]{Co1}. More details are given in \cite[\S2.1]{Sch}. \begin{lemma}\!\textup{\cite[Lemma 1.2]{KR}}\label{embedding} There is a unique map $\{\cdot\} : \mathcal{R}\rightarrow W(\mathcal{R})_K$ such that $\{x\}$ is a lifting of $x$ and $\varphi_q(\{x\}) = [\pi]_{\mathcal{F}}(\{x\}).$ Moreover, $\{\cdot\}$ respects the action of $G_K$. In particular, if $v \in \mathcal{TF}$ is an $\mathcal{O}_K$-generator, there is an embedding $\mathcal{O}_K[[X]]\hookrightarrow W(\mathcal{R})_K$ sending $X$ to $\{\iota(v)\}$ which identifies $\mathcal{O}_K[[X]]$ with a $G_K$-stable, $\varphi_q$-stable subring of $W(\mathcal{R})_K$ such that $\{\iota(\mathcal{TF})\}$ lies in the image of $\mathcal{O}_K[[X]]$. \end{lemma} The $G_K$-action on $\mathcal{O}_K[[X]]$ factors through $\Gamma_{LT}$, and we have $\varphi_q(X) = [\pi]_{\mathcal{F}}(X)$ and $\sigma_a(X)= [a]_{\mathcal{F}}(X)$, where $\sigma_a = \chi_{LT}^{-1}(a)$ for any $a \in \mathcal{O}_K^\times$. We fix an $\mathcal{O}_K$-generator $v \in \mathcal{TF}$ and identify $\mathcal{O}_K[[X]]$ with a subring of $W(\mathcal{R})_K$ by sending $X$ to $\{\iota(v)\}$, using Lemma \ref{embedding}. Let $\mathcal{O}_{\mathcal{E}}$ be the $\pi$-adic completion of $\mathcal{O}_K[[X]][\frac{1}{X}]$. Then $\mathcal{O}_{\mathcal{E}}$ is a complete discrete valuation ring with uniformizer $\pi$ and residue field $k((X))$.
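It may help to keep in mind the classical cyclotomic special case (a standard example, recorded here only for illustration): for $K = \mathbb{Q}_p$, $\pi = p$ and $\mathcal{F} = \widehat{\mathbb{G}}_m$ the formal multiplicative group, we have \begin{equation*} \mathcal{F}(X,Y) = X + Y + XY \quad \text{and} \quad [a]_{\mathcal{F}}(X) = (1+X)^a - 1 \ \text{for} \ a \in \mathbb{Z}_p, \end{equation*} so $\mathcal{F}(n)$ consists of the elements $\zeta - 1$ with $\zeta^{p^n} = 1$, $K_\infty = \mathbb{Q}_p(\mu_{p^\infty})$ and $\chi_{LT}$ is the cyclotomic character. On $\mathcal{O}_K[[X]]$, the formulas above read $\varphi_q(X) = (1+X)^p - 1$ and $\sigma_a(X) = (1+X)^a - 1$, and one recovers the classical theory of $(\varphi,\Gamma)$-modules of Fontaine \cite{Fon}.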
Since $W(\mathcal{R})$ is $p$-adically complete, we may view \begin{equation*} \mathcal{O}_{\mathcal{E}}\subset W(\mathcal{R})_K \subset W(\operatorname{Fr}(\mathcal{R}))_K. \end{equation*} Let $\mathcal{O}_{\mathcal{E}^{ur}} \subset W(\operatorname{Fr}(\mathcal{R}))_K$ denote the maximal integral unramified extension of $\mathcal{O}_{\mathcal{E}}$ and $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ the $\pi$-adic completion of $\mathcal{O}_{\mathcal{E}^{ur}}$, which is again a subring of $W(\operatorname{Fr}(\mathcal{R}))_K$. Let $\mathcal{E}, \mathcal{E}^{ur}$ and $\widehat{\mathcal{E}^{ur}}$ denote the fields of fractions of $\mathcal{O}_{\mathcal{E}}, \mathcal{O}_{\mathcal{E}^{ur}}$ and $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$, respectively. These rings are all stable under the action of $\varphi_q$ and $G_K$. Moreover, the $G_K$-action on $\mathcal{O}_{\mathcal{E}}$ factors through $\Gamma_{LT}$. \begin{lemma}\!\textup{\cite[Lemma 1.4]{KR}}\label{Galois group isomorphism} The residue field of $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ is a separable closure of $k((X))$, and there is a natural isomorphism \begin{equation*} \operatorname{Gal}(\mathcal{E}^{ur}/\mathcal{E}) \xrightarrow{\sim}\operatorname{Gal}(\bar{K}/K_\infty). \end{equation*} \end{lemma} Let $E:= k((X))$, which is the residue field of $\mathcal{E}$; it then follows from Lemma \ref{Galois group isomorphism} that $E^{sep}$ is the residue field of $\widehat{\mathcal{E}^{ur}}$. The following lemma is an easy consequence of the definition of $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$. \begin{lemma}\label{Exact} $(\mathcal{O}_{\widehat{\mathcal{E}^{ur}}})^{\varphi_q=id} = \mathcal{O}_K$. \end{lemma} \begin{proof} Consider the exact sequence \begin{equation*} 0\rightarrow k \rightarrow E^{sep}\xrightarrow[x\mapsto x^q-x]{\varphi_q-id} E^{sep}\rightarrow 0.
\end{equation*} By d\'{e}vissage, we deduce the exact sequence \begin{equation*} 0\rightarrow \mathcal{O}_K/\pi^n\mathcal{O}_K \rightarrow \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}/\pi^n \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\xrightarrow{\varphi_q-id} \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}/\pi^n \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\rightarrow0, \:\forall\: n\geq 1. \end{equation*} Since the projective system $\{\mathcal{O}_K/\pi^n\mathcal{O}_K\}_{n\geq 1}$ has surjective transition maps, passing to the projective limit is exact and gives the exact sequence \begin{equation*} 0\rightarrow \mathcal{O}_K\rightarrow \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\xrightarrow{\varphi_q-id}\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\rightarrow 0. \end{equation*} Hence, $(\mathcal{O}_{\widehat{\mathcal{E}^{ur}}})^{\varphi_q=id} = \mathcal{O}_K$. \end{proof} The subring $\mathcal{O}_{\mathcal{E}}\subset W(\operatorname{Fr}(\mathcal{R}))$, which is constructed using the periods of $\mathcal{TF}$, is naturally a Cohen ring for $X_K(K)$, where $X_K(K)$ is a field of characteristic $p$ constructed using the field of norms. More details can be found in \cite[$\S 1$]{KR}. The Galois group $G_E = \operatorname{Gal}(E^{sep}/E)$ can be identified with $\operatorname{Gal}(X_K(\bar{K})/X_K(K))$. Then by Lemma \ref{Galois group isomorphism}, we have \begin{equation*} H_K\xrightarrow{\sim}G_E. \end{equation*} The $G_K$-action on $\mathcal{R}$ induces a $G_K$-action on $W(\operatorname{Fr}(\mathcal{R}))_K$, and the rings $\mathcal{O}_{\mathcal{E}}, \mathcal{O}_{\mathcal{E}^{ur}}$ and $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ are stable under the action of $G_K$. On the other hand, $G_E$ acts on $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ by continuity and functoriality, and these actions are compatible with the identification of Galois groups $H_K\xrightarrow{\sim}G_E$. \par Let $V$ be an $\mathcal{O}_K$-module of finite rank with a continuous and linear action of $G_K$.
Consider the $\varphi_q$-module \begin{equation*} \mathbb{D}_{LT}(V): =( \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K} V)^{H_K} = (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K}V)^{G_E}. \end{equation*} The action of $G_K$ on $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K}V$ induces a semi-linear action of $G_K/H_K = \Gamma_{LT} = \operatorname{Gal}(K_\infty/K)$ on $\mathbb{D}_{LT}(V)$. A $(\varphi_q,\Gamma_{LT})$-module $M$ over $\mathcal{O}_{\mathcal{E}}$ is a $\varphi_q$-module over $\mathcal{O}_{\mathcal{E}}$ together with a semi-linear action of $\Gamma_{LT}$ that commutes with the endomorphism $\varphi_M$ of $M$. We say that $M$ is \'{e}tale if it is \'{e}tale as a $\varphi_q$-module. We write ${\bf Mod}_{/\mathcal{O}_{\mathcal{E}}}^{\varphi_q,\Gamma_{LT},\acute{e}t}$ (resp., ${\bf Mod}_{/\mathcal{O}_{\mathcal{E}}}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}$) for the category of finite free (resp., finite torsion) \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules over $\mathcal{O}_{\mathcal{E}}$ and ${\bf Rep}_{\mathcal{O}_K}(G_K)$ (resp., ${\bf Rep}_{\mathcal{O}_K-tor}(G_K)$) for the category of finite free (resp., finite torsion) $\mathcal{O}_K$-modules with a continuous linear action of $G_K$. Then $\mathbb{D}_{LT}$ is a functor from ${\bf Rep}_{\mathcal{O}_K}(G_K)$ (resp., ${\bf Rep}_{\mathcal{O}_K-tor}(G_K)$) to ${\bf Mod}_{/\mathcal{O}_{\mathcal{E}}}^{\varphi_q,\Gamma_{LT},\acute{e}t}$ (resp., ${\bf Mod}_{/\mathcal{O}_{\mathcal{E}}}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}$). Let $M$ be an \'{e}tale $(\varphi_q,\Gamma_{LT})$-module over $\mathcal{O}_{\mathcal{E}}$. Then consider the $G_K$-representation \begin{equation*} \mathbb{V}_{LT}(M):= (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_{\mathcal{E}}}M)^{\varphi_q\otimes \varphi_M=id}. \end{equation*} Here $G_K$ acts on $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ as before and acts via $\Gamma_{LT}$ on $M$.
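As a quick sanity check of these definitions on the unit objects (a standard computation), take $V = \mathcal{O}_K$ with the trivial $G_K$-action. Using $(\mathcal{O}_{\widehat{\mathcal{E}^{ur}}})^{H_K} = \mathcal{O}_{\mathcal{E}}$, we get \begin{equation*} \mathbb{D}_{LT}(\mathcal{O}_K) = (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}})^{H_K} = \mathcal{O}_{\mathcal{E}}, \end{equation*} with $\varphi_M = \varphi_q$ and the natural $\Gamma_{LT}$-action. Conversely, by Lemma \ref{Exact}, \begin{equation*} \mathbb{V}_{LT}(\mathcal{O}_{\mathcal{E}}) = (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}})^{\varphi_q = id} = \mathcal{O}_K, \end{equation*} recovering the trivial representation.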
The diagonal action of $G_K$ on $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_{\mathcal{E}}}M$ commutes with $\varphi_q\otimes\varphi_M$, and hence induces a $G_K$-action on $\mathbb{V}_{LT}(M)$. Then, arguing as in \cite[A1, Propositions 1.2.4 and 1.2.6]{Fon}, one sees that $\mathbb{D}_{LT}$ and $\mathbb{V}_{LT}$ are exact functors and that the natural maps \begin{align*} &\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_{\mathcal{E}}}\mathbb{D}_{LT}(V)\rightarrow \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K}V,\\& \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K}\mathbb{V}_{LT}(M) \rightarrow \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_{\mathcal{E}}}M \end{align*} are isomorphisms. In particular, we have the following result, which is established in \cite[Theorem 1.6]{KR}. \begin{theorem}\!\textup{\cite[Theorem 1.6]{KR}}\label{Kisin Ren} The functors \begin{equation*} V\mapsto \mathbb{D}_{LT}(V)= (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K}V)^{H_K} \qquad \text{and} \qquad M\mapsto \mathbb{V}_{LT}(M)= (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_{\mathcal{E}}}M)^{\varphi_q\otimes\varphi_M=id} \end{equation*} are exact quasi-inverse equivalences of categories between ${\bf Rep}_{\mathcal{O}_K}(G_K)\ (\text{resp.,}\, {\bf Rep}_{\mathcal{O}_K-tor}(G_K) )$ and ${\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t}_{/\mathcal{O}_{\mathcal{E}}}$ $(\text{resp.,}\,{\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}})$. \end{theorem} \begin{remark} If we replace $K$ by any finite extension $F$ of $K$, then the above equivalence of categories holds for $\mathcal{O}_F$-modules.
\end{remark} \section{Galois Cohomology over the Lubin-Tate Extensions} \label{sec3} The category of finitely generated $\mathcal{O}_K$-modules with a continuous and linear action of $G_K$ does not have injectives, so the category ${\bf Rep}_{\mathcal{O}_K}(G_K)$ does not have enough injectives. Since we are going to use injective objects to compute cohomology groups, we therefore extend the functor $\mathbb{D}_{LT}$ to a category that has enough injectives. Let ${\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$ be the category of discrete $\pi$-primary abelian groups with a continuous action of $G_K$. Any object in this category is a filtered direct limit of $\pi$-power torsion objects in ${\bf Rep}_{\mathcal{O}_K-tor}(G_K)$. Note that the category ${\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$ has enough injectives. First, we extend the functor $\mathbb{D}_{LT}$ to the category ${\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$. For any $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$, define \begin{equation*} \mathbb{D}_{LT}(V):= (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K}V)^{H_K}. \end{equation*} Since $V$ is a filtered direct limit of $\pi$-power torsion objects in ${\bf Rep}_{\mathcal{O}_K-tor}(G_K)$, and both the tensor product and taking $H_K$-invariants commute with filtered direct limits, the functor $\mathbb{D}_{LT}$ commutes with filtered direct limits. Therefore $\mathbb{D}_{LT}$ is an exact functor into the category $\varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$ of direct limits of $\pi$-power torsion objects in ${\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$. Now for any object $M \in \varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$, define \begin{equation*} \mathbb{V}_{LT}(M):= (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_\mathcal{E}}M)^{\varphi_q\otimes\varphi_M=id}.
\end{equation*} The functor $\mathbb{V}_{LT}$ also commutes with direct limits. Then we have the following proposition, which shows that the equivalence of Theorem \ref{Kisin Ren} extends to the category of discrete $\pi$-primary representations of $G_K$; this is an important step towards our main theorem. \begin{proposition} \label{KR discrete} The functor $\mathbb{D}_{LT}$ is an equivalence of categories between ${\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$ and $\varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$, with quasi-inverse $\mathbb{V}_{LT}$. \end{proposition} \begin{proof} Since the functors $\mathbb{D}_{LT}$ and $\mathbb{V}_{LT}$ commute with direct limits, the proposition follows from Theorem \ref{Kisin Ren} by taking direct limits. \end{proof} Note that throughout this paper, each complex has its first term in degree $-1$, unless stated otherwise. Let $p$ be an odd prime. For $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$, define $D^{sep}:= \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K}V$; by the comparison isomorphism above, $D^{sep} \cong \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_\mathcal{E}}\mathbb{D}_{LT}(V)$. Now define the co-chain complex $\Phi^{\bullet}(D^{sep})$ as follows: \begin{equation*} \Phi^{\bullet}(D^{sep}): 0\rightarrow D^{sep}\xrightarrow{\varphi_q\otimes\varphi_{\mathbb{D}_{LT}(V)}-id}D^{sep}\rightarrow 0. \end{equation*} \begin{lemma} \label{augmentation} For any discrete $p$-primary representation $V$ of $G_K$, let $V[0]$ be the complex with $V$ in degree $0$ and $0$ everywhere else. Then the augmentation map $V[0]\rightarrow \Phi^{\bullet}(D^{sep})$ is a quasi-isomorphism of co-chain complexes.
\end{lemma} \begin{proof} By the exact sequence in the proof of Lemma \ref{Exact}, the complex $\Phi^{\bullet}(E^{sep})$ is acyclic in non-zero degrees with $0$-th cohomology equal to $k$; hence the augmentation map \begin{equation*} k[0]\rightarrow\Phi^{\bullet}(E^{sep}) \end{equation*} is a quasi-isomorphism. By d\'{e}vissage, the augmentation map \begin{equation} \label{A} \mathcal{O}_K/\pi^n[0]\rightarrow \Phi^{\bullet}(\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}/\pi^n) \end{equation} is also a quasi-isomorphism, as each term in both complexes is a flat $\mathcal{O}_K/\pi^n$-module. If $V$ is a finite abelian $\pi$-group, then it is killed by some power of $\pi$, and we have $\Phi^{\bullet}(D^{sep})= \Phi^{\bullet}(\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}/\pi^n)\otimes_{\mathcal{O}_K/\pi^n}V$. Since each term in both complexes in (\ref{A}) is flat over $\mathcal{O}_K/\pi^n$, tensoring (\ref{A}) with $V$ again yields a quasi-isomorphism (the cone of (\ref{A}) is an acyclic bounded complex of flat $\mathcal{O}_K/\pi^n$-modules, and so remains acyclic after tensoring), so \begin{equation*} V[0]\rightarrow \Phi^{\bullet}(D^{sep}) \end{equation*} is a quasi-isomorphism. Since the direct limit functor is an exact functor, the general case follows by taking direct limits. \end{proof} \begin{lemma}\label{trivial cohom} $H^i(H_K,\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}/\pi^n )=0$ for all $n\geq 1$ and $i\geq 1$. \end{lemma} \begin{proof} By d\'{e}vissage, we are reduced to the case $n=1$, i.e., we only need to prove that $H^i(H_K, E^{sep})=0$ for all $i\geq 1$. But this is a standard fact of Galois cohomology \cite[Proposition $6.1.1$]{NSW}. \end{proof} \begin{proposition}\label{H_K-cohomology} For any $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$, we have $\mathcal{H}^i(\Phi^{\bullet}(\mathbb{D}_{LT}(V)))\cong H^i(H_K, V)$ as $\Gamma_{LT}$-modules. In other words, the complex $\Phi^{\bullet}(\mathbb{D}_{LT}(V))$ computes the $H_K$-cohomology of $V$. \end{proposition} \begin{proof} Assume that $V$ is finite. Then by definition, \begin{equation*} \mathbb{D}_{LT}(V)= (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K}V)^{H_K}= (D^{sep})^{H_K}.
\end{equation*} So the complex $\Phi^{\bullet}(\mathbb{D}_{LT}(V))$ is the $H_K$-invariant part of $\Phi^{\bullet}(D^{sep})$. Since $V$ is finite, the terms of $\Phi^{\bullet}(D^{sep})$ are of the form $D^{sep}= E^{sep}\otimes_E \mathbb{D}_{LT}(V)$ and are acyclic for $H_K$-cohomology by Lemma \ref{trivial cohom}. It then follows from Lemma \ref{augmentation} that $\mathcal{H}^i(\Phi^{\bullet}(\mathbb{D}_{LT}(V)))\cong H^i(H_K, V)$ as $\Gamma_{LT}$-modules. Since both the functors $\mathcal{H}^i(\Phi^{\bullet}(\mathbb{D}_{LT}(-)))$ and $H^i(H_K,-)$ commute with filtered direct limits, the general case follows by taking direct limits. \end{proof} Let $\Delta$ be the torsion subgroup of $\Gamma_{LT}$ and $H_K^*$ the kernel of the quotient map $G_K\twoheadrightarrow\Gamma_{LT}\twoheadrightarrow\Gamma_{LT}^*:=\Gamma_{LT}/\Delta$. Via $\chi_{LT}$, the group $\Delta$ is isomorphic to the torsion subgroup of $\mathcal{O}_K^\times$; if $K$ contains no primitive $p$-th root of unity, this is $\mu_{q-1}$. If the order of $\Delta$ is not prime to $p$, then we choose a finite $p$-extension $F$ of $K$ such that the torsion part of $\operatorname{Gal}(K_\infty/F)$ has order prime to $p$. In that case, the Kisin-Ren theorem allows us to compute the $G_F=\operatorname{Gal}(\bar{K}/F)$-cohomology of $V$. Therefore, without loss of generality, we may assume that the order of $\Delta$ is prime to $p$. \begin{proposition}\label{H_K^*-cohomo} For any $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$, the complex $\Phi^{\bullet}(\mathbb{D}_{LT}(V)^{\Delta})$ computes the $H_K^*$-cohomology of $V$. \end{proposition} \begin{proof} Since the order of $\Delta$ is prime to $p$, the $p$-cohomological dimension of $\Delta$ is zero. Moreover, the isomorphism $H_K^*/H_K\cong \Delta$ gives the following short exact sequence \begin{equation*} 0\rightarrow H_K\rightarrow H_K^*\rightarrow \Delta\rightarrow 0. \end{equation*} Now the result follows from the Hochschild-Serre spectral sequence together with Proposition \ref{H_K-cohomology}. \end{proof} Note that $\Gamma_{LT}^*$ is torsion-free.
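For example (a standard special case), take $K=\mathbb{Q}_p$ and $\pi = p$ with $p$ odd. Then $\Gamma_{LT} \cong \mathbb{Z}_p^\times = \mu_{p-1}\times(1+p\mathbb{Z}_p)$, so $\Delta \cong \mu_{p-1}$ has order prime to $p$, and $\Gamma_{LT}^* \cong 1+p\mathbb{Z}_p \cong \mathbb{Z}_p$ is topologically generated by the image of any $\gamma\in\Gamma_{LT}$ with $\chi_{LT}(\gamma) = 1+p$. In this case the Koszul complex introduced below reduces to the familiar two-term complex $0\rightarrow A\xrightarrow{\gamma - id} A\rightarrow 0$.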
Assume that $\Gamma_{LT}^*\cong\bigoplus_{i=1}^d\mathbb{Z}_p$ as a topological group, where $d$ is the degree of $K$ over $\mathbb{Q}_p$. Let $\Gamma_{LT}^*$ be topologically generated by the set $\mathfrak{X}:=\{\gamma_1,\gamma_2,\ldots,\gamma_d\}$. Then consider the co-chain complex \begin{equation*} \Gamma_{LT}^{\bullet}(A): 0\rightarrow A \rightarrow\bigoplus_{i_1\in \mathfrak{X}}A\rightarrow\cdots\rightarrow\bigoplus_{\{i_1,\ldots,i_r\}\in \binom{\mathfrak{X}}{r}}A\rightarrow\cdots \rightarrow A\rightarrow 0, \end{equation*} where $\binom{\mathfrak{X}}{r}$ denotes the set of $r$-element subsets of $\mathfrak{X}$, and for all $0\leq r\leq \lvert\mathfrak{X}\rvert-1$, the map $d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}}:A\rightarrow A$ from the component in the $r$-th term corresponding to $\{i_1,\ldots,i_r\}$ to the component corresponding to the $(r+1)$-element subset $\{j_1,\ldots,j_{r+1}\}$ is given by \begin{equation*} d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}} = \left\{ \begin{array}{ll} 0 & \mbox{if } \{i_1,\ldots,i_r\}\nsubseteq\{j_1,\ldots,j_{r+1}\}, \\ (-1)^{s_j}(\gamma_j-id) & \mbox{if } \{j_1,\ldots,j_{r+1}\}= \{i_1,\ldots,i_r\}\cup\{j\}, \end{array} \right. \end{equation*} where $s_j$ is the number of elements of $\{i_1,\ldots,i_r\}$ that are smaller than $j$. This is the Koszul complex of $A$ as a module over $\mathcal{O}_K[[\Gamma_{LT}^\ast]]$, and the differentials above are the differentials of the Koszul complex with respect to the ordered sequence $\gamma_1-1,\gamma_2-1,\ldots,\gamma_d-1$. \begin{example} Let $d=2$. Then the complex $\Gamma_{LT}^\bullet(A)$ is the following: \begin{equation*} \Gamma_{LT}^\bullet(A): 0\rightarrow A\xrightarrow{x\mapsto A_0x} A \oplus A\xrightarrow{x \mapsto A_1x} A\rightarrow 0, \end{equation*} where \[ A_0= \begin{bmatrix} \gamma_1-id\\ \gamma_2-id \end{bmatrix}, \quad A_1= \begin{bmatrix} -(\gamma_2-id) & \gamma_1-id \end{bmatrix}.
\qedhere \] \end{example} \begin{lemma}\label{H^0} The family of functors $A\mapsto \mathcal{H}^i(\Gamma_{LT}^{\bullet}(A))$, $i\geq 0$, is a cohomological $\delta$-functor. Moreover, if $A$ is a discrete representation of $\Gamma_{LT}^*$, then $\mathcal{H}^0(\Gamma_{LT}^{\bullet}(A))= A^{\Gamma_{LT}^*}$. \end{lemma} \begin{proof} Let \begin{equation}\label{B} 0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0 \end{equation} be a short exact sequence of representations of $\Gamma_{LT}^*$. Then we have a short exact sequence \begin{equation}\label{C} 0\rightarrow \Gamma_{LT}^{\bullet}(A)\rightarrow \Gamma_{LT}^{\bullet}(B)\rightarrow \Gamma_{LT}^{\bullet}(C)\rightarrow 0 \end{equation} of co-chain complexes. The long exact cohomology sequence of (\ref{C}) gives maps \begin{equation*} \delta^i: \mathcal{H}^i(\Gamma_{LT}^{\bullet}(C))\rightarrow \mathcal{H}^{i+1}(\Gamma_{LT}^{\bullet}(A)), \end{equation*} which are functorial in (\ref{B}). Therefore $A\mapsto \mathcal{H}^i(\Gamma_{LT}^{\bullet}(A))$ is a cohomological $\delta$-functor. For the second part, note that since $A$ is discrete, the action of $\Gamma_{LT}^*$ on each element of $A$ factors through a finite quotient; as the elements $\gamma_i \ (i\in \mathfrak{X})$ topologically generate $\Gamma_{LT}^*$, their classes generate each of these finite quotients, so $A^{\Gamma_{LT}^*}= \cap_{i\in \mathfrak{X}}\operatorname{Ker}(\gamma_i-id)= \mathcal{H}^0(\Gamma_{LT}^{\bullet}(A))$. \end{proof} \begin{proposition}\label{Gamma^*-cohomo} Let $A$ be a discrete $\pi$-primary representation of $\Gamma_{LT}^*$. Then $\mathcal{H}^i(\Gamma_{LT}^{\bullet}(A))\cong H^i(\Gamma_{LT}^*,A)$ for $i\geq0$. In other words, the complex $\Gamma_{LT}^{\bullet}(A)$ computes the $\Gamma_{LT}^*$-cohomology of $A$. \end{proposition} \begin{proof} We prove the proposition by induction on the number of generators of $\Gamma_{LT}^*$. First, assume that $\Gamma_{LT}^*$ is topologically generated by $\{\gamma_1,\gamma_2\}$.
Let $\Gamma^*_{\gamma_1}$ denote the closed subgroup of $\Gamma_{LT}^*$ topologically generated by $\gamma_1$ and $\Gamma_{\gamma_2}^*$ the quotient of $\Gamma_{LT}^*$ by $\Gamma_{\gamma_1}^*$. We denote by $\Gamma_{\gamma_i}^{\bullet}(A)$ the co-chain complex \begin{equation*} \Gamma_{\gamma_i}^{\bullet}(A): 0\rightarrow A\xrightarrow{\gamma_i-id} A\rightarrow 0. \end{equation*} Then the co-chain complex $\Gamma_{LT}^{\bullet}(A)$ is the total complex of the double complex $\Gamma_{\gamma_2}^{\bullet}(\Gamma_{\gamma_1}^{\bullet}(A))$, and associated to this double complex there is a spectral sequence \begin{equation}\label{SSADC} E_2^{mn} = \mathcal{H}^m(\Gamma_{\gamma_2}^{\bullet}(\mathcal{H}^n(\Gamma_{\gamma_1}^{\bullet}(A))))\Rightarrow \mathcal{H}^{m+n}(\Gamma_{LT}^{\bullet}(A)). \end{equation} Moreover, associated to the group $\Gamma_{LT}^*$, we have the Hochschild-Serre spectral sequence \begin{equation}\label{HSSS} E_2^{mn}=H^m(\Gamma_{\gamma_2}^*,H^n(\Gamma_{\gamma_1}^*,A))\Rightarrow H^{m+n}(\Gamma_{LT}^*,A). \end{equation} Now assume that $A$ is an injective object in the category of discrete $\pi$-primary abelian groups with a continuous action of $\Gamma_{LT}^*$. Then the complex $\Gamma_{\gamma_1}^{\bullet}(A)$ is acyclic in non-zero degrees with $0$-th cohomology isomorphic to $H^0(\Gamma_{\gamma_1}^*,A)=A^{\Gamma_{\gamma_1}^*}$ \cite[Corollary 6.41]{rot}, i.e., the map $A^{\Gamma_{\gamma_1}^*}[0]\rightarrow \Gamma_{\gamma_1}^{\bullet}(A)$ is a quasi-isomorphism. But $A^{\Gamma_{\gamma_1}^*}$ is an injective object in the category of discrete $\pi$-primary abelian groups with a continuous action of $\Gamma_{\gamma_2}^*$. Now, by Steps 1 and 2 of \cite[Proposition 2.1.7]{PZ}, the map $A^{\Gamma_{LT}^*}[0]\rightarrow \Gamma_{\gamma_2}^{\bullet}(A^{\Gamma_{\gamma_1}^*}) $ is a quasi-isomorphism of co-chain complexes.
Note that $H^i(\Gamma_{LT}^*,-)$ is a universal $\delta$-functor, and $\mathcal{H}^i(\Gamma_{LT}^{\bullet}(-))$ is a cohomological $\delta$-functor such that $H^0(\Gamma_{LT}^*,-)\cong \mathcal{H}^0(\Gamma_{LT}^{\bullet}(-))$. Therefore, we have a natural transformation $H^i(\Gamma_{LT}^*,-)\rightarrow\mathcal{H}^i(\Gamma_{LT}^{\bullet}(-))$ of $\delta$-functors. Then, comparing the spectral sequences (\ref{SSADC}) and (\ref{HSSS}), we obtain \begin{equation*} H^i(\Gamma_{LT}^*,A)\cong \mathcal{H}^i(\Gamma_{LT}^{\bullet}(A)) \quad \text{for} \ i\geq 0. \end{equation*} The case of general $A$ now follows from Lemma \ref{H^0} by dimension shifting. By induction, assume that the result holds when $\Gamma_{LT}^*$ is topologically generated by $\{\gamma_1,\gamma_2,\ldots,\gamma_{d-1}\}$; we now prove the proposition when $\Gamma_{LT}^*$ is topologically generated by $\{\gamma_1,\gamma_2,\ldots,\gamma_d\}$. Consider the complexes \begin{equation*} \Gamma^\bullet_{\gamma_d}(A): 0\rightarrow A\xrightarrow{\gamma_d-id}A\rightarrow 0, \end{equation*} and \begin{equation*} \Gamma_{LT\backslash \gamma_d}^{\bullet}(A): 0\rightarrow A \rightarrow\bigoplus_{i_1\in \mathfrak{X^\prime}}A\rightarrow\cdots\rightarrow\bigoplus_{\{i_1,\ldots,i_r\}\in \binom{\mathfrak{X^\prime}}{r}}A\rightarrow\cdots \rightarrow A\rightarrow 0, \end{equation*} where $\mathfrak{X^\prime}= \{\gamma_1,\ldots,\gamma_{d-1}\}$, and for all $0\leq r\leq \lvert\mathfrak{X^\prime}\rvert-1$, the map $d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}}:A\rightarrow A$ from the component in the $r$-th term corresponding to $\{i_1,\ldots,i_r\}$ to the component corresponding to the $(r+1)$-element subset $\{j_1,\ldots,j_{r+1}\}$ is given by \begin{equation*} d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}} = \left\{ \begin{array}{ll} 0 & \mbox{if } \{i_1,\ldots,i_r\}\nsubseteq\{j_1,\ldots,j_{r+1}\}, \\ (-1)^{s_j}(\gamma_j-id) & \mbox{if } \{j_1,\ldots,j_{r+1}\}= \{i_1,\ldots,i_r\}\cup\{j\}, \end{array} \right.
\end{equation*} and $s_j$ is the number of elements of $\{i_1,\ldots,i_r\}$ smaller than $j$. Note that the complex $\Gamma_{LT}^\bullet(A)$ is the total complex of the double complex $\Gamma^\bullet_{\gamma_d}(\Gamma_{LT\backslash \gamma_d}^{\bullet}(A))$. Since the result holds for $\Gamma_{LT\backslash \gamma_d}^{\bullet}(A)$ by the induction hypothesis, the proof follows by the same argument as in the case where $\Gamma_{LT}^*$ is generated by $\gamma_1$ and $\gamma_2$. \end{proof} \begin{remark} \label{Independence of generators} For any representation $A$ of $\Gamma_{LT}^*$, the complex $\Gamma_{LT}^{\bullet}(A)$ clearly depends on the choice of generators of $\Gamma_{LT}^*$. However, the cohomology groups of the Koszul complex $\Gamma_{LT}^{\bullet}(A)$ are independent of this choice. \end{remark} \begin{proof} It is enough to prove the statement for $\mathcal{O}_K[[\Gamma_{LT}^\ast]]$, as the general case is obtained by tensoring with $A$. Let $\{\gamma_1,\gamma_2,\ldots,\gamma_n\}$ and $\{\gamma_1^\prime,\gamma_2^\prime,\ldots,\gamma_n^\prime\}$ be two sets of topological generators of $\Gamma_{LT}^*$. Then the ideal of $\mathcal{O}_K[[\Gamma_{LT}^\ast]]$ generated by $\{\gamma_1-1,\gamma_2-1,\ldots,\gamma_n-1\}$ is equal to the ideal generated by $\{\gamma_1^\prime-1,\gamma_2^\prime-1,\ldots,\gamma_n^\prime-1\}$.
Setting $T_i=\gamma_i-1$ and $S_i=\gamma_i^\prime-1$ for $1\leq i\leq n$, we obtain identifications $\mathcal{O}_K[[\Gamma_{LT}^*]]\cong\mathcal{O}_K[[T_1,T_2,\ldots,T_n]]\cong\mathcal{O}_K[[S_1,S_2,\ldots,S_n]]$. The matrix expressing the sequence $S_1,S_2,\ldots,S_n$ in terms of $T_1,T_2,\ldots,T_n$ has entries in $\mathcal{O}_K[[\Gamma_{LT}^\ast]]$ and is invertible over this ring. It then follows that the Koszul complexes with respect to the sequences $\{T_i:i=1,\ldots,n\}$ and $\{S_i:i=1,\ldots,n\}$ are quasi-isomorphic (see \cite[Section 15.28]{stacks-project}). \end{proof} \iffalse $S_1=\sum r_iT_i, S_j=T_j$ for all $j\geq2$. Let $\{e_1,\cdots,e_n\}$ be a set of formal basis for $\mathcal{O}[[\Gamma_{LT}]]^d$, and we set the basis elements of $\bigoplus_{(i_1,\cdots,i_r)\in \binom{\mathfrak{X}}{r}}\mathcal{O}[[\Gamma_{LT}]]$ by $\{e_{i_1}\wedge\cdots\wedge e_{i_r}\}$. Then consider a new set of basis vectors $\{f_1,\cdots,f_n\}$ for $\mathcal{O}[[\Gamma_{LT}]]^n$ such that the complex $\Gamma_{LT,\mathcal B_2}^{\bullet}(A)$ is defined by the sequence $S_1,\cdots, S_n$. This forms a set of basis vectors for $\mathcal{O}[[\Gamma_{LT}]]^n$. Consider the map $\theta^{\bullet}:\Gamma_{LT,\mathcal B_2}^{\bullet}(A)\longrightarrow\Gamma_{LT,\mathcal B_1}^{\bullet}(A)$ given by \begin{eqnarray*} \mbox{in degree 0}:& &id: 1\mapsto 1 \\ \mbox{in degree 1}:& &\theta_1:\begin{cases} e_1\mapsto \sum_{i=1}^{n}r_if_i\\ e_j\mapsto f_j, j\geq2 \end{cases}\\ \mbox{in degree }r\geq2:& &\theta_r:\begin{cases} e_1\wedge e_{i_2}\wedge\cdots\wedge e_{i_r}\mapsto &\sum r_if_i\wedge f_{i_2}\wedge\cdots\wedge f_{i_r}\\ e_{i_1}\wedge e_{i_2}\wedge\cdots\wedge e_{i_r}\mapsto &f_{i_1}\wedge f_{i_2}\wedge\cdots\wedge f_{i_r}, \mbox{ for }i_1\neq1. \end{cases} \end{eqnarray*} Then the map $\theta$ is a chain map of complexes.
Then we have the following maps: \begin{equation*} \begin{tikzcd} \bigoplus_{\{i_1,\cdots,i_r\}\in \binom{\mathfrak{X}}{r}}A\arrow{r}{(x_j)\mapsto (d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}}x_j)}\arrow[swap]{d}[swap]{\theta} & \bigoplus_{\{i_1,\ldots,i_{r+1}\}\in \binom{\mathfrak{X}}{r+1}}A\arrow[swap]{d}[swap]{\theta}\\ \bigoplus_{i_1\cdots,i_r\in \binom{\mathfrak{X}}{r}}A\arrow{r} & \bigoplus_{\{i_1,\ldots,i_{r+1}\}\in \binom{\mathfrak{X}}{r+1}}A. \end{tikzcd} \end{equation*} We show that this is a commutative diagram. Let $i_1=1$ and $d$ denote the differentials. As usual, we write $e_{i_1}\wedge\cdots\wedge\widehat e_{i_r}\wedge\cdots\wedge e_{i_j}$ to indicate that $e_{i_r}$ has been omitted from the wedge product. Then \begin{eqnarray*} \theta(d(e_{i_1}\wedge\cdots\wedge e_{i_r}))&=&\theta(S_1e_{1}\wedge\cdots\wedge\cdots\wedge e_{i_r}+\sum_{j=2}^n (-1)^{s_j}S_je_{i_1}\wedge\cdots\wedge \widehat e_j\cdots\wedge e_{i_r}). \end{eqnarray*} Further, \begin{eqnarray*} d(\theta(e_{i_1}\wedge\cdots\wedge e_{i_r})) &=& d(\sum_{i=1}^n r_if_{i}\wedge\cdots\wedge f_{i_r})\\ &=& \sum_{j=1}^n r_j(T_jf_{i_1}\wedge\cdots\wedge f_j\cdots\wedge f_{i_r} +\sum_{k=2}^j(-1)^{k+1} (T_{i_k}f_j\wedge f_{i_2}\wedge\cdots\wedge\widehat f_{i_k}\wedge\cdots\wedge f_{i_j})\\ &=& \sum_{j=1}^n r_j(T_jf_{i_1}\wedge\cdots\wedge f_j\cdots\wedge f_{i_r} +\sum_{k=2}^{r}(-1)^{k+1}S_{i_k}(\sum_{t=1}^{n}r_tf_t\wedge f_{i_2}\wedge\cdots\wedge\widehat{f_{i_k}}\wedge\cdots\wedge f_{i_r})\\ &=& S_1f_{i_2}\wedge\cdots\wedge f_{i_j} +\sum_{k=2}^{r}(-1)^{k+1}S_{i_k}(\sum_{t=1}^{n}r_tf_t\wedge f_{i_2}\wedge\cdots\wedge\widehat{f_{i_k}}\wedge\cdots\wedge f_{i_r})\\ \end{eqnarray*} \iffalse{ Consider the map $\theta$ from $\Gamma_{LT,\mathcal B_1}^{\bullet}(A)$ to the complex $\Gamma_{LT,\mathcal B_2}^{\bullet}(A)$ given by \begin{eqnarray*} \mbox{in degree 0}:& &id: x\mapsto x \\ \mbox{in degree 1}:& &\theta_1:\begin{cases} x_1\mapsto S_1x_1=\sum_{i=1}^{n}r_iT_ix_i\\ x_n\mapsto S_nx_n=T_nx_n, n\geq2 
\end{cases}\\ \mbox{in degree }d\geq2:& &\theta_d:\begin{cases} (x_1,x_{i_2},x_{1_3},\cdots,x_{i_n})\mapsto &(S_1x_1,S_{i_2}x_{i_2},\cdots,S_{i_n}x_{i_n})\\ &=(\sum_{i=1}^{n}r_iT_ix_1,T_{i_2}x_{i_2},T_{i_3}x_{i_3},\cdots,T_{i_n}x_{i_n})\\ (x_{i_1},x_{i_2},x_{i_3},\cdots,x_{i_n})\mapsto &(S_{i_1}x_{i_1},S_{i_2}x_{i_2},\cdots,S_{i_n}x_{i_n}). \end{cases} \end{eqnarray*} With this map, we have the following commutative diagram: \begin{equation*} \begin{tikzcd} A \arrow{r}{x\mapsto (T_ix)} \arrow[swap]{d}[swap]{id} & \bigoplus_{i_1\in \mathfrak{X^\prime}}A\arrow[swap]{d}[swap]{} \\ A \arrow{r}{x\mapsto (S_ix)} & \bigoplus_{i_1\in \mathfrak{X^\prime}}A. \end{tikzcd} \end{equation*} Let $i_1=1$, then we have the following maps: \begin{equation*} \begin{tikzcd} \bigoplus_{\{i_1,\cdots,i_r\}\in \binom{\mathfrak{X}}{r}}A\arrow{r}{(x_j)\mapsto (d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}}x_j)}\arrow[swap]{d}[swap]{} & \bigoplus_{\{i_1,\ldots,i_{r+1}\}\in \binom{\mathfrak{X}}{r+1}}A\arrow[swap]{d}[swap]{}\\ \bigoplus_{i_1\cdots,i_r\in \binom{\mathfrak{X}}{r}}A\arrow{r} & \bigoplus_{\{i_1,\ldots,i_{r+1}\}\in \binom{\mathfrak{X}}{r+1}}A. \end{tikzcd} \end{equation*} The case when $i_j\neq1$ can be dealt with similarly.. Then we have the following maps: \begin{equation*} \begin{tikzcd} \bigoplus_{\{i_1,\cdots,i_r\}\in \mathfrak{X}}A\arrow{r}{(x_j)\mapsto (T_ix_i)}\arrow[swap]{d}[swap]{} & \bigoplus_{\{i_1,\ldots,i_{r+1}\}\in \binom{\mathfrak{X}}{r+1}}A\arrow[swap]{d}[swap]{}\\ \bigoplus_{i_1\in \mathfrak{X}}A\arrow{r} & \bigoplus_{\{i_1,\ldots,i_{r+1}\}\in \binom{\mathfrak{X}}{r+1}}A. \end{tikzcd} \end{equation*} }\fi Here the $S_{i_j}=T_{i_j}$, for $i_j\geq1$, so the commutativity of the diagram follows. Hence, the map $\theta$ is a map of complexes. As $r_1$ is a unit, we have $T_1=r_1^{-1}(S_1-\sum_{k=1}^nr_kT_k), T_2=S_2,\cdots,T_n=S_n$. 
Then we can define a map $\Theta^{\bullet}:\Gamma_{LT,\mathcal B_1}^{\bullet}(A)\longrightarrow\Gamma_{LT,\mathcal B_2}^{\bullet}(A)$ given on the basis vectors by \begin{eqnarray*} \begin{cases} f_1\mapsto r_i^{-1}e_1-\sum_{i=2}^{n}r_1^{-1}r_ie_i\\ f_j\mapsto e_j, j\geq2. \end{cases}\\ \end{eqnarray*} This induces a map on the wedge product and hence gives us a map from the complexes. Then, we have \begin{eqnarray*} \Theta(\theta(e_1\wedge\cdots\wedge e_{i_r})) &=&\Theta(\sum_{k=1}^r r_kf_k\wedge f_{i_2}\wedge\cdots\wedge f_{i_r})\\ &=&r_1\Theta(f_1\wedge f_{i_2}\wedge\cdots\wedge f_{i_r})+\sum_{k=2}^nr_k\Theta(f_k\wedge f_{i_2}\wedge\cdots\wedge f_{i_j})\\ &=&r_1(r_1^{-1}e_1\wedge e_{i_2}\wedge\cdots\wedge e_{i_r}-\sum_{k=2}^nr_1^{-1}r_ke_k\wedge e_{i_2}\wedge\cdots\wedge e_{i_r})\\ & & +\sum_{k=2}^n r_ke_k\wedge e_{i_2}\wedge\cdots\wedge e_{i_r}\\ &=& e_1\wedge e_{i_2}\wedge\cdots\wedge e_{i_j}. \end{eqnarray*} Further, \begin{eqnarray*} \Theta(\theta(e_1)) &=&\Theta(\sum_{k=1}^n r_k f_k)\\ &=&r_1\Theta(f_1)+\Theta(\sum_{k=2}^n r_kf_k)\\ &=&r_1(r_1^{-1}e_1-\sum_{k=2}^n r_1^{-1}r_ke_k)+\sum_{k=2}^nr_ke_k\\ &=&e_1. \end{eqnarray*} We can similarly show that $\theta\circ\Theta$ is the identity map. Hence the map $\Theta^{\bullet}:\Gamma_{LT,\mathcal B_1}^{\bullet}(A)\longrightarrow\Gamma_{LT,\mathcal B_2}^{\bullet}(A)$ is an isomorphism of complexes. Let $\gamma_1',\gamma_2',\cdots,\gamma_n'$ $G$ be another set of generators for $\Gamma_{LT}^\ast$. Then every generator in $G$ can be expressed as $\prod_{j=1}^n\gamma_j^{k_{ji}}$, for some integers $k_{j1},\cdots,k_{jn}$. In each of these expressions, one of $k_{j1},\cdots,k_{jn}$ is a unit. Then, we rearrange the generators in $G$, and set $\gamma_1'=\prod_{j=1}^n\gamma_j^{k_{ji}}$, where $k_{11}$ is a unit. Now, the subgroup generated by $G\backslash\{\gamma_1'\}$ is equal to the subgroup generated by $\gamma_2,\cdots,\gamma_n$. 
Similarly, expressing the generators of $G\backslash\{\gamma_1'\}$ as powers of $\gamma_2,\cdots,\gamma_n$ and noticing that one of the powers is a unit in $\mathbb{Z}$, we get another generator in $G\backslash\{\gamma_1'\}$, which we call $\gamma_2'$ such that the subgroup generated by $G\backslash\{\gamma_1',\gamma_2'\}$ is equal to the subgroup generated by $\{\gamma_3,\cdots,\gamma_n\}$. Proceeding in this way, we get the following equality of groups generated: \begin{eqnarray*} \langle\gamma_1,\gamma_2,\cdots,\gamma_n\rangle&=& \langle\gamma_1',\gamma_2,\cdots,\gamma_n\rangle\\ &=& \langle\gamma_2,\gamma_3,\cdots,\gamma_1'\rangle\\ &=& \langle\gamma_2',\gamma_3,\cdots,\gamma_{n-1},\gamma_1'\rangle\\ &=& \langle\gamma_3,\gamma_4,\cdots,\gamma_{n-1},\gamma_1',\gamma_2'\rangle\\ &=& \cdots\quad\quad\cdots\\ &=& \langle\gamma_1',\gamma_2',\cdots,\gamma_n'\rangle \end{eqnarray*} Let $S_k=\gamma_k'-1$, for $k=1,\cdots,n$. Then the ideal $\langle S_1,\cdots,S_n\rangle=\langle T_1,\cdots,T_n\rangle$, and $S_k=\sum_{i=i}^{n}a_{kj}T_j$. Here one of the coefficients $a_{kj}$ is a unit in $\mathcal{O}[[\Gamma_{LT}^\ast]]$. Therefore, $\langle T_1,\cdots,T_n\rangle=\langle S_1, T_2,\cdots, T_n\rangle$. Proceeding in this way, we have: \begin{eqnarray*} \langle T_1,T_2,\cdots,T_n\rangle &=&\langle S_1,T_2,T_3\cdots,T_n\rangle\\ &=&\langle T_2,S_1,T_3,\cdots,T_n\rangle\\ &=&\langle S_2,S_1,T_3,\cdots,T_n\rangle\\ &=&\langle T_3,S_2,S_1,T_4,\cdots,T_n\rangle\\ & &\cdots\\ &=&\langle S_n,\cdots,S_2,S_1\rangle. \end{eqnarray*} Define $\frac{1}{id-\gamma_1}(a):= \lim\limits_{n\to\infty}\sum_{j=0}^{n}\gamma_1^j(a)$ for $a \in A$, where the series on the right hand side is convergent as $\Gamma_{LT}^*$ acts continuously on $A$. 
Then $\frac{\gamma_1^\prime-id}{\gamma_1-id}$ is in the fraction field $Frac(\mathcal{O}_K[[\Gamma_{LT}^*]])$, and we have the following diagram \begin{center} \begin{tikzcd} \Gamma_{LT,\gamma_1,\gamma_2}^{\bullet}(A): 0\arrow{r} & A \arrow{r}{x\mapsto A_0x} \arrow[swap]{d}[swap]{id} & A \oplus A \arrow{r}{x\mapsto A_1x} \arrow[swap]{d}[swap]{\frac{\gamma_1^\prime-id}{\gamma_1-id}\oplus id} & A \arrow{r}\arrow[swap]{d}[swap]{\frac{\gamma_1^\prime-id}{\gamma_1-id}} & 0\\ \Gamma_{LT,\gamma_1^\prime,\gamma_2}^{\bullet}(A): 0 \arrow{r} & A \arrow{r}[swap]{x\mapsto A_0^\prime x} & A\oplus A \arrow{r}[swap]{x\mapsto A_1^\prime x} & A \arrow{r}& 0, \end{tikzcd} \end{center} where \[ A_0= \begin{bmatrix} \gamma_1-id \\ \gamma_2-id \end{bmatrix}, A_1 = \begin{bmatrix} -(\gamma_2-id) & \gamma_1-id \end{bmatrix}, \] \[ A_0^\prime= \begin{bmatrix} \gamma_1^\prime-id \\ \gamma_2-id \end{bmatrix}, A_1^\prime = \begin{bmatrix} -(\gamma_2-id) & \gamma_1^\prime-id \end{bmatrix}. \] It is easy to check that the above diagram is commutative. By passing to the cohomology, it induces an isomorphism between $\mathcal{H}^i(\Gamma_{LT,\gamma_1,\gamma_2}^{\bullet}(A))$ and $\mathcal{H}^i(\Gamma_{LT,\gamma_1^\prime,\gamma_2}^{\bullet}(A))$. Similarly, it is easy to show that $\mathcal{H}^i(\Gamma_{LT,\gamma_1^\prime,\gamma_2}^{\bullet}(A))$ is naturally isomorphic to $\mathcal{H}^i(\Gamma_{LT,\gamma_1^\prime,\gamma_2^\prime}^{\bullet}(A))$. Therefore there is a natural isomorphism between $\mathcal{H}^i(\Gamma_{LT,\gamma_1,\gamma_2}^{\bullet}(A))$ and $\mathcal{H}^i(\Gamma_{LT,\gamma_1^\prime,\gamma_2^\prime}^{\bullet}(A))$. Now the general case follows by using induction on the number of generators of $\Gamma_{LT}^*$. \end{proof} \f Next, we define a complex, namely the Lubin-Tate Herr complex, which is a generalization of the Herr complex \cite{LH1}. \begin{definition} \label{LTHC} Let $M\in \varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$. 
Define the co-chain complex $\Phi\Gamma_{LT}^{\bullet}(M)$ as the total complex of the double complex $\Gamma_{LT}^{\bullet}(\Phi^{\bullet}(M^{\Delta}))$; we call it the \emph{Lubin-Tate Herr complex} for $M$. \end{definition} For the case $d=2$, the Lubin-Tate Herr complex is written out explicitly in the following example. Note that in the following examples $M=M^{\Delta}$; we write $M$ only for simplicity. \begin{example}{\label{Ex1}} Let $d=2$; then the Lubin-Tate Herr complex $\Phi\Gamma_{LT}^{\bullet}(M)$ is defined as: \begin{equation*} 0\rightarrow M\xrightarrow{x\mapsto A_{0,\varphi_q}x}M^{\oplus 3}\xrightarrow{x\mapsto A_{1,\varphi_q}x} M^{\oplus 3}\xrightarrow{x\mapsto A_{2,\varphi_q}x}M\rightarrow 0, \end{equation*} where \[ A_{0,\varphi_q}= \begin{bmatrix} \varphi_M-id \\ \gamma_1-id \\ \gamma_2-id \end{bmatrix}, A_{1,\varphi_q} = \begin{bmatrix} -(\gamma_1-id) & \varphi_M-id & 0 \\ -(\gamma_2-id) & 0 & \varphi_M-id \\ 0 & -(\gamma_2-id) & \gamma_1-id \end{bmatrix}, \] \[ A_{2,\varphi_q}= \begin{bmatrix} \gamma_2-id & -(\gamma_1-id)& \varphi_M-id \end{bmatrix}. \] \end{example} \begin{lemma}\label{dimension-shifting} The functors $V \mapsto \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V)))$, $i\geq 0$, form a cohomological $\delta$-functor from the category ${\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$ to the category of abelian groups. Moreover, for any $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$, we have $\mathcal{H}^0(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V)))\cong V^{G_K}.$ \end{lemma} \begin{proof} Let \begin{equation}\label{D} 0\rightarrow V_1\rightarrow V_2\rightarrow V_3\rightarrow 0 \end{equation} be a short exact sequence of discrete $\pi$-primary representations of $G_K$.
Since the functor $\mathbb{D}_{LT}$ is exact, we obtain the short exact sequence $0\rightarrow \mathbb{D}_{LT}(V_1)\rightarrow\mathbb{D}_{LT}(V_2)\rightarrow\mathbb{D}_{LT}(V_3)\rightarrow 0$ in $\varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$. By the Acyclic Assembly Lemma \cite[Lemma 2.7.3]{Wei}, we get a short exact sequence \begin{equation}\label{E} 0\rightarrow\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V_1))\rightarrow\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V_2))\rightarrow\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V_3))\rightarrow 0, \end{equation} of co-chain complexes. Then the long exact sequence associated to (\ref{E}) gives maps \begin{equation*} \delta^i: \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V_3)))\rightarrow\mathcal{H}^{i+1}(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V_1))), \end{equation*} which are functorial in (\ref{D}). Therefore $V \mapsto \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V)))$, $i\geq 0$, is a cohomological $\delta$-functor from the category ${\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$ to the category of abelian groups. For the second part, note that $\varphi_q$ acts trivially on $V$ and commutes with the action of $G_K$, so: \begin{align*} \mathbb{D}_{LT}(V)^{\varphi_{\mathbb{D}_{LT}(V)} = id} & = ((\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K} V)^{H_K})^{\varphi_{\mathbb{D}_{LT}(V)}=id}\\ & = (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}^{\varphi_q= 1}\otimes_{\mathcal{O}_K} V)^{H_K}\\ & = (\mathcal{O}_K\otimes_{\mathcal{O}_K} V)^{H_K} \\ & \cong V^{H_K}, \end{align*} where the third equality follows from Lemma \ref{Exact}.
Therefore $$\mathbb{D}_{LT}(V)^{\varphi_{\mathbb{D}_{LT}(V)}=id,\Gamma_{LT}=id} \cong (V^{H_K})^{\Gamma_{LT}=id}=V^{G_K}.$$ On the other hand, \begin{align*} \mathcal{H}^0(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V)))=& (\mathbb{D}_{LT}(V)^\Delta)^{\varphi_{\mathbb{D}_{LT}(V)} = id,\Gamma_{LT}^*=id}\\=&\mathbb{D}_{LT}(V)^{\varphi_{\mathbb{D}_{LT}(V)} = id,\Gamma_{LT}=id}. \end{align*} Hence \begin{equation*} \mathcal{H}^0\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V))\cong V^{G_K}. \end{equation*} \end{proof} \begin{theorem}\label{G_K-cohomo} Let $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$. Then $H^i(G_K,V)\cong \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V)))$ for $i\geq 0$, i.e., the Lubin-Tate Herr complex $\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V))$ computes the Galois cohomology of $G_K$ with coefficients in $V$. \end{theorem} \begin{proof} As $(H^i(G_K,-))_{i\geq 0}$ is a universal $\delta$-functor and $(\mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(-))))_{i\geq 0}$ is a cohomological $\delta$-functor such that $\mathcal{H}^0(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(-)))\cong H^0(G_K,-)$, we have a natural transformation \begin{equation*} H^i(G_K,-)\rightarrow \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(-))) \end{equation*} of $\delta$-functors. First, assume that $V$ is an injective object in ${\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$. Then there is a spectral sequence \begin{equation}\label{F} E_2^{mn}= \mathcal{H}^m(\Gamma_{LT}^{\bullet}(\mathcal{H}^n(\Phi^{\bullet}(\mathbb{D}_{LT}(V)^{\Delta}))))\Rightarrow \mathcal{H}^{m+n}(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V))) \end{equation} associated to the double complex $\Gamma_{LT}^{\bullet}(\Phi^{\bullet}(\mathbb{D}_{LT}(V)^\Delta))$, and associated to the group $G_K$, we have the Hochschild-Serre spectral sequence \begin{equation}\label{hsss} E_2^{mn}= H^m(\Gamma_{LT}^*,H^n(H_K^*,V))\Rightarrow H^{m+n}(G_K,V). 
\end{equation} Since $V$ is injective, it follows from Proposition \ref{H_K^*-cohomo} that the augmentation map $$V^{H_K^*}[0]\rightarrow \Phi^{\bullet}(\mathbb{D}_{LT}(V)^\Delta)$$ is a quasi-isomorphism. Also, $V^{H_K^*}$ is injective as a discrete representation of $\Gamma_{LT}^*$, so by Proposition \ref{Gamma^*-cohomo} the map $$V^{G_K}[0]\rightarrow\Gamma_{LT}^{\bullet}(V^{H_K^*})$$ is a quasi-isomorphism of complexes. Now the natural transformation $H^i(G_K,-)\rightarrow \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(-)))$ and the spectral sequences (\ref{F}) and (\ref{hsss}) give the following isomorphism \begin{equation*} H^i(G_K,V)\cong \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{LT}(V)))\quad \text{for}\ i\geq 0. \end{equation*} Since the category ${\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$ has enough injectives, the general case follows from Lemma \ref{dimension-shifting} by dimension shifting. \end{proof} Let $V \in {\bf Rep}_{\mathcal{O}_K}(G_K)$. Then we have \begin{align*} V&=\varprojlim V\otimes_{\mathcal{O}_K}\mathcal{O}_K/ \pi^n\mathcal{O}_K\\ &\cong\varprojlim V/\pi^nV, \end{align*} where each $V/\pi^nV$ is $\pi$-power torsion and discrete, being finite. Therefore any object in ${\bf Rep}_{\mathcal{O}_K}(G_K)$ is the inverse limit of objects in the category ${\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$. \begin{lemma}\label{commutes inverse} Let $V\in {\bf Rep}_{\mathcal{O}_K}(G_K)$. Then the functor $H^i(G_K,-)$ commutes with inverse limits, i.e., $H^i(G_K,V)\cong \varprojlim\limits_n H^i(G_K,V/\pi^nV)$. \end{lemma} \begin{proof} As $k$ is finite, the cohomology groups $H^i(G_K,V/\pi^nV)$ are finite for all $n$ (\cite[Theorem $2.1$]{Ta1}).
Now the result follows from \cite[Corollary $2.2$]{Tate}. \end{proof} \begin{theorem}\label{lattices} Let $V\in {\bf Rep}_{\mathcal{O}_K}(G_K)$. Then we have $H^i(G_K,V)\cong \mathcal{H}^i(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(V)))$ for $i\geq0$. \end{theorem} \begin{proof} First, we show that the functor $\mathcal{H}^i(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(-)))$ commutes with inverse limits. The transition maps in the projective system $(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(V/\pi^nV)))_n$ of co-chain complexes of abelian groups are surjective, so the first hyper-cohomology spectral sequence degenerates at $E_2$. Moreover, it follows from Lemma \ref{commutes inverse} that $\varprojlim\limits_n{}^{1} \mathcal{H}^i(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(V/\pi^nV)))=0$. Therefore the second hyper-cohomology spectral sequence \begin{equation*} \varprojlim\limits_n{}^{i} \mathcal{H}^j(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(V/\pi^nV))) \Rightarrow \mathcal{H}^{i+j}(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(V))) \end{equation*} also degenerates at $E_2$. Thus $\varprojlim\limits_n \mathcal{H}^i(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(V/\pi^nV)))= \mathcal{H}^i(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(V)))$. Now \begin{align*} H^i(G_K,V)&\cong \varprojlim\limits_n H^i(G_K,V/\pi^nV) \\ & \cong \varprojlim\limits_n \mathcal{H}^i(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(V/\pi^nV))) \\ & \cong \mathcal{H}^i(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(V))), \end{align*} where the first isomorphism follows from Lemma \ref{commutes inverse} and the second is induced from Theorem \ref{G_K-cohomo}. \end{proof} \begin{corollary}\label{zero} Let $V\in {\bf Rep}_{\mathcal{O}_K}(G_K)$. Then $\mathcal{H}^i(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(V)))=0$ for $i\geq 3$, although this is not obvious from the definition of the Lubin-Tate Herr complex.
\end{corollary} \begin{proof} Recall the classical result that the groups $H^i(G_K,V)$ are trivial for $i\geq 3$ \cite[Chapter II, Proposition 12]{Ser}. Then it follows from the above theorem that $\mathcal{H}^i(\Phi\Gamma_{LT}^\bullet(\mathbb{D}_{LT}(V)))=0$ for $i\geq3$. \end{proof} \section{Galois Cohomology over the False-Tate Type Extensions}\label{sec4} In this section, we assume that $K$ contains the $\pi$-torsion points of the Lubin-Tate group $\mathcal{F}$. Recall that $\mathfrak{m}_K$ is the maximal ideal of $\mathcal{O}_K$. For any $x\in\mathfrak{m}_K\backslash\mathfrak{m}^2_K$, choose a system $(x_i)_{i\geq1}$ such that $[p](x_1)= x$ and $[p](x_{i+1})= x_i$ for all $i\geq 1$. Define $\tilde{K}:= K(x_i)_{i\geq 1}$; the extension $\tilde{K}/K$ is not Galois. Let $L:= K_{\infty}\tilde{K}$; then it is easy to see that the extension $L/K$ is Galois. Moreover, $L/K$ is arithmetically pro-finite as $\operatorname{Gal}(L/K)$ is a $p$-adic Lie group. As in \cite{Win}, we consider the field of norms for this extension. The fraction field $\operatorname{Fr}(\mathcal{R})$ contains the field of norms $E_L:= X_K(L)$ in a natural way, and $\operatorname{Gal}(\bar{K}/L)\cong \operatorname{Gal}(E^{sep}/E_L)$. Recall that the ring $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ is a complete discrete valuation ring with residue field $E^{sep}$ and is stable under the action of $G_K$ and $\varphi_q$. Define $\mathcal{O}_{\mathcal{L}}:=(\mathcal{O}_{\widehat{\mathcal{E}^{ur}}})^{\operatorname{Gal}(\bar{K}/L)}$. Then $E_L = (E^{sep})^{\operatorname{Gal}(\bar{K}/L)}$ and $\mathcal{O}_{\mathcal{L}}$ is a complete discrete valuation ring with residue field $E_L$. Moreover, the ring $\mathcal{O}_{\mathcal{L}}$ is stable under the action of $G_K$ and $\varphi_q$. Define $\Gamma_{LT,FT}:= \operatorname{Gal}(L/K)$. We summarize the above notations in Figure \ref{Fig 1} and Figure \ref{Fig 2}.
\begin{figure}[h] \centering \begin{minipage}{.50\textwidth} \centering \begin{tikzcd} & & & \bar{K} \\ &{}& & \\ & L \arrow[dd, "{\Gamma_{LT,FT}}", no head] \arrow[rruu, "H_L", no head] & & \\ \tilde{K} \arrow[ru, no head] \arrow[ruu, phantom]& & K_\infty \arrow[lu, no head] \arrow[ruuu, "H_K"', no head] & \\ & K \arrow[ru, "\Gamma_{LT}"', no head] \arrow[lu, no head] \arrow[rruuuu, "G_K"', no head, bend right=60] & & \end{tikzcd} \caption{Field extensions of $K$} \label{Fig 1} \end{minipage}% \begin{minipage}{.50\textwidth} \centering \begin{tikzcd} & & \operatorname{Fr}(\mathcal{R}) \\ & & \\ & & E^{sep} \arrow[uu, no head] \\ E_L \arrow[rru, "H_L", no head] & &\\ & E \arrow[lu, no head] \arrow[ruu, "H_K"', no head] & \end{tikzcd} \caption{Field extensions of $E$}\label{Fig 2} \end{minipage} \end{figure} Now for any $V \in {\bf Rep}_{\mathcal{O}_K}(G_K)$, define \begin{equation*} \mathbb{D}_{LT,FT}(V):= (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K} V)^{\operatorname{Gal}(\bar{K}/L)}. \end{equation*} Let ${\bf Mod}^{\varphi_q,\Gamma_{LT,FT},\acute{e}t}_{/\mathcal{O}_{\mathcal{L}}}$ be the category of finite free \'{e}tale $(\varphi_q,\Gamma_{LT,FT})$-modules over $\mathcal{O}_\mathcal{L}$. Then the modules $\mathbb{D}_{LT,FT}(V)$ and $\mathbb{D}_{LT}(V)\otimes_{\mathcal{O}_{\mathcal{E}}}\mathcal{O}_{\mathcal{L}}$ are in the category ${\bf Mod}^{\varphi_q,\Gamma_{LT,FT},\acute{e}t}_{/\mathcal{O}_{\mathcal{L}}}$, and there is a natural map $\iota: \mathbb{D}_{LT}(V)\otimes_{\mathcal{O}_{\mathcal{E}}}\mathcal{O}_{\mathcal{L}}\rightarrow \mathbb{D}_{LT,FT}(V)$. \begin{proposition}\label{composite functor} The map $\iota$ is an isomorphism of \'{e}tale $(\varphi_q,\Gamma_{LT,FT})$-modules over $\mathcal{O}_{\mathcal{L}}$. 
\end{proposition} \begin{proof} Consider the isomorphism $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_{\mathcal{E}}}\mathbb{D}_{LT}(V) \cong \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K} V$ of \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules over $\mathcal{O}_{\mathcal{E}}$. Then \begin{align*} \mathbb{D}_{LT}(V)\otimes_{\mathcal{O}_{\mathcal{E}}}\mathcal{O}_{\mathcal{L}} &=\mathbb{D}_{LT}(V)\otimes_{\mathcal{O}_{\mathcal{E}}} (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}})^{\operatorname{Gal}(\bar{K}/L)} \\&= (\mathbb{D}_{LT}(V)\otimes_{\mathcal{O}_{\mathcal{E}}} \mathcal{O}_{\widehat{\mathcal{E}^{ur}}})^{\operatorname{Gal}(\bar{K}/L)} \\& \cong (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K} V)^{\operatorname{Gal}(\bar{K}/L)}\\ &= \mathbb{D}_{LT,FT}(V), \end{align*} where the second identity follows from the fact that $\operatorname{Gal}(\bar{K}/L)\subseteq \operatorname{Gal}(\bar{K}/K_\infty)$. Thus $\iota$ is an isomorphism of \'{e}tale $(\varphi_q,\Gamma_{LT,FT})$-modules. \end{proof} Similarly, for any $M \in {\bf Mod}^{\varphi_q,\Gamma_{LT,FT},\acute{e}t}_{/\mathcal{O}_{\mathcal{L}}}$, define \begin{equation} \mathbb{V}_{LT,FT}(M):=(\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_{\mathcal{L}}} M)^{\varphi_q\otimes\varphi_M= id}. \end{equation} \begin{theorem}\label{False-Tate equivalence} The functor $\mathbb{D}_{LT,FT}$ defines an exact equivalence of categories between ${\bf Rep}_{\mathcal{O}_K}(G_K)$ $\ (\text{resp.,}\, {\bf Rep}_{\mathcal{O}_K-tor}(G_K))$ and ${\bf Mod}^{\varphi_q,\Gamma_{LT,FT},\acute{e}t}_{/\mathcal{O}_{\mathcal{L}}}$ $(\text{resp.,}\,{\bf Mod}^{\varphi_q,\Gamma_{LT,FT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{L}}})$ with a quasi-inverse functor $\mathbb{V}_{LT,FT}$. 
\end{theorem} \begin{proof} Since the functor $\mathbb{D}_{LT,FT}$ is the composite of the functor $\mathbb{D}_{LT}$ with the scalar extension $\otimes_{\mathcal{O}_{\mathcal{E}}} \mathcal{O}_{\mathcal{L}}$, the proof follows from Proposition \ref{composite functor} and Theorem \ref{Kisin Ren}. \end{proof} \begin{remark}\label{choice} The extension $\tilde{K}$ is not canonical. We can also define $\tilde{K}$ as follows: \begin{enumerate} \item Define $K_{cyc}:= K(\mu_{p^n})_{n\geq 1}$. Suppose $K_{cyc}\subseteq K_{\infty}$ and set $\tilde{K}:= K(\pi^{p^{-r}}, r\geq 1)$; then $L= K_\infty \tilde{K}$ is a Galois extension of $K$ and $\operatorname{Gal}(L/K_\infty)\cong \mathbb{Z}_p$. The case when $K_{cyc}= K_{\infty}$ has been considered in \cite{Flo} and \cite{LH2}. \item We can also define $\tilde{K}:=K(y_i)_{i\geq 1}$, where $(y_i)_{i\geq 1}$ is a system satisfying $[\pi](y_1)= y$ and $[\pi](y_{i+1})= y_i$ for all $i\geq 1$ and $y \in \mathfrak{m}_K\backslash \mathfrak{m}_K^2$. In this case, $\operatorname{Gal}(L/K_\infty)$ is isomorphic to an open subgroup of $\mathbb{Z}_p$. \end{enumerate} \end{remark} Using methods similar to those explained in Section \ref{sec3}, we extend the functor $\mathbb{D}_{LT,FT}$ to the category of discrete $\pi$-primary abelian groups with a continuous action of $G_K$. Then we have the following result. \begin{theorem} The functors $\mathbb{D}_{LT,FT}$ and $\mathbb{V}_{LT,FT}$ define quasi-inverse equivalences of categories between the category $ {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$ and $\varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT,FT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{L}}}$. \end{theorem} \begin{proof} Since the functor $\mathbb{D}_{LT}$ commutes with direct limits, it follows from Proposition \ref{composite functor} that the functor $\mathbb{D}_{LT,FT}$ also commutes with direct limits.
Now the result follows from Theorem \ref{False-Tate equivalence} by taking direct limits and noting that the functor $\mathbb{V}_{LT,FT}$ also commutes with direct limits. \end{proof} Since $H_L\cong \operatorname{Gal}(E^{sep}/E_L)$, it follows that $H^i(H_L,\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}/\pi^n )=0$ for all $n\geq 1$ and $i\geq 1$. Moreover, for any $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$, we have $\mathcal{H}^i(\Phi^{\bullet}(\mathbb{D}_{LT,FT}(V)))\cong H^i(H_L, V)$ as $\Gamma_{LT,FT}$-modules. Recall that $K$ contains the $\pi$-torsion points of the Lubin-Tate group $\mathcal{F}$, and $\Gamma_{LT}^*= \langle\gamma_1,\gamma_2,\ldots,\gamma_d\rangle$ as an abelian group. Let $\tilde{\gamma}$ be a topological generator of $\operatorname{Gal}(L/K_\infty)$. We lift $\gamma_1,\gamma_2,\ldots,\gamma_d$ to elements of $\operatorname{Gal}(L/\tilde{K})$. Then $\Gamma_{LT,FT}:=\Gamma_{LT}^*\ltimes \mathbb{Z}_p$ is topologically generated by the set $\tilde{\mathfrak{X}}:=\{\gamma_1,\gamma_2, \ldots,\gamma_d,\tilde{\gamma}\}$ with the relations $\gamma_i\tilde{\gamma}= \tilde{\gamma}^{a_i}\gamma_i$, where $a_i= \chi_{LT}(\gamma_i)\in \mathbb{Z}_p^\times$ for all $i=1,\ldots,d$, and $\chi_{LT}$ is the Lubin-Tate character. Let $A$ be an arbitrary representation of the group $\Gamma_{LT,FT}$.
Then consider the complex \begin{equation*} \Gamma_{LT,FT}^{\bullet}(A): 0\rightarrow A \rightarrow \bigoplus_{i_1\in \tilde{\mathfrak{X}}} A\rightarrow\cdots\rightarrow \bigoplus_{\{i_1,\ldots,i_r\} \in \binom{\tilde{\mathfrak{X}}}{r}} A\rightarrow\cdots \rightarrow A\rightarrow 0, \end{equation*} where for all $0\leq r\leq |\tilde{\mathfrak{X}}|-1$, the map $d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}}:A\rightarrow A$ from the component in the $r$-th term corresponding to $\{i_1,\ldots,i_r\}$ to the component corresponding to the $(r+1)$-element set $\{j_1,\ldots,j_{r+1}\}$ is given by \begin{equation*} d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}} = \left\{ \begin{array}{ll} 0 & \mbox{if } \{i_1,\ldots,i_r\}\nsubseteq\{j_1,\ldots,j_{r+1}\}, \\ (-1)^{s_j}(\gamma_j-id) & \mbox{if } \{j_1,\ldots,j_{r+1}\}= \{i_1,\ldots,i_r\}\cup\{j\} \\ & \mbox{and } \{i_1,\ldots,i_{r}\}\ \mbox{does not contain}\ \tilde{\gamma}, \\ (-1)^{s_j+1}\left(\gamma_j-\frac{\tilde{\gamma}^{\chi_{LT}(j)\chi_{LT}(i_1)\cdots\chi_{LT}(i_r)}-id}{\tilde{\gamma}^{\chi_{LT}(i_1)\cdots\chi_{LT}(i_r)}-id}\right) & \mbox{if } \{j_1,\ldots,j_{r+1}\}= \{i_1,\ldots,i_r\}\cup\{j\} \\ & \mbox{and } \{i_1,\ldots,i_{r}\}\ \mbox{contains}\ \tilde{\gamma}, \\ \tilde{\gamma}^{\chi_{LT}(i_1)\cdots\chi_{LT}(i_r)}-id& \mbox{if } \{j_1,\ldots,j_{r+1}\}= \{i_1,\ldots,i_r\}\cup\{\tilde{\gamma}\}, \end{array} \right. \end{equation*} where $s_j$ is the number of elements in the set $\{i_1,\ldots,i_r\}$ that are smaller than $j$.
\begin{example}{\label{Ex2}} Let $d=2$; then the complex $\Gamma_{LT,FT}^\bullet(A)$ is defined as follows: \begin{equation*} \Gamma_{LT,FT}^\bullet(A): 0\rightarrow A\xrightarrow{x\mapsto A_0x} A^{\oplus3}\xrightarrow{x \mapsto A_1x} A^{\oplus3}\xrightarrow{x\mapsto A_2x}A\rightarrow 0, \end{equation*} where \[ A_0= \begin{bmatrix} \gamma_1-id\\ \gamma_2-id \\ \tilde{\gamma}-id \end{bmatrix}, A_1= \begin{bmatrix} -(\gamma_2-id) & \gamma_1-id & 0\\ \tilde{\gamma}^{a_1}-id & 0 & -\left(\gamma_1- \frac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id}\right)\\ 0 & \tilde{\gamma}^{a_2}-id & -\left(\gamma_2-\frac{\tilde{\gamma}^{a_2}-id}{\tilde{\gamma}-id}\right) \end{bmatrix}, \] \[ A_2= \begin{bmatrix} \tilde{\gamma}^{a_1a_2}-id & \gamma_2-\frac{\tilde{\gamma}^{a_1a_2}-id}{\tilde{\gamma}^{a_1}-id} & -\left(\gamma_1-\frac{\tilde{\gamma}^{a_1a_2}-id}{\tilde{\gamma}^{a_2}-id}\right) \end{bmatrix}. \] \end{example} The functors $A\mapsto \mathcal{H}^i(\Gamma_{LT,FT}^\bullet(A))$, $i\geq 0$, form a cohomological $\delta$-functor.
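As a consistency check on Example \ref{Ex2}, one can verify directly that $A_1A_0=0$ using only the relation $\gamma_1\tilde{\gamma}=\tilde{\gamma}^{a_1}\gamma_1$; the following sample computation treats the second row of $A_1$ (the remaining rows, and the identity $A_2A_1=0$, follow by analogous computations). Since $\frac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id}$ commutes with $\tilde{\gamma}-id$ and their product is $\tilde{\gamma}^{a_1}-id$, we have
\begin{align*}
(\tilde{\gamma}^{a_1}-id)(\gamma_1-id)-\left(\gamma_1-\frac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id}\right)(\tilde{\gamma}-id)
&=(\tilde{\gamma}^{a_1}-id)(\gamma_1-id)-\gamma_1(\tilde{\gamma}-id)+(\tilde{\gamma}^{a_1}-id)\\
&=\tilde{\gamma}^{a_1}\gamma_1-\gamma_1\tilde{\gamma}\\
&=0.
\end{align*}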
Note that the complex $\Gamma_{LT,FT}^\bullet(A)$ is the total complex of the following double complex \begin{center} \begin{tikzcd}[row sep=large] & 0 & 0 & & 0 & & 0 \\ \Gamma_{LT,FT \backslash \gamma_d}^{\bullet d}(A) : 0\arrow{r} & A \arrow{r}\arrow[swap]{u} & \underset{i_1\in \tilde{\mathfrak{X}}^\prime}\bigoplus A \arrow{r}\arrow[swap]{u} & \cdots \arrow{r} & \underset{\{i_1,\ldots,i_r\}\in \binom{\tilde{\mathfrak{X}}^\prime}{r}}\bigoplus A\arrow{r}\arrow[swap]{u} & \cdots \arrow{r}& A \arrow{r}\arrow[swap]{u} & 0\\ \Gamma_{LT,FT\backslash \gamma_d}^\bullet(A):0\arrow{r} & A \arrow{r}\arrow[swap]{u}[swap]{\gamma_d-id} & \underset{i_1\in \tilde{\mathfrak{X}}^\prime}\bigoplus A\arrow{r}\arrow[swap]{u} & \cdots \arrow{r} & \underset{\{i_1,\ldots,i_r\}\in \binom{\tilde{\mathfrak{X}}^\prime}{r}}\bigoplus A\arrow{r}\arrow[swap]{u} & \cdots \arrow{r}& A \arrow{r}\arrow[swap]{u} & 0\\ & 0 \arrow[swap]{u} & 0 \arrow[swap]{u} & & 0\arrow[swap]{u} & & 0 \arrow[swap]{u} \end{tikzcd} \end{center} where $\tilde{\mathfrak{X}}^{\prime} = \{\gamma_1,\ldots,\gamma_{d-1},\tilde{\gamma}\}$ and $\Gamma_{LT,FT \backslash \gamma_d}^{\bullet d}(A)$ denotes the complex $\Gamma_{LT,FT \backslash \gamma_d}^{\bullet}(A)$ with $\tilde{\gamma}$ replaced by $\tilde{\gamma}^{a_d}$. 
The vertical maps $d_{b_1,\ldots,b_r}^{c_1,\ldots,c_r}: A\rightarrow A$, from the component corresponding to $\{b_1,\ldots,b_r\}$ in the $r$-th term of $\Gamma_{LT,FT\backslash \gamma_d}^{\bullet}(A)$ to the component corresponding to $\{c_1,\ldots,c_r\}$ in the $r$-th term of $\Gamma_{LT,FT \backslash \gamma_d}^{\bullet d}(A)$, are given by \begin{equation*} d_{b_1,\ldots,b_r}^{c_1,\ldots,c_r}= \left\{ \begin{array}{ll} \gamma_d-id & \mbox{if } \tilde{\gamma}\notin\{b_1,\ldots,b_r\}, \\ \dfrac{(\tilde{\gamma}^{a_d\chi_{LT}(b_1)\cdots\chi_{LT}(b_r)}-id)(\gamma_d-id)}{\tilde{\gamma}^{\chi_{LT}(b_1)\cdots\chi_{LT}(b_r)}-id} & \mbox{if } \tilde{\gamma}\in\{b_1,\ldots,b_{r}\}. \end{array} \right. \end{equation*} Using a technique similar to that in the proof of Proposition \ref{Gamma^*-cohomo}, it follows that the complex $\Gamma_{LT,FT}^\bullet(A)$ computes the $\Gamma_{LT,FT}$-cohomology of $A$ and that $\mathcal{H}^0(\Gamma_{LT,FT}^\bullet(A))= A^{\Gamma_{LT,FT}}$, for a discrete $\pi$-primary abelian group $A$ with a continuous action of $\Gamma_{LT,FT}$. \begin{definition} \label{FTHC} Let $M \in \varinjlim{\bf Mod}^{\varphi_q,\Gamma_{LT,FT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{L}}}$. We define the co-chain complex $\Phi\Gamma_{LT,FT}^{\bullet}(M)$ as the total complex of the double complex $\Gamma_{LT,FT}^\bullet(\Phi^\bullet(M))$ and call it the \emph{False-Tate type Herr complex} for $M$.
\end{definition} \begin{example}{\label{Ex3}} In the case of $d=2$, the False-Tate type Herr complex is defined as follows: \begin{equation*} 0\rightarrow M\xrightarrow{x\mapsto A_{0,\varphi_q}x}M^{\oplus 4}\xrightarrow{x\mapsto A_{1,\varphi_q}x} M^{\oplus 6}\xrightarrow{x\mapsto A_{2,\varphi_q}x}M^{\oplus4}\xrightarrow{x\mapsto A_{3,\varphi_q}x}M\rightarrow 0, \end{equation*} where \[ A_{0,\varphi_q}= \begin{bmatrix} \varphi_M-id \\ \gamma_1-id\\ \gamma_2-id \\ \tilde{\gamma}-id \end{bmatrix}, A_{1,\varphi_q}= \begin{bmatrix} -(\gamma_1-id) & \varphi_M-id & 0 & 0\\ -(\gamma_2-id) & 0 & \varphi_M-id & 0\\ -(\tilde{\gamma}-id) & 0 & 0 & \varphi_M-id\\ 0 & -(\gamma_2-id) & \gamma_1-id & 0\\ 0 & \tilde{\gamma}^{a_1}-id & 0 & -\left(\gamma_1-\frac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id}\right)\\ 0 & 0 & \tilde{\gamma}^{a_2}-id & -\left(\gamma_2-\frac{\tilde{\gamma}^{a_2}-id}{\tilde{\gamma}-id}\right) \end{bmatrix}, \] \[ A_{2,\varphi_q}= \begin{bmatrix} \gamma_2-id & -(\gamma_1-id) & 0 & \varphi_M-id & 0 & 0 \\ -(\tilde{\gamma}^{a_1}-id) & 0 & \gamma_1-\frac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id} & 0 & \varphi_M-id & 0\\ 0 & -(\tilde{\gamma}^{a_2}-id) & \gamma_2-\frac{\tilde{\gamma}^{a_2}-id}{\tilde{\gamma}-id} & 0 & 0 & \varphi_M-id\\ 0 & 0 & 0 & \tilde{\gamma}^{a_1a_2}-id & \gamma_2-\frac{\tilde{\gamma}^{a_1a_2}-id}{\tilde{\gamma}^{ a_1}-id} & -\left(\gamma_1-\frac{\tilde{\gamma}^{a_1a_2}-id}{\tilde{\gamma}^{ a_2}-id}\right) \end{bmatrix}, \] \[ A_{3,\varphi_q}= \begin{bmatrix} -(\tilde{\gamma}^{a_1a_2}-id) & -\left(\gamma_2-\frac{\tilde{\gamma}^{a_1a_2}-id}{\tilde{\gamma}^{ a_1}-id}\right) & \gamma_1-\frac{\tilde{\gamma}^{a_1a_2}-id}{\tilde{\gamma}^{a_2}-id} & \varphi_M-id \\ \end{bmatrix}. 
\] \end{example} Now the cohomology of the complex $\Phi\Gamma_{LT,FT}^{\bullet}(-)$ gives the cohomological functors $(\mathcal{H}^i(\Phi\Gamma_{LT,FT}^{\bullet}(-)))_{i\geq 0}$ from $\varinjlim{\bf Mod}^{\varphi_q,\Gamma_{LT,FT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{L}}}$ to the category of abelian groups. Then we have the following theorem. \begin{theorem}\label{Main4} For any $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$, we have a natural isomorphism \begin{equation*} H^i(G_K,V)\cong \mathcal{H}^i(\Phi\Gamma_{LT,FT}^{\bullet}(\mathbb{D}_{LT,FT}(V)))\quad \text{for}\ i\geq 0. \end{equation*} \end{theorem} \begin{proof} Note that the family of functors $(V\mapsto\mathcal{H}^i(\Phi\Gamma_{LT,FT}^{\bullet}(\mathbb{D}_{LT,FT}(V))))_{i\geq 0}$ forms a cohomological $\delta$-functor such that $\mathcal{H}^0(\Phi\Gamma_{LT,FT}^{\bullet}(\mathbb{D}_{LT,FT}(V)))\cong H^0(G_K,V)$. Hence the result follows as in the proof of Theorem \ref{G_K-cohomo}. \end{proof} \begin{corollary} Let $V \in {\bf Rep}_{\mathcal{O}_K}(G_K)$. Then the False-Tate type Herr complex computes the Galois cohomology of $G_K$ with coefficients in $V$. \end{corollary} \begin{proof} The proof is similar to that of Theorem \ref{lattices}. \end{proof} \section{The Operator $\psi_q$} \label{section psi} Recall that the residue field of $\mathcal{O}_{\mathcal{E}}$ is $E=k((X))$, which is not perfect, so $\varphi_q$ is not an automorphism but is injective. The field $\widehat{\mathcal{E}^{ur}}$, which is the fraction field of $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$, is an extension of degree $q$ of $\varphi_{q}(\widehat{\mathcal{E}^{ur}})$. Put $\operatorname{tr} = \operatorname{trace}_{\widehat{\mathcal{E}^{ur}}/\varphi_q(\widehat{\mathcal{E}^{ur}})}$. Define \begin{equation*} \psi_q:\widehat{\mathcal{E}^{ur}} \rightarrow \widehat{\mathcal{E}^{ur}} \end{equation*} such that \begin{equation*} \varphi_q(\psi_q(x))=\frac{1}{\pi}(\operatorname{tr}(x)).
\end{equation*} The map $\psi_q$ maps $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ to $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ and $\mathcal{O}_{\mathcal{E}}$ to $\mathcal{O}_{\mathcal{E}}$ \cite[Remark $3.1$]{SV}. This follows from the fact that the residue extensions $E^{sep}/\varphi_q(E^{sep})$ and $E/\varphi_q(E)$ are purely inseparable. Therefore the trace map defined by \begin{equation*} \operatorname{tr}(x) = \text{trace}_{\widehat{\mathcal{E}^{ur}}/\varphi_q(\widehat{\mathcal{E}^{ur}})}(x) = \text{trace}_{\varphi_q(\widehat{\mathcal{{E}}^{ur}})}(y \mapsto xy) \end{equation*} is the zero map when computed for these residue extensions. Hence if $x \in \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$, then $\operatorname{tr}(x) \in \pi\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$. Moreover, \begin{equation*} \operatorname{tr}_{\widehat{\mathcal{E}^{ur}}/\varphi_q(\widehat{\mathcal{E}^{ur}})}(\varphi_q(x))= q \varphi_q(x) \end{equation*} implies that \begin{equation*} \psi_q(\varphi_q(x)) = \frac{q}{\pi}(x). \end{equation*} Hence \begin{equation*} \psi_q\circ\varphi_q = \frac{q}{\pi}id. \end{equation*} We extend the map $\psi_q$ to $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K} V$ by letting it act trivially on $V$. As $\varphi_q$ commutes with $\Gamma_{LT}$, the module $\varphi_q(\widehat{\mathcal{E}^{ur}})$ is stable under $\Gamma_{LT}$, so $\gamma\circ \operatorname{tr}\circ \gamma^{-1} = \operatorname{tr}$ for all $\gamma \in \Gamma_{LT}$. This ensures that $\psi_q$ commutes with the action of $\Gamma_{LT}$. This induces an operator \begin{equation*} \psi_{\mathbb{D}_{LT}(V)}: \mathbb{D}_{LT}(V)\rightarrow \mathbb{D}_{LT}(V) \end{equation*} satisfying \begin{equation*} \psi_{\mathbb{D}_{LT}(V)}\circ\varphi_{\mathbb{D}_{LT}(V)} = \frac{q}{\pi}id_{\mathbb{D}_{LT}(V)}.
\end{equation*} Similarly, we have $\psi_{\mathbb{D}_{LT,FT}(V)}: \mathbb{D}_{LT,FT}(V)\rightarrow \mathbb{D}_{LT,FT}(V)$ satisfying the above properties, as $\psi_q$ maps $\mathcal{O}_\mathcal{L}$ to itself. Next, we prove the following lemma. \begin{lemma}\label{phi to psi morphism} Let $A$ be an abelian group. Consider the following complexes $\mathscr{C}_1$, $\mathscr{C}_2$ and $\mathscr{C}_3$: \begin{equation*} \mathscr{C}_i: 0\rightarrow A\xrightarrow{d_i}A\rightarrow 0, \qquad \text{for}\ i=1,2 \end{equation*} \begin{equation*} \mathscr{C}_3: 0\rightarrow A \xrightarrow{d_3}\bigoplus_{i_1\in \mathfrak{y}}A\xrightarrow{d_3}\cdots\rightarrow\bigoplus_{\{i_1,\ldots,i_r\}\in \binom{\mathfrak{y}}{r}}A\xrightarrow{d_3}\cdots \xrightarrow{d_3} A\rightarrow 0, \end{equation*} where $\mathfrak{y}$ is a finite set and $\binom{\mathfrak{y}}{r}$ denotes the set of $r$-element subsets of $\mathfrak{y}$. Let $\operatorname{Tot}(\mathscr{C}_i\mathscr{C}_j)$ be the total complex of the double complex $\mathscr{C}_i\mathscr{C}_j$. Then a morphism from the complex $\mathscr{C}_1$ to $\mathscr{C}_2$, which commutes with $d_3$, induces a natural homomorphism between the cohomology groups \begin{equation*} \mathcal{H}^i(\operatorname{Tot}(\mathscr{C}_1\mathscr{C}_3))\rightarrow\mathcal{H}^i(\operatorname{Tot}(\mathscr{C}_2\mathscr{C}_3)). \end{equation*} \end{lemma} \begin{proof} The morphism from $\mathscr{C}_1$ to $\mathscr{C}_2$ induces the following commutative diagram \begin{center} \begin{tikzcd} \mathscr{C}_1 : 0\arrow{r} & A \arrow{r}{d_1} \arrow[swap]{d}{\delta_1} & A \arrow{r} \arrow[swap]{d}[swap]{\delta_2} & 0\\ \mathscr{C}_2 : 0 \arrow{r} & A \arrow{r}[swap]{d_2} & A \arrow{r} & 0.
\end{tikzcd} \end{center} This induces a morphism between the total complex $\operatorname{Tot}(\mathscr{C}_1\mathscr{C}_3)$ and $\operatorname{Tot}(\mathscr{C}_2\mathscr{C}_3)$, as follows: \begin{center} \begin{tikzpicture}[baseline= (a).base] \node[scale=.90] (a) at (0,0) { \begin{tikzcd}[row sep=large] \operatorname{Tot}(\mathscr{C}_1\mathscr{C}_3): 0\arrow{r} & A \arrow{r}{(d_1,d_3)} \arrow[swap]{d}{\delta_1} & A \oplus \smashoperator[r]{\bigoplus _{i_1\in \mathfrak{y}}}A \arrow{r} \arrow[swap]{d}{\delta_2, \smashoperator[r]{\bigoplus_{i_1\in \mathfrak{y}}}\delta_1} & \cdots\arrow{r} & \smashoperator[r]{\bigoplus_{\{i_1,\ldots,i_{d-1}\}\in \binom{\mathfrak{y}}{d-1}}} A\oplus A \arrow{r}{d_3-d_1} \arrow[swap]{d}{\smashoperator[r]{\bigoplus_{\{i_1,\ldots,i_{d-1}\}\in \binom{\mathfrak{y}}{d-1}}}\delta_2,\delta_1} & A \arrow{r} \arrow[swap]{d}[swap]{\delta_2} & 0\\ \operatorname{Tot}(\mathscr{C}_2\mathscr{C}_3): 0 \arrow{r} & A \arrow{r}[swap]{(d_2,d_3)} & A\oplus\smashoperator[r]{\bigoplus_{i_1\in \mathfrak{y}}}A \arrow{r} & \cdots\arrow{r} & \smashoperator[r]{\bigoplus_{\{i_1,\ldots,i_{d-1}\}\in \binom{\mathfrak{y}}{d-1}}}A\oplus A \arrow{r}[swap]{d_3-d_2} & A \arrow{r} & 0. \end{tikzcd} }; \end{tikzpicture} \end{center} As the morphism from $\mathscr{C}_1$ to $\mathscr{C}_2$ commutes with $d_3$, it is easy to check that each square is commutative in the above diagram, which induces a homomorphism \begin{equation*} \mathcal{H}^i(\operatorname{Tot}(\mathscr{C}_1\mathscr{C}_3))\rightarrow\mathcal{H}^i(\operatorname{Tot}(\mathscr{C}_2\mathscr{C}_3)). \end{equation*} \end{proof} Recall that $D^{sep}=\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K}V\cong \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_\mathcal{E}}\mathbb{D}_{LT}(V)$. 
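Before defining the $\psi$-version of the complex, we record the elementary identity that underlies the comparison morphism from the $\varphi$-complex to the $\psi$-complex constructed below. Writing $\psi_M$ and $\varphi_M$ for the induced operators on an \'{e}tale module $M$, it is an immediate consequence of the relation $\psi_M\circ\varphi_M=\frac{q}{\pi}\,id_M$ established above:

```latex
% Commutativity of the comparison square: the left vertical map is id
% and the right vertical map is -\psi_M, so the square commutes because
% (-\psi_M)\circ(\varphi_M-id) agrees with (\psi_M-\frac{q}{\pi}id)\circ id.
\begin{align*}
(-\psi_M)\circ(\varphi_M - id)
  &= -\,\psi_M\circ\varphi_M + \psi_M \\
  &= -\,\tfrac{q}{\pi}\,id + \psi_M
   \;=\; \psi_M - \tfrac{q}{\pi}\,id.
\end{align*}
```

This computation is used repeatedly (without further comment) in the proofs below.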
Define a complex $\Psi^\bullet(D^{sep})$ as follows: \begin{equation*} \Psi^{\bullet}(D^{sep}): 0\rightarrow D^{sep}\xrightarrow{\psi_q\otimes \psi_{\mathbb{D}_{LT}(V)}-\frac{q}{\pi}id}D^{sep}\rightarrow 0. \end{equation*} \subsection{The case of Lubin-Tate extensions} \begin{definition} For any $M\in \varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$, the co-chain complex $\Psi\Gamma_{LT}^{\bullet}(M)$ is defined as the total complex of the double complex $\Gamma_{LT}^\bullet(\Psi^\bullet(M^{\Delta}))$. \end{definition} \begin{example}{\label{Ex4}} Let $d=2$. Then the complex $\Psi\Gamma_{LT}^{\bullet}(M)$ is defined as follows: \begin{equation*} 0\rightarrow M\xrightarrow{x\mapsto A_{0,\psi_q}x}M^{\oplus 3}\xrightarrow{x\mapsto A_{1,\psi_q}x} M^{\oplus 3}\xrightarrow{x\mapsto A_{2,\psi_q}x}M\rightarrow 0, \end{equation*} where $M=M^\Delta$, and \[ A_{0,\psi_q}= \begin{bmatrix} \psi_M-\frac{q}{\pi}id \\ \gamma_1-id \\ \gamma_2-id \end{bmatrix}, A_{1,\psi_q} = \begin{bmatrix} -(\gamma_1-id) & \psi_M-\frac{q}{\pi}id & 0 \\ -(\gamma_2-id) & 0 & \psi_M-\frac{q}{\pi}id \\ 0 & -(\gamma_2-id) & \gamma_1-id \end{bmatrix}, \] \[ A_{2,\psi_q}= \begin{bmatrix} \gamma_2-id & -(\gamma_1-id)& \psi_M-\frac{q}{\pi}id \end{bmatrix}. \] \end{example} Next, we have the following proposition, which is an easy consequence of Lemma \ref{phi to psi morphism}. \begin{proposition} Let $M\in \varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$.
Then the morphism $\Phi^{\bullet}(M)\rightarrow \Psi^{\bullet}(M)$, which is given by the following \begin{center} \begin{tikzcd}[row sep=large, column sep = large] \Phi^\bullet(M):0\arrow{r} & M \arrow{r}{\varphi_M-id} \arrow[swap]{d}{id} & M \arrow{r} \arrow[swap]{d}[swap]{-\psi_M} & 0\\ \Psi^\bullet(M):0 \arrow{r} & M \arrow{r}[swap]{\psi_M-\frac{q}{\pi}id} & M \arrow{r} & 0, \end{tikzcd} \end{center} induces a morphism \begin{equation*} \Phi\Gamma_{LT}^{\bullet}(M) \rightarrow \Psi\Gamma_{LT}^{\bullet}(M). \end{equation*} \end{proposition} \begin{proof} Since $\psi_q$ commutes with the action of $\Gamma_{LT}$, the proof follows easily from Lemma \ref{phi to psi morphism}, by taking $\mathscr{C}_1 = \Phi^{\bullet}(M^\Delta), \mathscr{C}_2 = \Psi^{\bullet}(M^\Delta)$ and $\mathscr{C}_3 = \Gamma_{LT}^{\bullet}(M^\Delta)$. \end{proof} \begin{example} Let $d=2$. Then the morphism between $\Phi\Gamma_{LT}^{\bullet}(M)$ and $\Psi\Gamma_{LT}^{\bullet}(M)$ is given by the following: \begin{center} \begin{tikzcd} \Phi\Gamma_{LT}^{\bullet}(M): 0\arrow{r} & M \arrow{r}{A_{0,\varphi_q}} \arrow[swap]{d}{id} & M \oplus M\oplus M \arrow{r}{A_{1,\varphi_q}} \arrow[swap]{d}{\mathscr{F}} & M \oplus M\oplus M \arrow{r}{A_{2,\varphi_q}} \arrow[swap]{d}{\mathscr{F}^{\prime}} & M \arrow{r} \arrow[swap]{d}[swap]{-\psi_M} & 0\\ \Psi\Gamma_{LT}^{\bullet}(M): 0 \arrow{r} & M \arrow{r}[swap]{A_{0,\psi_q}} & M\oplus M \oplus M \arrow{r}[swap]{A_{1,\psi_q}} & M \oplus M\oplus M \arrow{r}[swap]{A_{2,\psi_q}} & M \arrow{r} & 0, \end{tikzcd} \end{center} where $M=M^\Delta$, and \begin{align*} \mathscr{F}(x_1,x_2,x_3) = & (-\psi_M(x_1),x_2,x_3),\\ \mathscr{F}^\prime(x_1,x_2,x_3)=& (-\psi_M(x_1), -\psi_M(x_2), x_3), \end{align*} and the maps $A_{i,\varphi_q}$ and $A_{i,\psi_q}$ are the same as defined in Example \ref{Ex1} and Example \ref{Ex4}. \end{example} \begin{theorem}\label{Main5} Let $M \in \varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$. 
Then we have a well-defined homomorphism \begin{equation*} \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(M))\rightarrow \mathcal{H}^i(\Psi\Gamma_{LT}^{\bullet}(M)) \quad \text{for} \ i\geq0. \end{equation*} Further, the homomorphism $\mathcal{H}^0(\Phi\Gamma_{LT}^{\bullet}(M))\rightarrow \mathcal{H}^0(\Psi\Gamma_{LT}^{\bullet}(M))$ is injective. \end{theorem} \begin{proof} Since $(-\psi_M)(\varphi_M-id) = (\psi_M-\frac{q}{\pi}id)$, and $\psi_M$ commutes with $\Gamma_{LT}$, we have a morphism $\Phi\Gamma_{LT}^{\bullet}(M) \rightarrow \Psi\Gamma_{LT}^{\bullet}(M)$ of co-chain complexes, which induces a well-defined homomorphism \begin{equation*} \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(M))\rightarrow \mathcal{H}^i(\Psi\Gamma_{LT}^{\bullet}(M)) \quad \text{for} \ i\geq0. \end{equation*} For the second part, let $\mathcal{K}$ be the kernel and $\mathcal{C}$ be the co-kernel of the morphism $\Phi\Gamma_{LT}^{\bullet}(M) \rightarrow \Psi\Gamma_{LT}^{\bullet}(M)$. Then the complexes $\mathcal{K}$ and $\mathcal{C}$ are given as follows: \begin{equation*} \mathcal{K}: 0\rightarrow 0\rightarrow \operatorname{Ker}\psi_M\oplus\bigoplus_{i_1\in \mathfrak{X}}0\rightarrow\cdots\rightarrow \bigoplus_{\{i_1,\ldots,i_{d-1}\}\in \binom{\mathfrak{X}}{d-1}}\operatorname{Ker}\psi_M\oplus 0\rightarrow \operatorname{Ker}\psi_M\rightarrow 0, \end{equation*} \begin{equation*} \mathcal{C}: 0\rightarrow 0\rightarrow \operatorname{coker}\psi_M\oplus\bigoplus_{i_1\in \mathfrak{X}}0\rightarrow\cdots\rightarrow \bigoplus_{\{i_1,\ldots,i_{d-1}\}\in \binom{\mathfrak{X}}{d-1}}\operatorname{coker}\psi_M\oplus 0\rightarrow \operatorname{coker}\psi_M\rightarrow 0. \end{equation*} The morphisms of the complex $\mathcal{K}$ are the restrictions of those of the complex $\Phi\Gamma_{LT}^{\bullet}(M)$, and the morphisms of the complex $\mathcal{C}$ are induced from the complex $\Psi\Gamma_{LT}^{\bullet}(M)$.
Then we have the exact sequence \begin{equation*} 0\rightarrow \mathcal{K}\rightarrow \Phi\Gamma_{LT}^{\bullet}(M) \rightarrow \Psi\Gamma_{LT}^{\bullet}(M)\rightarrow \mathcal{C}\rightarrow 0, \end{equation*} which gives us the following short exact sequences \begin{equation}\label{eq(1)} 0\rightarrow \mathcal{K}\rightarrow \Phi\Gamma_{LT}^{\bullet}(M) \rightarrow \mathbb{I}\rightarrow 0, \end{equation} \begin{equation}\label{eq(2)} 0\rightarrow \mathbb{I}\rightarrow \Psi\Gamma_{LT}^{\bullet}(M)\rightarrow \mathcal{C}\rightarrow 0, \end{equation} where $\mathbb{I}$ is the image of $\Phi\Gamma_{LT}^{\bullet}(M)\rightarrow \Psi\Gamma_{LT}^{\bullet}(M)$. Note that $\mathcal{H}^0(\mathcal{C})=0$. Then by taking the long exact cohomology sequence of (\ref{eq(2)}), we have $\mathcal{H}^0(\mathbb{I})\cong \mathcal{H}^0(\Psi\Gamma_{LT}^{\bullet}(M))$. Also, we have a long exact sequence \begin{align*} 0 \rightarrow \mathcal{H}^0(\mathcal{K})\rightarrow \mathcal{H}^0(\Phi\Gamma_{LT}^{\bullet}(M))&\rightarrow \mathcal{H}^0(\mathbb{I}) \rightarrow \mathcal{H}^1(\mathcal{K}) \rightarrow \mathcal{H}^1(\Phi\Gamma_{LT}^{\bullet}(M))\rightarrow \mathcal{H}^1(\mathbb{I})\\&\rightarrow \mathcal{H}^2(\mathcal{K})\rightarrow \mathcal{H}^2(\Phi\Gamma_{LT}^{\bullet}(M))\rightarrow \mathcal{H}^2(\mathbb{I})\rightarrow \mathcal{H}^3(\mathcal{K})\rightarrow 0 \rightarrow \cdots \end{align*} Finally, as $\mathcal{H}^0(\mathbb{I})\cong \mathcal{H}^0(\Psi\Gamma_{LT}^{\bullet}(M))$ and $\mathcal{H}^0(\mathcal{K})=0$, the homomorphism $$\mathcal{H}^0(\Phi\Gamma_{LT}^{\bullet}(M))\rightarrow \mathcal{H}^0(\Psi\Gamma_{LT}^{\bullet}(M))$$ is injective. 
\end{proof} \begin{remark}\label{Herr theorem} In the classical case when $V$ is a $\mathbb{Z}_p$-representation of $G_K$ and $K_\infty=K_{cyc}$, the action of $\gamma-id$ is bijective on $\operatorname{Ker}\psi=\mathbb{D}(V)^{\psi=0}$, where $\gamma$ is a topological generator of $\Gamma$ \cite[Theorem 3.8]{LH1}, and the Herr complexes for $\varphi$ and $\psi$ are quasi-isomorphic \cite[Proposition 4.1]{LH1}. \end{remark} \iffalse{ \begin{remark}\label{special case} If the action of $\tau_1:=\gamma_1-id$ is bijective on $\operatorname{Ker}\psi_M$, then the homomorphism $\mathcal{H}^0(\Phi\Gamma_{LT}^{\bullet}(M))\rightarrow \mathcal{H}^0(\Psi\Gamma_{LT}^{\bullet}(M))$ is an isomorphism. Moreover, the homomorphism $\mathcal{H}^1(\Phi\Gamma_{LT}^{\bullet}(M))\rightarrow \mathcal{H}^1(\Psi\Gamma_{LT}^{\bullet}(M))$ is an injection. \end{remark} \begin{proof} Consider the co-chain complexes \begin{equation*} \mathscr{C}_{\gamma_1}(M): 0\rightarrow M\xrightarrow{-(\gamma_1-id)}M\rightarrow 0, \end{equation*} and \begin{equation*} \Gamma_{LT\backslash \gamma_1}^{\bullet}(M): 0\rightarrow M \rightarrow\bigoplus_{i_1\in \mathfrak{X^\prime}}M\rightarrow\cdots\rightarrow\bigoplus_{\{i_1,\ldots,i_r\}\in \binom{\mathfrak{X^\prime}}{r}}M\rightarrow\cdots \rightarrow M\rightarrow 0, \end{equation*} where $\mathfrak{X^\prime}= \{\gamma_2,\ldots,\gamma_d\}$, and for all $0\leq r\leq |\mathfrak{X^\prime}|-1$, the map $d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}}:M\rightarrow M$ from the component in the $r$-th term corresponding to $\{i_1,\ldots,i_r\}$ to the component corresponding to the $(r+1)$-tuple $\{j_1,\ldots,j_{r+1}\}$ is given by \begin{equation*} d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}} = \left\{ \begin{array}{ll} 0 & \mbox{if } \{i_1,\ldots,i_r\}\nsubseteq\{j_1,\ldots,j_{r+1}\}, \\ (-1)^{s_j+1}(\gamma_j-id) & \mbox{if } \{j_1,\ldots,j_{r+1}\}= \{i_1,\ldots,i_r\}\cup\{j\}, \end{array} \right.
\end{equation*} and $s_j$ is the number of elements in the set $\{i_1,\ldots,i_r\}$ smaller than $j$. Then the complex $\mathcal{K}$ can be written as the total complex of the following double complex \begin{center} \begin{tikzpicture}[baseline= (a).base] \node[scale=0.80] (a) at (0,0){ \begin{tikzcd}[row sep=large] & 0 & 0 & & 0 & & 0 \\ 0\arrow{r} & \operatorname{Ker}\psi_M\arrow{r} \arrow[swap]{u} & \underset{i_1\in \mathfrak{X^\prime}}\bigoplus\operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u} & \cdots \arrow{r} & \underset{\{i_1,\ldots,i_r\}\in \binom{\mathfrak{X^\prime}}{r}}\bigoplus\operatorname{Ker}\psi_M\arrow{r}\arrow[swap]{u} & \cdots \arrow{r}& \operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u} & 0\\ 0\arrow{r} & \operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u}[swap]{-(\gamma_1-id)} & \underset{i_1\in \mathfrak{X^\prime}}\bigoplus\operatorname{Ker}\psi_M \arrow{r} \arrow[swap]{u}{-(\gamma_1-id)} & \cdots \arrow{r} & \underset{\{i_1,\ldots,i_r\}\in \binom{\mathfrak{X^\prime}}{r}}\bigoplus\operatorname{Ker}\psi_M\arrow{r}\arrow[swap]{u}{-(\gamma_1-id)} & \cdots \arrow{r}& \operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u}{-(\gamma_1-id)} & 0\\ & 0 \arrow[swap]{u} & 0 \arrow[swap]{u} & & 0\arrow[swap]{u} & & 0 \arrow[swap]{u} \end{tikzcd} }; \end{tikzpicture} \end{center} In other words, $\mathcal{K}$ is the total complex of $\Gamma_{LT\backslash \gamma_1}^{\bullet}(\mathscr{C}_{\gamma_1}(\operatorname{Ker}\psi_M))$, which is bounded double complex with exact columns as $0\rightarrow \operatorname{Ker}\psi_M\xrightarrow{-(\gamma_1-id)}\operatorname{Ker}\psi_M\rightarrow 0$ is exact. Therefore $\mathcal{K}$ is acyclic. Then by taking the long exact cohomology sequence of (\ref{eq(1)}), we have $\mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(M))\cong \mathcal{H}^i(\mathbb{I})$. Moreover, $\mathcal{H}^0(\mathcal{C})=0$. Now the result follows from the long exact cohomology sequence for the short exact sequence (\ref{eq(2)}). 
\end{proof} \begin{remark}\label{divisible modules} Let $M$ be a $\pi$-divisible module in $\varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$ such that the action of $\tau_1:=\gamma_1-id$ is bijective on $\operatorname{Ker}\psi_M$. Then we have an isomorphism \begin{equation*} \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(M))\xrightarrow{\sim} \mathcal{H}^i(\Psi\Gamma_{LT}^{\bullet}(M))\quad \text{for}\ i\geq 0. \end{equation*} \end{remark} \begin{proof} Since $M$ is $\pi$-divisible and $\frac{q}{\pi}= \pi^{r-1}\mod\mathcal{O}_K^\times$, the map $\frac{q}{\pi}:M\rightarrow M$ is surjective. Also, $\psi_M\circ \varphi_M= \frac{q}{\pi}id_M$. Then $\psi_M: M \rightarrow M$ is surjective, and the co-kernel complex $\mathcal{C}$ consists of zeros, i.e., $\mathcal{C}$ is a zero complex. Since the action of $\tau_1:=\gamma_1-id$ is bijective on $\operatorname{Ker}\psi_M$, it follows from Remark \ref{special case} that the complex $\mathcal{K}$ is acyclic. Now by taking the cohomology of the following short exact sequence \begin{equation*} 0\rightarrow \mathcal{K} \rightarrow \Phi\Gamma_{LT}^{\bullet}(M) \rightarrow \Psi\Gamma_{LT}^{\bullet}(M) \rightarrow 0, \end{equation*} we get the desired result. \end{proof} }\fi \subsection{The case of False-Tate type extensions} \begin{definition} Let $M\in \varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT,FT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{L}}}$. Then \emph{the False-Tate type Herr complex $\Psi\Gamma_{LT,FT}^{\bullet}(M)$ corresponding to $\psi_q$} is defined as the total complex of the double complex $\Gamma_{LT,FT}^\bullet(\Psi^\bullet(M))$. \end{definition} Let $\mathscr{C}_1 = \Phi^{\bullet}(M), \mathscr{C}_2 = \Psi^{\bullet}(M)$ and $\mathscr{C}_3 = \Gamma_{LT,FT}^{\bullet}(M)$. Then by using Lemma \ref{phi to psi morphism}, we have a morphism \begin{equation*} \Phi\Gamma_{LT,FT}^\bullet(M)\rightarrow \Psi\Gamma_{LT,FT}^\bullet(M). 
\end{equation*} Next, we prove a result in the case of False-Tate type extensions, which is analogous to Theorem \ref{Main5}. Recall that $\Gamma_{LT,FT}$ is topologically generated by $\{\gamma_1,\ldots,\gamma_d,\tilde{\gamma}\}$, and $a_i = \chi_{LT}(\gamma_i)$. \begin{theorem} \label{Theorem False Tate} Let $M\in \varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT,FT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{L}}}$. Then the morphism $$\Phi\Gamma_{LT,FT}^{\bullet}(M)\rightarrow \Psi\Gamma_{LT,FT}^{\bullet}(M)$$ induces a well-defined homomorphism $\mathcal{H}^i(\Phi\Gamma_{LT,FT}^{\bullet}(M))\rightarrow\mathcal{H}^i(\Psi\Gamma_{LT,FT}^{\bullet}(M))$ for $i\geq 0$. Moreover, we have $\mathcal{H}^0(\Phi\Gamma_{LT,FT}^{\bullet}(M))\hookrightarrow\mathcal{H}^0(\Psi\Gamma_{LT,FT}^{\bullet}(M))$. \end{theorem} \begin{proof} As before, since $(-\psi_M)(\varphi_M-id) = (\psi_M-\frac{q}{\pi}id)$ and $\psi_M$ commutes with the action of $\Gamma_{LT,FT}$, we have a morphism $\Phi\Gamma_{LT,FT}^{\bullet}(M) \rightarrow \Psi\Gamma_{LT,FT}^{\bullet}(M)$ of co-chain complexes, which induces a well-defined homomorphism \begin{equation*} \mathcal{H}^i(\Phi\Gamma_{LT,FT}^{\bullet}(M))\rightarrow \mathcal{H}^i(\Psi\Gamma_{LT,FT}^{\bullet}(M))\quad \text{for} \ i\geq0. \end{equation*} Let $\mathcal{K}$ be the kernel and $\mathcal{C}$ the co-kernel of the morphism $\Phi\Gamma_{LT,FT}^{\bullet}(M) \rightarrow \Psi\Gamma_{LT,FT}^{\bullet}(M)$. Then we have an exact sequence \begin{equation*} 0\rightarrow \mathcal{K} \rightarrow \Phi\Gamma_{LT,FT}^{\bullet}(M) \rightarrow \Psi\Gamma_{LT,FT}^{\bullet}(M) \rightarrow \mathcal{C}\rightarrow 0. \end{equation*} The result then follows by using the same method as in Theorem \ref{Main5}. \end{proof} \iffalse \begin{proposition} Let $V$ be a $\mathbb{Z}_p$-representation of $G_K$. Assume that $K$ contains a primitive $p$-th root of unity $\zeta_p$ and $K_\infty=K_{cyc}$.
Let $\mathbb{D}$ denote the functor $\mathbb{D}_{LT}$, and $\mathbb D_{FT}$ denote the functor $\mathbb D_{LT,FT}$. Then the complexes $\Phi\Gamma_{FT}^\bullet(\mathbb{D}_{FT}(V))$ and $\Psi\Gamma_{FT}^\bullet(\mathbb{D}_{FT}(V))$ are quasi-isomorphic. \end{proposition} \begin{proof} Define $L$ as in Remark \ref{choice}(1). Since $K$ contains $\zeta_p$, $\Gamma_{FT}:=\operatorname{Gal}(L/K)\cong \Gamma^* \ltimes\mathbb{Z}_p$ and $\Gamma_{FT}$ is topologically generated by $\langle \gamma_1,\tilde{\gamma}\rangle$. Note that the complex $\Phi\Gamma_{FT}^\bullet(M)$ is the total complex of the double complex $\Gamma_{FT}^\bullet(\Phi^\bullet(M))$, where $M$ is an \'{e}tale $(\varphi,\Gamma_{FT})$-module over $\mathcal{O}_{\mathcal{L}}$, and the complexes $\Phi^\bullet(M)$ and $\Gamma^\bullet_{FT}(M)$ defined as follows: \begin{equation*} \Phi^\bullet(M): 0\rightarrow M\xrightarrow{\varphi-id}M \rightarrow0, \end{equation*} \begin{equation*} \Gamma_{FT}^\bullet(M): 0\rightarrow M\xrightarrow{x\mapsto A_0x}M\oplus M\xrightarrow{x\mapsto A_1x}M \rightarrow0 \end{equation*} with \[ A_0= \begin{bmatrix} \gamma_1-id \\ \tilde{\gamma}-id \end{bmatrix}, A_1= \begin{bmatrix} \tilde{\gamma}^{a_1}-id & -\left(\gamma_1-\dfrac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id}\right) \end{bmatrix}. \] Similarly, the complex $\Psi\Gamma_{FT}^\bullet$ is the total complex of the double complex $\Gamma_{FT}^\bullet(\Psi^\bullet(M))$, where $\Psi^\bullet(M): 0\rightarrow M\xrightarrow{\psi_M-id}M \rightarrow0$. 
Then the morphism $\Phi\Gamma_{FT}^\bullet(M)\rightarrow \Psi\Gamma_{FT}^\bullet(M)$ is given as the following \begin{center} \begin{tikzcd} \Phi\Gamma_{FT}^{\bullet}(M): 0\arrow{r} & M \arrow{r}{A_{0,\varphi}} \arrow[swap]{d}{id} & M^{\oplus 3} \arrow{r}{A_{1,\varphi}} \arrow[swap]{d}{\mathscr{F}} & M^{\oplus 3} \arrow{r}{A_{2,\varphi}} \arrow[swap]{d}{\mathscr{F}^{\prime}} & M \arrow{r} \arrow[swap]{d}[swap]{-\psi} & 0\\ \Psi\Gamma_{FT}^{\bullet}(M): 0 \arrow{r} & M \arrow{r}[swap]{A_{0,\psi}} & M^{\oplus 3} \arrow{r}[swap]{A_{1,\psi}} & M^{\oplus 3}\arrow{r}[swap]{A_{2,\psi}} & M \arrow{r} & 0, \end{tikzcd} \end{center} where \[ A_{0,\varphi}= \begin{bmatrix} \varphi_M-id \\ \gamma_1-id\\ \tilde{\gamma}-id \end{bmatrix}, A_{1,\varphi}= \begin{bmatrix} -(\gamma_1-id) & \varphi_M-id & 0 \\ -(\tilde{\gamma}-id) & 0 & \varphi_M-id\\ 0 & \tilde{\gamma}^{a_1}-id & -\left(\gamma_1-\frac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id}\right) \end{bmatrix}, \] \[ A_{2,\varphi}= \begin{bmatrix} -(\tilde{\gamma}^{a_1}-id) & \gamma_1-\frac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id} & \varphi_M-id \\ \end{bmatrix}, \] \[ A_{0,\psi}= \begin{bmatrix} \psi_M-id \\ \gamma_1-id\\ \tilde{\gamma}-id \end{bmatrix}, A_{1,\psi}= \begin{bmatrix} -(\gamma_1-id) & \psi_M-id & 0 \\ -(\tilde{\gamma}-id) & 0 & \psi_M-id\\ 0 & \tilde{\gamma}^{a_1}-id & -\left(\gamma_1-\frac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id}\right) \end{bmatrix}, \] \[ A_{2,\psi}= \begin{bmatrix} -(\tilde{\gamma}^{a_1}-id) & \gamma_1-\frac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id} & \psi_M-id \\ \end{bmatrix}, \] \begin{align*} \mathscr{F}(x_1,x_2,x_3)=& (-\psi_M(x_1),x_2,x_3),\\ \mathscr{F}^\prime(x_1,x_2,x_3)=& (-\psi_M(x_1), -\psi_M(x_2), x_3). \end{align*} Since $\psi_M\circ\varphi_M=id$, each of the square diagram commutes. Let $\mathcal{K}$ be the kernel complex of the morphism $\Phi\Gamma_{FT}^\bullet(M)\rightarrow \Psi\Gamma_{FT}^\bullet(M)$. 
Then as $\psi_M$ is surjective, we have a short exact sequence: \begin{equation}\label{long exact} 0 \rightarrow \mathcal{K}\rightarrow \Phi\Gamma_{FT}^\bullet(M)\rightarrow \Psi\Gamma_{FT}^\bullet(M)\rightarrow 0. \end{equation} This induces a homomorphism \begin{equation*} \mathcal{H}^i(\Phi\Gamma_{FT}^\bullet(M))\rightarrow \mathcal{H}^i(\Psi\Gamma_{FT}^\bullet(M)). \end{equation*} Note that the complex $\mathcal{K}$ can be written as the total complex of the following double complex: \begin{center} \begin{tikzcd}[row sep=large, column sep = large] & 0 & 0 \\ 0\arrow{r} & \operatorname{Ker}\psi_M \arrow{r}{-(\tilde{\gamma}^{a_1}-id)}\arrow[swap]{u} & \operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u} & 0\\ 0\arrow{r} & \operatorname{Ker}\psi_M \arrow{r}[swap]{-(\tilde{\gamma}-id)}\arrow[swap]{u}[swap]{-(\gamma_1-id)} & \operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u}{-\left(\gamma_1-\frac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id}\right)} & 0\\ & 0 \arrow[swap]{u} & 0\arrow[swap]{u} \end{tikzcd} \end{center} By Remark \ref{Herr theorem}, we know that $\gamma_1-id$ acts bijectively on $\operatorname{Ker}\psi=\mathbb{D}(V)^{\psi=0}$ over the category of \'{e}tale $(\varphi,\Gamma)$-modules over $\mathcal{O}_{\mathcal{E}}$. Further, by Theorem \ref{False-Tate equivalence} and \cite[Corollary 1.4]{Flo}, we have $M:=\mathbb{D}_{FT}(V)=\mathbb{D}(V)\otimes_{\mathcal{O}_{\mathcal{E}}}\mathcal{O}_{\mathcal{L}}$. As the map $\psi_M:=\psi\otimes id: \mathbb{D}_{FT}(V)\rightarrow \mathbb{D}_{FT}(V)$, for the map $\psi:\mathbb{D}(V)\rightarrow\mathbb{D}(V)$, the $\operatorname{Ker}\psi_M=\operatorname{Ker}\psi\otimes_{\mathcal{O}_{\mathcal{E}}}\mathcal{O}_{\mathcal{L}}$. Under this scalar extension, the map $\gamma_1-id$ on $\operatorname{Ker}\psi$ induces the map $(\gamma_1-id)$ on $\operatorname{Ker}\psi_M$. 
Since the map $\gamma_1-id$ acts bijectively on $\operatorname{Ker}\psi$, the map $(\gamma_1-id)$ acts bijectively on $\operatorname{Ker}\psi_M$ over the category of \'{e}tale $(\varphi,\Gamma_{FT})$-modules over $\mathcal{O}_{\mathcal{L}}$. This action is through the diagonal action of $G_K$ on $\mathbb{D}(V)\otimes\mathcal{O}_{\mathcal L}$. Also, $\frac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id}$ is a unit in $\mathbb{Z}_p[[\Gamma_{FT}]]$, and on $\mathbb{D}_{FT}(V)=\mathbb{D}(V)\otimes\mathcal O_\mathcal{L}$, it acts through the trivial action of $\tilde\gamma$ on $\mathbb{D}(V)$. As $\frac{\tilde{\gamma}^{a_1}-1}{\tilde{\gamma}-1}=\sum_{n\geq1}\binom{a_1}{n}(\tilde\gamma-1)^{n-1}=a_1+\sum_{n\geq2}\binom{a_1}{n}(\tilde\gamma-1)^{n-1}$, it acts by multiplication by the unit $a_1$ on $\mathbb{D}(V)$. Therefore, $\gamma_1-\frac{(\tilde{\gamma}^{a_1}-id)}{(\tilde{\gamma}-id)}$ acts by $\gamma_1-a_1=a_1(a_1^{-1}\gamma_1-1)$ on $\mathbb{D}(V)$ and hence on $\mathbb{D}(V)^{\psi=0}$. Since $a_1\in\mathbb{Z}_p^\times$, there is a character $\tau$ of $\mathbb{Z}_p$ such that $\tau(\gamma_1)=a_1^{-1}$. Then the twisted module $\mathbb{D}(V(\tau))$ carries the twisted action of $\gamma_1$. Again, by Remark \ref{Herr theorem}, $\gamma_1-id$ acts bijectively on $\mathbb{D}(V(\tau))^{\psi=0}$. Hence the columns of the above diagram are exact. Then by \cite[Lemma 2.7.3]{Wei}, $\mathcal{K}$ is acyclic, and by taking the long exact cohomology sequence of (\ref{long exact}), it follows that \begin{equation*} \mathcal{H}^i(\Phi\Gamma_{FT}^\bullet(M))\cong \mathcal{H}^i(\Psi\Gamma_{FT}^\bullet(M)) \quad \text{for all}\ i\geq 0. \end{equation*} \end{proof} \iffalse{ \begin{remark} Let $\tau_1:=\gamma_1-id$ act bijectively on $\operatorname{Ker}\psi_M$, i.e., the complex $0\rightarrow \operatorname{Ker}\psi_M\xrightarrow{\gamma_1-id}\operatorname{Ker}\psi_M\rightarrow0$ is exact.
Then the homomorphism $$\mathcal{H}^0(\Phi\Gamma_{LT,FT}^{\bullet}(M))\rightarrow \mathcal{H}^0(\Psi\Gamma_{LT,FT}^{\bullet}(M))$$ is an isomorphism. Moreover, the map $\mathcal{H}^1(\Phi\Gamma_{LT,FT}^{\bullet}(M))\rightarrow \mathcal{H}^1(\Psi\Gamma_{LT,FT}^{\bullet}(M))$ is injective. \end{remark} \begin{proof} Let $\tilde{\mathfrak{X}}^{\prime} = \{\gamma_2,\ldots,\gamma_d,\tilde{\gamma}\}$. Then consider the complex \begin{equation*} \mathscr{C}(M): 0\rightarrow M \rightarrow\underset{{i_1\in \tilde{\mathfrak{X}}^{\prime}}}\bigoplus M\rightarrow\cdots\rightarrow\underset{{\{i_1,\ldots,i_r\}\in \binom{\tilde{\mathfrak{X}}^{\prime}}{r}}}\bigoplus M\rightarrow \cdots \rightarrow M\rightarrow 0, \end{equation*} where $\binom{\tilde{\mathfrak{X}}^{\prime}}{r}$ denotes the set of $r$-element subsets of $\tilde{\mathfrak{X}}^{\prime}$, and for all $0 \leq r\leq \lvert \tilde{\mathfrak{X}}^{\prime}\rvert-1$, the map $d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}}:M\rightarrow M$ from the component in the $r$-th term corresponding to $\{i_1,\ldots,i_r\}$ to the component corresponding to the $(r+1)$-tuple $\{j_1,\ldots,j_{r+1}\}$ is given by \begin{equation*} d_{i_1,\ldots,i_r}^{j_1,\ldots, j_{r+1}} = \left\{ \begin{array}{ll} 0 & \mbox{if } \{i_1,\ldots,i_r\}\nsubseteq\{j_1,\ldots,j_{r+1}\}, \\ (-1)^{s_j+1}(\gamma_j-id) & \mbox{if } \{j_1,\ldots,j_{r+1}\}= \{i_1,\ldots,i_r\}\cup\{j\} \\ & \mbox{ and } \{i_1,\ldots,i_{r}\}\ \mbox{does not contain}\ \tilde{\gamma}, \\ (-1)^{s_j}\left(\gamma_j-\frac{\tilde{\gamma}^{\chi_{LT}(j)\chi_{LT}(i_1)\cdots\chi_{LT}(i_r)}-id}{\tilde{\gamma}^{\chi_{LT}(i_1)\cdots \chi_{LT}(i_r)}-id}\right) & \mbox{if } \{j_1,\ldots,j_{r+1}\}= \{i_1,\ldots,i_r\}\cup\{j\} \\ & \mbox{ and } \{i_1,\ldots,i_{r}\}\ \mbox{contains}\ \tilde{\gamma}, \\ -\left(\tilde{\gamma}^{\chi_{LT}(i_1)\cdots\chi_{LT}(i_r)}-id\right) & \mbox{if } \{j_1,\ldots,j_{r+1}\}= \{i_1,\ldots,i_r\}\cup\{\tilde{\gamma}\}, \end{array} \right.
\end{equation*} and $s_j$ is the number of elements in the set $\{i_1,\ldots,i_r\}$ smaller than $j$. Let $\mathscr{C}_{a_1}(M)$ denote the complex $\mathscr{C}(M)$ with $\tilde{\gamma}$ replaced by $\tilde{\gamma}^{a_1}$. Then the kernel complex $\mathcal{K}$ can be written as the total complex of the following bounded double complex \begin{center} \begin{tikzpicture}[baseline= (a).base] \node[scale=0.80] (a) at (0,0){ \begin{tikzcd}[row sep=large] & 0 & 0 & & 0 & & 0 \\ \mathscr{C}_{a_1}(\operatorname{Ker}\psi_M): 0\arrow{r} & \operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u} & \underset{i_1\in \tilde{\mathfrak{X}}^\prime}\bigoplus\operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u} & \cdots \arrow{r} & \underset{\{i_1,\ldots,i_r\}\in \binom{\tilde{\mathfrak{X}}^\prime}{r}}\bigoplus\operatorname{Ker}\psi_M\arrow{r}\arrow[swap]{u} & \cdots \arrow{r}& \operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u} & 0\\ \mathscr{C}(\operatorname{Ker}\psi_M):0\arrow{r} & \operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u}[swap]{-(\gamma_1-id)} & \underset{i_1\in \tilde{\mathfrak{X}}^\prime}\bigoplus\operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u} & \cdots \arrow{r} & \underset{\{i_1,\ldots,i_r\}\in \binom{\tilde{\mathfrak{X}}^\prime}{r}}\bigoplus\operatorname{Ker}\psi_M\arrow{r}\arrow[swap]{u} & \cdots \arrow{r}& \operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u} & 0\\ & 0 \arrow[swap]{u} & 0 \arrow[swap]{u} & & 0\arrow[swap]{u} & & 0 \arrow[swap]{u} \end{tikzcd} }; \end{tikzpicture} \end{center} where the vertical maps $d_{b_1,\ldots,b_r}^{c_1,\ldots,c_r}: \operatorname{Ker}\psi_M\rightarrow \operatorname{Ker}\psi_M$ from the component in the $r$-th term corresponding to $\{b_1,\ldots,b_r\}$ to the component corresponding to $r$-th component $\{c_1,\ldots,c_r\}$ is given by the following \begin{equation*} d_{b_1,\ldots,b_r}^{c_1,\ldots,c_r}= \left\{ \begin{array}{ll} -(\gamma_1-id) & \mbox{if } \{b_1,\ldots,b_r\} \mbox{ doesn't contain any term}\\& \mbox{ of the form } 
(\tilde{\gamma}-id), \\ -\left(\dfrac{(\tilde{\gamma}^{a_1\chi_{LT}(b_1)\cdots\chi_{LT}(b_r)}-id)(\gamma_1-id)}{\tilde{\gamma}^{\chi_{LT}(b_1)\cdots\chi_{LT}(b_r)}-id}\right) & \mbox{if } \{b_1,\ldots,b_{r}\} \mbox{ contains a term of the form } \\ & (\tilde{\gamma}^{\chi_{LT}(b_1)\cdots\chi_{LT}(b_r)}-id). \end{array} \right. \end{equation*} It is easy to see that each square is commutative in the above double complex. Since $\dfrac{\tilde{\gamma}^{a_1\chi_{LT}(b_1)\cdots\chi_{LT}(b_r)}-id}{\tilde{\gamma}^{\chi_{LT}(b_1)\cdots\chi_{LT}(b_r)}-id}$ is a unit in $\mathcal{O}_K[[\Gamma_{LT,FT}^*]]$, thus $\mathcal{K}$ is exact at every point. Now the result follows by using the same technique as in Remark \ref{special case}. \end{proof} Next, we give an illustration of the above remark by the following example. \begin{example} Let $d=2$. Then $\Gamma_{LT,FT}^*=\langle\gamma_1,\gamma_2,\tilde{\gamma}\rangle$, and the morphism from $\Phi\Gamma_{LT,FT}^{\bullet}(M)$ to $\Psi\Gamma_{LT,FT}^{\bullet}(M)$ is given by as following: \begin{center} \begin{tikzcd} \Phi\Gamma_{LT,FT}^{\bullet}(M): 0\arrow{r} & M \arrow{r}{A_{0,\varphi_q}} \arrow[swap]{d}{id} & M^{\oplus 4} \arrow{r}{A_{1,\varphi_q}} \arrow[swap]{d}{\mathscr{F}} & M^{\oplus 6} \arrow{r}{A_{2,\varphi_q}} \arrow[swap]{d}{\mathscr{F}^{\prime}} & M^{\oplus 4} \arrow{r}{A_{3,\varphi_q}} \arrow[swap]{d}{\mathscr{F}^{\prime\prime}} & M \arrow{r} \arrow[swap]{d}[swap]{-\psi_M} & 0\\ \Psi\Gamma_{LT,FT}^{\bullet}(M): 0 \arrow{r} & M \arrow{r}[swap]{A_{0,\psi_q}} & M^{\oplus 4} \arrow{r}[swap]{A_{1,\psi_q}} & M^{\oplus 6}\arrow{r}[swap]{A_{2,\psi_q}} & M^{\oplus 4}\arrow{r}[swap]{A_{3,\psi_q}} & M \arrow{r} & 0, \end{tikzcd} \end{center} where $M=M^\Delta$, and \begin{align*} \mathscr{F}(x_1,x_2,x_3,x_4)=& (-\psi_M(x_1),x_2,x_3,x_4), \\ \mathscr{F}^{\prime}(x_1,x_2,x_3,x_4,x_5,x_6) =& (-\psi_M(x_1),-\psi_M(x_2),-\psi_M(x_3),x_4,x_5,x_6),\\ \mathscr{F}^{\prime\prime}(x_1,x_2,x_3,x_4) =& 
(-\psi_M(x_1),-\psi_M(x_2),-\psi_M(x_3),x_4), \end{align*} and the maps $A_{i,\varphi_q}$ are the same as defined in Example \ref{Ex3}. The maps $A_{i,\psi_q}$ are deduced from $A_{i,\varphi_q}$ by just replacing $(\varphi_M-id)$ with $(\psi_M-\frac{q}{\pi}id)$. Since $\psi_q$ commutes with the action of $\Gamma_{LT,FT}$, it is easy to see that each square diagram is commutative. Thus we have a morphism of co-chain complexes, which induces a well-defined homomorphism \begin{equation*} \mathcal{H}^i(\Phi\Gamma_{LT,FT}^{\bullet}(M))\rightarrow \mathcal{H}^i(\Psi\Gamma_{LT,FT}^{\bullet}(M)). \end{equation*} The kernel $\mathcal{K}$ and the co-kernel $\mathcal{C}$ of the morphism $\Phi\Gamma_{LT,FT}^{\bullet}(M)\rightarrow \Psi\Gamma_{LT,FT}^{\bullet}(M)$ are given by the following complexes: \begin{equation*} \mathcal{K}: 0\rightarrow 0 \rightarrow \operatorname{Ker}\psi_M \rightarrow \operatorname{Ker}\psi_M\oplus \operatorname{Ker}\psi_M\oplus\operatorname{Ker}\psi_M \rightarrow \operatorname{Ker}\psi_M\oplus \operatorname{Ker}\psi_M\oplus\operatorname{Ker}\psi_M \rightarrow \operatorname{Ker}\psi_M\rightarrow 0, \end{equation*} \begin{equation*} \mathcal{C}: 0\rightarrow 0 \rightarrow \operatorname{coker}\psi_M \rightarrow \operatorname{coker}\psi_M\oplus \operatorname{coker}\psi_M\oplus\operatorname{coker}\psi_M \rightarrow \operatorname{coker}\psi_M\oplus \operatorname{coker}\psi_M\oplus\operatorname{coker}\psi_M \rightarrow \operatorname{coker}\psi_M\rightarrow 0. \end{equation*} $\mathcal{K}$ is a sub-complex of $\Phi\Gamma_{LT,FT}^\bullet(M)$ and the morphisms are induced from $\Phi\Gamma_{LT,FT}^\bullet(M)$ by restriction and $\mathcal{C}$ is a quotient of $\Psi\Gamma_{LT,FT}^\bullet(M)$ and morphisms are induced from $\Psi\Gamma_{LT,FT}^\bullet(M)$. 
Note that $\mathcal{K}$ can be written as the total complex of the following double complex: \begin{center} \begin{tikzcd}[row sep=large, column sep = large] & 0 & 0 & 0 \\ 0\arrow{r} & \operatorname{Ker}\psi_M \arrow{r}{x \mapsto\partial^\prime_0x}\arrow[swap]{u} & \operatorname{Ker}\psi_M\oplus \operatorname{Ker}\psi_M \arrow{r}{x \mapsto \partial^\prime_1x}\arrow[swap]{u} & \operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u} & 0\\ 0\arrow{r} & \operatorname{Ker}\psi_M \arrow{r}[swap]{x \mapsto \partial_0x}\arrow[swap]{u}[swap]{-(\gamma_1-id)} & \operatorname{Ker}\psi_M\oplus\operatorname{Ker}\psi_M \arrow{r}[swap]{x \mapsto \partial_1x}\arrow[swap]{u}{\mathscr{H}} & \operatorname{Ker}\psi_M \arrow{r}\arrow[swap]{u}{\mathscr{H}^\prime} & 0\\ & 0 \arrow[swap]{u} & 0 \arrow[swap]{u} & 0\arrow[swap]{u} \end{tikzcd} \end{center} where \[ \partial^\prime_0= \begin{bmatrix} -(\gamma_2-id) \\ -(\tilde{\gamma}^{a_1}-id) \end{bmatrix}, \partial^\prime_1= \begin{bmatrix} -(\tilde{\gamma}^{a_1a_2}-id) & \gamma_2-\dfrac{\tilde{\gamma}^{a_1a_2}-id}{\tilde{\gamma}^{a_1}-id} \end{bmatrix}, \] \[ \partial_0= \begin{bmatrix} -(\gamma_2-id) \\ -(\tilde{\gamma}-id) \end{bmatrix}, \partial_1= \begin{bmatrix} -(\tilde{\gamma}^{a_2}-id) & \gamma_2-\dfrac{\tilde{\gamma}^{a_2}-id}{\tilde{\gamma}-id} \end{bmatrix}, \] and \begin{align*} \mathscr{H}(x_1,x_2)= & \left(-(\gamma_1-id)x_1, -(\dfrac{(\tilde{\gamma}^{a_1}-id)(\gamma_1-id)}{\tilde{\gamma}-id})x_2\right), \\ \mathscr{H}^{\prime}(x_1) =& -\left(\dfrac{(\tilde{\gamma}^{a_1a_2}-id)(\gamma_1-id)}{\tilde{\gamma}^{a_2}-id}\right)x_1. \end{align*} Since $\dfrac{\tilde{\gamma}^{a_1}-id}{\tilde{\gamma}-id}$ and $\dfrac{\tilde{\gamma}^{a_1a_2}-id}{\tilde{\gamma}^{a_2}-id}$ are units in $\mathcal{O}_K[[\Gamma_{LT,FT}^*]]$, the columns of the above double complex are exact. Therefore $\mathcal{K}$ is acyclic. 
Since we have the exact sequence \begin{equation*} 0\rightarrow \mathcal{K} \rightarrow \Phi\Gamma_{LT,FT}^{\bullet}(M) \rightarrow \Psi\Gamma_{LT,FT}^{\bullet}(M) \rightarrow \mathcal{C}\rightarrow 0, \end{equation*} and $\mathcal{H}^i(\mathcal{K}) = 0$ for all $i\geq 0$. Also, $\mathcal{H}^0(\mathcal{C})=0$. Then it is easy to see \begin{enumerate} \item $\mathcal{H}^0(\Phi\Gamma_{LT,FT}^{\bullet}(M))\cong \mathcal{H}^0(\Psi\Gamma_{LT,FT}^{\bullet}(M))$, \item $\mathcal{H}^1(\Phi\Gamma_{LT,FT}^{\bullet}(M))\hookrightarrow \mathcal{H}^1(\Psi\Gamma_{LT,FT}^{\bullet}(M))$. \end{enumerate} \end{example} \begin{remark} Let $M$ be a $\pi$-divisible module in $\varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT,FT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{L}}}$ such that $\tau_1=\gamma_1-id$ acts bijectively on $\operatorname{Ker}\psi_M$. Then we have $$\mathcal{H}^i(\Phi\Gamma_{LT,FT}^{\bullet}(M))\xrightarrow{\sim} \mathcal{H}^i(\Psi\Gamma_{LT,FT}^{\bullet}(M))\quad \text{for}\ i\geq0.$$ \end{remark} \begin{proof} The proof is similar to Remark \ref{divisible modules}. \end{proof} }\fi \section{Iwasawa Cohomology over Lubin-Tate Extensions}\label{Iwasawa} In this section, we briefly recall the results of Schneider and Venjakob on their computation of the Iwasawa cohomology. For any $M\in \varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$, consider the complex \begin{equation*} \underline{\Psi}^{\bullet}(M): 0\rightarrow M\xrightarrow{\psi_M-id}M\rightarrow 0. \end{equation*} If $M\in \varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$, then $\pi^nM =0$ for some $n\geq 1$. Define \begin{equation*} M^{\vee}:= \text{Hom}_{\mathcal{O}_K}(M,K/\mathcal{O}_K), \end{equation*} which can be identified with $\text{Hom}_{\mathcal{O}_{\mathcal{E}}}(M, \mathcal{O}_{\mathcal{E}}/\pi^n\mathcal{O}_{\mathcal{E}}(\chi_{LT}))$. For more details see (24) in \cite{SV}. 
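Since $\underline{\Psi}^{\bullet}(M)$ is a two-term complex concentrated in degrees $0$ and $1$, its cohomology is given explicitly by
\begin{equation*}
\mathcal{H}^0(\underline{\Psi}^{\bullet}(M)) = M^{\psi_M=id}, \qquad \mathcal{H}^1(\underline{\Psi}^{\bullet}(M)) = M/(\psi_M-id)M,
\end{equation*}
and $\mathcal{H}^i(\underline{\Psi}^{\bullet}(M))=0$ for $i\neq 0,1$.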
Further, for an \'{e}tale $(\varphi_q,\Gamma_{LT})$-module $M$ such that $\pi^nM =0$ for some $n\geq 1$, we have a $\Gamma_{LT}$-invariant continuous pairing (\cite[Remark $4.7$]{SV}) defined by \begin{align*} \langle\cdot,\cdot\rangle:& M \times M^{\vee} \rightarrow K/\mathcal{O}_K\\ &(m,F)\mapsto \pi^{-n}\ \operatorname{Res}(F(m)d\operatorname{log}_{LT}(\omega_{LT}))\ \text{mod}\mathcal{O}_K, \end{align*} where $\operatorname{Res}$ is the residue map and $\omega_{LT}=\{\iota(v)\}$. This pairing induces, for any $M\in {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$, a pairing \begin{equation*} \mathcal{H}^i(\Phi^\bullet(M^\vee))\times \mathcal{H}^{1-i}(\underline{\Psi}^{\bullet}(M))\rightarrow K/\mathcal{O}_K \end{equation*} which is perfect. \iffalse{ \begin{proposition}\label{duality Theorem} Let $M\in {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$. Then the pairing \begin{equation*} \mathcal{H}^i(\Phi^\bullet(M^\vee))\times \mathcal{H}^{1-i}(\underline{\Psi}^{\bullet}(M))\rightarrow K/\mathcal{O}_K \end{equation*} is perfect. \end{proposition} \begin{proof} For an \'{e}tale $(\varphi_q,\Gamma_{LT})$-module $M$ such that $\pi^nM =0$, for some $n\geq 1$, we have $\Gamma_{LT}$-invariant continuous pairing defined as (\cite[Remark $4.7$]{SV}) \begin{align*} \langle\cdot,\cdot\rangle:& M \times M^{\vee} \rightarrow K/\mathcal{O}_K\\ &(m,F)\mapsto \pi^{-n}\ \operatorname{Res}(F(m)d\operatorname{log}_{LT}(\omega_{LT}))\ \text{mod}\mathcal{O}_K, \end{align*} where $\operatorname{Res}$ is the residue map and $\omega_{LT}=\{\iota(v)\}$. Moreover, this pairing satisfy the following properties: \begin{enumerate} \item The operator $\psi_M$ is left adjoint to $\varphi_{M^\vee}$ under the pairing $\langle\cdot,\cdot\rangle$; \item The operator $\varphi_M$ is left adjoint to $\psi_{M^\vee}$ under the pairing $\langle\cdot,\cdot\rangle$. 
\end{enumerate} This induces a pairing \begin{equation}\label{perfect} \mathcal{H}^i(\Phi^\bullet(M^\vee))\times \mathcal{H}^{1-i}(\underline{\Psi}^{\bullet}(M))\rightarrow K/\mathcal{O}_K \end{equation} of $\mathcal{O}_K$-modules. Note that the cohomology groups $\mathcal{H}^i(\Phi^\bullet(M^\vee))$ and $\mathcal{H}^i(\underline{\Psi}^\bullet(M))$ are trivial for $i\geq 2$ and for $i<0$, it is sufficient to check only for $i=0$ and $1$. Now, \begin{align*} \mathcal{H}^0(\Phi^\bullet(M^\vee))^\vee &= ((M^\vee)^{\varphi_{M^\vee}=id})^\vee\\& = (M^\vee)^\vee/(\varphi_{M^\vee}-id)^\vee (M^\vee)^\vee \\ &= M/(\psi_M-id)M\\ & = \mathcal{H}^1(\underline{\Psi}^{\bullet}(M)), \end{align*} where the first and the last equality follows from the definition of $\Phi^\bullet(M^\vee)$ and $\underline{\Psi}^{\bullet}(M)$, respectively. The third equality uses the property that $\psi_M$ is left adjoint to $\varphi_{M^\vee}$ and $\varphi_M$ is left adjoint to $\psi_{M^\vee}$. Similarly, \begin{align*} \mathcal{H}^1(\Phi^\bullet(M^\vee))^\vee &= ((M^\vee)/({\varphi_{M^\vee}-id})M^\vee)^\vee\\& = ((M^\vee)^\vee)^{(\varphi_{M^\vee}=id)^\vee}\\ &= M^{\psi_M=id}\\ & = \mathcal{H}^0(\underline{\Psi}^{\bullet}(M)). \end{align*} Hence $$\mathcal{H}^i(\Phi^\bullet(M^\vee))^{\vee} \cong \mathcal{H}^{1-i}(\underline{\Psi}^{\bullet}(M))$$ for all $i$. In other words, the pairing given by (\ref{perfect}) is perfect. \end{proof} }\fi \begin{remark}\label{Remark-duality} Note that the functors $\mathcal{H}^i(\Phi^\bullet(-))$ and $\mathcal{H}^i(\underline{\Psi}^\bullet(-))$ commute with direct limits. Therefore the pairing exists for any $M\in \varinjlim {\bf Mod}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}_{/\mathcal{O}_{\mathcal{E}}}$. \end{remark} \iffalse{ This induces a perfect pairing of $\mathcal{O}_K$-modules \begin{equation*} \mathcal{H}^i(\Phi^\bullet(M^\vee))\times \mathcal{H}^{1-i}(\underline{\Psi}^{\bullet}(M))\rightarrow K/\mathcal{O}_K. 
\end{equation*} In other words, we have a canonical isomorphism \begin{equation}\label{duality} \mathcal{H}^i(\Phi^\bullet(M^\vee))^{\vee} \cong \mathcal{H}^{1-i}(\underline{\Psi}^{\bullet}(M)). \end{equation} }\fi For $V \in {\bf Rep}_{\mathcal{O}_K}(G_K)$, we define the Iwasawa cohomology over $K_{\infty}$ by \begin{equation*} H^i_{Iw}(K_{\infty}/K,V):= \varprojlim H^i(L,V), \end{equation*} where $L$ varies over the finite Galois extensions of $K$ contained in $K_{\infty}$, and the projective limit is taken with respect to the cohomological co-restriction maps. We now recall the following theorem (\cite[Theorem 5.13]{SV}), which describes the Iwasawa cohomology groups in terms of the complex $\underline{\Psi}^{\bullet}$. \begin{theorem}\label{Iwasawa cohomology} Let $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$. Then the complex \begin{equation*} \underline{\Psi}^{\bullet}(\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT}))): 0\rightarrow \mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT}))\xrightarrow{\psi-id}\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT}))\rightarrow 0, \end{equation*} where $\psi= \psi_{\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT}))}$, and $\chi_{cyc}$ is the cyclotomic character, computes the Iwasawa cohomology groups $H^i_{Iw}({K_{\infty}/K},V)$ for all $i\geq 1$. \end{theorem} \begin{proof} Since $V \in {\bf Rep}_{\mathcal{O}_K-tor}^{dis}(G_K)$, i.e., $V$ is a discrete $\pi$-primary representation of $G_K$, we have an isomorphism \begin{equation*} H^i_{Iw}({K_{\infty}/K},V)\cong H^{2-i}(\operatorname{Gal}(\bar{K}/K_{\infty}),V^{\vee}(\chi_{cyc}))^{\vee}, \end{equation*} which is induced from local Tate duality. For more details see \cite[Remark 5.11]{SV}. 
Moreover, \begin{align*} H^{2-i}(\operatorname{Gal}(\bar{K}/K_{\infty}),V^{\vee}(\chi_{cyc}))^{\vee} &= \mathcal{H}^{2-i}(\Phi^{\bullet}(\mathbb{D}_{LT}(V^\vee(\chi_{cyc}))))^{\vee}\\ &=\mathcal{H}^{2-i}(\Phi^\bullet(\mathbb{D}_{LT}(V^\vee)(\chi_{cyc})))^\vee \\&= \mathcal{H}^{2-i}(\Phi^\bullet(\mathbb{D}_{LT}(V)^\vee(\chi_{LT}^{-1}\chi_{cyc})))^\vee\\& = \mathcal{H}^{2-i}(\Phi^\bullet(\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT}))^\vee))^\vee\\&\cong \mathcal{H}^{i-1}(\underline{\Psi}^{\bullet}(\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT})))). \end{align*} Here the first equality follows from Proposition \ref{H_K-cohomology}. The second and third equalities use Remark 4.6 and Remark 5.6 of \cite{SV}, respectively, while the last isomorphism comes from Remark \ref{Remark-duality}. Hence \begin{equation*} H^i_{Iw}({K_{\infty}/K},V)\cong \mathcal{H}^{i-1}(\underline{\Psi}^{\bullet}(\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT})))). \end{equation*} This proves the theorem. \end{proof} \begin{corollary}\label{Iwasawa cor.} For any $V \in {\bf Rep}_{\mathcal{O}_K}(G_K)$, we have $H^i_{Iw}({K_{\infty}/K},V)\cong \mathcal{H}^{i-1}(\underline{\Psi}^{\bullet}(\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT}))))$ for $i\geq 1$. \end{corollary} \begin{proof} Since the transition maps are surjective in the projective system $(\underline{\Psi}^\bullet(\mathbb{D}_{LT}(V/\pi^nV(\chi_{cyc}^{-1}\chi_{LT}))))_n$ of co-chain complexes of abelian groups, the first hyper-cohomology spectral sequence degenerates at $E_2$. Moreover, $\varprojlim\limits_n{}^{1} \mathcal{H}^i(\underline{\Psi}^\bullet(\mathbb{D}_{LT}(V/\pi^nV(\chi_{cyc}^{-1}\chi_{LT}))))=0$. Then the second hyper-cohomology spectral sequence \begin{equation*} \varprojlim\limits_n{}^{i} \mathcal{H}^j(\underline{\Psi}^\bullet(\mathbb{D}_{LT}(V/\pi^nV(\chi_{cyc}^{-1}\chi_{LT})))) \Rightarrow \mathcal{H}^{i+j}(\underline{\Psi}^\bullet(\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT})))), \end{equation*} also degenerates at $E_2$. 
Therefore $\varprojlim\limits_n \mathcal{H}^i(\underline{\Psi}^\bullet(\mathbb{D}_{LT}(V/\pi^nV(\chi_{cyc}^{-1}\chi_{LT}))))= \mathcal{H}^i(\underline{\Psi}^\bullet(\mathbb{D}_{LT}(V(\chi_{cyc}^{-1}\chi_{LT}))))$. Moreover, it follows from Lemma \ref{commutes inverse} that the functor $H^i_{Iw}({K_{\infty}/K},-)$ commutes with inverse limits. Now the result follows from Theorem \ref{Iwasawa cohomology} by taking inverse limits. \end{proof} Next, we generalize most of our results to the case of any complete local Noetherian ring whose residue field is a finite extension of $\mathbb{F}_p$. \section{An Equivalence of Categories over the Coefficient Ring}\label{sec6} \subsection{Background on Coefficient Rings}\label{sec5} In this section, we recall some basic results on coefficient rings. Recall that a \emph{coefficient ring} $R$ is a complete Noetherian local ring with finite residue field $k_R$ of characteristic $p$, i.e., $k_R$ is a finite extension of $\mathbb{F}_p$. Then $R$ has a natural pro-finite topology with a base of open ideals given by the powers of its maximal ideal $\mathfrak{m}_R$. In other words, $R = \varprojlim_n R/\mathfrak{m}_R^n$. A \emph{coefficient ring homomorphism} is a continuous homomorphism of coefficient rings $R^{\prime}\rightarrow R$ such that the inverse image of the maximal ideal $\mathfrak{m}_R$ is the maximal ideal $\mathfrak{m}_{R^{\prime}}\subset R^{\prime}$ and the induced homomorphism on residue fields is an isomorphism. For a fixed prime number $p$, a \emph{$p$-ring} is a complete discrete valuation ring whose valuation ideal is generated by a prime element. Let $R$ and $S$ be arbitrary rings, and let $I\subset R$ and $J\subset S$ be ideals. Assume that $R$ and $S$ are both $T$-algebras for some third ring $T$. The \emph{completed tensor product} $R\hat{\otimes}_T S$ is defined as the completion of $R\otimes_T S$ with respect to the $(I\otimes S+R\otimes J)$-adic topology. 
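For instance (a standard example, recorded here only for illustration), take $T=\mathcal{O}_K$, let $R=\mathcal{O}_{\mathcal{E}}$ with $I=\mathfrak{m}_{\mathcal{O}_{\mathcal{E}}}=(\pi)$, and let $S=\mathcal{O}_K[[X]]$ with $J=(\pi,X)$ its maximal ideal, where $\pi$ denotes a uniformizer of $\mathcal{O}_K$. The defining topology on $\mathcal{O}_{\mathcal{E}}\otimes_{\mathcal{O}_K} \mathcal{O}_K[[X]]$ is then the $(\pi,X)$-adic one, and since $\mathcal{O}_K[[X]]/(\pi,X)^N$ is a finite $\mathcal{O}_K$-module, we have $\mathcal{O}_{\mathcal{E}}\otimes_{\mathcal{O}_K}\mathcal{O}_K[[X]]/(\pi,X)^N\cong \mathcal{O}_{\mathcal{E}}[X]/(\pi,X)^N$, so that
\begin{equation*}
\mathcal{O}_{\mathcal{E}}\hat{\otimes}_{\mathcal{O}_K}\mathcal{O}_K[[X]] = \varprojlim_N \mathcal{O}_{\mathcal{E}}[X]/(\pi,X)^N \cong \mathcal{O}_{\mathcal{E}}[[X]].
\end{equation*}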
Let $\mathcal{O}$ be a $p$-ring and $R$ be a coefficient ring. Let $\mathcal{O}_K$ be a finite extension of $\mathbb{Z}_p$. Assume that both $\mathcal{O}$ and $R$ are $\mathcal{O}_K$-algebras, where the maps $\mathcal{O}_K\rightarrow \mathcal{O}$ and $\mathcal{O}_K\rightarrow R$ are local homomorphisms. Define \begin{equation*} \mathcal{O}_R:= \mathcal{O}\hat{\otimes}_{\mathcal{O}_K} R. \end{equation*} Then by \cite[Proposition 1.2.3]{Dee}, $\mathcal{O}_R$ is a complete Noetherian semi-local ring. \begin{proposition}\label{Artinian} Let $A$ be a Noetherian semi-local commutative ring with unity and $\mathfrak{m}_A$ the radical (intersection of all maximal ideals) of $A$. Then $A/\mathfrak{m}_A^n$ is Artinian for all $n\geq 1$. \end{proposition} \begin{proof} We prove this by induction on $n$. Let $n=1$. Then by the Chinese Remainder Theorem, we have \begin{equation} \label{eq:11} A/\mathfrak{m}_A \cong \bigoplus_{i=1}^{s}A/\mathfrak{m}_i, \end{equation} where $\mathfrak{m}_1,\ldots,\mathfrak{m}_s$ are the maximal ideals of $A$, and the map is the natural projection. Note that each $A/\mathfrak{m}_i$ is Artinian, being a field. Consequently, the right hand side of (\ref{eq:11}) is Artinian, as it is a finite direct sum of Artinian rings. Therefore $A/\mathfrak{m}_A$ is Artinian, and the result is true for $n=1$. Suppose that the result is true for $n-1$. For general $n$, the result follows from the exact sequence \begin{equation*} 0\rightarrow \mathfrak{m}_A^{n-1}/\mathfrak{m}_A^n \rightarrow A/\mathfrak{m}_A^n\rightarrow A/\mathfrak{m}_A^{n-1}\rightarrow 0. \end{equation*} Now $A/\mathfrak{m}_A^{n-1}$ is Artinian by the induction hypothesis. Since $A$ is Noetherian, $\mathfrak{m}_A^{n-1}/\mathfrak{m}_A^n$ is a finitely generated module over $A/\mathfrak{m}_A$. Together with the fact that every finitely generated module over an Artinian ring is Artinian, the module $\mathfrak{m}_A^{n-1}/\mathfrak{m}_A^n$ is Artinian. Hence $A/\mathfrak{m}_A^n$ is Artinian. 
\end{proof} \begin{remark} Since $\mathcal{O}_R$ is a complete semi-local Noetherian ring with unity, by Proposition \ref{Artinian}, $\mathcal{O}_R/\mathfrak{m}_R^n\mathcal{O}_R$ is Artinian for all $n\geq 1$. \end{remark} Let $R$ and $S$ be two coefficient rings, and $\mathcal{O}$ be a $p$-ring (or indeed any local ring with residue field of characteristic $p$). Let $\theta : R \rightarrow S$ be a ring homomorphism. Then it induces $\theta : \mathcal{O}\otimes_{\mathcal{O}_K}R\rightarrow \mathcal{O}\otimes_{\mathcal{O}_K}S$. Assume that $\theta$ is local. Then we have $\theta(\mathcal{O}\otimes\mathfrak{m}_R+\mathfrak{m}_{\mathcal{O}}\otimes R)\subset \mathcal{O}\otimes\mathfrak{m}_S+\mathfrak{m}_{\mathcal{O}}\otimes S$, and $\theta$ is continuous with respect to the obvious topologies. Therefore it induces a semi-local homomorphism \begin{equation*} \theta: \mathcal{O}_R\rightarrow \mathcal{O}_S. \end{equation*} \begin{proposition}\!\textup{\cite[Proposition 1.2.6]{Dee}}\label{faithfully flat} Let $\theta : \mathcal{O}_1\rightarrow\mathcal{O}_2$ be a local homomorphism of $p$-rings and let $R$ be a coefficient ring. If $\theta$ is flat, then it induces a faithfully flat homomorphism \begin{equation*} \theta_R : \mathcal{O}_{1,R}\rightarrow \mathcal{O}_{2,R}. \end{equation*} \end{proposition} \subsection{An equivalence over coefficient rings} \subsubsection{The characteristic $p$ case} \label{sub6.1} Let $E$ be a local field of characteristic $p>0$. Then $E\cong k((t))$, where $k$ is a finite extension of $\mathbb{F}_p$. Assume that $\operatorname{card}(k)=q$, where $q=p^r$ for some fixed $r$. Let $\mathcal{O}_\mathcal{E}$ be the Cohen ring of $E$ with uniformizer $\pi$. Let $\mathcal{E}$ be the field of fractions of $\mathcal{O}_\mathcal{E}$. The field $\mathcal{E}$ is a complete discretely valued field of characteristic $0$ whose residue field is $E$. We fix a choice of $\mathcal{E}$. 
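For instance, one explicit model of $\mathcal{O}_\mathcal{E}$ (a standard choice, given here only for illustration) is the $\pi$-adic completion of $\mathcal{O}_K[[t]][t^{-1}]$, i.e.,
\begin{equation*}
\mathcal{O}_{\mathcal{E}} \cong \Big\{\sum_{n\in\mathbb{Z}} a_n t^n : a_n\in\mathcal{O}_K \text{ and } a_n\rightarrow 0 \text{ as } n\rightarrow -\infty\Big\},
\end{equation*}
where $\pi$ denotes a uniformizer of $\mathcal{O}_K$; reducing the coefficients modulo $\pi$ recovers $E\cong k((t))$. On this model, one lift of the $q$-Frobenius considered below is given by $t\mapsto t^q$ with the trivial action on the coefficients: since $\operatorname{card}(k)=q$, we have $\bar{a}^q=\bar{a}$ for all $\bar{a}\in k$, so this map reduces to $x\mapsto x^q$ modulo $\pi$.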
Let $\mathcal{E}^{ur}$ be the maximal unramified extension of $\mathcal{E}$ with ring of integers $\mathcal{O}_{\mathcal{E}^{ur}}$. Clearly, $\mathcal{E}^{ur}$ is a Galois extension of $\mathcal{E}$, and there is an identification of Galois groups \begin{equation*} G_E=\operatorname{Gal}({E^{sep}/E})\xrightarrow{\sim} \operatorname{Gal}({\mathcal{E}^{ur}/\mathcal{E}}), \end{equation*} where $E^{sep}$ is the separable closure of $E$. The ring $\mathcal{O}_{\mathcal{E}^{ur}}$ has a valuation induced from $\mathcal{O}_{\mathcal{E}}$ and the valuation ring in the completion $\widehat{\mathcal{E}^{ur}}$ of $\mathcal{E}^{ur}$ is a $p$-ring with residue field $E^{sep}$. We write ${\mathcal{O}}_{\widehat{\mathcal{E}^{ur}}}$ for this ring. The Galois group $G_E$ acts by continuity on $\widehat{\mathcal{E}^{ur}}$. \iffalse{ \begin{remark} Let $K$ be a $p$-adic field with residue field $k$ such that $\text{card}(k)=q$. Since $E \cong k((t))$, we have $\mathcal{O}_K\hookrightarrow \mathcal{O}_{\mathcal{E}}$. \end{remark} Let $E$ be a local field of characteristic $p>0$. Recall that the Cohen ring of $E$ is the unique (up to isomorphism) absolutely unramified discrete valuation ring of characteristic $0$ with residue field $E$. Let $\mathcal{O}_\mathcal{E}$ be the Cohen ring of $E$ with uniformizer $\pi$. Let $\mathcal{E}$ be the field of fractions of $\mathcal{O}_\mathcal{E}$. Then \begin{equation*} \mathcal{O}_\mathcal{E} = \varprojlim _{n \in\mathbb{N}}\mathcal{O}_\mathcal{E}/\pi^n\mathcal{O}_\mathcal{E},\ \mathcal{O}_\mathcal{E}/\pi\mathcal{O}_\mathcal{E}=E\ \text{and}\ \mathcal{E}=\mathcal{O}_\mathcal{E}[\frac{1}{\pi}]. \end{equation*} The field $\mathcal{E}$ is of characteristic $0$ with a complete discrete valuation, whose residue field is E, and its maximal ideal is generated by $\pi$. 
Moreover, if $\mathcal{E}^{\prime}$ is another field with the same property, then there is a continuous homomorphism $\iota: \mathcal{E}\rightarrow \mathcal{E}^{\prime}$ of valued fields inducing identity on $E$, and $\iota$ is always an isomorphism. If $E$ is perfect, then $\mathcal{O}_{\mathcal{E}}$ may be identified with the ring $W(E)$ of Witt vectors with coefficients in $E$, and $\iota$ is unique. So, we have a $p$-ring $\mathcal{O}_{\mathcal{E}}$ of characteristic zero with fraction field $\mathcal{E}$ and residue field $E$. Fix a choice of $\mathcal{E}$. For any homomorphism $f: E\rightarrow F$ of fields of characteristic $p$, there is a unique local homomorphism $\mathcal{O}_{\mathcal{E}}\rightarrow \mathcal{O}_\mathcal{F}$ which induces $f$ on the residue fields (\cite[Theorem A.45]{FO}). Thus for any finite separable extension $F$ of $E$, there is a unique unramified extension $\mathcal{E}_F = \text{Frac}(\mathcal{O}_\mathcal{F})$ of $\mathcal{E}$ whose residue field is $F$. Moreover, if $F/E$ is Galois, then $\mathcal{E}_F/\mathcal{E}$ is also Galois with Galois group \begin{equation*} \operatorname{Gal}(\mathcal{E}_F/\mathcal{E}) = \operatorname{Gal}(F/E). \end{equation*} Let $E^{sep}$ be the separable closure of $E$. Then \begin{equation*} E^{sep} = \bigcup_{F\in S}F, \end{equation*} where $S$ runs over the finite extensions of $E$ contained in $E^{sep}$. If $F, F^{\prime}\in S$ and $F\subset F^{\prime}$, then $\mathcal{E}_F\subset \mathcal{E}_F^{\prime}$. Define \begin{equation*} \mathcal{E}^{ur} := \bigcup_{F\in S} \mathcal{E}_F. \end{equation*} Clearly $\mathcal{E}^{ur}$ is a Galois extension of $\mathcal{E}$, and there is an identification of Galois groups \begin{equation*} G_E=\operatorname{Gal}({E^{sep}/E})\xrightarrow{\sim} \operatorname{Gal}({\mathcal{E}^{ur}/\mathcal{E}}). 
\end{equation*} Moreover, $\mathcal{O}_{\mathcal{E}^{ur}}$, which is the ring of integers of $\mathcal{E}^{ur}$, is a strict Hensalization of $\mathcal{O}_{\mathcal{E}}$ with field of fractions $\mathcal{E}^{ur}$, and $\mathcal{O}_{\mathcal{E}^{ur}}$ has a valuation induced from $\mathcal{O}_{\mathcal{E}}$ and the valuation ring in the completion $\widehat{\mathcal{E}^{ur}}$ of $\mathcal{E}^{ur}$ is a $p$-ring with residue field $E^{sep}$, a separable closure of $E$. Write ${\mathcal{O}}_{\widehat{\mathcal{E}^{ur}}}$ for this ring and $G_E$ acts by continuity on $\widehat{\mathcal{E}^{ur}}$. \begin{remark} Let $K$ be a $p$-adic field with residue field $k$ such that $\text{card}(k)=q$. Since $E$ is a local field of characteristic $p$ and the residue field of $E$ has cardinality $q$. Therefore, $E \cong k((\pi))$ and we have $\mathcal{O}_K\hookrightarrow \mathcal{O}_{\mathcal{E}}$. \end{remark} }\fi From now on, $R$ will always denote the coefficient ring, unless stated otherwise. Also, we assume that $R$ is always an $\mathcal{O}_K$-algebra such that the map $\mathcal{O}_K\rightarrow R$ is a local ring homomorphism. Here $\mathcal{O}_K$ is the ring of integers of a $p$-adic field $K$ with residue field $k$ such that $\operatorname{card}(k)=q$.\par Define the rings \begin{align*} &\mathcal{O}_R:= \mathcal{O}_{\mathcal{E}}\hat{\otimes}_{\mathcal{O}_K} R,\\& \widehat{\mathcal{O}^{ur}_R}:= \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\hat{\otimes}_{\mathcal{O}_K} R. \end{align*} Then it follows from Proposition \ref{faithfully flat} that $\widehat{\mathcal{O}^{ur}_R}$ is an $\mathcal{O}_R$-algebra and is faithfully flat over $\mathcal{O}_R$. The action of $G_E$ on $\mathcal{O}_{\mathcal{E}^{ur}}$ induces an action on $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$. Now by taking the trivial action of $G_E$ on $R$, it induces a Galois action on $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K} R$. 
Moreover, this action is continuous as $G_E$ acts continuously on $\mathcal{E}^{ur}$. Thus the action of $G_E$ on $\widehat{\mathcal{O}^{ur}_R}$ is continuous with respect to the $\mathfrak{m}_R\widehat{\mathcal{O}^{ur}_R}$-adic topology. \begin{remark} It follows from \cite[Proposition 1.2.3]{Dee} that $\mathcal{O}_R$ and $\widehat{\mathcal{O}^{ur}_R}$ are Noetherian semi-local rings, complete with respect to the $\mathfrak{m}_R$-adic topology, and that $\mathfrak{m}_R$ generates the radical of these rings. \end{remark} Let $\varphi_q:=(x\mapsto x^q)$ be the $q$-Frobenius on $E$. Choose a lift of $\varphi_q$ to $\mathcal{E}$ mapping $\mathcal{O}_{\mathcal{E}}$ to $\mathcal{O}_{\mathcal{E}}$. Then we have a ring homomorphism $\varphi_q: \mathcal{O}_{\mathcal{E}}\rightarrow \mathcal{O}_{\mathcal{E}}$ such that \begin{equation*} \varphi_q(x) \equiv x^q \mod \pi. \end{equation*} Assume that $\varphi_q$ is flat. Then we have an $R$-linear homomorphism \begin{equation*} \varphi_q:= \varphi_q\otimes id_R : \mathcal{O}_{\mathcal{E}}\otimes_{\mathcal{O}_K} R\rightarrow \mathcal{O}_{\mathcal{E}}\otimes_{\mathcal{O}_K} R. \end{equation*} Since the ideal $\mathfrak{m}_{\mathcal{O}_{\mathcal{E}}}\otimes R+\mathcal{O}_{\mathcal{E}}\otimes \mathfrak{m}_R$ in $\mathcal{O}_{\mathcal{E}}\otimes_{\mathcal{O}_K} R$ is generated by $\mathfrak{m}_R$, it is clear that $\varphi_q$ maps $\mathfrak{m}_{\mathcal{O}_{\mathcal{E}}}\otimes R+\mathcal{O}_{\mathcal{E}}\otimes \mathfrak{m}_R$ to itself. Then we have the following lemma. \begin{lemma}\label{flat} The homomorphism \begin{equation*} \varphi_q : \mathcal{O}_R\rightarrow \mathcal{O}_R \end{equation*} is faithfully flat. \end{lemma} \begin{proof} Since $\varphi_q$ is flat, the proof follows from Proposition \ref{faithfully flat}. 
\end{proof} As the $q$-Frobenius $\varphi_q$ on $\mathcal{O}_{\mathcal{E}}$ extends uniquely by functoriality and continuity to a $q$-Frobenius on $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$, we also have a faithfully flat homomorphism from $\widehat{\mathcal{O}^{ur}_R}$ to $\widehat{\mathcal{O}^{ur}_R}$. \begin{definition} An $R$-representation of $G_E$ is a finitely generated $R$-module with a continuous and $R$-linear action of $G_E$. \end{definition} \begin{definition} A $\varphi_q$-module over $\mathcal{O}_R$ is an $\mathcal{O}_R$-module $M$ together with a map $\varphi_M : M \rightarrow M$, which is semi-linear with respect to $\varphi_q$. \end{definition} \begin{remark} Let $M$ be an $\mathcal{O}_R$-module. Then a semi-linear map $\varphi_M: M\rightarrow M$ is equivalent to an $\mathcal{O}_R$-linear map $\varphi_M^{lin}: M_{\varphi_q}\rightarrow M$, where $M_{\varphi_q}:=M\otimes_{\mathcal{O}_R,\varphi_q}\mathcal{O}_R$ is the base change of $M$ by $\mathcal{O}_R$ via $\varphi_q$. \end{remark} Let ${\bf Rep}_R(G_E)$ denote the category of $R$-linear representations of $G_E$ and ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q}$ the category of $\varphi_q$-modules over $\mathcal{O}_R$. The morphisms in ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q}$ are the $\mathcal{O}_R$-linear homomorphisms commuting with the maps $\varphi_M$. Now we define a functor from ${\bf Rep}_R(G_E)$ to ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q}$. Let $V$ be an $R$-representation of $G_E$. Define \begin{align*} \mathbb{D}_R(V):= (\widehat{\mathcal{O}^{ur}_R}\otimes_R V)^{G_E}. \end{align*} Here $G_E$ acts diagonally. Moreover, the multiplication by $\mathcal{O}_R$ on $\widehat{\mathcal{O}^{ur}_R}\otimes_R V$ is $G_E$-equivariant, so $\mathbb{D}_R(V)$ is an $\mathcal{O}_R$-module. We extend the $q$-Frobenius to $\widehat{\mathcal{O}^{ur}_R}\otimes_R V$ by letting $\varphi_q$ act trivially on $V$; then $\varphi_q$ commutes with the action of $G_E$. 
It induces a map \begin{equation*} \varphi_{\mathbb{D}_R(V)} : \mathbb{D}_R(V)\rightarrow \mathbb{D}_R(V), \end{equation*} which is semi-linear with respect to $\varphi_q$. Then $V\mapsto\mathbb{D}_R(V)$ is a functor from ${\bf Rep}_R(G_E)$ to ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q}$. The following lemma shows that the functor $\mathbb{D}_R$ commutes with restriction of scalars. \begin{lemma}\label{finite length} Let $V\in {\bf Rep}_R(G_E)$ be such that $\mathfrak{m}_R^nV=0$ for some $n$. Then, as $\mathcal{O}_K$-modules, we have \begin{equation*} \mathbb{D}_{LT}(V) \cong \mathbb{D}_R(V). \end{equation*} \end{lemma} \begin{proof} We first show, by induction on $n$, that $V$ is finitely generated as an $\mathcal{O}_K$-module. Assume that $\mathfrak{m}_R V=0$. Then $V$ is finitely generated as an $R/\mathfrak{m}_R$-module. But $R/\mathfrak{m}_R=k_R$ is the residue field of $R$, which is finite, so $V$ is a finite abelian group and, in particular, finitely generated as an $\mathcal{O}_K$-module. Next, suppose that the claim holds for $n-1$, i.e., every $W\in{\bf Rep}_R(G_E)$ with $\mathfrak{m}_R^{n-1}W=0$ is finitely generated as an $\mathcal{O}_K$-module. Now let $\mathfrak{m}_R^nV=0$ and consider the exact sequence \begin{equation*} 0\rightarrow \mathfrak{m}_R^{n-1}V \rightarrow V\rightarrow V/\mathfrak{m}_R^{n-1}V\rightarrow 0. \end{equation*} By the induction hypothesis, $\mathfrak{m}_R^{n-1}V$ and $V/\mathfrak{m}_R^{n-1}V$ are finitely generated as $\mathcal{O}_K$-modules, and hence so is $V$. Therefore \begin{equation*} \widehat{\mathcal{O}^{ur}_R}\otimes_R V = (\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\hat{\otimes}_{\mathcal{O}_K}R)\hat{\otimes}_R V\cong \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\hat{\otimes}_{\mathcal{O}_K}V=\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K}V. \end{equation*} Here the first equality follows from the fact that $\widehat{\mathcal{O}^{ur}_R}$ is complete and $V$ is finitely generated as an $R$-module. 
The last one uses that $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ is complete and $V$ is finitely generated as an $\mathcal{O}_K$-module. Taking $G_E$-invariants, we get the desired result. \end{proof} Note that, arguing as in \cite{Dee} (with the absolute Frobenius replaced by the $q$-Frobenius), one easily proves that the functor $\mathbb{D}_R$ is exact and faithful, and that it commutes with restriction of scalars and with inverse limits. Moreover, for any $V \in {\bf Rep}_R(G_E)$, the module $\mathbb{D}_R(V)$ is finitely generated as an $\mathcal{O}_R$-module. Also, the canonical $\widehat{\mathcal{O}^{ur}_R}$-linear homomorphism of $G_E$-modules \begin{equation*} \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}\mathbb{D}_R(V)\rightarrow \widehat{\mathcal{O}^{ur}_R}\otimes_R V \end{equation*} is an isomorphism. Next, we define a full subcategory of ${\bf Mod}^{\varphi_q}_{/\mathcal{O}_R}$, which is the essential image of the functor $\mathbb{D}_R$. \begin{definition} A $\varphi_q$-module $M$ over $\mathcal{O}_R$ is said to be \emph{\'{e}tale} if $\varphi_M^{lin}$ is an isomorphism and $M$ is finitely generated as an $\mathcal{O}_R$-module. \end{definition} Let ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\acute{e}t}$ denote the category of \'{e}tale $\varphi_q$-modules. A morphism of \'{e}tale $\varphi_q$-modules is a morphism of the underlying $\varphi_q$-modules. It then follows from \cite[Lemma 2.1.16 and 2.1.17]{Dee} that the category ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\acute{e}t}$ is an abelian category. Moreover, it is stable under subobjects, quotients and tensor products, and $\mathbb{D}_R(V) \in {\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\acute{e}t}$ for every $V\in{\bf Rep}_R(G_E)$. Now we introduce a functor which is quasi-inverse to $\mathbb{D}_R$. The functor \begin{equation*} \mathbb{V}_R : {\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\acute{e}t} \rightarrow {\bf Rep}_R(G_E) \end{equation*} is defined as follows. 
Let $M$ be an \'{e}tale $\varphi_q$-module over $\mathcal{O}_R$. Then view $\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M$ as a $\varphi_q$-module via \begin{equation*} \varphi_{\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M}(\lambda\otimes m) = \varphi_q(\lambda)\otimes \varphi_M(m) \quad \text{for}\ \lambda \in \widehat{\mathcal{O}^{ur}_R}, m \in M. \end{equation*} For simplicity, we write $\varphi_q\otimes\varphi_M$ rather than $\varphi_{\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M}$. The Galois group $G_E$ acts on $\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M$ via its action on $\widehat{\mathcal{O}^{ur}_R}$, and this action commutes with $\varphi_q\otimes\varphi_M$. Define \begin{equation*} \mathbb{V}_R(M) := (\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M)^{\varphi_q\otimes\varphi_M=id}, \end{equation*} which is an $R$-submodule stable under the action of $G_E$. Thus $M \mapsto \mathbb{V}_R(M)$ is a functor from ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\acute{e}t}$ to the category ${\bf Rep}_R(G_E)$. Using \cite[Proposition 2.1.21]{Dee}, one shows without extra work that the functor $\mathbb{V}_R$ commutes with inverse limits. \begin{lemma} \label{surjective V} Let $V$ be an $R$-representation of $G_E$. Then $\varphi_q\otimes id_V-id$ is a surjective homomorphism of abelian groups on $\widehat{\mathcal{O}^{ur}_R}\otimes_R V$. \end{lemma} \begin{proof} First suppose $\mathfrak{m}_R V= 0$. The map $\varphi_q-id : E^{sep}\rightarrow E^{sep}$ is surjective, as the polynomial $x^q-x-\lambda$ is separable for all $\lambda \in E^{sep}$. As $k_R$ is a finite extension of $\mathbb{F}_p$ and $\varphi_q$ acts trivially on $k_R$, the map \begin{equation*} \varphi_q-id : E^{sep}\otimes_{\mathbb{F}_p}k_R\rightarrow E^{sep}\otimes_{\mathbb{F}_p}k_R \end{equation*} is also surjective. 
Since $\varphi_q-id$ is moreover continuous, the map \begin{equation*} \varphi_q-id: k_{\widehat{\mathcal{O}^{ur}_R}}=E^{sep}\hat{\otimes}_{\mathbb{F}_p}k_R\rightarrow E^{sep}\hat{\otimes}_{\mathbb{F}_p}k_R \end{equation*} is surjective. Since $V_1=V/\mathfrak{m}_RV$ is free over $k_R$ and $\varphi_q$ acts on $k_{\widehat{\mathcal{O}^{ur}_R}}\otimes_{k_R}V_1$ via its action on $k_{\widehat{\mathcal{O}^{ur}_R}}$, it follows that $\varphi_q\otimes id_{V_1}-id$ is surjective on $k_{\widehat{\mathcal{O}^{ur}_R}}\otimes_{k_R}V_1$. Then by d\'{e}vissage, writing $V_n=V/\mathfrak{m}_R^nV$, the map \begin{equation*} \varphi_q\otimes id_{V_n}-id: \widehat{\mathcal{O}^{ur}_R}/\mathfrak{m}_R^n\otimes_{R}V_n \rightarrow \widehat{\mathcal{O}^{ur}_R}/\mathfrak{m}_R^n\otimes_{R}V_n \end{equation*} is surjective. Here $\widehat{\mathcal{O}^{ur}_R}/\mathfrak{m}_R^n$ is Artinian and $V_n$ has finite length, so the Mittag-Leffler condition holds for $\widehat{\mathcal{O}^{ur}_R}/\mathfrak{m}_R^n\otimes_{R}V_n$. Passing to the inverse limit, the result holds for general $V$. \end{proof} To prove the next proposition, we need the following analogue of Lemma \ref{finite length}, which is proved in the same way. \begin{lemma}\label{finite-length M} If $\mathfrak{m}_R^n M = 0$, then as $\mathcal{O}_K$-modules \begin{equation*} \mathbb{V}_{LT}(M) = \mathbb{V}_R(M). \end{equation*} \end{lemma} \begin{proposition} \label{surjective} Let $M$ be an \'{e}tale $\varphi_q$-module. Then $\varphi_q\otimes \varphi_M-id$ is a surjective homomorphism of abelian groups on $\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M$. \end{proposition} \begin{proof} First suppose $\mathfrak{m}_R^nM=0$ for some $n$. Then by Lemma \ref{finite-length M}, we have $\mathbb{V}_{LT}(M) = \mathbb{V}_R(M)$ as $\mathcal{O}_K$-modules. Now by using \cite{KR}, it follows that \begin{equation*} \widehat{\mathcal{O}^{ur}_R}\otimes_R\mathbb{V}_R(M)\rightarrow \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M \end{equation*} is an isomorphism. 
Moreover, this isomorphism respects the action of $\varphi_q\otimes\varphi_M$. Then by Lemma \ref{surjective V}, the map $\varphi_q\otimes\varphi_M-id$ is surjective on $\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M$ whenever $M$ is killed by a power of $\mathfrak{m}_R$; the general case follows by passing to the inverse limit over the quotients $M_n=M/\mathfrak{m}_R^nM$. \end{proof} \begin{proposition}\label{exact V} The functor $\mathbb{V}_R$ is exact. \end{proposition} \begin{proof} Let $0\rightarrow M \rightarrow M^{\prime} \rightarrow M^{\prime\prime} \rightarrow 0$ be an exact sequence of \'{e}tale $\varphi_q$-modules. Then we have the following commutative diagram with exact rows: \begin{center} \begin{tikzcd} 0\arrow{r} & \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}M \arrow{r} \arrow[swap]{d}{\varphi_q\otimes\varphi_M-id} & \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}M^{\prime} \arrow{r} \arrow[swap]{d}{\varphi_q\otimes\varphi_{M^\prime}-id} & \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}M^{\prime\prime} \arrow{r} \arrow[swap]{d}{\varphi_q\otimes\varphi_{M^{\prime \prime}}-id} & 0\\ 0 \arrow{r} & \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}M \arrow{r} & \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}M^{\prime} \arrow{r} & \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}M^{\prime\prime} \arrow{r} & 0. \end{tikzcd} \end{center} Applying the snake lemma, we get an exact sequence \begin{equation*} 0 \rightarrow \mathbb{V}_R(M)\rightarrow \mathbb{V}_R(M^{\prime})\rightarrow \mathbb{V}_R(M^{\prime\prime})\rightarrow \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}M/(\varphi_q\otimes\varphi_M-id)\rightarrow \cdots. 
\end{equation*} By Proposition \ref{surjective}, the map $\varphi_q\otimes\varphi_M-id$ is surjective on $\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}M$, so the last displayed term vanishes and the sequence \begin{equation*} 0 \rightarrow \mathbb{V}_R(M)\rightarrow \mathbb{V}_R(M^\prime)\rightarrow \mathbb{V}_R(M^{\prime\prime})\rightarrow 0 \end{equation*} is exact. Hence the functor $\mathbb{V}_R$ is exact. \end{proof} Moreover, for an \'{e}tale $\varphi_q$-module $M$, the module $\mathbb{V}_R(M)$ is finitely generated over $R$, and the homomorphism of $\widehat{\mathcal{O}^{ur}_R}$-modules \begin{equation*} \widehat{\mathcal{O}^{ur}_R}\otimes_R \mathbb{V}_R(M)\rightarrow \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M \end{equation*} is an isomorphism; the proof is similar to \cite[Proposition 2.1.26]{Dee}. The following theorem establishes the equivalence of categories between ${\bf Rep}_R(G_E)$ and ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\acute{e}t}.$ \begin{theorem}\label{Main6.1} The functor \begin{equation*} \mathbb{D}_R: {\bf Rep}_R(G_E) \rightarrow {\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\acute{e}t} \end{equation*} is an equivalence of categories with quasi-inverse functor \begin{equation*} \mathbb{V}_R: {\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\acute{e}t}\rightarrow {\bf Rep}_R(G_E). \end{equation*} \end{theorem} \begin{proof} It is enough to construct functorial isomorphisms \begin{equation*} \mathbb{V}_R(\mathbb{D}_R(V))\xrightarrow{\sim} V \quad \text{and} \quad\mathbb{D}_R(\mathbb{V}_R(M))\xrightarrow{\sim} M \end{equation*} for an $R$-representation $V$ of $G_E$ and an \'{e}tale $\varphi_q$-module $M$ over $\mathcal{O}_R$, respectively. Consider the isomorphism of $G_E$-modules \begin{equation*} \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}\mathbb{D}_R(V)\rightarrow \widehat{\mathcal{O}^{ur}_R}\otimes_R V. 
\end{equation*} Taking invariants under the Frobenius (which is $\varphi_q\otimes\varphi_{\mathbb{D}_R(V)}$ on the left-hand side and $\varphi_q\otimes id_V$ on the right-hand side), we obtain an isomorphism \begin{equation*} \mathbb{V}_R(\mathbb{D}_R(V))\rightarrow (\widehat{\mathcal{O}^{ur}_R}\otimes_R V)^{\varphi_q\otimes id_V=id}. \end{equation*} Note that $\varphi_q\otimes id_V$ fixes $V$, so there is a map \begin{equation*} V\rightarrow (\widehat{\mathcal{O}^{ur}_R}\otimes_R V)^{\varphi_q\otimes id_V=id}. \end{equation*} For modules of finite length, this map is an isomorphism by Theorem \ref{Kisin Ren}, and by taking inverse limits it is an isomorphism for general $V$. Hence \begin{equation*} \mathbb{V}_R(\mathbb{D}_R(V))\xrightarrow{\sim} V. \end{equation*} Similarly, the map \begin{equation*} M\rightarrow (\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M)^{G_E} \end{equation*} is an isomorphism. Moreover, we have an isomorphism \begin{equation*} \widehat{\mathcal{O}^{ur}_R}\otimes_R \mathbb{V}_R(M)\rightarrow \widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M. \end{equation*} Then taking $G_E$-invariants we have \begin{equation*} \mathbb{D}_R(\mathbb{V}_R(M))=(\widehat{\mathcal{O}^{ur}_R}\otimes_R \mathbb{V}_R(M))^{G_E}\xrightarrow{\sim} (\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R} M)^{G_E}. \end{equation*} Therefore \begin{equation*} \mathbb{D}_R(\mathbb{V}_R(M))\rightarrow M \end{equation*} is an isomorphism, and this proves the theorem. \end{proof} \begin{remark} The functors $\mathbb{D}_R$ and $\mathbb{V}_R$ are compatible with the tensor product. \end{remark} \subsubsection{The characteristic zero case}\label{sub6.2} Let $K$ be a local field of characteristic $0$. Recall that the ring $\mathcal{O}_{\mathcal{E}}$ is the $\pi$-adic completion of $\mathcal{O}_K[[X]][\frac{1}{X}]$, and $\mathcal{O}_{\mathcal{E}^{ur}}$ is the ring of integers of the maximal unramified extension of $\mathcal{E}$. The ring $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ is the $\pi$-adic completion of $\mathcal{O}_{\mathcal{E}^{ur}}$. 
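For the reader's convenience, we recall the standard explicit description of this $\pi$-adic completion (a well-known fact, recorded here only as a reminder and not specific to the present setting): an element of $\mathcal{O}_{\mathcal{E}}$ is a Laurent series whose coefficients tend to zero in the negative direction,
\begin{equation*}
\mathcal{O}_{\mathcal{E}} = \Big\{ \sum_{n\in\mathbb{Z}} a_nX^n \;:\; a_n\in\mathcal{O}_K \text{ for all } n, \text{ and } a_n\rightarrow 0\ \pi\text{-adically as } n\rightarrow -\infty \Big\}.
\end{equation*}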
Also, the Galois group $H_K=\operatorname{Gal}(\bar{K}/K_\infty)$ is identified with $G_E$, where $E$ is the field of norms of the extension $K_\infty/K$, a field of characteristic $p$. Let $V$ be an $R$-representation of $G_K$. Then \begin{equation*} \mathbb{D}_R(V):=(\widehat{\mathcal{O}^{ur}_R}\otimes_R V)^{H_K} = (\widehat{\mathcal{O}^{ur}_R}\otimes_{R}V)^{G_E} \end{equation*} is a $\varphi_q$-module over $\mathcal{O}_R$. The $G_K$-action on $\widehat{\mathcal{O}^{ur}_R}\otimes_{R}V$ induces a semi-linear action of $G_K/H_K = \Gamma_{LT} = \operatorname{Gal}(K_\infty/K)$ on $\mathbb{D}_R(V)$. \begin{definition} A \emph{$(\varphi_q,\Gamma_{LT})$-module} $M$ over $\mathcal{O}_R$ is a $\varphi_q$-module over $\mathcal{O}_R$ equipped with a continuous semi-linear action of $\Gamma_{LT}$ that commutes with the endomorphism $\varphi_M$ of $M$. A $(\varphi_q,\Gamma_{LT})$-module is \emph{\'{e}tale} if its underlying $\varphi_q$-module is \'{e}tale. \end{definition} We write ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\Gamma_{LT},\acute{e}t}$ for the category of \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules over $\mathcal{O}_R$. Then $\mathbb{D}_R$ is a functor from the category of $R$-representations of $G_K$ to the category of \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules over $\mathcal{O}_R$. If $M$ is an \'{e}tale $(\varphi_q,\Gamma_{LT})$-module over $\mathcal{O}_R$, then \begin{equation*} \mathbb{V}_R(M) = (\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}M)^{\varphi_q\otimes \varphi_M=id} \end{equation*} is an $R$-representation of $G_K$. The group $G_K$ acts on $\widehat{\mathcal{O}^{ur}_R}$ as before and acts via $\Gamma_{LT}$ on $M$. The $G_K$-action on $\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}M$ is $\varphi_q\otimes\varphi_M$-equivariant, and this induces a $G_K$-action on $\mathbb{V}_R(M)$. For any $V \in {\bf Rep}_R(G_K)$, there is a canonical $R$-linear homomorphism \begin{equation*} V\rightarrow \mathbb{V}_R(\mathbb{D}_R(V)) 
\end{equation*} of representations of $G_K$. By Theorem \ref{Main6.1}, this map is an isomorphism after restriction to $H_K$, so it is an isomorphism of $G_K$-representations. Similarly, for an \'{e}tale $(\varphi_q,\Gamma_{LT})$-module $M$, the canonical homomorphism of \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules \begin{equation*} M\rightarrow \mathbb{D}_R(\mathbb{V}_R(M)) \end{equation*} is an isomorphism: by Theorem \ref{Main6.1}, the underlying map of $\varphi_q$-modules is an isomorphism. This proves the following theorem. \begin{theorem} \label{Main6.2} The functor $\mathbb{D}_R$ is an equivalence of categories between the category ${\bf Rep}_R(G_K)$ of $R$-representations of $G_K$ and the category ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\Gamma_{LT},\acute{e}t}$ of \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules, with quasi-inverse functor $\mathbb{V}_R$. \end{theorem} Next, we extend the functor $\mathbb{D}_R$ to the category ${\bf Rep}_{\mathfrak{m}_R-tor}^{dis}(G_K)$ of discrete $\mathfrak{m}_R$-primary abelian groups with a continuous and linear action of $G_K$. Any object in ${\bf Rep}_{\mathfrak{m}_R-tor}^{dis}(G_K)$ is a filtered direct limit of $\mathfrak{m}_R$-power torsion objects in ${\bf Rep}_R(G_K)$. For any $V\in {\bf Rep}_{\mathfrak{m}_R-tor}^{dis}(G_K)$, define \begin{equation*} \mathbb{D}_R(V)=(\widehat{\mathcal{O}^{ur}_R}\otimes_R V)^{H_K}. \end{equation*} Note that the functor $\mathbb{D}_R$ commutes with direct limits, since the tensor product and taking $H_K$-invariants commute with direct limits. Then $\mathbb{D}_R(V)$ is an object of the category $\varinjlim {\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}$ of direct limits of $\mathfrak{m}_R$-power torsion objects in ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\Gamma_{LT},\acute{e}t}$. 
For any $M\in \varinjlim {\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}$, put \begin{equation*} \mathbb{V}_R(M) = (\widehat{\mathcal{O}^{ur}_R}\otimes_{\mathcal{O}_R}M)^{\varphi_q\otimes \varphi_M=id}. \end{equation*} Then the functor $\mathbb{V}_R$ also commutes with direct limits, and we have the following result. \begin{proposition}\label{7.19} The functors $\mathbb{D}_R$ and $\mathbb{V}_R$ are quasi-inverse equivalences of categories between the category ${\bf Rep}_{\mathfrak{m}_R-tor}^{dis}(G_K)$ and $\varinjlim {\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor}$. \end{proposition} \begin{proof} Since the functors $\mathbb{D}_R$ and $\mathbb{V}_R$ commute with direct limits, the proposition follows from Theorem \ref{Main6.2} by taking direct limits. \end{proof} \section{Galois Cohomology over the Coefficient Ring} \label{sec7} By Theorem \ref{Main6.2} and Proposition \ref{7.19}, the functor $\mathbb{D}_R$ is an equivalence of categories between ${\bf Rep}_R(G_K)\ (\text{resp.,}\ {\bf Rep}_{\mathfrak{m}_R-tor}^{dis}(G_K))$ and ${\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\Gamma_{LT},\acute{e}t}$ $(\text{resp.,}\ \varinjlim {\bf Mod}_{/\mathcal{O}_R}^{\varphi_q,\Gamma_{LT},\acute{e}t,tor})$, with quasi-inverse $\mathbb{V}_R$. The following theorem is a generalization of Theorem \ref{lattices} over the coefficient rings. \begin{theorem} \label{Main7} Let $V$ be an $R$-representation of $G_K$. Then there is a natural isomorphism \begin{equation*} H^i(G_K,V)\cong \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{R}(V)))\quad \text{for} \ i\geq 0. \end{equation*} \end{theorem} \begin{proof} First assume that $\mathfrak{m}_R^n V= 0$ for some $n\in \mathbb{N}$. Then by Lemma \ref{finite length} and Theorem \ref{lattices}, we have \begin{equation*} H^i(G_K,V)\cong \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{R}(V))). 
\end{equation*} Next, it follows from \cite[Theorem 2.1]{Ta1} and \cite[Corollary 2.2]{Tate} that the functor $H^i(G_K,-)$ commutes with inverse limits. Moreover, we have $\mathbb{D}_R(V)\xrightarrow{\sim} \varprojlim \mathbb{D}_R(V_n)$. Observe that the modules $\mathbb{D}_R(V_n)$ are finitely generated over the Artinian ring $\mathcal{O}_R/\mathfrak{m}_R^n\mathcal{O}_R$, so the inverse limit functor is exact on the category of $\mathfrak{m}_R$-power torsion \'{e}tale $(\varphi_q,\Gamma_{LT})$-modules over $\mathcal{O}_R$. Then we have \begin{equation*} \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{R}(V)))=\varprojlim \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_{R}(V_n))). \end{equation*} Hence the general case follows by passing to the inverse limits. \end{proof} Next, in order to generalize Theorem \ref{Main5} to the case of coefficient rings, we first extend the operator $\psi:= \psi_{\mathbb{D}_{LT}(V)}$ to $\mathbb{D}_R(V)$. As $\psi_q$ maps $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ to $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$, we extend $\psi_q$ to $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes_{\mathcal{O}_K} R$ by letting it act trivially on $R$. It maps $\mathfrak{m}_{\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}}\otimes R+ \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\otimes \mathfrak{m}_R$ to itself, inducing an $R$-linear map \begin{equation*} \psi_q: \widehat{\mathcal{O}^{ur}_R}\rightarrow \widehat{\mathcal{O}^{ur}_R}. \end{equation*} Moreover, since $\psi_q$ is Galois-equivariant, letting it act on $\widehat{\mathcal{O}^{ur}_R}\otimes_{R} V$ via its action on $\widehat{\mathcal{O}^{ur}_R}$ yields an operator $\psi_{\mathbb{D}_R(V)}$ on $\mathbb{D}_R(V)$. \begin{theorem}\label{Main7.2} Let $V \in {\bf Rep}_{\mathfrak{m}_R-tor}^{dis}(G_K)$. 
Then we have a well-defined homomorphism \begin{equation*} \mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_R(V)))\rightarrow \mathcal{H}^i(\Psi\Gamma_{LT}^{\bullet}(\mathbb{D}_R(V)))\quad \text{for}\ i\geq 0. \end{equation*} Moreover, the homomorphism $\mathcal{H}^0(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_R(V)))\rightarrow \mathcal{H}^0(\Psi\Gamma_{LT}^{\bullet}(\mathbb{D}_R(V)))$ is injective.\end{theorem} \begin{proof} If $V$ is a finite abelian group killed by a power of $\mathfrak{m}_R$, then the theorem follows from Lemma \ref{finite length} and Theorem \ref{Main5}. Also, the functors $\mathcal{H}^i(\Phi\Gamma_{LT}^{\bullet}(\mathbb{D}_R(-)))$ and $\mathcal{H}^i(\Psi\Gamma_{LT}^{\bullet}(\mathbb{D}_R(-)))$ commute with inverse limits. Hence the result follows for general $V$ by passing to the inverse limits. \end{proof} The following theorem is a generalization of \cite[Theorem 5.13]{SV} to the case of coefficient rings. It is possible that this approach leads to the construction of a Perrin-Riou homomorphism for Galois representations defined over the coefficient ring $R$. \begin{theorem}\label{Main7.3} Let $V \in {\bf Rep}_R(G_K)$. Then we have \begin{equation*} H^i_{Iw}({K_{\infty}/K},V)\cong \mathcal{H}^{i-1}(\underline{\Psi}^{\bullet}(\mathbb{D}_{R}(V(\chi_{cyc}^{-1}\chi_{LT})))) \quad \text{for}\ i\geq 1. \end{equation*} \end{theorem} \begin{proof} Suppose that $V$ has finite length. Then $\mathbb{D}_R(V) = \mathbb{D}_{LT}(V)$ as an $\mathcal{O}_K$-module, so $\psi_{\mathbb{D}_R(V)}$ agrees with $\psi_{\mathbb{D}_{LT}(V)}$. Now by Corollary \ref{Iwasawa cor.}, we have \begin{equation*} H^i_{Iw}({K_{\infty}/K},V)\cong \mathcal{H}^{i-1}(\underline{\Psi}^{\bullet}(\mathbb{D}_{R}(V(\chi_{cyc}^{-1}\chi_{LT})))) \quad \text{for}\ i\geq1. \end{equation*} Moreover, it follows from Lemma \ref{commutes inverse} that the functor $H^i_{Iw}(K_{\infty}/K,-)$ also commutes with inverse limits. Then by passing to the inverse limits, we deduce the theorem for general $V$. 
\end{proof} \begin{remark} It is possible to extend Theorem \ref{False-Tate equivalence} to the case of coefficient rings and, using this, one can prove that for any $V \in {\bf Rep}_R(G_K)$, \begin{equation*} H^i(G_K,V)\cong \mathcal{H}^i(\Phi\Gamma_{LT,FT}^{\bullet}(\mathbb{D}_{R}(V))). \end{equation*} This gives a generalization of Theorem \ref{Main4} over the coefficient rings. We can also generalize Theorem \ref{Theorem False Tate} to the case of coefficient rings. \end{remark} By Theorem \ref{Iwasawa cohomology}, we have \begin{equation*} H^1_{Iw}({K_{\infty}/K},\mathcal{O}_K(\chi_{cyc}\chi_{LT}^{-1}))\cong \mathbb{D}_{LT}(\mathcal{O}_K)^{\psi_{\mathbb{D}_{LT}(\mathcal{O}_K)}=id}. \end{equation*} The map \begin{equation*} \operatorname{Exp}^*:H^1_{Iw}({K_{\infty}/K},\mathcal{O}_K(\chi_{cyc}\chi_{LT}^{-1}))\rightarrow \mathbb{D}_{LT}(\mathcal{O}_K)^{\psi_{\mathbb{D}_{LT}(\mathcal{O}_K)}=id} \end{equation*} is called the \emph{dual exponential map}. Dual exponential maps occur in the construction of the Coates-Wiles homomorphisms; for more details, see \cite{SV}. We generalize the dual exponential map to the coefficient ring in order to investigate whether the Coates-Wiles homomorphisms extend to Galois representations defined over $R$. \begin{theorem}\label{commutative} Let $V\in {\bf Rep}_{R}(G_K)$. Then we have the following commutative diagram: \begin{center} \begin{tikzcd} H^i_{Iw}({K_{\infty}/K},V) \arrow{r}{\cong} \arrow{d} & \mathcal{H}^{i-1}(\underline{\Psi}^{\bullet}(\mathbb{D}_{R}(V(\chi_{cyc}^{-1}\chi_{LT})))) \arrow{d} \\ H^i_{Iw}({K_{\infty}/K},V\otimes_{R}\mathcal{O}_K)\arrow{r}[swap]{\cong}& \mathcal{H}^{i-1}(\underline{\Psi}^{\bullet}(\mathbb{D}_{LT}(V\otimes_{R}\mathcal{O}_K(\chi_{cyc}^{-1}\chi_{LT})))). \end{tikzcd} \end{center} \end{theorem} \begin{proof} Note that the ring homomorphism $R\rightarrow \mathcal{O}_K$ induces a map $V\rightarrow V\otimes_{R}\mathcal{O}_K$. 
Moreover, $H^i_{Iw}(K_\infty/K,V)= H^i(G_K,R[[\Gamma_{LT}]]\otimes_{R}V)$. Therefore, we have a well-defined map $H^i_{Iw}(K_\infty/K, V)\rightarrow H^i_{Iw}({K_{\infty}/K},V\otimes_{R}\mathcal{O}_K)$. Similarly, the map $V\rightarrow V\otimes_{R}\mathcal{O}_K$ defines a map $\mathbb{D}_R(V)\rightarrow \mathbb{D}_{LT}(V\otimes_{R}\mathcal{O}_K)$, and this induces a well-defined map $\mathcal{H}^{i-1}(\underline{\Psi}^{\bullet}(\mathbb{D}_{R}(V(\chi_{cyc}^{-1}\chi_{LT}))))\rightarrow\mathcal{H}^{i-1}(\underline{\Psi}^{\bullet}(\mathbb{D}_{LT}(V\otimes_{R}\mathcal{O}_K(\chi_{cyc}^{-1}\chi_{LT})))).$ Then the result follows from Theorem \ref{Iwasawa cohomology} and Theorem \ref{Main7.3}. \end{proof} Next, we generalize the dual exponential map over coefficient rings. \begin{corollary}\label{dual exp.} There is a dual exponential map $\operatorname{Exp}_R^*:H^1_{Iw}({K_{\infty}/K},R(\chi_{cyc}\chi_{LT}^{-1}))\xrightarrow{\sim}\mathcal{O}_R^{\psi_R=id}$ over $R$, and the diagram \begin{center} \begin{tikzcd} H^1_{Iw}({K_{\infty}/K},R(\chi_{cyc}\chi_{LT}^{-1})) \arrow{r}{\operatorname{Exp}_R^*} \arrow{d} & \mathcal{O}_R^{\psi_R=id} \arrow{d} \\ H^1_{Iw}({K_{\infty}/K},\mathcal{O}_K(\chi_{cyc}\chi_{LT}^{-1}))\arrow{r}[swap]{\operatorname{Exp}^*}& \mathcal{O}_\mathcal{E}^{\psi=id}, \end{tikzcd} \end{center} where $\psi_R= \psi_{\mathbb{D}_R(R)}$ and $\psi= \psi_{\mathbb{D}_{LT}(\mathcal{O}_K)}$, commutes. \end{corollary} \begin{proof} Since $R(\chi_{cyc}\chi_{LT}^{-1})$ is an $R$-representation of $G_K$ and the twists cancel, i.e., $R(\chi_{cyc}\chi_{LT}^{-1})(\chi_{cyc}^{-1}\chi_{LT})=R$, Theorem \ref{Main7.3} gives \begin{align*} H^1_{Iw}({K_{\infty}/K},R(\chi_{cyc}\chi_{LT}^{-1}))&\cong \mathcal{H}^0(\underline{\Psi}^{\bullet}(\mathbb{D}_{R}(R(\chi_{cyc}\chi_{LT}^{-1})(\chi_{cyc}^{-1}\chi_{LT})))\\&\cong\mathcal{O}_R^{\psi_R=id}. 
\end{align*} Also, by Theorem \ref{Iwasawa cohomology}, we have \begin{align*} H^1_{Iw}({K_{\infty}/K},R(\chi_{cyc}\chi_{LT}^{-1})\otimes_{R}\mathcal{O}_K)&\cong H^1_{Iw}({K_{\infty}/K},\mathcal{O}_K(\chi_{cyc}\chi_{LT}^{-1}))\\& \cong \mathcal{H}^0(\underline{\Psi}^{\bullet}(\mathbb{D}_{LT}(\mathcal{O}_K(\chi_{cyc}\chi_{LT}^{-1})(\chi_{cyc}^{-1}\chi_{LT})))\\&\cong\mathcal{O}_\mathcal{E}^{\psi=id}. \end{align*} Now, the result follows from Theorem \ref{commutative} by putting $i=1$ and $V= R(\chi_{cyc}\chi_{LT}^{-1})$. \end{proof}
\section{Introduction} By referring to \textsc{Cauchy} \cite[(1841)]{cauchy1841}, \textsc{Peano} introduces in \emph{Applicazioni geometriche del calcolo infinitesimale} \cite[(1887)]{peano87} the concept of \emph{strict derivative} of set functions. The set functions considered by him are not precisely finitely additive measures. The modern concept of finite additivity is based on partitions by disjoint sets, while \textsc{Peano}'s additivity property coincides with a traditional, more flexible concept of ``decompositions of magnitudes'', which \textsc{Peano} implements in his proofs as \emph{distributive set functions}. In contrast to \textsc{Peano}'s strict derivative (\emph{rapporto}), \textsc{Cauchy}'s derivative (\emph{rapport diff\'erentiel}) of a set function corresponds to the usual derivative of functions of one variable. In \textsc{Peano}'s Theorem \ref{peanodev} on the strict derivative of distributive set functions, the (physical) \emph{mass-density paradigm} is realized: the ``mass'' (a distributive set function) is recovered from the ``density'' (strict derivative) by integration with respect to the ``volume'' (a positive distributive set function of reference). 
\textsc{Peano} expresses \textsc{Cauchy}'s ideas in a more precise and modern language and completes the program proposed by \textsc{Cauchy}, who, at the end of his article \cite[(1841) p.\,229]{cauchy1841}, writes: \begin{quote} Dans un autre M\'emoire nous donnerons de nouveaux d\'eveloppements aux principes ci-dessus expos\'es [on coexistent magnitudes], en les appliquant d'une mani\`ere sp\'eciale \`a l'\'evaluation des longueurs, des aires et des volumes.\,\footnote{\translation{% In another memoir we will give new developments to the above mentioned statements [on coexistent magnitudes], and we will apply them to evaluate lengths, areas and volumes.}} \end{quote} Among the numerous applications of \textsc{Peano}'s strict derivatives of set functions which can be found in \emph{Applicazioni geometriche}, there are formulae on oriented integrals, in which the geometric vector calculus of \textsc{Grassmann} plays an important role. For instance, \textsc{Peano} proves the area formula starting from his definition of the area of a surface, which he proposed in order to overcome the drawbacks of \textsc{Serret}'s definition of area \cite[(1879)]{serret}. The didactic value of \textsc{Peano}'s strict derivative of set functions is evident: in \emph{La mesure des grandeurs} \cite[(1935)]{lebesgue1935} \textsc{Lebesgue} himself uses a similar approach to differentiation of measures in order to simplify the exposition of his measure theory. In Section \ref{sez-paradigma}, \textsc{Peano}'s and \textsc{Lebesgue}'s derivatives are compared in view of the paradigm of mass-density and of the paradigm of primitives, which motivated mathematical research between the $19^{th}$ century and the beginning of the $20^{th}$ century. In the celebrated paper \emph{L'int\'egration des fonctions discontinues} \cite[(1910)]{lebesgue1910} \textsc{Lebesgue} defines a derivative of $\sigma$-additive measures with respect to the volume. He proves its existence and its measurability. 
In the case of absolute continuity of the $\sigma$-additive measures, \textsc{Lebesgue} proves that the measure is given by the integral of his derivative with respect to the volume. As will be seen later in detail, \textsc{Peano}'s \emph{strict derivative} of distributive set functions does not necessarily exist and, moreover, whenever it exists, \textsc{Peano}'s strict derivative is continuous, while \textsc{Lebesgue}'s derivative in general is not. Section \ref{sez-misura} presents an overview of \textsc{Peano}'s work on pre-Lebesgue classical measure theory, which is completed in Sections \ref{sez-distributive}-\ref{sez-derivata}. Section \ref{sez-cauchy} is devoted to an analysis of \textsc{Cauchy}'s \emph{Coexistent magnitudes} \cite[(1841)]{cauchy1841}\,% \footnote{From now on we refer to \textsc{Cauchy}'s paper \emph{M{\'e}moire sur le rapport diff{\'e}rentiel de deux grandeurs qui varient simultan{\'e}ment} \cite[(1841)]{cauchy1841} as \emph{Coexistent magnitudes}. }, emphasizing the results that will be found, in a different language, in \textsc{Peano}'s \emph{Applicazioni geometriche} or in \textsc{Lebesgue}'s \emph{La mesure des grandeurs}. Section \ref{sez-distributive} concerns the concepts of ``distributive families'' and of ``distributive set functions'' as presented by \textsc{Peano} in \emph{Applicazioni geometriche} and in his paper \emph{Le grandezze coesistenti di Cauchy} \cite[(1915)]{peano1915}. Section \ref{sez-derivata} presents a definition of the strict derivative of set functions, the main results and some applications, while in Section \ref{sez-massdensity} we discuss \textsc{Peano}'s definition of the integral of set functions and a related theorem that realizes the mentioned physical paradigm of mass-density. Section \ref{sez-comments} presents the approach of \textsc{Lebesgue} in \emph{La mesure des grandeurs} to \textsc{Cauchy}'s coexistent magnitudes, leading to the introduction of a new notion of derivative: the uniform-derivative. 
We observe that this paper is mainly historical. From a methodological point of view, we focus on primary sources, that is, on mathematical facts and not on the elaborations or interpretations of these facts by other scholars of the history of mathematics. For the convenience of the reader, original statements and, in some cases, terminology are presented in a modern form, preserving, of course, their content. Historical investigations on forgotten mathematical achievements are not useless (from the point of view of mathematics), because some of them carry ideas that remain innovative today. This thought was very well expressed by \textsc{Mascheroni} before the beginning of the study of the geometrical problems leading to the \emph{Geo\-me\-tria del compasso} (1797): \begin{quote} [\dots] mentre si trovano tante cose nuove progredendo nelle matema\-tiche, non si potrebbe forse trovare qualche luogo ancora incognito retrocedendo?\,\footnote{\translation{While we can find so many new things by moving forward in mathematics, could we not perhaps find some still unknown place by moving backwards?}} \end{quote} Out of respect for historical sources and for the reader's convenience, the quotations in the sequel will appear in the original tongue with a translation in square brackets, placed in a footnote. \section{The physical paradigm of mass-density \\versus the paradigm of primitives}\label{sez-paradigma} In {\it Philosophiae Naturalis Principia Mathematica} (1687) the first definition concerns mass and density: \begin{quote} Quantitas materiae est mensura ejusdem orta ex illius densitate et magni\-tudine conjunctim [\dots]. Hanc autem quantitatem sub nomine corporis vel massae in sequentibus passim intelligo.\,\footnote{\translation{The quantity of matter is a measure of the matter itself, arising from its density and magnitude conjunctly [\dots]. 
It is this quantity that I mean hereafter everywhere under the name of body or mass.}} \end{quote} In this sentence \textsc{Newton} presents the \emph{mass-density paradigm} (i.e., the mass can be computed in terms of the density and, conversely, the density can be obtained from the mass) as a foundation of Physics. In \emph{Coexistent magnitudes} \cite[(1841)]{cauchy1841} \textsc{Cauchy}, with a clear didactic aim, uses the mass-density paradigm in order to give a unitary exposition of several problems related to differential calculus. From a mathematical point of view the implementation of this physical paradigm presents some difficulties and does not assure a unique answer. The first difficulty lies in defining what a ``mass'' is, the second in choosing a procedure for evaluating the ``density'' and, finally, in determining under what conditions and how it is possible ``to recover'' the mass from the density. All these critical aspects, which we find in \textsc{Cauchy} \cite[(1841)]{cauchy1841}, are overcome in a precise and clear way by \textsc{Peano} in \emph{Applicazioni geometriche} \cite[(1887)]{peano87}. 
Natural properties that connect density and mass are the following: \begin{enumerate} \item \emph{The density of a homogeneous body is constant.}\labelpag{hom} \item \emph{The greater the density, the greater the mass.} \labelpag{gre} \item \emph{The mass of a body, as well as its volume, is the sum of those of its parts.}\labelpag{sum} \end{enumerate} The realization of the physical paradigm can be mathematically expressed by the following formula \begin{equation}\label{mass-volume} \mu(A)=\int_A g \,{\mathrm{d}}({\rm vol}_n) \end{equation} where $\mu$ is the ``mass'', $g$ is the ``density'' and ${\rm vol}_n$ is the $n$-dimensional volume.\,\footnote{In today's terminology, the realization of (\ref{mass-volume}) is expressed by saying that $g$ is the \emph{Radon-Nikodym derivative} of $\mu$ with respect to ${\rm vol}_n$.} Properties \eqref{hom}, \eqref{gre} and \eqref{sum} do not allow for a direct derivation of \eqref{mass-volume} without further conditions depending on the meaning of the integral; for instance, having in mind the Riemann integral, an obvious necessary condition is the Riemann integrability of the density $g$. In \textsc{Peano}'s \emph{Applicazioni Geometriche} \cite[(1887)]{peano87}: \begin{itemize} \item the ``masses'' and the ``volumes'' are represented by \emph{distributive set functions}, as will be shown in detail in \S \ref{sez-distributive}, \item the ``densities'' (strict derivatives) are computed using a limit procedure, as we shall see in the sequel (see formula \eqref{der-f-peano}), \item the ``mass'' is recovered by integration using \eqref{mass-volume}. This final step is strengthened by the fact that \textsc{Peano}'s strict derivative is continuous. \end{itemize} The mathematical realization of the mass-density paradigm is directly connected with the mathematical paradigm of primitives, that is, with the study of conditions assuring that integration is the inverse operation of differentiation. 
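The role played by the notion of integral in \eqref{mass-volume} can be illustrated by a minimal example (ours, stated in modern terms): let $g$ be the Dirichlet function on $[0,1]$,
\begin{equation*}
g(x):=\begin{cases} 1 & \text{if } x\in{\mathbb{Q}}\cap[0,1],\\ 0 & \text{otherwise.}\end{cases}
\end{equation*}
The function $g$ is bounded but not Riemann integrable, so no ``mass'' can be associated to this ``density'' through \eqref{mass-volume} if the integral is understood in the sense of Riemann, while the Lebesgue integral yields $\mu(A)=\int_A g \,{\mathrm{d}}({\rm vol}_1)=0$ for every measurable $A$.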
At the beginning of the $20^{th}$ century the problem of looking for primitives is the cornerstone of the new theory of measure, founded by \textsc{Lebesgue} \cite[(1904)]{lebesgue1904}. The problem of primitives becomes arduous when one has to pass from functions of one variable to functions of several variables. \textsc{Lebesgue} in \emph{L'int\'egration des fonctions discontinues} \cite[(1910)]{lebesgue1910} overcomes these difficulties by replacing the integral of a generic function $g$ with a set function $\mu$ described by formula \eqref{mass-volume}. The paradigm of primitives gives more importance to the operations (of differentiation and integration) than to the set functions. On the contrary, in the mass-density paradigm the primary aim is the evaluation of the infinitesimal ratio between two set functions (for instance, mass and volume) in order to recover the ``mass'' by integrating the ``density'' with respect to the ``volume''. On the other hand, in the paradigm of primitives the main problem is an extension of the notion of integral in order to describe a primitive of a given function and, consequently, to preserve the fundamental theorem of calculus. In \textsc{Lebesgue}'s works the two paradigms appear simultaneously for the first time in the second edition of his famous book {\em Le{\c c}ons sur l'int{\'e}gration et la recherche des fonctions primitives} \cite[(1928) pp.\,196-198]{lebesgue1928}. In 1921 (see \cite[vol.\,I, p.\,177]{lebesgue_opere}) \textsc{Lebesgue} had already used some physical concepts in order to make the notion of set function intuitive; analogously, in \cite[(1926)]{lebesgue1926} and \cite[(1928) pp.\,290-296]{lebesgue1928} he uses the mass-density paradigm in order to make the operations of differentiation and integration more natural. 
In his lectures \emph{Sur la mesure des grandeurs} \cite[(1935)]{lebesgue1935}, the physical paradigm leads \textsc{Lebesgue} to an alternative definition of derivative: he replaces his derivative of 1910 with the new uniform-derivative (equivalent to the strict derivative introduced by \textsc{Peano}), thus obtaining continuity of the derivative. Before comparing \textsc{Peano}'s and \textsc{Lebesgue}'s derivatives of set functions, we recall the definitions of derivative given by \textsc{Peano} and \textsc{Cauchy}. \textsc{Peano}'s strict derivative of a set function (for instance, the ``density'' of a ``mass'' $\mu$ with respect to the ``volume'') at a point $\bar x$ is computed, when it exists, as the limit of the ratio of the ``mass'' to the ``volume'' of a cube $Q$, when the supremum of the distances of the points of the cube from $\bar x$ tends to $0$ (in symbols $Q\to\bar x$). In formula, \textsc{Peano}'s strict derivative $g_{P}(\bar x)$ of a mass $\mu$ at $\bar x$ is given by: \begin{equation}\label{der-f-peano} g_P(\bar x):=\lim_{Q\to \bar x}\frac{\mu (Q)}{{\mathrm{vol}}_n(Q)} \,\,\,.\end{equation} Every limit procedure of a quotient of the form $\frac{\mu (Q)}{{\mathrm{vol}}_n(Q)}$, with $Q\to\bar x$ and the point $\bar x$ not necessarily belonging to $Q$, will be referred to as a \emph{derivative \`a la Peano}. On the other hand, \textsc{Cauchy}'s derivative \cite[(1841)]{cauchy1841} is obtained as the limit of the ratio between the ``mass'' and the ``volume'' of a cube $Q$ {\it including} the point $\bar x$, when $Q\to\bar x$. In formula, \textsc{Cauchy}'s derivative $g_{C}(\bar x)$ of a mass $\mu$ at $\bar x$ is given by: \begin{equation}\label{der-f-lebesgue} g_C(\bar x):=\lim_{\begin{subarray}{c}Q\to \bar x\\ \bar x\in Q\end{subarray}}\frac{\mu (Q)}{{\mathrm{vol}}_n(Q)} \,\,\,. 
\end{equation} Every limit procedure of a quotient of the form $\frac{\mu (Q)}{{\mathrm{vol}}_n(Q)}$, with $Q\to\bar x$ and the point $\bar x$ belonging to $Q$, will be referred to as a \emph{derivative \`a la Cauchy}. \textsc{Lebesgue}'s derivative of set functions is computed \emph{\`a la Cauchy}. Notice that \textsc{Lebesgue} considers finite $\sigma$-additive and absolutely continuous measures as ``masses'', while \textsc{Peano} considers distributive set functions. \textsc{Lebesgue}'s derivative exists (i.e., the limit \eqref{der-f-lebesgue} exists for \emph{almost every} $\bar x$), is measurable, and the reconstruction of a ``mass'' as the integral of the derivative is assured by the absolute continuity of the ``mass'' with respect to the volume. On the contrary, \textsc{Peano}'s strict derivative does not necessarily exist, but, when it exists, it is continuous and the mass-density paradigm holds.\,\footnote{Clearly, if \textsc{Peano}'s strict derivative of a finite $\sigma$-additive measure exists, then it coincides with \textsc{Lebesgue}'s derivative and the ``mass'' is absolutely continuous. Nowadays it is not surprising that \textsc{Lebesgue}'s derivative can be seen as \textsc{Peano}'s strict derivative by lifting measures on a $\sigma$-algebra ${\mathcal{A}}$ and ${\mathcal{A}}$-measurable functions to measures on the Stone space associated to ${\mathcal{A}}$ and the related continuous functions, respectively.} The constructive approaches to the differentiation of set functions corresponding to the two limits \eqref{der-f-peano} and \eqref{der-f-lebesgue} are opposed to the approach given by \textsc{Radon} \cite[(1913)]{radon1913} and \textsc{Nikodym} \cite[(1930)]{nikodym1930}, who define the derivative in a more abstract and wider context than those of \textsc{Lebesgue} and \textsc{Peano}. As in the case of \textsc{Lebesgue}, a Radon-Nikodym derivative exists; its existence is assured by assuming absolute continuity and $\sigma$-additivity of the measures. 
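The difference between the two derivatives can be illustrated by a simple one-dimensional example (ours, not taken from the sources): let $\mu(A):=\int_A \chi_{[0,+\infty)} \,{\mathrm{d}}({\rm vol}_1)$, a ``mass'' with a Heaviside ``density''. At every $\bar x\neq 0$ the limit \eqref{der-f-peano} exists and equals $\chi_{[0,+\infty)}(\bar x)$, whereas at $\bar x=0$ the quotient
\begin{equation*}
\frac{\mu(Q)}{{\rm vol}_1(Q)} \quad\text{equals}\quad 0 \text{ for } Q=[-\varepsilon,0] \quad\text{and}\quad 1 \text{ for } Q=[0,\varepsilon],
\end{equation*}
so that neither \eqref{der-f-peano} nor \eqref{der-f-lebesgue} exists at $0$. Nonetheless, in accordance with \textsc{Lebesgue}'s theorem, his derivative exists almost everywhere and coincides with the density.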
In concluding this Section, let us remark that the physical properties \eqref{hom}, \eqref{gre} and \eqref{sum}, which stand at the basis of the mass-density paradigm, lead to the following direct characterization of the Radon-Nikodym derivative. Let $\mu$ and $\nu$ be finite $\sigma$-additive measures on a $\sigma$-algebra ${\mathcal{A}}$ of subsets of $X$, and let $\nu$ be positive and $\mu$ absolutely continuous with respect to $\nu$. A function $g:X\to{\mathbb{R}}$ is a \emph{Radon-Nikodym derivative} of $\mu$ with respect to $\nu$ (i.e., $\mu (A)=\int_A g\,{\mathrm{d}}\nu$ for every $A\in{\mathcal{A}}$) if and only if the following two properties hold for every real number $a$: \begin{enumerate} \item $\mu(A)\geq a\,\nu(A)$ for every $A\subset \{g\geq a\}$ with $A\in{\mathcal{A}}$, \labelpag{Hahn_1} \item $\mu(A)\leq a\,\nu(A)$ for every $A\subset \{g\leq a\}$ with $A\in{\mathcal{A}}$, \labelpag{Hahn_2} \end{enumerate} where $\{g\le a\}:=\{x\in X: g(x)\le a\}$ and, dually, $\{g\ge a\}:=\{x\in X: g(x)\ge a\}$. Properties \eqref{Hahn_1} and \eqref{Hahn_2}, expressed by \textsc{Nikodym} \cite[(1930)]{nikodym1930} in terms of the Hahn decomposition of measures, are a natural translation of properties \eqref{hom}, \eqref{gre} and \eqref{sum}. \section{Peano on (pre-Lebesgue) classical measure theory}\label{sez-misura} The interest of \textsc{Peano} in measure theory is rooted in his criticism of the \emph{definition of area} (1882), of the \emph{definition of integral} (1883) and of the \emph{definition of derivative} (1884). This criticism leads him to an innovative measure theory, which is extensively expounded in Chapter V of \emph{Applicazioni geometriche} \cite[(1887)]{peano87}. 
The definition of area given by \textsc{Serret} in \cite[(1879)]{serret} contrasted with the traditional definition of area: in 1882 \textsc{Peano}, independently of \textsc{Schwarz}, observed (see \cite[(1890)]{peano_area1890}) that the area of a cylindrical surface cannot be evaluated as the limit of inscribed polyhedral surfaces, as prescribed by \textsc{Serret}'s definition. In \emph{Applicazioni geometriche}, \textsc{Peano} provides a consistent definition of area and proves the integral formula of area.\,\footnote{This topic will be extensively analyzed in a forthcoming paper by \textsc{Greco, Mazzucchi, Pagani}~\cite{gremazpagAREA}.} \textsc{Peano}'s criticism of the definition of the Riemann integral of a function and of its relation with the area of the \emph{ordinate-set} (i.e., the hypograph of the function) \cite[(1883)]{peano1883}, forces him to introduce outer/inner measure as the set-theoretic counterparts of the upper/lower integral: he defines the latter in terms of the infimum/supremum (instead of limits, as done traditionally) of the Darboux sums.\,\footnote{According to \textsc{Letta} \cite{letta}, the notion of negligible set was introduced after an arduous process of investigation of ``similar'' notions related to cardinality and topology, between 1870 and 1882. Afterwards, the definition of {\it Inhalt} (content) appears in the works of \textsc{Stolz} \cite[(1884)]{stolz1884}, \textsc{Cantor} \cite[(1884)]{cantor1884} and \textsc{Harnack} \cite[(1885)]{harnack1885}. The notions of inner and outer measure are introduced by \textsc{Peano} in \cite[(1883) p.\,446]{peano1883} and in \cite[(1887)]{peano87}, and later by \textsc{Jordan} \cite[(1892)]{jordan1892}. In the following we will refer to the inner and outer measures as \emph{Peano-Jordan measures}. 
} \textsc{Peano}, in introducing the inner and outer measure as well as in defining area \cite[(1890)]{peano_area1890}, is also influenced by \textsc{Archimedes}'s approach to the calculus of area, length and volume of convex figures. In 1884, by analyzing the proof of the mean value theorem given by \textsc{Jordan}\,\footnote{\textsc{Jordan}, a famous geometer and algebraist, published only a few papers on mathematical analysis. His most famous work is the {\it Cours d'analyse}, published in several editions. To our knowledge the relationship between \textsc{Peano} and \textsc{Jordan} was good and based on reciprocal appreciation, as one can deduce from two letters conserved in the Archives de la Biblioth\`eque Centrale de l'Ecole Polytechnique (Paris).} in the first edition of the \emph{Cours d'analyse}, \textsc{Peano} stresses the difference between differentiable functions and functions with continuous derivative. The continuity of the derivative is expressed by \textsc{Peano} in terms of the existence of the limit \begin{equation}\label{str-diff}\lim_{\begin{subarray}{c}x,y\to \bar x \\ x\neq y\end{subarray}}\frac{f(x)-f(y)}{x-y}\end{equation} for any $\bar x$ in the domain of $f$.\,\footnote{ Later, in a paper with didactic value \cite[(1892)]{peano_der1892}, \textsc{Peano} re-proposes the distinction between the definition \eqref{str-diff} and the usual derivative of a function, and underlines the correspondence of \eqref{str-diff} with the definition of density in Physics. 
Nowadays the function $f$ is said to be \emph{strictly differentiable} at the point $\bar x$ if the limit \eqref{str-diff} exists; consequently, the value of the limit \eqref{str-diff} is called the \emph{strict derivative} of $f$ at $\bar x$.} Moreover, \textsc{Peano}, in his correspondence with \textsc{Jordan} \cite[(1884)]{peano_jordan,peano_gilbert}, observes that uniform convergence of the difference quotient is equivalent to the continuity of the derivative.\,\footnote{ Section 80 of \textsc{Jordan}'s \emph{Cours d'analyse} \cite[(1893) p.\,68]{jordan1893}, titled ``\emph{Cas o\`u $\frac{f(x+h)-f(x)}{h}$ tend uniform\'ement vers $f'(x)$}'', contains a trace of it.} This notion of continuous derivative will be the basis of \textsc{Peano}'s strict derivative of distributive set functions. \emph{Applicazioni geometriche} is a detailed exposition (more than 300 pages) of several geometric applications of infinitesimal calculus.\,\footnote{ As detailed in \textsc{Dolecki, Greco} \cite{greco-dolecki}, among the several interesting concepts studied in \emph{Applicazioni geometriche} that are not directly connected with measure theory, we recall the limit of sequences of sets (now called \emph{Kuratowski limits}), the introduction of the concept of differentiability of functions (nowadays called \emph{Fr\'echet differentiability}), the definition of the tangent cone (nowadays called the \emph{Bouligand cone}), the necessary condition of optimality (nowadays called the \emph{Fermat condition}) and a detailed study of problems of maximum and minimum. } In \emph{Applicazioni geometriche} \textsc{Peano} refounds the notion of the Riemann integral by means of inner and outer measures\,\footnote{The simultaneous construction of inner and outer measure is the basis of the evolution of the theory leading to the Lebesgue measure. 
Fortunately, \textsc{Carath\'eodory} \cite[(1914)]{caratheodory1914} and \textsc{Hausdorff} \cite[(1919)]{hausdorff1919} put an end to the intoxication due to the presence of the inner measure, as \textsc{Carath\'eodory} writes: \begin{quote} Borel and Lebesgue (as well as Peano and Jordan) assigned an outer measure $m^*(A)$ and an inner measure $m_* (A)$ to every point set $A$ [...]. The main advantage, however, is that the new definition [i.e., the exterior measure of Carath\'eodory] is independent of the concept of an \emph{inner measure} \cite[(2004) p.\,72]{edgar}. \end{quote}}, and extends it to abstract measures. The development of the theory is based on solid topological and logical ground and on a deep knowledge of set theory. He introduces the notions of closure, interior and boundary of sets. \textsc{Peano} in \emph{Applicazioni geometriche} \cite[(1887)]{peano87}, and later \textsc{Jordan} in the paper \cite[(1892)]{jordan1892} and in the second edition of the \emph{Cours d'Analyse} \cite[(1893)]{jordan1893}, develop the well-known concepts of classical measure theory, namely, measurability, change of variables and the fundamental theorems of calculus, with some methodological differences between them.\,\footnote{In a first paper of \textsc{Jordan} \cite[(1892)]{jordan1892} and, in a more extensive way, in his \emph{Cours d'analyse} \cite[(1893)]{jordan1893}, we find several of \textsc{Peano}'s results. There are, however, methodological differences between their approaches: \textsc{Peano} constructs his measure by starting from polygons, while \textsc{Jordan} considers (in the 2-dimensional case) squares. The definition proposed by \textsc{Peano} does not have the simplicity of that of \textsc{Jordan}, but it is independent of the reference frame and it is, by definition, invariant under isometries, without any need of further proof. 
Moreover, \textsc{Peano}'s definition allows for a direct computation of the proportionality factor appearing under the action of an affine transformation (in previous works \textsc{Peano} had developed a formalism allowing for the computation of areas of polygons in a simple way; see \cite{gremazpagAREA} for details). } The mathematical tools employed by \textsc{Peano} were really innovative at that time (and maybe even nowadays), both on a geometrical and on a topological level. \textsc{Peano} extensively used the geometric vector calculus introduced by \textsc{Grassmann}. The geometric notions include oriented areas and volumes (called \emph{geometric forms}). Our main interest concerns Chapter V of \textsc{Peano}'s \emph{Applicazioni geometriche}, where we find the differentiation of distributive set functions. \emph{Applicazioni geometriche} is widely cited, but we have the feeling that the work is not sufficiently known. The revolutionary character of \textsc{Peano}'s book is remarked upon by J.~\textsc{Tannery} \cite[(1887)]{tannery1887}: \begin{quote} Le Chapitre V porte ce titre: {\it Grandeurs g\'eom\'etriques}. C'est peut-\^etre le plus important et le plus int\'eressant, celui, du moins, par lequel le Livre de M. Peano se distingue davantage des Trait\'es classiques: les d\'efinitions qui se rapportent aux {\it champs de points}, aux points ext\'erieurs, int\'erieurs ou limites par rapport \`a un champ, aux fonctions distributives (coexistantes d'apr\`es Cauchy), \`a la longueur (\`a l'aire ou au volume) externe, interne ou propre d'un champ, la notion d'int\'egrale \'etendue \`a un champ sont pr\'esent\'ees sous une forme abstraite, tr\`es pr\'ecise et tr\`es claire.\,\footnote{\translation{% Chapter V is titled: Geometric magnitudes. 
This chapter is probably the most relevant and interesting, the one that marks the difference of the Book of Peano with respect to other classical Treatises: definitions concerning \emph{sets of points}, exterior, interior and limit points of a given set, distributive functions (coexistent magnitudes in the sense of Cauchy), exterior, interior and proper length (or area or volume) of a set, the extension of the notion of integral to a set, are stated in an abstract, very precise and very clear way.}} \end{quote} Only a few authors fully realized the innovative value of Chapter V of \emph{Applicazioni geometriche}. For instance, \textsc{Ascoli} writes: \begin{quote} In [\emph{Applicazioni geometriche}] vi sono profusi, in forma cos\`i semplice da parere definitiva, idee e risultati divenuti poi classici, come quelli sulla misura degli insiemi, sulla rettificazione delle curve, sulla definizione dell'area di una superficie, sull'integrazione di campo, sulle funzioni additive di insieme; ed altri che sono tutt'ora poco noti o poco studiati [\dots].% \,\footnote{ \translation{In \emph{Applicazioni geometriche} many ideas and results that later became classical are lavished, in a form so simple as to appear definitive: results on the measure of sets, on the length of arcs, on the definition of the area of a surface, on integration on a set, on additive set functions; and other results that are still little known or little studied [\dots] } } \end{quote} Most modern historians are aware of the contributions to measure theory given by \textsc{Peano} and \textsc{Jordan} concerning inner and outer measure and measurability.\,\footnote{To our knowledge the latest example of a historian who fails to quote any of \textsc{Peano}'s contributions is \textsc{Hochkirchen} \cite[(2003)]{hochkirchen2003}. 
Ironically, the symbols $\underline\int$ and $\overline\int$, which \textsc{Volterra} introduced for denoting the lower and upper integral, were ascribed to \textsc{Peano} by \textsc{Hochkirchen}. } Only a few historians mention \textsc{Peano}'s contributions to the derivative of set functions: \textsc{Pesin} \cite{pesin}, \textsc{Medvedev} \cite{medvedev}, \textsc{Hawkins} \cite{hawkins} and a few others. \textsc{Pesin} \cite[(1970)\, pp.\,32-33]{pesin}, who does ``not intend to overestimate the importance of \textsc{Peano}'s results'', recalls some results of \textsc{Peano}'s work without giving details or appropriate definitions. \textsc{Medvedev} in \cite[(1983)]{medvedev} recalls \textsc{Peano}'s contributions, giving detailed information both on the integral as a set function and on \textsc{Peano}'s derivative. In our opinion he gives excessive importance to mathematical priorities without pointing out the differences between \textsc{Peano}'s contribution of 1887 and \textsc{Lebesgue}'s contribution of 1910.\,\footnote{ \textsc{Dieudonn\'e}, reviewing \textsc{Medvedev}'s paper \cite[(1983)]{medvedev} in \cite[(1983)]{dieudonne}, with his usual sarcasm denies any logical value to \textsc{Peano}'s definitions concerning limits and sets. Against any historical evidence, \textsc{Dieudonn\'e} forgets several of \textsc{Peano}'s papers on various notions of limit, and ignores the \emph{Formulario mathematico}, where \textsc{Peano} presents a large body of mathematical results, including an axiomatization of sets, through his logical \emph{ideography}. 
Besides, \textsc{Dieudonn\'e} forgets \textsc{Bourbaki}'s \emph{Elements of the history of mathematics} and ignores that the building blocks of \textsc{Peano}'s ideography are the atomic propositions: $x\in X$ and $x=y$.} \textsc{Hawkins} does not describe \textsc{Peano}'s results on differentiation and integration in detail, as they are too far from the main aim of his book, but he is aware of \textsc{Peano}'s contributions to the differentiation of set functions \cite[p.\,88,\,185]{hawkins}, and appraises \textsc{Peano}'s book \emph{Applicazioni geometriche}: \begin{quote} the theory is surprisingly elegant and abstract for a work of 1887 and strikingly modern in his approach \cite[p.\,88]{hawkins}. \end{quote} None of the historians quoted above establishes a link between \textsc{Peano}'s work on the differentiation of measures in \emph{Applicazioni geometriche}, his paper \emph{Grandezze coesistenti} \cite{peano1915} and \textsc{Lebesgue}'s comments on differentiation presented in \emph{La mesure des grandeurs} \cite[(1935)]{lebesgue1935}. The main primary sources on which our paper is based are \cite{cauchy1841,peano1915,lebesgue1910,lebesgue1935,vitali1915,vitali1916,fubini1915b,fubini1915a}. \section{Cauchy's coexistent magnitudes}\label{sez-cauchy} \textsc{Cauchy}'s seminal paper \emph{Coexistent magnitudes} \cite[(1841)]{cauchy1841} presents some difficulties for the modern reader: the terms he introduces are rather obscure (for instance, \emph{grandeurs}, \emph{coexistantes}, \emph{\'el\'ements}, \dots), and the reasonings are based on a vague geometric language, according to \textsc{Cauchy}'s taste. 
Actually, \textsc{Cauchy}'s aim was to make mathematical analysis as rigorous as geometry \cite[(1821) p.\,ii]{cauchy1821}: \begin{quote} Quant aux m\'ethodes, j'ai cherch\'e \`a leur donner toute la rigueur qu'on exige en g\'eom\'etrie, de mani\`ere \`a ne jamais recourir aux raisons tir\'ees de la g\'en\'eralit\'e de l'alg\`ebre.\,\footnote{\translation{% As for methods, I have sought to give them all the rigor demanded in geometry, so as never to resort to arguments drawn from the generality of algebra.} Not all mathematicians at that time considered geometry as a model of rigor. Indeed \textsc{Lobachevsky} starts his famous book ``Theory of parallels'' \cite[(1829) p.\,11]{Loba1829} with the following sentence: \begin{quote} In geometry I find certain imperfections which I hold to be the reason why this science, apart from transition into analytics, can as yet make no advance from that state in which it has come to us from Euclid. \end{quote}} \end{quote} In his \emph{Le\c cons de m\'ecanique analytique} \cite[(1868) pp.\,172-205]{moigno1868} \textsc{Moigno}, a follower of \textsc{Cauchy}, reprints the paper \emph{Coexistent magnitudes}. He highlights the vagueness of some of \textsc{Cauchy}'s terms, unfortunately without adding any comment that might help the reader better understand \textsc{Cauchy}'s paper. The meaning of the terms ``\emph{grandeurs}'' and ``\emph{coexistantes}'' can be made precise by analyzing the list of examples given by \textsc{Cauchy}. He implicitly postulates the following properties of ``\emph{grandeurs}'': \begin{enumerate} \item \labelpag{cond2} a magnitude can be divided into finitely many infinitesimal equal elements (using the terminology of \textsc{Cauchy}), where ``infinitesimal'' refers to both magnitude and diameter; \item the ratio between coexistent magnitudes (not necessarily homogeneous) is a numerical quantity. 
\end{enumerate} Concerning the term ``\emph{coexistantes}'', coexistent magnitudes are defined by \textsc{Cauchy} as ``magnitudes which exist together, change simultaneously, and the parts of one magnitude exist and change in the same way as the parts of the other''.\,\footnote{ \textsc{Cauchy} says in \cite[(1841) p.\,188]{cauchy1841}: \begin{quote} Nous appellons \emph{grandeurs} ou quantit\'es \emph{coexistantes} deux grandeurs ou quantit\'es qui existent ensemble et varient simultan\'ement, de telle sorte que les \'el\'ements de l'une existent et varient, ou s'\'evanouissent, en m\^eme temps que les \'el\'ements de l'autre. \end{quote}% } Despite the vagueness of this definition, the meaning of ``\emph{coexistantes}'' is partially clarified by the many examples of coexistent magnitudes given by \textsc{Cauchy} \cite[(1841) pp.\,188--189]{cauchy1841}, such as the volume and the mass of a body, the time and the displacement of a moving point, the radius and the surface of a circle, the radius and the volume of a sphere, the height and the area of a triangle, the height and the volume of a prism, the base and the volume of a cylinder, and so on. The vagueness of \textsc{Cauchy}'s definition of ``\emph{grandeurs coexistantes}'' was pointed out by \textsc{Peano}. In \emph{Applicazioni geometriche} \cite[(1887)]{peano87} and in \emph{Grandezze coesistenti} \cite[(1915)]{peano1915}, \textsc{Peano} defines them as set functions over the same given domain, satisfying additivity properties in a suitable sense. The primary aim of \textsc{Cauchy} is pedagogic: he wants to write a paper that makes the study of infinitesimal calculus and its applications easier. As one would expect, \textsc{Cauchy} bases himself on the mass-density paradigm and introduces the limit of the average of two coexistent magnitudes, calling it the {\it differential ratio}. 
In a modern language we could say that coexistent magnitudes are set functions, while the differential ratio is a point function. \textsc{Cauchy} points out that the differential ratio is termed in different ways depending on the context, namely, on the nature of the magnitudes themselves (for instance, the mass density of a body at a given point, the velocity of a moving point at a given time, the hydrostatic pressure at a point of a given surface, \dots). We now list the most significant theorems present in \textsc{Cauchy}'s paper, preserving, as much as possible, his terminology. \begin{theorem}\label{mediaintegrale}\cite[Theorem 1, p.\,190]{cauchy1841} The average between two coexistent magnitudes is bounded between the supremum and the infimum of the values of the differential ratio. \end{theorem} \begin{theorem}\label{densitanulla}\cite[Theorem 4, p.\,192]{cauchy1841} A magnitude vanishes whenever its differential ratio, with respect to another coexistent magnitude, is a null function. \end{theorem} \begin{theorem}\label{teomedia} \cite[Theorem 5, p.\,198]{cauchy1841} If the differential ratio between two coexistent magnitudes is a continuous function, then the ``mean value property'' holds.\,\footnote{Let $\mu,\nu : {\mathcal{A}} \to {\mathbb{R}}$ be two magnitudes and let $g$ be the differential ratio of $\mu$ with respect to $\nu$. We say that the \emph{mean value property} holds if, for any set $A \in {\mathcal{A}}$, with $\nu(A) \ne 0$, there exists a point $P\in A$ such that $g(P)=\frac{\mu(A)}{\nu (A)}$.} \end{theorem} \begin{theorem}\label{densitauguale} \cite[Theorem 13, p.\,202]{cauchy1841} If two magnitudes have the same differential ratio with respect to another magnitude, then they are equal. \end{theorem} Even if \textsc{Cauchy} presents proofs that are rather ``vanishing'', his statements (see the theorems listed above) and his use of the differential ratio allow \textsc{Peano} to rebuild his arguments on solid ground. 
\textsc{Peano} translates the coexistent magnitudes into the concept of distributive set functions, restating the theorems presented by \textsc{Cauchy} and proving them rigorously. In \textsc{Peano}, the continuity of the differential ratio (whenever it exists) is a consequence of its definition. On the contrary, \textsc{Cauchy}'s definition of the differential ratio does not guarantee its continuity. \textsc{Cauchy} is aware of the fact that the differential ratio can be discontinuous; nevertheless he thinks that, in the most common ``real'' cases, it may be assumed to be continuous; see \cite[(1841), p.\,196]{cauchy1841}: \begin{quote} Le plus souvent, ce rapport diff\'erentiel sera une fonction continue de la variable dont il d\'epend, c'est-\`a-dire qu'il changera de valeur avec elle par degr\'es insensibles.\,\footnote{\translation{% Most often, this differential ratio will be a continuous function of the variable on which it depends, i.e., its value will change with it by insensible degrees.}} \end{quote} and \cite[(1841) p.\,197]{cauchy1841}: \begin{quote} Dans un grand nombre de cas, le rapport diff\'erentiel $\rho$ est une fonction continue [\dots].\,\footnote{\translation{% In a great number of cases, the differential ratio $\rho$ is a continuous function [\dots].}} \end{quote} In evaluating the differential ratio as a ``limit of average values $\frac{\mu(A)}{\nu(A)}$ at a point $P$'', for \textsc{Peano} the set $A$ does not necessarily include the point $P$, while for \textsc{Cauchy} $A$ includes $P$ (as \textsc{Cauchy} says: $A$ \emph{renferme le point} $P$). This difference is fundamental also in the case of linearly distributed masses. 
Indeed a linear mass distribution, described in terms of a function of a real variable, admits a differential ratio in the sense of \textsc{Peano} if the derivative exists and is continuous, whilst it admits a differential ratio in the sense of \textsc{Cauchy} \footnote{ Using the identity \begin{equation*} \frac{f(x+h) - f(x-k)}{h+k} = \frac{h\frac{f(x+h)-f(x)}{h} +k\frac{f(x)-f(x-k)}{k}}{h+k} \quad \text{ for every } k, h > 0 \end{equation*} the reader can easily verify that the differential ratio in the sense of \textsc{Cauchy} exists (i.e., the limit of $\frac{f(x+h) - f(x-k)}{h+k}$ exists for $k\to 0^+$ and $h \to 0^+$, with $h+k > 0$) whenever $f'(x)$ exists. } only if the function is differentiable \cite[(1841) p.\,208]{cauchy1841}: \begin{quote} Lorsque deux grandeurs ou quantit\'es coexistantes se r\'eduisent \`a une variable $x$ et \`a une fonction $y$ de cette variable, le rapport diff\'erentiel de fonction \`a la variable est pr\'ecis\'ement ce qu'on nomme la \emph{d\'eriv\'ee} de la fonction ou le \emph{coefficient diff\'erentiel}.\,\footnote{\translation{% When two coexistent magnitudes are a variable $x$ and a function $y$ of $x$, the differential ratio of the function with respect to the variable $x$ coincides with the \emph{derivative} of the function.}} \end{quote} Concerning the existence of the differential ratio, \textsc{Cauchy} is rather obscure; indeed, whenever he defines the differential ratio, he specifies that ``it will converge in general to a certain limit different from $0$''. As \textsc{Cauchy} does not clarify the meaning of the expression ``in general'', the conditions ensuring the existence of the differential ratio are not given explicitly. On the other hand, \textsc{Cauchy} himself is aware of this gap, since in several theorems he explicitly assumes that the differential ratio is ``completely determined at every point''.
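The identity in the footnote above can be illustrated numerically: the two-sided quotient is a convex combination of the one-sided difference quotients, hence it tends to $f'(x)$ as $h,k\to 0^+$ whenever $f'(x)$ exists. A small Python sketch follows; the choice $f(x)=x^3$ and the rates for $h$ and $k$ are assumptions for the example.

```python
# Numerical sketch of the footnote's identity: the two-sided quotient
# (f(x+h) - f(x-k)) / (h+k) is a convex combination of the one-sided
# difference quotients, hence it tends to f'(x) as h, k -> 0+
# whenever f'(x) exists.  f(x) = x**3 is an assumption for the example.

def f(x):
    return x**3

def fprime(x):
    return 3 * x**2

x = 0.7
for n in range(1, 6):
    h, k = 10.0**(-n), 0.5 * 10.0**(-n)   # independent rates, both -> 0+
    q = (f(x + h) - f(x - k)) / (h + k)
    print(f"h = {h:.0e}, k = {k:.0e}: quotient = {q:.10f}")

# the printed quotients approach f'(0.7) = 1.47
assert abs((f(x + 1e-6) - f(x - 1e-6)) / 2e-6 - fprime(x)) < 1e-9
```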
Concerning the mass-density paradigm, \textsc{Cauchy}'s \emph{Coexistent magnitudes} also lacks an explicit formula for constructing the mass of a body from its density. In spite of this, \textsc{Cauchy} provides a large number of theorems and corollaries giving an approximate calculation of the mass under the assumption of continuity of the density. We can envisage this approach as a first step toward the modern notion of integral with respect to a general abstract measure. We can summarize \textsc{Cauchy}'s further results in the following theorem: \begin{theorem}\cite[(1841) pp.\,208--215]{cauchy1841} Let us assume that the differential ratio $g$ between two coexistent magnitudes $\mu$ and $\nu$ exists and is continuous. Then $\mu$ can be computed in terms of the integral of $g$ with respect to $\nu$. \end{theorem} \textsc{Cauchy} concludes his memoir \cite[(1841) pp.\,215--229]{cauchy1841} with a second section in which he states the following theorem in order to evaluate lengths, areas and volumes of homothetic elementary figures. \begin{theorem}\label{teo-prop} \cite[Theorem 1, p.\,216]{cauchy1841} Two coexistent magnitudes are proportional, whenever to equal parts of one magnitude there correspond equal parts of the other.\,\footnote{One can observe that this theorem holds true by imposing condition \eqref{cond2}. } \end{theorem} Even though \textsc{Cauchy}'s paper contains several innovative procedures, to our knowledge only a few authors (\textsc{Moigno}, \textsc{Peano}, \textsc{Vitali}, \textsc{Picone} and \textsc{Lebesgue}) quote it, and only \textsc{Peano} and \textsc{Lebesgue} analyze it in detail.
\section{Distributive families, decompositions and Peano additivity}\label{sez-distributive} In his paper \emph{Le grandezze coesistenti} \cite[(1915)]{peano1915}, \textsc{Peano} introduces a general concept of \emph{distributive function}, namely a function $f:A\to B$, where $(A,+), (B,+)$ are two sets endowed with binary operations, denoted by the same symbol $+$, satisfying the equality \begin{equation} \label{distr}f(x+y)=f(x)+f(y) \end{equation} for all $x,y$ belonging to $A$, together with suitable additional assumptions when necessary.\,\footnote{Among the distributive functions considered by \textsc{Peano} are the usual linear functions and particular set functions. The reader should take care not to interpret distributive set functions as finitely additive set functions.} \textsc{Peano} presents several examples of distributive functions. As a special instance, $A$ stands for the family ${\mathcal{P}}(X)$ of all subsets of a finite dimensional Euclidean space $X$, ``$+$'' in the left hand side of (\ref{distr}) is the union operation, and ``$+$'' in the right hand side of (\ref{distr}) is the logical OR (denoted in \textsc{Peano}'s ideography by the same symbol as set-union); therefore, equation \eqref{distr} becomes: \begin{equation}\label{distr2}f(x\cup y)=f(x) \cup f(y).\end{equation} To make (\ref{distr2}) meaningful, \textsc{Peano} chooses a family ${\mathcal U}\subset {\mathcal{P}}(X)$ and defines ``$f(x)$'' as ``$x\in \mathcal{U}$''. Consequently (\ref{distr2}) becomes: \begin{equation}\label{griglia}x\cup y\in {\mathcal U}\Longleftrightarrow x\in {\mathcal U}\text{ or } y\in {\mathcal U} \end{equation} for all $x,y \in {\mathcal{P}}(X)$.
A family $\mathcal U$ satisfying (\ref{griglia}) is called by \textsc{Peano} a \emph{distributive family}.\,\footnote{This notion of distributive family will be rediscovered later by \textsc{Choquet} \cite[(1947)]{choquet}, who called it {\it grill} and recognized it as the dual notion of \textsc{Cartan}'s {\it filter} \cite[(1937)]{cartan}.} Moreover, \textsc{Peano} considers \emph{semi-distributive} families ${\mathcal{F}}\subset {\mathcal{P}} (X)$, i.e., families of sets such that \begin{equation} x\cup y\in {\mathcal{F}} \Longrightarrow x\in {\mathcal{F}} \text{ or } y\in {\mathcal{F}} \end{equation} for all $x, y \in {\mathcal{P}}(X)$. A distributive family of subsets of $X$ is obtained from a semi-distributive family ${\mathcal{F}}$ by adding to ${\mathcal{F}}$ all supersets of its elements. \textsc{Peano} states the following theorem, and attributes to \textsc{Cantor} \cite[(1884) p.\,454]{cantor1884} both its statement and its proof. \begin{theorem}[Cantor compactness property]\label{teo-F1} Let ${\mathcal{F}}$ be a semi-distributive family of subsets of a finite-dimensional Euclidean space, and let $S$ be a bounded non-empty set belonging to ${\mathcal{F}}$. Then there exists a point $\bar x$, belonging to the closure of $S$, such that any neighborhood of $\bar x$ contains a set belonging to ${\mathcal{F}}$. \end{theorem} The notion of distributive family is essential in \textsc{Peano}'s study of the derivation of distributive set functions. Distributive families were introduced by \textsc{Peano} in \emph{Applicazioni geometriche} in 1887. Moreover, he uses them in his famous paper on the existence of solutions of differential equations \cite[(1890) pp.\,201--202]{peano1890} and, later, in his textbook \emph{Lezioni di analisi infinitesimale} \cite[(1893) vol.\,2, pp.\,46--53]{peano1893}.
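Property (\ref{griglia}) can be verified exhaustively on a finite universe. The following Python sketch checks the grill property for the family of subsets on which a given function $h$ attains its global maximum; the universe $X$ and the values of $h$ are assumptions chosen for the example.

```python
from itertools import chain, combinations

# Exhaustive check of the grill property on a finite universe:
# x U y in U  if and only if  x in U or y in U.
# U is the family of subsets on which h attains its global maximum;
# the universe X and the values of h are assumptions for the example.

X = range(5)
h = {0: 1, 1: 4, 2: 2, 3: 4, 4: 0}
sup_h = max(h.values())

def in_U(A):
    return bool(A) and max(h[p] for p in A) == sup_h

subsets = [frozenset(s)
           for s in chain.from_iterable(combinations(X, r) for r in range(6))]

for x in subsets:
    for y in subsets:
        assert in_U(x | y) == (in_U(x) or in_U(y))
print(f"grill property verified on {len(subsets)**2} pairs of subsets")
```

The equivalence holds because the maximum of $h$ over a union is the larger of the maxima over the two pieces, which is exactly the mechanism behind \textsc{Peano}'s example ${\mathcal U}_h$.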
The role played by this notion is nowadays recovered by ``compactness by coverings'' or by ``existence of accumulation points''.\,\footnote{Two examples of distributive families considered by \textsc{Peano} are ${\mathcal U}:=\{ A\subset {\mathbb{R}}^n\, :\; \text{ card}( A)=\infty \}$, and ${\mathcal U}_h:=\{A\subset {\mathbb{R}}^n\, :\, \sup_A h=\sup_{{\mathbb{R}}^n}h\}$, where $h:{\mathbb{R}}^n\to{\mathbb{R}}$ is a given real function. } In proving Theorem \ref{teo-F1}, \textsc{Peano} decomposes a subset of the Euclidean space ${\mathbb{R}}^n$ following a grid of $n$-intervals, obtained by cutting sets along hyperplanes parallel to the coordinate axes. We may formalize this procedure in the following way. Let us denote by $H$ a hyperplane of the form $H:=\{x\in{\mathbb{R}}^n\,:\, \langle x,e_i\rangle =a\}$ where $e_i$ is a vector of the canonical basis of ${\mathbb{R}}^n$ and $a\in{\mathbb{R}}$. Let us denote by $H^+$ and $H^-$ the two closed half-spaces delimited by $H$. A family ${\mathcal{F}}$ of subsets of ${\mathbb{R}}^n$ is called \emph{semi-distributive by cutting along hyperplanes} if $$ A\cap H^+ \in {\mathcal{F}} \text{ or } A\cap H^-\in {\mathcal{F}} $$ for every $A \in {\mathcal{F}}$ and for every hyperplane $H$ of ${\mathbb{R}}^n$ of the form indicated above. Under these restrictions a new version of Theorem \ref{teo-F1} still holds: \begin{theorem}[Cantor compactness property by interval-decompositions]\label{teo-F2} Let ${\mathcal{F}}$ be semi-distributive by cutting along hyperplanes and let $S$ be a bounded non-empty set belonging to ${\mathcal{F}}$. Then there exists a point $\bar x$ belonging to the closure of $S$ such that any neighborhood of $\bar x$ contains a set belonging to ${\mathcal{F}}$.
\end{theorem} To express additivity properties of set functions, \textsc{Peano}, as was common in his time \footnote{A similar expression is used also by \textsc{Jordan} \cite{jordan1892}: \begin{quotation} [C]haque champ $E$ a une \'etendue d\'etermin\'ee; [\dots] si on le d\'ecompose en plusieurs parties $E_1$, $E_2$, \dots, la somme des \'etendues de ces parties est \'egale \`a l'\'etendue totale de $E$. \translation{% Every set $E$ has a definite extension; [\dots] if $E$ is decomposed into parts $E_1$, $E_2$, \dots, the sum of the extensions of these parts is equal to the extension of $E$.} \end{quotation} }, uses the term \emph{decomposition}. \textsc{Peano} writes in \emph{Applicazioni geometriche} \cite[(1887) p.\,164, 167]{peano87}: \begin{quote} Se un campo $A$ \`e decomposto in parti $A_1,A_2, \dots, A_n$ esso si dir\`a \emph{somma} delle sue parti, e si scriver\`a $$A=A_1+A_2+\dots+A_n$$ [\dots] Una grandezza dicesi \emph{funzione distributiva} di un campo, se il valore di quella grandezza corrispondente ad un campo \`e la somma dei valori di essa corrispondenti alle parti in cui si pu\`o decomporre un campo dato.\,\footnote{\translation{% If a set $A$ is decomposed into the parts $A_1,A_2,\dots,A_n$, it will be called \emph{sum} of its parts, and it will be denoted by $A=A_1+A_2+\dots+A_n$. [\dots] A magnitude is said to be a \emph{distributive set function} if its value on a given set is the sum of the corresponding values of the function on the parts decomposing the set itself.}} \end{quote} In order to formalize in modern language both the operation of ``decomposing'' and its use in \textsc{Peano}'s works, we can pursue a ``minimal'' way, leading to ``families of interval-decompositions'', and a ``proof-driven'' way, leading to ``families of finite decompositions''. First, the minimal way consists in implementing the procedure of decomposing by cutting along hyperplanes used by \textsc{Peano} in proving Theorem \ref{teo-F1}.
More precisely, let ${\mathcal{A}}$ be a family of subsets of the Euclidean space ${\mathbb{R}}^n$; a finite family $\{A_i\}_{i=1}^m$ of elements of ${\mathcal{A}}$ is called an \emph{interval-decomposition} of $A \in {\mathcal{A}}$ if it is obtained by iterating the procedure of cutting by hyperplanes. In other words, an interval-decomposition $\{A_i\}_{i=1}^m$ of a set $A$ is a finite sub-family of ${\mathcal{A}}$ defined recursively as follows: \begin{itemize} \item for $m=1$, $A_1=A$; \item for $m=2$, there exists a hyperplane $H$ such that $A_1= A\cap H^-$ and $A_2= A\cap H^+$; \item for $m>2$, there exist two distinct indices $i_0, i_1 \le m$ such that $\tilde A := A_{i_0} \cup A_{i_1} \in {\mathcal{A}}$ and the families $\{A_i : 1 \le i \le m, i \ne i_0, i \ne i_1 \}\cup \{\tilde A \}$ and $\{A_{i_0}, A_{i_1} \}$ are interval-decompositions of $A$ and $\tilde A$, respectively. \end{itemize} The totality of these interval-decompositions will be denoted by ${\mathbb{D}}_{\textrm{int}}({\mathcal{A}})$. In the case where ${\mathcal{A}}$ is the family of all closed bounded subintervals of a given closed interval $[a,b]$ of the real line, an arbitrary interval-decomposition of an interval $[a',b']\subset[a,b]$ is a family $\{ [a_{i-1},a_i] \}_{i=1}^m$ where $a' = a_0 \le a_1 \le \dots \le a_{m-1} \le a_m= b'$. The totality of these interval-decompositions is denoted by ${\mathbb{D}}_{\textrm{int}}(a,b)$. The second way consists in summarizing explicitly the properties of the decompositions themselves, as used by \textsc{Peano} in defining the integral and in proving related theorems \footnote{ See pages 165 and 186-188 of \emph{Applicazioni geometriche} \cite[(1887)]{peano87}. }, as will be seen in Section \ref{sez-massdensity}. This leads to the following definitions of \emph{family of finite decompositions} and of the related \emph{semi-distributive family}, \emph{Cantor compactness property} and \emph{distributive set functions}.
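The recursive definition above can be mimicked on the real line, where cutting along a hyperplane reduces to cutting an interval at a point. A minimal Python sketch follows; the cut points are assumptions chosen for the example.

```python
# A sketch of the recursive cutting procedure on the real line: each
# step cuts one piece of the current decomposition of [a, b] at a
# point, as in the m = 2 clause of the definition.
# The cut points are assumptions chosen for the example.

def cut(decomposition, index, point):
    """Replace piece `index` by its two halves cut at `point`."""
    a, b = decomposition[index]
    assert a <= point <= b, "cut must fall inside the chosen piece"
    return (decomposition[:index]
            + [(a, point), (point, b)]
            + decomposition[index + 1:])

D = [(0.0, 1.0)]               # m = 1: the trivial decomposition
D = cut(D, 0, 0.5)             # m = 2: one cut
D = cut(D, 1, 0.75)            # cut the piece [0.5, 1.0]
D = cut(D, 0, 0.25)            # cut the piece [0.0, 0.5]

print(D)   # [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
# the endpoints form a chain a = a_0 <= a_1 <= ... <= a_m = b
assert all(D[i][1] == D[i + 1][0] for i in range(len(D) - 1))
assert sum(b - a for a, b in D) == 1.0
```

The final decomposition is exactly a family $\{[a_{i-1},a_i]\}_{i=1}^m$ of the kind described for ${\mathbb{D}}_{\textrm{int}}(a,b)$.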
Let ${\mathcal{A}}$ again be a family of subsets of a Euclidean space ${\mathbb{R}}^n$ and let us denote by ${\mathcal{P}}_{f}({\mathcal{A}})$ the set of all non-empty finite subfamilies of ${\mathcal{A}}$. Define ${\mathbb{U}} ({\mathcal{A}})$ by $${\mathbb{U}} ({\mathcal{A}}):=\{{\mathcal{H}}\in {\mathcal{P}}_{f}({\mathcal{A}}): \cup {\mathcal{H}} \in {\mathcal{A}}\}. $$ Let ${\mathbb{D}}$ be a subset of ${\mathbb{U}}({\mathcal{A}})$; we will say that ${\mathcal{H}}$ is a ${\mathbb{D}}$-\emph{decomposi\-tion} of $A$ if ${\mathcal{H}} \in {\mathbb{D}}$ and $A=\cup{\mathcal{H}}$. \begin{definition} ${\mathbb{D}} \subset {\mathbb{U}}({\mathcal{A}})$ is called a \emph{family of finite decompositions relative to ${\mathcal{A}}$} if the following properties are satisfied: \begin{enumerate} \item $\{A\}\in {\mathbb{D}}$ for every $A\in{\mathcal{A}}$; \item \labelpag{dec_1} if ${\mathcal{H}}$ and ${\mathcal{G}}$ are ${\mathbb{D}}$-decompositions of a set $A$, then $$\{H\cap G: H\in{\mathcal{H}},\,G\in{\mathcal{G}} \}$$ is a ${\mathbb{D}}$-decomposition of $A$; \item \labelpag{dec_2} if ${\mathcal{H}}$ and ${\mathcal{G}}$ are ${\mathbb{D}}$-decompositions of $A$, then for every $G\in {\mathcal{G}}$ the family $${\mathcal{H}}_G := \{H\cap G : H\in{\mathcal{H}} \}$$ is a ${\mathbb{D}}$-decomposition of $G$; \item \labelpag{dec_3} if ${\mathcal{H}}$ is a ${\mathbb{D}}$-decomposition of $A$ and, moreover, for every $H \in {\mathcal{H}}$ the family ${\mathcal{G}}_H$ is a ${\mathbb{D}}$-decomposition of $H$, then $$\cup \{{\mathcal{G}}_H : H \in {\mathcal{H}}\}$$ is a ${\mathbb{D}}$-decomposition of $A$.
\end{enumerate} \end{definition} \begin{definition} \label{def_inf} A family ${\mathbb{D}}$ of finite decompositions relative to ${\mathcal{A}}$ is called \emph{infinitesimal} if, for every bounded set $A \in {\mathcal{A}}$ and for every real number $\varepsilon >0$, there is a ${\mathbb{D}}$-decomposition ${\mathcal{H}}$ of $A$ such that the diameter of every $H \in {\mathcal{H}}$ is less than $\varepsilon$. \end{definition} \begin{definition} Let ${\mathbb{D}}$ be a family of finite decompositions relative to ${\mathcal{A}}$. Then a \emph{set function} $\mu \colon {\mathcal{A}} \to {\mathbb{R}}$ is said to be \emph{distributive with respect to} ${\mathbb{D}}$, if \begin{equation}\mu (\cup {\mathcal{H}}) = \sum_{H \in {\mathcal{H}}} \mu(H) \text{ for every }{\mathcal{H}} \in {\mathbb{D}}.\end{equation} \end{definition} Consequently, \begin{definition} \label{def_semi} Let ${\mathbb{D}}$ be a family of finite decompositions relative to ${\mathcal{A}}$. A family ${\mathcal{F}}$ of subsets of the Euclidean space ${\mathbb{R}}^{n}$ is said to be \emph{semi-distributive with respect to} ${\mathbb{D}}$, if \begin{equation}{\mathcal{H}}\in{\mathbb{D}} \text{ and } \cup{\mathcal{H}}\in {\mathcal{F}} \Longrightarrow \exists H\in{\mathcal{H}} \text{ such that } H\in{\mathcal{F}}. \end{equation} \end{definition} \begin{theorem}[Cantor compactness property by an arbitrary family of decompositions]\label{teo-F3} Let ${\mathbb{D}}$ be an infinitesimal family of finite decompositions relative to ${\mathcal{A}}$ and let ${\mathcal{F}}$ be a semi-distributive family with respect to ${\mathbb{D}}$. If $S$ is a bounded non-empty set belonging to ${\mathcal{F}}$, then there exists a point $\bar x$ belonging to the closure of $S$ such that any neighborhood of $\bar x$ contains a set belonging to ${\mathcal{F}}$.
\end{theorem} In the following, an expression of type ``$\mu \colon({\mathcal{A}},{\mathbb{D}}) \to {\mathbb{R}}$ is a distributive set function'' stands for ``${\mathbb{D}}$ is a family of finite decompositions relative to ${\mathcal{A}}$ and $\mu\colon{\mathcal{A}} \to {\mathbb{R}}$ is a distributive set function with respect to ${\mathbb{D}}$''. Examples of families of decompositions are ${\mathbb{U}}({\mathcal{A}})$, and \begin{enumerate} \item \labelpag{dec-inter} the family ${\mathbb{D}}_{\textrm{int}}({\mathcal{A}})$ of all interval-decompositions introduced above;\,\footnote{Notice that a real valued set function $\mu:{\mathcal{A}} \to {\mathbb{R}}$ is distributive with respect to ${\mathbb{D}}_{\textrm{int}}({\mathcal{A}})$, if $\mu (A)=\mu (A\cap H^+)+\mu (A\cap H^-)$ for every $A\in {\mathcal{A}}$ and for every hyperplane $H$ such that $A\cap H^+ \in {\mathcal{A}} $ and $A\cap H^-\in {\mathcal{A}}$. Inner and outer Peano-Jordan measures are both distributive in this sense, but they are not finitely additive. } \item \labelpag{dec-picone} the family of all ${\mathcal{H}} \in {\mathbb{U}}({\mathcal{A}})$ such that the interiors of two arbitrary distinct elements of ${\mathcal{H}}$ have empty intersection and every $H \in {\mathcal{H}}$ is Peano-Jordan measurable; \item \labelpag{dec-picone2} the family of all ${\mathcal{H}} \in {\mathbb{U}}({\mathcal{A}})$ such that the intersection of the closures of two arbitrary distinct elements of ${\mathcal{H}}$ has null Peano-Jordan measure and every $H\in {\mathcal{H}}$ is bounded; \item \labelpag{last_ex} the family of all ${\mathcal{H}} \in {\mathbb{U}}({\mathcal{A}})$ such that two arbitrary distinct elements of ${\mathcal{H}}$ have empty intersection. \end{enumerate} The interval-decompositions (in particular ${\mathbb{D}}_{\textrm{int}}(a,b)$) occur frequently in \textsc{Peano}'s works.
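Property \eqref{dec_1} can be illustrated for interval-decompositions of $[0,1]$: the pairwise intersections of two decompositions form their common refinement, which is again a decomposition. A Python sketch follows; the particular cut points are assumptions chosen for the example.

```python
# Property (dec_1) for interval-decompositions of [0, 1]: the
# pairwise intersections of two decompositions H and G form their
# common refinement, again a decomposition.
# The cut points are assumptions chosen for the example.

H = [(0.0, 0.5), (0.5, 1.0)]
G = [(0.0, 0.25), (0.25, 0.75), (0.75, 1.0)]

refinement = []
for (a, b) in H:
    for (c, d) in G:
        lo, hi = max(a, c), min(b, d)
        if lo < hi:                      # keep non-degenerate pieces
            refinement.append((lo, hi))
refinement.sort()

print(refinement)
# -> [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]

# it is again a decomposition of [0, 1] ...
assert all(refinement[i][1] == refinement[i + 1][0]
           for i in range(len(refinement) - 1))
# ... and any distributive mu (here: length) is additive over it
assert sum(b - a for a, b in refinement) == 1.0
```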
Distributive set functions related to the last example (\ref{last_ex}) are well known as \emph{finitely additive set functions}; this type of additivity, expressed in terms of partitions of sets, was introduced for the first time by \textsc{Borel} \cite[(1898), pp.\,46-50]{borel1898}, and, more clearly, by \textsc{Lebesgue} \cite[(1902), p.\,6]{lebesgue1902}. As far as we know, all historians have interpreted \textsc{Peano}'s distributive set functions as ``finitely additive'' set functions.\,\footnote{Observe that inner and outer Peano-Jordan measures on Euclidean spaces are not finitely additive, but they are distributive set functions with respect to the families of decomposition of type (\ref{dec-inter}) or (\ref{dec-picone}). Moreover, notice that the outer Peano-Jordan measure is a distributive set function with respect to a family of decompositions of type (\ref{dec-picone2}).} For instance, in the proof of the integrability of functions \cite[(1887) p.\,188]{peano87}, \textsc{Peano} uses distributivity properties of the upper and lower integral with respect to the domain of integration; clearly neither the upper nor the lower integral is finitely additive. \section{Peano's strict derivative of distributive functions \\ and its applications}\label{sez-derivata} In \emph{Applicazioni geometriche} \cite[(1887)]{peano87} \textsc{Peano} translates \textsc{Cauchy}'s ``magnitudes'' into ``distributive functions'', so that two of \textsc{Cauchy}'s magnitudes are ``coexistent'' if they are distributive functions with the same domain. \textsc{Peano}'s distributive set functions are called \emph{positive} if their values are positive.
\textsc{Peano}'s \emph{strict derivative} is defined by \footnote{ In \textsc{Peano}'s words \cite[(1887) p.\,169]{peano87}: \begin{quote} Diremo che, in un punto $P$, il \emph{rapporto} delle due funzioni distributive $y$ ed $x$ d'un campo vale $\rho$, se $\rho$ \`e il limite verso cui tende il rapporto dei valori di queste funzioni, corrispondenti ad un campo di cui tutti i punti si avvicinano indefinitamente a $P$. \end{quote} \translation{% Given two distributive functions $y$ and $x$ defined over a given set, we say that their \emph{ratio}, at a given point $P$, is $\rho$, if $\rho$ is the limit of the ratio between the values of the two functions, taken along sets all of whose points approach the point $P$.}} \begin{definition} Let $\mu, \nu : ({\mathcal{A}},{\mathbb{D}}) \to {\mathbb{R}}$ be distributive set functions, and let $\nu$ be positive. A real function $g$ over a set $S$ is called a ``strict derivative of $\mu$ with respect to $\nu$'' on $S$ (denoted by $\frac{d\,\mu}{d\,\nu}$ and termed \emph{rapporto} in \emph{Applicazioni geometriche}) if, for every point $x \in S$ and for every $\epsilon >0$, there exists $\delta >0$ such that \footnote{One can note that for the definition of the strict derivative at a point $x$, the point $x$ itself must be an accumulation point with respect to the family ${\mathcal{A}}$ and the measure $\nu$, that is, for all $\delta >0$, there exists $A\in{\mathcal{A}}$ such that $\nu(A)\neq 0$ and $A\subset B_\delta (x)$, where $B_\delta (x)$ denotes the Euclidean ball of center $x$ and radius $\delta$.} \begin{equation} \left|\frac{\mu (A)}{\nu (A)}-g(x)\right|<\epsilon \quad \text{for every } A\in {\mathcal{A}}, \, \text{with } \nu (A)\neq 0, \, A\subset B_\delta(x).
\end{equation} \end{definition} It is worth noticing that the concept of strict derivative given by \textsc{Peano} provides a solid mathematical foundation for the concept of ``infinitesimal ratio'' between two magnitudes, successfully used since \textsc{Kepler}. A remarkable example given by \textsc{Peano} is the evaluation of a rectifiable arc length by integrating the ``infinitesimal arc length'' $ds$. Notice that, whenever $ds$ exists in the sense of \textsc{Peano}, the corresponding integral provides the length of the arc. On the contrary, the integration of the infinitesimal arc length $ds$, evaluated in the sense of \textsc{Lebesgue} (1910), provides the length of the arc only in the case of absolute continuity of the arc parametrization (see \textsc{Tonelli} \cite[(1908)]{tonelli}). The existence of \textsc{Peano}'s strict derivative is not assured in general; its characterizing properties are clearly presented in \emph{Applicazioni geometriche} and can be summarized in the following theorems. First, \textsc{Peano} gives a precise form to \textsc{Cauchy}'s Theorem \ref{mediaintegrale}, stating the following: \begin{theorem}[see \textsc{Peano} {\cite[Theorem 13, p.\,170]{peano87}} for ${\mathbb{D}}={\mathbb{D}}_{\textrm{int}}$]\label{teorema-fondamentale} Let $\mu, \nu : ({\mathcal{A}},{\mathbb{D}}) \to {\mathbb{R}}$ be distributive set functions with ${\mathbb{D}}$ infinitesimal and $\nu$ positive.
If $S \in {\mathcal{A}}$ is a closed and bounded non-empty set and $g$ is the strict derivative of $\mu$ with respect to $\nu$ on $S$, then \begin{equation} \label{mediafor} \inf_{S} g\leq \frac{\mu (A)}{\nu(A)}\leq \sup_{S} g \end{equation} for all $A\in {\mathcal{A}}$ with $A\subset S$ and $\nu (A)>0$.\end{theorem} In the case ${\mathbb{D}}={\mathbb{D}}_{\textrm{int}}$, \textsc{Peano} proves this fundamental theorem by applying Theorem \ref{teo-F2} to the semi-distributive families ${\mathcal{F}}_a:=\{A\in{\mathcal{A}}\, :\,\mu(A)>a\, \nu(A)\}$ and ${\mathcal{G}}_a:=\{A\in{\mathcal{A}}\, :\,\mu(A)<a\, \nu(A)\}$, for real numbers $a$. Observe that (\ref{mediafor}) amounts to (\ref{Hahn_1})-(\ref{Hahn_2}) and also, indirectly, to (\ref{hom})-(\ref{sum}). In \emph{Applicazioni geometriche}, Theorem \ref{teorema-fondamentale} is followed by three corollaries, which we summarize in the following: \begin{corollary} \label{cor-fond} \cite[(1887) p.\,171]{peano87} Under the same hypotheses as in the previous theorem: \begin{enumerate} \item \labelpag{cor-1} if the strict derivative $\frac{d\,\mu}{d\,\nu}$ is a constant $b$ on $S$, then $\mu (A)=b\, \nu(A)$, for all $A\in {\mathcal{A}}$ with $A\subset S$; \item \labelpag{cor-2} if the strict derivative $\frac{d\,\mu}{d\,\nu}$ vanishes at every point of $S$, then $\mu(A)=0$, for all $A\in {\mathcal{A}}$ with $A\subset S$; \item \labelpag{due-mis} if two distributive set functions have equal strict derivatives with respect to $\nu$ on $S$, then they are equal on subsets of $S$ belonging to ${\mathcal{A}}$.\,\footnote{ It is evident that properties (\ref{cor-1})-(\ref{due-mis}) are equivalent. To prove (\ref{due-mis}), \textsc{Peano} shows that the strict derivative of a sum of two distributive set functions is the sum of their derivatives.
} \end{enumerate} \end{corollary} The following fundamental result of \textsc{Peano} points out the difference between his approach and the approaches of both \textsc{Cauchy} and \textsc{Lebesgue} (1910). \begin{theorem} Under the same hypotheses as in the previous theorem, if the strict derivative of $\mu$ with respect to $\nu$ exists on $S$, then it is continuous on $S$. \end{theorem} The importance of these results is emphasized in \emph{Applicazioni geometriche} by a large number of evaluations of derivatives of distributive set functions. As a consequence of the existence of the strict derivative, \textsc{Peano} gives, for the first time, several examples of measurable sets. The most significant examples, observations and results are listed below. \begin{enumerate} \item \labelpag{hypo-der} {\it Measurability of the hypograph of a continuous function} \cite[(1887) pp.\,172-174]{peano87}. Let $f$ be a continuous positive real function defined on an interval $[a,b]$, let ${\mathcal{A}}$ be the family of all sub-intervals of $[a,b]$ and let $\nu$ be the Euclidean measure on $1$-dimensional intervals. Define $\mu_f : {\mathcal{A}} \to {\mathbb{R}}$ on every $A$ belonging to ${\mathcal{A}}$, by the inner (respectively, the outer) $2$-dimensional measure (in the sense of Peano-Jordan) of the \emph{positive-hypograph} of $f$, restricted to $A$.\,\footnote{By \emph{positive-hypograph} of $f$ restricted to $A$ we mean the set $\{(x,y)\in [a,b]\times {\mathbb{R}}_+ : x\in A \text{ and } y\leq f(x)\}$, where ${\mathbb{R}}_+ := \{ x\in {\mathbb{R}}: x\ge 0\}$.} In any case, independently of the choice of inner or outer measure, we have that $\mu_f$ and $\nu$ are distributive set functions with respect to ${\mathbb{D}}(a,b)$, and that $\frac{{\mathrm{d}} \mu_f}{{\mathrm{d}} \nu}(x)=f(x)$ for every $x \in [a,b]$.
From (\ref{due-mis}) of Corollary \ref{cor-fond} it follows that the inner measure of the positive-hypograph of the continuous function $f$ coincides with its outer measure; therefore it is measurable in the sense of Peano-Jordan. \item Analogously, \textsc{Peano} considers continuous functions of two variables and the volume of the \emph{positive-hypograph} \cite[(1887) p.\,175]{peano87}. \item \labelpag{area-star} \emph{Area of a plane star-shaped subset delimited by a continuous closed curve} \cite[(1887) pp.\,175-176]{peano87}. Consider a continuous closed curve that can be described in polar coordinates in terms of a continuous function $\tilde\rho:[0,2\pi] \to {\mathbb{R}}_+$, with $\tilde\rho(0)=\tilde\rho(2\pi)$. Let ${\mathcal{A}}$ be the family of all subintervals of $[0,2\pi]$; and for every $A\in{\mathcal{A}}$, let $\nu(A)$ denote the Euclidean measure of the area of the circular sector $\{(\rho \cos(\theta),\rho\sin(\theta)) : \theta \in A, \rho\in [0,1]\}$. Moreover, let $\mu(A)$ denote the inner (or outer, indifferently) Peano-Jordan $2$-dimensional measure of the set $\{(\rho\cos(\theta),\rho\sin(\theta)) : \theta \in A, \rho\in [0,\tilde\rho (\theta)]\}$. Then the strict derivative $\frac{{\mathrm{d}} \mu}{{\mathrm{d}} \nu}(\theta )$ is equal to $\tilde\rho^2 (\theta)$. Since this derivative does not depend on the choice of inner or outer measure, the Peano-Jordan measurability of plane star-shaped sets delimited by continuous closed curves follows. \item Analogously, \textsc{Peano} considers the volume of a star-shaped set bounded by a simple continuous closed surface \cite[(1887) p.\,177]{peano87}. \item \labelpag{cav-pri}{\it Cavalieri's principle between a prism and a spatial figure} \cite[(1887) pp.\,177-179]{peano87}. Consider a straight line $r$ in three-dimensional space, an unbounded cylinder $P$ parallel to $r$ with polygonal section, and a spatial figure $F$. Let $\pi_x$ denote the plane perpendicular to $r$ at the point $x\in r$.
Assume Peano-Jordan measurability of all sections of $F$ perpendicular to $r$, namely $$ \mu_e(\partial F\cap \pi_x)=0 \quad \text{ for all } x \in r \leqno \quad \qquad (*)$$ where $\mu_e$ denotes $2$-dimensional Peano-Jordan outer measure and $\partial F$ denotes the boundary of $F$. Let ${\mathcal{A}}$ be the family of all segments of $r$. Given a set $A\in{\mathcal{A}}$, let $\mu (A)$ denote the outer (or inner, indifferently) $3$-dimensional measure of the set $\cup_{x\in A}(F\cap \pi _x)$, and let $\nu (A)$ denote the Peano-Jordan $3$-dimensional measure of the set $\cup_{x\in A}(P\cap \pi_x)$. The set functions $\mu$ and $\nu$ are distributive with respect to the family ${\mathbb{D}}(r)$ of interval-decompositions of $r$ and $$\frac{{\mathrm{d}} \mu}{{\mathrm{d}} \nu}(x )=\frac{\mu_e(F\cap \pi _x)}{\mu_e(P\cap \pi _x)} \quad \text{for every } x \in r.$$ Since this derivative does not depend on the choice of the inner or outer measure involved in defining $\mu$, the Peano-Jordan measurability of the spatial figure $F$ follows. \item {\it Cavalieri's principle between two spatial figures} \cite[(1887) p.\,180]{peano87}. Consider two spatial figures $F$ and $G$ such that all their sections with planes perpendicular to a given straight line $r$ are Peano-Jordan measurable. Let ${\mathcal{A}}$ be the family of all segments of $r$. Given a set $A\in{\mathcal{A}}$, let $\mu(A)$ and $\nu(A)$ denote the outer (or inner, indifferently) Peano-Jordan $3$-dimensional measures of the sets $\cup_{x\in A}(F\cap \pi _x)$ and $\cup_{x\in A}(G\cap \pi _x)$, respectively.
The set functions $\mu$ and $\nu$ are distributive with respect to the family ${\mathbb{D}}(r)$ of interval-decompositions of $r$ and $$\frac{{\mathrm{d}} \mu}{{\mathrm{d}} \nu}(x)=\frac{\mu_e(F\cap \pi _x)}{\mu_e(G\cap \pi _x)} \quad \text{for every } x \in r.$$ Hence the classical Cavalieri principle follows from (\ref{cor-1}): two figures whose corresponding sections have equal areas have the same volume. \item\labelpag{cav-3} {\it Cavalieri's principle for $3$-dimensional figures with respect to one-dimensional sections} \cite[(1887) p.\,180]{peano87}. Consider a plane $\pi$. Let ${\mathcal{A}} $ be the family of all rectangles of $\pi$ and let $r_x$ be the straight line perpendicular to $\pi$ at $x\in\pi$. Moreover, consider a spatial figure $F$ such that $$ \mu_e(\partial F\cap r_x)=0 \quad \text{ for every } x \in \pi \leqno \quad \qquad (**) $$ where $\mu_e$ denotes the Peano-Jordan $1$-dimensional outer measure and $\partial F$ denotes the boundary of $F$. Given a set $Q\in{\mathcal{A}}$, let $\mu (Q)$ denote the outer (or inner, indifferently) measure of the set $\cup_{x\in Q}(F\cap r_x)$, and $\nu(Q)$ denote the usual elementary measure of $Q$. Then $\mu$ and $\nu$ are distributive with respect to interval-decompositions of rectangles of $\pi$ and $$\frac{{\mathrm{d}} \mu}{{\mathrm{d}} \nu}(x )= \mu_e(F\cap r_x) \quad \text{ for every } x \in \pi.$$ \item \labelpag{cav-pri-pia} {\it Cavalieri's principle for $2$-dimensional figures} \cite[(1887) p.\,180]{peano87}. Analogously to (\ref{cav-pri}), \textsc{Peano} considers Cavalieri's principle for planar figures. \item {\it Derivative of the length of an arc} \cite[(1887) p.\,181]{peano87}.
In order to compare the length of an arc with the length of its orthogonal projection on a straight line $r$, \textsc{Peano} assumes that the orthogonal projection is bijective on a segment $\rho$ of $r$, and that the arc can be parametrized through a function with a continuous non-null derivative.\,\footnote{The requirement that the derivative of the arc with respect to a parameter be continuous and non-null is expressed by \textsc{Peano} in geometrical terms, namely by requiring that ``the tangent straight line exists at every point $P$ of the arc, and it is the limit of the straight lines passing through two points of the arc, when they tend to $P$''. \textsc{Peano} was aware that these geometrical conditions are implied by the existence of a parametrization with a continuous non-null derivative \cite[(1887) p.\,59, 184]{peano87}. } Let ${\mathcal{A}}$ be the family of all closed bounded segments of $\rho$. For every segment $A \in {\mathcal{A}}$, let $\mu(A)$ denote the length of the arc whose orthogonal projection over $r$ is $A$ and let $\nu(A)$ denote the length of $A$. Then $$\frac{{\mathrm{d}} \mu}{{\mathrm{d}} \nu}(x )=\frac{1}{\cos\theta_x} \leqno \qquad \quad (***) $$ where $\theta_x$ is the angle between $r$ and the straight line that is tangent to the arc at the point (of the arc) corresponding to $x \in \rho$.\,\footnote{ Of course, to avoid $\cos\theta_x = 0$ along the arc, \textsc{Peano} assumes that the tangent straight line at every point of the arc is not orthogonal to $r$. } \item {\it Derivative of the area of a surface} \cite[(1887) pp.\,182-184]{peano87}. By adapting the previous argument, \textsc{Peano} shows that the strict derivative between the area of a surface and its projection on a plane is given by (***), where $\cos \theta$ is the cosine of the angle between the tangent plane and the projection plane.
\end{enumerate} \section{Distributive set functions: integral and strict derivative}\label{sez-massdensity} \textsc{Peano} also introduces the notion of integral with respect to a positive distributive set function. The \emph{proper integral} of a bounded function $\rho $ on a set $A\in {\mathcal{A}}$ with respect to a positive distributive set function $\nu \colon ({\mathcal{A}}, {\mathbb{D}}) \to {\mathbb{R}}$ is denoted by $\int _A\rho \,{\mathrm{d}}\nu$ and is defined as the real number such that, for any ${\mathbb{D}}$-decomposition $\{A_i\}_{i=1}^m$ of the set $A$, one has $$ \int _A\rho \, {\mathrm{d}}\nu \ge \rho_1'\nu (A_1)+\rho_2'\nu (A_2)+\dots+\rho_m'\nu (A_m)$$ $$ \int _A\rho \,{\mathrm{d}}\nu \le \rho_1''\nu (A_1)+\rho_2''\nu (A_2)+\dots+\rho_m''\nu (A_m)$$ where $\rho_1',\rho_2',\dots,\rho_m'$ (respectively $\rho_1'',\rho_2'',\dots,\rho_m''$) are the numbers defined by \begin{equation} \label{darboux} \rho'_i :=\inf_{x\in A_i}\rho(x) \quad\text{and} \quad \rho_i'':=\sup_{x\in A_i}\rho (x), \end{equation} for all $i=1,\dots,m$.\,\footnote{This clear, simple and general definition of the integral with respect to an abstract positive distributive set function remained unnoticed until 1915, when \textsc{Fr\'echet} re-discovered it in the setting of ``finitely additive'' measures \cite[(1915)]{frechet1915}. } \textsc{Peano} also defines the {\it lower} integral $ \underline\int _{A}\rho \, {\mathrm{d}} \nu$ and the {\it upper} integral $ \overline\int _{A}\rho \, {\mathrm{d}} \nu$ of a bounded function $\rho$ on a set $A\in{\mathcal{A}}$ by $$ \underline\int _{A}\rho \, {\mathrm{d}} \nu :=\sup s' \quad \text{ and } \quad \overline\int _{A}\rho \, {\mathrm{d}} \nu := \inf s''$$ where $s'= \rho_1'\nu (A_1)+\rho_2'\nu (A_2)+\dots+\rho_m'\nu (A_m)$ and $s''=\rho_1''\nu (A_1)+\rho_2''\nu (A_2)+\dots+\rho_m''\nu (A_m)$, with $\rho_i'$ and $\rho_i''$ defined as in (\ref{darboux}) and $\{A_i\}_{i=1}^m$ running over the ${\mathbb{D}}$-decompositions of $A$.
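Peano's lower and upper sums can be illustrated numerically. The following sketch (ours, not in Peano's text) takes ${\mathcal{A}}$ to be the closed subintervals of $[0,1]$, the decompositions into finitely many subintervals, $\nu =$ length and $\rho(x)=x^2$; the two sums bracket the proper integral $1/3$ and squeeze together as the decomposition is refined.

```python
# A numerical sketch (ours, not in Peano's text) of the lower and upper
# geometric sums: for a bounded rho on [a, b] and a positive distributive
# set function nu on intervals, form s' = sum inf(rho) * nu(A_i) and
# s'' = sum sup(rho) * nu(A_i) over a decomposition into n subintervals.

def darboux_sums(rho, nu, a, b, n):
    """Lower and upper sums of rho w.r.t. nu over an n-piece decomposition."""
    pts = [a + (b - a) * i / n for i in range(n + 1)]
    lower = upper = 0.0
    for left, right in zip(pts, pts[1:]):
        # inf/sup estimated by sampling; exact for monotone rho like x^2 here
        samples = [rho(left + (right - left) * j / 20) for j in range(21)]
        lower += min(samples) * nu(left, right)
        upper += max(samples) * nu(left, right)
    return lower, upper

# rho(x) = x^2 with nu = length on [0, 1]: the sums bracket the proper
# integral 1/3 and squeeze together as the decomposition is refined
lo, hi = darboux_sums(lambda x: x * x, lambda l, r: r - l, 0.0, 1.0, 1000)
```

With $1000$ subintervals the gap between the two sums is of order $10^{-3}$, while both stay on opposite sides of the proper integral.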
In \textsc{Peano}'s terminology, the integrals defined above are called \emph{geometric integrals}. \textsc{Peano} stresses the analogy between these integrals and the usual \emph{elementary integral} $\int_a^b f(x) \, {\mathrm{d}} x$ of functions $f$ defined over intervals of ${\mathbb{R}}$. Using property \eqref{dec_1} of ${\mathbb{D}}$-decompositions, \textsc{Peano} shows that the lower integral is always less than or equal to the upper integral. When these values coincide, their common value is called the proper integral and is denoted by $\int_A\rho\, {\mathrm{d}}\nu$. Moreover, using properties \eqref{dec_2} and \eqref{dec_3} of ${\mathbb{D}}$-decompositions, \textsc{Peano} shows that the lower integral $A \mapsto \underline\int _{A}\rho \, {\mathrm{d}} \nu $ and the upper integral $A \mapsto \overline\int _{A}\rho \, {\mathrm{d}} \nu$ are distributive set functions on ${\mathcal{A}}$ with respect to the same family ${\mathbb{D}}$ of decompositions \cite[(1887) Theorem\,I, p.\,187]{peano87}. When $\rho$ is continuous, using the ``infinitesimality'' property of ${\mathbb{D}}$ (see Definition \ref{def_inf}), \textsc{Peano} shows that the derivative of both the lower and the upper integral with respect to $\nu$ is $\rho$ \cite[(1887) Theorem\,II, p.\,189]{peano87}; consequently, the proper integral $\int_A\rho\, {\mathrm{d}}\nu$ of a continuous $\rho$ exists whenever $A$ is closed and bounded \cite[(1887) Cor. of Theorem\,II, p.\,189]{peano87}. The definitions introduced above allow \textsc{Peano} to realize the mass-density paradigm, i.e., to prove that it is possible to recover a distributive set function $\mu$ as the integral of its strict derivative $\frac{{\mathrm{d}}\mu}{{\mathrm{d}}\nu}$ with respect to a positive distributive set function $\nu$.
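The mass-density paradigm can be made concrete in the simplest setting: take $\nu =$ length on intervals, a continuous density $\rho$, and $\mu(A)=\int_A\rho\,{\mathrm{d}}\nu$. The strict derivative is the limit of the ratio $\mu(A)/\nu(A)$ over small intervals $A$ approaching $x$, which, in Peano's definition, need not contain $x$. A minimal numerical sketch (our construction, with $\rho(x)=1+x$ as an assumed example density):

```python
# Numerical illustration (our sketch, not Peano's computation) of the
# strict derivative: with nu = length and mu(A) = integral over A of a
# continuous density rho, the ratio mu(A)/nu(A) tends to rho(x) for ANY
# small interval A approaching x -- here A lies strictly to the right of
# x and never contains it, which is the feature of Peano's definition
# that forces continuity of the derivative.

def mu(a, b, rho, n=1000):
    """Midpoint approximation of the 'mass' int_a^b rho(t) dt."""
    h = (b - a) / n
    return sum(rho(a + (i + 0.5) * h) for i in range(n)) * h

rho = lambda x: 1.0 + x                           # a continuous density
x0 = 0.5
ratios = [mu(x0 + eps, x0 + 2 * eps, rho) / eps   # nu(A) = eps
          for eps in (0.1, 0.01, 0.001)]
# ratios approach rho(x0) = 1.5 as eps -> 0
```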
\textsc{Peano}'s results can be formulated as the following \begin{theorem}[\textsc{Peano}'s Theorem on the strict derivative of distributive set functions, see {\cite[(1887) Theorem\,14, p.\,171, Theorems\,II, III, p.\,189]{peano87}}]\label{peanodev} Let $\mu,\nu : ({\mathcal{A}}, {\mathbb{D}}) \to {\mathbb{R}}$ be distributive set functions, with $\nu$ positive and ${\mathbb{D}}$ infinitesimal. Let $S\in{\mathcal{A}}$ be a closed bounded set and $\rho:S\to{\mathbb{R}}$ a function. The following properties are equivalent: \begin{enumerate} \item $\rho$ is the strict derivative $\frac{{\mathrm{d}}\mu}{{\mathrm{d}}\nu}$ of $\mu$ with respect to $\nu$ on $S$; \item $\rho$ is continuous and $\mu (A)=\int_A\rho \, {\mathrm{d}} \nu$ for any $A\subset S$, $A\in{\mathcal{A}}$. \end{enumerate} \end{theorem} \textsc{Peano} applies Theorem \ref{peanodev} to the list of examples of strict derivatives of distributive set functions of \S\,\ref{sez-derivata} and obtains the following results. \begin{enumerate} \item \emph{Fundamental theorem of integral calculus for continuous functions} \cite[(1887) pp.\,191-193]{peano87}. Consider a continuous function $f$ on ${\mathbb{R}}$ and let $F$ be a primitive of $f$. Define $\mu$ and $\nu$ over the family ${\mathcal{A}}$ of closed bounded intervals $[a,b]$ of ${\mathbb{R}}$ by $\mu([a,b]):=F(b) - F(a)$ and $\nu([a,b]):=b-a$.
Observe that both $\mu$ and $\nu$ are distributive set functions with respect to ${\mathbb{D}}({\mathbb{R}})$ and $$\frac{{\mathrm{d}}\mu}{{\mathrm{d}}\nu}(x) = \lim_{\begin{subarray}{c}a,b\to x \\ a\neq b\end{subarray}} \frac{F(b) - F(a)}{b-a} = f(x) $$ since $F$ has a continuous derivative.\,\footnote{ \textsc{Peano} observes that continuity of the derivative of $F$ is a necessary and sufficient condition for the existence of $\frac{{\mathrm{d}}\mu}{{\mathrm{d}}\nu}$.} Therefore, by Theorem \ref{peanodev}, \textsc{Peano} obtains $$F(b)-F(a)=\mu([a,b]) =\int_{[a,b]} f\, {\mathrm{d}} \nu = \int_a^b f(x)\, {\mathrm{d}} x \,.$$ \item {\it Calculus of an integral as a planar area} \cite[(1887) pp.\,193-195]{peano87}. The elementary integral of a continuous positive function is the Peano-Jordan measure of the positive hypograph of the function. This is an immediate application of Theorem \ref{peanodev} to the setting (\ref{hypo-der}). \item {\it Cavalieri's formula for planar figures} \cite[(1887) p.\,195]{peano87}. Let $C\subset{\mathbb{R}}^2$ and define $C_x :=\{y\in{\mathbb{R}}\, :\; (x,y)\in C\}$ and $(\partial C)_x := \{y\in{\mathbb{R}}\, :\; (x,y)\in \partial C\}$ for every $x \in {\mathbb{R}}$. Assume that for any $x$ the set $(\partial C)_x$ has vanishing outer measure. As a consequence of Theorem \ref{peanodev} and the two-dimensional version of \emph{Cavalieri's principle} (\ref{cav-pri-pia}) (see \cite[(1887) p.\,180]{peano87}), the measure of the part of the figure $C$ bounded by the abscissas $a$ and $b$ is equal to $$\int_a^b \mu_e (C_x) \, {\mathrm{d}} x $$ where $\mu_e$ denotes the outer Peano-Jordan one-dimensional measure. \item {\it Area of a plane star-shaped subset delimited by a continuous closed curve} \cite[(1887) p.\,199]{peano87}.
In the setting of example (\ref{area-star}), \textsc{Peano} shows that the area of the sector between the angles $\theta_0$ and $\theta_1$, delimited by a curve described in polar coordinates by $\rho$, is equal to $$\frac{1}{2}\int_{\theta_0}^{\theta_1} \rho(\theta)^2\,{\mathrm{d}} \theta \,.$$ \item {\it Cavalieri's formula for volumes} \cite[(1887) p.\,221]{peano87}. In the setting (\ref{cav-3}), define $F_x := \{(y,z) \in{\mathbb{R}}^2 : (x,y,z)\in F\}$ and $(\partial F)_x := \{(y,z)\in{\mathbb{R}}^2 : (x,y,z)\in \partial F\}$. Assume that for any $x$ the set $(\partial F)_x$ has vanishing outer measure. By Theorem \ref{peanodev}, \textsc{Peano} shows that the volume of the part of the figure $F$ delimited by the planes $x=a$ and $x=b$ is equal to $$\int_a^b \mu_e(F_x)\,{\mathrm{d}} x $$ where $\mu_e$ denotes the outer Peano-Jordan two-dimensional measure. \end{enumerate} \section{Coexistent magnitudes in Lebesgue and Peano's derivative}\label{sez-comments} \textsc{Lebesgue} gives a final pedagogical \footnote{\textsc{Lebesgue} says in \cite[(1931) p.\,174]{lebesgue1931}: \begin{quote} [\dots] depuis trente ans [d' enseignement] [\dots] on ne s'\'etonnera pas que l'id\'ee me soit venue d'\'ecrire des articles de nature p\'edagogique; si j'ose employer ce qualificatif que suffit ordinairement pour faire fuir les math\'ematiciens. \translation{% [\dots] in the thirty years [of teaching] [\dots] it is not at all surprising that the idea should occur to me of writing articles on a pedagogical vein; if I may use an expression which usually puts mathematicians to flight. (transl.
\textsc{May} \cite[(1966) p.\,10]{lebesgue_may})} \end{quote} } exposition of his measure theory in \emph{La mesure des grandeurs} \cite[(1935) p.\,176]{lebesgue1935}, by referring directly to \textsc{Cauchy}'s \emph{Coexistent magnitudes}:\,\footnote{The five parts of the essay \emph{La mesure des grandeurs} were published in \emph{L'Enseignement math\'ematique} during the years 1931-1935. An English translation, \emph{Measure and the Integral}, is due to Kenneth O.~May \cite[(1966)]{lebesgue_may}.} \begin{quote} La th\'eorie des grandeurs qui constitue le pr\'ec\'edent chapitre avait \'et\'e pr\'epar\'ee par des recherches de Cauchy, sur ce qu'il appelait des grandeurs concomitantes [\emph{sic}], par les travaux destin\'es \`a \'eclaircir les notions d'aire, de volume, de mesure [\dots].\,\footnote{\translation{% The theory of magnitudes forming the subject of the preceding chapter was prepared by researches of Cauchy on what he called concomitant magnitudes, by studies destined to clarify the concepts of area, volume, and measure [\dots] (transl. \textsc{May} \cite[(1966) p.\,138]{lebesgue_may})}} \end{quote} \textsc{Lebesgue} is aware of the obscurity of the concepts that are present in \textsc{Cauchy}'s \emph{Coexistent magnitudes}, starting with the meaning of the term \emph{magnitude} itself.
In this respect, in order to put the ideas of \textsc{Cauchy} on a solid ground, \textsc{Lebesgue} was compelled to pursue an approach similar to that of \textsc{Peano}: in fact he defines a ``magnitude'' as a set function on a family of sets ${\mathcal{A}}$, requires infinitesimality of ${\mathcal{A}}$ (in the sense that every element of ${\mathcal{A}}$ can be \emph{r\'eduit \`a un point par diminutions successives}), and additivity properties that he expresses in \emph{La mesure des grandeurs} \cite[(1934) p.\,275]{lebesgue1934} in these words: \begin{quote} Si l'on divise un corps $C$ en un certain nombre de corps partiels $C_1, C_2,$ $ \dots, C_p$, et si la grandeur $G$ est, pour ces corps, $g$ d'une part, $g_1, g_2, \dots, g_p$ d'autre part, on doit avoir: $g=g_1 + g_2 + \dots + g_p$.\,\footnote{\translation{% If a body $C$ is partitioned into a certain number of sub-bodies $C_1, C_2, \dots, C_p$ and if for these bodies the magnitude $G$ is $g$ on the one hand and $g_1, g_2, \dots, g_p$ on the other, we must have $g= g_1 + g_2 + \cdots + g_p$. (transl. \textsc{May} \cite[(1966) p.\,129]{lebesgue_may})} \textsc{Lebesgue} observes that in order to make this condition rigorous, it would be necessary to give a precise meaning to the words \emph{corps} and \emph{partage de la figure totale en parties} \cite[(1934) p.\,275-276]{lebesgue1934}. Moreover he observes that \emph{diviser un corps} may be interpreted in different ways \cite[(1934) p.\,279]{lebesgue1934}.} \end{quote} In \emph{La mesure des grandeurs} \textsc{Lebesgue} considers the operations of integration and differentiation, presenting these topics in a new form with respect to his fundamental and celebrated paper \emph{L'int{\'e}gration des fonctions discontinues} \cite[(1910)]{lebesgue1910}. \textsc{Lebesgue}'s theory of differentiation of 1910 concerns absolutely continuous $\sigma$-additive measures on Lebesgue measurable sets.
On the contrary, twenty-five years later in \emph{La mesure des grandeurs} of 1935 \begin{itemize} \item $\sigma$-additive set functions are replaced by continuous \footnote{It is not easy to give in a few words a definition of the concept of continuity according to \textsc{Lebesgue}: such a continuity is based on a convergence of sequences of sets that in the relevant cases coincides with the convergence in the sense of Hausdorff. We recall that a sequence of sets $\Delta_n$ \emph{converges to $\Delta$ in the sense of Hausdorff} if for all $\epsilon >0$ there exists $n_0$ such that $\Delta_n\subset B_\epsilon (\Delta)$ and $\Delta\subset B_\epsilon (\Delta_n)$ for all $n>n_0$, where $B_\epsilon (A):= \{x \in {\mathbb{R}}^n : \text{ there exists } a \in A \text{ such that } \|x-a\|<\epsilon \}$. Therefore, a set function $f$ is said to be \emph{continuous} if for any $\Delta_n$ and $\Delta$ Peano-Jordan measurable sets, we have that $\lim_{n\to \infty} f (\Delta_n) = f(\Delta)$, whenever $\Delta_n$ converges to $\Delta$ in Hausdorff sense. } additive \footnote{\textsc{Lebesgue} writes in \cite[(1935) p.\,185]{lebesgue1935}: \begin{quote} [\dots] nous supposerons cette fonction [$f$] \emph{additive}, c'est-a-dire telle que, si l'on divise $\Delta$ en deux domaines quarrables $\Delta_1$ et $\Delta_2$ on ait $f(\Delta) = f(\Delta_1) + f(\Delta_2)$. \translation{% [\dots] let us assume that this function is \emph{additive}; that is, it is such that, if we partition $\Delta$ into two quadrable domains $\Delta_1$ and $\Delta_2$, we have $f(\Delta) = f(\Delta_1) + f(\Delta_2)$. (transl. 
\textsc{May} \cite[(1966) p.\,146]{lebesgue_may})} \end{quote} } measures; \item absolutely continuous measures become set functions with bounded-derivative~\footnote{A set function $f$ has a \emph{bounded-derivative} with respect to the Peano-Jordan $n$-dimensional measure ${{\mathrm{vol}}}_n$ if there exists a constant $M$ such that $|f(\Delta)|\leq M \, {{\mathrm{vol}}}_n(\Delta)$ for any Peano-Jordan measurable set $\Delta$. A set function with bounded-derivative is called \emph{uniformly Lipschitzian} by \textsc{Picone} \cite[(1923) vol.\,2, p.\,467]{picone}. } (\emph{\`a nombres d\'eriv\'es born\'es}); \item Lebesgue measurable sets are replaced by Peano-Jordan measurable subsets of a given bounded set. \end{itemize} Let $K$ be a bounded closed subset of the Euclidean space ${\mathbb{R}}^n$, let ${\mathcal{A}}_K$ be the family of Peano-Jordan measurable (\emph{quarrables}) subsets of $K$ and let $V$ be a positive, continuous, additive set function on ${\mathcal{A}}_K$ with bounded-derivative. \textsc{Lebesgue} then introduces a definition of derivative. The \emph{uniform-derivative} (\emph{d\'eriv\'ee \`a convergence uniforme}) $\varphi$ of a set function $f$ with respect to $V$ is defined as the function $\varphi:K\to {\mathbb{R}}$ such that, for every $\epsilon>0$, there exists $\eta >0$ such that \begin{equation} \Big|\frac{f(\Delta)}{V(\Delta)}-\varphi (x)\Big|<\epsilon \end{equation} for all $x\in K$ and $\Delta \in {\mathcal{A}}_K$ with $x\in \Delta \subset B_\eta (x)$. It is clear that \textsc{Lebesgue}'s new notion of uniform-derivative is closely related to \textsc{Peano}'s. In fact, \textsc{Lebesgue} observes that the uniform-derivative is continuous whenever it exists; moreover, he defines the integral \begin{equation} \int_K \varphi \,{\mathrm{d}} V \end{equation} of a \emph{continuous function} $\varphi$ with respect to $V$.
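The uniformity in this definition can be tested numerically in the simplest one-dimensional situation. The sketch below is our construction, not Lebesgue's, with the assumed choices $V =$ length on $[0,1]$, $\varphi = \sin$ and $f(\Delta)=\int_\Delta \varphi\,{\mathrm{d}} V$: the error of the ratio $f(\Delta)/V(\Delta)$ against $\varphi(x)$, maximized over all base points $x$ simultaneously, shrinks with the diameter $\eta$ of $\Delta$.

```python
import math

# A numerical check (our construction, not Lebesgue's) of the
# uniform-derivative: take V = length on [0, 1], a continuous phi = sin,
# and f(Delta) = int_Delta phi dV.  The error |f(Delta)/V(Delta) - phi(x)|,
# maximized over ALL base points x at once, shrinks with the diameter eta
# of Delta -- this simultaneous smallness is the uniformity in the
# displayed inequality of Lebesgue's definition.

def f(a, b, n=200):
    """Midpoint approximation of f([a, b]) = int_a^b sin(t) dt."""
    h = (b - a) / n
    return sum(math.sin(a + (i + 0.5) * h) for i in range(n)) * h

def worst_error(eta, grid=100):
    """sup over x in [0, 1-eta] of |f([x, x+eta])/eta - sin(x)|."""
    xs = [i * (1.0 - eta) / grid for i in range(grid + 1)]
    return max(abs(f(x, x + eta) / eta - math.sin(x)) for x in xs)

errs = [worst_error(eta) for eta in (0.1, 0.01)]   # decreasing with eta
```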
His definition of the integral \cite[(1935) pp.\,188-191]{lebesgue1935} is rather intricate compared with that of \textsc{Peano}. It is worth noticing that \textsc{Lebesgue} recognizes the relevance of the notion of an integral with respect to set functions. \textsc{Lebesgue}, not acquainted with \textsc{Peano}'s earlier contributions, assigns the priority of this notion to \textsc{Radon} \cite[(1913)]{radon1913}. On the other hand, \textsc{Lebesgue} notices that the integral with respect to set functions was already present in Physics \footnote{\textsc{Lebesgue} gives several examples of this. For instance, the evaluation of the quantity of heat necessary to increase the temperature of a body as the integral of the specific heat with respect to the mass.} and expresses his great surprise in recognizing in \textsc{Stieltjes}'s integral \cite[(1894)]{stieltjes} an instance of an integral with respect to set functions; Lebesgue writes \cite[(1926) p.\,69-70]{lebesgue1926}: \begin{quote} Mais son premier inventeur, Stieltj\`es, y avait \'et\'e conduit par des recherches d'analyse et d'arithm\'etique et il l'avait pr\'esent\'ee sous une forme purement analytique qui masquait sa signification physique; si bien qu'il a fallu beaucoup d'efforts pour comprendre et conna\^itre ce qui est maintenant \'evident. L'historique de ces efforts citerait les noms de F.~Riesz, H.~Lebesgue, W.H.~Young, M.~Fr\'echet, C. de la Vall\'ee Poussin; il montrerait que nous avons rivalis\'e en ing\'eniosit\'e, en perspicacit\'e, mais aussi en aveuglement.\,\footnote{\translation{% But its original inventor, Stieltjes, was led to it by researches in analysis and theory of numbers and he presented it in a purely analytical form which masked its physical significance, so much so that it required much effort to understand and recognize what is nowadays obvious. The history of these efforts includes the names of F.~Riesz, H.~Lebesgue, W.H.~Young, M.~Fr\'echet, C. de la Vall\'ee Poussin.
It shows that we were rivals in ingenuity, in insight, but also in blindness. (transl. \textsc{May} \cite[(1966) p.\,190]{lebesgue_may}) }} \end{quote} The first important theorem presented by \textsc{Lebesgue} is the following \begin{theorem} Let $K$ be a bounded closed subset of ${\mathbb{R}}^n$, $\varphi:K\to {\mathbb{R}}$ a continuous function and $V$ a positive additive continuous set function with bounded-derivative. Then the integral $\Delta \mapsto \int_\Delta \varphi \,{\mathrm{d}} V$, $\Delta \in {\mathcal{A}}_K$, is the unique additive set function with bounded-derivative which has $\varphi$ as uniform-derivative with respect to $V$.\,\footnote{The proof is rather lengthy, as \textsc{Lebesgue} included in it the definition of the integral as well as the mean value theorem.} \end{theorem} The main applications of this theorem, given by \textsc{Lebesgue} in \emph{La mesure des grandeurs} \cite[(1935) p.\,176]{lebesgue1935}, concern: \begin{enumerate} \item the proof that multiple integrals can be computed in terms of simple integrals; \item the formula of change of variables;\,\footnote{Lebesgue uses the implicit function theorem.} \item several formulae for oriented integrals (Green's formula, length of curves and area of surfaces). \end{enumerate} The uniform-derivative defined by \textsc{Lebesgue} is, as observed above, a continuous function, and coincides exactly with \textsc{Peano}'s strict derivative. Through a different and more difficult path \footnote{The exposition of 1935 is elementary, but lengthier and more difficult than that presented by \textsc{Lebesgue} in 1910. Surprisingly, the terms \emph{domain, decomposition, limit, additive, continuous} are used by \textsc{Lebesgue} in a rather loose way. } than \textsc{Peano}'s, \textsc{Lebesgue} rediscovers the importance of the continuity of the derivative. In \textsc{Lebesgue}'s works there are no references to the contributions of \textsc{Peano} concerning differentiation of set functions.
Several years before \emph{La mesure des grandeurs} of 1935, \textsc{Lebesgue} in \cite[(1926)]{lebesgue1926} outlines his contribution to the notion of integral. In the same paper he mentions \textsc{Cauchy}'s \emph{Coexistent magnitudes} in the setting of the derivative of measures. Moreover he cites \textsc{Fubini}'s and \textsc{Vitali}'s works of 1915 and 1916 (published by the Academies of Turin and of the Lincei) in the context of the general problem of primitive functions. More precisely, in 1915, the year of publication of \textsc{Peano}'s paper \emph{Le grandezze coesistenti} \cite{peano1915}, \textsc{Fubini} \cite[(1915)]{fubini1915b,fubini1915a} and \textsc{Vitali} \cite[(1915, 1916)]{vitali1915,vitali1916} introduce a definition of derivative of ``finitely additive measures'' \footnote{\textsc{Fubini}'s first paper \cite{fubini1915a} is presented by C.\,\textsc{Segre} at the Academy of Sciences of Turin on January 10, 1915. In the same session, \textsc{Peano}, a Member of the Academy, presents a multilingual dictionary and a paper written by one of his students, \textsc{Vacca}. \textsc{Segre}, on April 11, 1915, presents, as a Member, a second paper of \textsc{Fubini} \cite{fubini1915b} to the {\it Accademia dei Lincei}. In the session of the Academy of Turin of June 13, 1915, \textsc{Peano} presents his paper \emph{Le grandezze coesistenti}. Moreover, \textsc{Segre} presents two papers by \textsc{Vitali} \cite[(1915)]{vitali1915} and \cite[(1916)]{vitali1916} to the Academy of Turin on November 28, 1915 and to the Academy of the Lincei on May 21, 1916, respectively. There is a rich correspondence between \textsc{Vitali} and \textsc{Fubini}. In the period March-May 1916 \textsc{Fubini} sends three letters to \textsc{Vitali} (transcribed in the \emph{Selected papers} of \textsc{Vitali} \cite[pp.\,519-520]{vitali-opere}), concerning differentiation of finitely additive measures and related theorems.
In particular \textsc{Fubini} suggests that \textsc{Vitali} quote \textsc{Peano}'s paper \cite[(1915)]{peano1915} and compare alternative definitions of derivative. In the \emph{Selected papers} of \textsc{Vitali} it is also possible to find six letters by \textsc{Peano} to \textsc{Vitali}. Among them, there is a letter of March 21, 1916 concerning \textsc{Cauchy}'s coexistent magnitudes; \textsc{Peano} writes: \begin{quote} Grazie della sua nota \cite[(1915)]{vitali1915}. Mi pare che la dimostrazione che Ella d\`a, sia proprio quella di Cauchy, come fu rimodernata da G. Cantor, e poi da me, e di cui trattasi nel mio articolo, Le grandezze coesistenti di Cauchy, giugno 1915, e di cui debbo avere inviato copia. \end{quote} [\![Thanks for your paper \cite[(1915)]{vitali1915}. In my opinion your proof coincides with the one given by Cauchy, as formulated by Cantor and by myself in my paper ``Coexistent magnitudes of Cauchy'' (June 1915), which I sent you.]\!] To our knowledge, \textsc{Fubini} \cite[(1915)]{fubini1915b,fubini1915a} and \textsc{Vitali} \cite[(1915, 1916)]{vitali1915,vitali1916} are not cited by other authors, with the exception of \textsc{Banach} \cite[(1924) p.\,186]{banach1924}, who refers to \textsc{Fubini} \cite[(1915)]{fubini1915b}. }, oscillating between definitions \emph{\`a la Cauchy} and \emph{\`a la Peano}. \textsc{Vitali}, in his second paper \cite{vitali1916}, refers to the \emph{Coexistent magnitudes} of \textsc{Cauchy}, and presents a comparison among the notions of derivative given by \textsc{Fubini}, himself, \textsc{Peano} and the one of \textsc{Lebesgue} of 1910, emphasizing the continuity of \textsc{Peano}'s strict derivative.
\textsc{Vitali} writes in \cite[(1916)]{vitali1916}: \begin{quote} Il Prof.\,G.\,Peano nella Nota citata [\emph{Le grandezze coesistenti}] e in un'altra sua pubblicazione anteriore [\emph{Applicazioni geometriche}], si occupa dei teoremi di Rolle e della media e ne indica la semplice dimostrazione nel caso in cui la derivata [della funzione di insieme $f$] in $P$ sia intesa come il limite del rapporto di $\frac{f(\tau)}{\tau}$, dove $\tau$ \`e un campo qualunque che pu\`o anche non contenere il punto $P$. L'esistenza di tale simile derivata finita in ogni punto porta difatti la continuit\`a [della derivata medesima].\,\footnote{\translation{% Prof.\,Peano, in the cited Paper [\emph{Le grandezze coesistenti}] and in a previous publication [\emph{Applicazioni geometriche}], deals with Rolle's and mean value theorems, pointing out a simple proof, valid in the case in which the derivative [of the set function $f$], at a given point $P$, is the limit of the ratio $\frac{f(\tau)}{\tau}$, where $\tau$ is a set that might not contain the point $P$. The existence of such a finite derivative at every point indeed implies the continuity [of the derivative itself].}} \end{quote} This shows that, already in 1926, \textsc{Lebesgue} could have been aware of \textsc{Peano}'s derivative and of its continuity.\,\footnote{One may ask how much \textsc{Lebesgue} was aware of the contributions of \textsc{Peano}. In many historical papers the comment of \textsc{Kennedy} \cite[(1980) p.\,174]{peano_vita}, a well known biographer of \textsc{Peano}, occurs: \begin{quote} Lebesgue acknowledged Peano's influence on his own development. \end{quote} In our opinion \textsc{Peano}'s influence on \textsc{Lebesgue} is relevant but sporadic. After reading \textsc{Lebesgue}'s works, we have the feeling that his knowledge of \textsc{Peano}'s contributions was restricted to two papers, on the definition of area and on Peano's curve.
} Undoubtedly, the contributions of \textsc{Peano} and \textsc{Lebesgue} have a pedagogical and mathematical relevance in formulating a definition of derivative having the property of continuity whenever it exists. Surprisingly, these contributions are little known. The notion of derivative of set functions is rarely presented and used in educational texts. An example is provided by the \emph{Lezioni di analisi matematica} of \textsc{Fubini}. There are several editions of these \emph{Lezioni}: starting with the second edition \cite[(1915)]{fubiniB1915}, \textsc{Fubini} introduces a derivative \emph{\`a la Peano} of additive set functions in order to build a basis for integral calculus in one or several variables. Nevertheless, in his \emph{Lezioni}, \textsc{Fubini} assumes continuity of this derivative as an additional property. Ironically, \textsc{Fubini} is aware that \textsc{Peano}'s derivative is continuous whenever it exists; this is clear from two letters of 1916 that he sent to \textsc{Vitali} \cite[p.\,518-520]{vitali-opere}; in particular, in the second letter, about \textsc{Peano}'s paper \emph{Grandezze coesistenti} \cite[(1915)]{peano1915}, he writes: \begin{quote} Sarebbe bene citare [l'articolo di] Peano e dire che, se la derivata esiste e per calcolarla in [un punto] $A$ si adottano anche dominii che tendono ad $A$, pur non contenendo $A$ all'interno, allora la derivata \`e continua.\,\footnote{[\![It would be important to cite the paper of Peano, saying that, whenever the derivative exists and its evaluation is performed by considering domains that approach $A$, without requiring that the point $A$ belongs to the domains themselves, then the derivative is continuous.% ]\!]} \end{quote} The notion of derivative of set functions is also presented in the textbooks \emph{Lezioni di analisi infinitesimale} of \textsc{Picone} \cite[(1923) vol.\,II, p.\,465--506]{picone}, in the \emph{Lezioni di analisi matematica} of \textsc{Zwirner} \cite [(1969),
pp.\,327-335]{zwirner} and in \emph{Advanced Calculus} of R.\,C. and E.\,F. \textsc{Buck} \cite[(1965)]{buck}. In the book of \textsc{Picone}, a definition of derivative \emph{\`a la Cauchy} of ``additive'' set functions is given;\,\footnote{Significant instances of additive set functions in the sense of \textsc{Picone} are the outer measure of Peano-Jordan on all subsets of ${\mathbb{R}}^n$ and the lower/upper integrals of functions with respect to arbitrary domains of integration \cite[(1923) vol.\,II, p.\,356-357, 370-371]{picone}. The family of decompositions that leads to the notion of additive set function in the sense of \textsc{Picone} is clearly defined on pages 356-357 of his book \cite{picone} and includes the families of decompositions (\ref{dec-picone}) and (\ref{dec-picone2}). } it represents an improvement on the definitions of \textsc{Cauchy}, \textsc{Fubini} and \textsc{Vitali}. Of course, his derivative is not necessarily a continuous function. Whenever the derivative is continuous, \textsc{Picone} states a fundamental theorem of calculus, and applies it to the change of variables in multiple integrals. In the book of \textsc{Zwirner} the notion of derivative \emph{\`a la Peano} of set functions is introduced, without mentioning \textsc{Peano} and, unfortunately, without providing any application. In the third book, R.\,C. and E.\,F. \textsc{Buck} introduce in a clear way a simplified notion of the uniform-derivative of \textsc{Lebesgue} (without mentioning him), and they apply it to obtain the basic formula for the change of variables in multiple integrals. \section{Appendix} All articles of \textsc{Peano} are collected in \emph{Opera omnia} \cite{peano_omnia}, a CD-ROM edited by C. S. \textsc{Roero}. Selected works of \textsc{Peano} were assembled and commented in \emph{Opere scelte} \cite{peano_opere} by \textsc{Cassina}, a pupil of \textsc{Peano}. For a few works there are English translations in \emph{Selected Works} \cite{peano_english}.
Regrettably, few of \textsc{Peano}'s papers have a public URL and are freely downloadable. For the reader's convenience, we provide a chronological list of some mathematicians mentioned in the paper, together with biographical sources. \texttt{Html} files with biographies of the mathematicians listed below with an asterisk are available at the University of St Andrews web-page \begin{center}\texttt{http://www-history.mcs.st-and.ac.uk/history/\{Name\}.html} \end{center} \textsc{Kepler}, Johannes (1571-1630)* \textsc{Cavalieri}, Bonaventura (1598-1647)* \textsc{Newton}, Isaac (1643-1727)* \textsc{Mascheroni}, Lorenzo (1750-1800)* \textsc{Cauchy}, Augustin L. (1789-1857)* \textsc{Lobachevsky}, Nikolai I. (1792-1856)* \textsc{Moigno}, Fran\c cois N.~M. (1804-1884), see \emph{Enc. Italiana}, Treccani, Roma, 1934 \textsc{Grassmann}, Hermann (1809-1877)* \textsc{Serret}, Joseph A. (1819-1885)* \textsc{Riemann}, Bernhard (1826-1866)* \textsc{Jordan}, Camille (1838-1922)* \textsc{Darboux}, Gaston (1842-1917)* \textsc{Stolz}, Otto (1842-1905)* \textsc{Schwarz}, Hermann A. (1843-1921)* \textsc{Cantor}, Georg (1845-1918)* \textsc{Tannery}, Jules (1848-1910)* \textsc{Harnack}, Carl (1851-1888), see May \cite[(1973) p.\,186]{may1973} \textsc{Stieltjes}, Thomas J. (1856-1894)* \textsc{Peano}, Giuseppe (1858-1932)*, see \cite{peano_vita} \textsc{Young}, William H.
(1863-1942)* \textsc{Segre}, Corrado (1863-1924)* \textsc{Vall\'ee Poussin} (de la), Charles (1866-1962)* \textsc{Hausdorff}, Felix (1868-1942)* \textsc{Borel}, Emile (1871-1956)* \textsc{Vacca}, Giovanni (1872-1953)* \textsc{Carath\'{e}odory}, Constantin (1873-1950)* \textsc{Lebesgue}, Henri (1875-1941)* \textsc{Vitali}, Giuseppe (1875-1932)* \textsc{Fr\'{e}chet}, Maurice (1878-1973)* \textsc{Fubini}, Guido (1879-1943)* \textsc{Riesz}, Frigyes (1880-1956)* \textsc{Tonelli}, Leonida (1885-1946)* \textsc{Picone}, Mauro (1885-1977), see \texttt{http://web.math.unifi.it} \textsc{Ascoli}, Guido (1887-1957), see May \cite[(1973) p.\,63]{may1973} \textsc{Radon}, Johann (1887-1956)* \textsc{Nikodym}, Otton (1887-1974)* \textsc{Bouligand}, George (1889-1979), see \texttt{http://catalogue.bnf.fr} \textsc{Banach}, Stefan (1892-1945)* \textsc{Kuratowski}, Kazimierz (1896-1980)* \textsc{Cassina}, Ugo (1897-1964), see Kennedy \cite[(1980)]{peano_vita} \textsc{Cartan}, Henri (1904-2008)* \textsc{Dieudonn\'e}, Jean A.~E. (1906-1992)* \textsc{Choquet}, Gustave (1915-2006), see \emph{Gazette des Math.} v111:74-76, 2007 \textsc{May}, Kenneth O. (1915-1977), see \cite[p.\,479]{dauben_scriba} \textsc{Medvedev}, F\"edor A. (1923-1993), see \cite[p.\,482]{dauben_scriba} \bibliographystyle{plain}
\section{Introduction} Positive definite kernels are studied for their manifold applications. In this paper we consider a special type of kernels $K$ defined on the free semigroup on $N$ generators with the property that $$K(\alpha \sigma ,\tau )=K(\sigma , I(\alpha )\tau )$$ for any words $\alpha ,\sigma ,\tau $, where $I(\alpha )$ denotes the word obtained by writing $\alpha $ in the reverse order. These kernels appear in many situations, see for instance \cite{CG1} and \cite{CG2}. Our goal is to determine an explicit structure for the positive definite kernels satisfying the above invariance property. Since in the case $N=1$ such kernels are precisely the Hankel kernels, it is quite natural to consider the associated orthogonal polynomials and to study their properties. Our main result establishes the connection between moments and Jacobi coefficients, as a multivariable extension of a classical result. We also describe the Jacobi coefficients of free products of orthogonal polynomials. \section{Orthogonal polynomials} We introduce orthogonal polynomials in several hermitian variables and we discuss several general results. In particular, we emphasize the usefulness of a matrix notation that greatly reduces the degree of complexity and makes clear the analogy with the classical, one-dimensional case. Let ${\mathbb F} _N^+$ be the unital free semigroup on $N$ generators $1,\ldots ,N$ with lexicographic order $\prec $. In particular, ${\mathbb F} ^+_1$ is the set ${\mathbb N} _0$ of nonnegative integers. The set of positive integers will be denoted by ${\mathbb N} $. The empty word is the identity element of ${\mathbb F} ^+_N$ and the length of a word $\sigma $ is denoted by $|\sigma |$. The length of the empty word is $0$.
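For $N=1$ the semigroup ${\mathbb F}^+_1={\mathbb N}_0$ is commutative and $I(\alpha)=\alpha$, so the invariance property reads $K(\alpha+\sigma,\tau)=K(\sigma,\alpha+\tau)$, i.e., $K(m,n)$ depends only on $m+n$: a Hankel kernel. The following small sketch (our illustration; the moments $s_k=1/(k+1)$ of Lebesgue measure on $[0,1]$, giving the Hilbert matrix, are an assumed example) checks this:

```python
from fractions import Fraction

# For N = 1 the free semigroup is (N_0, +) and I(alpha) = alpha, so the
# invariance K(alpha + sigma, tau) = K(sigma, alpha + tau) says exactly
# that K(m, n) depends only on m + n: K is a Hankel kernel.  We sketch
# this with the (assumed) moments s_k = 1/(k+1) of Lebesgue measure on
# [0, 1], whose moment kernel is the positive definite Hilbert matrix.

s = [Fraction(1, k + 1) for k in range(10)]   # moments s_k = phi(X^k)

def K(m, n):
    """Moment kernel K(m, n) = s_{m+n} of the functional phi."""
    return s[m + n]

# the Hankel-type invariance property, checked on a few words
hankel_ok = all(K(a + m, n) == K(m, a + n)
                for a in range(3) for m in range(3) for n in range(3))
```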
There is a natural involution on ${\mathbb F} _N^+$ given by $I(i_1\ldots i_l)=i_l\ldots i_1$ as well as a natural action of ${\mathbb F} _N^+$ on itself by juxtaposition, $(\sigma,\tau )\rightarrow \sigma \tau $, $\sigma ,\tau \in {\mathbb F} _N^+$. Let ${\mathcal P}_N$ be the algebra of polynomials on $N$ non-commuting indeterminates $X_1$,$\ldots $,$X_N$ with complex coefficients. For any $\sigma=i_1\cdots i_l\in {\mathbb F}_N^+$, we define $X_\sigma=X_{i_1}\cdots X_{i_l}$. Using this notation, each element $P\in {\mathcal P}_N$ can be uniquely written as \begin{equation}\label{pesi} P=\sum _{\sigma \in {\mathbb F} _{N}^+}c_{\sigma }X_{\sigma }, \end{equation} with only finitely many coefficients $c_{\sigma }$ different from zero. The length of the highest $\sigma $ such that $c_{\sigma }\ne 0$ is the {\it degree } of $P$. We also have $$P=\sum _{k\geq 0}P_k= \sum _{k\geq 0}\sum _{|\sigma |=k}c_{\sigma }X_{\sigma }, $$ where each $P_k$ belongs to the vector space ${\mathcal L} ^N_k$ of homogeneous polynomials of degree $k\geq 0$ in $N$ variables $X_1$, $\ldots $, $X_N$. The dimension of ${\mathcal L} ^N_k$ is $N^k$. An involution $+$ on ${\mathcal P} _N$ can be introduced as follows: $X^+_k=X_{k}$, $k=1,\ldots ,N$; on monomials, $(X_{\sigma })^+=X_{I(\sigma )}$; in general, if $P$ has the representation as in \eqref{pesi} then \begin{equation}P^+=\sum _{\sigma \in {\mathbb F} _{N}^+}\overline{c}_{\sigma } X_{\sigma }^+, \end{equation} and ${\mathcal P}_N$ is a unital, associative $*$-algebra over ${\mathbb C} $. Let $\phi $ be a strictly positive functional on ${\mathcal P}_N$, that is, $\phi $ is a linear unital map on ${\mathcal P} _N$ and $\phi (P^+P)>0$ for every $P\in {\mathcal P} _N-\{0\}$. The Gelfand-Naimark-Segal construction applied to $\phi $ gives a Hilbert space ${\mathcal H} _{\phi }$ such that $\{X_{\sigma }\}_{\sigma \in {\mathbb F} ^+_N}$ is a linearly independent family in ${\mathcal H} _{\phi }$. 
The Gram-Schmidt procedure gives a family $\{\varphi _{\alpha }\} _{\alpha \in {\mathbb F} ^+_N}$ of polynomials such that \begin{equation}\label{bond1} \varphi _{\alpha }= \sum _{\beta \preceq \alpha }a_{\alpha ,\beta }X_{\beta }, \quad a_{\alpha ,\alpha }>0; \end{equation} \begin{equation}\label{bond2} \langle \varphi _{\alpha }, \varphi _{\beta }\rangle _{\phi }= \delta _{\alpha ,\beta }, \quad \alpha ,\beta \in {\mathbb F} ^+_N, \end{equation} where for $P_1, P_2\in {\mathcal P} _N$, $$\langle P_1,P_2\rangle _{\phi }= \phi (P^+_2P_1). $$ The elements $\varphi _{\alpha }$, $\alpha \in {\mathbb F} ^+_N$, will be called the orthonormal polynomials associated with $\phi $. We notice that the use of the Gram-Schmidt process depends on the order chosen on ${\mathbb F} _N^+$. A different order would give a different family of orthogonal polynomials. Due to the natural grading on ${\mathbb F} _N^+$ it is possible to develop a basis-free approach to orthogonal polynomials. In the case of orthogonal polynomials in several commuting variables this is presented in \cite{DX}. However, in this paper we consider only the lexicographic order on ${\mathbb F} _N^+$. The moments of $\phi $ are $$s_{\sigma }=\phi (X_{\sigma }), \quad \sigma \in {\mathbb F} ^+_N, $$ and we define the moment kernel of $\phi $ by the formula $K_{\phi }(\alpha ,\beta )= s_{I(\alpha )\beta }$, $\alpha ,\beta \in {\mathbb F} ^+_N$. We notice that $K_{\phi }$ is a positive definite kernel on ${\mathbb F} ^+_N$ and for $\alpha ,\sigma ,\tau \in {\mathbb F} _N^+$, \begin{equation}\label{hankel} K_{\phi }(\alpha \sigma ,\tau)=K_{\phi }(\sigma ,I(\alpha )\tau ). \end{equation} This property can be viewed as a Hankel type condition. Conversely, it is easily seen that if a positive definite kernel $K$ satisfies \eqref{hankel} then there exists a positive functional $\phi $ on ${\mathcal P} _N$ such that $K=K_{\phi }$. \bigskip \noindent {\em 2.1.
Three term relations.} Let $\phi $ be a unital strictly positive functional on ${\mathcal P} _N$ and let $\{\varphi _{\alpha }\}_{\alpha \in {\mathbb F} ^+_N}$ be the orthonormal polynomials associated with $\phi $. As in the commutative case (see \cite{DX}), it is very convenient to use a matrix notation, $\Phi _n=\left[\varphi _{\sigma }\right]_{|\sigma |=n}$ for $n\geq 0$ and $\Phi _{-1}=0$. With this notation, the analogy with the classical case $N=1$ will be much more transparent. It turns out that the family $\{\Phi _n\}_{n\geq 0}$ satisfies a three-term recursive formula, \begin{equation}\label{3termeni} X_k\Phi _n= \Phi _{n+1}A_{n+1,k}+ \Phi _nB_{n,k}+ \Phi _{n-1}A^*_{n,k}, \end{equation} for $k=1,\ldots ,N$ and $n\geq 0$ (see \cite{Co} and \cite{BCJ}). Each matrix $B_{n,k}$, $n\geq 0$, $k=1,\ldots ,N$, is a selfadjoint $N^n\times N^n$ matrix, while each $A_{n,k}$, $n>0$, $k=1,\ldots ,N$, is an $N^{n}\times N^{n-1}$ matrix such that $$A_n=\left[\begin{array}{ccc} A_{n,1} & \ldots & A_{n,N} \end{array} \right] $$ is an upper triangular invertible matrix for every $n\geq 1$, with strictly positive elements on the diagonal. The fact that $A_n$ is upper triangular comes from the lexicographic order that we use on ${\mathbb F} ^+_N$. The invertibility of $A_n$ is a consequence of the fact that $\phi $ is strictly positive, and is essentially a translation of this information. The diagonal of $A_n$ is strictly positive since we chose $a_{\alpha ,\alpha }>0$. A family ${\mathcal A} =\{A_{n,k}, B_{m,k}\mid n>0, m\geq 0, k=1,\ldots ,N\}$ of matrices satisfying all these properties will be called admissible. It turns out that there are no other restrictions on the matrices $A_{n,k}$, $B_{n,k}$, as shown by the following Favard type result mentioned in \cite{Co}. A similar result for the monic orthogonal polynomials, $p_{\sigma }=\displaystyle\frac{1}{a_{\sigma ,\sigma }}\varphi _{\sigma }$, was recently mentioned in \cite{An}.
\begin{theorem}\label{T4} Let $\varphi _{\sigma }=\sum _{\tau \preceq \sigma } a_{\sigma ,\tau}X_{\tau }$, $\sigma \in {\mathbb F} _N^+$, be elements in ${\mathcal P} _N$ such that $\varphi _{\emptyset }=1$ and $a_{\sigma ,\sigma }>0$. Assume that there exists an admissible family ${\mathcal A} $ of matrices such that for $k=1,\ldots ,N$ and $n\geq 0$, $$ X_k\Phi _n= \Phi _{n+1}A_{n+1,k}+ \Phi _nB_{n,k}+ \Phi _{n-1}A^*_{n,k}, $$ where $\Phi _n=\left[\varphi _{\sigma }\right]_{|\sigma |=n}$ for $n\geq 0$ and $\Phi _{-1}=0$. Then there exists a unique strictly positive functional $\phi $ on ${\mathcal P} _N$ such that $\{\varphi _{\sigma }\}_{\sigma \in {\mathbb F} _N^+}$ is the family of orthonormal polynomials associated with $\phi $. \end{theorem} There is also a family of Jacobi matrices associated with the three-term relation in the following way. For $P\in {\mathcal P} _N$ define $$ \Psi _{\phi }(P)\varphi _{\sigma }=P\varphi _{\sigma }. $$ Since the moment kernel has the Hankel type structure mentioned above in \eqref{hankel}, it follows that each $\Psi _{\phi }(P)$ is a symmetric operator on the Hilbert space ${\mathcal H} _{\phi }$ with dense domain ${\mathcal D} ={\mathcal P} _N$. Moreover, for $P,Q \in {\mathcal P} _N$, $$\Psi _{\phi }(PQ)=\Psi _{\phi }(P)\Psi _{\phi }(Q),$$ and $\Psi _{\phi }(P){\mathcal D} \subset {\mathcal D}$, hence $\Psi _{\phi }$ is an unbounded representation of ${\mathcal P} _N$. Also, $\phi (P)= \langle \Psi _{\phi }(P)1,1\rangle _{\phi }$ for $P\in {\mathcal P} _N$. We distinguish the operators $\Psi _k=\Psi _{\phi }(X_k)$, $k=1,\ldots ,N$, since $\Psi _{\phi }(\sum _{\sigma \in {\mathbb F} ^+_N}c_{\sigma }X_{\sigma }) =\sum _{\sigma \in {\mathbb F} ^+_N}c_{\sigma }\Psi _{\sigma }$. Let $\{e_1,\ldots ,e_N\}$ be the standard basis of ${\mathbb C} ^N$ and define the unitary operator $W$ from $l^2({\mathbb F} ^+_N)$ onto ${\mathcal H} _{\phi }$ such that $W(e_{\sigma })=\varphi _{\sigma }$, $\sigma \in {\mathbb F} ^+_N$.
We see that $W^{-1}{\mathcal D} $ is the linear space ${\mathcal D} _0$ generated by $e_{\sigma }$, $\sigma \in {\mathbb F} ^+_N$, so that we can define $$J_k=W^{-1}\Psi _{k}W,\quad k=1,\ldots ,N,$$ on ${\mathcal D} _0$. Each $J_k$ is a symmetric operator on ${\mathcal D} _0$ and by \eqref{3termeni}, the matrix of (the closure of) $J_k$ with respect to the orthonormal basis $\{e_{\sigma }\}_{\sigma \in {\mathbb F} ^+_N}$ is $$J_k=\left[ \begin{array}{cccc} B_{0,k} & A^*_{1,k} & 0 & \ldots \\ & & & \\ A_{1,k} & B_{1,k} & A^*_{2,k} & \\ & & & \\ 0 & A_{2,k} & B_{2,k} & \ddots \\ & & & \\ \vdots & & \ddots & \ddots \end{array} \right].$$ We call $(J_1,\ldots ,J_N)$ a Jacobi $N$-family on ${\mathcal D} _0$. It turns out that the usual admissibility conditions on $A_{n,k}$ and $B_{n,k}$ ensure a joint model for the Jacobi family in the following sense. \begin{theorem}\label{jacobi} Let $(J_1,\ldots ,J_N)$ be a Jacobi $N$-family and assume that the corresponding ${\mathcal A} $ is an admissible family of matrices. Then there exists a unique strictly positive functional $\phi $ on ${\mathcal P} _N$ with associated orthonormal polynomials $\{\varphi _{\sigma }\}_{\sigma \in {\mathbb F} ^+_N}$ such that the map $W(e_{\sigma })=\varphi _{\sigma }$, $\sigma \in {\mathbb F} ^+_N,$ extends to a unitary operator from $l^2({\mathbb F} ^+_N)$ onto ${\mathcal H} _{\phi }$ and $J_k=W^{-1}\Psi _{k}W$ for $k=1,\ldots ,N$. \end{theorem} For details about the proof of this result see \cite{Co}. \bigskip \noindent {\em 2.2. Jacobi $N$-families and combinatorics of lattice paths.} The matrices $A_{n,k}$ and $B_{n,k}$ contain the whole information about the orthonormal polynomials (or the moment kernel $K_{\phi }$). Usually they are called the Jacobi coefficients of $K_{\phi }$ and can be calculated from the moments.
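For $N=1$ the converse direction is immediate to check numerically: the Jacobi matrix is the classical tridiagonal one and $s_n=\langle J^n e_0,e_0\rangle$. A minimal Python sketch, assuming the Hermite data $a_n=\sqrt n$, $b_n=0$ (so the moments are those of the standard Gaussian):

```python
import math

def jacobi_matrix(a, b, size):
    """Truncated tridiagonal Jacobi matrix from coefficients a_n (n>=1), b_n (n>=0)."""
    J = [[0.0] * size for _ in range(size)]
    for n in range(size):
        J[n][n] = b(n)
        if n + 1 < size:
            J[n + 1][n] = a(n + 1)   # subdiagonal
            J[n][n + 1] = a(n + 1)   # superdiagonal (J is symmetric)
    return J

def matvec(J, v):
    return [sum(J[i][k] * v[k] for k in range(len(v))) for i in range(len(J))]

def moment(J, n):
    """s_n = <J^n e_0, e_0>; the truncation is harmless for n < 2*size."""
    v = [1.0] + [0.0] * (len(J) - 1)
    for _ in range(n):
        v = matvec(J, v)
    return v[0]

# Hermite data: a_n = sqrt(n), b_n = 0 -> moments of the standard Gaussian.
J = jacobi_matrix(lambda n: math.sqrt(n), lambda n: 0.0, 6)
assert abs(moment(J, 2) - 1.0) < 1e-12
assert abs(moment(J, 4) - 3.0) < 1e-12
assert abs(moment(J, 6) - 15.0) < 1e-12
```

The truncation is exact here because a path of length $n$ from $e_0$ back to $e_0$ never climbs above height $n/2$.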
For instance, $$A_n=\left[a_{\alpha ,\beta }\right]^{-1}_{|\alpha |=|\beta |=n} \left[a_{\alpha ,\beta }\right]^{\oplus N}_{|\alpha |=|\beta |=n-1},$$ where $a_{\alpha ,\beta }$ are the coefficients of the orthogonal polynomials and for a matrix $A$ we use the notation $$A^{\oplus l}= \underbrace{A\oplus \ldots \oplus A}_{\mbox{$l$ times}}. $$ In their turn, the coefficients $a_{\alpha ,\beta }$, $\beta \preceq \alpha $, can be calculated from the formula $$a_{\alpha ,\beta }=\displaystyle\frac{1} {\sqrt{D_{\alpha -1}D_{\alpha }}} \det \left[K(\alpha ',\beta ')\right]_{\alpha '\prec \alpha , \beta '\preceq \alpha ,\beta '\ne \beta },$$ where $$D_{\alpha }=\det \left[K(\alpha ',\beta ')\right]_{\alpha ',\beta \preceq \alpha }.$$ The formula for $B_{n,k}$ is somewhat more involved and we do not record it here. Instead we consider a different kind of relation between moments and Jacobi coefficients which appears to be more explicit. The case $N=1$ is classical, see \cite{Fl}, \cite{La}. The Jacobi $N$-family $\left(J_1, \ldots ,J_N\right)$ is a convenient tool to deal with this matter. Thus, for any $\sigma \in {\mathbb F} ^+_N$ we have that $$s_{\sigma }=\phi (X_{\sigma })= \langle \Psi _{\phi }(X_{\sigma })1,1 \rangle _{\phi }= \langle \Psi _{\sigma }1,1\rangle _{\phi }, $$ and by Theorem ~\ref{jacobi}, $$\langle \Psi _{\sigma }1,1\rangle _{\phi }= \langle J_{\sigma }e_0,e_0\rangle , $$ therefore we have \begin{equation}\label{key} s_{\sigma }=\langle J_{\sigma }e_0,e_0\rangle ,\quad \sigma \in {\mathbb F} ^+_N. \end{equation} Now we introduce some special paths on ${\mathbb N} _0\times \{1,\ldots ,N\}\times {\mathbb N} _0$. 
The allowed steps are the following: level steps $l^k_{n,m}$ from a point $(n,k,m)$ to $(n+1,k,m)$, level steps $lp^n_{m,k}$ from a point $(n,k,m)$ to $(n,p,m)$, for some $p\in \{1,\ldots ,N\}-\{k\}$, rise steps $r^k_{n,m}$ from a point $(n,k,m)$ to $(n+1,k,m+1)$, and fall steps $f^k_{n,m}$ from a point $(n,k,m)$ to $(n+1,k,m-1)$ (see Figure ~1 for an example). \begin{figure}[h] \setlength{\unitlength}{3000sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(3399,2499)(214,-1948) \thinlines {\put(601,-1936){\vector( 0, 1){2475}} }% {\put(226,-1561){\vector( 1, 0){2775}} }% {\multiput(976,-1261)(9.01163,0.00000){259}{\makebox(1.6667,11.6667){\SetFigFont{5}{6}{\rmdefault}{\mddefault}{\updefault}.}} }% {\multiput(1426,-961)(8.98760,0.00000){243}{\makebox(1.6667,11.6667){\SetFigFont{5}{6}{\rmdefault}{\mddefault}{\updefault}.}} }% {\put(601,-1561){\vector( 2, 1){1500}} }% \thicklines {\put(1201,-1261){\line( 1, 0){300}} \put(1501,-1261){\line( 2, 3){300}} \put(1801,-811){\line( 2, 1){600}} \put(2401,-511){\line( 2,-3){300}} \put(2701,-961){\line( 1, 0){300}} }% \end{picture}% \caption{\mbox{An example of a path for $N=2$}} \end{figure} We introduce a weight on steps by the formula $$ w(\mbox{step})=\left\{ \begin{array}{cl} I & \mbox{if step $=lp^n_{m,k}$}\\ B_{m,k} & \mbox{if step $=l^k_{n,m}$}\\ A_{m+1,k} & \mbox{if step $=r^k_{n,m}$}\\ A^*_{m,k} & \mbox{if step $=f^k_{n,m}$}, \end{array} \right. $$ where $I$ denotes the identity matrix of appropriate size. 
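For $N=1$ these weighted paths are ordinary Motzkin paths, and summing the step-weight products over all paths recovers the moments, in line with \eqref{key}. A small Python sketch (scalar weights; the Hermite data $a_n=\sqrt n$, $b_n=0$ are an assumed example):

```python
import math

# Step weights for N = 1: b_m for a level step at height m, a_{m+1} for a rise
# from height m, and a_m for a fall from height m.  The weighted sum over all
# Motzkin paths of length n from (0,0) to (n,0) gives the moment s_n.

def weighted_motzkin_sum(n, a, b, height=0):
    """Sum of path weights over n remaining steps, returning to height 0."""
    if n == 0:
        return 1.0 if height == 0 else 0.0
    total = b(height) * weighted_motzkin_sum(n - 1, a, b, height)           # level
    total += a(height + 1) * weighted_motzkin_sum(n - 1, a, b, height + 1)  # rise
    if height > 0:
        total += a(height) * weighted_motzkin_sum(n - 1, a, b, height - 1)  # fall
    return total

# Hermite data a_n = sqrt(n), b_n = 0: the weighted sums are the Gaussian moments.
a = lambda n: math.sqrt(n)
b = lambda n: 0.0
assert abs(weighted_motzkin_sum(2, a, b) - 1.0) < 1e-12
assert abs(weighted_motzkin_sum(4, a, b) - 3.0) < 1e-12
assert abs(weighted_motzkin_sum(6, a, b) - 15.0) < 1e-12

# With all weights equal to 1 the sums simply count Motzkin paths.
one = lambda n: 1.0
assert [round(weighted_motzkin_sum(n, one, one)) for n in range(6)] == [1, 1, 2, 4, 9, 21]
```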
If ${\bf p}$ is made of $l$ steps, step $1$, $\ldots $, step $l$, then we define the weight of ${\bf p}$ by the formula $$w({\bf p})=w(\mbox{step $l$})\ldots w(\mbox{step $1$}).$$ Any word $\sigma \in {\mathbb F} ^+_N-\{\emptyset \}$ has a unique representation $\sigma =i_1^{k_1}\ldots i_p^{k_p}$ with $i_1$, $\ldots $, $i_p\in \{1, \ldots ,N\}$, $k_1$, $\ldots $, $k_p>0$, and $i_l\ne i_{l+1}$ for $l=1$, $\ldots $, $p-1$. We consider the set ${\mathcal M} _{\sigma }$ of all paths that start at $(0,i_p,0)$ and end at $(|\sigma |,i_1,0)$, with the property that the first $k_p$ steps belong to ${\mathbb N} _0\times \{i_p\}\times {\mathbb N} _0$, the next $k_{p-1}$ steps belong to ${\mathbb N} _0\times \{i_{p-1}\}\times {\mathbb N} _0$, and so on, until the last $k_1$ steps which belong to ${\mathbb N} _0\times \{i_1\}\times {\mathbb N} _0$. These sets are related to the set of Motzkin paths. The Motzkin paths of length $n$ are the paths in ${\mathbb N} ^2_0$ made of level, fall, and rise steps, starting at $(0,0)$ and ending at $(n,0)$. Their set is denoted by ${\mathcal M} _n$ and the number of elements of ${\mathcal M} _n$ is given by the Motzkin number $$M_n=\frac{1}{n+1}\sum _k \left(\begin{array}{c} n+1 \\ k \end{array}\right) \left(\begin{array}{c} n+1-k \\ k-1 \end{array}\right). $$ It is easily seen that for any $\sigma \in {\mathbb F} ^+_N-\{\emptyset \}$ there is a bijection between ${\mathcal M}_{\sigma }$ and ${\mathcal M} _{|\sigma |}$. We can now describe a combinatorial structure of the moments. \begin{theorem}\label{comb} Let $\phi $ be a strictly positive functional on ${\mathcal P} _N$ and let ${\mathcal A}$ be the admissible family of matrices associated with $\phi $ by \eqref{3termeni}. Then the moments of $\phi $ can be calculated by the formula \begin{equation}\label{momjac} s_{\sigma }=\sum _{{\bf p}\in {\mathcal M} _{\sigma }}w({\bf p}), \quad \quad \sigma \in {\mathbb F} ^+_N-\{\emptyset \}. 
\end{equation} \end{theorem} \begin{proof} We consider the following points in ${\mathbb N} _{0}\times \{1,\ldots ,N\}\times {\mathbb N} _{0}$: $$P_{n,j}=(0,j,n), \quad n\geq 0, j=1,\ldots ,N; $$ $$Q_{n,k,m}=(n,k,m), \quad m,n\geq 0, k=1,\ldots ,N. $$ For $\sigma \in {\mathbb F} ^+_N-\{\emptyset \}$, $\sigma =i_1^{k_1}\ldots i_p^{k_p}$, we claim that $$J_{\sigma }=\left[ \begin{array}{ccc} J^{\sigma }_{0,0} & J^{\sigma }_{0,1} & \ldots \\ J^{\sigma }_{1,0} & J^{\sigma }_{1,1} & \\ \vdots & &\ddots \end{array} \right],$$ where the entry $J^{\sigma }_{k,j}$ gives the sum of (the weights of) the paths in ${\mathbb N} _{0}\times \{1,\ldots ,N\}\times {\mathbb N} _{0}$ from $P_{j,i_p}$ to $Q_{|\sigma |,i_1,k}$. The claim is clearly true for $|\sigma |=1$; suppose it is true for any word of length $\leq n$ and consider a word $\sigma $ of length $n+1$. Several cases can occur. {\it Case 1.} $k=0$. Let $\sigma =i_1^{k_1}\ldots i_p^{k_p}=i_1\tau $. First, assume $k_1=1$. Due to the fact that the level steps of type $lp$ have weight $I$ and by the induction hypothesis, we deduce that the sum of the paths from $P_{j,i_p}$ to $Q_{|\sigma |,i_1,0}$ is $$B_{0,i_1}J^{\tau }_{0,j}+A^*_{1,i_1}J^{\tau }_{1,j},$$ which is precisely the $(0,j)$ entry of the product $$ J_{i_1}J_{\tau }=J_{\sigma }. $$ The case $k_1>1$ is similar: by the induction hypothesis we deduce that the sum of paths from $P_{j,i_p}$ to $Q_{|\sigma |,i_1,0}$ is again $$B_{0,i_1}J^{\tau }_{0,j}+A^*_{1,i_1}J^{\tau }_{1,j},$$ which is precisely the $(0,j)$ entry of the product $$ J_{i_1}J_{\tau }=J_{\sigma }. $$ {\it Case 2.} The case $j=0$ is similar. {\it Case 3.} If $k,j\geq 1$, then the induction hypothesis implies that the sum of paths from $P_{j,i_p}$ to $Q_{|\sigma |,i_1,k}$ is $$A_{k,i_1}J^{\tau }_{k-1,j}+B_{k,i_1}J^{\tau }_{k,j}+ A^*_{k+1,i_1}J^{\tau }_{k+1,j}, $$ which is precisely the $(k,j)$ entry of $J_{\sigma }$. 
\end{proof} When all $B_{n,k}$ are zero, the level steps of type $l^k_{n,m}$ disappear, and our discussion is somewhat related to parts of \cite{Ni}. Formula \eqref{momjac} can also be used to calculate the Jacobi coefficients from the moments in a relatively simple way. Let $\sigma \in {\mathbb F} ^+_N-\{\emptyset \}$. It is convenient to introduce the notation $i(n)$ to denote the $n$th letter of the word $\sigma $ (from left to right). If $|\sigma |=2n$, then there exists a unique path ${\bf p}_{\sigma }$ with corresponding weight $$A^*_{1,i(1)}\ldots A^*_{n,i(n)}A_{n,i(n)}\ldots A_{1,i(1)}.$$ If $|\sigma |=2n+1$, then there exists a unique path, still denoted ${\bf p}_{\sigma }$, with corresponding weight $$A^*_{1,i(1)}\ldots A^*_{n,i(n)}B_{n+1,i(n+1)}A_{n,i(n)}\ldots A_{1,i(1)}.$$ In either case, let ${\mathcal M} ^*_{\sigma }={\mathcal M} _{\sigma }-\{{\bf p}_{\sigma }\}$. Also, we introduce the notation: $\tilde A_1=A_1$, and for $n\geq 2$, $$\tilde A_n=A_nA^{\oplus N}_{n-1}\ldots A^{\oplus N^{n-1}}_{1}.$$ \begin{corollary}\label{jacmom} The following formulae hold: for $n\geq 1$, $$A^*_nA_n=\left(\tilde A^*_{n-1}\right)^{-1} \left(\left[K_{\phi }(\sigma ,\tau )\right] _{|\sigma |=|\tau |=n}- \left[\sum _{{\bf p}\in {\mathcal M} ^*_{I(\sigma )\tau }}w({\bf p})\right]_ {|\sigma |=|\tau |=n} \right)\tilde A_{n-1}^{-1};$$ $$B_{0,k}=s_k, \quad k=1,\ldots ,N,$$ and for $n\geq 1$, $k=1,\ldots ,N,$ $$B_{n,k}=\left(\tilde A^*_{n}\right)^{-1} \left(\left[K_{\phi }(k\sigma ,\tau )\right] _{|\tau |=|\sigma |+2=n+1}- \left[\sum _{{\bf p}\in {\mathcal M} ^*_{I(\sigma )\tau }}w({\bf p})\right]_ {|\tau |=|\sigma |+2=n+1} \right)\tilde A_{n}^{-1}.$$ \end{corollary} Due to the fact that $A_n$ is an upper triangular matrix with strictly positive elements on the diagonal, the first relation of the previous result uniquely determines $A_n$ by Cholesky factorization. 
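The Cholesky step is easy to make concrete for $N=1$: the moment kernel is the Hankel matrix $M_{ij}=s_{i+j}$, and if $M=LL^*$ then the rows of $L^{-1}$ are the coefficients $a_{\alpha,\beta}$ of the orthonormal polynomials. A minimal Python sketch, assuming the Gaussian moments $1,0,1,0,3$ as input:

```python
# If M = L L^T (Cholesky), then A = L^{-1} is lower triangular with positive
# diagonal and A M A^T = I, i.e. the rows of A are the coefficients of the
# orthonormal polynomials in the monomial basis.

def cholesky(M):
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            acc = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = ((M[i][i] - acc) ** 0.5 if i == j
                       else (M[i][j] - acc) / L[j][j])
    return L

def invert_lower(L):
    n = len(L)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0 / L[i][i]
        for j in range(i):
            A[i][j] = -sum(L[i][k] * A[k][j] for k in range(j, i)) / L[i][i]
    return A

# Moments of the standard Gaussian: s_0..s_4 = 1, 0, 1, 0, 3.
s = [1.0, 0.0, 1.0, 0.0, 3.0]
M = [[s[i + j] for j in range(3)] for i in range(3)]
A = invert_lower(cholesky(M))

# Orthonormality check: A M A^T = I.
prod = [[sum(A[i][k] * M[k][l] * A[j][l] for k in range(3) for l in range(3))
         for j in range(3)] for i in range(3)]
assert all(abs(prod[i][j] - (i == j)) < 1e-12 for i in range(3) for j in range(3))
```

The rows of $A$ here are $1$, $x$, and $(x^2-1)/\sqrt 2$, the first normalized Hermite polynomials.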
\section{Free products} The set ${\mathcal P} _N$ can be viewed as the free product of $N$ copies of ${\mathcal P} _1$: $${\mathcal P} _N=\underbrace{{\mathcal P} _1\star \ldots \star {\mathcal P} _1}_{\mbox{$N$ times}}= {\mathbb C}\oplus \left(\oplus _{n\geq 1} \oplus _{i_1\ne i_2,\ldots ,i_{n-1}\ne i_n}{\mathcal P} ^0_{i_1} \otimes \ldots \otimes {\mathcal P} ^0_{i_n}\right),$$ where ${\mathcal P} ^0_{i}$ is the set of polynomials in the variable $X_i$, $i=1,\ldots ,N$, without constant term. This remark suggests that the simplest examples of families of orthogonal polynomials can be obtained by using free products. Some examples already appeared in \cite{An}. Here we describe a general construction. This allows us to introduce multivariable analogues of all classical orthogonal polynomials. The simplest attempt to construct families of orthogonal polynomials on ${\mathcal P} _N$ would be to consider orthogonal polynomials associated with free products of positive functionals. Let $\phi _1 $ and $\phi _2 $ be two strictly positive functionals on ${\mathcal P} _{N_1}$, respectively ${\mathcal P} _{N_2}$. Their free product $\phi =\phi _1\star \phi _2$ on ${\mathcal P} _{N_1}\star {\mathcal P} _{N_2}$ is defined by $\phi (1)=1$ and $\phi (P_{i_1}\ldots P_{i_n})=\phi _{i_1}(P_{i_1})\ldots \phi _{i_n}(P_{i_n})$ for $n\geq 1$, $i_1\ne i_2$, $\ldots $, $i_{n-1}\ne i_n$, $P_{i_k}\in {\mathcal P} ^0_{N_{i_k}}$, and $i_k\in \{1,2\}$ for $k=1,\ldots ,n$. By results in \cite{Boc}, \cite{Boz}, $\phi $ is a positive functional. However, as it turns out, $\phi $ is not strictly positive. Thus, consider the case of two strictly positive functionals $\phi _1 $ and $\phi _2 $ on ${\mathcal P} _1$. Let $K$ be the restriction of the moment kernel $K_{\phi _1\star \phi _2}$ to the set $\{1, 12\}$. 
Its matrix is then $$ \begin{array}{rcl} K& =&\left[ \begin{array}{cc} \phi _1(X^2_1) & \phi _1(X^2_1)\phi _2(X_2) \\ \phi _2(X_2)\phi _1(X^2_1) & \phi _2(X_2)\phi _1(X^2_1)\phi _2(X_2) \end{array} \right] \\ & & \\ & =& \left[\begin{array}{cc} 1 & 0 \\ 0 & \phi _2(X_2) \end{array} \right] \left[\begin{array}{cc} \phi _1(X^2_1) & \phi _1(X^2_1) \\ \phi _1(X^2_1) & \phi _1(X^2_1) \end{array} \right] \left[\begin{array}{cc} 1 & 0 \\ 0 & \phi _2(X_2) \end{array} \right]. \end{array} $$ The matrix $\left[\begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array} \right]$ has rank one, so $K$ is never invertible. We can consider another simple way related to free products in order to build orthogonal polynomials in several noncommutative variables. Thus, let $\{\varphi _{n,k}\}$, $n\geq 0$, $k\in \{1,\ldots ,N\}$, be $N$ families of orthonormal polynomials on the real line, determined by the recursion formulae: \begin{equation}\label{onedim} x\varphi _{n,k}(x)=a_{n+1,k}\varphi _{n+1,k}(x)+ b_{n,k}\varphi _{n,k}(x)+a_{n,k}\varphi _{n-1,k}(x). \end{equation} We introduce polynomials in $N$ noncommutative variables as follows. Any word $\sigma \in {\mathbb F} ^+_N-\{\emptyset \}$ can be uniquely represented in the form $\sigma =i^{k_1}_1\ldots i^{k_p}_p,$ $i_1$, $\ldots $, $i_p\in \{1, \ldots ,N\}$, $k_1$, $\ldots $, $k_p>0$, and $i_l\ne i_{l+1}$ for $l=1$, $\ldots $, $p-1$. Then define \begin{equation}\label{freeprod} \varphi _{\sigma }(X_1,\ldots ,X_N)=\varphi _{k_1,i_1}(X_{i_1})\ldots \varphi _{k_p,i_p}(X_{i_p}). \end{equation} \begin{theorem}\label{main} There exists an admissible family ${\mathcal A} $ of matrices such that for $k=1,\ldots ,N$ and $n\geq 0$, \begin{equation}\label{33termeni} X_k\Phi _n= \Phi _{n+1}A_{n+1,k}+ \Phi _nB_{n,k}+ \Phi _{n-1}A^*_{n,k}, \end{equation} where $\Phi _n=\left[\varphi _{\sigma }\right]_{|\sigma |=n}$ for $n\geq 0$, $\Phi _{-1}=0$, and $\varphi _{\sigma }$ are given by \eqref{freeprod}. 
\end{theorem} \begin{proof} First we prove the result for $N=2$. From \eqref{onedim} we deduce $$X_1=\varphi _1a_{1,1}+b_{0,1}= \left[ \begin{array}{cc} \varphi _1 & \varphi _2 \end{array} \right] \left[ \begin{array}{c} a_{1,1}\\ 0 \end{array} \right] +b_{0,1} $$ so that $$X_1\Phi _0=\Phi _1A_{1,1}+\Phi _0B_{0,1} $$ with $$A_{1,1}=\left[ \begin{array}{c} a_{1,1} \\ 0 \end{array} \right]\quad \mbox{and}\quad B_{0,1}=\left[ \begin{array}{c} b_{0,1} \end{array} \right]. $$ Similarly, $$X_2\Phi _0=\Phi _1A_{1,2}+\Phi _0B_{0,2} $$ with $$ A_{1,2}=\left[ \begin{array}{c} a_{1,2} \\ 0 \end{array} \right]\quad \mbox{and}\quad B_{0,2}=\left[ \begin{array}{c} b_{0,2} \end{array} \right]. $$ We see that $$A_1= \left[ \begin{array}{cc} A_{1,1} & A_{1,2} \end{array} \right] = \left[ \begin{array}{cc} a_{1,1} & 0 \\ 0 & a_{1,2} \end{array} \right] $$ is upper triangular (actually diagonal) and has strictly positive diagonal elements (since all $a_{n,k}>0$). The case $n\geq 1$ can be dealt with in a similar manner. The first $2^{n-1}$ words of length $n$ start with letter $1$ and have the structure $$1^{n-k}\tau $$ where $\tau $ is a word of length $k$ starting with letter $2$ (unless it is $\emptyset $). For $k=0$ there is exactly one such word, $1^n$, while for $0<k\leq n-1$, there are $2^{k-1}$ such kind of words. Using \eqref{onedim} we have that $$ \begin{array}{rcl} X_1\varphi _{1^{n-k}\tau }&=& X_1\varphi _{1^{n-k}}\varphi _{\tau }=X_1\varphi _{n-k,1}\varphi _{\tau }\\ & & \\ &=&\varphi _{n-k+1,1}\varphi _{\tau }a_{n-k+1,1}+ \varphi _{n-k,1}\varphi _{\tau }b_{n-k,1}+ \varphi _{n-k-1,1}\varphi _{\tau }a_{n-k,1} \\ & & \\ &=&\varphi _{1^{n-k+1}\tau }a_{n-k+1,1}+ \varphi _{1^{n-k}\tau }b_{n-k,1}+ \varphi _{1^{n-k-1}\tau }a_{n-k,1}. 
\end{array} $$ The last $2^{n-1}$ words of length $n$ start with letter $2$ and therefore we have for such a word $\sigma $ that $$ \begin{array}{rcl} X_1\varphi _{\sigma }&=&X_1\varphi _{0,1}\varphi _{\sigma } \\ & & \\ &=&\varphi _{1,1}\varphi _{\sigma }a_{1,1}+\varphi _{\sigma }b_{0,1} \\ & & \\ &=&\varphi _{1\sigma }a_{1,1}+\varphi _{\sigma }b_{0,1}. \end{array} $$ Putting together the above two formulae we deduce that $$ X_1\Phi _n= \Phi _{n+1}A_{n+1,1}+ \Phi _nB_{n,1}+ \Phi _{n-1}A^*_{n,1}, $$ where $$A_{n,1}= \left[ \begin{array}{ccclc} a_{n,1} & & & & \\ & a_{n-1,1} & & & \\ & & a^{\oplus 2}_{n-2,1} & & \\ & & & \ddots & \\ & & & & a^{\oplus 2^{n-2}}_{1,1} \\ & & & & \\ & & 0_{2^{n-1}\times 2^{n-1}} & & \end{array} \right] $$ and $$B_{n,1}= \left[ \begin{array}{ccclc} b_{n,1} & & & & \\ & b_{n-1,1} & & & \\ & & b^{\oplus 2}_{n-2,1} & & \\ & & & \ddots & \\ & & & & b^{\oplus 2^{n-2}}_{1,1} \\ \end{array} \right]; $$ the unspecified entries are all zero. Similarly, we deduce $$ X_2\Phi _n= \Phi _{n+1}A_{n+1,2}+ \Phi _nB_{n,2}+ \Phi _{n-1}A^*_{n,2}, $$ for some suitable matrices $A_{n,2}$ and $B_{n,2}$. Actually, the same proof works for $N>2$ and we record here the form of the matrices $A_{n,k}$, $B_{n,k}$ for an arbitrary $N$. Let $${\mathcal W} ^n_k=\{\sigma \in {\mathbb F} ^+_N\mid \mbox{$|\sigma |=n$ and $\sigma =k\tau $ for some $\tau $}\}$$ and denote by $\pi ^n_k$ the bijection from the set $\{1,2,\ldots ,N^{n-1}\}$ onto ${\mathcal W} ^n_k$ defined simply by $\pi ^n_k(l)=$ the $l$th word in ${\mathcal W} ^n_k$, with respect to the lexicographic order. Also, if $\sigma \in {\mathcal W} ^n_k$ then it has a unique representation $\sigma =k^p\tau $ with $\tau $ a word that does not start with the letter $k$. Define $n_k(\sigma )=p$. 
Now $B_{n,k}$ is an $N^n\times N^n$ matrix such that, for $l,m\in \{1, 2,\ldots ,N^n\}$, \begin{equation}\label{b} \left(B_{n,k}\right)_{m,l}= \left\{\begin{array}{cl} b_{n_k(\pi ^{n+1}_k(m))-1,k} & l=m \\ &\\ 0 & l\ne m \end{array} \right. \end{equation} and $A_{n,k}$ is an $N^n\times N^{n-1}$ matrix such that for $l\in \{1,2,\ldots ,N^{n-1}\}$ and $m \in \{1, 2,\ldots ,N^n\}$, \begin{equation}\label{a} \left(A_{n,k}\right)_{m,l}= \left\{\begin{array}{cl} a_{n_k(\pi ^{n}_k(m)),k} & l=m \\ & \\ 0 & l\ne m \end{array} \right. \end{equation} Therefore $A_n= \left[ \begin{array}{ccc} A_{n,1} & \ldots & A_{n,N} \end{array} \right] $ is a diagonal matrix with strictly positive diagonal elements, so that ${\mathcal A} =\{A_{n,k}, B_{m,k}\mid n>0, m\geq 0, k=1,\ldots ,N\}$ is an admissible family of matrices. \end{proof}
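The three-term relation \eqref{33termeni} for the free-product polynomials can be verified on small words by computing directly with noncommutative polynomials, represented as dictionaries from words (tuples of letters) to coefficients. A sketch for $N=2$, assuming Chebyshev-type one-variable data $a_{n,k}=\frac12$ and $b_{n,k}=0$ in \eqref{onedim}; the assertion checks $X_1\varphi_{12}=a_{2,1}\varphi_{112}+b_{1,1}\varphi_{12}+a_{1,1}\varphi_{2}$, the $\sigma=12$ instance of the relation derived in the proof:

```python
from collections import defaultdict

A_COEF, B_COEF = 0.5, 0.0   # assumed Chebyshev-type data: a_{n,k} = 1/2, b_{n,k} = 0

def one_var(letter, degree):
    """Orthonormal polynomials phi_{0..degree} in the single variable X_letter,
    built from the recursion phi_{n+1} = (x*phi_n - b_n*phi_n - a_n*phi_{n-1})/a_{n+1}."""
    polys = [{(): 1.0}]
    for n in range(degree):
        nxt = defaultdict(float)
        for w, c in polys[n].items():            # multiply by X_letter ...
            nxt[(letter,) + w] += c / A_COEF
        for w, c in polys[n].items():            # ... subtract b_n * phi_n ...
            nxt[w] -= c * B_COEF / A_COEF
        if n >= 1:                               # ... subtract a_n * phi_{n-1}
            for w, c in polys[n - 1].items():
                nxt[w] -= c * A_COEF / A_COEF
        polys.append({w: c for w, c in nxt.items() if c})
    return polys

def mul(p, q):
    """Product of noncommutative polynomials: concatenate words."""
    out = defaultdict(float)
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            out[w1 + w2] += c1 * c2
    return {w: c for w, c in out.items() if c}

p1 = one_var(1, 3)                   # phi_{n,1}(X_1): 1, 2x, 4x^2-1, ...
p2 = one_var(2, 3)                   # phi_{n,2}(X_2)

phi_12 = mul(p1[1], p2[1])           # phi_{(1,2)} = phi_{1,1}(X_1) phi_{1,2}(X_2)
phi_112 = mul(p1[2], p2[1])          # phi_{(1,1,2)}
phi_2 = p2[1]                        # phi_{(2)}

lhs = {(1,) + w: c for w, c in phi_12.items()}   # X_1 * phi_{(1,2)}
rhs = defaultdict(float)
for w, c in phi_112.items():
    rhs[w] += A_COEF * c             # a_{2,1} phi_{(1,1,2)}
for w, c in phi_2.items():
    rhs[w] += A_COEF * c             # a_{1,1} phi_{(2)}
rhs = {w: c for w, c in rhs.items() if c}
assert lhs == rhs
```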
\section{Introduction} When we look at the night sky, we see that galaxies seem to be arranged in a particular way. One might expect that galaxies would be distributed randomly, much as grains of sand would if you threw a handful across the floor, but instead, they seem to trace elegant structures; galaxy clusters are connected to each other by long filaments, interspersed with large voids, where few or no galaxies are seen. The drivers behind the formation of these `large-scale structures' have been the subject of intense study and debate for over thirty years: how did the Universe go from being smooth and homogeneous just after the Big Bang to the clumpy, clustered Universe we see today? At the core of current theories for the formation of these structures is the premise that the evolution of the total mass distribution is described by the gravitational collapse of primordial density fluctuations, and that this evolution is traced by the evolution of galaxies. Overdense regions, or `haloes', are predicted to undergo mergers to build haloes of increasing mass, with galaxies forming from the baryonic matter in these haloes. This framework of `biased' hierarchical buildup \citep{col,hatt} has proven to be remarkably successful in explaining several important aspects of galaxy and large-scale structure formation. This paradigm is however not without its problems. A good example of these problems concerns the evolution of massive ($\geq10^{11}M_{\sun}$) galaxies. We might expect that massive galaxies form slowly, with many halo mergers needed to build up the required large baryon reservoirs, and indeed some galaxies do appear to form in this way \citep{van,bel}. There is however evidence that many massive galaxies may form at high redshift \citep{dun,blak} and on short timescales \citep{ell}, directly counter to early, `naive' model predictions. 
Intriguingly, recent surveys \citep[e.g.][]{eal,scot} have uncovered a huge population of distant, IR bright sources that are plausible candidates for being rapidly forming, massive galaxies; however, they are so numerous that even recent models have difficulty explaining their counts, and invoke a wide variety of solutions \citep[e.g.][]{bau}. It seems likely therefore that there are strong, but subtle links between distant, IR/sub-mm bright galaxies, and the formation of large-scale structures. Therefore, we need observations that relate the properties of these galaxies with the underlying dark matter distribution. Motivated by this, we have used data from the Spitzer Wide Area Infrared Extragalactic Survey (SWIRE, \citet{lon}) to select large samples of distant Ultraluminous Infrared Galaxies (ULIRGs, L$_{ir}\geq 10^{12}$L$_{\sun}$) and study their clustering evolution with redshift. We assume $H_{0}=100h$ km s$^{-1}$ Mpc$^{-1}$, $\Omega=1$, and $\Omega_{\Lambda}=0.7$. The results presented here were originally published in \citet{far06a,far06b}. \section{Analysis and results} To select high redshift ULIRGs, we use the 1.6$\mu$m emission feature, which arises due to photospheric emission from evolved stars. When this feature is redshifted into one of the IRAC channels then that channel exhibits a `bump' \citep{sim,saw}. A complete discussion of the source selection and characterization methods is given in Lonsdale et al.\ 2006 (in preparation), which we summarize here. Our sources are taken from the ELAIS N1 and ELAIS N2 fields, and the Lockman Hole, covering 20.9 square degrees in total. We first selected those sources fainter than R=22 (Vega), and brighter than 400$\mu$Jy at 24$\mu$m. Within this set, we selected two samples that displayed a `bump' in the 4.5$\mu$m and 5.8$\mu$m channels, i.e. where $f_{3.6}<f_{4.5}>f_{5.8}>f_{8.0}$ for one sample (the `B2' sample) and where $f_{3.6}<f_{4.5}<f_{5.8}>f_{8.0}$ for the other sample (the `B3' sample). 
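The selection cuts above amount to a simple filter on the catalogue fluxes. A schematic Python sketch (the function name is illustrative; fluxes in $\mu$Jy, R in Vega magnitudes):

```python
def classify_bump(f36, f45, f58, f80, r_mag, f24):
    """Schematic SWIRE bump selection: returns 'B2', 'B3', or None."""
    if r_mag <= 22 or f24 <= 400:        # fainter than R=22, brighter than 400 uJy at 24um
        return None
    if f36 < f45 > f58 > f80:            # bump peaking in the 4.5um channel
        return "B2"
    if f36 < f45 < f58 > f80:            # bump peaking in the 5.8um channel
        return "B3"
    return None

# Illustrative flux values only.
assert classify_bump(10, 15, 12, 8, r_mag=23.1, f24=450) == "B2"
assert classify_bump(10, 15, 20, 8, r_mag=23.1, f24=450) == "B3"
assert classify_bump(10, 15, 20, 8, r_mag=21.0, f24=450) is None
```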
This resulted in a total of 1689 B2 sources and 1223 B3 sources. For both samples we used {\sc Hyper-z} \citep{bol} to estimate redshifts, the results from which place most of the B2 sources within $1.5<z<2.0$, and most of the B3 sources within $2.2<z<2.8$. From the best fits we also derived IR luminosities and power sources; the requirement that the sources have $f_{24}>400\mu$Jy demands an IR luminosity of $\geq10^{12}$L$_{\odot}$ for all the sources, with most objects having a starburst as the dominant power source, with star formation rates of $\geq200$M$_{\odot}$yr$^{-1}$. Similarly, the presence of the 1.6$\mu$m feature demands a minimum mass of evolved stars of $\sim10^{11}$M$_{\odot}$. Both the B2 and B3 sources are thus good candidates for being moderately massive galaxies harboring an intense, obscured starburst, making them similar in nature to both local ULIRGs, and high-redshift SMGs. We measured the angular clustering of both samples using the methods described in \citet{far06a,far06b}, which we summarize here. We found that the levels of angular clustering seen in the three fields were consistent with each other to within 0.5$\sigma$, and so we combined the angular clustering measures for each sample over the three fields. To quantify the strength of clustering, we fit both datasets with a power law, $\omega(\theta) = A_{\omega}\theta^{1-\gamma}$, where $\gamma=1.8$ and $A_{\omega}$ is the clustering amplitude; $A_{\omega}=0.0125\pm0.0017$ for the B3s and $A_{\omega}=0.0046\pm0.0011$ for the B2s. 
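The fitted power law can be evaluated directly; a small sketch using the quoted amplitudes ($\theta$ in whatever angular units the fit was performed in):

```python
def omega(theta, A, gamma=1.8):
    """Angular correlation function w(theta) = A * theta**(1 - gamma)."""
    return A * theta ** (1.0 - gamma)

A_B2, A_B3 = 0.0046, 0.0125

# At any fixed angle the B3 sample is the more strongly clustered,
# by exactly the ratio of the amplitudes (the slope gamma is shared).
theta = 0.1
assert omega(theta, A_B3) > omega(theta, A_B2)
assert abs(omega(theta, A_B3) / omega(theta, A_B2) - A_B3 / A_B2) < 1e-12
```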
To convert these angular clustering amplitudes to spatial clustering amplitudes, we invert Limber's equation: \begin{equation} \label{equ:spatc3} \frac{r_{0}(z)}{f(z)}=\left[\frac{H_{0}^{-1}A_{\omega}cC\left[\int_{a}^{b}\frac{dN}{dz}\,dz\right]^{2}}{\int_{a}^{b}\left(\frac{dN}{dz}\right)^{2}E(z) D_{\theta}^{1-\gamma}(z)f(z)(1+z)dz}\right]^{\frac{1}{\gamma}} \end{equation} \noindent where $f(z)$ parametrizes the redshift evolution of $r_{0}$, and: \begin{equation} \label{equ:spatc4} C = \frac{\Gamma(\gamma/2)}{\Gamma(1/2)\Gamma([\gamma-1]/2)}, E(z) = [\Omega_{m}(1+z)^{3} + \Omega_{\Lambda}]^{\frac{1}{2}} \end{equation} We derived $dN/dz$ from the photometric redshift distributions \citep{lon3}. For the B2 sources this is a Gaussian centered at $z=1.7$ with a FWHM of $1.0$, and for the B3 sources a Gaussian centered at $z=2.5$ with a FWHM of $1.2$. The resulting correlation lengths are $r_{0}=9.4\pm2.24h^{-1}$Mpc for the B2 sources and $r_{0}=14.4\pm1.99h^{-1}$Mpc for the B3 sources. \section{Discussion} To place these clustering results in context, we consider two models. The first parametrizes the spatial correlation function, $\xi$, as a single power law in comoving coordinates, where the comoving correlation length, $r_{0}(z)$, is: \begin{equation} \label{equ:spatc2} r_{0}(z) = r_{0}f(z), f(z)=(1+z)^{\gamma-(3+\epsilon)} \end{equation} \noindent Here the choice of $\epsilon$ determines the redshift evolution \citep{phl,ove}. Several cases are usually quoted. First is `comoving clustering', where haloes expand with the Universe, and $\epsilon=\gamma-3$; in this case clustering remains constant. Second is the family of models with $\epsilon\geq0$, for which clustering increases with time.
Examples of this family include (a) `stable' clustering, for which $\epsilon\simeq0$ (in this case the haloes are frozen in {\it proper} coordinates), (b) the predicted evolution of clustering of the overall dark matter distribution, where $\epsilon\simeq\gamma-1$ \citep{car2b} and $r_{0}\simeq5$ at $z=0$ \citep{jen}, and (c) `linear' clustering, where $\epsilon=1.0$. A cautionary note is that detailed interpretations of clustering evolution from these models suffer from several theoretical flaws \citep{mos,smi}, and so they should be thought of as qualitative indicators rather than quantitative predictions. We therefore simply use the `stable' and `linear' models as indicators of the possible range of halo clustering amplitude with redshift. The second class of model comprises those in which comoving correlation lengths increase with {\it increasing} redshift. These models introduce `bias', $b(z)$, between the galaxies and the underlying dark matter. An example of such models is the `fixed mass' models \citep{mata,mos}, which predict the clustering strength of haloes of a specified mass at any given redshift. In Figure \ref{r0plot} we plot the $\epsilon$ model for dark matter, the `stable' and `linear' $\epsilon$ models normalized to the B2 and B3 clustering strengths, the fixed halo mass models for halo masses of $10^{12}$M$_{\odot}$, $10^{13}$M$_{\odot}$, and $10^{14}$M$_{\odot}$, the $r_{0}$ values for the B2 and B3 galaxies, and the spatial correlation lengths of other galaxy populations taken from the literature. With the uncertainties described earlier firmly in mind, we use Figure \ref{r0plot} to explore the relationships between our samples, the underlying dark matter, and other galaxies. Both the B2s and the B3s are strongly clustered, with correlation lengths much higher than that predicted for the overall DM distribution. Both B2s and B3s cluster significantly more strongly than optical QSOs at their respective epochs, and B3s cluster more strongly than SMGs.
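The Limber inversion of Eq.~\ref{equ:spatc3} and the $\epsilon$-model evolution of Eq.~\ref{equ:spatc2} can be sketched numerically as follows. The choices made here -- the flat cosmology, $f(z)=1$ (comoving clustering), taking $D_{\theta}$ as the comoving distance, and assuming $A_{\omega}$ was measured with $\theta$ in degrees -- are assumptions of this sketch, not statements about the paper's actual pipeline, so the recovered $r_{0}$ depends on these conventions:

```python
import numpy as np
from scipy.special import gamma as Gamma

GAMMA = 1.8                 # power-law slope, held fixed as in the fit
OM, OL = 0.3, 0.7           # assumed flat cosmology
C_H0 = 2997.9               # c/H0 in h^-1 Mpc (H0 = 100h km/s/Mpc)

def E(z):
    return np.sqrt(OM * (1 + z) ** 3 + OL)

def r0_limber(A_deg, z0, fwhm, zgrid=np.linspace(0.01, 6.0, 4000)):
    """Invert Eq. (1) for r0, assuming f(z) = 1, a Gaussian dN/dz,
    and D_theta taken as the comoving distance."""
    A_w = A_deg * (np.pi / 180.0) ** (GAMMA - 1)   # amplitude to radians
    dz = zgrid[1] - zgrid[0]
    D = C_H0 * np.cumsum(1.0 / E(zgrid)) * dz      # comoving distance
    sig = fwhm / (2 * np.sqrt(2 * np.log(2)))
    dNdz = np.exp(-0.5 * ((zgrid - z0) / sig) ** 2)
    C = Gamma(GAMMA / 2) / (Gamma(0.5) * Gamma((GAMMA - 1) / 2))
    num = C_H0 * A_w * C * np.trapz(dNdz, zgrid) ** 2
    den = np.trapz(dNdz ** 2 * E(zgrid) * D ** (1 - GAMMA) * (1 + zgrid),
                   zgrid)
    return (num / den) ** (1 / GAMMA)

def r0_epsilon(z, r0_now, epsilon):
    """Comoving correlation length in the epsilon models of Eq. (3)."""
    return r0_now * (1 + z) ** (GAMMA - (3 + epsilon))
```

For example, `r0_epsilon` reproduces the limiting cases quoted in the text: with $\epsilon=\gamma-3$ the comoving $r_{0}$ is constant, while with $\epsilon=0$ (`stable' clustering) it decreases toward higher redshift.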
Based on the \citet{mata} models, we derive {\it approximate} 1$\sigma$ halo mass ranges of $10^{13.7}$--$10^{14.1}$M$_{\odot}$ for the B3s, and $10^{13.5}$--$10^{13.9}$M$_{\odot}$ for the B2s. Interestingly, halo masses comparable to these were recently derived for an independent sample of high-redshift ULIRGs by \citet{mag}. The most interesting comparison is, however, between the two samples themselves. The clustering evolution of QSOs with redshift \citep{cro2} may mean that there is a `minimum' host halo mass for QSO activity, below which no QSO is seen, of $\sim5\times10^{12}$M$_{\odot}$. The correlation lengths for the B2 and B3 samples are consistent with the same conclusion, but for a $\sim6\times10^{13}$M$_{\odot}$ `minimum' DM halo mass. Taken together, these results imply that a minimum halo mass is a `threshold' factor for {\it all} forms of luminous activity in galaxies, both starbursts and AGN. It is also interesting to speculate on what the host haloes of B2 and B3 sources contain at lower and higher redshifts. We might expect that a halo hosting a B3 source could contain an optically bright LBG at $z\sim4$, followed by a B3 at $z\sim2.5$, possibly accompanied by other (near-IR selected) star forming systems \citep{dad,dad2}, before evolving to host a rich galaxy cluster at low redshifts. The occupants of a halo hosting a B2 galaxy would, however, probably be different. We would expect that such a halo could contain an SMG at $z\sim2.5$, and optically fainter LBGs at $4<z<5$ (though probably not LBGs at $z\sim3$). At lower redshifts such a halo might host a radio-bright AGN and/or an ERO at $z\sim1$, and a (poor to rich) cluster at $z=0$. We conclude that ULIRGs at $z\geq1.5$ as a class likely signpost stellar buildup in galaxies in clusters at $z=0$, with higher-redshift ULIRGs signposting stellar buildup in galaxies that will reside in more massive clusters at lower redshifts.
\begin{figure} \plotone{farrah_fig1.ps} \caption{Comoving correlation length, $r_{0}$, vs. redshift. Other data are taken from \citet{mos,ove,dad,bla,ouc,ade,cro2,geo,all}. The `Fixed mass' lines show the predicted clustering amplitude of haloes of a given mass at any particular redshift, whereas the $\epsilon$ lines show the predicted clustering amplitude of an individual halo for three halo growth models, described in the text. The `Stable' and `Linear' lines give a qualitative indication of the range over which DM haloes may grow with redshift, and are normalized to the clustering amplitudes of the B2s and the B3s. The shaded regions therefore indicate what these haloes may host at lower and higher redshifts: the haloes hosting B3s may contain an optically bright LBG at $z\simeq4$ (upper green point) and grow to host very rich galaxy clusters at $z=0$, whereas the haloes hosting B2 sources may contain optically fainter LBGs at $4<z<5$, SMGs at $z\sim2.5$, radio-bright AGN (upper pink triangle) and (old) EROs at $z\simeq1$, and poor to rich clusters at $z=0$. \label{r0plot}} \end{figure} \acknowledgements Support for this work, part of the Spitzer Space Telescope Legacy Science Program, was provided by NASA through an award issued by JPL under NASA contract 1407.
\section*{Abstract} The antithetic integral feedback motif recently introduced in \cite{Briat:15e} is known to ensure robust perfect adaptation for the mean dynamics of a given molecular species involved in a complex stochastic biomolecular reaction network. However, it was observed that it also leads to a higher variance in the controlled network than that obtained when using a constitutive (i.e. open-loop) control strategy. This was interpreted as the cost of the adaptation property and may be viewed as a performance deterioration for the overall controlled network. To decrease this variance and improve the performance, we propose to combine the antithetic integral feedback motif with a negative feedback strategy. Both theoretical and numerical results are obtained. The theoretical results are based on a tailored moment closure method that yields approximate expressions for the stationary variance of the controlled network; these expressions predict that the variance can indeed be decreased by increasing the strength of the negative feedback. Numerical results verify the accuracy of this approximation and show that the variance of the controlled species can indeed be decreased, sometimes below its constitutive level. Three molecular networks are considered in order to verify the wide applicability of two types of negative feedback strategies. The main conclusion is that there is a trade-off between the settling-time of the mean trajectories and the stationary variance of the controlled species; i.e., a smaller variance is associated with a larger settling-time. \section*{Author summary} Homeostasis, the ability of living organisms to regulate their internal state, is of fundamental importance for their adaptation to environmental changes and their survival. This is the reason why complex regulatory genetic networks evolved and allowed for the emergence of more and more complex organisms.
Recently, the theoretical study of those regulatory networks using ideas and concepts from control theory, together with the design of novel ones, has gained a lot of attention. Synthetic regulatory circuits are seen as elementary building blocks for the design of complex networks that need to incorporate some regulating elements to be fully functional. This is for instance the case in metabolic engineering, where the production of biomolecules, such as drugs or biofuels, needs to be optimized and tightly regulated. A particular circuit, the so-called antithetic integral controller, is now known to ensure homeostasis even when regulatory circuits are subject to randomness. However, it is also known that this circuit increases variability in the network. The effects of a correcting negative feedback loop on the variance are discussed here and it is shown that variability can be reduced this way. Notably, we show that there is a tradeoff between the speed of the network and variability. \section*{Introduction} The design and implementation of artificial in-vivo biomolecular controllers have become very popular \cite{Briat:15e,Briat:16a,Qian:17,Cuba:17,Annunziata:17,Lillacci:17} because of their potential applications for the tight and robust control of gene expression \cite{Briat:15e}, the optimization of metabolic networks for the efficient production of biomolecules \cite{Venayak:15,Cress:15}, or the development of new treatments for certain genetic diseases \cite{Ye:14}. Indeed, many instances of those problems can be interpreted from a homeostatic point of view in the sense that they may all be solved by achieving or restoring homeostasis in the corresponding genetic network using synthetic regulatory circuits \cite{Ye:14,Venayak:15,Cress:15,Briat:15e,Schukur:16}.
In this regard, those problems essentially reduce to the design and implementation of robust and reliable regulatory circuits that can optimize an inefficient network or correct a malfunctioning one -- an observation which strongly suggests that ideas from control theory and control engineering \cite{Albertos:10} could be adapted to biochemical control problems \cite{DelVecchio:15,Harris:15,Briat:15e}. A cornerstone in control theory and engineering is the so-called \emph{integral controller}, which can ensure precise constant set-point regulation for a regulated variable in a given system. Such a mechanism, where the action on the controlled system depends on the integral of the deviation of the regulated variable from the desired set-point, is to be contrasted with the so-called \emph{proportional controller}, where the system is simply actuated proportionally to the deviation of the regulated variable from the desired set-point. Unlike integral control, the latter is unable to achieve robust constant set-point regulation for the controlled variable and to reject constant disturbances. In other words, integral control has the capacity of ensuring perfect adaptation for the regulated variable. The downside, however, is that it has a destabilizing effect on the dynamics (emergence of oscillations or even diverging trajectories) of the overall controlled system, which can then be compensated by adjoining a proportional action, thus giving rise to the so-called Proportional-Integral (PI) controller \cite{Astrom:95}. On the strength of these facts, an integral controller referred to as the \emph{antithetic integral controller} was proposed in \cite{Briat:15e} for the control of the mean level of some molecular species of interest in a given biochemical reaction network.
This controller can be realized in terms of elementary biochemical reactions with mass-action kinetics, making it practically implementable in-vivo using, for instance, sigma- and anti-sigma-factors \cite{Lillacci:17}. This controller theoretically works in both the deterministic and the stochastic settings. In the latter setting, it was notably shown that, under some reasonable conditions, the ergodicity properties of the controlled network are independent of the parameters of the antithetic integral controller -- a surprising key property that has no counterpart in the deterministic setting and that dramatically simplifies its implementation. A drawback, however, is the increase of the stationary variance of the regulated species compared to the constitutive variance that would be obtained by using a static open-loop strategy, even though the latter would be unable to ensure regulation and perfect adaptation for the mean level of the regulated species. This phenomenon is seemingly analogous to the destabilizing behavior of the deterministic integral controller mentioned in the previous paragraph. This variance increase can hence be interpreted as the price to pay for perfect adaptation at the mean species level. The goal of this paper is to investigate the effect of adding a negative feedback to the antithetic integral motif in a way akin, yet not identical, to deterministic PI controllers. As discussed above, adding a proportional action in the deterministic setting compensates for the destabilizing effect of the integrator. Comparatively, it seems reasonable to think that, in the stochastic setting, a proportional action could have an analogous effect and would result in a decreased variance for the controlled variable (this is, for instance, what happens for certain linear systems driven by white noise).
In fact, it has been shown that negative feedback at a transcriptional level in a gene expression network leads to a variance reduction in the protein levels; see e.g. \cite{Becskei:00,Thattai:01,Paulsson:04,Kaern:05} and the references therein. Two types of negative feedback are considered in the present paper: the first consists of an ON/OFF proportional action whereas the second is governed by a repressing Hill function. First, we prove theoretically, using a tailored moment closure method, that in a gene expression network controlled with an antithetic integral controller, the stationary variance in the protein copy number can be decreased by the use of a negative feedback. In this specific case, the steady-state variance decreases monotonically as a function of the strength of the negative feedback. An immediate consequence is that it is theoretically possible to set the steady-state variance to a level that lies below the constitutive steady-state variance, which is the value of the steady-state variance that would have been obtained using a constitutive (i.e. open-loop) control strategy. These theoretical predictions are also confirmed by exact stochastic simulations using Gillespie's algorithm (Stochastic Simulation Algorithm - SSA \cite{Gillespie:76}). A caveat, however, is that setting the gain of the negative feedback very high will likely result in a very low steady-state variance but may also result in a regulation error for the mean of the controlled species and in a loss of ergodicity for the overall controlled network. In this regard, reducing the steady-state variance below its constitutive level may not always be physically possible. Finally, it is also emphasized that a low stationary variance for the controlled species is often associated with a higher settling-time. Hence, there is a tradeoff between variability and fast dynamics/small settling-time.
The two negative feedback actions also exhibit quite different behaviors. Indeed, while the ON/OFF proportional feedback seems to be efficient at reducing the stationary variance through an increase of its gain, the dynamics of the mean first improves, with a reduced settling-time, but then deteriorates dramatically with the appearance of a fast initial transient phase followed by a very slow final one, resulting in a high settling-time. On the other hand, the Hill controller leads to very homogeneous mean dynamics for different feedback strengths, but the steady-state variance is also much less sensitive and does not vary dramatically. It is then argued that those differences may find an explanation by reasoning from a deterministic point of view. The ON/OFF controller (an error-feedback) introduces a stable zero in the dynamics of the closed-loop network which is small in magnitude when the gain of the negative feedback is high. When this is the case, the zero is close to the origin and the closed-loop dynamics almost contains a derivative action, whence the fast initial transient phase. On the other hand, the Hill negative feedback (an output-feedback) does not introduce such a zero in the closed-loop dynamics, which may explain the homogeneity of the mean trajectories. Another possible reason is that the effective proportional gain (which will be denoted by $\beta$) is much less sensitive to changes in the feedback strength than for the ON/OFF controller. Approximate equations for the stationary variance are then obtained in the general unimolecular network case. The obtained expressions shed some light on an interesting connection between the covariances of the molecular species involved in the stochastic reaction network and the stability of a deterministic linear system controlled with a standard PI controller, thereby unveiling an unexpected, yet coherent, bridge between the stochastic and deterministic settings.
Applying this more general framework to a gene expression network with protein maturation reveals that the steady-state variance is not necessarily a monotonically decreasing function of the negative feedback strength. In spite of this, the same conclusions as in the gene expression network hold: the variance can sometimes be decreased below its constitutive level, but this may also be accompanied by a loss of ergodicity. The same qualitative conclusions for the transient of the mean dynamics and the properties of the controller also hold in this case. Even though the proposed theory only applies to unimolecular networks, stochastic simulations are performed for a gene expression network with protein dimerization, a bimolecular network. Once again, the same conclusions as for the previous networks hold, with the difference that the constitutive variance level is unknown in this case due to the openness of the moment equations. These results suggest that negative feedback operates in the same way in bimolecular networks as in unimolecular networks. \subsection*{Reaction networks} Let us consider a stochastic reaction network $(\boldsymbol{X},\mathcal{R})$ involving $d$ molecular species $\X{1},\ldots,\X{d}$ interacting through $K$ reaction channels $\mathcal{R}_1,\ldots,\mathcal{R}_K$ defined as \begin{equation} \mathcal{R}_k:\ \sum_{i=1}^d\zeta_{k,i}^l\X{i}\rarrow{\rho_k} \sum_{i=1}^d\zeta_{k,i}^r\X{i},\ k=1,\ldots,K \end{equation} where $\rho_k\in\mathbb{R}_{>0}$ is the reaction rate parameter and $\zeta_k^l=\col(\zeta_{k,1}^l,\ldots,\zeta_{k,d}^l)$, $\zeta_k^r=\col(\zeta_{k,1}^r,\ldots,\zeta_{k,d}^r)$ are the left and right stoichiometric vectors of the reaction $\mathcal{R}_k$, respectively. The corresponding stoichiometric vector is hence given by $\zeta_k:=\zeta_k^r-\zeta_k^l\in\mathbb{Z}^d$, indicating that when this reaction fires, the state jumps from $x$ to $x+\zeta_k$.
The stoichiometric matrix $S\in\mathbb{Z}^{d\times K}$ of this reaction network is defined as $S:=\begin{bmatrix} \zeta_1\ \cdots\ \zeta_K \end{bmatrix}$. Under mass-action kinetics, the propensity function $\lambda_k$ of the reaction $\mathcal{R}_k$ is given by $\textstyle\lambda_k(x)=\rho_k\prod_{i=1}^d\frac{x_i!}{(x_i-\zeta_{k,i}^l)!}$. Under the well-mixed assumption, the above network can be described by a continuous-time Markov process $(X_1(t),\ldots,X_d(t))_{t\ge0}$ with the $d$-dimensional nonnegative lattice $\mathbb{Z}_{\ge0}^d$ as state-space; see e.g. \cite{Anderson:11}. \subsection*{The regulation/perfect adaptation problems and antithetic integral control} Let us consider here a stochastic reaction network $(\boldsymbol{X},\mathcal{R})$. The regulation problem consists of finding another reaction network (i.e. a set of additional species and additional reactions) interacting with $(\boldsymbol{X},\mathcal{R})$ in a way that makes the interconnection well-behaved (i.e. ergodic) and such that the mean of some molecular species $\X{\ell}$, for some given $\ell\in\{1,\ldots,d\}$, converges to a desired set-point (given here by $\mu/\theta$ for some $\mu,\theta>0$) in a robust way, i.e. irrespective of the values of the parameters of the network $(\boldsymbol{X},\mathcal{R})$. It was shown in \cite{Briat:15e} that, under some assumptions on the network $(\boldsymbol{X},\mathcal{R})$, the antithetic integral controller defined as \begin{equation}\label{eq:AIC} \underbrace{\phib\rarrow{\mu}\Z{1}}_{\mbox{reference}},\ \underbrace{\phib\rarrow{\theta X_\ell}\Z{2}}_{\mbox{measurement}},\ \underbrace{\Z{1}+\Z{2}\rarrow{\eta}\phib}_{\mbox{comparison}},\ \underbrace{\phib\rarrow{kZ_1}\X{1}}_{\mbox{actuation}}, \end{equation} solves the above regulation problem. This regulatory network consists of two additional species, $\Z{1}$ and $\Z{2}$, and four additional reactions.
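The mass-action propensity formula above, and its specialization to the four controller reactions in \eqref{eq:AIC}, can be sketched as follows (function and argument names are illustrative):

```python
def mass_action_propensity(rho, zeta_l, x):
    """General mass-action propensity rho * prod_i x_i!/(x_i - zeta_i^l)!,
    evaluated as a falling factorial to avoid large factorials."""
    val = rho
    for xi, zi in zip(x, zeta_l):
        for j in range(zi):
            val *= xi - j
    return val

def controller_propensities(x_l, z1, z2, mu, theta, k, eta):
    """Propensities of the four antithetic controller reactions:
    reference, measurement, comparison and actuation, in that order."""
    return [mu, theta * x_l, eta * z1 * z2, k * z1]
```

Note that the comparison propensity $\eta Z_1 Z_2$ vanishes whenever either controller species is absent, which is what keeps the state on the nonnegative lattice.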
The species $\Z{1}$ is referred to as the actuating species as it governs the rate of the actuation reaction, which produces the actuated species $\X{1}$ at a rate proportional to $Z_1$. The species $\Z{2}$ is the sensing species as it is produced at a rate proportional to the controlled species $\X{\ell}$ through the measurement reaction. The first reaction is the reference reaction as it encodes part of the set-point $\mu/\theta$, whereas the third reaction is the comparison reaction that compares the populations of the controller species and annihilates them accordingly, thereby closing the loop negatively and, at the same time, correlating the populations of the controller species. The comparison (or titration) reaction is the crucial element of the above controller network and, to realize such a reaction, one needs to rely on intrinsic strongly binding properties of certain molecules such as sigma- and anti-sigma-factors \cite{Briat:15e} or small RNAs and mRNAs \cite{Qian:17,Levine:07,Yoo:13}. \subsection*{Variance amplification in antithetic integral control} We discussed above the convergence properties of the mean level of the controlled species $\X{\ell}$ when the network $(\boldsymbol{X},\mathcal{R})$ is controlled with the antithetic integral controller \eqref{eq:AIC}. However, it was remarked in \cite{Briat:15e} that, while the mean of $\X{\ell}$ converges to the desired steady-state, the stationary variance of the controlled species could be much larger than the constitutive value that would be obtained by simply considering a naive constitutive production of the species $\X{1}$ leading to the same mean steady-state value $\mu/\theta$. This was interpreted as the price to pay for having the perfect adaptation property for the controlled species.
To illustrate this phenomenon, let us consider the following gene expression network: \begin{equation}\label{eq:gene_expression} \phib\rarrow{k_r}\X{1},\ \X{1}\rarrow{k_p}\X{1}+\X{2},\ \X{1}\rarrow{\gamma_r}\phib,\ \X{2}\rarrow{\gamma_p}\phib \end{equation} where $\X{1}$ denotes the mRNA and $\X{2}$ the protein. The objective here is to control the mean level of the protein by acting at a transcriptional level using the antithetic controller \eqref{eq:AIC}; hence, we set $k_r=kZ_1$. Using a tailored moment closure method, it is proved in the SI that the stationary variance $\textnormal{Var}_\pi^{I}(X_2)$ of the protein copy number is approximately given by the following expression \begin{equation}\label{eq:variance_ge} \textnormal{Var}_\pi^{I}(X_2)\approx\dfrac{\mu}{\theta}\left(\dfrac{1+\dfrac{k_p}{\gamma_r+\gamma_p}+\dfrac{kk_p}{\gamma_r\gamma_p}}{1-\dfrac{k\theta k_p}{\gamma_r\gamma_p(\gamma_r+\gamma_p)}}\right),\ k>0,\ k/\eta\ll1. \end{equation} The rationale for the assumption $k/\eta\ll1$ is that it allows for closing the moment equations (which are open because of the presence of the comparison reaction) and obtaining a closed-form solution for the stationary variance. On the other hand, the constitutive (i.e. open-loop) stationary variance $\textnormal{Var}_\pi^{OL}(X_2)$ of the protein copy number obtained with the constitutive strategy \begin{equation} k_r=\dfrac{\mu}{\theta}\dfrac{\gamma_r\gamma_p}{k_p} \end{equation} is given by \begin{equation} \textnormal{Var}_\pi^{OL}(X_2)=\dfrac{\mu}{\theta}\left(1+\dfrac{k_p}{\gamma_r+\gamma_p}\right). \end{equation} It is immediate to see that the ratio \begin{equation} \dfrac{\textnormal{Var}_\pi^{I}(X_2)}{\textnormal{Var}_\pi^{OL}(X_2)}\approx\dfrac{1+\dfrac{kk_p}{\gamma_r\gamma_p(\gamma_r+\gamma_p)}}{1-\dfrac{k\theta k_p(k_p+\gamma_r+\gamma_p)}{\gamma_r\gamma_p(\gamma_r+\gamma_p)}},\ k,\theta>0,\ k/\eta\ll1 \end{equation} is greater than 1 for all $k,\theta>0$ such that the denominator is positive.
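The closed-form approximation \eqref{eq:variance_ge} and the constitutive variance are straightforward to evaluate; a minimal sketch with illustrative rate constants (the numerical values in the test are arbitrary, not fitted to any biological system):

```python
def var_antithetic(mu, theta, k, kp, gr, gp):
    """Approximate stationary protein variance under antithetic integral
    control (Eq. eq:variance_ge); valid for k/eta << 1 and a positive
    denominator. gr and gp are the mRNA and protein degradation rates."""
    num = 1.0 + kp / (gr + gp) + k * kp / (gr * gp)
    den = 1.0 - k * theta * kp / (gr * gp * (gr + gp))
    if den <= 0.0:
        raise ValueError("outside the domain of validity of the approximation")
    return (mu / theta) * num / den

def var_constitutive(mu, theta, kp, gr, gp):
    """Constitutive (open-loop) protein variance for the same set-point."""
    return (mu / theta) * (1.0 + kp / (gr + gp))
```

For any admissible $k,\theta>0$ the closed-loop value exceeds the constitutive one, and it grows monotonically with $k$, consistent with the variance-amplification discussion above.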
Note that the above formula is not valid when $k=0$ or $\theta=0$ since this would result in an open-loop network for which set-point regulation could not be achieved. This expression is also a monotonically increasing function of the gain $k$, a fact that was numerically observed in \cite{Briat:15e}. This means that choosing $k$ very small will only result in a small increase of the stationary variance of the controlled species when using an antithetic integral feedback. However, this will very likely result in very slow dynamics for the mean of the controlled species. Finally, it is important to stress that, while this formula is obviously not valid when the denominator is nonpositive, we know from \cite{Briat:15e} that, in the case of the gene expression network, the closed-loop network will be ergodic with converging first- and second-order moments for all $k>0$ and all $\theta>0$ (assuming that the ratio $\mu/\theta$ is kept constant). This inconsistency stems from the fact that the proposed theoretical approach relies on a tailored moment closure approximation that will turn out to be connected to the Hurwitz stability of a certain matrix, which may fail to be Hurwitz when the gain $k$ of the integrator is too large. This is elaborated on in the following sections. \subsection*{Negative feedback action} We will consider in this paper two types of negative feedback action. The first one, referred to as the \emph{ON/OFF proportional feedback}, is essentially theoretical and cannot be exactly implemented, but it may be seen as a local approximation of some more complex (e.g. nonlinear) repressing function. It is given by the reaction \begin{equation}\label{eq:djksjdl} \phib\rarrow{F(X_\ell)}\X{1} \end{equation} together with the propensity function $F(X_\ell)=K_p\max\{0,\mu-\theta X_\ell\}$, where $K_p$ is the so-called feedback gain/strength.
It is similar to the standard proportional feedback action used in control theory, with the difference that a regularizing function, in the form of a max function, is involved in order to restrict the propensity function to nonnegative values. Note that this controller can still be employed for the in-silico control of single cells using a stochastic controller since, in this case, we are no longer restricted to mass-action, Hill or Michaelis-Menten kinetics. This was notably considered in the case of in-silico population control in \cite{Briat:12c,Briat:13h,Guiver:15}. The second type of negative feedback action, referred to as the \emph{Hill feedback}, consists of the reaction \eqref{eq:djksjdl} but involves the non-cooperative repressing Hill function $F(X_\ell)=K_p/(1+X_\ell)$ as propensity function. This type of negative feedback is more realistic as such functions have empirically been shown to arise in many biochemical, physiological and epidemiological models; see e.g. \cite{Murray:02}. In both cases, the total rate of production of the molecular species $\X{1}$ can be expressed as the sum $kZ_1+F(X_\ell)$, which means that, at stationarity, we need to have $\mathbb{E}_\pi[kZ_1+F(X_\ell)]=u^*$, where $u^*$ is the value of the constitutive (i.e. deterministic) production rate for $\X{1}$ for which we would have $\mathbb{E}_\pi[X_\ell]=\mu/\theta$. Noting now that, for both negative feedback functions, we necessarily have $\mathbb{E}_\pi[F(X_\ell)]>0$, this means that if the gain $K_p$ is too large, the mean of the controlled species may not converge to the desired set-point, implying, in turn, that the overall controlled network will fail to be ergodic. This will notably be the case when $\mathbb{E}_\pi[F(X_\ell)]>u^*$.
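Both negative feedback propensities are one-liners; a minimal sketch (parameter names follow the text):

```python
def F_onoff(x_l, Kp, mu, theta):
    """ON/OFF proportional feedback: a proportional action on the error
    mu - theta*x_l, clipped at zero so the propensity stays nonnegative."""
    return Kp * max(0.0, mu - theta * x_l)

def F_hill(x_l, Kp):
    """Non-cooperative repressing Hill feedback."""
    return Kp / (1.0 + x_l)
```

Both functions are nonincreasing in $X_\ell$, and their worst-case values, $K_p\mu$ and $K_p$ respectively, are what yield the conservative ergodicity bounds discussed in the text.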
In particular, when $F(X_\ell)=K_p\max\{0,\mu-\theta X_\ell\}$, a very conservative sufficient condition for the closed-loop network to be ergodic is that $K_p<u^*/\mu$, whereas when $F(X_\ell)=K_p/(1+X_\ell)$, this condition becomes $K_p<u^*$. These conditions are obtained by considering the worst-case mean value of the negative feedback strategies, i.e. $K_p\mu$ and $K_p$, respectively. \section*{Results} \subsection*{Invariants for the antithetic integral controller} We describe some important invariant properties of the antithetic integral controller \eqref{eq:AIC} which are independent of the parameters of the controlled network, under the assumption that these invariants exist (i.e. are finite). The invariants \begin{align} \label{geinv1} \textnormal{Cov}_\pi (X_\ell,Z_1-Z_2) = \frac{\mu}{\theta}, \end{align} \begin{align} \label{geinv2} \mathbb{E}_\pi (Z_1Z_2) = \frac{\mu}{\eta}, \end{align} \begin{align} \label{geinv3} \mathbb{E}_\pi (Z^2_1Z_2) = \frac{\mu}{\eta} \left( 1 + \mathbb{E}_\pi(Z_1) \right) \end{align} and \begin{align} \label{geinv4} \mathbb{E}_\pi (Z_1Z^2_2) = \frac{\mu + \theta \mathbb{E}_\pi( X_\ell Z_2 ) }{\eta} \end{align} play an instrumental role in proving all the theoretical results of the paper. Interestingly, we can notice that $\textnormal{Cov}_\pi (X_\ell,Z_1-Z_2)=\mathbb{E}_\pi[X_\ell]$, which seems rather coincidental. From the second invariant we can observe that, if $\eta\gg\mu$, then $\mathbb{E}_\pi (Z_1Z_2)\approx0$, which indicates that the values taken by the random variable $Z_2(t)$ will be equal to 0 most of the time. Note that it cannot be $Z_1(t)$ that mostly takes zero values, since $\Z{1}$ is the actuating species whose mean must be nonzero (assuming here that the natural production rates of the molecular species in the controlled network are small). Similarly, setting $\eta$ large enough in the third expression leads to a similar conclusion.
Note that $\mathbb{E}_\pi(Z_1)$ is independent of $\eta$ here and only depends on the set-point $\mu/\theta$, the integrator gain $k$ and the parameters of the controlled network. The last expression again leads to similar conclusions. Indeed, if $\eta$ is sufficiently large, then $\mathbb{E}_\pi( X_\ell Z_2 )\approx0$ and, hence, $\mathbb{E}_\pi (Z_1Z^2_2) \approx 0$, which implies that $Z_2(t)$ needs to be equal to 0 most of the time. These properties are at the core of the moment closure method used to obtain an approximate closed-form formula for the covariance matrix of the closed-loop network. \subsection*{An approximate formula for the stationary variance of the controlled species} Let us assume here that the open-loop network $(\boldsymbol{X},\mathcal{R})$ is mass-action and involves, at most, unimolecular reactions. Hence, the vector of propensity functions can be written as \begin{equation} \lambda(x)=Wx+w_0 \end{equation} for some nonnegative matrix $W\in\mathbb{R}^{K\times d}$ and nonnegative vector $w_0\in\mathbb{R}^K$.
It is proved in the SI that, under the assumption $k/\eta\ll1$, we can overcome the moment closure problem arising from the presence of the annihilation reaction in the antithetic controller and show that the exact stationary covariance matrix of the network, given by \begin{equation*} \begin{bmatrix} \textnormal{Cov} _\pi^{PI}( X,X) & \textnormal{Cov} ^{PI}_\pi( X ,Z) \\ \textnormal{Cov} _\pi^{PI}( Z, X ) & \textnormal{Var} ^{PI}_\pi(Z) \end{bmatrix},\ Z:=Z_1-Z_2, \end{equation*} is approximately given by the matrix $\Sigma$ solving the Lyapunov equation \begin{equation}\label{eq:Lyapunov} R\Sigma + \Sigma R^T + Q = 0 \end{equation} where \begin{equation*} \begin{array}{rcl} R&=&\begin{bmatrix} SW - \beta e_1 e_\ell ^T & k e_1\\ -\theta e^T_\ell &0 \end{bmatrix},\\ Q&=&\begin{bmatrix} S D S^T + c e_1 e^T_1 & 0 \\ 0 & 2\mu \end{bmatrix},\\ D&=&\textnormal{diag}(W\mathbb{E}_\pi[X]+w_0),\\ c &=& - \dfrac{1}{ e^T_\ell (SW)^{-1}e_1 } \left( \dfrac{\mu}{\theta} + e^T_\ell (SW)^{-1}Sw_0 \right),\\ \beta&=&-\dfrac{ \textnormal{Cov} ^{PI}_\pi( F(X_\ell ), X_\ell)}{ \textnormal{Cov} ^{PI}_\pi( X_\ell , X_\ell)}. \end{array} \end{equation*} Note that, since the function $F$ is decreasing, the effective proportional gain $\beta$ is always a positive constant; it seems to depend mostly on $K_p$ and does not seem to change much when $k$ varies (see e.g. Figure \ref{fig:Gene_Prop_Beta} and Figure \ref{fig:Gene_Hill_Beta} in the appendix). It can also be seen that, for the Lyapunov equation to have a positive definite solution, the matrix $R$ needs to be Hurwitz stable; i.e. all its eigenvalues must have negative real part. In parallel, it is known from the results in \cite{Briat:15e} that the closed-loop network will remain ergodic when $\beta=0$ even when the matrix $R$ is not Hurwitz stable. In this regard, the formula \eqref{eq:Lyapunov} can only be valid when the parameters $\beta$ and $k$ are such that the matrix $R$ is Hurwitz stable.
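The mechanics of solving \eqref{eq:Lyapunov} can be sketched in a few lines of Python (this is an illustration, not the paper's computation: the controller gains and the diffusion matrix $Q$ below are placeholders, since the true $Q$ requires the stationary means):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Gene expression example parameters; k and beta are illustrative values
k_p, gamma_r, gamma_p = 2.0, 2.0, 7.0
mu, theta = 10.0, 2.0
k, beta = 3.0, 5.0

# Matrix R for the gene expression network (see the next subsection)
R = np.array([[-gamma_r, -beta, k],
              [k_p, -gamma_p, 0.0],
              [0.0, -theta, 0.0]])

# The formula is only meaningful when R is Hurwitz stable
assert np.all(np.linalg.eigvals(R).real < 0)

# Placeholder positive definite Q (the paper's Q needs the stationary means);
# any such Q illustrates the mechanics of the computation.
Q = np.diag([8.0, 150.0, 2.0 * mu])

# Solve R Sigma + Sigma R^T + Q = 0, i.e. R Sigma + Sigma R^T = -Q
Sigma = solve_continuous_lyapunov(R, -Q)
assert np.allclose(R @ Sigma + Sigma @ R.T, -Q)
```

Since $Q$ is positive definite and $R$ is Hurwitz stable here, the solution $\Sigma$ is the unique symmetric positive definite candidate covariance matrix.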
When this is not the case, the formula is outside its domain of validity and is meaningless. The stability of the matrix $R$ is discussed in more detail in the SI. \subsection*{Connection to deterministic proportional-integral control} Interestingly, the matrix $R$ coincides with the closed-loop system matrix of a deterministic linear system controlled with a particular proportional-integral controller. To demonstrate this fact, let us consider the following linear system \begin{equation} \begin{array}{rcl} \dot{x}(t)&=&SWx(t)+e_1u(t)\\ y(t)&=&e_\ell^Tx(t) \end{array} \end{equation} where $x$ is the state of the system, $u$ is the control input and $y$ is the measured/controlled output. We propose to use the following PI controller in order to robustly steer the output to a desired set-point $\mu/\theta$ \begin{equation} u(t)=\dfrac{\beta}{\theta}(\mu-\theta y(t))+k\int_0^t (\mu-\theta y(s))ds \end{equation} where $\theta$ is the sensor gain, $\beta/\theta$ is the proportional gain and $k$ is the integral gain. The closed-loop system is given in this case by \begin{equation} \begin{bmatrix} \dot{x}(t)\\ \dot{I}(t) \end{bmatrix}=\begin{bmatrix} SW - \beta e_1 e_\ell ^T & k e_1\\ -\theta e^T_\ell &0 \end{bmatrix} \begin{bmatrix} x(t)\\ I(t) \end{bmatrix}+\begin{bmatrix} \frac{\beta}{\theta}e_1\\ 1 \end{bmatrix}\mu \end{equation} where we can immediately recognize the $R$ matrix involved in the Lyapunov equation \eqref{eq:Lyapunov}. \subsection*{Example - Gene expression network} We present here the results obtained for the gene expression network \eqref{eq:gene_expression} using the two negative feedback actions. In particular, we will numerically verify the validity of the formula \eqref{eq:variance_ge} and study the influence of the controller parameters on various properties of the closed-loop network. The matrix $R$ is given in this case by \begin{equation} R=\begin{bmatrix} -\gamma_r & -\beta & k\\ k_p & -\gamma_p & 0\\ 0 & -\theta & 0 \end{bmatrix}.
\end{equation} It can be shown that the above matrix is Hurwitz stable (i.e. all its eigenvalues are located in the open left half-plane) if and only if the parameters $k,\beta>0$ satisfy the inequality \begin{equation}\label{eq:RH_gene} 1 - \dfrac{k \theta k_p}{ \gamma_r \gamma_p (\gamma_r +\gamma_p) }+ \dfrac{ \beta k_p }{ \gamma_r \gamma_p}>0. \end{equation} Hence, given $k>0$, the matrix $R$ will be Hurwitz stable for any sufficiently large $\beta>0$, illustrating the stabilizing effect of the proportional action. When the above condition is met, the closed-loop stationary variance $ \textnormal{Var} _\pi^{PI}(X_2)$ of the protein copy number is approximately given by the expression \begin{equation}\label{eq:vpi_gene} \textnormal{Var} _\pi^{PI}(X_2)\approx \Sigma_{22}= \dfrac{\mu}{\theta}\left[ \dfrac{1 + \dfrac{k_p}{ \gamma_r +\gamma_p } + \dfrac{k k_p}{ \gamma_r \gamma_p } +\dfrac{ \beta k_p }{ \gamma_r( \gamma_r +\gamma_p) } }{1 - \dfrac{k \theta k_p}{ \gamma_r \gamma_p (\gamma_r +\gamma_p) }+ \dfrac{ \beta k_p }{ \gamma_r \gamma_p} } \right]. \end{equation} For any fixed $k>0$ such that \eqref{eq:RH_gene} is satisfied, the closed-loop steady-state variance is a monotonically decreasing function of $\beta$. As a consequence, there will exist a $\beta_c>0$ such that \begin{equation} \Sigma_{22}<\dfrac{\mu}{\theta}\left(1+\dfrac{k_p}{\gamma_r+\gamma_p}\right) \end{equation} for all $\beta>\beta_c$. In particular, when $\beta\to\infty$, we have that \begin{equation} \Sigma_{22}\to\dfrac{\mu}{\theta}\dfrac{\gamma_p}{\gamma_r+\gamma_p}<\dfrac{\mu}{\theta}. \end{equation} We now analyze the results obtained with the antithetic integral controller combined with an ON/OFF proportional feedback. The first step is the numerical comparison of the approximate formula \eqref{eq:vpi_gene} with the stationary variance computed using $10^6$ SSA simulations with the parameters $k_p=2$, $\gamma_r=2$, $\gamma_p=7$, $\mu=10$, $\theta=2$ and $\eta=100$.
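The closed-form side of this comparison is easy to reproduce; the short Python check below (illustrative only) evaluates \eqref{eq:vpi_gene} at the parameters above with $k=3$ and confirms the monotonic decrease in $\beta$ and the limit $\frac{\mu}{\theta}\frac{\gamma_p}{\gamma_r+\gamma_p}$ stated earlier:

```python
import numpy as np

# Parameters from the text; k = 3 as in the simulations below
k_p, gamma_r, gamma_p = 2.0, 2.0, 7.0
mu, theta, k = 10.0, 2.0, 3.0

def sigma22(beta):
    """Approximate closed-loop protein variance, eq. (vpi_gene)."""
    num = (1 + k_p / (gamma_r + gamma_p)
             + k * k_p / (gamma_r * gamma_p)
             + beta * k_p / (gamma_r * (gamma_r + gamma_p)))
    den = (1 - k * theta * k_p / (gamma_r * gamma_p * (gamma_r + gamma_p))
             + beta * k_p / (gamma_r * gamma_p))
    return (mu / theta) * num / den

betas = np.linspace(0.5, 200.0, 50)
vals = np.array([sigma22(b) for b in betas])

# Monotone decrease in beta for this fixed admissible k ...
assert np.all(np.diff(vals) < 0)
# ... towards the limit (mu/theta) * gamma_p / (gamma_r + gamma_p) < mu/theta
limit = (mu / theta) * gamma_p / (gamma_r + gamma_p)
assert vals[-1] > limit and abs(sigma22(1e6) - limit) < 1e-3
```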
The absolute value of the relative error between the exact and the approximate stationary variance of the protein copy number for several values of the gains $k$ and $K_p$ is depicted in Figure \ref{fig:Gene_Prop_RE}. We can observe there that the relative error is less than 15\%, except when $k$ is very small, in which case the relative error is much larger. However, in this latter case, the mean trajectories do not have time to converge to their steady-state value and, therefore, what is depicted in the figure for this value is not very meaningful. In spite of that, we can observe that the approximation is reasonably accurate. We now look at the performance of the antithetic integral controller combined with an ON/OFF proportional feedback. Figure \ref{fig:Gene_Prop_E} depicts the trajectories of the mean protein copy number while Figure \ref{fig:Gene_Prop_V} depicts the trajectories of the variance of the protein copy number, both in the case where $k=3$. Regarding the mean copy number, we can observe that, while increasing $K_p$ initially seems to improve the transient phase, the dynamics become more and more abrupt at the start of the transient phase as the gain $K_p$ continues to increase, and slower and slower at the end of the transient phase, making the means very slow to converge to their set-point. On the other hand, we can see that the stationary variance seems to be a decreasing function of the gain $K_p$. More interestingly, when the gain $K_p$ exceeds 20, the stationary variance becomes smaller than its constitutive value. Figure \ref{fig:Gene_Prop_VS} helps establish the influence of the gains $k$ and $K_p$ on the stationary variance of the protein copy number. We can see that, for any $k$, increasing $K_p$ reduces the stationary variance while, for any $K_p$, reducing $k$ reduces the variance, as predicted by the approximate formula \eqref{eq:vpi_gene}. Hence, a suitable choice would be to pick $k$ small and $K_p$ large.
We now compare this choice for the parameters with the one that would lead to a small settling-time for the mean dynamics; see Figure \ref{fig:Gene_Prop_ST}. We immediately see that a small $k$ is not an option if one wants to have fast mean dynamics. A sweet spot in this case would be around the bottom-right corner, where the settling-time is the smallest. Interestingly, the variance is still at quite a low level there, even if sometimes higher than the constitutive value. We now perform the same analysis for the antithetic integral controller combined with the Hill feedback and first verify the accuracy of the approximate formula \eqref{eq:vpi_gene}. We can observe in Figure \ref{fig:Gene_Hill_RE} that the formula is very accurate in this case. To explain this, it is important to note that the gains $K_p$ in the two controllers are not directly comparable; only the values for the parameter $\beta$ are. For identical $K_p$'s, the value of $\beta$ for the ON/OFF proportional feedback is much larger than for the Hill feedback (see Figure \ref{fig:Gene_Prop_Beta} and Figure \ref{fig:Gene_Hill_Beta} in the appendix). Together, Figure \ref{fig:Gene_Prop_RE} and Figure \ref{fig:Gene_Hill_RE} simply say that the formula is very accurate when $\beta$ is small. We now look at the performance of the antithetic integral controller combined with a Hill feedback. As previously, Figure \ref{fig:Gene_Hill_E} depicts the trajectories of the mean protein copy number while Figure \ref{fig:Gene_Hill_V} depicts the trajectories of the variance of the protein copy number, both in the case where $k=3$. Regarding the mean copy number, we can observe that the dynamics are much more homogeneous than in the previous case and that increasing $K_p$ reduces the overshoot and, hence, the settling-time. This can again be explained by the fact that $\beta$ is much smaller in this case.
Similarly, the spread of the variances is much tighter than when using the other negative feedback, again because $\beta$ is small in this case. This homogeneity is well illustrated in Figure \ref{fig:Gene_Hill_VS} and Figure \ref{fig:Gene_Hill_ST}, where we conclude that there is a clear tradeoff between settling-time and stationary variance. As can be seen in Figure \ref{fig:Gene_Prop_E} and Figure \ref{fig:Gene_Hill_E}, the mean dynamics are quite different and it would be interesting to explain this difference in terms of control-theoretic ideas. A first explanation lies in the sensitivity of the parameter $\beta$ with respect to the feedback strength $K_p$. In the case of the ON/OFF proportional feedback, this sensitivity is quite high whereas it is very low in the case of the Hill feedback (see Figure \ref{fig:Gene_Prop_Beta} and Figure \ref{fig:Gene_Hill_Beta} in the appendix). This explains why the mean trajectories are very different in the case of the ON/OFF proportional feedback for different values of $K_p$ while the mean trajectories are very close to each other in the case of the Hill feedback. A second explanation lies in the type of feedback in use. Indeed, the ON/OFF proportional feedback is an error-feedback and, when combined with the antithetic integral controller, may introduce a stable zero in the mean dynamics. When increasing the negative feedback gain $K_p$, this zero moves towards the origin. Once very close to the origin, this zero will have an action on the closed-loop mean dynamics that is very close to a derivative action, leading to abrupt initial transient dynamics. On the other hand, the Hill feedback is an output-feedback that does not seem to introduce such a zero. A theoretical basis for this discussion is developed in more detail in the SI.
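The zero mentioned above can be made explicit: for the error-feedback PI controller of the previous section, $C(s)=\frac{\beta}{\theta}+\frac{k}{s}=\frac{\beta s+k\theta}{\theta s}$, so the closed-loop mean dynamics inherit a zero at $s=-k\theta/\beta$. The small Python sketch below (an illustration under the assumption that this error-feedback structure applies) shows the zero approaching the origin as $\beta$ grows:

```python
import numpy as np

# Error-feedback PI controller: u = (beta/theta)(mu - theta*y) + k*Int(mu - theta*y)
# has transfer function C(s) = (beta*s + k*theta) / (theta*s), with a single
# zero at s = -k*theta/beta.
k, theta = 3.0, 2.0

# The zero is the root of the numerator polynomial beta*s + k*theta
zeros = [np.roots([beta, k * theta])[0] for beta in (1.0, 10.0, 100.0)]

# Increasing beta (i.e. the feedback strength K_p) drags the zero towards the
# origin, where it acts on the mean dynamics much like a derivative term.
assert all(z < 0 for z in zeros)
assert zeros[0] < zeros[1] < zeros[2]
```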
\begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Gene_Prop_RError.pdf} \caption{Absolute value of the relative error between the exact stationary variance of the protein copy number and the approximate formula \eqref{eq:vpi_gene} when the gene expression network is controlled with the antithetic integral controller \eqref{eq:AIC} and an ON/OFF proportional controller.}\label{fig:Gene_Prop_RE} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Gene_Prop_E_edited.pdf} \caption{Mean trajectories for the protein copy number when the gene expression network is controlled with the antithetic integral controller \eqref{eq:AIC} with $k=3$ and an ON/OFF proportional controller. The set-point value is indicated as a black dotted line.}\label{fig:Gene_Prop_E} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Gene_Prop_V_edited.pdf} \caption{Variance trajectories for the protein copy number when the gene expression network is controlled with the antithetic integral controller \eqref{eq:AIC} with $k=3$ and an ON/OFF proportional controller. 
The stationary constitutive variance is depicted as a black dotted line.}\label{fig:Gene_Prop_V} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Gene_Prop_VS.pdf} \caption{Stationary variance for the protein copy number when the gene expression network is controlled with the antithetic integral controller \eqref{eq:AIC} and an ON/OFF proportional controller.}\label{fig:Gene_Prop_VS} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Gene_Prop_ST.pdf} \caption{Settling-time for the mean trajectories for the protein copy number when the gene expression network is controlled with the antithetic integral controller \eqref{eq:AIC} and an ON/OFF proportional controller.}\label{fig:Gene_Prop_ST} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Gene_Hill_RError.pdf} \caption{Absolute value of the relative error between the exact stationary variance of the protein copy number and the approximate formula \eqref{eq:vpi_gene} when the gene expression network is controlled with the antithetic integral controller \eqref{eq:AIC} and a Hill controller.}\label{fig:Gene_Hill_RE} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Gene_Hill_E_edited.pdf} \caption{Mean trajectories for the protein copy number when the gene expression network is controlled with the antithetic integral controller \eqref{eq:AIC} with $k=3$ and a Hill controller. The set-point value is indicated as a black dotted line.}\label{fig:Gene_Hill_E} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Gene_Hill_V_edited.pdf} \caption{Variance trajectories for the protein copy number when the gene expression network is controlled with the antithetic integral controller \eqref{eq:AIC} with $k=3$ and a Hill controller.
The stationary constitutive variance is depicted as a black dotted line.}\label{fig:Gene_Hill_V} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Gene_Hill_VS.pdf} \caption{Stationary variance for the protein copy number when the gene expression network is controlled with the antithetic integral controller \eqref{eq:AIC} and a Hill controller.}\label{fig:Gene_Hill_VS} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Gene_Hill_ST.pdf} \caption{Settling-time for the mean trajectories for the protein copy number when the gene expression network is controlled with the antithetic integral controller \eqref{eq:AIC} and a Hill controller.}\label{fig:Gene_Hill_ST} \end{figure} \subsection*{Example - Gene expression network with protein maturation} The results obtained in the previous section clearly only hold for the gene expression network, and it would be quite hasty to directly generalize those results to more complex unimolecular networks. This motivates the consideration of a slightly more complicated example, namely, the gene expression network involving a protein maturation reaction given by \begin{equation} \begin{array}{c} \phib\rarrow{k_r}\X{1},\ \X{1}\rarrow{k_p}\X{1}+\X{2}, \X{1}\rarrow{\gamma_r}\phib, \X{2}\rarrow{\gamma_p}\phib\\ \X{2}\rarrow{k_p'}\X{3}, \X{3}\rarrow{\gamma_p'}\phib \end{array} \end{equation} where, as before, $\X{1}$ denotes mRNA, $\X{2}$ denotes protein and, now, $\X{3}$ denotes the mature protein. In this case, the goal is to control the average mature protein copy number by, again, acting at a transcriptional level. As this network is still unimolecular, the proposed framework remains valid.
In particular, the matrix $R$ is given by \begin{equation} R=\begin{bmatrix} -\gamma_r & 0 & -\beta & k\\ k_p & -(\gamma_p+k_p') & 0 & 0\\ 0 & k_p' & -\gamma_p' & 0\\ 0 & 0 & -\theta & 0 \end{bmatrix} \end{equation} and is Hurwitz stable provided that the following two conditions are satisfied \begin{equation} \beta<\dfrac{1}{k_pk_p'}\left((\gamma_r + \gamma_p + \gamma_p' + k_p')(\gamma_r\gamma_p +\gamma_r\gamma_p' + \gamma_p\gamma_p' + \gamma_rk_p' + \gamma_p'k_p')-\gamma_r\gamma_p'(\gamma_p+k_p')\right) \end{equation} and \begin{equation} k_p^2k_p'^2\beta^2+\sigma_1\beta+\sigma_0<0 \end{equation} where \begin{equation} \begin{array}{rcl} \sigma_1&=&- k_pk_p'(\gamma_r + \gamma_p'+ \gamma_p + k_p')(\gamma_r\gamma_p + \gamma_r\gamma_p' + \gamma_p\gamma_p' + \gamma_rk_p' + \gamma_p'k_p')\\ && +2\gamma_r\gamma_p'k_pk_p'(\gamma_p + k_p'),\\ \sigma_0&=& - \gamma_r\gamma_p'(\gamma_p + k_p')(\gamma_r + \gamma_p + \gamma_p' + k_p')(\gamma_r\gamma_p + \gamma_r\gamma_p' + \gamma_p\gamma_p' + \gamma_rk_p' + \gamma_p'k_p')\\ && +\gamma_r^2\gamma_p'^2(\gamma_p + k_p')^2+ kk_pk_p'\theta(\gamma_r + \gamma_p + \gamma_p' + k_p')^2. \end{array} \end{equation} Considering, for instance, the following parameters $k_p=1$, $\gamma_r=2$, $\gamma_p=1$, $k_p'=3$, $\gamma_p'=1$, $\mu=10$, $\theta=2$ and $\eta=100$, the above conditions reduce to \begin{equation} \beta<30 \end{equation} and \begin{equation} 9\beta^2 - 246\beta + 294k - 720<0. \end{equation} The intersection of these conditions yields the stability conditions \begin{equation} k\in(0,49/6) \textnormal{ and } \beta\in\left(\dfrac{41-7\sqrt{49-6k}}{3}, \dfrac{41+7\sqrt{49-6k}}{3}\right)\cap(0,\infty). \end{equation} It can be verified that for values on the boundary of at least one of those intervals, the matrix $R$ has eigenvalues on the imaginary axis.
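The derived stability region can be cross-checked against a direct eigenvalue computation; the following Python sketch (illustrative only, with an arbitrarily chosen admissible pair $(k,\beta)$) does so:

```python
import numpy as np

# Gene expression with protein maturation: parameter values from the text
k_p, gamma_r, gamma_p = 1.0, 2.0, 1.0
k_pp, gamma_pp = 3.0, 1.0     # k_p' and gamma_p'
theta = 2.0
k, beta = 3.0, 5.0            # a point chosen inside the derived stability region

R = np.array([[-gamma_r, 0.0, -beta, k],
              [k_p, -(gamma_p + k_pp), 0.0, 0.0],
              [0.0, k_pp, -gamma_pp, 0.0],
              [0.0, 0.0, -theta, 0.0]])

# Derived stability region for these parameter values:
#   k in (0, 49/6) and beta in ((41 - 7*sqrt(49-6k))/3, (41 + 7*sqrt(49-6k))/3)
lo = (41 - 7 * np.sqrt(49 - 6 * k)) / 3
hi = (41 + 7 * np.sqrt(49 - 6 * k)) / 3
assert 0 < k < 49 / 6 and lo < beta < hi

# The eigenvalue test agrees: R is Hurwitz stable at this point
assert np.all(np.linalg.eigvals(R).real < 0)
```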
Standard calculations on the moments equation show that the open-loop variance is given by \begin{equation}\label{eq:vpi_mat} \textnormal{Var} _\pi^{OL}(X_3)=\dfrac{\mu}{\theta}\left(1+k_pk_p'\dfrac{k_p' + \gamma_r + \gamma_p + \gamma_p'}{(\gamma_r + \gamma_p')(\gamma_r + \gamma_p + k_p')(\gamma_p + \gamma_p' + k_p')}\right). \end{equation} With the numerical values for the parameters previously given, the open-loop variance is approximately equal to $37/6\approx6.1667$. The closed-loop variance, however, is approximately given by \begin{equation} \textnormal{Var} _\pi^{PI}(X_3)\approx\Sigma_{33}=\dfrac{\mu}{\theta}\left(\dfrac{\dfrac{\theta}{\mu} \textnormal{Var} _\pi^{OL}(X_3)+\dfrac{\zeta_k}{\zeta_d}k+\dfrac{\zeta_\beta}{\zeta_d}\beta+\dfrac{\zeta_{k\beta}}{\zeta_d}k\beta}{1+\dfrac{\xi_k}{\xi_d}k+\dfrac{\xi_\beta}{\xi_d}\beta+\dfrac{\xi_{\beta^2}}{\xi_d}\beta^2}\right) \end{equation} where \begin{equation} \begin{array}{rcl} \xi_d&=&\gamma_r\gamma_p'(\gamma_r + \gamma_p')(\gamma_p + k_p')(\gamma_r + \gamma_p + k_p')(\gamma_p + \gamma_p' + k_p')\\ \xi_k&=&-k_pk_p'\theta(\gamma_r + \gamma_p + \gamma_p' + k_p')^2\\ \xi_\beta&=&k_pk_p'(\gamma_r^2\gamma_p + \gamma_r^2\gamma_p' + \gamma_r^2k_p' + \gamma_r\gamma_p^2 + \gamma_r\gamma_p\gamma_p' + 2\gamma_r\gamma_pk_p' + \gamma_r\gamma_p'^2\\ &&+ \gamma_r\gamma_p'k_p' + \gamma_rk_p'^2 + \gamma_p^2\gamma_p' + \gamma_p\gamma_p'^2 + 2\gamma_p\gamma_p'k_p' + \gamma_p'^2k_p' + \gamma_p'k_p'^2)\\ \xi_{\beta^2}&=&-k_p^2k_p'^2 \end{array} \end{equation} and \begin{equation} \begin{array}{rcl} \zeta_d&=&\xi_d\\ \zeta_k&=&k_pk_p'(\gamma_r^2\gamma_p + \gamma_r^2\gamma_p' + \gamma_r^2k_p' + \gamma_r\gamma_p^2 + 2\gamma_r\gamma_p\gamma_p' + 2\gamma_r\gamma_pk_p' + \gamma_r\gamma_p'^2\\ &&+ 2\gamma_r\gamma_p'k_p' - \theta\gamma_r\gamma_p' + \gamma_rk_p'^2 + \gamma_p^2\gamma_p' + \gamma_p\gamma_p'^2 + 2\gamma_p\gamma_p'k_p' - \theta\gamma_p\gamma_p' \\ &&+ \gamma_p'^2k_p' - \theta\gamma_p'^2 + \gamma_p'k_p'^2 - \theta\gamma_p'k_p')\\ 
\zeta_\beta&=&\gamma_p'k_pk_p'(\gamma_r^2 + \gamma_r\gamma_p + \gamma_rk_p' + \gamma_p'\gamma_r + \gamma_p^2 + 2\gamma_pk_p' + \gamma_p'\gamma_p + k_p'^2 + \gamma_p'k_p')\\ \zeta_{k\beta}&=&-k_p^2k_p'^2. \end{array} \end{equation} This expression is more complex than, yet very similar to, the formula \eqref{eq:vpi_gene} obtained for the simple gene expression network. For the considered set of parameter values, the approximated variance is a nonmonotonic function of the parameter $\beta$, as can be theoretically observed in Figure \ref{fig:Mat_NM} in the appendix. It turns out that this behavior can also be observed in the numerical simulations depicted in Figure \ref{fig:Mat_Prop_V_NM} in the appendix, where we can see that the variance indeed exhibits this nonmonotonic behavior. However, it should also be pointed out that this increase is accompanied by the emergence of a tracking error for the mean dynamics (see Figure \ref{fig:Mat_Prop_V_NM} in the appendix) and a loss of ergodicity for the overall controlled network, as emphasized by diverging mean dynamics for the sensing species (see Figure \ref{fig:Mat_Prop_Z2_NM} in the appendix). This contrasts with the gene expression case where the variance was a monotonically decreasing function of $\beta$. Regarding the mean dynamics, we can see that increasing $K_p$ and, hence, $\beta$, to reasonable levels improves the settling-time, as depicted in Figure \ref{fig:Mat_Prop_E} for the special case of $k=3$. However, this is far from being the general case since the settling-time can exhibit a quite complex behavior for this network (see Figure \ref{fig:Mat_Prop_ST}). The stationary variance depicted in Figure \ref{fig:Mat_Prop_VS} exhibits here a rather standard and predictable behavior, where a small $k$ and a large $K_p$ both lead to its reduction.
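The open-loop formula \eqref{eq:vpi_mat} can be verified exactly with rational arithmetic; the snippet below (an illustrative check, not part of the paper) recovers the stated value $37/6$:

```python
from fractions import Fraction as F

# Open-loop variance of the mature protein, eq. (vpi_mat), evaluated at the
# parameter values used in the text
k_p, gamma_r, gamma_p = F(1), F(2), F(1)
k_pp, gamma_pp = F(3), F(1)          # k_p' and gamma_p'
mu, theta = F(10), F(2)

var_ol = (mu / theta) * (
    1 + k_p * k_pp * (k_pp + gamma_r + gamma_p + gamma_pp)
        / ((gamma_r + gamma_pp) * (gamma_r + gamma_p + k_pp)
           * (gamma_p + gamma_pp + k_pp)))

assert var_ol == F(37, 6)            # ~ 6.1667, as stated in the text
```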
Similar conclusions can be drawn when the network is controlled with a Hill negative feedback controller; see Figure \ref{fig:Mat_Hill_E}, Figure \ref{fig:Mat_Hill_V}, Figure \ref{fig:Mat_Hill_VS} and Figure \ref{fig:Mat_Hill_ST} in the appendix. \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Mat_Prop_E_edited.pdf} \caption{Mean trajectories for the mature protein copy number when the gene expression network with protein maturation is controlled with the antithetic integral controller \eqref{eq:AIC} with $k=3$ and an ON/OFF proportional controller. The set-point value is indicated as a black dotted line.}\label{fig:Mat_Prop_E} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Mat_Prop_V_edited.pdf} \caption{Variance trajectories for the mature protein copy number when the gene expression network with protein maturation is controlled with the antithetic integral controller \eqref{eq:AIC} with $k=3$ and an ON/OFF proportional controller.
The stationary constitutive variance is depicted as a black dotted line.}\label{fig:Mat_Prop_V} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Mat_Prop_VS.pdf} \caption{Stationary variance for the mature protein copy number when the gene expression network with protein maturation is controlled with the antithetic integral controller \eqref{eq:AIC} and an ON/OFF proportional controller.}\label{fig:Mat_Prop_VS} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Mat_Prop_ST.pdf} \caption{Settling-time for the mean trajectories for the mature protein copy number when the gene expression network with protein maturation is controlled with the antithetic integral controller \eqref{eq:AIC} and an ON/OFF proportional controller.}\label{fig:Mat_Prop_ST} \end{figure} \subsection*{Example - Gene expression network with protein dimerization} The proposed theory is only valid for unimolecular networks but, in spite of that, it is still interesting to see whether similar conclusions could be obtained for a network that is not unimolecular. This motivates the consideration of the following gene expression network with protein dimerization: \begin{equation} \begin{array}{c} \phib\rarrow{k_r}\X{1},\ \X{1}\rarrow{k_p}\X{1}+\X{2}, \X{1}\rarrow{\gamma_r}\phib, \X{2}\rarrow{\gamma_p}\phib\\ \X{2}+\X{2}\rarrow{k_d}\X{3}, \X{3}\rarrow{\gamma_d} \X{2}+\X{2}, \X{3}\rarrow{\gamma_d'}\phib \end{array} \end{equation} where, as before, $\X{1}$ denotes mRNA, $\X{2}$ denotes protein but, now, $\X{3}$ denotes a protein homodimer. In this case, the Lyapunov equation \eqref{eq:Lyapunov} is not valid anymore because of the presence of the dimerization reaction, but we can still perform stochastic simulations. The considered parameter values are given by $k_p=1$, $\gamma_r=2$, $\gamma_p=1$, $k_d=3$, $\gamma_d=\gamma_d'=1$, $\mu=10$, $\theta=2$ and $\eta=100$.
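Since the Lyapunov-based formula no longer applies here, one falls back on stochastic simulation. A minimal Gillespie (SSA) sketch of this closed-loop network is given below; it is illustrative only, and the controller gains $k$ and $K_p$ as well as the stochastic mass-action convention used for the dimerization propensity are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Network parameters from the text; controller gains are illustrative
k_p, gamma_r, gamma_p = 1.0, 2.0, 1.0
k_d, gamma_d, gamma_dp = 3.0, 1.0, 1.0
mu, theta, eta = 10.0, 2.0, 100.0
k, K_p = 3.0, 5.0

# State: [X1 (mRNA), X2 (protein), X3 (homodimer), Z1, Z2]
# Stoichiometry: one row per reaction
V = np.array([
    [ 1,  0,  0,  0,  0],   # Z1 -> Z1 + X1 (actuation)
    [ 1,  0,  0,  0,  0],   # 0 -> X1 (ON/OFF proportional feedback)
    [ 0,  1,  0,  0,  0],   # X1 -> X1 + X2
    [-1,  0,  0,  0,  0],   # X1 -> 0
    [ 0, -1,  0,  0,  0],   # X2 -> 0
    [ 0, -2,  1,  0,  0],   # X2 + X2 -> X3
    [ 0,  2, -1,  0,  0],   # X3 -> X2 + X2
    [ 0,  0, -1,  0,  0],   # X3 -> 0
    [ 0,  0,  0,  1,  0],   # 0 -> Z1 (reference)
    [ 0,  0,  0,  0,  1],   # X3 -> X3 + Z2 (sensing)
    [ 0,  0,  0, -1, -1],   # Z1 + Z2 -> 0 (annihilation)
])

def propensities(x):
    X1, X2, X3, Z1, Z2 = x
    return np.array([
        k * Z1,
        K_p * max(0.0, mu - theta * X3),
        k_p * X1,
        gamma_r * X1,
        gamma_p * X2,
        k_d * X2 * (X2 - 1),   # stochastic mass-action convention assumed
        gamma_d * X3,
        gamma_dp * X3,
        mu,
        theta * X3,
        eta * Z1 * Z2,
    ])

def ssa(x0, t_end, max_events=200_000):
    """Single Gillespie trajectory up to time t_end."""
    t, x = 0.0, np.array(x0, dtype=float)
    for _ in range(max_events):
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0.0 or t >= t_end:
            break
        t += rng.exponential(1.0 / a0)   # time to next reaction
        x += V[rng.choice(len(a), p=a / a0)]
    return x

x_end = ssa(np.zeros(5), t_end=2.0)
assert np.all(x_end >= 0)
```

Estimating the stationary mean and variance of $\X{3}$ then amounts to averaging many such trajectories, as done with the $10^6$ SSA runs reported above.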
We can see in Figure \ref{fig:Dimer_Prop_E}, Figure \ref{fig:Dimer_Prop_V}, Figure \ref{fig:Dimer_Prop_VS} and Figure \ref{fig:Dimer_Prop_ST} that conclusions similar to those obtained in the unimolecular case can be drawn for this network. \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Dim_Prop_E_edited.pdf} \caption{Mean trajectories for the homodimer copy number when the gene expression network with protein dimerization is controlled with the antithetic integral controller \eqref{eq:AIC} with $k=3$ and an ON/OFF proportional controller. The set-point value is indicated as a black dotted line.}\label{fig:Dimer_Prop_E} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Dim_Prop_V_edited.pdf} \caption{Variance trajectories for the homodimer copy number when the gene expression network with protein dimerization is controlled with the antithetic integral controller \eqref{eq:AIC} with $k=3$ and an ON/OFF proportional controller. The stationary constitutive variance is depicted as a black dotted line.}\label{fig:Dimer_Prop_V} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Dim_Prop_VS.pdf} \caption{Stationary variance for the homodimer copy number when the gene expression network with protein dimerization is controlled with the antithetic integral controller \eqref{eq:AIC} and an ON/OFF proportional controller.}\label{fig:Dimer_Prop_VS} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{./Dim_Prop_ST.pdf} \caption{Settling-time for the mean trajectories for the homodimer copy number when the gene expression network with protein dimerization is controlled with the antithetic integral controller \eqref{eq:AIC} and an ON/OFF proportional controller.}\label{fig:Dimer_Prop_ST} \end{figure} \section*{Discussion} Adjoining a negative feedback strategy to the antithetic integral controller was shown to reduce the stationary variance for the controlled species, an effect that was expected from previous studies and predicted by the obtained theoretical results.
The structure of the negative feedback strategy was notably emphasized to have important consequences on the magnitude of the variance reduction. Indeed, the ON/OFF controller can be used to dramatically reduce the variance while still preserving the ergodicity of the closed-loop network. This is mainly because the effective proportional gain $\beta$ is very sensitive to changes in the feedback strength $K_p$ and can reach reasonably large values (still smaller than $K_p$); see Fig. \ref{fig:Gene_Prop_Beta} in the appendix. The preservation of the ergodicity property for the closed-loop network comes from the fact that $\mathbb{E}_\pi[K_p\max\{0,\mu-\theta X_\ell\}]$ remains smaller than the value of the nominal stationary control input (the constant input for which the stationary mean of the controlled species equals the desired set-point) for a wide range of values for $K_p$. Regarding the mean dynamics, this feedback leads to a decrease of the settling-time but also leads to abrupt transient dynamics for large values of $K_p$ because of the presence of a stable zero in the mean closed-loop dynamics that is inversely proportional to $\beta$ (which is very sensitive to changes in $K_p$ in this case and which can reach high values). Unfortunately, this controller cannot be implemented in-vivo because it does not admit any reaction network implementation. However, it can still be implemented in-silico for stochastic single-cell control or for the control of cell populations using, for instance, targeted optogenetics; see e.g. \cite{Rullan:17}. On the other hand, the Hill feedback, while being practically implementable, has a much less dramatic impact on the stationary variance and on the mean dynamics. The first reason is that the effective proportional gain $\beta$ is less sensitive to changes in $K_p$ and remains very small even when $K_p$ is large; see Fig. \ref{fig:Gene_Hill_Beta}.
The absence of a zero does not lead to any abrupt transient dynamics, even for large values of $K_p$, but this may also be due to the fact that $\beta$ always remains small, as opposed to the ON/OFF proportional feedback case. A serious issue with this feedback is that ergodicity can be easily lost since $\mathbb{E}_\pi[K_p/(1+X_\ell)]$ becomes very quickly larger than the value of the nominal control input as we increase $K_p$. The properties of both feedback strategies are summarized in Table \ref{tab:statsvsprop}. To prove the main theoretical results, a tailored closure method had to be developed to deal with the bimolecular comparison reaction. A similar one has also been suggested in \cite{Olsman:17} for exactly the same purpose. These methods rely on the assumption that the molecular count of the controller species $\Z{2}$ is, most of the time, equal to 0, a property that is ensured by assuming that $k/\eta\ll 1$. This allowed for the simplification and the closure of the moment equations. The theory was only developed for unimolecular networks because of the solvability of the associated moment equations. However, the extension of those theoretical results to more general reaction networks, such as bimolecular networks, is a difficult task, mainly because of the moment closure problem that is now also present at the level of the species of the controlled network. In this regard, this extension is, at the moment, only possible using existing moment closure methods (see e.g. \cite{Hespanha:08b,Milner:11,Smadbeck:13}) which are known to be potentially very inaccurate and would then compromise the validity of the obtained approximation. We believe that obtaining accurate and general theoretical approximations for the stationary variance for bimolecular networks is currently out of reach. It is also unclear whether the obtained qualitative and quantitative results still hold when the assumption $k/\eta\ll1$ on the controller parameters is not met.
Interestingly, the results obtained in the current paper provide some insights on an unexpected connection between deterministic PI control and its stochastic analogues. In particular, it is possible to observe that the destabilizing effect of deterministic integral control is analogous to the variance increase due to the use of the stochastic antithetic integral controller. In a similar way, the stabilizing property of deterministic proportional controllers is the deterministic analogue of the property of variance decrease of the stochastic proportional controller; see Table \ref{tab:detvsstoc}. The controller considered in this paper is clearly analogous to PI controllers. A usual complementary element is the so-called derivative action (or a filtered version of it), which adds an anticipatory effect to the controller and prevents high overshoot; see \cite{Astrom:95}. So far, filtered versions of the derivative action have only been proposed in the deterministic setting. Notably, the incoherent feedforward loop locally behaves like a filtered derivative action and, more recently, a reaction network approximating a filtered derivative action was proposed in \cite{Halter:17}. It is unclear at the moment whether a stochastic version of the derivative action can be found, but it is quite possible that such a stochastic derivative action can be implemented in terms of elementary reactions. The negative feedback strategy considered here is an ideal/simplified one. Indeed, it was assumed in this paper that the controlled species was directly involved in the negative feedback. However, it is very likely that the controlled species may not be directly usable in the feedback, that intermediary species may be involved (e.g.
a gene expression network is involved in the feedback) or that the feedback is in terms of a species upstream the controlled species (for instance feedback uses a protein while the controlled species is the corresponding homodimer). The theory may be adapted to deal with such cases as long as the controlled network is unimolecular. It is however expected that the same qualitative behavior will be observed. The reason for that is that in unimolecular networks, species cooperate in the sense that they act positively on each other. Hence, decreasing the variance of one species will also decrease the variance of all the species that are created from it. For instance, in a gene expression network, if the mRNA variance is decreased, the protein variance will decrease as well, and vice-versa. Finally, the implementation of such negative feedback loops is an important, yet elusive, task. It is unclear at the moment how in-vivo experiments could be conducted. Preliminary experimental results to validate the theoretical/computational ones could be obtained using optogenetics and single-cell control for population control. In-vivo experiments will certainly require a lot more effort. 
\begin{table} \centering \caption{Effects of the different feedback strategies on the mean dynamics and the stationary variance.}\label{tab:statsvsprop} \begin{tabular}{|c||c|c|} \hline & ON/OFF Proportional Feedback & Hill Feedback\\ \hline \hline Ergodicity & robust (+) & fragile (-)\\ \hline $\beta$ & very sensitive (+) & poorly sensitive (-) \\ & wide range (+) & small range (-) \\ \hline Mean Dynamics & reduce settling-time (+) & reduce settling-time (+)\\ & zero dynamics (-) & no zero dynamics (+)\\ \hline Stationary variance & dramatic reduction (++) & slight reduction (+)\\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{The effects of the proportional and integral actions on the dynamics of a system in both the deterministic and stochastic setting.}\label{tab:detvsstoc} \begin{tabular}{|c||c|c|} \hline & Integral action & Proportional action\\ \hline \hline Deterministic & regulation (+) & no regulation (-)\\ Setting & destabilizing (-) & stabilizing (+)\\ \hline Stochastic & regulation (+) & no regulation (-)\\ Setting & increases variance (-) & decreases variance (+)\\ \hline \end{tabular} \end{table}
\section{\label{sec:level1}Introduction} High resolution studies of atomic Rydberg states in the presence of external static electric and magnetic fields have proved to be exceedingly fruitful for the investigation of atomic dynamics because, owing to the large radial extent and weak binding of atomic Rydberg levels, the effects of external static fields are much more significant for Rydberg levels than for atomic ground or low-lying excited states~\cite{Braun1993}. Consequently for more than a quarter century (up to the present) experimentalists and theorists have been investigating atomic Rydberg spectra in external fields, including in particular the interesting case of crossed static electric and magnetic fields~\cite{Braun1993, Crosswhite1979, Gay1979, Korevaar1983, Clark83, Clark1985, Nessmann1987, Fauth1987, Hare1988, Wieb89,Gay1989, Vincke1992, Yeaz93, Dippel1994, Uzer1994, Uzer1996, Connerade1997, Neum97, Uzer1997a, Uzer1997b, Taylor1997, Sadovskii1998, Wunner1998, Cederbaum1999, Cushman1999, Cushman2000, Delos2001, Uzer2001, Schmelcher2001, Taylor2002a, Freu02, Taylor2002b, Valent2003, Wunner2003, Abdu04,Conn05}. 
These latter investigations for the crossed field case include studies of motional Stark effects on Rydberg atom spectra in a magnetic field~\cite{Crosswhite1979}, of novel, highly excited resonance states~\cite{Gay1979, Clark1985, Nessmann1987, Fauth1987, Wieb89, Vincke1992, Dippel1994, Connerade1997, Cederbaum1999}, of circular Rydberg states~\cite{Hare1988}, of Rydberg wave packets in crossed fields~\cite{Gay1989, Yeaz93}, of non-hydrogenic signatures in Rydberg spectra~\cite{Taylor1997, Wunner1998}, of doubly-excited states in crossed fields~\cite{Schmelcher2001}, of recurrence spectra~\cite{Taylor2002a,Freu02, Taylor2002b}, and of various aspects of electron dynamics in combined Coulomb and crossed static electric and magnetic fields~\cite{Braun1993, Uzer1994, Uzer1996, Neum97, Uzer1997a, Uzer1997b, Sadovskii1998,Cushman1999, Cushman2000, Delos2001, Uzer2001, Valent2003, Wunner2003,Abdu04,Conn05}. The related problem of photodetachment of a weakly bound electron (e.g., as in photodetachment of a negative ion) in the presence of crossed static electric and magnetic fields has been the subject of fewer investigations despite its having a comparably rich spectrum. (Note that the weakly bound electron in a negative ion can simply decay, or become detached, solely due to the presence of the external static electric and magnetic fields, a process that has long been studied theoretically, as in, e.g.,~\cite{Drukarev1972,Popov1998}.) Experimentally, crossed field effects have been found to be significant in photodetachment of negative ions in the presence of a static magnetic field owing to the influence of the motional electric field experienced by the detached electron~\cite{Blumberg1979, Yuki97}. 
The photodetachment spectrum of H$^-$ in the presence of crossed static electric and magnetic fields has been treated theoretically by Fabrikant~\cite{Fabr91} and by Peters and Delos~\cite{Peter93,Peter93b}; a generalization to the case of photodetachment of H$^-$ in the presence of static electric and magnetic fields of arbitrary orientation has been given by Liu et al.~\cite{Liu96,Liu97,Liu97a}. In each of these works the static fields are assumed to be sufficiently weak that they do not affect the relatively compact initial state. Fabrikant~\cite{Fabr91} gave the first quantum treatment of single photon detachment in crossed static electric and magnetic fields using the zero-range potential model to describe the initial state of H$^-$; rescattering of the electron from the potential was also investigated, although the effect was found to be small except for high magnetic field strengths. Peters and Delos~\cite{Peter93} gave a semiclassical analysis of H$^-$ photodetachment in crossed fields and correlated significant features of the spectrum with closed classical orbits. Subsequently they derived quantum formulas for this process (using the zero-range potential model for the initial state) and exhibited the connection to their predicted classical closed periodic orbits~\cite{Peter93b}. The generalization of Liu et al.~\cite{Liu96,Liu97,Liu97a} to the case of static electric and magnetic fields of arbitrary orientation is also based upon the zero-range potential model. In all of these works the electromagnetic field that causes photodetachment is assumed to be weak and monochromatic. Also, the photodetachment spectrum is analyzed numerically only over a very small energy range above threshold. In this paper we consider detachment of H$^-$ by a short laser pulse in the presence of crossed static electric and magnetic fields. 
We present an analytic expression for the final state of the detached electron taking into account exactly the effects of the laser field as well as both static fields. The initial state is described by the solution of the zero-range potential, as in all other quantum treatments to date~\cite{Fabr91,Peter93b,Liu96,Liu97,Liu97a}. We present also an analytic expression for the photodetachment transition amplitude that can be used to describe the probabilities of {\it {multiphoton}} detachment in crossed fields. In this paper, however, our focus is on single photon detachment by short laser pulses and on the connection between the detached electron wave packet motion and the predicted classical closed periodic orbits of Peters and Delos~\cite{Peter93}. As noted by Alber and Zoller~\cite{Alber1991} (in connection with electronic wave packets in Rydberg atoms), such wave packets ``provide a bridge between quantum mechanics and the classical concept of the trajectory of a particle'' and ``the evolution of these wave packets provides real-time observations of atomic or molecular dynamics.'' We show this connection for the case of short pulse laser-detached electron wave packets in crossed static electric and magnetic fields. In addition, we show analytically how our short pulse results reduce to the quantum monochromatic field results of Fabrikant~\cite{Fabr91} and Peters and Delos~\cite{Peter93b} in the long pulse limit as well as the connection between our analytic quantum formulation for the photodetachment spectrum and those features that we associate with the predicted classical closed orbits~\cite{Peter93}. Finally, we present numerical results in the long pulse limit over a large energy range above the single photon detachment threshold in order to demonstrate clearly these manifestations of classical behavior in our predicted photodetachment spectrum. This paper is organized as follows: In Sec. 
II we present our theoretical formulation for detachment of H$^-$ by a short laser pulse in the presence of crossed static electric and magnetic fields. In particular, in this section (with details given in an Appendix) we present an exact, analytic expression for the wave function for an electron interacting with both the laser pulse and the crossed static electric and magnetic fields. We present here also analytic expressions for the transition probability amplitudes for both a single laser pulse and a double laser pulse (i.e., two coherent single pulses separated by a time delay). In addition, the long pulse (monochromatic field) limit of our results is presented and this result is compared with a number of prior works for various static field cases. In Sec. III we establish the connection between the long pulse limit of our results and the closed classical periodic orbits predicted by Peters and Delos~\cite{Peter93}. In Sec. IV we present our numerical results, starting first with a comparison with prior results for the long pulse (monochromatic field) case and then examining the short pulse case, including the final state motion of the detached electron wave packets. \section{\label{sec:level2}Theoretical Formulation} We consider photodetachment of H$^-$ by one or more short laser pulses in the presence of crossed static electric and magnetic fields. In the final state, we assume the detached electron experiences only the laser and static fields; we ignore final state interaction of the electron with the residual hydrogen atom. For weak external fields, this is expected to be a good approximation for this predominantly single photon process. In this section, we first give the $S$-matrix transition amplitude for photodetachment of H$^{-}$. 
Then we present an exact quantum mechanical solution to the time-dependent Schr\"{o}dinger equation for the final state of the detached electron in both the crossed static electric and magnetic fields and the time-dependent laser pulse. We then use this result together with $S$-matrix theory to obtain detachment rates and cross sections. Atomic units are used throughout this paper unless otherwise stated. \subsection{$S$-matrix Transition Amplitude for Photodetachment of H$^{-}$} We adopt the Keldysh approximation for the final state, {\it i.e.}, we neglect the binding potential~\cite{Reiss80}. In this case, the $S$-matrix transition amplitude from the initial state $\psi_{i}$ to the final state $\psi_{f}$ is given by \begin{equation} S_{fi} = - i\int_{-\infty }^{\infty}dt^{\prime}\left\langle \psi_{f}(\mathbf{p},t^{\prime})| V_I(t^{\prime})|\psi_{i}(\mathbf{p},t^{\prime}) \right\rangle, \label{Stime0} \end{equation} where $V_I$ represents the laser-electron interaction and the bracket $\left\langle {}\right\rangle $ stands for integration over momentum space. For the zero-range potential, for which the bound state wave function has the form $e^{-\kappa r}/r$, the $S$-matrix element in Eq.~(\ref{Stime0}) can be shown to be gauge-invariant~\cite{Gao90,Niki66}. Such a bound state wave function can be used to represent the weakly bound electron of H$^{-}$. We use that of Ohmura and Ohmura~\cite{Ohmu60}, which in momentum space is given by \begin{equation} \psi _{i}(\mathbf{p},t)=\frac{C_{i}}{\sqrt{2\pi }}\frac{e^{-i\varepsilon _{i}t}}{% p^{2}/2-\varepsilon _{i}}, \label{initialwave} \end{equation}% where $C_{i}$ is a normalization constant and $\varepsilon _{i}$ is the initial state energy. Using the variational results of Ref.~\cite{Ohmu60} and effective range theory for a weakly bound $s$-electron~\cite{Beth50}, one obtains~\cite{Du88} $C_{i}=0.31552$ and $\varepsilon _{i}=-0.027751$ a.u. 
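As a quick numerical consistency check (not part of the derivation), one can verify that Eq.~(\ref{initialwave}) is, up to an overall constant, the momentum-space transform of the zero-range bound state $e^{-\kappa r}/r$ with $\kappa=\sqrt{-2\varepsilon_i}$. The grid parameters below are arbitrary choices for this sketch:

```python
import numpy as np

eps_i = -0.027751               # initial-state energy (a.u.), from the text
kappa = np.sqrt(-2 * eps_i)     # bound-state decay constant

def ft_radial(p, rmax=400.0, n=400_000):
    """3D Fourier transform of e^{-kappa r}/r at momentum p:
    int d^3r e^{-i p.r} e^{-kappa r}/r = (4 pi/p) int_0^inf sin(p r) e^{-kappa r} dr,
    evaluated with a midpoint rule."""
    dr = rmax / n
    r = (np.arange(n) + 0.5) * dr
    return 4 * np.pi / p * np.sum(np.sin(p * r) * np.exp(-kappa * r)) * dr

# Multiplying by (p^2/2 - eps_i) should give a p-independent constant (2 pi),
# i.e. the transform is proportional to 1/(p^2/2 - eps_i) as in Eq. (initialwave).
for p in (0.1, 0.5, 1.0):
    print(p, ft_radial(p) * (p**2 / 2 - eps_i))
```

The constant of proportionality obtained here is purely geometric; the physical normalization is carried by $C_i$ from the effective-range analysis.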
The gauge-invariant $S$-matrix transition amplitude for H$^-$ detachment is then given by (cf. Eq.~(27) of Ref.~\cite{Gao90}) \begin{equation} \left( S_{fi}\right) _{k_{x}k_{y}n_{z}}=i\int_{-\infty }^{\infty}dt^{\prime}\left\langle \psi _{f}(\mathbf{p},t^{\prime })|C_{i}/\sqrt{2\pi}\right\rangle e^{-i\varepsilon _{i}t^{\prime}}. \label{Stime} \end{equation} \subsection{The Final State Wave Function} \begin{figure} \centering \includegraphics[width=12cm]{figs/fig1}\\ \caption{Geometrical arrangement of fields in photodetachment of H$^-$ by a linearly polarized laser (with electric field $\bf{E_L}$) in the presence of crossed static electric ($\bf{E_S}$) and magnetic ($\bf{B}$) fields. Both the laser and the static electric fields point along the $z$-axis. As indicated, the drift motion of the detached electron is along the $y$-axis.} \label{figure1} \end{figure} In order to calculate the $S$-matrix transition amplitude in Eq.~(\ref{Stime}), we present in this section an analytical expression for the final state wave function $\psi _{f}$. As noted above, we neglect the binding potential after detachment. Therefore, $\psi _{f}$ is actually a Volkov-type wave function that describes a free electron moving in the combined field of the crossed static electric and magnetic fields and the time-dependent electric field associated with the short laser pulse. In Fig.~\ref{figure1} we illustrate the configuration of the external fields in which the detached electron moves: the uniform static magnetic field defines the $x$ axis and the static electric field defines the $z$ axis, i.e., \begin{eqnarray} &&\mathbf{B}=B\hat{\mathbf i}\\ &&\mathbf{E_{S}}=E_{S}\hat{\mathbf k}. 
\label{staticfields} \end{eqnarray}% We assume that each laser pulse has the following general form, \begin{equation} \mathbf{E_L}(t)=E_{0}e^{-\alpha ^{2}\left( t-\tau \right) ^{2}}\sin \left( \omega t+\beta \right) \hat{\mathbf k}, \label{laserpulse} \end{equation}% where $\omega $ is the laser frequency, $\tau $ is the time delay with respect to $t=0$, and $\beta $ is a (generally constant) phase. The duration of the laser pulse is defined to be the full width at half maximum (FWHM) of the laser intensity, and is given by \begin{equation} T_{p}=\sqrt{2\ln 2}/\alpha. \label{Tpulse} \end{equation}% We introduce the vector potentials for the magnetic field and the laser field respectively, as follows: \begin{eqnarray} &&\mathbf{A_B}= -zB\hat{\mathbf j}, \\ &&\mathbf{A_L}(t)= -c\int_{-\infty}^{t}\mathbf{E_L}(t^{\prime })dt^{\prime}, \label{vectpot} \end{eqnarray} where $c$ is the speed of light in vacuum. The final state wave function for the detached electron is obtained as the solution of the time-dependent Schr\"{o}dinger equation (TDSE) in momentum space, \begin{equation} i\frac{\partial}{\partial t}\psi _{f}^{(p)}(\mathbf{p},t)=H\psi _{f}^{(p)}(\mathbf{p},t), \label{tdsemom} \end{equation} in which the Hamiltonian $H$ is given by \begin{eqnarray} H(\mathbf{p},t) &= & \frac{1}{2}\left[ \mathbf{p}+\frac{1}{c}% \left( \mathbf{A}_{L}+\mathbf{A}_{B}\right) \right]^{2}+ \mathbf{\hat{r}} \cdot \mathbf{E}_S \\ \label{hamiltonianmom0} &=&-\frac{1}{2}\omega _{c}^{2}\frac{\partial ^{2}}{\partial p_{z}^{2}}% -i\omega _{c}\left( p_{y}-\frac{E_{S}}{\omega _{c}}\right) \frac{\partial}{% \partial p_{z}}+\frac{1}{2}p_{z}^{2}+\frac{1}{c}p_{z}A_{L}(t) \nonumber \\ &&+\frac{1}{2}p_{x}^{2}+\frac{1}{2}p_{y}^{2}+\frac{1}{2c^{2}}A_{L}^{2}(t), \label{hamiltonianmom} \end{eqnarray} where $\omega _{c}=B/c$ is the cyclotron frequency. It can be shown that Eq.~(\ref{tdsemom}) has an exact analytical solution. The details of the derivation are presented in Appendix~\ref{appendixA}. 
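As a quick sanity check on Eq.~(\ref{Tpulse}), the FWHM of the intensity envelope $e^{-2\alpha^2(t-\tau)^2}$ can be located numerically by bisection. The value of $\alpha$ below is an arbitrary illustrative choice:

```python
import math

alpha, tau = 0.05, 0.0            # illustrative pulse parameters

def intensity_envelope(t):
    # envelope of |E_L(t)|^2 for the pulse of Eq. (laserpulse), sin^2 factor dropped
    return math.exp(-2 * alpha**2 * (t - tau)**2)

def half_point(lo=0.0, hi=1e4, tol=1e-12):
    """Bisect for the time (> tau) where the envelope falls to half its peak."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if intensity_envelope(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return lo

T_p_numeric = 2 * half_point()                      # full width at half maximum
T_p_formula = math.sqrt(2 * math.log(2)) / alpha    # Eq. (Tpulse)
print(T_p_numeric, T_p_formula)
```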
The final expression of the solution is given by \begin{eqnarray} \psi _{f}^{(p)}(\mathbf{p},t)&=&\delta (p_{x}-k_{x})\delta (p_{y}-k_{y})\exp \left[ -i\varepsilon _{f}t-if(t)\right] \nonumber \\ &\times& \omega _{c}^{-1/4} g_{n_{z}}\left(\sqrt{2}\zeta _{p_{z}}\right) \exp \left[ -ib(k_{y},t)\sqrt{2}\zeta _{p_{z}}\right], \label{finalwvmom} \end{eqnarray} in which $g_{n_{z}}\left( x\right)$ is defined by \begin{equation} g_{n_{z}}\left( x \right)= \frac{1}{\sqrt{2^{n_{z}}n_{z}!\sqrt{\pi }}}e^{-x^{2}/2}H_{n_{z}}(x), \label{gnzetay} \end{equation} where $H_{n_{z}}(x)$ is the $n_z$th Hermite polynomial. In Eq.~(\ref{finalwvmom}) we have also defined \begin{eqnarray} && \varepsilon _{f} = \frac{1}{2}\left( k_{x}^{2}+k_{y}^{2}\right) +\varepsilon _{n_{z}}-\frac{1}{2}\omega _{c}\zeta _{k_{y}}^{2}, \nonumber \\ && \hspace{0.48cm}=\frac{1}{2}k_{x}^{2}+\frac{E_{S}}{\sqrt{\omega _{c}}}\zeta _{k_{y}}+\varepsilon _{n_{z}}+\frac{1}{2}\frac{% E_{S}^{2}}{\omega _{c}^{2}}, \label{epsf} \\ && f(t)= \frac{1}{\sqrt{\omega _{c}}}\zeta _{k_{y}}\xi (t)+\frac{1}{2c^{2}}% \int_{-\infty}^{t}A_{L}^{2}(t^{\prime})dt^{\prime}-\int_{-\infty }^{t}L(t^{\prime})dt^{\prime}, \label{ft} \\ && b(k_{y},t)= \zeta _{k_{y}}-\dot{\xi}(t)/\sqrt{\omega _{c}^{3}}, \label{bt} \end{eqnarray} where the arguments $\zeta _{p_{z}}$ and $\zeta _{k_{y}}$ are given by \begin{eqnarray} &&\zeta _{p_{z}}(t)= \left[ p_{z}-\xi (t)\right] /\sqrt{2\omega _{c}}, \label{zetapz} \\ &&\zeta _{k_{y}}= \frac{1}{\sqrt{\omega _{c}}}\left( k_{y}-\frac{E_{S}}{% \omega _{c}}\right), \label{zetaky1} \end{eqnarray} and the energy of the $n_z$th Landau level $\varepsilon _{n_{z}}$ in Eq.~(\ref{epsf}) is given by \begin{equation} \varepsilon _{n_{z}} = \left( n_{z}+\frac{1}{2}\right) \omega _{c}. \label{epsnz} \end{equation} In Eq.~(\ref{ft}), $L(t)$ and $\xi(t)$ are functions related to the vector potential of the short laser pulse. 
$L(t)$ is defined as \begin{equation} L(t) = \frac{1}{2\omega _{c}^{2}}\dot{\xi}^{2}(t)-\frac{1}{2}\xi ^{2}(t)-% \frac{1}{c}A_{L}(t)\xi (t), \label{tdsepsiz3b} \end{equation} while $\xi (t)$ satisfies the following differential equation: \begin{equation} \ddot{\xi}(t)+\omega _{c}^{2}\xi (t)=-\frac{\omega _{c}^{2}}{c}A_{L}(t), \label{odeksi} \end{equation} where $\ddot{\xi}(t)$ denotes the second derivative of $\xi (t)$. We present the exact solution for $\xi(t)$ in Appendix \ref{appendixB}. However, in the long pulse case ($\alpha/\omega \ll 1$), a simplified expression for $\xi(t)$ can be obtained as \begin{equation} \xi (t) \simeq a\left( \omega \right) e^{-\alpha ^{2}(t-\tau )^{2}}\cos \left( \omega t+\beta \right), \label{longksi} \end{equation} where we have defined \begin{equation} a\left( \omega \right) =\frac{E_{0}\,\omega _{c}^{2}}{\omega \left( \omega ^{2}-\omega _{c}^{2}\right)}. \label{constA} \end{equation} In this case, the first derivative, $\dot{\xi}(t)$, is thus given by \begin{equation} \dot{\xi}(t)\simeq -\omega a\left( \omega \right) e^{-\alpha ^{2}(t-\tau )^{2}}\sin \left( \omega t+\beta \right). \label{longksidot} \end{equation} In order to investigate wave packet dynamics, it is useful to derive an expression for the final state wave function in which its $z$ component is given in coordinate space. This is achieved by taking the Fourier transform of Eq.~(\ref{finalwvmom}) with respect to $p_{z}$, i.e., \begin{equation} \psi _{f}^{(z)}(p_{x},p_{y},z,t) \equiv \psi _{k_{x},k_{y},n_z}^{(z)}(p_{x},p_{y},z,t) =\frac{1}{\sqrt{2\pi }}\int_{-\infty}^{\infty}dp_{z}\psi _{f}^{(p)}(\mathbf{p},t)e^{izp_{z}}. \label{finalwvrealz} \end{equation} Changing the integration variable to $\zeta _{p_{z}}$ (cf. 
Eq.~(\ref{zetapz})), we obtain \begin{eqnarray} \psi _{f}^{(z)}(p_{x},p_{y},z,t) &=&\delta (p_{x}-k_{x})\delta (p_{y}-k_{y})i^{n_{z}}\omega _{c}^{1/4}g_{n_{z}}\left[ \sqrt{\omega _{c}}% z-b(k_{y},t)\right] \nonumber \\ &&\times \exp \left[ iz\xi (t)-i\varepsilon _{f}t-if(t)\right], \label{finalwvrealz1} \end{eqnarray} where we have made use of Eqs.~7.388(2) and~7.388(4) in Ref.~\cite{Gradsh}. \subsection{$S$-matrix Amplitude for Photodetachment of H$^{-}$} In order to examine the motion of the detached electron wave packet in crossed $\mathbf{E}$ and $\mathbf{B}$ fields, we define in analogy to Eq.~(23) of Ref.~\cite{Wang95} a time-dependent transition amplitude $R_{fi}(t)$ from the initial state to the final state $(k_{x},k_{y},n_{z})$: \begin{eqnarray} \left( R_{fi}(t)\right)_{k_{x}k_{y}n_{z}} &=&i\frac{C_{i}}{\sqrt{2\pi}}% \sqrt{2\omega _{c}}\int_{-\infty}^{t}dt^{\prime}e^{i\varepsilon _{fi}t^{\prime}+if(t^{\prime})} \nonumber \\ &&\times \int_{-\infty}^{\infty} \omega _{c}^{-1/4} g_{n_{z}}\left(\sqrt{2}\zeta _{p_{z}}\right) \exp \left[ i b(k_{y},t^{\prime}) \sqrt{2}\zeta _{p_{z}}\right] d\zeta _{p_{z}}, \end{eqnarray} where $\varepsilon _{fi}= \varepsilon _{f}-\varepsilon _{i}$, $b(k_{y},t^{\prime})$ is given by Eq.~(\ref{bt}), and where we have used Eq.~(\ref{finalwvmom}) for the final state wave function in Eq.~(\ref{Stime}). Using Eqs.~7.388(2) and~7.388(4) in Ref.~\cite{Gradsh} to carry out the integration over $\zeta _{p_{z}}$, we obtain \begin{equation} \left( R_{fi}(t)\right) _{k_{x}k_{y}n_{z}} = i^{n_{z}+1}C_{i}\int_{-\infty}^{t}dt^{\prime}e^{i\varepsilon _{fi}t^{\prime}+if(t^{\prime})}\omega _{c}^{1/4} g_{n_{z}}\left[ b(k_{y},t^{\prime})\right]. \label{momSmatr} \end{equation} Note that in the limit of $t\rightarrow \infty$, $R_{fi}(t)$ reduces to the $S$-matrix transition amplitude (\ref{Stime}), {\it i.e.}: \begin{equation} \left( S_{fi}\right) _{k_{x}k_{y}n_{z}} = \lim _{t\rightarrow\infty} \left( R_{fi}(t)\right) _{k_{x}k_{y}n_{z}}. 
\end{equation} In principle, with this analytical $S$-matrix amplitude one can readily calculate the total and multiphoton transition rates, as done for H$^-$ detachment in a static electric field in Ref.~\cite{Gao90}. However, in the present paper we restrict our consideration to the one-photon detachment process (note that the laser pulses considered in this work still contain many optical cycles). Our analytical results also facilitate comparison with previous results. Consequently, we evaluate Eq.~(\ref{momSmatr}) only to first order in the laser electric field strength $E_{0}$, i.e., we employ the following approximations: \begin{eqnarray*} e^{if(t^{\prime})} &\simeq &1+if(t^{\prime}) \\ &\simeq &1+i\frac{1}{\sqrt{\omega _{c}}}\zeta _{k_{y}}\xi (t^{\prime}), \end{eqnarray*} \[ g_{n_{z}}\left[ b(k_{y},t^{\prime})\right] \simeq g_{n_{z}} (\zeta _{k_{y}})+g_{n_{z}}^{\prime}(\zeta _{k_{y}})% \left[ -\frac{\dot{\xi}(t^{\prime})}{\sqrt{\omega _{c}^{3}}}\right] , \] where $g_{n_{z}}^{\prime}(\zeta _{k_{y}})$ stands for the derivative of $g_{n_{z}}(\zeta _{k_{y}})$. Thus, to first order in $E_{0}$, the time-dependent transition amplitude is given by \begin{eqnarray} \left( R_{fi}^{(1)}(t)\right) _{k_{x}k_{y}n_{z}} &=&i^{n_{z}+1}C_{i}\omega _{c}^{1/4}g_{n_{z}} (\zeta _{k_{y}})\int_{-\infty}^{t}dt^{\prime}e^{i\varepsilon _{fi}t^{\prime}} \nonumber \\ &&-i^{n_{z}+1}C_{i}\omega _{c}^{1/4}g_{n_{z}}^{\prime}(\zeta _{k_{y}})\int_{-\infty}^{t}dt^{\prime}e^{i\varepsilon _{fi}t^{\prime }}\frac{\dot{\xi}(t^{\prime})}{% \sqrt{\omega _{c}^{3}}} \nonumber \\ &&-i^{n_{z}}C_{i}\zeta _{k_{y}}\omega _{c}^{1/4}g_{n_{z}}(\zeta _{k_{y}})\int_{-\infty}^{t}dt^{\prime}e^{i\varepsilon _{fi}t^{\prime}}\frac{% \xi (t^{\prime})}{\sqrt{\omega _{c}}}. 
\label{Smatrixfinal2} \end{eqnarray} Note that, as usual, the first term in Eq.~(\ref{Smatrixfinal2}) does not contribute to the photodetachment process (since for $t\rightarrow \infty$, the only contributions are for $\varepsilon_{fi}\rightarrow 0$); hence this term is discarded in the following discussion. In the long pulse approximation, with the help of Eqs.~(\ref{longksi}) and~(\ref{longksidot}), one can show that \begin{eqnarray} \left( R_{fi}^{(1)}(t)\right) _{k_{x}k_{y}n_{z}} &=&i^{n_{z}+1}C_{i} g_{n_{z}}^{\prime}(\zeta _{k_{y}})\frac{\omega a(\omega)}{\omega _{c}^{5/4}}% \int_{-\infty}^{t}dt^{\prime}e^{-\alpha ^{2}(t^{\prime}-\tau )^{2}+i\varepsilon _{fi}t^{\prime}}\sin \left( \omega t^{\prime }+\beta \right) \nonumber \\ &&-i^{n_{z}}C_{i}\zeta _{k_{y}}g_{n_{z}}(\zeta _{k_{y}})\frac{a(\omega)}{\omega _{c}^{1/4}}\int_{-\infty}^{t}dt^{\prime}e^{-\alpha ^{2}(t^{\prime}-\tau )^{2}+i\varepsilon _{fi}t^{\prime}}\cos \left( \omega t^{\prime}+\beta \right), \end{eqnarray} which reduces to \begin{eqnarray} \left( R_{fi}^{(1)}(t)\right) _{k_{x}k_{y}n_{z}} &=&-i^{n_{z}}C_{i}\frac{a(\omega)}{2% \omega _{c}^{5/4}}\left[ \omega g _{n_{z}}^{\prime}(\zeta _{k_{y}})+\omega _{c}\zeta _{k_{y}}g _{n_{z}}(\zeta _{k_{y}})\right] \nonumber \\ &&\times \int_{-\infty}^{t}dt^{\prime}e^{-\alpha ^{2}(t^{\prime }-\tau )^{2}+i\varepsilon _{fi}t^{\prime}-i\omega t^{\prime}-i\beta } \label{fstSm} \end{eqnarray} if we neglect the emission process (i.e., if we discard terms involving $e^{+i\omega t^{\prime}}$). 
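The long-pulse form of $\xi(t)$ in Eqs.~(\ref{longksi})--(\ref{constA}), on which the above derivation relies, can be spot-checked by integrating the exact oscillator equation~(\ref{odeksi}) numerically. The sketch below uses a classical RK4 integrator and arbitrary off-resonant parameters with $\alpha\ll\omega$; in these scaled units $c$ drops out because $-A_L/c=\int_{-\infty}^{t}E_L\,dt'$:

```python
import numpy as np

# Illustrative parameters: long, off-resonant pulse (alpha << omega, omega != omega_c)
E0, omega, omega_c, alpha, tau, beta = 1.0, 1.0, 0.3, 0.02, 0.0, 0.0

def E_L(t):                        # laser pulse of Eq. (laserpulse)
    return E0 * np.exp(-alpha**2 * (t - tau)**2) * np.sin(omega * t + beta)

def rhs(t, y):
    # y = (xi, xi', I) with I(t) = int_{-inf}^t E_L dt', so -A_L/c = I and
    # Eq. (odeksi) becomes xi'' = -omega_c^2 xi + omega_c^2 I.
    xi, v, I = y
    return np.array([v, -omega_c**2 * xi + omega_c**2 * I, E_L(t)])

t, dt, y = -200.0, 0.01, np.zeros(3)
ts, xis = [], []
while t < 200.0:                   # classical 4th-order Runge-Kutta
    k1 = rhs(t, y); k2 = rhs(t + dt/2, y + dt/2 * k1)
    k3 = rhs(t + dt/2, y + dt/2 * k2); k4 = rhs(t + dt, y + dt * k3)
    y = y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += dt; ts.append(t); xis.append(y[0])

ts, xis = np.array(ts), np.array(xis)
a = E0 * omega_c**2 / (omega * (omega**2 - omega_c**2))    # Eq. (constA)
peak = np.max(np.abs(xis[np.abs(ts) < 20]))                # |xi| near the pulse center
print(peak, abs(a))   # the two should agree to a few percent
```

The residual discrepancy is of the order of the neglected envelope-derivative terms, consistent with the stated validity condition $\alpha/\omega\ll 1$.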
The integration over $t^{\prime}$ in Eq.~(\ref{fstSm}) can be carried out analytically: \begin{eqnarray*} &&\int_{-\infty}^{t}dt^{\prime}e^{-\alpha ^{2}(t^{\prime}-\tau )^{2}+i\varepsilon _{fi}t^{\prime}-i\omega t^{\prime}-i\beta} \\ &=&\frac{\sqrt{\pi}}{2\alpha}\left\{ \mathop{\rm erf}% \left[ \alpha \left( t-\tau \right) -i\frac{\varepsilon _{fi}-\omega }{2\alpha}% \right] +1\right\} \\ &&\times \exp \left[ -\frac{\left( \varepsilon _{fi}-\omega \right) ^{2}}{% 4\alpha ^{2}}+i\left( \varepsilon _{fi}-\omega \right) \tau -i\beta \right]. \end{eqnarray*} Thus, for the single laser pulse in Eq.~(\ref{laserpulse}), the first-order time-dependent transition amplitude in the long pulse approximation is given by% \begin{eqnarray} \left( R_{fi}^{(1)}(t)\right) _{k_{x}k_{y}n_{z}}^{\text{sgl}} &=&-i^{n_{z}}C_{i}\frac{\pi a(\omega)% }{2\omega _{c}^{5/4}}\left[ \omega g_{n_{z}}^{\prime}(\zeta _{k_{y}})+\omega _{c}\zeta _{k_{y}} g_{n_{z}}(\zeta _{k_{y}})\right] \nonumber \\ &&\times D_{\text{sgl}}(\varepsilon _{fi},t) \delta_{\alpha}(\varepsilon _{fi}-\omega), \label{Smatrixfinaltime1} \end{eqnarray} where we have defined \begin{equation} D_{\text{sgl}}(\varepsilon _{fi},t)=e^{i\left( \varepsilon _{fi}-\omega \right) \tau -i\beta}\left\{ 1+% \mathop{\rm erf}% \left[ \alpha \left( t-\tau \right) -i\frac{\varepsilon _{fi}-\omega }{2\alpha}% \right] \right\}, \label{Dsglt} \end{equation} and have also introduced the quasi-$\delta$-function~\cite{Wang95}, \begin{equation} \delta_{\alpha}(\varepsilon _{fi}-\omega) = \left( 2 \pi^{1/2} \alpha\right)^{-1} \exp \left[ -\frac{\left( \varepsilon _{fi}-\omega \right) ^{2}}{4\alpha ^{2}}\right]. \end{equation} In the limit that our finite laser pulse becomes a monochromatic plane wave, the quasi-$\delta$-function becomes the usual Dirac $\delta$-function, \begin{equation} \delta(\varepsilon _{fi}-\omega) = \lim_{\alpha\rightarrow 0}\delta_{\alpha}(\varepsilon _{fi}-\omega). 
\label{quasidelta1} \end{equation} Taking the limit $t\rightarrow +\infty$, we obtain from Eq.~(\ref{Smatrixfinaltime1}) the following analytical expression for the $S$-matrix amplitude for the case of a single, finite laser pulse: \begin{eqnarray} \left( S_{fi}^{(1)}\right) _{k_{x}k_{y}n_{z}}^{\text{sgl}} &=&-i^{n_{z}}C_{i}\frac{\pi a(\omega)}{\omega _{c}^{5/4}}\left[ \omega g _{n_{z}}^{\prime}(\zeta _{k_{y}})+\omega _{c}\zeta _{k_{y}}g _{n_{z}}(\zeta _{k_{y}})\right] \nonumber \\ &&\times e^{i\left( \varepsilon _{fi}-\omega \right) \tau -i\beta }\delta_{\alpha}(\varepsilon _{fi}-\omega), \label{Smatrixfinalinf} \end{eqnarray} where we have used the fact that $% \mathop{\rm erf}% (\infty +iy)=1$ for any finite real number $y$. \subsection{Detached Electron Wave Packet} We may obtain the detached electron wave packet probability amplitude as a sum over all final states of the product of the time-dependent transition amplitude for transition to the final state $(k_x, k_y,n_z)$ at a particular time $t$ [Eq.~(\ref{Smatrixfinaltime1})] and the wave function [Eq.~(\ref{finalwvrealz1})] for that state (cf. Sec.~II.D of Ref.~\cite{Wang95}): \begin{equation} \psi_{\text{WP}}(p_{x},p_{y},z,t)=\sum_{n_{z}=0}^{\infty }\int_{-\infty}^{\infty}dk_{x}\int_{-\infty}^{\infty}dk_{y}\psi _{k_{x},k_{y},n_{z}}^{(z)}(p_{x},p_{y},z,t)\left( R_{fi}^{(1)}(t)\right) _{k_{x}k_{y}n_{z}}. 
\end{equation}% By using Eqs.~(\ref{finalwvrealz1}) and~(\ref{Smatrixfinaltime1}), the wave packet for the single laser pulse~(\ref{laserpulse}) is given by \begin{eqnarray} \psi^{\text{sgl}}_{\text{WP}}(p_{x},p_{y},z,t) &=&-C_{i}\frac{\pi a(\omega)}{2 \omega _{c}}% \sum_{n_{z}=0}^{\infty}\left( -1\right) ^{n_{z}}\exp \left[ iz\xi -i\varepsilon _{f}^{\prime}t-i\frac{1}{\sqrt{\omega _{c}}}\zeta _{p_{y}}\xi % \right] \nonumber \\ &&\times g_{n_{z}}\left[ \sqrt{\omega _{c}}z-b(p_{y},t)% \right] \left[ \omega g_{n_{z}}^{\prime}(\zeta _{p_{y}})+\omega _{c}\zeta _{p_{y}}g_{n_{z}}(\zeta _{p_{y}})\right] \nonumber \\ &&\times D_{\text{sgl}}(\varepsilon _{fi}^{\prime },t)\delta_{\alpha}(\varepsilon _{fi}^{\prime}-\omega), \label{wavepacket1} \end{eqnarray}% where $\zeta _{p_{y}}$ and $b(p_{y},t)$ are defined by Eqs.~(\ref{zetaky1}) and~(\ref{bt}), respectively, with $k_{y}$ replaced by $p_{y}$; $\varepsilon _{fi}^{\prime} =\varepsilon_f^{\prime} - \varepsilon_i$, and $\varepsilon_f^{\prime}$ is given by Eq.~(\ref{epsf}) with $\zeta _{k_{y}}$ replaced by $\zeta _{p_{y}}$. \subsection{$S$-matrix and Wave Packet Amplitudes for the Double Pulse Case} We consider here the case of two laser pulses of the form of Eq.~(\ref{laserpulse}), with the second one delayed with respect to the first by a time interval $\tau$ and having a relative phase $\beta$, i.e., \begin{equation} \mathbf{E}_{\mathbf{L}}^{\mathbf{{dbl}}}(t)= E_{0}\left[ e^{-\alpha ^{2}t^{2}}\sin \left( \omega t\right) + e^{-\alpha ^{2}\left( t-\tau \right) ^{2}}\sin \left( \omega t+\beta \right) \right] \hat{\mathbf k}. 
\label{laserpulsedbl} \end{equation} To first order in $E_0$, it is easy to show that for the double laser pulse case the time-dependent transition amplitude is given by% \begin{eqnarray} \left( R_{fi}^{(1)}(t)\right) _{k_{x}k_{y}n_{z}}^{\text{dbl}} &=&-i^{n_{z}}C_{i}\frac{\pi a(\omega)}{2 \omega _{c}^{5/4}}\left[ \omega g _{n_{z}}^{\prime}(\zeta _{k_{y}})+\omega _{c}\zeta _{k_{y}}g _{n_{z}} (\zeta _{k_{y}})\right] \nonumber \\ &&\times D_{\text{dbl}}(\varepsilon _{fi},t) \delta_{\alpha}\left(\varepsilon _{fi}-\omega \right), \label{crsdbl} \end{eqnarray} where the function $D_{\text{dbl}}(\varepsilon _{fi},t)$ is given by \begin{equation} D_{\text{dbl}}(\varepsilon _{fi},t)=1+% \mathop{\rm erf}% \left[ \alpha t-i\frac{\varepsilon _{fi}-\omega}{2\alpha}\right] +e^{i\left( \varepsilon _{fi}-\omega \right) \tau -i\beta}\left\{1+% \mathop{\rm erf}% \left[ \alpha \left( t-\tau \right) -i\frac{\varepsilon _{fi}-\omega }{2\alpha}% \right] \right\}. \label{Ddblt} \end{equation}% When $t\rightarrow \infty $, the above formula reduces to \begin{eqnarray} \left( S_{fi}^{(1)}\right) _{k_{x}k_{y}n_{z}}^{\text{dbl}} &=&\lim_{t\rightarrow \infty}\left( R_{fi}^{(1)}(t)\right) _{k_{x}k_{y}n_{z}}^{\text{dbl}} \nonumber \\ &=&-i^{n_{z}}C_{i}\frac{\pi a(\omega)}{\omega _{c}^{5/4}}\left[ \omega g _{n_{z}}^{\prime}(\zeta _{k_{y}})+\omega _{c}\zeta _{k_{y}}g _{n_{z}} (\zeta _{k_{y}})\right] \nonumber \\ &&\times \left[ 1+e^{i\left( \varepsilon _{fi}-\omega \right) \tau -i\beta}% \right] \delta_{\alpha}\left(\varepsilon _{fi}-\omega \right). 
\label{crsdbl0} \end{eqnarray} The wave packet amplitude for the double laser pulse case is correspondingly given by \begin{eqnarray} \psi _{\text{WP}}^{\text{dbl}}(p_{x},p_{y},z,t) &=&-C_{i}\frac{\pi a(\omega) % }{2 \omega _{c}}\sum_{n_{z}=0}^{\infty}\left( -1\right) ^{n_{z}}\exp \left[ iz\xi -i\varepsilon _{f}^{\prime}t-i\frac{1}{\sqrt{\omega _{c}}}\zeta _{p_{y}}\xi \right] \nonumber \\ &&\times g _{n_{z}} \left[ \sqrt{\omega _{c}}z-b(p_{y},t)% \right] \left[ \omega g _{n_{z}}^{\prime}(\zeta _{p_{y}})+\omega _{c}\zeta _{p_{y}}g _{n_{z}} (\zeta _{p_{y}})\right] \nonumber \\ &&\times D_{\text{dbl}}(\varepsilon _{fi}^{\prime },t)\delta_{\alpha}\left(\varepsilon _{fi}^{\prime}-\omega \right). \label{wvpkdble} \end{eqnarray} \subsection{Photodetachment Cross Section} The transition probability to a particular final state $\left( k_{x},k_{y},n_{z}\right) $ is given by \begin{equation} P_{k_{x}k_{y}n_{z}}=\left| \left( S_{fi}\right) _{k_{x}k_{y}n_{z}}\right| ^{2}, \label{wrate} \end{equation} and the total photodetachment probability is calculated by integrating over all final states, \begin{equation} P=\sum\limits_{n_{z}=0}^{\infty}\int_{-\infty}^{\infty }dk_{x}\int_{-\infty}^{\infty}dk_{y}P_{k_{x}k_{y}n_{z}}. \label{transrate} \end{equation} For an infinitely long, monochromatic beam, the probability $P$ is proportional to time $t$, so the total transition probability itself is not a meaningful quantity. Instead, one normally considers the total transition rate, $W$, which is given by~\cite{Reiss80} \begin{equation} W = \lim_{t\rightarrow\infty} \frac{1}{t} P. \end{equation} The total photodetachment cross section is obtained by dividing the total photodetachment rate $W$ by the photon flux $F$ (the number of photons per unit area per unit time): \begin{equation} \sigma_{pw} =\frac{W}{F}, \label{crsdef0} \end{equation}% where `{\it pw}' stands for the monochromatic plane wave case. 
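Both quasi-$\delta$-functions used in this formulation, $\delta_\alpha$ (cf. Eq.~(\ref{quasidelta1})) and $\overline{\delta}_\alpha$ of Eq.~(\ref{quasidelta2}), are unit-normalized Gaussians, and $|\delta_\alpha|^2$ is proportional to $\overline{\delta}_\alpha$, which is how the second quasi-$\delta$ enters the squared $S$-matrix element. A numerical check, with an arbitrary value of $\alpha$ and an arbitrary grid:

```python
import numpy as np

alpha = 0.05                       # illustrative pulse-bandwidth parameter
x = np.linspace(-5.0, 5.0, 200_001)
dx = x[1] - x[0]

delta_a = np.exp(-x**2 / (4 * alpha**2)) / (2 * np.sqrt(np.pi) * alpha)   # delta_alpha
delta_b = np.exp(-x**2 / (2 * alpha**2)) / (alpha * np.sqrt(2 * np.pi))   # bar-delta_alpha

print(np.sum(delta_a) * dx)        # -> 1: both quasi-deltas have unit area
print(np.sum(delta_b) * dx)        # -> 1
# |delta_alpha|^2 = bar-delta_alpha / (2 alpha sqrt(2 pi)): squaring the
# amplitude converts the first quasi-delta into the second.
print(np.max(np.abs(delta_a**2 - delta_b / (2 * alpha * np.sqrt(2 * np.pi)))))
```

As $\alpha\rightarrow 0$ both Gaussians narrow while keeping unit area, recovering the Dirac $\delta$-function limit quoted in the text.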
For the short laser pulse case, it does not make sense to talk about a transition rate since the transition probability is not simply proportional to time $t$. In addition, the photon flux $F$ is not well defined. Nevertheless, it is possible to renormalize the total probability for detachment by a short laser pulse in such a way that the renormalized probability reduces, in the limit of an infinitely long pulse, to the usual formula for the photodetachment cross section. Since the renormalized probability will have the dimensions of area, we denote it as an {\it effective photodetachment cross section}, $\sigma$. To derive this effective cross section, one uses the time duration of the laser pulse as the unit of time. One calculates the total photodetachment probability $P$ during the laser pulse duration and the total number of photons per unit area (i.e., the photon density), $\Sigma$, during the laser pulse duration. Then an effective photodetachment cross section, $\sigma$, may be defined as \begin{equation} \sigma =\frac{P}{\Sigma}. \label{crsdef} \end{equation} Clearly $\sigma$ defined in this way has the dimensions of a cross section. In the rest of this paper, $\sigma$ should be understood to be this effective photodetachment cross section, i.e., calculated according to Eq.~(\ref{crsdef}). We shall show below that this $\sigma$ for the short laser pulse case reduces in the limit $\alpha\rightarrow0$ (cf. Eq.~(\ref{laserpulse})) to the usual photodetachment cross section for a monochromatic plane wave. It has been shown by Wang and Starace~\cite{Wang95} that for a single Gaussian pulse defined by Eq.~(\ref{laserpulse}) with $\tau =\beta=0$, the photon density $\Sigma$ is given by the following formula: \begin{equation} \Sigma_{\text{sgl}}=\frac{cE_{0}^{2}}{8\pi \omega} \frac{\sqrt{2\pi}}{2\alpha}. \label{flux0} \end{equation} For the double pulse case (cf. 
Eq.~(\ref{laserpulsedbl})), $\Sigma$ is correspondingly given by \begin{equation} \Sigma_{\text{dbl}}=\frac{cE_{0}^{2}}{8\pi \omega}\frac{\sqrt{2\pi}}{\alpha}\left[ 1+\cos \beta \exp (-\alpha ^{2}\tau ^{2}/2)\right]. \label{flux2} \end{equation} Taking $\beta $ and $\tau $ to be zero in Eq.~(\ref% {Smatrixfinalinf}), and using Eqs.~(\ref{constA}) and (\ref{wrate})-(\ref{flux0}), we have for the photodetachment cross section of H$^-$ by a single pulse of the form of Eq.~(\ref{laserpulse}): \begin{eqnarray} \sigma ^{(1)} &=&\frac{4\pi ^{2}C_{i}^{2}\omega _{c}^{2}}{c\omega (\omega ^{2}-\omega _{c}^{2})^{2}}\sum\limits_{n_{z}=0}^{\infty }\int_{-\infty}^{\infty}d\zeta _{k_{y}}\left[ \zeta _{k_y}\omega _{c}g_{n_{z}}\left( \zeta _{k_{y}}\right) +\omega g_{n_{z}}^{\prime }\left( \zeta _{k_{y}}\right) \right] ^{2} \nonumber \\ &&\times \int_{-\infty}^{\infty}dk_{x} \overline{\delta}_{\alpha}\left(\varepsilon _{fi}-\omega \right), \label{crs1st} \end{eqnarray} where we have employed a second quasi-$\delta$ function~\cite{Wang95}, \begin{equation} \overline{\delta}_{\alpha}\left(\varepsilon _{fi}-\omega \right) = \frac{1}{\alpha \sqrt{2\pi}}% \exp \left[ -\frac{\left( \varepsilon _{fi}-\omega \right) ^{2}}{2\alpha ^{2}}% \right], \label{quasidelta2} \end{equation} which reduces to the usual Dirac $\delta$-function in the limit of a monochromatic plane wave, i.e., \begin{equation} \delta\left(\varepsilon _{fi}-\omega \right) = \lim_{\alpha\rightarrow 0} \overline{\delta}_{\alpha}\left(\varepsilon _{fi}-\omega \right). \end{equation} Note that \begin{equation} \varepsilon _{fi}-\omega =\frac{1}{2}\left[ k_{x}^{2}+Q\left( \zeta _{k_{y}}\right) \right], \label{finaleps} \end{equation} in which we have defined (cf. 
Eq.~\ref{epsf}) \begin{equation} Q\left( \zeta _{k_{y}}\right) =\frac{2E_{S}}{\sqrt{\omega _{c}}}\left( \zeta _{k_{y}}+\zeta _{\min}\right), \label{Kzetaky} \end{equation} where \begin{equation} \zeta _{\min}=\frac{\sqrt{\omega _{c}}}{E_{S}}\left[ (n_{z}+\frac{1}{2}% )\omega _{c}+\frac{E_{S}^{2}}{2\omega _{c}^{2}}-\varepsilon _{i}-\omega \right]. \label{zetamin} \end{equation} The integration over $k_{x}$ in Eq.~(\ref{crs1st}) has an analytical result when $% Q\left( \zeta _{k_{y}}\right) \geq 0$. The result is \begin{eqnarray} &&\int_{-\infty}^{\infty}dk_{x}\frac{1}{\sqrt{2\alpha ^{2}\pi}}\exp \left[ -\frac{\left( \varepsilon _{fi}-\omega \right) ^{2}}{2\alpha ^{2}}\right] \nonumber \\ &=&\frac{1}{\sqrt{2\alpha ^{2}\pi}}\int_{-\infty}^{\infty}dk_{x}\exp % \left[ -\frac{\left( k_{x}^{2}+Q\left( \zeta _{k_{y}}\right) \right) ^{2}}{% 8\alpha ^{2}}\right] \label{intkx} \\ &=&\frac{1}{\alpha \sqrt{2\pi}}\sqrt{\frac{E_{S}}{\sqrt{\omega _{c}}}% \left( \zeta _{k_y}+\zeta _{\min}\right)}\exp \left[ -\frac{E_{S}^{2}\left( \zeta _{k_y}+\zeta _{\min}\right) ^{2}}{4\omega _{c}\alpha ^{2}}\right] K_{% \frac{1}{4}}\left[ \frac{E_{S}^{2}\left( \zeta_{k_y}+\zeta _{\min }\right) ^{2}}{4\omega _{c}\alpha ^{2}}\right] \nonumber \end{eqnarray} where we have used the following formula (cf.~Eq.~(3.323) on p.307 of Ref.~\cite{Gradsh}): \[ \int_{0}^{\infty}dx\exp \left[ -\beta ^{2}x^{4}-2\gamma ^{2}x^{2}\right] =2^{-3/2}\frac{\gamma}{\beta}e^{\gamma ^{4}/2\beta ^{2}}K_{\frac{1}{4}% }\left( \gamma ^{4}/2\beta ^{2}\right), \]% which holds for $\left| \arg \beta \right| <\frac{\pi}{4}$ and $\left| \arg \gamma \right| <\frac{\pi}{4}$, and where $K_{\nu}(z)$ is a modified Bessel function (cf.~p.375 of Ref.~\cite{Abra65}). When $Q\left( \zeta _{k_{y}}\right) <0$, the integration in Eq.~(\ref{intkx}) must be done numerically. 
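As a numerical sanity check on Eq.~(\ref{intkx}) (not part of the derivation), one can compare a direct quadrature of the $k_x$ integral against the Bessel-function closed form for $Q>0$, and, for $Q<0$, against the plane-wave value $2/\sqrt{-Q}$ obtained from the two roots of the energy-conservation condition as $\alpha\rightarrow0$. The sketch below is ours, written only in terms of the shorthand $Q$ and the pulse-width parameter $\alpha$; the sample values of $Q$ and $\alpha$ are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def quasi_delta_integral(Q, alpha, cut=5.0):
    """Direct quadrature of int dk (alpha*sqrt(2*pi))^-1 exp[-(k^2+Q)^2/(8 alpha^2)]."""
    f = lambda k: np.exp(-(k * k + Q) ** 2 / (8.0 * alpha**2)) / (alpha * np.sqrt(2.0 * np.pi))
    # for Q < 0 the integrand is sharply peaked at k = +-sqrt(-Q); tell quad where
    pts = [-np.sqrt(-Q), np.sqrt(-Q)] if Q < 0 else None
    val, _ = quad(f, -cut, cut, points=pts, limit=500)
    return val

def bessel_closed_form(Q, alpha):
    """Closed form of Eq. (intkx) for Q >= 0 (Gradshteyn-Ryzhik 3.323):
    (alpha*sqrt(2*pi))^-1 * sqrt(Q/2) * exp(-z) * K_{1/4}(z), with z = Q^2/(16 alpha^2)."""
    z = Q * Q / (16.0 * alpha**2)
    return np.sqrt(Q / 2.0) * np.exp(-z) * kv(0.25, z) / (alpha * np.sqrt(2.0 * np.pi))
```

For $Q<0$ and small $\alpha$ the quadrature should approach $2/\sqrt{-Q}$, the strict energy-conservation (Dirac $\delta$-function) result of the plane wave limit.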
\subsubsection{Plane Wave Limit of the Cross Section} In the plane wave limit, $\alpha \rightarrow 0$, the integration over $k_{x}$ (making use of Eq.~(\ref{quasidelta2})) becomes \begin{equation} \int_{-\infty}^{\infty}dk_{x}\delta \left( \varepsilon _{fi}-\omega \right) =\int_{-\infty}^{\infty}dk_{x}\delta \left( \frac{1}{2}k_{x}^{2}+\frac{1% }{2}Q\left( \zeta _{k_{y}}\right) \right). \end{equation}% This integral is non-zero only when $\varepsilon _{fi}-\omega =\frac{1}{2}k_{x}^{2}+% \frac{1}{2}Q\left( \zeta _{k_{y}}\right) =0$, i.e., when we have strict energy conservation. For non-zero real $k_{x}$, we should thus require \[ Q\left( \zeta _{k_{y}}\right) =\frac{2E_{S}}{\sqrt{\omega _{c}}}\left( \zeta _{k_y}+\zeta _{\min}\right) < 0 \]% or \begin{equation} \tilde{\zeta}_{k_{y}} \equiv - \zeta _{k_{y}} > \zeta _{\min}. \label{zetalimit} \end{equation} Thus we have that \begin{eqnarray*} &&\int_{-\infty}^{\infty}dk_{x}\delta \left( \frac{1}{2}k_{x}^{2}+\frac{1}{% 2}Q\left( \zeta _{k_{y}}\right) \right) \\ &=&\frac{1}{|Q|^{1/2}} \int_{-\infty}^{\infty}dk_{x} \left[ \delta \left( k_{x} + |Q|^{1/2} \right) + \delta \left( k_{x} - |Q|^{1/2}\right) \right] \\ &=&\frac{2\omega _{c}^{1/4}}{\sqrt{2E_{S}}}\frac{1}{\sqrt{-\zeta _{k_{y}}-\zeta _{\min}}}. \end{eqnarray*} In the plane wave limit, we then have (converting $\zeta _{k_{y}}$ to $\tilde{\zeta}_{k_{y}}$) \begin{equation} \sigma _{\alpha =0}^{(1)} =\frac{8\pi ^{2}C_{i}^{2}\omega _{c}^{9/4}}{c\omega (\omega ^{2}-\omega _{c}^{2})^{2}}\frac{1}{\sqrt{2E_{S}}}\sum\limits_{n_{z}=0}^{\infty }\int_{\zeta _{\min}}^{\infty}\frac{d\tilde{\zeta}_{k_{y}}}{\sqrt{\tilde{% \zeta}_{k_{y}}-\zeta _{\min}}}\left[ \tilde{\zeta}_{k_{y}}\omega _{c}g_{n_{z}}\left( \tilde{\zeta}_{k_{y}}\right) +\omega g_{n_{z}}^{\prime}\left( \tilde{\zeta}_{k_{y}}\right) \right] ^{2}. \label{crs1stpln} \end{equation} We now consider two limiting cases, corresponding to weak static magnetic and electric fields respectively.
\paragraph{The weak magnetic field limit.} The plane wave cross section in Eq.~(\ref{crs1stpln}) can be simplified when the cyclotron frequency, $\omega _{c}$, is much smaller than the laser frequency, $\omega$, i.e., $\omega _{c}\ll \omega $. In this case, \begin{equation} \sigma _{\alpha =0,\omega _{c}\ll \omega}^{(1)} = \frac{3\sigma ^{0}}{k^{3}}\frac{\omega _{c}^{9/4}}{\sqrt{2E_{S}}}% \sum\limits_{n_{z}=0}^{\infty}\int_{\zeta _{\min}}^{\infty}d\tilde{\zeta}% _{k_{y}}\frac{g_{n_{z}}^{\prime 2}\left( \tilde{\zeta}_{k_{y}}\right)}{% \sqrt{\tilde{\zeta}_{k_{y}}-\zeta _{\min}}}, \label{crsdelos} \end{equation} where we have defined \begin{equation} \sigma ^{0} =\frac{8\pi ^{2}C_{i}^{2}}{3c\omega ^{3}}k^{3}, \label{crs_zerofld} \end{equation} in which $\sigma ^{0}$ is the photodetachment cross section for H$^-$ in the monochromatic field limit in the absence of any static fields, and $k$ is the magnitude of the detached electron's momentum, $k^{2}=2E_{f}=2\left( \omega +\varepsilon _{i}\right)$. We note that our weak magnetic field result in Eq.~(\ref{crsdelos}) agrees with the formula of Peters and Delos (see Eqs.~(3.6) and~(3.7a) of Ref.~\cite{Peter93b}). Eq.~(\ref{crsdelos}) also agrees with Fabrikant's result (see Eq.~(53) of Ref.~\cite{Fabr91}) except for the extra term in his formula that accounts for final-state interaction of the electron with the atomic residue. \paragraph{Weak static electric field limit.} In the limit $E_{S}\rightarrow 0$, we have $\zeta _{\min}\rightarrow -\infty$. We also have \begin{equation} \lim_{E_{S}\rightarrow 0}\sqrt{2E_{S}}\sqrt{\tilde{\zeta}_{k_{y}}-\zeta _{\min}} =\omega _{c}^{1/4}\sqrt{2\omega _{c}\left[ \left( \varepsilon _{i}+\omega \right) /\omega _{c}-(n_{z}+\frac{1}{2})\right]}.
\end{equation} Substituting this result into Eq.~(\ref{crs1stpln}) and carrying out the integration involving the Hermite polynomials, we obtain \begin{equation} \sigma _{\alpha =0,E_{S}=0}^{(1)}=\frac{8\pi ^{2}C_{i}^{2}\omega _{c}^{2}}{% c\omega (\omega +\omega _{c})^{2}}\sum\limits_{n_{z}=0}^{n_{1}}\left[ \frac{% \omega ^{2}+\omega _{c}^{2}}{(\omega -\omega _{c})^{2}}n_{z}+\frac{1}{2}% \right] \frac{1}{\sqrt{2\omega _{c}\left[ \left( \varepsilon _{i}+\omega \right) /\omega _{c}-(n_{z}+\frac{1}{2})\right]}}, \label{crs_gao} \end{equation} where the upper limit of summation, $n_1$, is the largest integer that satisfies \[ n_{1}<\left[ \frac{\varepsilon _{i}+\omega}{\omega _{c}}-\frac{1}{2}\right]. \] Eq.~(\ref{crs_gao}) is exactly the same as Gao's result for the one-photon detachment cross section in a static uniform magnetic field (see Eq.~(31) of Ref.~\cite{Gao90a}). \subsubsection{Cross Section for the Double Pulse Case} From Eqs.~(\ref{crsdbl0}), (\ref{transrate}), (\ref{crsdef}), and (\ref{flux2}), it is easy to show that for the double laser pulse case, the cross section is given by \begin{eqnarray} \sigma _{\text{dbl}}^{(1)} &=&\frac{4\pi ^{2}C_{i}^{2}\omega _{c}^{2}}{% c\omega (\omega ^{2}-\omega _{c}^{2})^{2}}\sum\limits_{n_{z}=0}^{\infty}\int_{-\infty}^{\infty }d\zeta _{k_{y}}\left[ \zeta _{k_{y}}\omega _{c}g_{n_{z}}\left( \zeta _{k_{y}}\right) +\omega g_{n_{z}}^{\prime}\left( \zeta _{k_{y}}\right) \right] ^{2} \nonumber \\ &&\times \int_{-\infty}^{\infty}dk_{x}% \frac{1+\cos \left[ \left( \varepsilon _{fi}-\omega \right) \tau -\beta \right]}{1+\cos \beta \exp (-\alpha ^{2}\tau ^{2}/2)} \overline{\delta}_{\alpha}\left( \varepsilon _{fi}-\omega \right). \label{DblPulseCS} \end{eqnarray} We note that for $\tau =\beta =0$, this formula reduces to the single pulse result in Eq.~(\ref{crs1st}), as it should (cf. Eq.~(\ref{laserpulsedbl})).
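The weak static electric field limit quoted above is easy to confirm numerically. The short sketch below is ours, not the authors'; the field strengths, the channel index $n_z$, and the fixed value of $\tilde\zeta_{k_y}$ are arbitrary sample values (with $E_{f0}$ standing for $\varepsilon_i+\omega$), chosen so that the channel is open, $(n_z+\frac{1}{2})\omega_c<\varepsilon_i+\omega$.

```python
import numpy as np

def zeta_min(E_S, omega_c, E_f0, n_z):
    """zeta_min of Eq. (zetamin); E_f0 denotes eps_i + omega."""
    return (np.sqrt(omega_c) / E_S) * (
        (n_z + 0.5) * omega_c + E_S**2 / (2.0 * omega_c**2) - E_f0
    )

def lhs(E_S, omega_c, E_f0, n_z, zeta_tilde):
    """sqrt(2 E_S) * sqrt(zeta_tilde - zeta_min) at finite E_S."""
    return np.sqrt(2.0 * E_S) * np.sqrt(zeta_tilde - zeta_min(E_S, omega_c, E_f0, n_z))

def es_zero_limit(omega_c, E_f0, n_z):
    """Stated E_S -> 0 limit: omega_c^(1/4) sqrt(2 omega_c [E_f0/omega_c - (n_z + 1/2)])."""
    return omega_c**0.25 * np.sqrt(2.0 * omega_c * (E_f0 / omega_c - (n_z + 0.5)))
```

Since $\zeta_{\min}\sim E_S^{-1}$ dominates $\tilde\zeta_{k_y}$, the finite-$E_S$ expression converges to the limit linearly in $E_S$, which the check below verifies by halving the error as the field is reduced.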
\section{Connections to Classical Closed Orbits} In the previous section we have derived a general quantum mechanical expression for the (effective) photodetachment cross section for H$^{-}$ by a short laser pulse in the presence of crossed static electric and magnetic fields. We have also shown that our plane wave limit result (given by Eq.~(\ref{crs1stpln})) reduces for the limiting cases of weak static magnetic (cf. Eq.~(\ref{crsdelos})) or weak static electric (cf. Eq.~(\ref{crs_gao})) fields to known results of others. Magnetic field strengths, $B$, that are readily available at present in the laboratory are weak in the sense that they satisfy the relation, $\omega _{c}\ll \omega $. Therefore the quantum result for the photodetachment cross section in the plane wave limit given in Eq.~(\ref{crsdelos}) is of great interest owing to the possibility of experimental measurements with currently available technology. In this section we analyze this equation for the purpose of making connection with the classical closed orbits analyzed by Peters and Delos~\cite{Peter93}. This connection will prove useful for interpreting some of the numerical predictions presented in the next section. For $\omega\gg\omega_{c}$, the detached electron energy lies in the region of large $n_{z}$. 
In this limit the integrand in Eq.~(\ref{crsdelos}) becomes highly oscillatory, as may be seen by considering the large $n_{z}$ (Plancherel-Rotach) limit of the Hermite function, $g_{n_{z}}$~\cite{Szego39}: \begin{equation} g_{n_{z}}(\tilde{\zeta}_{k_{y}})\simeq\sqrt{\frac{2}{\pi \sqrt{2n_{z}(1-\eta ^{2})}}}\left\{\sin \left[ \left( n_{z}+\frac{1}{2}\right) \left( \eta \sqrt{1-\eta ^{2}}-\arccos \eta \right) +\frac{3\pi}{4}\right]+O(n_{z}^{-1})\right\}, \label{gnzzetay} \end{equation} where we have defined \begin{equation} \eta \equiv \tilde{\zeta}_{k_{y}}/\sqrt{2n_{z}+1}, \label{yita} \end{equation} and note that \[ \epsilon \leqslant \arccos \eta\leqslant \pi -\epsilon ,\text{}\epsilon \rightarrow 0^{+}, \] {\it{i.e.}}, the argument of the Hermite function, $g_{n_{z}}(\tilde{\zeta}_{k_{y}})$, must lie between the classical turning points: \[ -\sqrt{2n_{z}+1}<\tilde{\zeta}_{k_{y}}<\sqrt{2n_{z}+1}. \] The function $g_{n_{z}}^{\prime}\left( \tilde{\zeta}_{k_{y}}\right) $ that occurs in the integrand of Eq.~(\ref{crsdelos}) may be calculated by differentiation of Eq.~(\ref{gnzzetay}) with respect to $\tilde{\zeta}_{k_{y}}$, as follows: \begin{eqnarray} g_{n_{z}}^{\prime}\left( \tilde{\zeta}_{k_{y}}\right) &=& \frac{\partial \eta} {\partial \tilde{\zeta}_{k_{y}}} \frac{\partial}{\partial \eta} g_{n_{z}} \left( \tilde{\zeta}_{k_{y}}\right) = \frac{1}{\sqrt{2 n_z +1}} \sqrt{\frac{2}{\pi \sqrt{2 n_z}}} \nonumber \\ &\times& \left\{\frac{\eta}{2} \left(1- \eta^2 \right)^{-\frac{5}{4}} \sin\left[ S(n_z, \eta)\right] + (2n_z + 1) \left(1- \eta^2 \right)^{\frac{1}{4}} \cos\left[ S(n_z, \eta)\right] \right\}, \label{gnderivappr1} \end{eqnarray} where we have defined the phase \begin{equation} S(n_{z},\eta )=\left( n_{z}+\frac{1}{2}\right) \left( \eta \sqrt{1-\eta ^{2}}% -\arccos \eta \right) +\frac{3\pi}{4}. 
\label{sphase} \end{equation} Assuming that $n_z\gg1$, Eq.~(\ref{gnderivappr1}) can be simplified (in particular, the first term within the curly brackets can be ignored in comparison with the second term), so that we obtain: \begin{equation} g_{n_{z}}^{\prime}\left( \tilde{\zeta}_{k_{y}}\right) \simeq \sqrt{\frac{2}{\pi}} (2n_z)^{\frac{1}{4}} (1-\eta^2)^{\frac{1}{4}} \cos\left[ S(n_z, \eta)\right]. \label{gnderivappr2} \end{equation} Owing to the fact that $n_z$ is large, the phase function $S(n_z,\eta)$ changes significantly as $\eta$ varies (cf. Eq.~(\ref{sphase})), so that $g_{n_{z}}^{\prime}\left( \tilde{\zeta}_{k_{y}}\right)$ oscillates rapidly as a function of $\tilde{\zeta}_{k_{y}}$. From Eq.~(\ref{crsdelos}) we see that the magnitude of the photodetachment cross section will have the highest maxima when the squares of the various $g_{n_{z}}^{\prime}\left( \tilde{\zeta}_{k_{y}}\right)$ functions that are summed (over $n_z$) have their maxima and minima in phase with each other, i.e., when neighboring phase functions differ by an integer multiple of $\pi$: \[ S\left(n_{z},\eta(n_z)\right )-S\left(n_{z}-1,\eta(n_z-1)\right)\simeq\frac{d}{d n_{z}}S(n_{z},\eta )=j\pi \text{, where }j=0,\pm 1,\pm 2,... \] This condition is similar to that found for the largest local maxima in the photodetachment cross section of $H^-$ in the presence of parallel static electric and magnetic fields~\cite{Wang97}. We compute the total derivative of $S(n_z,\eta)$ as: \begin{equation} \frac{d}{d n_{z}}S(n_{z},\eta )=\frac{\partial S}{\partial n_z} + \frac{\partial S}{\partial\eta} \frac{\partial \eta}{\partial n_z}\label{totderivS} \end{equation} The partial derivatives of the phase $S(n_z,\eta)$ follow straightforwardly from the definition in Eq.~(\ref{sphase}). 
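Before proceeding, the asymptotic forms in Eqs.~(\ref{gnzzetay}) and~(\ref{gnderivappr2}) can be spot-checked numerically. The sketch below is ours: it evaluates the normalized Hermite function $g_n$ by its stable three-term recurrence and compares it, together with a finite-difference derivative, against the Plancherel--Rotach forms between the classical turning points; the choice $n_z=200$ and the grid of test points are arbitrary.

```python
import numpy as np

def g_exact(n, x):
    """g_n(x) = (2^n n! sqrt(pi))^(-1/2) H_n(x) exp(-x^2/2), computed with the
    normalized recurrence g_{k+1} = x*sqrt(2/(k+1))*g_k - sqrt(k/(k+1))*g_{k-1}."""
    g_prev = np.pi ** (-0.25) * np.exp(-x * x / 2.0)  # g_0
    if n == 0:
        return g_prev
    g_cur = np.sqrt(2.0) * x * g_prev                 # g_1
    for k in range(1, n):
        g_cur, g_prev = x * np.sqrt(2.0 / (k + 1)) * g_cur - np.sqrt(k / (k + 1.0)) * g_prev, g_cur
    return g_cur

def phase_S(n, eta):
    """Phase of Eq. (sphase)."""
    return (n + 0.5) * (eta * np.sqrt(1.0 - eta * eta) - np.arccos(eta)) + 0.75 * np.pi

def g_pr(n, x):
    """Plancherel-Rotach approximation of Eq. (gnzzetay)."""
    eta = x / np.sqrt(2.0 * n + 1.0)
    amp = np.sqrt(2.0 / (np.pi * np.sqrt(2.0 * n * (1.0 - eta * eta))))
    return amp * np.sin(phase_S(n, eta))

def gprime_pr(n, x):
    """Large-n derivative approximation of Eq. (gnderivappr2)."""
    eta = x / np.sqrt(2.0 * n + 1.0)
    return np.sqrt(2.0 / np.pi) * (2.0 * n) ** 0.25 * (1.0 - eta * eta) ** 0.25 * np.cos(phase_S(n, eta))
```

The absolute error of both approximations is $O(n_z^{-1})$ relative to the oscillation amplitude, consistent with dropping the first term within the curly brackets of Eq.~(\ref{gnderivappr1}).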
The partial derivative, ${\partial \eta}/{\partial n_z}$, is calculated using the definition in Eq.~(\ref{yita}) and the expression for $\tilde{\zeta}_{k_{y}}$ obtained from the energy conservation condition, $\varepsilon _{fi}-\omega=0$, together with Eqs.~(\ref{finaleps})-(\ref{zetamin}) and Eq.~(\ref{zetalimit}). After some straightforward algebra, one obtains: \begin{equation} \frac{d}{d n_{z}}S(n_{z},\eta )=-\arccos \eta +\sqrt{1-\eta ^{2}}\frac{\omega _{c}}{E_{S}}\sqrt{2\varepsilon _{n_{z}}}=j\pi. \label{closeorb1} \end{equation} The condition~(\ref{closeorb1}) for the highest local maxima in the photodetachment cross section~(\ref{crsdelos}) may be re-written in terms of the scaled energy $\varepsilon$, defined by \begin{equation} \varepsilon =\varepsilon _{n_{z}}\left( \frac{\omega _{c}}{E_{S}}\right) ^{2}, \label{scaledeps} \end{equation} and the angle $\varphi$, defined by \begin{equation} \varphi =\frac{\pi}{2}-\arccos \eta, \label{scaledphi} \end{equation} to obtain: \begin{equation} \cos \varphi +\frac{\varphi}{\sqrt{2\varepsilon }}-\frac{1}{\sqrt{2\varepsilon}}% \left( j+\frac{1}{2}\right) \pi =0. \label{stationphase1} \end{equation} This result is identical to the classical equation expressing the relationship of the azimuthal angle $\varphi$ and the scaled energy $\varepsilon$ for a closed orbit of an electron in crossed fields (see Eq.~(3.12) of Ref.~\cite{Peter93}). The classical Hamiltonian corresponding to the quantum Hamiltonian~(\ref{hamiltonianmom}) for the detached electron is given by \begin{eqnarray}\label{classicalH} H_{\text{cls}} &=& \frac{1}{2} \omega _{c}^2 z ^2 + \omega _{c} z \left(p_y - \frac{E_{S}}{\omega _{c}} \right) + \frac{1}{2}p_z^2 + \frac{1}{2}p_x^2 + \frac{1}{2}p_y^2 \nonumber \\ &=& \frac{1}{2}p_x^2 + \frac{E_{S}}{\omega _{c}} p_y + \frac{1}{2}p_z^2 + \frac{1}{2} \left[\omega _{c} z + \left(p_y - \frac{E_{S}}{\omega _{c}} \right)\right]^2 - \frac{1}{2} \frac{E_{S}^2}{\omega _{c}^2}. 
\label{classH} \end{eqnarray} Denoting \begin{equation}\label{circuez} \varepsilon_z = \frac{1}{2}p_z^2 + \frac{1}{2} \left[\omega _{c} z + \left(p_y - \frac{E_{S}}{\omega _{c}} \right)\right]^2, \end{equation} and introducing the following scaled coordinate, momentum, and time variables, \begin{equation}\label{scalmom1} \tilde{\mathbf{q}} = \frac{\omega_c^2}{E_S}\mathbf{q}, \end{equation} \begin{equation}\label{scalmom2} \tilde{\mathbf{p}} = \frac{\omega_c}{E_S}\mathbf{p}, \end{equation} \begin{equation}\label{scaltime} \tilde{t} = \omega_c t, \end{equation} Eq.~(\ref{classH}) may be rewritten as \begin{equation} E = \varepsilon + \frac{1}{2} \tilde{p}_x^2 + \tilde{p}_y - \frac{1}{2}, \label{scaledHcall} \end{equation} where $E = \omega _{c}^{2} (\omega + \varepsilon_i)/{E_{S}^{2}}$ is the scaled total energy and $\varepsilon$ is given by Eq.~(\ref{scaledeps}) in which the quantum energy $\varepsilon_{n_z}$ (cf. Eq.~(\ref{epsnz})) is replaced by the classical energy $\varepsilon_z$ in Eq.~(\ref{circuez}). With the help of Eqs.~(\ref{epsnz}), (\ref{zetaky1}) and~(\ref{zetalimit}), it is easy to show from the definition~(\ref{yita}) that \begin{equation} \eta = -\frac{p_y - E_S/\omega_c}{\sqrt{2 \varepsilon_{n_z}}}, \label{yita2} \end{equation} which can be rewritten, in terms of scaled energies, as \begin{equation} \eta = \frac{\varepsilon - (E-1/2)}{\sqrt{2\varepsilon}}, \label{yita3} \end{equation} by using the energy conservation equation (\ref{scaledHcall}) and the fact that $p_x = 0$ for closed orbits. Substituting Eq.~(\ref{yita3}) into Eq.~(\ref{scaledphi}), we can rewrite Eq.~(\ref{stationphase1}) as \begin{equation} \Lambda(\varepsilon) \equiv \sqrt{2\varepsilon -\left[\varepsilon -(E-1/2)\right]^{2}} -\arccos\left[ \frac{\varepsilon -(E-1/2)}{\sqrt{2\varepsilon}}\right] =j\pi. \label{stationphase2} \end{equation} For a given scaled total energy $E$, the number of the solutions of Eq.~(\ref{stationphase2}) gives the total number of closed orbits. 
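The counting of closed orbits implied by Eq.~(\ref{stationphase2}) is straightforward to carry out numerically. The sketch below is ours, not from the paper: it brackets the roots of $\Lambda(\varepsilon)=j\pi$ on the physical domain $|\eta|\leq1$ and returns, for each root, the scaled orbit energy and the return time $T/T_{B}=j+\arccos(\eta)/\pi$ that follows from Eq.~(\ref{Treturn2}). For the scaled total energy $E=302.869$ (i.e., $\omega+\varepsilon_i=500$ cm$^{-1}$, discussed later in connection with Table~\ref{table1}) it finds fifteen closed orbits.

```python
import numpy as np
from scipy.optimize import brentq

def Lam(eps, E):
    """Lambda(eps) of Eq. (stationphase2) at scaled total energy E."""
    c = E - 0.5
    return np.sqrt(2.0 * eps - (eps - c) ** 2) - np.arccos((eps - c) / np.sqrt(2.0 * eps))

def closed_orbits(E, jmax=20, ngrid=20001):
    """All roots of Lambda(eps) = j*pi; returns a list of (j, eps, T/T_B)."""
    c = E - 0.5
    # physical domain |eta| <= 1, i.e. eps^2 - 2(c+1) eps + c^2 <= 0
    half = np.sqrt(2.0 * c + 1.0)
    lo = (c + 1.0 - half) * (1.0 + 1e-9)   # tiny margins keep the sqrt/arccos real
    hi = (c + 1.0 + half) * (1.0 - 1e-9)
    grid = np.linspace(lo, hi, ngrid)
    lam = Lam(grid, E)
    orbits = []
    for j in range(jmax + 1):
        v = lam - j * np.pi
        for i in np.nonzero(v[:-1] * v[1:] < 0)[0]:   # sign changes bracket the roots
            eps = brentq(lambda e: Lam(e, E) - j * np.pi, grid[i], grid[i + 1])
            eta = (eps - c) / np.sqrt(2.0 * eps)
            orbits.append((j, eps, j + np.arccos(eta) / np.pi))
    return orbits
```

Return times computed this way can be compared directly with the $T_j^{\pm}$ entries of Table~\ref{table1}; the maximum of $\Lambda(\varepsilon)$ lies, as expected for boundary orbits, near $\varepsilon\simeq E+3/2$.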
The return time of a closed orbit in crossed fields is given by~\cite{Peter93} \begin{equation} T_{\text{ret}}= 2(\omega_c)^{-1} \sqrt{2\varepsilon}\cos \varphi, \label{Treturn} \end{equation} which can be rewritten with the help of Eqs.~(\ref{scaledphi}) and~(\ref{yita3}) as \begin{equation} T_{\text{ret}}= 2(\omega_c)^{-1} \sqrt{2\varepsilon - \left[ \varepsilon - (E-1/2)\right]^2}. \label{Treturn2} \end{equation} As discussed in~\cite{Peter93}, there exists a very important group of closed orbits whose total energies are given approximately (in the large energy limit) by \begin{equation} E_j^b \simeq \frac{\pi^2}{2}\frac{E_S^2}{\omega_c^2}\left[\left(j+\frac{1}{2}\right)^2 - \frac{3} {\pi^2}\right], \label{bndengy1} \end{equation} where $j=1, 2, 3,...$. These are called boundary energies, because for each $j$ a new closed orbit appears at the energy given by Eq.~(\ref{bndengy1}) and for higher total energies this newborn closed orbit will split (or ``bifurcate'') into a pair of closed orbits with two different energies and return times, given by Eqs.~(\ref{stationphase2}) and~(\ref{Treturn2}). Actually, each boundary energy defines the onset of large oscillations in the cross section. However, the largest amplitude oscillation in the cross section occurs at a slightly higher energy at which a different type of closed orbit occurs that has a truly circular motion in the drift frame in the $y$-$z$ plane. The energy of this orbit may be obtained by setting the initial momentum along the $y$ axis equal to the drift velocity, {\it i.e.}, \begin{equation} p_y^0 = \frac{E_S}{\omega_c}. \label{py0cond} \end{equation} From the energy conservation equation (\ref{scaledHcall}) and the fact that $p_x=0$ for a closed orbit, we have $\varepsilon = E-1/2$. 
Substituting this result into Eq.~(\ref{stationphase2}) gives \begin{equation} \sqrt{2\varepsilon} =(j + \frac{1}{2} )\pi, \label{stationphase3} \end{equation} which in unscaled variables corresponds to a total energy equal to \begin{equation} \overline{E}_j = \frac{\pi^2}{2}\frac{E_S^2}{\omega_c^2} \left[\left(j+\frac{1}{2}\right)^2 + \frac{1} {\pi^2}\right]. \label{bndengy2} \end{equation} Comparing Eqs.~(\ref{bndengy1}) and~(\ref{bndengy2}), one sees that the energy difference between the boundary orbits and the orbits having $p_y^0$ equal to the drift velocity is $2{E_S^2}/{\omega_c^2}$, independent of the value of $j$. Boundary closed orbits satisfy $\partial \Lambda(\varepsilon) / \partial \varepsilon =0$, which gives the relationship $ \varepsilon = E+\frac{3}{2}$ in the large energy limit. Closed orbits for which Eq.~(\ref{py0cond}) applies have $ \varepsilon = E-\frac{1}{2}$. From Eqs.~(\ref{Treturn2}) and~(\ref{stationphase3}), we find for these latter orbits that \begin{equation} T_{\text{ret}}= (j +\frac{1}{2}) T_B, \label{Treturn1} \end{equation} where $T_B= {2\pi} / {\omega_c} $ is the cyclotron period and $j$ is a positive integer. This formula is very similar to that obtained for the case of parallel static electric and magnetic fields~\cite{Pete94, Wang97}, in which the largest oscillation amplitude of the cross section corresponds to classical orbits for which an electron is ejected along the static field direction and reflected by the static electric field such that its return time satisfies $ T_{\text{ret}} = j T_B $. In the parallel fields case, the motion in the plane perpendicular to the magnetic field is simply cyclotron motion with period $T_B$. Classical closed orbits having a return time equal to an integer multiple of $T_B$ are associated with the largest oscillations in the cross section~\cite{Wang97}. In the crossed fields case, however, the situation is much more complicated. Nevertheless, since $\varepsilon_z$ (cf.
Eq.~(\ref{circuez})) is conserved, when the detached electron has an initial momentum $p_y^0$ given by Eq.~(\ref{py0cond}) (and an initial position $z=0$), the initial momentum along the $z$ axis takes its maximum value. This implies that this particular closed orbit starts out (in the drift frame) aligned with the laser polarization direction. These circular orbits (in the drift frame), having energies given by Eq.~(\ref{bndengy2}), are associated with the largest amplitude oscillation of the cross section. \section{Results and Discussion} \begin{figure} \includegraphics[width=12cm]{figs/fig2}\\ \caption{Photodetachment cross section for $H^-$ in the plane wave limit for a static magnetic field $B=1$ T and three different values of the static electric field strength. Results are plotted versus detached electron kinetic energy above the zero field detachment threshold, $\omega + \varepsilon _{i}$, up to $2.7$ cm$^{-1}$.} \label{figure2} \end{figure} In this section we present numerical results based on the quantum mechanical theoretical formulation presented above. We present first plane wave limit results for the photodetachment cross section of $H^-$ in the presence of crossed static electric and magnetic fields over a much larger energy range than in prior works~\cite{Fabr91, Peter93, Peter93b}. This large range allows us to demonstrate very clearly the signatures of the predicted classical closed orbits~\cite{Peter93}, both in the energy spectrum and in the time (i.e., Fourier transform) spectrum. We examine next the short laser pulse case, demonstrating first the effects of laser pulse duration on the photodetachment cross section. We then examine the detached electron wave packet dynamics in the $y$-$z$ plane and the possibility of modulating the detachment cross section by pump probe (Ramsey interference) techniques. 
The connection between the time development of the quantum wave packet of the detached electron and the predicted classical closed orbits is also discussed. \subsection{Photodetachment Cross Section in the Plane Wave Limit} \subsubsection{Static Electric Field Dependence for Near Threshold Energies} \begin{figure} \includegraphics[width=12cm]{figs/fig3}\\ \caption{Same as Fig.~\ref{figure2} but for three higher static electric field strengths and detached electron kinetic energies up to $5$ cm$^{-1}$.} \label{figure3} \end{figure} In Figs.~\ref{figure2} and~\ref{figure3} we present the photodetachment cross section for $H^-$ for a static magnetic field, $B = 1$ T, and six different values of the static electric field, $E_S$. Our quantum theory predictions are obtained from the plane wave limit result given in Eq.~(\ref{crs1stpln}). One sees in Fig.~\ref{figure2} that, as noted by Fabrikant~\cite{Fabr91}, even a very small static electric field removes the known singularity in the detachment cross section for energies corresponding to integer multiples of the cyclotron frequency in the pure magnetic field case (see, e.g.,~\cite{Gao90a}). In particular, for $E_S = 0.5$ V/cm, the behavior of the cross section is very similar to that of the pure magnetic field case~\cite{Gao90a} (to which our results reduce in the limit of zero static electric field, as shown in Sec. II.F.1(b) above), but without the cyclotron singularities. On the other hand, beginning with $E_S=7$ V/cm, the oscillatory modulation of the cross section by the static electric field becomes obvious. As shown in Fig.~\ref{figure3} the frequency of this modulation decreases as the static electric field magnitude increases, just as is found for the case of a pure static electric field or for the case of parallel static magnetic and electric fields (see, e.g.,~\cite{Wang95}). 
One sees also that the cross section becomes non-zero at the zero-field threshold owing to the lowering of the threshold by the static electric field. \begin{figure} \includegraphics[width=12cm]{figs/fig4}\\ \caption{Photodetachment cross section for $H^-$ in the plane wave limit for $B=1$ T, $E_S=60$ V/cm, and detached electron kinetic energies up to $30$ cm$^{-1}$.} \label{figure4} \end{figure} For the present crossed static magnetic and electric field case, the modulation of the cross section becomes increasingly complex as the maximum total energy $E_f$ increases. For a maximum detached electron kinetic energy of 30 cm$^{-1}$, Fig.~\ref{figure4} shows that the oscillatory modulations differ above and below approximately 15 cm$^{-1}$. For energies below 15 cm$^{-1}$, there exists only a sinusoidal modulation. Above about 15 cm$^{-1}$, the modulation consists of more than one frequency and becomes more complicated the higher in energy one looks. In order to examine these structures in detail, it is instructive to plot only the oscillatory part of the cross section, which is defined by \begin{equation}\label{crsosc} \sigma_{\text{osc}} = \sigma _{\alpha =0}^{(1)} - \sigma^0, \end{equation} where $\sigma _{\alpha =0}^{(1)}$ is the total cross section in the plane wave limit (given by Eq.~(\ref{crs1stpln})) and $\sigma^0$ is the photodetachment cross section in the absence of any external static fields (given by Eq.~(\ref{crs_zerofld})). Figure~\ref{figure5} shows the oscillatory part of the cross section over three different energy ranges, corresponding to total energies up to 60 cm$^{-1}$, 180 cm$^{-1}$, and 500 cm$^{-1}$, respectively. While the oscillatory modulations of the cross section become increasingly dense and complex as the total energy increases, we also see that clear patterns in the spectra emerge and become more obvious the higher in energy we look.
The onset of these repetitive patterns is indicated in each panel by the vertical dashed lines, which represent the locations of the boundary energies~\cite{Peter93} defined by Eq.~(\ref{bndengy1}). The peak amplitudes are indicated by the open triangles at the energies defined by Eq.~(\ref{bndengy2}), which correspond to the locations of circular classical orbits in the drift frame. The connection between these classical closed orbits and our quantum mechanical cross sections can be most easily investigated in the time domain, which we consider next. \subsubsection{Fourier Transform Spectra and Closed Classical Orbits} \begin{figure} \includegraphics[width=12cm]{figs/fig5}\\ \caption{The oscillatory part of the cross section, $\sigma_{\text{osc}}$ (cf. Eq.~(\ref{crsosc})), for $B=1$ T and $E_S=60$ V/cm for $\omega+\varepsilon_i$ ranging from (a) 0 to 60 cm$^{-1}$; (b) 0 to 180 cm$^{-1}$; (c) 0 to 500 cm$^{-1}$. Dashed lines indicate the boundary energies (cf. Eq.~(\ref{bndengy1})) at which a new closed orbit appears~\cite{Peter93} and the open triangles indicate the energies at which the amplitude of the oscillatory part of the quantum cross section is expected to have a local maximum (cf. Eq.~(\ref{bndengy2})).} \label{figure5} \end{figure} \begin{figure} \includegraphics[width=12cm]{figs/fig6}\\ \caption{Fourier transform spectra for the oscillatory part of the cross section (cf. Eq.~(\ref{crsosc})) for different maximum energies, $E_f^{max} = \omega+\varepsilon_i$, as indicated in each panel. See text for a detailed description.} \label{figure6} \end{figure} The Fourier transform of the oscillatory part of the photodetachment cross section, $\sigma_{\text{osc}}$, is presented in Fig.~\ref{figure6} for increasing values of the maximum total detached electron kinetic energy, $E_f^{max}$. Times are given in units of the cyclotron period, $T_B$. We see from Fig.~\ref{figure6}(a) that at the lowest maximum energy, only one peak appears in the time spectrum.
The peak position indicates the return time (i.e., orbit period) of a closed classical orbit having an energy of 10.98 cm$^{-1}$. As $E_f^{max}$ increases to 15.91 cm$^{-1}$ in (b), another peak emerges. This corresponds to the first classical boundary energy (cf. Eq.~(\ref{bndengy1})) for $j = 1$. Unlike the case of classical dynamics, however, where the boundary energy is sharply defined, our calculations indicate that the second peak in Fig.~\ref{figure6}(b) begins to appear around the energy 13.5 cm$^{-1}$. Note also that the first peak in (b) shifts to the right as compared to that in (a). This indicates that the return time of the first closed orbit increases when the total energy increases. In panel (c) we observe that the second peak increases in magnitude while the first peak decreases in magnitude. In panel (d), when the total energy equals 21.97 cm$^{-1}$, one observes the bifurcation of the second peak that first appeared in panel (b). For $E_f^{max}=48.49$ cm$^{-1}$, in panel (e), one sees the appearance of a third peak around 2.4 $T_B$ and notices that the width of the splitting of the second peak increases as compared to that in (d). The left and right boundaries of the split and broadened second peak correspond to the return times of the two bifurcated orbits at the maximum available total energy $E_f^{max}$. The serrated U-shaped region between the left and right boundaries of the split second peak corresponds to the return times of the two bifurcated orbits for lower total energies (such as the two shown in panel (d) for a total energy of 21.97 cm$^{-1}$). Finally, in panel (f) for a slightly higher total energy we observe that the third peak grows in magnitude relative to the first and second (split) peaks. Consider now a much larger total final state energy, $E_f^{max}$ = 500 cm$^{-1}$. The oscillatory part of the cross section is shown in Fig.~\ref{figure5}(c).
In this figure, the dashed lines correspond to the seven boundary orbit energies, given by Eq.~(\ref{bndengy1}), that appear for energies up to 500 cm$^{-1}$, and the triangles correspond to the seven circular orbit (in the drift frame) energies, given by Eq.~(\ref{bndengy2}). The Fourier transform spectrum for energies in the range from 0 - 500 cm$^{-1}$ is shown in Fig.~\ref{figure7}(a), while the Fourier transform of only the part of the spectrum in the range from 400 - 500 cm$^{-1}$ is shown in Fig.~\ref{figure7}(b). In (a), the open circles denote the return times (periods) of the 15 closed classical orbit solutions of Eq.~(\ref{stationphase2}) that exist for a total energy of 500 cm$^{-1}$. These periods are calculated using Eq.~(\ref{Treturn2}) and the results are given in Table~\ref{table1} together with the corresponding orbit energies. (Note that for each $j>0$ and for a total energy $E$ not equal to one of the boundary energies given by Eq.~(\ref{bndengy1}), Eq.~(\ref{stationphase2}) has two solutions.) The open triangles, on the other hand, mark the circular orbits in the drift frame that correspond to the local maximum amplitudes of the oscillatory part of the cross section; their return times are given by the very simple Eq.~(\ref{Treturn1}). \begin{table}[tbp] \caption{Numerical solutions of Eq.~(\ref{stationphase2}) for the energies, $ \varepsilon_j^{\pm}$ (in units of cm$^{-1}$), for the closed orbits that exist for a total energy $\omega+\varepsilon_i=500$ cm$^{-1}$ (302.869 in scaled units) and the corresponding closed orbit periods, $ T_j^{\pm}$ (in units of $T_B$), calculated from Eq.~(\ref{Treturn2}).
Note that there is only one solution for the case $j=0$.} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $j$ & 0 &1&2&3&4&5&6&7 \\ \hline \hline $ \varepsilon_j^-$ & 461.459& 462.389 &463.987&466.337&469.595&474.056& 480.398& 491.231\\ $ T_j^-$ & 0.959 &1.918 &2.876&3.831&4.783&5.730&6.665& 7.571 \\ \hline $\varepsilon_j^+$ &--- &542.137 &541.0371&539.127&536.264&532.160&526.143&515.609 \\ $ T_j^+$ &---&1.041 &2.082&3.126&4.172&5.224&6.286& 7.378\\ \hline \end{tabular}% \label{table1}% \end{table} Note that the bowl-like structures appearing in Fig. \ref{figure7}(a) above each open triangle result from the fact that closed orbits have different return times for each different total energy and from the fact that the Fourier transform spectrum in this figure results from a large range of total energies, i.e., from 0 to 500 cm$^{-1}$. When we calculate the Fourier transform of the oscillatory part of the cross section over only the limited energy range, $400$ cm$^{-1}$ $\le \omega+\varepsilon_i \le 500$ cm$^{-1}$, as in Fig. \ref{figure7}(b), then we observe that the first 13 peaks are approximately located at the positions of the first 13 closed classical orbit periods given in Table~\ref{table1}, which were calculated for a maximum total energy of $500$ cm$^{-1}$. The energy region $0 \le \omega+\varepsilon_i \le 400$ cm$^{-1}$ is thus inferred to be responsible for the bowl-like structures in Fig. \ref{figure7}(a) owing to the shifts of the lowest 13 classical orbit periods (having $j \le 6$) for lower total energies. The bowl-like structure remains between the 14th and 15th closed classical orbits (having $j = 7$) as this pair of orbits first occurs above a total energy of approximately 455 cm$^{-1}$ (cf. Fig.~\ref{figure5}(c)). We also observe from the data in Figs.~\ref{figure5} and~\ref{figure7} that the oscillation amplitude of the cross section becomes larger as the total energy increases. 
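The peak-extraction idea behind Figs.~6 and 7 can be illustrated with a small numerical sketch. This is synthetic data, not the paper's computed cross section: the return times below are borrowed from Table~1, and the assumption that energy and time are simple conjugate dimensionless variables is ours, not the paper's scaled-unit convention.

```python
import numpy as np

# Model the oscillatory cross section as a sum of cosines, one per closed
# orbit, whose "frequencies" are the orbit return times T_j.  A discrete
# Fourier transform over the energy window then shows peaks at the T_j.

T_orbits = [0.959, 1.918, 2.876]       # three T_j^- values from Table 1
E = np.linspace(0.0, 500.0, 1 << 15)   # energy grid (dimensionless units)
sigma_osc = sum(np.cos(2 * np.pi * T * E) for T in T_orbits)

spectrum = np.abs(np.fft.rfft(sigma_osc))
times = np.fft.rfftfreq(E.size, d=E[1] - E[0])  # conjugate "time" axis

# Locate the spectral peak nearest each assumed return time.
for T in T_orbits:
    window = (times > T - 0.2) & (times < T + 0.2)
    T_peak = times[window][np.argmax(spectrum[window])]
    print(f"orbit period {T}: recovered {T_peak:.3f}")
```

Restricting the energy window (as in Fig.~7(b)) narrows the distribution of return times entering the transform and hence sharpens these peaks.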
\begin{figure} \caption{Fourier transform spectra of the oscillatory part of the cross section, $\sigma_{\rm {osc}}$ (cf. Eq.~(\ref{crsosc})), given in Fig.~\ref{figure5} (c) calculated over two different energy ranges: (a) $0 \le \omega+\varepsilon_i \le 500$ cm$^{-1}$; (b) $400$ cm$^{-1}$ $\le \omega+\varepsilon_i \le 500$ cm$^{-1}$. In (a) the open circles indicate the return times (i.e., orbit periods) of the 15 closed orbits for $\omega+\varepsilon_i=500$ cm$^{-1}$ (see Table~\ref{table1}). Also in (a), the open triangles indicate the return times of the circular closed orbits (in the drift frame) having $\tilde{p}_y^0=1$ (cf. Eqs.~(\ref{scalmom2}) and~(\ref{py0cond})); these return times are given by Eq.~(\ref{Treturn1}).} \label{figure7} \end{figure} \subsection{Detachment by Short Laser Pulses} Photodetachment by means of one or more short laser pulses differs from that by a monochromatic laser. Most obviously, the pulse bandwidth affects the measured spectrum of detached electrons. In addition, short laser pulses produce localized detached electron wave packets whose motion in crossed fields can be investigated and compared to classical predictions~\cite{Alber1991, Garraway1995, Bluhm1996}. Most interesting, perhaps, is the possibility of controlling the modulation of the detachment spectrum by variation of the parameters of one or more laser pulses. In the rest of this section, we examine each of these topics in turn for the crossed static electric and magnetic field case. We note first, however, several previous works on related problems. Ramsey interference effects resulting from photodetachment of H$^-$ by two short, coherent laser pulses as a function of their relative phase were examined by Wang and Starace for the case of a static electric field~\cite{Wang93} and the case of parallel static electric and magnetic fields~\cite{Wang95}.
The latter work~\cite{Wang95} showed that a large modulation of the effective detachment probability can be achieved by optimizing the static field magnitudes and the time delay between laser pulses, as follows: the field magnitudes should be such that the classical time for reflection of an electron back to the origin by the static electric field equals an integer multiple of the harmonic oscillator period for electron motion in the static magnetic field; also, the time delay of the second pulse should coincide with the classical time for the electron's return to the origin. For the case of a single short laser pulse, Du~\cite{Du95} examined the photodetachment of H$^-$ in the presence of a static electric field using modified closed orbit formulas. He showed that when the laser pulse duration is shorter than particular closed orbit periods, then those orbits no longer contribute to the photodetachment spectrum. Finally, Zhao et al.~\cite{Zhao99} have derived a uniform semiclassical formula for the photodetachment cross section of a negative ion by a short laser pulse for the case of parallel static electric and magnetic fields. \begin{figure} \includegraphics[width=12cm]{figs/fig8}\\ \caption{Effective photodetachment cross section of H$^-$ (cf. Eq.~(\ref{crs1st})) by a single laser pulse of the form~(\ref{laserpulse}) with four different pulse durations (cf. Eq.~(\ref{Tpulse})) in the presence of crossed static electric and magnetic fields, $E_S = 15$ V/cm and $B = 1$ T. Results are plotted vs. electron kinetic energy beginning from the zero-field threshold. Also shown (dotted line) is the photodetachment cross section for the case of a continuous (monochromatic) laser without any external static fields present.} \label{figure8} \end{figure} \subsubsection{Pulse Duration Effects} The fundamental difference between using a short laser pulse and using a continuous (monochromatic) laser is the bandwidth of the short laser pulse.
In the former case, the laser pulse will excite a group of final states that form an electron wave packet, whereas in the latter case only a well-defined final state will be reached. In our present case of detachment in the presence of crossed static electric and magnetic fields, the spacing of Landau levels is very small (0.93 cm$^{-1}$ for $B=1$ T), so that one expects that even a quite long pulse having a duration of several picoseconds will have considerable finite-bandwidth effects on the detached electron wave packet and its dynamics. In Fig.~\ref{figure8}, we present the effective total cross section (cf. Eq.~(\ref{crs1st})) for a laser pulse of the form~(\ref{laserpulse}) (with $\tau = \beta = 0$) and four different pulse durations (cf. Eq.~(\ref{Tpulse})) in the presence of crossed static electric and magnetic fields of magnitudes $E_S = 15$ V/cm and $B=1$ T. As we see from Fig.~\ref{figure8}, for the longest pulse duration, 240 ps, the effective cross section is identical to that for a (monochromatic) plane wave. As the pulse duration decreases, the modulation of the cross section is suppressed, beginning at the highest energies shown and progressing to structures at lower energies. Thus, when the pulse duration is reduced to 45 ps, the modulation structure beyond the energy 2.3 cm$^{-1}$ is largely suppressed. As the pulse duration is further reduced to 30 ps, even the modulation between 1 and 2 cm$^{-1}$ decreases in magnitude. At the shortest pulse duration, 15 ps, the oscillatory structure completely disappears and the effective cross section becomes a smooth curve passing through the oscillatory cross sections for longer pulse durations. In this case, the cross section is nearly identical to the one for detachment by a continuous (monochromatic) laser in the absence of any external static fields, as shown by the dotted line in Fig.~\ref{figure8}.
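The bandwidth argument can be made quantitative with a toy model. This sketch is our own illustration, not the paper's Eq.~(\ref{crs1st}): we take a single modulation component of the cross section, smear it with a Gaussian spectral profile whose rms width scales like the inverse pulse duration, and compare the surviving modulation amplitude with the analytic Gaussian suppression factor.

```python
import numpy as np

# Toy model (an assumption of this sketch): one modulation component
# cos(2*pi*T_orb*E) of the cross section, smeared over a Gaussian pulse
# bandwidth sigma_E ~ 1/T_p.  For a Gaussian profile the modulation survives
# with amplitude exp(-2*pi^2*sigma_E^2*T_orb^2), so structure tied to an
# orbit of period T_orb washes out once T_p becomes shorter than T_orb.

T_orb = 1.0                        # orbit return time (conjugate units)
E = np.arange(-40.0, 40.0, 0.01)   # energy grid
signal = np.cos(2 * np.pi * T_orb * E)

results = {}
for sigma_E in (0.05, 0.2, 0.5):   # increasing bandwidth = shorter pulse
    u = np.arange(-5 * sigma_E, 5 * sigma_E, 0.01)
    kernel = np.exp(-u**2 / (2 * sigma_E**2))
    kernel /= kernel.sum()                       # normalized spectral window
    smeared = np.convolve(signal, kernel, mode="same")
    amp = np.abs(smeared[np.abs(E) < 10.0]).max()  # avoid edge effects
    analytic = np.exp(-2 * np.pi**2 * sigma_E**2 * T_orb**2)
    results[sigma_E] = (amp, analytic)
    print(f"sigma_E={sigma_E}: amplitude {amp:.4f} (analytic {analytic:.4f})")
```

The suppression grows exponentially with the product of bandwidth and orbit period, consistent with the progressive washing out of structure seen in Fig.~\ref{figure8}.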
The major difference between these cases occurs near the threshold: our short pulse effective cross section is finite at the zero static field threshold, while the monochromatic field cross section, in accordance with Wigner's threshold law, is zero at threshold. This difference is due to the lowering of the detachment threshold by the static electric field. It is interesting to relate these changes in the structure of the effective detachment cross sections as a function of pulse duration to the energy positions of the known classical closed orbits~\cite{Peter93, Peter93b}. For the maximum energy 2.7 cm$^{-1}$ considered here, there are three closed orbits available for the static field parameters we employ. These three orbits have return times (periods) of 30.69, 40.94 and 60.72 ps. As the energy decreases to 1.8 cm$^{-1}$, the return times become 28.94, 43.32 and 56.19 ps, respectively. As the energy is further decreased to 1 cm$^{-1}$, there are only two closed orbits, having return times of 26.95 and 48.58 ps, respectively. We observe in Fig.~\ref{figure8} that as the laser pulse durations become shorter than the closed orbit periods, the structure of the effective cross section is reduced. In particular, for the shortest pulse duration, 15 ps, which is smaller than any closed orbit period, all structure has disappeared. \subsubsection{Wave Packet Dynamics} \begin{figure} \includegraphics[width=15cm]{figs/fig9}\\ \caption{(color online). Contour plot snapshots of detached electron wave packet motion in the $y$-$z$ plane for the case of zero initial momentum in the $x$ direction [cf. Eq.~(\ref{wave packetyxsgl})]. The static electric and magnetic field strengths are 60 V/cm and 1 T respectively. The laser pulse duration is $T_p=2$ ps and $t = 0$ corresponds to the end of this pulse.
The total electron energy is $E_f=8$ cm$^{-1}$.} \label{figure9} \end{figure} We examine here the dynamics of a detached electron wave packet produced by a short laser pulse under the influence of crossed static electric and magnetic fields. As discussed in Refs.~\cite{Peter93, Peter93b}, the oscillatory part of the photodetachment cross section produced by a monochromatic laser field may be associated with those closed orbits that exist for a given value of the photon energy. As discussed in the previous section, the oscillatory part of the effective cross section produced by a short laser pulse is suppressed when the pulse duration is smaller than the classical orbit periods of those orbits that exist at the energy being considered. (See~\cite{Du95} for the related pure static electric field case.) From the correspondence between classical and quantum mechanics, we expect to observe that the detached electron wave packets produced by short laser pulses will trace the paths of allowed classical closed orbits. In order to illustrate the quantum wave packet motion corresponding to the classical dynamics, we shall only consider two-dimensional quantum motion by imposing the restriction $p_x =0$. As discussed above, the $x$ component of the momentum has to be zero in order for there to be any closed orbits. In this case, the time-dependent electron wave packet in coordinate space is given by \begin{equation}\label{wave packetyxsgl} \psi_{\text{sgl}}^{\text{wvpk}}(0,y,z,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \psi_{\text{sgl}}^{\text{wvpk}}(0,p_{y},z,t) e^{ip_y y}\, dp_y, \end{equation} which is the Fourier transform of Eq.~(\ref{wavepacket1}), taking $p_x=0$. A similar Fourier transform can be employed for the double pulse case in Eq.~(\ref{wvpkdble}). In Figs.~\ref{figure9}-\ref{figure11} we present snapshots of the detached electron wave packet for the case of a single laser pulse of the form~(\ref{laserpulse}) with pulse duration $T_p = 2$ ps.
The time evolution starts from $t_0 = -T_p$. The static electric and magnetic field strengths are taken to be 60 V/cm and 1 T respectively in all cases. Note that the cyclotron period is $T_B= 35.72$ ps for $B = 1$ T. \begin{figure} \caption{(color online). Same as Fig.~\ref{figure9} but for a total detached electron energy of $E_f=15.9$ cm$^{-1}$.} \label{figure10} \end{figure} In Fig.~\ref{figure9}, we take the total energy $E_f$ to be 8 cm$^{-1}$. There is only one classical closed orbit for this energy and these static field parameters. The return time of the closed orbit is calculated to be $T_{\rm{ret}}^{0} = 24.1$ ps (0.674 $T_B$). In Fig.~\ref{figure9}(a), we see that two electron wave packets are created at the peak intensity of the laser pulse on either side of the $z = 0$ axis, which correspond to electrons being ejected either along or opposite to the direction of increasing static electric field, $E_S$ (cf. Fig.~\ref{figure1}). After the end of the pulse in (b), the two electron wave packets move apart. However, as time increases we see in (c) that both wave packets are turned back by the external magnetic field. As shown in (c) and (d), each wave packet undergoes considerable spatial spreading. The most interesting plot is shown in (e), where we see a large portion of the left hand wave packet sweep through the residual core (at the origin). We note that the time corresponding to this snapshot is exactly the return time of the only classical closed orbit in the present case. It is the return of this piece of the wave packet that leads to the regular sinusoidal oscillation one sees in Fig.~\ref{figure5}(a) below 15.9 cm$^{-1}$. As the time approaches 1 $T_B$ in (f), we see that the two wave packets refocus on the positive $y$ axis (the direction of drift motion) and that they pass through each other and continue their rotational motion during the next cyclotron period.
However, owing to the drift motion along the $y$-axis, we see in (h) that the left hand wave packet is no longer able to return to the atomic core in the second cyclotron cycle for this total energy. The two wave packets do refocus further along the positive $y$-axis again at 2 $T_B$, as shown in (i). \begin{figure} \caption{(color online). Same as Figs.~\ref{figure10}(e), (f), (g) and (h) except that the wave packet amplitudes are shown here in three-dimensions rather than as contour plots. Note the scale change in panel (b), which shows the re-focusing of the wave packet amplitude along the drift axis (i.e., away from the atomic core at the origin).} \label{figure11} \end{figure} We note that the refocusing and the drift of the electron wave packets are exactly analogous to the classical dynamics discussed by Peters and Delos in~\cite{Peter93}. They showed that classical orbits with different initial conditions will refocus at various points along the drift axis. We note also that pump-probe experimental studies of the related problem of the motion of pump-laser-produced Rydberg-state electron wave packets in the rubidium atom in the presence of crossed fields have found enhancements of probe laser-produced ionization signals when the delay of the probe laser equals the orbital period of the appropriate closed classical orbit~\cite{Yeaz93}. We present similar snapshots in time for the increased energy of 15.9 cm$^{-1}$ in Fig.~\ref{figure10}. At this energy, the first so-called boundary orbit~\cite{Peter93} may be populated (cf. Eq.~(\ref{bndengy1}) for $j = 1$). There are thus two classical closed orbits that exist at this total energy, whose return times are 27.4 ps (0.766 $T_B$) and 48.8 ps (1.366 $T_B$). In Figs.~\ref{figure10}(e) and~\ref{figure10}(g) we see that different parts of the quantum electron wave packet return to the atomic core at these two times.
The most distinctive feature of the part of the electron wave packet that returns to the origin at approximately 49 ps (cf. Fig.~\ref{figure10}(g)), which corresponds to the higher-energy classical boundary orbit, is that most of the arc in (g) passes through the atomic core at the origin (which we have confirmed by observing the motion of the electron wave packet on a finer time scale). We note also that the energy of the classical boundary orbit corresponds to the abruptly increased amplitude of the oscillatory part of the cross section seen in Fig.~\ref{figure5}(a) around the energy location 15.9 cm$^{-1}$ indicated by the first dashed line. The fact that electron wave packet amplitudes return to the region of the atomic core implies the possibility of modulating the detachment cross section, analogously to the case of a monochromatic laser, as shown in Figs.~\ref{figure4} and~\ref{figure5}. However, the present wave packet studies show why for the crossed field case the modulation of the cross section is very small. Consider the three dimensional wave packet snapshots in Fig.~\ref{figure11}, which are calculated for times corresponding to those in Figs.~\ref{figure10}(e), (f), (g), and (h). Owing to the spreading of the electron wave packet, when it returns to the origin the part of the wave packet that overlaps the origin is nearly two orders of magnitude smaller than the probability in (f), in which the wave packet re-focuses along the drift axis, i.e., away from the origin. Because of such wave packet spreading and drift away from the origin, modulation of the photodetachment cross section in the crossed static field case is necessarily small. For similar reasons, the use of short pulse, pump-probe type techniques to control the photodetachment cross section in the crossed static field case also results in only small modulations of the cross section, as we discuss next.
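The role of spreading can be seen in a minimal free-particle sketch. This is our own toy model with $\hbar = m = 1$, ignoring the static fields entirely: the peak density of a spreading 1D Gaussian packet decays like the inverse of its width, which is the mechanism behind the strongly reduced amplitude returning to the origin.

```python
import numpy as np

# Free 1D Gaussian wave packet (hbar = m = 1): its width grows as
# sigma(t) = sigma0*sqrt(1 + (t/(2*sigma0**2))**2), so the peak of
# |psi(x,t)|^2 decays like 1/sigma(t).

def peak_density(t, sigma0=1.0):
    """Peak of |psi(x,t)|^2 for a free Gaussian packet."""
    sigma_t = sigma0 * np.sqrt(1.0 + (t / (2.0 * sigma0**2))**2)
    return 1.0 / (np.sqrt(2.0 * np.pi) * sigma_t)

for t in (0.0, 10.0, 100.0):
    print(f"t={t:>5}: peak density {peak_density(t):.4e}")
# At t = 100 (with sigma0 = 1) the peak density is down by a factor of ~50,
# illustrating why the returning overlap with the origin is so small.
```

In the crossed-field problem the drift motion compounds this effect, since refocusing occurs along the drift axis rather than at the atomic core.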
\subsubsection{Pump-Probe Coherent Control of the Effective Photodetachment Cross Section Using Short Laser Pulses} The idea of using laser pulses shorter than electron wave packet orbital periods to control electron wave packet motion was initially formulated theoretically for Rydberg (i.e., bound) electron wave packets~\cite{ARZ1986}. This idea was extended theoretically to photodetached (i.e., continuum) electron wave packets in the presence of external static fields, including static electric~\cite{Wang93} and parallel static electric and magnetic~\cite{Wang95} fields. Experimentally, short pulse, pump-probe studies of photodetachment of O$^-$ in the presence of a static magnetic field demonstrated Ramsey interference between photodetached electron wave packets~\cite{Yuki97}. Such Ramsey interference may also be demonstrated in the present crossed electric and magnetic field case. \begin{figure} \includegraphics[width=12cm]{figs/fig12}\\ \caption{The effective cross section for the double laser pulse case as modulated by: (a) the relative phase between the two pulses for several time delays, as indicated; (b) the time delay between the two pulses for two fixed relative phases, 0 and $\pi$. } \label{figure12} \end{figure} In Fig.~\ref{figure12} we present the effective total photodetachment cross section (cf. Eq.~(\ref{DblPulseCS})) as a function of the relative phase $\beta$ and the time delay $\tau$ between two laser pulses (cf. Eq.~(\ref{laserpulsedbl})) for a total detached electron energy of 15.9 cm$^{-1}$ and for $E_S=60$ V/cm and $B=1$ T. The pulse duration $T_p$ of both pulses is taken to be 4 ps. Fig.~\ref{figure12}(a) shows the dependence of the effective cross section on the relative phase, $\beta$, for six time delays, $\tau$, between the pulses.
One observes that the modulations of the cross section have local maxima for time delays of 0.77 $T_B$ ($\approxeq 27.5$ ps) and 1.37 $T_B$ ($\approxeq 49$ ps), which are precisely the return times of the two allowed classical closed orbits for a total electron energy of 15.9 cm$^{-1}$. However, the modulation of the cross section for the larger time delay is much greater than for the smaller time delay, which is consistent with the extent of electron wave packet overlap with the origin shown in Figs.~\ref{figure10}(e) and (g). In other words, in the latter case a large portion of the wave packet passes over the origin, which strengthens the Ramsey interference with the wave packet amplitude newly produced by the second pulse. Fig.~\ref{figure12}(b) shows the dependence of the effective cross section on the time delay, $\tau$, for two relative phases, $\beta$, between the pulses: $0$ and $\pi$. Fig.~\ref{figure12}(b) clearly shows that the maxima and minima in the effective cross section as a function of the time delay between the pulses occur for time delays of 0.77 $T_B$ ($\approxeq 27.5$ ps) and 1.37 $T_B$ ($\approxeq 49$ ps), which are the orbital periods of the two allowed classical closed orbits. We see once again that the modulation of the effective cross section is much larger for the classical closed orbit having the larger time delay, as explained above. \section{Conclusions} In this paper we have presented a detailed quantum mechanical analysis of detachment of a weakly bound electron by a short laser pulse in the presence of crossed static electric and magnetic fields. For specificity, we have chosen the parameters of the initial state of the weakly bound electron as those appropriate for the outer electron in H$^-$.
In particular we have presented an analytic expression for the final state electron wave function, i.e., the wave function for an electron moving in the field of a laser pulse of arbitrary intensity as well as in crossed static electric and magnetic fields of arbitrary strengths. The general detachment probability formulas we present may therefore be used to analyze multiphoton detachment in crossed fields (although we have not presented this analysis here, but instead have focused on the weak laser field case). Based upon our analytic results for the detachment probability by a short laser pulse, we have defined an effective detachment cross section for the short pulse case that is shown to reduce, in the long-pulse limit, to the results of others for the monochromatic, plane wave case. Our effective cross section formula allows us to demonstrate the effects of the laser pulse duration, e.g., that for pulse durations shorter than the period of a particular classical closed orbit, the features of that closed orbit in the photodetachment spectrum (for the plane wave case) simply vanish. By means of a stationary phase analysis, we have derived a condition for the existence of closed classical orbits that agrees exactly with that obtained by Peters and Delos by a purely classical analysis~\cite{Peter93}. We have also illustrated the bifurcation of the closed classical orbits at the so-called boundary energies~\cite{Peter93} by Fourier transforming the oscillatory part of our quantum cross section (in the long pulse limit) over various ranges of the final state electron energy. Finally, our analysis of the motion of detached electron wave packets produced by a short laser pulse provides a direct comparison of quantum and classical features for the crossed static electric and magnetic field problem. We find that the dynamics of our two-dimensional detached electron wave packets are consistent with the predictions of closed classical orbit theory~\cite{Peter93}.
We have also shown that wave packet spreading and the fact that wave packet refocusing only occurs at the origin in the drift frame mean that control of electron detachment in crossed static fields by means of laser pulses is less effective than in the parallel static electric and magnetic field case~\cite{Wang95}.
\section*{Introduction} \subsection*{Background and Motivation} Derived algebraic geometry has provided significant new insights into the theory of algebraic moduli spaces. Classical moduli spaces are often pathological and singular but can be realised as truncations of smooth derived spaces. One tool which has proven fundamental in derived geometry is Koszul duality. Koszul duality, at least for Lie algebras and augmented commutative algebras, says that there is an adjunction of $(\infty,1)$-categories $$\adj{\textbf{D}_{\kappa}}{(\textbf{cdga}^{aug})^{op}}{\textbf{dgla}}{\hat{C}_{\kappa}}$$ between the category of augmented commutative differentially graded algebras and the category of differentially graded Lie algebras over a field of characteristic zero. Moreover, after forgetting the Lie algebra structure, the underlying complex of $\textbf{D}_{\kappa}(A)$ can be identified with a shift of the tangent complex of $A$ at the canonical point of $Spec(A)$ corresponding to the augmentation of $A$. One can also give conditions under which the unit $\mathfrak{g}\rightarrow \textbf{D}_{\kappa}\circ \hat{\textbf{C}}_{\kappa}(\mathfrak{g})$ is an equivalence. The idea behind its application to moduli theory is as follows. If $m$ is a point of a moduli problem $M$ one considers the shifted tangent complex at this point $\mathbb{T}_{M,m}[1]$. This object is equipped with a canonical Lie algebra structure which typically satisfies the conditions under which the unit is an equivalence. The spectrum of the Koszul dual commutative algebra to this Lie algebra can be regarded as a formal neighbourhood of the point $m$. This idea is made precise by Lurie using the language of formal moduli problems in \cite{DAGX}.
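For orientation, we recall the standard chain-level description of $\hat{C}_{\kappa}$ over a field of characteristic zero (a classical Chevalley--Eilenberg-type formula, stated here only as background; the relative setting considered below requires more care): for a dg Lie algebra $\mathfrak{g}$,

```latex
\hat{C}_{\kappa}(\mathfrak{g})
  \;\simeq\;
  \Bigl(\widehat{\mathrm{Sym}}\bigl(\mathfrak{g}^{\vee}[-1]\bigr),\;
        d_{1} + d_{2}\Bigr),
```

where $d_{1}$ is induced by the differential of $\mathfrak{g}$ and $d_{2}$ by the dual of the Lie bracket; its underlying complex computes the functions on the formal neighbourhood of the basepoint.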
The adjunction above actually factors through an \textit{equivalence} of $(\infty,1)$-categories $$\adj{\Omega_{\kappa}}{\textbf{coComm}}{\textbf{dgla}}{\textbf{B}_{\kappa}}$$ between the categories of differentially graded coaugmented conilpotent cocommutative coalgebras and differentially graded Lie algebras. The functor $\hat{C}_{\kappa}$ is obtained by composing $B_{\kappa}$ with the dualising functor $(-)^{\vee}:\textbf{coComm}\rightarrow\textbf{cdga}^{op}$. Hinich gives a geometric interpretation of this equivalence by interpreting coalgebras as representing \textit{formal stacks}. Many moduli spaces, such as the moduli space of instantons or of Galois representations, are naturally analytic or smooth rather than algebraic in nature. In its nascence, the goal of this paper was to develop an analogue of Koszul duality which could be applied to the study of analytic and smooth moduli problems. The idea is as follows. Derived algebraic geometry over a ring $R$ may be viewed as derived geometry, in the sense of To\"{e}n and Vezzosi \cite{toen2004homotopical}, relative to the monoidal model category $Ch({}_{R}\mathpzc{Mod})$ (or $Ch_{\ge0}({}_{R}\mathpzc{Mod})$). The category of affine spaces is opposite to the category of cdgas, and a suitable Grothendieck topology allows one to define and study derived stacks. The recent novel perspective of Bambozzi, Ben-Bassat, and Kremnitzer \cite{koren}, \cite{orenbambozzi}, \cite{bambozzi} on non-derived analytic geometry suggests that derived analytic geometry over a Banach field $R$ of characteristic $0$ be viewed as derived geometry relative to the monoidal model category $Ch(Ind(Ban_{R}))$ or $Ch_{\ge0}(Ind(Ban_{R}))$. Here $Ind(Ban_{R})$ is the formal completion of the category of Banach $R$-modules by inductive limits. One can also consider instead the full subcategory $CBorn_{R}$ of complete bornological $R$-modules.
Our analytic analogue of Koszul duality relates the categories of commutative algebras, Lie algebras, and cocommutative coalgebras internal to the category $Ch(Ind(Ban_{R}))$. Before precisely outlining our results let us first review some of the extensive history of Koszul duality. Vallette's work on cooperadic Koszul duality \cite{vallette2014homotopy} generalises previous work of Hinich \cite{hinich2001dg} which interprets duality between Lie algebras and cocommutative coalgebras in terms of formal stacks. This in turn generalises results from the seminal work of Quillen \cite{quillen1969rational} on rational homotopy theory. Getzler and Jones \cite{getzler1994operads} have also done crucial work on Koszul duality. Motivated by the study of differential forms on iterated loop spaces, they in particular study duality for $E_{n}$-algebras. In the process of establishing chiral Koszul duality, Francis and Gaitsgory \cite{francis2012chiral} prove a general Koszul duality result in the context of pro-nilpotent $(\infty,1)$-categories. An operadic version of Koszul duality has been established by Ginzburg and Kapranov \cite{ginzburg1994koszul}. There is also a curved operadic version due to Hirsh and Milles \cite{hirsh2012curved}. Ching and Harper have recently proved a spectral version of Koszul duality \cite{ching2015derived}. The relationship between Koszul duality and deformation theory has also been extensively studied by Kontsevich and Soibelman in \cite{kontsevich2000deformations} and \cite{kontsevich2003deformation}, as well as by Lurie \cite{DAGX} and Hennion \cite{hennion2015tangent}. \subsection*{Outline of Results} Although motivated by Koszul duality for bornological (or ind-Banach) modules, we will in fact prove Koszul duality for a large class of monoidal model categories, which also includes the category of vector spaces over a field. The model structure on $Ch(CBorn_{R})$ arises from a Quillen exact (in fact quasi-abelian) structure on $CBorn_{R}$.
We introduce the notion of a Koszul category $\mathpzc{M}$, which is a monoidal model category of the form ${}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ for $\mathpzc{E}$ an exact category, in which the techniques of Koszul duality work. In fact we shall prove something much more general, which also incorporates, for example, generalisations of $E_{n}$ self-duality \cite{getzler1994operads}. Namely we generalise the notion of a Koszul twisting morphism $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ from a cooperad to an operad. We then identify an $(\infty,1)$-category of co-algebras over $\mathfrak{C}$ which is equivalent to the $(\infty,1)$-category of algebras over $\mathfrak{P}$. Our main theorem is the following (Theorem \ref{coopKoszuldual}). \begin{thm} Let $\mathpzc{M}$ be a Koszul category, and $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ a Koszul morphism in $\mathpzc{M}$. The bar-cobar adjunction induces an adjoint equivalence of $(\infty,1)$-categories $$\adj{\Omega_{\alpha}}{\textbf{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}}{\textbf{Alg}_{\mathfrak{P}}}{\textbf{B}_{\alpha}}$$ \end{thm} The statement, and much of its proof, is a generalisation of the proof of \cite{vallette2014homotopy} Theorem 2.1. (1) and (2). Given a Koszul twisting morphism $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ satisfying some additional assumptions, we consider the $(\infty,1)$-category of algebras over the twisted dual operad of $\mathfrak{C}$, $(\mathfrak{S}^{c}\otimes_{H}\mathfrak{C})^{\vee}$. In the case of the twisting morphism $\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}\rightarrow\mathfrak{Lie}$ this is of course just the $(\infty,1)$-category of commutative algebras. We consider the functor $$\hat{\textbf{C}}_{\alpha}\defeq(-)^{\vee}[1]\circ\textbf{B}_{\alpha}:\textbf{Alg}_{\mathfrak{P}}\rightarrow\textbf{Alg}_{(\mathfrak{S}^{c}\otimes_{H}\mathfrak{C})^{\vee}}^{op}$$ Our first theorem on operadic Koszul duality is the following, (Theorem \ref{alphadjoint}). 
\begin{thm} The functor $\hat{\textbf{C}}_{\alpha}:\textbf{Alg}_{\mathfrak{P}}\rightarrow\textbf{Alg}_{(\mathfrak{S}^{c}\otimes_{H}\mathfrak{C})^{\vee}}^{op}$ admits a right adjoint $\textbf{D}_{\alpha}$. \end{thm} Finally we will specialise to the Koszul twisting morphism $\kappa:\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}\rightarrow\mathfrak{Lie}$. As in \cite{DAGX}, for an augmented commutative algebra $A$ we relate the underlying object of the Lie algebra $\textbf{D}_{\kappa}(A)$ with the shifted tangent complex $\mathbb{T}_{0}(A)[1]$ of $A$ at its canonical point $A\rightarrow R$ (Proposition \ref{shiftedtanget}). \begin{thm} Let $|-|:\textbf{Alg}_{\mathfrak{Lie}}(\textbf{M})\rightarrow\textbf{M}$ be the forgetful functor. There is a natural equivalence of functors $|-|\circ\textbf{D}_{\kappa}\cong\mathbb{T}_{0}[1]$. \end{thm} Finally we prove a general theorem identifying when the unit of the adjunction is an equivalence. As a consequence we obtain the following result (Theorem \ref{analyticopkoszul}). \begin{thm} Let $k$ be a spherically complete field and let $\mathfrak{g}$ be a bornological Lie algebra over $k$ concentrated in negative degrees such that each $|\mathfrak{g}_{n}|$ is a dual nuclear bornological space. If the differentials in $|\mathfrak{g}|$ are Fredholm operators, then $\eta_{\mathfrak{g}}:\mathfrak{g}\rightarrow\textbf{D}_{\kappa}\circ\hat{\textbf{C}}_{\kappa}(\mathfrak{g})$ is an equivalence. \end{thm} \subsection*{Structure of the Paper} The paper is laid out as follows. After establishing the conventions of the paper, in the first two sections, Section \ref{appcatalg} and Section \ref{homotopexact}, we introduce the two main ingredients of our version of Koszul duality, namely operad theory in additive categories, and homotopy theory in exact categories. Section \ref{appcatalg} mainly consists of recalling foundational results from \cite{loday2012algebraic} reformulated to work for monoidal additive categories containing certain colimits.
The primary goal of this section is to arrive at the \textit{bar-cobar construction} for a twisting morphism between a cooperad and an operad. After recalling, in Section \ref{homotopexact}, some results about monoidal model structures on categories of chain complexes in exact categories from \cite{kelly2016projective}, we introduce the notion of a \textit{Koszul category}. These are essentially categories of chain complexes in exact categories equipped with monoidal model structures such that Koszul duality for Koszul twisting morphisms works. We then give a method of constructing such categories from \textit{pre-Koszul categories} which provides a rich class of examples. We also discuss connective Koszul categories as well as categories of filtered objects in Koszul categories and their associated $(\infty,1)$-categories. In Section \ref{algcoalghom} our work begins in earnest. Namely we analyse the homotopy theory of algebras over operads in Koszul categories, focusing in particular on the category of unital commutative algebras. For example we show how to compute homotopy pushouts in the category of commutative algebras. Section \ref{seccoopkosz} is the main event. After defining the categories of coalgebras we consider, and studying their homotopy theory, we prove various generalisations of the cooperadic Koszul duality of \cite{vallette2014homotopy}. Then in Section \ref{secopkosz} we prove a generalisation of Lurie's \cite{DAGX} version of operadic Koszul duality. Finally in Section \ref{secexamples} we give various examples of our version of Koszul duality, including generalisations of previously known results for categories of modules over a ring, as well as totally new results for categories of complete bornological spaces over Banach fields. We conclude by suggesting some further directions in which this work could be continued, such as analytic and smooth versions of chiral Koszul duality.
\subsection*{Notation and Conventions}\label{notation} Throughout this work we will use the following notation. \begin{itemize} \item $1$-categories will be denoted using the mathpzc font $\mathpzc{C},\mathpzc{D},\mathpzc{E}$, etc. In particular we denote by $\mathpzc{Ab}$ the category of abelian groups and ${}_{{\mathbb Q}}\mathpzc{Vect}$ the category of ${\mathbb Q}$-vector spaces. If $\mathpzc{M}$ is a model category, or a category with weak equivalences, its associated $(\infty,1)$-category will be denoted $\textbf{M}$. \item Operads will be denoted using capital fraktur letters $\mathfrak{C},\mathfrak{P}$, etc. The category of algebras over an operad will be denoted $\mathpzc{Alg}_{\mathfrak{P}}$. \item We denote the operads for unital associative algebras, unital commutative algebras, non-unital commutative algebras, and Lie algebras by $\mathfrak{Ass},\mathfrak{Comm},\mathfrak{Comm}^{nu}$, and $\mathfrak{Lie}$ respectively. Similarly, we denote the cooperads for counital cocommutative coalgebras and non-counital cocommutative coalgebras by $\mathfrak{coComm},\mathfrak{coComm}^{ncu}$ respectively. \item For the operads $\mathfrak{Ass},\mathfrak{Comm}$, and $\mathfrak{Lie}$ we will denote the corresponding free algebras by $T(V),S(V)$, and $L(V)$ respectively. We also denote by $\hat{S}(V)$ the commutative algebra of formal power series on an object $V$. \item Unless stated otherwise, the unit in a monoidal category will be denoted by $k$, the tensor functor by $\otimes$, and for a closed monoidal category the internal hom functor will be denoted by $\underline{\textrm{Hom}}$. \item For symmetric monoidal categories the symmetric braiding will be denoted $\Sigma$. \item Filtered colimits will be denoted by $\textrm{lim}_{\rightarrow}$. Projective limits will be denoted $\textrm{lim}_{\leftarrow}$. \end{itemize} \subsubsection*{Chain Complexes} Let us now introduce some conventions for chain complexes.
Let $\underline{Gr}_{\mathbb{Z}}(\mathpzc{E})$ denote the category whose objects are $\mathbb{Z}$-indexed collections of objects of $\mathpzc{E}$, which we usually write as $\bigoplus_{n\in\mathbb{Z}}E_{n}$. If $\bigoplus_{n\in\mathbb{Z}}E_{n}$ and $\bigoplus_{n\in\mathbb{Z}}F_{n}$ are two objects in this category then we set $$Hom_{\underline{Gr}_{\mathbb{Z}}(\mathpzc{E})}(\bigoplus_{m\in\mathbb{Z}}E_{m},\bigoplus_{n\in\mathbb{Z}}F_{n})\defeq\bigoplus_{n\in\mathbb{Z}}\prod_{i}Hom_{\mathpzc{E}}(E_{i},F_{i+n})$$ If $f\in \prod_{i}Hom_{\mathpzc{E}}(E_{i},F_{i+n})$ then $f$ is said to have degree $n$. We denote the subgroup $\prod_{i}Hom_{\mathpzc{E}}(E_{i},F_{i+n})$ consisting of degree $n$ maps by $Hom_{n}(E,F)$. $\underline{Gr}_{\mathbb{Z}}(\mathpzc{E})$ is a symmetric monoidal category where $$(X_{\bullet}\otimes Y_{\bullet})_{n}=\bigoplus_{i+j=n}X_{i}\otimes Y_{j}$$ If $f:V\rightarrow V'$ is a degree $p$ map and $g:W\rightarrow W'$ is a degree $q$ map then, following the Koszul sign rule, one defines $$f\otimes g|_{V_{m}\otimes W_{n}}\defeq(-1)^{qm}f_{m}\otimes g_{n}$$ The monoidal unit is the graded object $G_{0}(k)$ where $(G_{0}(k))_{0}=k$ and $(G_{0}(k))_{n}=0$ for $n\neq0$. We denote by $\mathpzc{Gr}_{\mathbb{Z}}(\mathpzc{E})$ the wide subcategory of $\underline{Gr}_{\mathbb{Z}}(\mathpzc{E})$ where $Hom_{\mathpzc{Gr}_{\mathbb{Z}}}(E,F)\defeq Hom_{0}(E,F)$, and by $\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{E})$ the full subcategory of $\mathpzc{Gr}_{\mathbb{Z}}(\mathpzc{E})$ on objects $\bigoplus_{n\in{\mathbb Z}}E_{n}$ such that $E_{n}=0$ for $n<0$. \begin{defn} A \textbf{curved chain complex} in $\mathpzc{E}$ is a pair $(X_{\bullet},d_{X})$ where $X_{\bullet}\in\underline{Gr}_{\mathbb{Z}}(\mathpzc{E})$ and $d_{X}\in Hom_{-1}(X,X)$. If $(X_{\bullet},d_{X})$ and $(Y_{\bullet},d_{Y})$ are curved complexes, a morphism from $(X_{\bullet},d_{X})$ to $(Y_{\bullet},d_{Y})$ is a morphism $f\in Hom_{0}(X,Y)$ which commutes with $d_{X}$ and $d_{Y}$. The category of curved chain complexes is denoted $\tilde{Ch}(\mathpzc{E})$.
\end{defn} We will frequently use the following special complexes. \begin{defn} If $E$ is an object of an additive category $\mathpzc{E}$ we let $S^{n}(E)\in \tilde{Ch}(\mathpzc{E})$ be the complex whose $n$th entry is $E$, with all other entries being $0$. We also denote by $D^{n}(E)\in \tilde{Ch}(\mathpzc{E})$ the complex whose $n$th and $(n-1)$st entries are $E$, with all other entries being $0$, and the differential $d_{n}$ being the identity. \end{defn} We also define $\tilde{Ch}_{\ge0}(\mathpzc{E})$ to be the full subcategory of $\tilde{Ch}(\mathpzc{E})$ on complexes $A_{\bullet}$ such that $A_{n}=0$ for $n<0$, $\tilde{Ch}_{\le0}(\mathpzc{E})$ to be the full subcategory of $\tilde{Ch}(\mathpzc{E})$ on complexes $A_{\bullet}$ such that $A_{n}=0$ for $n>0$, $\tilde{Ch}_{+}(\mathpzc{E})$, the full subcategory of chain complexes $A_{\bullet}$ such that $A_{n}=0$ for $n\ll0$, $\tilde{Ch}_{-}(\mathpzc{E})$, the full subcategory of chain complexes $A_{\bullet}$ such that $A_{n}=0$ for $n\gg0$, and $\tilde{Ch}_{b}(\mathpzc{E})$ to be the full subcategory of $\tilde{Ch}(\mathpzc{E})$ on complexes $A_{\bullet}$ such that $A_{n}\neq 0$ for only finitely many $n$. The category $\tilde{Ch}(\mathpzc{E})$ also comes equipped with a shift functor. It is given on objects by $(A_{\bullet}[1])_{i}=A_{i+1}$ with differential $d_{i}^{A[1]}=-d^{A}_{i+1}$. The shift of a morphism $f_{\bullet}$ is given by $(f_{\bullet}[1])_{i}=f_{i+1}$. $[1]$ is an auto-equivalence with inverse $[-1]$. We set $[0]=\textrm{Id}$ and $[n]=[1]^{n}$ for any integer $n$.\newline \\ Finally, we define the mapping cone as follows. \begin{defn} Let $X_{\bullet}$ and $Y_{\bullet}$ be curved complexes in an additive category $\mathpzc{E}$ and let $f_{\bullet}:X_{\bullet}\rightarrow Y_{\bullet}$ be a morphism.
The \textbf{mapping cone of} $f_{\bullet}$, denoted $\textrm{cone}(f_{\bullet})$, is the complex whose components are $$\textrm{cone}(f_{\bullet})_{n}=X_{n-1}\oplus Y_{n}$$ and whose differential is \begin{displaymath} d^{\textrm{cone}(f)}_{n} = \left( \begin{array}{lr} -d_{n-1}^{X} &0\\ -f_{n-1} & d^{Y}_{n} \end{array} \right) \end{displaymath} \end{defn} There are natural morphisms $\tau:Y_{\bullet}\rightarrow \textrm{cone}(f)$ induced by the injections $Y_{i}\rightarrow X_{i-1}\oplus Y_{i}$, and $\pi:\textrm{cone}(f)\rightarrow X_{\bullet}[-1]$ induced by the projections $X_{i-1}\oplus Y_{i}\rightarrow X_{i-1}$. The sequence $$Y_{\bullet}\rightarrow\textrm{cone}(f)\rightarrow X_{\bullet}[-1]$$ is split exact in each degree. \begin{defn} The \textbf{category of chain complexes in }$\mathpzc{E}$, denoted $Ch(\mathpzc{E})$, is the full subcategory of $\tilde{Ch}(\mathpzc{E})$ consisting of curved complexes $(X,d_{X})$ such that $d_{X}^{2}=0$. \end{defn} All the constructions above restrict to $Ch(\mathpzc{E})$. In particular if $\mathpzc{E}$ is a (closed/symmetric) monoidal category then so is $Ch(\mathpzc{E})$. Let us also introduce some notation for truncation functors. \begin{defn} Let $\mathpzc{E}$ be an additive category which has kernels. For a chain complex $X_{\bullet}$ we denote by $\tau_{\ge n}X$ the complex such that $(\tau_{\ge n}X)_{m}=0$ if $m<n$, $(\tau_{\ge n}X)_{m}= X_{m}$ if $m>n$ and $(\tau_{\ge n}X)_{n}=\textrm{Ker}(d_{n})$. The differentials are the obvious ones. The construction is clearly functorial. Dually we define the truncation functor $\tau_{\le n}$. \end{defn} \subsection*{Acknowledgements} The author would like to thank Kobi Kremnitzer for many useful comments and conversations throughout the course of this work.
\section{Operadic Algebra in Additive Categories}\label{appcatalg} Before we get stuck into the homotopy theory of algebras in exact categories, let us first discuss some of the basics of operadic and cooperadic algebra in additive categories. Much of this section is not new, and our reference is \cite{loday2012algebraic} where, unless stated otherwise, anything left unproved in this section can be found. While the book works in the context of vector spaces, most of the proofs work mutatis mutandis for monoidal additive categories with some mild assumptions. At the outset let us fix exactly what we mean by a monoidal additive category for the purposes of this paper. A \textbf{monoidal additive category} is an additive category $\mathpzc{E}$ together with a unital associative bifunctor $\otimes:\mathpzc{E}\times\mathpzc{E}\rightarrow\mathpzc{E}$ which commutes with colimits in each variable. We shall assume also that $\mathpzc{E}$ has countable colimits and kernels. The unit will be denoted $k$. \subsection{$\Sigma$-Modules} In the next two sections we mostly follow \cite{loday2012algebraic} Chapter 5. \subsubsection{Discrete Groups in Monoidal Categories} Let $(\mathpzc{E},\otimes, k)$ be a monoidal category with all small coproducts, such that $\otimes$ commutes with coproducts in each variable. We denote by $k[-]:\mathpzc{Set}\rightarrow\mathpzc{E}$ the functor which sends a set $S$ to the object $k[S]=\coprod_{s\in S}k$. If $f:S\rightarrow T$ is a map of sets, then $k[f]:k[S]\rightarrow k[T]$ is the morphism which sends the copy of $k$ indexed by $s\in S$ to the copy indexed by $f(s)\in T$. Objects and morphisms in the essential image of the functor $k[-]:\mathpzc{Set}\rightarrow\mathpzc{E}$ will be called \textbf{discrete}. \begin{prop} Let $(\mathpzc{E},\otimes,k)$ be a monoidal category. Endow $\mathpzc{Set}$ with its Cartesian monoidal structure. Then the functor $k[-]:\mathpzc{Set}\rightarrow\mathpzc{E}$ is strong monoidal.
\end{prop} \begin{proof} Let $S$ and $T$ be sets. Then $$ \Bigl(\coprod_{S}k\Bigr)\otimes\Bigl(\coprod_{T}k\Bigr) \cong\coprod_{S}\coprod_{T} k\otimes k \cong\coprod_{S}\coprod_{T} k \cong\coprod_{S\times T}k $$ \end{proof} In particular the functor $k[-]:\mathpzc{Set}\rightarrow\mathpzc{E}$ sends groups to Hopf monoids. If $G$ is a group we call $k[G]$ the group monoid of $G$ in $\mathpzc{E}$. If $G$ and $H$ are groups and $H\rightarrow G$ is a morphism, then we get a morphism of group monoids $k[H]\rightarrow k[G]$. We denote by $Ind_{H}^{G}:{}_{k[H]}\mathpzc{Mod}\rightarrow {}_{k[G]}\mathpzc{Mod}$ the functor $k[G]\otimes_{k[H]}(-)$. \subsubsection{Graded Objects and $\Sigma$-Modules} We denote by $k[\Sigma]$ the monoid in $\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{E})$ defined as follows. In degree $n$ it is given by $k[\Sigma_{n}]$, the group monoid of the symmetric group on $n$ letters. \begin{defn} The \textbf{category of } $\Sigma$-\textbf{modules} in $\mathpzc{E}$ is the category of right $k[\Sigma]$-modules in $\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{E})$. It is denoted $\mathpzc{Mod}_{\Sigma}(\mathpzc{E})$. \end{defn} \begin{defn} Let $M$ be a $\Sigma$-module. Its associated \textbf{Schur functor} is the endofunctor $\tilde{M}:\mathpzc{E}\rightarrow\mathpzc{E}$ defined by $$\tilde{M}(V)=\bigoplus_{n\ge0}M(n)\otimes_{\Sigma_{n}} V^{\otimes n}$$ The assignment $M\mapsto\tilde{M}$ is functorial in a natural way. We denote the functor $\mathpzc{Mod}_{\Sigma}\rightarrow[\mathpzc{E},\mathpzc{E}]$ by $\textrm{Sch}$. \end{defn} \subsubsection{The Tensor Product of $\Sigma$-Modules} We are going to define a monoidal structure on the category of $\Sigma$-modules. \begin{defn} Let $M$ and $N$ be two $\Sigma$-modules.
The \textbf{tensor product} of $M$ and $N$, denoted $M\otimes N$, is the $\Sigma$-module defined by $$(M\otimes N)(n)=\bigoplus_{i+j=n}\textit{Ind}_{\Sigma_{i}\times\Sigma_{j}}^{\Sigma_{n}}(M(i)\otimes N(j))$$ \end{defn} \begin{defn} The $\Sigma$-module $I$ is defined by $I(i)=0$ for $i\neq 1$ and $I(1)=k$. \end{defn} The following can be proven as in \cite{loday2012algebraic} Section 5.1. \begin{prop} $(\mathpzc{Mod}_{\Sigma},\otimes, I)$ is a symmetric monoidal category. Moreover if we endow $[\mathpzc{E},\mathpzc{E}]$ with its objectwise monoidal structure, $\textrm{Sch}$ is a strong monoidal functor. \end{prop} \subsubsection{The Composite Product of $\Sigma$-Modules} Operads are defined as monoids with respect to a different monoidal structure on $\mathpzc{Mod}_{\Sigma}$ which we define here. \begin{defn} Let $M$ and $N$ be two $\Sigma$-modules. The \textbf{composite product} of $M$ and $N$, denoted $M\circ N$, is defined by $$(M\circ N)(n)=\bigoplus_{k\ge0}M(k)\otimes_{\Sigma_{k}}N^{\otimes k}(n)$$ \end{defn} \begin{prop} $(\mathpzc{Mod}_{\Sigma},\circ,I)$ is a monoidal category. \end{prop} Another useful notion, particularly in the context of differentials on operads, is the infinitesimal composite product of $\Sigma$-modules. \begin{defn} The functor $-\circ(-;-):\mathpzc{Mod}_{\Sigma}^{3}\rightarrow\mathpzc{Mod}_{\Sigma}$ is the subfunctor of $-\circ(-\oplus-):\mathpzc{Mod}_{\Sigma}^{3}\rightarrow\mathpzc{Mod}_{\Sigma}$ which is linear in the last variable. \end{defn} \begin{defn} The \textbf{infinitesimal composite product} functor $-\circ_{(1)}-:\mathpzc{Mod}_{\Sigma}^{2}\rightarrow\mathpzc{Mod}_{\Sigma}$ is the functor $$-\circ_{(1)}-\defeq-\circ(I;-)$$ \end{defn} \begin{defn} Let $f:M_{1}\rightarrow M_{2}$ and $g:N_{1}\rightarrow N_{2}$ be morphisms of $\Sigma$-modules.
The \textbf{infinitesimal composite} of $f$ and $g$, denoted $f\circ'g$, is the map $$f\circ' g:M_{1}\circ N_{1}\rightarrow M_{2}\circ (N_{1}; N_{2})$$ given by the formula $$f\circ' g=\sum_{i}f\otimes (Id_{N_{1}}\otimes\ldots\otimes Id_{N_{1}}\otimes g\otimes Id_{N_{1}}\otimes\ldots\otimes Id_{N_{1}})$$ \end{defn} \subsubsection{Operads and Cooperads} \begin{defn} The \textbf{category of operads}, denoted $\mathpzc{Op}(\mathpzc{E})$, is the category of associative monoids $(\mathfrak{P},\gamma,\eta)$ in the monoidal category $(\mathpzc{Mod}_{\Sigma},\circ,I)$. The \textbf{category of cooperads}, denoted $\mathpzc{coOp}(\mathpzc{E})$, is the category of coassociative comonoids in the monoidal category $(\mathpzc{Mod}_{\Sigma},\circ,I)$. \end{defn} There is an important subtlety, in that the definitions of operads and cooperads are not dual \textit{relative to }$\mathpzc{E}$. Namely, there is a fully faithful functor $$\mathpzc{coOp}(\mathpzc{E})\rightarrow\mathpzc{Op}(\mathpzc{E}^{op})^{op}$$ which is not an equivalence in general. This is because the construction of the composite product is not self-dual. \begin{prop} The forgetful functor $|-|:\mathpzc{Op}(\mathpzc{E})\rightarrow\mathpzc{Mod}_{\Sigma}(\mathpzc{E})$ has a left adjoint $\mathfrak{T}(-)$, called the \textbf{free operad functor}. \end{prop} For cooperads the story is more subtle. The category of cooperads has a full subcategory $\mathpzc{coOp}^{con}(\mathpzc{E})$ called the category of \textbf{conilpotent cooperads}. We will not go into details here, but essentially it consists of those cooperads which are conilpotent when considered as comonoids in the monoidal category $(\mathpzc{Mod}_{\Sigma},\circ,I)$. A cooperad $\mathfrak{C}$ being conilpotent is equivalent to a certain filtration of $\mathfrak{C}$, called the \textbf{coradical filtration}, being exhaustive.
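As a sanity check on the composite product introduced above, one can verify directly that $I$ is its unit; the following routine computation, in the conventions of \cite{loday2012algebraic} Section 5.1, uses only that $I(k)=0$ for $k\neq1$: $$(I\circ N)(n)=\bigoplus_{k\ge0}I(k)\otimes_{\Sigma_{k}}N^{\otimes k}(n)\cong I(1)\otimes N(n)\cong N(n)$$ For the right unit, note that $I^{\otimes k}(n)$ vanishes unless $n=k$, in which case it is $\textit{Ind}_{\Sigma_{1}\times\cdots\times\Sigma_{1}}^{\Sigma_{n}}(k^{\otimes n})\cong k[\Sigma_{n}]$, so that $(M\circ I)(n)\cong M(n)\otimes_{\Sigma_{n}}k[\Sigma_{n}]\cong M(n)$.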
\begin{prop} The forgetful functor $|-|:\mathpzc{coOp}^{con}(\mathpzc{E})\rightarrow\mathpzc{Mod}_{\Sigma}(\mathpzc{E})$ has a right adjoint $\mathfrak{T}^{c}(-)$, called the \textbf{cofree cooperad functor}. \end{prop} From now on we shall assume that all cooperads are conilpotent. \begin{defn} Let $(\mathfrak{P},\gamma,\eta)$ be an operad. The \textbf{infinitesimal composition map} $\gamma_{(1)}:\mathfrak{P}\circ_{(1)}\mathfrak{P}\rightarrow\mathfrak{P}$ is given by the composition \begin{displaymath} \xymatrix{ \mathfrak{P}\circ_{(1)}\mathfrak{P}\ar@{=}[r] & \mathfrak{P}\circ(I;\mathfrak{P})\ar@{>->}[r] & \mathfrak{P}\circ (I\oplus\mathfrak{P})\ar[rr]^{Id_{\mathfrak{P}}\circ(\eta+Id_{\mathfrak{P}})} & & \mathfrak{P}\circ\mathfrak{P}\ar[r]^{\gamma} & \mathfrak{P} } \end{displaymath} \end{defn} \begin{defn} Let $(\mathfrak{C},\Delta,\epsilon)$ be a cooperad. The \textbf{infinitesimal decomposition map} $\Delta_{(1)}:\mathfrak{C}\rightarrow\mathfrak{C}\circ_{(1)}\mathfrak{C}$ is given by the composition \begin{displaymath} \xymatrix{ \mathfrak{C}\ar[r]^{\Delta} & \mathfrak{C}\circ\mathfrak{C}\ar[rr]^{Id_{\mathfrak{C}}\circ'Id_{\mathfrak{C}}} & & \mathfrak{C}\circ(\mathfrak{C};\mathfrak{C})\ar[rr]^{Id_{\mathfrak{C}}\circ(\epsilon;Id_{\mathfrak{C}})} & & \mathfrak{C}\circ(I;\mathfrak{C})\ar@{=}[r] & \mathfrak{C}\circ_{(1)}\mathfrak{C} } \end{displaymath} \end{defn} There is another useful tensor product on the category of $\Sigma$-modules called the Hadamard tensor product. \begin{defn} If $\mathfrak{O}$ and $\mathfrak{P}$ are $\Sigma$-modules then we define their \textbf{Hadamard tensor product} by $(\mathfrak{O}\otimes_{H}\mathfrak{P})(n)\defeq\mathfrak{O}(n)\otimes\mathfrak{P}(n)$ equipped with the diagonal action of $\Sigma_{n}$. \end{defn} If $\mathfrak{O}$ and $\mathfrak{P}$ are operads then their Hadamard tensor product has a natural operad structure. By duality the Hadamard tensor product of two cooperads is a cooperad.
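To illustrate the Hadamard product, suppose, as is the case for instance over ${}_{{\mathbb Q}}\mathpzc{Vect}$, that $\mathfrak{Comm}(n)\cong k$ with the trivial $\Sigma_{n}$-action for all $n$. Then for any operad $\mathfrak{P}$ one has, as in \cite{loday2012algebraic} Section 5.3, $$(\mathfrak{Comm}\otimes_{H}\mathfrak{P})(n)=\mathfrak{Comm}(n)\otimes\mathfrak{P}(n)\cong k\otimes\mathfrak{P}(n)\cong\mathfrak{P}(n)$$ compatibly with the operad structures, so $\mathfrak{Comm}$ is a unit for $\otimes_{H}$ on operads.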
\subsection{Modules and Algebras Over Operads} Let us now discuss categories of algebras over operads. \begin{defn}\label{defoperad} Let $(\mathfrak{P},\gamma,\eta)$ be an operad in $\mathpzc{E}$. A $\mathfrak{P}$-\textbf{module} is a pair $(N,\lambda_{N})$ where $N$ is a $\Sigma$-module and $\lambda_{N}:\mathfrak{P}\circ N\rightarrow N$ is a morphism of $\Sigma$-modules such that the following diagrams commute \begin{displaymath} \xymatrix{ (\mathfrak{P}\circ\mathfrak{P})\circ N\ar@{=}[r]\ar[d]^{\gamma\circ N} & \mathfrak{P}\circ(\mathfrak{P}\circ N)\ar[rr]^{Id_{\mathfrak{P}}\circ\lambda_{N}} & & \mathfrak{P}\circ N\ar[d]^{\lambda_{N}}\\ \mathfrak{P}\circ N\ar[rrr]^{\lambda_{N}} & & & N } \end{displaymath} \begin{displaymath} \xymatrix{ I\circ N\ar@{=}[dr]\ar[r]^{\eta\circ N} & \mathfrak{P}\circ N\ar[d]^{\lambda_{N}}\\ & N } \end{displaymath} \end{defn} There is the obvious notion of a morphism of $\mathfrak{P}$-modules, giving the category ${}_{\mathfrak{P}}\mathpzc{Mod}(\mathpzc{E})$, together with an evident forgetful functor $|-|:{}_{\mathfrak{P}}\mathpzc{Mod}(\mathpzc{E})\rightarrow\mathpzc{Mod}_{\Sigma}(\mathpzc{E})$. For a $\Sigma$-module $N$ there is a $\mathfrak{P}$-module structure on $\mathfrak{P}\circ N$ given by \begin{displaymath} \xymatrix{ \mathfrak{P}\circ(\mathfrak{P}\circ N)\ar@{=}[r] & (\mathfrak{P}\circ\mathfrak{P})\circ N\ar[r]^{\gamma\circ N} & \mathfrak{P}\circ N } \end{displaymath} This construction is functorial, and gives the left adjoint to the forgetful functor. \begin{defn} Let $\mathfrak{C}$ be a cooperad. The \textbf{category of} $\mathfrak{C}$-\textbf{comodules} is $\mathpzc{coMod}_{\mathfrak{C}}\defeq ({}_{\mathfrak{C}}\mathpzc{Mod}(\mathpzc{E}^{op}))^{op}$. \end{defn} For a $\Sigma$-module $\mathfrak{C}$ and an object $C$ of $\mathpzc{E}$, we set $\mathfrak{C}\hat{\circ} C\defeq\prod_{n\ge 0}\mathfrak{C}(n)\otimes_{\Sigma_{n}}C^{\otimes n}$.
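Note that $\mathfrak{C}\hat{\circ}C$ is a completed variant of the Schur functor value $\tilde{\mathfrak{C}}(C)$: it replaces the coproduct by a product, and (assuming the relevant products exist in $\mathpzc{E}$) there is a canonical comparison map $$\tilde{\mathfrak{C}}(C)=\bigoplus_{n\ge0}\mathfrak{C}(n)\otimes_{\Sigma_{n}}C^{\otimes n}\longrightarrow\prod_{n\ge0}\mathfrak{C}(n)\otimes_{\Sigma_{n}}C^{\otimes n}=\mathfrak{C}\hat{\circ}C$$ It is along this map that structure maps landing in the coproduct give rise to comodule structures in the completed sense.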
If $\mathfrak{C}$ is a cooperad then a $\mathfrak{C}$-comodule is an object $C$ of $\mathpzc{E}$ equipped with a map $\Delta:C\rightarrow\mathfrak{C}\hat{\circ}C$ satisfying axioms similar to the duals of Definition \ref{defoperad}. \begin{defn} A $\mathfrak{P}$-\textbf{algebra} is a $\mathfrak{P}$-module which is concentrated in arity $1$. The full subcategory of $\mathfrak{P}$-modules consisting of $\mathfrak{P}$-algebras is denoted $\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})$. Similarly one defines coalgebras over a cooperad. This category is denoted $\mathpzc{coAlg}_{\mathfrak{C}}(\mathpzc{M})$. \end{defn} \begin{defn} Let $\mathfrak{C}$ be a cooperad. A \textbf{conilpotent} $\mathfrak{C}$-\textbf{comodule} is a $\Sigma$-module $V$ together with a map $\Delta_{V}: V\rightarrow\mathfrak{C}\circ V$ such that the composite map $V\rightarrow\mathfrak{C}\circ V\rightarrow\mathfrak{C}\hat{\circ}V$ endows $V$ with the structure of a $\mathfrak{C}$-comodule. \end{defn} The full subcategory of $\mathfrak{C}$-comodules consisting of conilpotent $\mathfrak{C}$-comodules is denoted $\mathpzc{coMod}_{\mathfrak{C}}^{conil}$. Again, this definition is equivalent to the so-called coradical filtration on $V$ being exhaustive. The full subcategory of $\mathpzc{coMod}_{\mathfrak{C}}^{conil}$ consisting of comodules concentrated in arity $1$ is denoted $\mathpzc{coAlg}_{\mathfrak{C}}^{conil}$. \subsection{Derivations}\label{secder} Typically the operads and algebras we consider will be equipped with differentials. In this section we present a rather more general discussion of derivations which will prove useful. \begin{defn} Let $\mathfrak{P}$ be an operad.
A \textbf{derivation} of $\mathfrak{P}$ is a map $d_{\mathfrak{P}}:\mathfrak{P}\rightarrow \mathfrak{P}$ such that the following diagram commutes \begin{displaymath} \xymatrix{ \mathfrak{P}\circ \mathfrak{P}\ar[d]^{\gamma_{\mathfrak{P}}}\ar[rrr]^{d_{\mathfrak{P}}\circ(Id;Id)+Id\circ'd_{\mathfrak{P}} }& & &\mathfrak{P}\circ (\mathfrak{P};\mathfrak{P})\ar[r] & \mathfrak{P}\circ \mathfrak{P}\ar[d]^{\gamma_{\mathfrak{P}}}\\ \mathfrak{P}\ar[rrrr]^{d_{\mathfrak{P}}} & & & & \mathfrak{P} } \end{displaymath} An \textbf{operad with derivation} is a pair $(\mathfrak{P},d_{\mathfrak{P}})$ where $\mathfrak{P}$ is an operad and $d_{\mathfrak{P}}$ is a derivation of $\mathfrak{P}$. \end{defn} \begin{defn} Let $(\mathfrak{P},d_{\mathfrak{P}})$ be an operad with derivation. A $\mathfrak{P}$-\textbf{module with derivation} is a $\mathfrak{P}$-module $(N,\lambda_{N})$ together with a map $d_{N}:N\rightarrow N$ such that the following diagram commutes \begin{displaymath} \xymatrix{ \mathfrak{P}\circ N\ar[d]^{\lambda_{N}}\ar[rrr]^{d_{\mathfrak{P}}\circ(Id;Id)+Id\circ'd_{N} }& & &\mathfrak{P}\circ (N;N)\ar[r] & \mathfrak{P}\circ N\ar[d]^{\lambda_{N}}\\ N\ar[rrrr]^{d_{N}} & & & & N } \end{displaymath} \end{defn} In particular $\mathfrak{P}$ may be regarded as a module with derivation over itself. The following is shown in \cite{loday2012algebraic} Proposition 6.3.19. \begin{prop} Let $(\mathfrak{P},d_{\mathfrak{P}})$ be an operad with derivation, $N$ a $\Sigma$-module, and $\phi:N\rightarrow\mathfrak{P}\circ N$ a map of $\Sigma$-modules. Then there is a unique derivation on the $\mathfrak{P}$-module $\mathfrak{P}\circ N$ whose restriction to $N$ is $\phi$. It is given by the formula $$d_{\phi}=d_{\mathfrak{P}}\circ N+(\gamma_{(1)}\circ Id_{N})(Id_{\mathfrak{P}}\circ'\phi)$$ \end{prop} \subsubsection{Relative Derivations} In this section we will introduce a relative notion of derivation, and the notion of operads and algebras with derivation over a ring with derivation.
Let $(R,d_{R})$ be a commutative algebra with derivation. We may regard $R$ as an operad concentrated in arity $1$. By an $R$-module with derivation, we just mean an algebra with derivation over the operad with derivation $(R,d_{R})$. Unravelling the definitions, this is just an $R$-module $V$ equipped with a map $d_{V}:V\rightarrow V$ such that the following diagram commutes \begin{displaymath} \xymatrix{ R\otimes V\ar[rr]^{d_{R}\otimes V+R\otimes d_{V}}\ar[d] & & R\otimes V\ar[d]\\ V\ar[rr]^{d_{V}} & & V } \end{displaymath} Denote by ${}_{(R,d_{R})}\mathpzc{Mod}(\mathpzc{E})$ the category of $R$-modules with derivation. This category is symmetric monoidal. If $(M,d_{M})$ and $(N,d_{N})$ are $R$-modules with derivation, then we define a derivation $d_{M\otimes_{R} N}$ on $M\otimes_{R} N$ by $d_{M}\otimes_{R} Id_{N}+Id_{M}\otimes_{R} d_{N}$. Then $(M\otimes_{R} N,d_{M\otimes_{R} N})$ is an $R$-module with derivation. An \textit{operad with derivation over }$(R,d_{R})$ is an operad in the symmetric monoidal category ${}_{(R,d_{R})}\mathpzc{Mod}$. Unravelling the definitions, an operad with derivation over $(R,d_{R})$ is an operad with derivation $(\mathfrak{P},d_{\mathfrak{P}})$ in ${}_{R}\mathpzc{Mod}$ such that the maps $I\rightarrow\mathfrak{P}$ and $\mathfrak{P}\circ\mathfrak{P}\rightarrow\mathfrak{P}$ commute with the derivations. \begin{defn} Let $(R,d_{R})$ be a ring with derivation, and $(\mathfrak{P},d_{\mathfrak{P}})$ an operad with derivation over $(R,d_{R})$. Let $A$ and $B$ be $\mathfrak{P}$-modules, and $\epsilon:A\rightarrow B$ a morphism of modules.
An $\epsilon$-\textbf{derivation from }$A$\textbf{ to }$B$ over $(R,d_{R})$ is a morphism $D:A\rightarrow B$ such that the following diagrams commute \begin{displaymath} \xymatrix{ \mathfrak{P}\circ(A;A)\ar[rrr]^{d_{\mathfrak{P}}\circ(\epsilon;\epsilon)+Id_{\mathfrak{P}}\circ(\epsilon;D)}\ar[d]^{\gamma_{A}} &&& \mathfrak{P}\circ(B;B)\ar[d]^{\gamma_{B}}\\ A\ar[rrr]^{D} & && B\\ } \end{displaymath} \begin{displaymath} \xymatrix{ R\otimes A\ar[rr]^{d_{R}\otimes \epsilon+R\otimes D}\ar[d] & & R\otimes B\ar[d]\\ A\ar[rr]^{D} & & B } \end{displaymath} \end{defn} In particular a $\mathfrak{P}$-module with derivation over $(R,d_{R})$ is just a pair $(A,d_{A})$ where $d_{A}$ is an $Id_{A}$-derivation from $A$ to itself. \begin{prop} Let $V$ be an object of $\mathpzc{Mod}_{\Sigma}({}_{R}\mathpzc{Mod})$ and $A$ a $\mathfrak{P}$-module. Let $i:V\rightarrow A$ and $\phi:V\rightarrow A$ be maps of $R$-modules, where $\phi$ is an $i$-derivation over $(R,d_{R})$. Let $\tilde{i}:\mathfrak{P}(V)\rightarrow A$ denote the unique map of $\mathfrak{P}$-modules whose restriction to $V$ is $i$. There is a unique $\tilde{i}$-derivation $d_{\phi}:\mathfrak{P}(V)\rightarrow A$ over $(R,d_{R})$ whose restriction to $V$ is $\phi$. It is given by the formula $$d_{\phi}=\gamma_{A}(d_{\mathfrak{P}}\circ\tilde{i}+(\gamma_{(1)}\circ A)(Id_{\mathfrak{P}}\circ'\phi))$$ \end{prop} There is an important special case of this result. Let $(\mathfrak{P},d_{\mathfrak{P}})$ be an operad with derivation over a ring $(R,d_{R})$ with derivation in a category $\mathpzc{E}$. Let $A$ be a $\mathfrak{P}$-module with derivation over $(R,d_{R})$ and let $V$ be any object of $\mathpzc{E}$. Suppose $\alpha:V\rightarrow A$ and $i:V\rightarrow A$ are maps in $\mathpzc{E}$. There is a unique induced map $R\otimes i:R\otimes V\rightarrow A$ of $R$-modules, and a unique $R\otimes i$-derivation $d_{\alpha}:R\otimes V\rightarrow A$ extending $\alpha$.
Applying the proposition again, there is a unique induced $R\otimes i$-derivation $\mathfrak{P}(R\otimes V)\rightarrow A$ of $\mathfrak{P}$-modules over $(R,d_{R})$, which we also denote by $d_{\alpha}$. \begin{cor}\label{coproder} Let $A$, $B$, and $C$ be $\mathfrak{P}$-modules, and let $\epsilon_{A}:A\rightarrow C$ and $\epsilon_{B}:B\rightarrow C$ be morphisms of $\mathfrak{P}$-modules. Let $D_{A}:A\rightarrow C$ be an $\epsilon_{A}$-derivation and $D_{B}:B\rightarrow C$ an $\epsilon_{B}$-derivation over $(R,d_{R})$. Then there is a unique derivation $D:A\coprod B\rightarrow C$ over $(R,d_{R})$ whose restriction to $A$ is $D_{A}$ and whose restriction to $B$ is $D_{B}$. \end{cor} \begin{proof} $A\coprod B$ can be constructed as a quotient of $\mathfrak{P}(A\oplus B)$. There is a map of $\Sigma$-modules $\overline{D_{A}+D_{B}}: A\oplus B\rightarrow C$ which uniquely extends to a derivation $\mathfrak{P}(A\oplus B)\rightarrow C$. One checks that this descends to a map $A\coprod B\rightarrow C$. \end{proof} \subsection{(Co)Operads in Chain Complexes} In this section we let $(\mathpzc{E},\otimes,k)$ be a symmetric monoidal additive category with kernels, cokernels, and countable coproducts. We fix an object $R\in\mathpzc{Alg}_{\mathfrak{Comm}}(Ch(\mathpzc{E}))$ and consider the monoidal additive category $\mathpzc{M}\defeq {}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$. \subsubsection{Derivations and Chain Complexes} The algebra $R$ may be regarded as a commutative algebra in $\underline{Gr}_{\mathbb{Z}}(\mathpzc{E})$ equipped with a degree $-1$ derivation $d_{R}$ which squares to zero. Let $|R|_{gr}$ denote the underlying graded ring of $R$.
Then an operad in ${}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ is precisely an operad with derivation $(\mathfrak{P},d_{\mathfrak{P}})$ in ${}_{|R|_{gr}}\mathpzc{Mod}(\underline{Gr}_{\mathbb{Z}}(\mathpzc{E}))$ over $(|R|_{gr},d_{R})$, such that $d_{\mathfrak{P}}$ is a degree $-1$ map which squares to zero. \subsubsection{The Convolution Operad} Let $(\mathfrak{C},\Delta,\epsilon)$ be a cooperad and $(\mathfrak{P},\gamma,\eta)$ an operad in $\mathpzc{M}$, and consider the object in $\mathpzc{Gr}_{\mathbb{N}_{0}}(Ch(\mathpzc{Ab}))$ given by $$\textbf{Hom}(\mathfrak{C},\mathfrak{P})(n)\defeq\underline{\textrm{Hom}}_{\mathpzc{M}}(\mathfrak{C}(n),\mathfrak{P}(n))$$ It is a (right) $\Sigma$-module in $Ch(\mathpzc{Ab})$. For $f\in\textrm{Hom}_{\mathpzc{M}}(\mathfrak{C}(n),\mathfrak{P}(n))$, an element $\sigma\in\Sigma_{n}$ acts on the right by $$f\mapsto \sigma^{-1}\circ f\circ\sigma$$ Using the methods of \cite{loday2012algebraic} Section 6.4.1 this $\Sigma$-module in $Ch(\mathpzc{Ab})$ can be made into an operad, called the \textbf{convolution operad} of $\mathfrak{C}$ and $\mathfrak{P}$. Recall that to any $dg$-operad $\mathfrak{P}$ there is an associated dg pre-Lie algebra whose underlying differentially graded abelian group is $\prod_{n}\mathfrak{P}(n)$. Denote by $$\textbf{Hom}_{\Sigma}(\mathfrak{C},\mathfrak{P})\defeq\prod_{n\ge0}\textrm{Hom}_{\Sigma_{n}}(\mathfrak{C}(n),\mathfrak{P}(n))$$ the subobject of $\prod_{n\ge0}\textbf{Hom}(\mathfrak{C},\mathfrak{P})(n)$ consisting of $\Sigma$-equivariant maps. \begin{prop} $\textrm{Hom}_{\Sigma}(\mathfrak{C},\mathfrak{P})$ is a sub dg pre-Lie algebra of the dg pre-Lie algebra associated to the convolution operad. \end{prop} \begin{defn} $\textrm{Hom}_{\Sigma}(\mathfrak{C},\mathfrak{P})$ is called the \textbf{convolution dg pre-Lie algebra}.
The binary operation is denoted $\star$. \end{defn} \subsubsection{Twisting Morphisms and Twisted Composite Products} Details for the next two sections can be found in Chapter 11 of \cite{loday2012algebraic}. Let $\mathfrak{C}$ be a cooperad and $\mathfrak{P}$ an operad. We consider the convolution operad $\underline{\textrm{Hom}}_{\mathpzc{M}}(\mathfrak{C},\mathfrak{P})$ in $Ch(\mathpzc{Ab})$. \begin{defn} A \textbf{twisting morphism} is a Maurer-Cartan element in the convolution dg pre-Lie algebra $\textrm{Hom}_{\Sigma}(\mathfrak{C},\mathfrak{P})$, i.e. a degree $-1$ element $\alpha$ satisfying the Maurer-Cartan equation $\partial(\alpha)+\alpha\star\alpha=0$. \end{defn} Typically the twisting morphisms we consider on additive categories are induced from ones in $Ch(\mathpzc{Ab})$ (or $Ch({}_{{\mathbb Q}}\mathpzc{Mod})$). To this end we note the following result. \begin{prop}\label{preserveMC} Let $\mathpzc{D}$ and $\mathpzc{M}$ be monoidal additive categories and $F:\mathpzc{D}\rightarrow\mathpzc{M}$ a strict monoidal functor. Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a twisting morphism in $\mathpzc{D}$. Then $F(\alpha):F(\mathfrak{C})\rightarrow F(\mathfrak{P})$ is a twisting morphism. \end{prop} \begin{proof} This follows from the fact that $F$ induces a homomorphism of convolution Lie algebras, and homomorphisms of Lie algebras send Maurer-Cartan elements to Maurer-Cartan elements. \end{proof} Twisting morphisms allow us to construct twisted differentials on the composite products $\mathfrak{C}\circ\mathfrak{P}$ and $\mathfrak{P}\circ\mathfrak{C}$, which we will need to construct the bar-cobar adjunction. Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a degree $-1$ map of $\Sigma$-modules.
Denote by $d^{r}_{\alpha}$ the unique derivation on $\mathfrak{C}\circ\mathfrak{P}$ over $(R,d_{R})$ which extends the composition \begin{displaymath} \xymatrix{ \mathfrak{C}\ar[r]^{\Delta_{(1)}} & \mathfrak{C}\circ_{(1)}\mathfrak{C}\ar[r]^{Id_{\mathfrak{C}}\circ_{(1)}\alpha} & \mathfrak{C}\circ_{(1)}\mathfrak{P}\ar[r] & \mathfrak{C}\circ\mathfrak{P} } \end{displaymath} Denote by $d^{l}_{\alpha}$ the unique derivation on $\mathfrak{P}\circ\mathfrak{C}$ over $(R,d_{R})$ which extends \begin{displaymath} \xymatrix{ \mathfrak{C}\ar[r]^{\Delta} & \mathfrak{C}\circ\mathfrak{C}\ar[r]^{\alpha\circ Id_{\mathfrak{C}}} & \mathfrak{P}\circ\mathfrak{C} } \end{displaymath} Write $d_{\alpha}\defeq d_{\mathfrak{P}\circ\mathfrak{C}}+d^{l}_{\alpha}$ on $\mathfrak{P}\circ\mathfrak{C}$, and $d_{\alpha}\defeq d_{\mathfrak{C}\circ\mathfrak{P}}+d^{r}_{\alpha}$ on $\mathfrak{C}\circ\mathfrak{P}$. \begin{lem} On $\mathfrak{P}\circ\mathfrak{C}$ the derivation $d_{\alpha}$ satisfies $$d_{\alpha}^{2}=d^{l}_{\partial(\alpha)+\alpha\star\alpha}$$ On $\mathfrak{C}\circ\mathfrak{P}$ the derivation $d_{\alpha}$ satisfies $$d_{\alpha}^{2}=d^{r}_{\partial(\alpha)+\alpha\star\alpha}$$ \end{lem} \begin{cor} If $\alpha$ is a twisting morphism then $$\mathfrak{P}\circ_{\alpha}\mathfrak{C}\defeq(\mathfrak{P}\circ\mathfrak{C},d_{\alpha})$$ and $$\mathfrak{C}\circ_{\alpha}\mathfrak{P}\defeq(\mathfrak{C}\circ\mathfrak{P},d_{\alpha})$$ are chain complexes. \end{cor} Finally we denote by $\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P}$ the complex whose underlying graded $\Sigma$-module is given by $\mathfrak{P}\circ\mathfrak{C}\circ\mathfrak{P}$, with differential given by $d_{\mathfrak{P}\circ\mathfrak{C}\circ\mathfrak{P}}+Id_{\mathfrak{P}}\circ' d_{\alpha}^{r}-d_{\alpha}^{l}\circ Id_{\mathfrak{P}}$. This is called the \textbf{two-sided twisted composite product}. As in \cite{hirsh2012curved} Proposition 5.1.3, a direct computation shows that this is a complex. \begin{example}\label{cofreetwist} Let $V$ be an object of $\mathpzc{M}$. The composite morphism $$\mathfrak{T}^{c}(V[1])\rightarrow V[1]\rightarrow V\rightarrow\mathfrak{T}(V)$$ is a twisting morphism.
Indeed the proof of Lemma 7.4.2 in \cite{loday2012algebraic} for more general quadratic operads goes through here. \end{example} \subsubsection{Bar and Cobar Constructions for Algebras} Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a twisting morphism. Let us construct an adjoint pair of functors $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}^{conil}_{\mathfrak{C}}(\mathpzc{M})}{\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})}{B_{\alpha}}$$ Let $C$ be a $\mathfrak{C}$-coalgebra. The underlying graded algebra of $\Omega_{\alpha}C$ is the free algebra $\mathfrak{P}(C)$ on $C$. The differential, however, is twisted using the twisting morphism. Namely, it is the sum of the derivations $d_{1}=-d_{\mathfrak{P}}\circ Id_{C}-Id_{\mathfrak{P}}\circ' d_{C}$ and $d_{2}$, where $-d_{2}$ is the unique derivation extending the map \begin{displaymath} \xymatrix{ C\ar[r]^{\Delta} & \mathfrak{C}\circ C\ar[r]^{\alpha\circ Id_{C}} & \mathfrak{P}\circ C } \end{displaymath} \begin{prop} $-d_{1}-d_{2}$ is a square-zero derivation on $\mathfrak{P}(C)$. \end{prop} \begin{defn} $(\mathfrak{P}(C),d_{1}+d_{2})$ is called the \textbf{cobar construction of }$C$\textbf{ with respect to }$\alpha$. \end{defn} Now let $A$ be a $\mathfrak{P}$-algebra in $\mathpzc{M}$. The underlying graded coalgebra of $B_{\alpha}A$ is the cofree coalgebra $\mathfrak{C}(A)$. Denote by $d_{1}$ the square-zero coderivation $d_{\mathfrak{C}}\circ Id_{A}+Id_{\mathfrak{C}}\circ' d_{A}$. There is a unique coderivation $d_{2}$ extending the degree $-1$ map \begin{displaymath} \xymatrix{ \mathfrak{C}\circ A\ar[r]^{\alpha\circ Id_{A}} & \mathfrak{P}\circ A\ar[r]^{\gamma_{A}} & A } \end{displaymath} \begin{prop} $d_{1}+d_{2}$ is a square-zero coderivation on $\mathfrak{C}(A)$. \end{prop} \begin{defn} $B_{\alpha}A\defeq(\mathfrak{C}(A),d_{1}+d_{2})$ is called the \textbf{bar construction of } $A$ \textbf{with respect to }$\alpha$.
\end{defn} \begin{prop} There is an adjunction $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}^{conil}_{\mathfrak{C}}(\mathpzc{M})}{\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})}{B_{\alpha}}$$ \end{prop} In the context of the bar-cobar adjunction it is convenient to introduce the \textit{shifting (co)operad}. If $V$ is an object of $\mathpzc{M}$ then one defines the $\Sigma$-module $\underline{End}_{V}$ by $$\underline{End}_{V}(n)\defeq\underline{Hom}_{\Sigma_{n}}(V^{\otimes n},V)$$ where $\Sigma_{n}$ acts on $V$ trivially. This can be endowed with the structure of an operad, which we denote by $End_{V}$, and a cooperad, which we denote by $End^{c}_{V}$. \begin{defn} The \textbf{shifting operad}, denoted $\mathfrak{S}$, is the operad $End_{R[1]}$. The \textbf{shifting cooperad} is $End^{c}_{R[1]}$. \end{defn} By \cite{loday2012algebraic} Page 234 we have the following. \begin{prop} For any object $V\in {}_{R}\mathpzc{Mod}$ and any $\Sigma$-module $\mathfrak{P}$ there is an isomorphism, natural in $V$, $$\mathfrak{P}(V)[-1]\cong (\mathfrak{S}\otimes_{H}\mathfrak{P})(V[-1])$$ In particular if $\mathfrak{P}$ is an operad the shift functor induces an equivalence of categories $$[-1]:\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})\rightarrow\mathpzc{Alg}_{\mathfrak{S}\otimes_{H}\mathfrak{P}}(\mathpzc{M})$$ Similarly, if $\mathfrak{C}$ is a cooperad the shift functor induces an equivalence of categories $$[-1]:\mathpzc{coAlg}^{conil}_{\mathfrak{C}}(\mathpzc{M})\rightarrow\mathpzc{coAlg}^{conil}_{\mathfrak{S}^{c}\otimes_{H}\mathfrak{C}}(\mathpzc{M})$$ \end{prop} \subsection{Duality and Twisting Morphisms} In this section $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ where $\mathpzc{E}$ is a closed monoidal category. In particular $\mathpzc{M}$ is closed monoidal as well. Let $\mathfrak{C}$ be a cooperad in $\mathpzc{M}$.
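To fix notation (this simply makes explicit the convention used throughout this subsection), duals are taken with respect to the monoidal unit: for an object $V$ of $\mathpzc{M}$ we set $$V^{\vee}\defeq\underline{Hom}_{\mathpzc{M}}(V,R)$$ and $(-)^{\vee}$ denotes the resulting functor $\mathpzc{M}\rightarrow\mathpzc{M}^{op}$.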
The dualising functor $(-)^{\vee}:\mathpzc{M}\rightarrow\mathpzc{M}^{op}$ is lax monoidal, so it induces a functor $$(-)^{\vee}:\mathpzc{coAlg}_{\mathfrak{C}}\rightarrow(\mathpzc{Alg}_{\mathfrak{C}^{\vee}})^{op}$$ Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a twisting morphism. Denote by $$\hat{C}_{\alpha}:\mathpzc{Alg}_{\mathfrak{P}}\rightarrow\mathpzc{Alg}_{(\mathfrak{S}^{c}\otimes_{H}\mathfrak{C})^{\vee}}$$ the composition $(-)^{\vee}\circ[-1]\circ B_{\alpha}$. The hat notation is supposed to invoke comparisons with formal power series, which will be evident in the case of duality between cocommutative coalgebras and Lie algebras. Indeed the underlying ${\mathbb Z}$-graded algebra of $\hat{C}_{\alpha}(\mathfrak{g})$ is $\prod_{n}((\mathfrak{S}^{c}(n)\otimes\mathfrak{C}(n))\otimes_{\Sigma_{n}}\mathfrak{g}^{\otimes n}[-1])^{\vee}$. Consider the ${\mathbb Z}$-graded `polynomial' algebra $\bigoplus_{n}((\mathfrak{S}^{c}(n)\otimes\mathfrak{C}(n))\otimes_{\Sigma_{n}}\mathfrak{g}^{\otimes n}[-1])^{\vee}$. There is a natural map of algebras $\bigoplus_{n}((\mathfrak{S}^{c}(n)\otimes\mathfrak{C}(n))\otimes_{\Sigma_{n}}\mathfrak{g}^{\otimes n}[-1])^{\vee}\rightarrow\prod_{n}((\mathfrak{S}^{c}(n)\otimes\mathfrak{C}(n))\otimes_{\Sigma_{n}}\mathfrak{g}^{\otimes n}[-1])^{\vee}$. Under nice circumstances, which we establish below, the differential on $\hat{C}_{\alpha}(\mathfrak{g})$ restricts to a differential on this algebra. Thus we get a differential graded algebra $C_{\alpha}(\mathfrak{g})$ together with a map of differential graded algebras $C_{\alpha}(\mathfrak{g})\rightarrow \hat{C}_{\alpha}(\mathfrak{g})$. \begin{defn} Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a twisting morphism.
A $\mathfrak{P}$-algebra $\mathfrak{g}$ is said to be $\alpha$-\textbf{separable} if \begin{enumerate} \item the map $(\mathfrak{S}^{c}(n)\otimes\mathfrak{C}(n))^{\vee}\otimes_{\Sigma_{n}}\mathfrak{g}^{\vee}\otimes\ldots\otimes\mathfrak{g}^{\vee}[1]\rightarrow((\mathfrak{S}^{c}(n)\otimes\mathfrak{C}(n))\otimes_{\Sigma_{n}}\mathfrak{g}^{\otimes n}[-1])^{\vee}$ is an isomorphism. \item the map $\mathfrak{C}^{\vee}\circ\mathfrak{g}^{\vee}\rightarrow(\mathfrak{C}\circ\mathfrak{g})^{\vee}$ is a monomorphism. \item the map $\mathfrak{g}^{\vee}\rightarrow(\mathfrak{P}\circ\mathfrak{g})^{\vee}$ factors through $\mathfrak{P}^{\vee}\circ\mathfrak{g}^{\vee}$. \end{enumerate} \end{defn} \begin{prop}\label{polykoszul} If $\mathfrak{g}$ is an $\alpha$-separable $\mathfrak{P}$-algebra then the differential on $\hat{C}_{\alpha}(\mathfrak{g})$ restricts to a differential on $C_{\alpha}(\mathfrak{g})$. \end{prop} \begin{proof} By our assumptions the map $C_{\alpha}(\mathfrak{g})\rightarrow \hat{C}_{\alpha}(\mathfrak{g})$ is a monomorphism. Moreover the algebra $C_{\alpha}(\mathfrak{g})$ is free on $\mathfrak{g}^{\vee}[1]$, so it suffices to check that the restriction of the differential to $\mathfrak{g}^{\vee}[1]$ factors through $C_{\alpha}(\mathfrak{g})$. Then it will automatically square to zero. In fact we shall check that the underlying graded map $\mathfrak{g}^{\vee}\rightarrow C_{\alpha}(\mathfrak{g})[-1]\cong(\mathfrak{C}\circ_{\alpha}\mathfrak{g})^{\vee}$ factors through $\mathfrak{C}^{\vee}\circ\mathfrak{g}^{\vee}$. This follows from the commutative diagram below.
\begin{displaymath} \xymatrix{ (\mathfrak{g})^{\vee}\ar[r]^{}&(\mathfrak{P}\circ\mathfrak{g})^{\vee}\ar[r]^{\;\;\;(\alpha\circ Id_{\mathfrak{g}})^{\vee}} & (\mathfrak{C}\circ\mathfrak{g})^{\vee}\\ (\mathfrak{g})^{\vee}\ar[r]\ar@{=}[u] & \mathfrak{P}^{\vee}\circ\mathfrak{g}^{\vee}\ar[r]^{\alpha^{\vee}\circ Id}\ar[u] & \mathfrak{C}^{\vee}\circ\mathfrak{g}^{\vee}\ar[u] } \end{displaymath} \end{proof} \subsection{Non-Symmetric Operads} There is a non-symmetric version of the theory detailed above. Details can be found in \cite{loday2012algebraic} Section 5.9. Let $(\mathpzc{E},\otimes)$ be a monoidal additive category which is not necessarily symmetric. Consider the category $Gr_{\mathbb{N}_{0}}(\mathpzc{E})$ of graded objects in $\mathpzc{E}$. \begin{defn} Let $M$ and $N$ be graded objects. The \textbf{non-symmetric composite product} of $M$ and $N$, denoted $M\circ_{ns} N$, is the graded object given by $$(M\circ_{ns} N)(n)=\bigoplus_{k}M(k)\otimes N^{\otimes k}(n)$$ \end{defn} \begin{defn} The graded object $I$ is defined by $I(i)=0$ for $i\neq 1$ and $I(1)=k$. \end{defn} With these, the constructions and results analogous to those for symmetric operads above go through mutatis mutandis.\newline \\ Suppose now that $(\mathpzc{E},\otimes)$ is symmetric. Denote the category of non-symmetric operads by $\mathpzc{Op}_{ns}$. As with any module category there is an adjunction $$\adj{k[\Sigma]\otimes(-)}{\mathpzc{Grad}(\mathpzc{E})}{\Sigma-\mathpzc{Mod}}{|-|}$$ which induces an adjunction $$\adj{(-)^{\Sigma}}{\mathpzc{Op}_{ns}}{\mathpzc{Op}}{|-|}$$ \section{The Homotopical Setup}\label{homotopexact} We now introduce the second ingredient of Koszul duality, namely homotopy. Precisely, we define a class of additive model categories in which we have a rich theory of homotopical algebra. \subsection{Exact Categories} The model categories we consider will come from additive categories equipped with exact structures.
Following \cite{hovey}, \cite{christensen2002quillen}, and \cite{gillespiemodelexact}, we developed homotopy theory in exact categories in detail in \cite{kelly2016projective}. Here we briefly recall some essential definitions from the theory of exact categories. Details about exact categories can be found in \cite{Buehler}. A \textbf{kernel-cokernel pair} in $\mathpzc{E}$ is a pair of composable maps $(i,p)$, $i:A\rightarrow B$, $p:B\rightarrow C$, such that $i=\textrm{Ker}(p)$ and $p=\textrm{Coker}(i)$. If $\mathcal{Q}$ is a class of kernel-cokernel pairs and $(i,p)\in\mathcal{Q}$, then we say that $i$ is an admissible monic and $p$ is an admissible epic with respect to $\mathcal{Q}$. \begin{defn} A \textbf{Quillen exact structure} on an additive category $\mathpzc{E}$ is a collection $\mathcal{Q}$ of kernel-cokernel pairs such that \begin{enumerate} \item Isomorphisms are both admissible monics and admissible epics. \item Both the collection of admissible monics and the collection of admissible epics are closed under composition. \item If \begin{displaymath} \xymatrix{ A\ar[d]\ar[r]^{f} & B\ar[d]\\ X\ar[r]^{f'} & Y } \end{displaymath} is a pushout diagram, and $f$ is an admissible monic, then $f'$ is as well. \item If \begin{displaymath} \xymatrix{ A\ar[d]\ar[r]^{f'} & B\ar[d]\\ X\ar[r]^{f} & Y } \end{displaymath} is a pullback diagram, and $f$ is an admissible epic, then $f'$ is as well. \end{enumerate} \end{defn} Let $(\mathpzc{E},\mathcal{Q})$ be an exact category. We call a null sequence \begin{displaymath} \xymatrix{ 0\ar[r] & A\ar[r]^{i} & B\ar[r]^{p} & C\ar[r] & 0 } \end{displaymath} \textbf{short exact} if $(i,p)$ is a kernel-cokernel pair in $\mathcal{Q}$. We will use interchangeably the notion of kernel-cokernel pair and short exact sequence. When it is not likely to cause confusion, we will suppress the notation $(\mathpzc{E},\mathcal{Q})$ to $\mathpzc{E}$.
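As a simple illustration of admissibility (a standard example, not taken from the theory developed here), consider the additive category of torsion-free abelian groups, with $\mathcal{Q}$ the class of all kernel-cokernel pairs. The monomorphism $$2:{\mathbb Z}\rightarrow{\mathbb Z}$$ has cokernel $0$ in this category, since the torsion-free quotient of ${\mathbb Z}/2$ vanishes, while the kernel of ${\mathbb Z}\rightarrow 0$ is the identity. Hence $2$ is not the kernel of its cokernel, and so is not an admissible monic: being an admissible monic is a genuine condition on a monomorphism, depending on the class $\mathcal{Q}$.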
When studying exact categories it is natural to consider so-called exact functors: \begin{defn} Let $(\mathpzc{E},\mathcal{P})$, $(\mathpzc{F},\mathcal{Q})$ be exact categories. A functor $F:\mathpzc{E}\rightarrow\mathpzc{F}$ is said to be \textbf{exact} (with respect to $\mathcal{P}$ and $\mathcal{Q}$) if for any short exact sequence $$0\rightarrow X\rightarrow Y\rightarrow Z\rightarrow 0$$ in $\mathcal{P}$, $$0\rightarrow F(X)\rightarrow F(Y)\rightarrow F(Z)\rightarrow 0$$ is a short exact sequence in $\mathcal{Q}$. \end{defn} On any additive category one can define the split exact structure, for which the kernel-cokernel pairs are the split exact sequences. Any exact category contains this as an exact subcategory. At the other extreme we have quasi-abelian exact structures. \begin{defn}\label{quas} An additive category $\mathpzc{E}$ with all kernels and cokernels is said to be \textbf{quasi-abelian} if the class $\mathpzc{qac}$ of all kernel-cokernel pairs forms an exact structure on $\mathpzc{E}$. \end{defn} Let $X\in Ch(\mathpzc{E})$ be a chain complex. \begin{defn} $X$ is said to be \textbf{acyclic} if for each $n$ the map $d_{n}:X_{n}\rightarrow Z_{n-1}X$ is an admissible epimorphism, where $Z_{n-1}X\defeq Ker(d_{n-1}:X_{n-1}\rightarrow X_{n-2})$. \end{defn} \begin{defn} Let $\mathpzc{E}$ be an exact category. A map $f:X\rightarrow Y$ in $Ch(\mathpzc{E})$ is said to be a \textbf{quasi-isomorphism} if $cone(f)$ is acyclic. \end{defn} \subsection{(Monoidal) Model Structures on Exact Categories} The class $\mathcal{W}$ of quasi-isomorphisms satisfies the 2-out-of-6 property: if $f:W\rightarrow X$, $g:X\rightarrow Y$, $h:Y\rightarrow Z$ are composable maps such that $h\circ g$ and $g\circ f$ are quasi-isomorphisms, then $f$, $g$, $h$, and $h\circ g\circ f$ are quasi-isomorphisms as well. Thus the class $\mathcal{W}$ makes $Ch(\mathpzc{E})$ into a \textit{homotopical category}.
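Since $\mathcal{W}$ contains all identity maps, the 2-out-of-6 property implies the more familiar 2-out-of-3 property; we spell out one case. If $f:W\rightarrow X$ and $g:X\rightarrow Y$ are quasi-isomorphisms, then applying 2-out-of-6 to the composable triple $$W\stackrel{f}{\rightarrow}X\stackrel{Id_{X}}{\rightarrow}X\stackrel{g}{\rightarrow}Y$$ with $Id_{X}\circ f=f$ and $g\circ Id_{X}=g$ both in $\mathcal{W}$, we deduce that $g\circ f\in\mathcal{W}$. The other two cases of 2-out-of-3 follow similarly from the triples $(Id_{W},f,g)$ and $(f,g,Id_{Y})$.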
\begin{defn} A \textbf{homotopical category} is a pair $(\mathpzc{M},\mathcal{W})$ where $\mathpzc{M}$ is a category, and $\mathcal{W}$ is a class of morphisms in $\mathpzc{M}$ containing all identity morphisms and satisfying the 2-out-of-6 property. \end{defn} If $\mathpzc{C}$ is a model category and $\mathcal{W}$ is its class of weak equivalences, then $(\mathpzc{C},\mathcal{W})$ is a homotopical category. It will be important for us that a model structure exists on $Ch(\mathpzc{E})$ for which the class of weak equivalences coincides with the class of quasi-isomorphisms. In \cite{kelly2016projective} Chapter 2, Section 8 we introduced the notion of a monoidal exact category. Essentially this is an exact category $\mathpzc{E}$ equipped with an additive associative bifunctor $\otimes:\mathpzc{E}\times\mathpzc{E}\rightarrow\mathpzc{E}$ which commutes with colimits in each variable. Closed monoidal exact categories, in which the assumption that $\otimes$ commutes with colimits automatically holds, are discussed in \cite{vst2012exact}. \begin{defn} A \textbf{monoidal exact category} is an exact category $\mathpzc{E}$, equipped with a bifunctor $\otimes:\mathpzc{E}\times\mathpzc{E}\rightarrow\mathpzc{E}$ such that $(\mathpzc{E},\otimes)$ is a monoidal additive category. \end{defn} In a monoidal exact category we have the notions of flatness and purity. \begin{defn} Let $(\mathpzc{E},\otimes)$ be a monoidal exact category. An object $X$ of $\mathpzc{E}$ is said to be \textbf{flat} if for any exact sequence $$0\rightarrow E\rightarrow F\rightarrow G\rightarrow 0$$ the sequences $$0\rightarrow X\otimes E\rightarrow X\otimes F\rightarrow X\otimes G\rightarrow 0$$ and $$0\rightarrow E\otimes X\rightarrow F\otimes X\rightarrow G\otimes X\rightarrow 0$$ are exact. \end{defn} \begin{defn} Let $(\mathpzc{E},\otimes)$ be a monoidal exact category. 
A sequence $$0\rightarrow E\rightarrow F\rightarrow G\rightarrow 0$$ is said to be \textbf{pure} if for any object $X$ of $\mathpzc{E}$ the sequences $$0\rightarrow X\otimes E\rightarrow X\otimes F\rightarrow X\otimes G\rightarrow 0$$ and $$0\rightarrow E\otimes X\rightarrow F\otimes X\rightarrow G\otimes X\rightarrow 0$$ are exact. A monomorphism $f:E\rightarrow F$ is said to be \textbf{pure} if it is an admissible monomorphism occurring as the kernel of a pure exact sequence. \end{defn} The class of all pure monomorphisms is denoted \textbf{PureMon}. In \cite{kelly2016projective} Lemma 4.6 we showed the following: \begin{prop} If a monoidal exact category $\mathpzc{E}$ has enough flat objects and $$0\rightarrow E\rightarrow F\rightarrow G\rightarrow 0$$ is a short exact sequence with $G$ flat, then the sequence is pure. \end{prop} By passing to an abelianization (see \cite{kelly2016projective}) it is straightforward to prove the following. \begin{prop}\label{3pure} Let \begin{displaymath} \xymatrix{ & 0\ar[d] & 0\ar[d] & 0\ar[d] &\\ 0\ar[r] & A\ar[r]\ar[d] & B\ar[r]\ar[d] & C\ar[d]\ar[r] & 0\\ 0\ar[r] & X\ar[r]\ar[d] & Y\ar[r]\ar[d] & Z\ar[r]\ar[d] & 0\\ 0\ar[r] & K\ar[r]\ar[d] & L\ar[r]\ar[d] & M\ar[r]\ar[d] & 0\\ & 0 & 0 & 0 & } \end{displaymath} be a diagram in which all columns and two of the rows, or all rows and two of the columns, are pure exact. Then the remaining row or column is pure exact. \end{prop} We shall require a technical assumption on $\mathpzc{E}$. Let $\mathcal{I}$ be a filtered category, and let $\mathcal{S}$ be a class of morphisms in $\mathpzc{E}$. Denote by $Fun_{\mathcal{S}}^{cocont}(\mathcal{I},\mathpzc{E})$ the category of cocontinuous functors $F:\mathcal{I}\rightarrow\mathpzc{E}$ such that for any morphism $\alpha$ in $\mathcal{I}$ the map $F(\alpha)$ is in $\mathcal{S}$.
\begin{defn} $\mathpzc{E}$ is said to be \textbf{weakly} $\mathcal{S}$-\textbf{elementary} if for any ordinal $\lambda$ the functor $lim_{\rightarrow}:Fun_{\mathcal{S}}^{cocont}(\lambda,\mathpzc{E})\rightarrow\mathpzc{E}$ exists and is exact. \end{defn} Typically we will assume that our exact categories are weakly \textbf{PureMon}-elementary. \subsection{$K$-Flatness and $K$-Cotorsion in Monoidal Exact Categories}\label{Kcotorproj} \begin{defn} Let $\mathpzc{M}$ be a homotopical category equipped with a monoidal structure. An object $X$ of $\mathpzc{M}$ is said to be $K$-\textbf{flat} if for any weak equivalence $f:A\rightarrow B$ in $\mathpzc{M}$, the map $X\otimes A\rightarrow X\otimes B$ is a weak equivalence. \end{defn} In homotopical categories the notion of derived functors makes sense. If $X$ is $K$-flat and the tensor product functor $A\otimes(-)$ is left-derivable for each object $A$, then for any object $A$ of $\mathpzc{M}$ the map $X\otimes^{\mathbb{L}}A\rightarrow X\otimes A$ is an equivalence. Let us now focus on homotopical categories of the form $(Ch(\mathpzc{E}),\mathcal{W})$, where $\mathpzc{E}$ is an exact category and $\mathcal{W}$ is the class of quasi-isomorphisms. \begin{prop}\label{boundedKflat} Let $\mathpzc{E}$ be a weakly $\textbf{PureMon}$-elementary exact category. Let $X_{\bullet}$ be an $(\aleph_{0};\textbf{PureMon})$-extension of bounded below complexes of flat objects, that is $X_{\bullet}=lim_{\rightarrow_{n\in\mathbb{N}}} X_{\bullet}^{n}$ where each $X_{\bullet}^{n}$ consists of flat objects and $X_{\bullet}^{n}\rightarrow X_{\bullet}^{n+1}$ is a pure monomorphism. Then $X_{\bullet}$ is $K$-flat. \end{prop} \begin{proof} First note that $X_{\bullet}$ is an $(\aleph_{0};\textbf{PureMon})$-extension of bounded below complexes satisfying the hypotheses of the proposition. Since $\mathpzc{E}$ is weakly $\textbf{PureMon}$-elementary, we may in fact assume that $X_{\bullet}$ is bounded below.
Now $X_{\bullet}$ is obtained from the complex $S^{0}(X_{0})$ by a transfinite composition of pushouts of the form \begin{displaymath} \xymatrix{ S^{k}(F)\ar[r]\ar[d] & A\ar[d]\\ D^{k+1}(F)\ar[r] & B } \end{displaymath} where $F$ is a flat object. By induction each $A$ and $B$ consists of flat objects and the map $A\rightarrow B$ is a pure monomorphism. It follows that we may assume that $X$ is concentrated in degree $0$, in which case the result is clear. \end{proof} Dually one defines cotorsion objects. \begin{defn} Suppose that $(\mathpzc{M},\mathcal{W})$ is a homotopical category with a closed monoidal structure such that the internal hom functor $\underline{Hom}(-,O)$ is right derivable for any object $O$ of $\mathpzc{M}$. If $\mathfrak{O}$ is a class of objects in $\mathpzc{M}$, an object $X$ is said to be $\mathfrak{O}$-K-cotorsion if the map $\underline{Hom}(X,O)\rightarrow\mathbb{R}\underline{Hom}(X,O)$ is an equivalence for any $O\in\mathfrak{O}$. When $\mathfrak{O}$ is the class of finite products of copies of the monoidal unit $k$, we say that $X$ is \textbf{finitely }$K$-\textbf{cotorsion}. \end{defn} Now suppose that $\mathpzc{E}$ is a closed monoidal exact category, and consider the homotopical category $(Ch(\mathpzc{E}),\mathcal{W})$. Let us assume that $\underline{Hom}$ is right-derivable. Notice that to check a complex $X_{\bullet}$ is finitely $K$-cotorsion, it is necessary and sufficient to check that $\underline{Hom}(X_{\bullet},k)\rightarrow\mathbb{R}\underline{Hom}(X_{\bullet},k)$ is an equivalence. In particular the functor $\underline{Hom}(-,k)$ preserves weak equivalences between finitely $K$-cotorsion objects. A similar proof to Proposition \ref{boundedKflat} gives the following. \begin{prop}\label{boundedbelowKcotors} Any bounded below complex of $\mathfrak{O}$-cotorsion objects is $\mathfrak{O}$-K-cotorsion.
\end{prop} \subsubsection{Duality} For a closed monoidal model category $\mathpzc{M}$ with monoidal unit $R$, we denote by $(-)^{\vee}:\mathpzc{M}\rightarrow\mathpzc{M}^{op}$ the functor $\underline{Hom}(-,R)$. Since $\mathpzc{M}$ is closed monoidal this is derivable to a functor $$\mathbb{R}(-)^{\vee}\defeq\mathbb{R}\underline{Hom}(-,R):Ho(\mathpzc{M})\rightarrow Ho(\mathpzc{M})^{op}$$ By abuse of notation we denote by $\mathbb{R}(-)^{\vee\vee}:Ho(\mathpzc{M})\rightarrow Ho(\mathpzc{M})$ the composition $\mathbb{R}(-)^{\vee}\circ \mathbb{R}(-)^{\vee}$. Note that in $Ho(\mathpzc{M})$ there is a map $V\rightarrow\mathbb{R}(V)^{\vee\vee}$. \begin{defn} An object $V$ is said to be \textbf{homotopically reflexive} if the map $V\rightarrow \mathbb{R}V^{\vee \vee}$ is an isomorphism in $Ho(\mathpzc{M})$. \end{defn} If $V$ and its dual are both finitely $K$-cotorsion then $\mathbb{R}(V)^{\vee\vee}$ is equivalent to $V^{\vee\vee}$, and such an object is homotopically reflexive if and only if the map $V\rightarrow V^{\vee\vee}$ is a weak equivalence. \subsection{Koszul Categories} We are now ready to define what we mean by a Koszul category. \begin{defn} A \textbf{Koszul category} is a monoidal model category $\mathpzc{M}$ of the form ${}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ where: \begin{enumerate} \item $\mathpzc{E}$ is a weakly \textbf{PureMon}-elementary symmetric monoidal exact category, and the monoidal structure on $Ch(\mathpzc{E})$ is the one induced from $\mathpzc{E}$. \item $R$ is a commutative monoid in $Ch(\mathpzc{E})$. \item $Ch(\mathpzc{E})$ is equipped with a combinatorial, monoidal model structure satisfying the monoid axiom such that cofibrations are pure monomorphisms whose cokernels are $K$-flat objects. \item ${}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ is equipped with the transferred model structure. \end{enumerate} A Koszul category is said to be \textbf{closed} if it is a closed monoidal model category.
\end{defn} \subsubsection{Construction of Koszul Categories} Let us give a source of Koszul categories. \begin{defn}\label{pre-Koszul} A \textbf{pre-Koszul category} is a monoidal exact category $(\mathpzc{E},\otimes)$, together with a model structure on $Ch(\mathpzc{E})$ such that \begin{enumerate} \item $\mathpzc{E}$ is weakly $\textbf{PureMon}$-elementary. \item the weak equivalences are the quasi-isomorphisms. \item with the monoidal structure induced by $\otimes$ on $Ch(\mathpzc{E})$, the model structure is monoidal. \item the model structure is cofibrantly generated. \item there is a set of objects $\{G_{i}\}_{i\in\mathcal{I}}$ and a set of pure monomorphisms $\{f_{l}\}_{l\in\mathcal{L}}$ such that $$\{S^{n}(G_{i})\rightarrow D^{n+1}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}\cup\{0\rightarrow D^{n}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}\cup\{S^{n}(f_{l})\}_{n\in{\mathbb Z},l\in\mathcal{L}}$$ is a set of generating cofibrations. \item there is a set of pure monomorphisms $\{g_{k}\}_{k\in\mathcal{K}}$ in $\mathpzc{E}$ such that $\{D^{n}(g_{k})\}_{n\in{\mathbb Z},k\in\mathcal{K}}$ is a collection of generating acyclic cofibrations. \end{enumerate} A pre-Koszul category is said to be \textbf{projective} if $\{f_{l}\}_{l\in\mathcal{L}}=\emptyset$, and every $g_{k}$ is of the form $0\rightarrow F$ for $F$ flat. \end{defn} The meaning behind the term \textit{projective} is that such a model structure behaves very much like a projective model structure, in that the cofibrations are degree-wise split. The collection $$\mathfrak{G}=\{S^{n}(G_{i})\rightarrow D^{n+1}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}\cup\{0\rightarrow D^{n}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}\cup\{S^{n}(f_{l})\}_{n\in{\mathbb Z},l\in\mathcal{L}}$$ is called the \textbf{cofibrancy data} of $\mathpzc{E}$. \begin{prop} Let $\mathpzc{E}$ be a pre-Koszul category. Suppose that $f:X\rightarrow Y$ is a (trivial) cofibration in $Ch(\mathpzc{E})$.
Then $f$ is a pure monomorphism and $coker(f)$ is a (trivially) cofibrant object. \end{prop} \begin{proof} We first note that $f$ is a retract of a transfinite composition of pushouts of maps in $$\{S^{n}(G_{i})\rightarrow D^{n+1}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}\cup\{0\rightarrow D^{n}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}\cup\{S^{n}(f_{l})\}_{n\in{\mathbb Z},l\in\mathcal{L}}$$ and is therefore a pure monomorphism. Let $C$ be the cokernel of $f:X\rightarrow Y$. Suppose that \begin{displaymath} \xymatrix{ 0\ar[d]\ar[r] & A\ar[d]\\ C\ar[r] & B } \end{displaymath} is a commutative diagram and that $A\rightarrow B$ is a trivial fibration. Then there is a commutative diagram \begin{displaymath} \xymatrix{ X\ar[d]^{f}\ar[r] & A\ar[d]\\ Y\ar[r] & B } \end{displaymath} where the map $X\rightarrow A$ is the zero map. Thus there is a lift $h:Y\rightarrow A$ in this diagram. But $h\circ f=0$, so this gives a lift $\tilde{h}:C\rightarrow A$ in the first diagram. The same proof works for trivial cofibrations. \end{proof} \begin{prop} If $C$ is a cofibrant object, $X$ is any object, and one of them is trivial, then $C\otimes X$ is trivial. \end{prop} \begin{proof} The proof of this part of Proposition 9.4 in \cite{kelly2016projective} works in this generality. \end{proof} We also have: \begin{cor} $Ch(\mathpzc{E})$ satisfies the monoid axiom. \end{cor} Let $\mathpzc{E}$ be a pre-Koszul category and consider the model category $Ch(\mathpzc{E})$. Since, by assumption, the model structure on $Ch(\mathpzc{E})$ is monoidal and satisfies the monoid axiom, for any unital commutative monoid $R\in\mathpzc{Alg}_{\mathfrak{Comm}}(Ch(\mathpzc{E}))$ the transferred model structure exists on ${}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$. Moreover this model structure is monoidal and satisfies the monoid axiom (Theorem 4.1 in \cite{schwede}). \begin{cor} Let $\mathpzc{E}$ be a pre-Koszul category, and $R\in\mathpzc{Alg}_{\mathfrak{Comm}}(Ch(\mathpzc{E}))$.
Then with the induced model structure ${}_{R}\mathpzc{Mod}$ is a Koszul category. \end{cor} \begin{defn} A Koszul category $\mathpzc{M}$ is said to be \textbf{strong} if it arises from a pre-Koszul category and a map $X\rightarrow Y$ in $\mathpzc{M}$ is a (trivial) cofibration if and only if it is an admissible monomorphism whose cokernel is a (trivially) cofibrant object. \end{defn} In the terminology of \cite{vst2012exact}, this means that the weak factorisation systems determining the model structure are \textit{exact weak factorisation systems}. Equivalently, the weak factorisation systems arise from cotorsion pairs (see \cite{vst2012exact} Section 6). \begin{prop} Let $\mathpzc{M}$ be a Koszul category. Suppose a map $g:Y\rightarrow Z$ is a (trivial) fibration if and only if it is an admissible epimorphism with (trivially) fibrant kernel. Then $\mathpzc{M}$ is strong Koszul. \end{prop} \begin{proof} Let $f:X\rightarrow Y$ be an admissible monomorphism in $\mathpzc{M}$ whose cokernel $C$ is (trivially) cofibrant. First we show that if $F$ is trivially fibrant/fibrant in $\mathpzc{M}$ and $C$ is cofibrant/trivially cofibrant, then $Ext^{1}(C,F)=0$. Let $$0\rightarrow F\rightarrow G\rightarrow C\rightarrow 0$$ be an exact sequence. Then $G\rightarrow C$ is a trivial fibration/fibration. Since the map $0\rightarrow C$ is a cofibration, there is a lift in the diagram \begin{displaymath} \xymatrix{ 0\ar[d]\ar[r] & G\ar[d]\\ C\ar[r] & C } \end{displaymath} This says precisely that the sequence is split, so $Ext^{1}(C,F)=0$. Now the result follows from Lemma 5.14 in \cite{vst2012exact}. \end{proof} \begin{cor} If $Ch(\mathpzc{E})$ is a strong Koszul category and $R\in\mathpzc{Alg}_{\mathfrak{Comm}}(Ch(\mathpzc{E}))$ then $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ is strong Koszul. \end{cor} The fact that the weak factorisation systems are determined by cotorsion pairs also has the following consequence.
\begin{cor}\label{middlecofibrant} If $\mathpzc{M}$ is strong Koszul and $0\rightarrow X\rightarrow Y\rightarrow Z\rightarrow 0$ is an exact sequence such that $X$ and $Z$ are (trivially) cofibrant then so is $Y$. \end{cor} We will make use of a stricter notion than strongness. \begin{defn} A strong Koszul category $\mathpzc{M}$ is said to be \textbf{hereditary} if whenever $f:X\rightarrow Y$ is an admissible epimorphism between (trivially) cofibrant objects, then $Ker(f)$ is (trivially) cofibrant. \end{defn} In the language of cotorsion pairs this is equivalent to the condition that the cotorsion pairs determining the weak factorisation systems are hereditary. \begin{prop} Suppose that $Ch(\mathpzc{E})$ is a hereditary Koszul category. If $R\in\mathpzc{Alg}_{\mathfrak{Comm}}(Ch(\mathpzc{E}))$ then $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ is hereditary Koszul. \end{prop} \begin{proof} By Lemma 6.17 in \cite{vst2012exact} it suffices to show that if $f:X\rightarrow Y$ is an admissible epimorphism where $X$ and $Y$ are (trivially) fibrant, then $Ker(f)$ is (trivially) fibrant. But $Ker(f)$ is (trivially) fibrant precisely if its underlying complex is, and this is true since $Ch(\mathpzc{E})$ is hereditary. \end{proof} The utility of the hereditary property is the following $2$-out-of-$3$ result for cofibrations. \begin{prop}\label{23cofibhered} Let $f:X\rightarrow Y$ and $g:Y\rightarrow Z$ be maps in a hereditary Koszul category $\mathpzc{M}$. Suppose that $g\circ f$ and $g$ are (acyclic) cofibrations. Then $f$ is an (acyclic) cofibration. \end{prop} \begin{proof} The Obscure Lemma implies that $f$ is an admissible monomorphism. Moreover the Snake Lemma implies that there is an exact sequence $$0\rightarrow coker(f)\rightarrow coker(g\circ f)\rightarrow coker(g)\rightarrow 0$$ $coker(g\circ f)$ and $coker(g)$ are (trivially) cofibrant objects by strongness. Therefore $coker(f)$ is (trivially) cofibrant by the hereditary property.
\end{proof} Another useful property is \textit{projectivity}. \begin{defn} A strong Koszul category is said to be \textbf{projective} if it arises from a projective pre-Koszul category. \end{defn} If $\mathpzc{E}$ is monoidal elementary (see \cite{kelly2016projective}) and $Ch(\mathpzc{E})$ is equipped with the projective model structure, then $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ is projective Koszul. Moreover $\mathpzc{M}$ is also hereditary Koszul in this case. Examples of this include $\mathpzc{E}={}_{R}\mathpzc{Mod}(\mathpzc{Ab})$ for any ring $R$, and $\mathpzc{E}=Ind(Ban_{R})$ for $R$ a Banach ring. \begin{defn} A Koszul category $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ is said to be \textbf{elementary} if $\mathpzc{E}$ is an elementary exact category, and the model structure on $Ch(\mathpzc{E})$ is the projective one. \end{defn} \subsubsection{Connective Koszul Categories} For $n\in{\mathbb Z}$ let $Ch_{\ge n}(\mathpzc{E})$ denote the full subcategory of $Ch(\mathpzc{E})$ consisting of objects concentrated in degrees $\ge n$. There is an obvious inclusion functor $Ch_{\ge n}(\mathpzc{E})\rightarrow Ch(\mathpzc{E})$. It has both a left and a right adjoint, but we will only be concerned with the right adjoint $\tau_{\ge n}$ defined in the introduction. The following is straightforward to check. \begin{prop} Let $\mathpzc{E}$ be a pre-Koszul category, and let $g:X\rightarrow Y$ be an acyclic fibration in $Ch(\mathpzc{E})$. Then $i_{\ge n}\circ\tau_{\ge n}(g)$ is an acyclic fibration. \end{prop} \begin{thm}\label{connective} Consider the adjunction $$\adj{i_{\ge n}}{Ch(\mathpzc{E})_{\ge n}}{Ch(\mathpzc{E})}{\tau_{\ge n}}$$ The right-transferred model structure exists on $Ch(\mathpzc{E})_{\ge n}$. Moreover it is monoidal and satisfies the monoid axiom. 
\end{thm} \begin{proof} Let $I_{\ge n}$ be the collection $$\{S^{m}(G_{i})\rightarrow D^{m+1}(G_{i})\}_{m\ge n,i\in\mathcal{I}}\cup\{0\rightarrow D^{m+1}(G_{i})\}_{m\ge n,i\in\mathcal{I}}\cup\{S^{m}(f_{l})\}_{m\ge n,l\in\mathcal{L}}\cup\{0\rightarrow S^{m}(G_{i})\}_{m\ge n}$$ and let $J_{\ge n}$ be the collection $$\{D^{m}(g_{k})\}_{m\ge n,k\in\mathcal{K}}$$ where $f_{l}$ and $g_{k}$ are as in Definition \ref{pre-Koszul}. We are going to use Hovey's recognition theorem (\cite{hoveybook}, Theorem 2.1.19). The condition on smallness of the domains of $I_{\ge n}$ and $J_{\ge n}$ is clear. Every cofibration $f$ in $Ch(\mathpzc{E})$ which is concentrated in degrees $\ge n$ can be written as a retract of a transfinite composition of pushouts of maps in $I_{\ge n}$. Since each map in $J_{\ge n}$ is a trivial cofibration in $Ch(\mathpzc{E})$ concentrated in degrees $\ge n$, we have $J_{\ge n}\subset I_{\ge n}-cof\cap\mathcal{W}$. The class $I_{\ge n}-cof\cap\mathcal{W}$ is closed under pushouts, retracts, and transfinite composition. Therefore $J_{\ge n}-cell\subset I_{\ge n}-cof\cap\mathcal{W}$. Now we claim that $(I_{\ge n})-inj$ is the collection of maps $g$ such that $i_{\ge n}(g)$ is an acyclic fibration in $Ch(\mathpzc{E})$. Indeed, suppose $g\in (I_{\ge n})-inj$. Then $i_{\ge n}(g)$ has the right lifting property with respect to all maps $$\{S^{m}(G_{i})\rightarrow D^{m+1}(G_{i})\}_{m\ge n,i\in\mathcal{I}}\cup\{0\rightarrow D^{m+1}(G_{i})\}_{m\ge n,i\in\mathcal{I}}\cup\{S^{m}(f_{l})\}_{m\ge n,l\in\mathcal{L}}\cup\{0\rightarrow S^{m}(G_{i})\}_{m\ge n}$$ We claim it also has the right lifting property with respect to all maps of the form $$\{S^{m}(G_{i})\rightarrow D^{m+1}(G_{i})\}_{m< n,i\in\mathcal{I}}\cup\{0\rightarrow D^{m+1}(G_{i})\}_{m< n,i\in\mathcal{I}}\cup\{S^{m}(f_{l})\}_{m< n,l\in\mathcal{L}}$$ Clearly the only maps for which something could go wrong are those of the form $S^{n-1} (G_{i})\rightarrow D^{n}(G_{i})$. 
But since the domain and codomain of $i_{\ge n}(g)$ are concentrated in degrees $\ge n$, a lifting against $S^{n-1} (G_{i})\rightarrow D^{n}(G_{i})$ is equivalent to a lifting against $0\rightarrow S^{n}(G_{i})$, which exists by assumption. Conversely, if $i_{\ge n}(g)$ is an acyclic fibration then $g$ clearly has the right lifting property with respect to $I_{\ge n}$. Next we claim that $I_{\ge n}-cof$ is the collection of maps $f:A\rightarrow B$ such that $i_{\ge n}(f)$ is a cofibration in $Ch(\mathpzc{E})$. Indeed suppose $f$ has the left-lifting property with respect to all maps $g$ in $(I_{\ge n})-inj$. Then $i_{\ge n}(f)$ has the left-lifting property with respect to all acyclic fibrations $g:X\rightarrow Y$ which are in the image of $i_{\ge n}$. Let $\tilde{g}$ be any acyclic fibration in $Ch(\mathpzc{E})$. Any diagram \begin{displaymath} \xymatrix{ i_{\ge n}A\ar[d]^{i_{\ge n}f}\ar[r] & \tilde{X}\ar[d]^{\tilde{g}}\\ i_{\ge n}B\ar[r]& \tilde{Y} } \end{displaymath} factors through \begin{displaymath} \xymatrix{ i_{\ge n}A\ar[d]^{i_{\ge n}f}\ar[r] & i_{\ge n}\circ\tau_{\ge n}(\tilde{X})\ar[d]^{i_{\ge n}\circ\tau_{\ge n}(\tilde{g})}\\ i_{\ge n}B\ar[r]& i_{\ge n}\circ\tau_{\ge n}(\tilde{Y}) } \end{displaymath} By the previous proposition $i_{\ge n}\circ\tau_{\ge n}(\tilde{g})$ is an acyclic fibration in $Ch(\mathpzc{E})$, so $\tau_{\ge n}(\tilde{g})$ has the right lifting property with respect to $f$. Hence there is a lift in the second diagram, and therefore in the first one. Finally we claim that $J_{\ge n}-cof$ is the collection of maps $f$ such that $i_{\ge n}(f)$ is an acyclic cofibration in $Ch(\mathpzc{E})$. Let $f\in J_{\ge n}-cof$, and let $g:X\rightarrow Y$ be a fibration in $Ch(\mathpzc{E})$. Suppose that \begin{displaymath} \xymatrix{ i_{\ge n}A\ar[d]^{i_{\ge n}f}\ar[r] & X\ar[d]^{g}\\ i_{\ge n}B\ar[r]& Y } \end{displaymath} is a commutative diagram in $Ch(\mathpzc{E})$. 
As before, this diagram factors through \begin{displaymath} \xymatrix{ i_{\ge n}A\ar[d]^{i_{\ge n}f}\ar[r] & i_{\ge n}\circ\tau_{\ge n}(X)\ar[d]^{i_{\ge n}\circ\tau_{\ge n}(g)}\\ i_{\ge n}B\ar[r]& i_{\ge n}\circ\tau_{\ge n}(Y) } \end{displaymath} Since $g$ has the right-lifting property with respect to $J_{\ge n}$, there is a lift in the second diagram, and hence in the first. Thus $i_{\ge n}(f)$ is an acyclic cofibration. The fact that it is monoidal and satisfies the monoid axiom is clear. \end{proof} \begin{defn} A monoidal model category $\mathpzc{M}_{\ge n}$ of the form ${}_{R}\mathpzc{Mod}(Ch(\mathpzc{E})_{\ge n})$, where $n\in{\mathbb Z}$, $\mathpzc{E}$ is a connective pre-Koszul category, $Ch(\mathpzc{E})$ is equipped with the corresponding model structure, $Ch(\mathpzc{E})_{\ge n}$ is equipped with the right-transferred model structure, $R\in\mathpzc{Alg}_{\mathfrak{Comm}}(Ch_{\ge n}(\mathpzc{E}))$, and ${}_{R}\mathpzc{Mod}(Ch(\mathpzc{E})_{\ge n})$ is equipped with the left-transferred model structure, is called an $n$-\textbf{connective Koszul category}. \end{defn} \subsection{Filtrations and Gradings in Koszul Categories} Many of the algebraic objects we study will be equipped with filtrations. In \cite{kelly2016projective} we studied in detail categories of filtered objects in exact categories, and in monoidal exact categories. Let us recall the definition of a filtered object. \begin{defn} Let $A$ be an object of $\mathpzc{E}$ and $\mathcal{S}$ a class of morphisms in $\mathpzc{E}$. An $\mathcal{S}$-\textbf{subobject} of $A$ is a map $s:A'\rightarrow A$ in $\mathcal{S}$. An $\mathcal{S}$-\textbf{filtration} of $A$ consists of a collection of $\mathcal{S}$-subobjects of $A$, $\{\alpha_{i}:A_{i}\rightarrow A\}_{i\in\mathbb{N}_{0}}$, together with maps $a_{i}:A_{i}\rightarrow A_{i+1}$ in $\mathcal{S}$ such that $\alpha_{i+1}\circ a_{i}=\alpha_{i}$. 
An $\mathcal{S}$-\textbf{filtered object} of $\mathpzc{E}$ is a tuple of data $((A)_{top},\alpha_{i},a_{i})$ where $(A)_{top}$ is an object of $\mathpzc{E}$ and $(\alpha_{i},a_{i})$ is an $\mathcal{S}$-filtration of $(A)_{top}$. A \textbf{morphism of filtered objects} $g:((A)_{top},\alpha_{i},a_{i})\rightarrow((B)_{top},\beta_{i},b_{i})$ consists of a collection of morphisms $\{g_{i}:A_{i}\rightarrow B_{i}\}_{i\in\mathbb{N}_{0}}$, and $(g)_{top}:(A)_{top}\rightarrow (B)_{top}$ such that $g_{i+1}\circ a_{i}=b_{i}\circ g_{i}$ and $(g)_{top}\circ \alpha_{i}=\beta_{i}\circ g_{i}$ for all $i\in\mathbb{N}_{0}$. $\mathcal{S}$-filtered objects and morphisms of $\mathcal{S}$-filtered objects can then be organised into an additive category $\mathpzc{Filt}_{\mathcal{S}}(\mathpzc{E})$. \end{defn} There is a functor $$gr:\mathpzc{Filt}_{\mathcal{S}}(\mathpzc{E})\rightarrow\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{E})$$ called the \textbf{associated graded functor} which we will make extensive use of. It sends a filtered object $((A)_{top},\alpha_{i},a_{i})$ to $\bigoplus_{n=0}^{\infty}A_{n}\big\slash A_{n-1}=\bigoplus_{n=0}^{\infty}gr_{n}(A)$, where by convention $A_{-1}=0$. We also let $(-)_{top}:\overline{\mathpzc{Filt}}_{\textbf{RegMon}}(\mathpzc{M})\rightarrow\mathpzc{M}$ denote the functor sending a filtered object $((A)_{top},\alpha_{i},a_{i})$ to $(A)_{top}$. \begin{rem} In \cite{kelly2016projective} we denoted this functor by $(-)_{\infty}$. However we shall be applying this functor to cooperads, for which the notation $\mathfrak{C}_{\infty}$ already has a meaning in terms of homotopy cooperads. 
\end{rem} \begin{defn} A filtered object $((A)_{top},\alpha_{i},a_{i})$ is said to be \textbf{exhaustive} if $(A)_{top}$ together with the maps $\alpha_{i}:A_{i}\rightarrow (A)_{top}$ is a direct limit of the diagram \begin{displaymath} \xymatrix{ A_{0}\ar[r]^{a_{0}} & A_{1}\ar[r]^{a_{1}} & A_{2}\ar[r] &\ldots } \end{displaymath} The full subcategory of $\mathpzc{Filt}_{\mathcal{S}}(\mathpzc{E})$ on objects equipped with an exhaustive filtration will be denoted by $\overline{\mathpzc{Filt}}_{\mathcal{S}}(\mathpzc{E})$. \end{defn} We let $\textbf{RegMon}$ denote the class of regular monomorphisms in $\mathpzc{E}$ (i.e. morphisms which are kernels of their cokernels), and $\textbf{AdMon}$ denote the class of admissible monomorphisms. Constructing limits and colimits in this category is a somewhat subtle enterprise. However for cokernels, which are in fact all we will really need, results of \cite{kelly2016projective} yield the following. \begin{prop}\label{filteredcokernels} Let $g:((A)_{top},\alpha_{i},a_{i})\rightarrow((B)_{top},\beta_{i},b_{i})$ be a morphism in $\mathpzc{Filt}_{\textbf{AdMon}}(\mathpzc{M})$ where each $g_{i}$ for $0\le i\le\infty$ is an admissible monomorphism. Then for each $0\le i<\infty$ the maps $\gamma_{i}:coker(g_{i})\rightarrow coker((g)_{top})$ and $c_{i}:coker(g_{i})\rightarrow coker(g_{i+1})$ are admissible monomorphisms, and the object $(coker((g)_{top}),\gamma_{i},c_{i})$ is a cokernel of $g$. If $g$ is a map in $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{M})$ then this is also the cokernel in $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{M})$. Finally, if $g$ is a map in $\mathpzc{Filt}_{\textbf{PureMon}}(\mathpzc{M})$ (resp. $\overline{\mathpzc{Filt}}_{\textbf{PureMon}}(\mathpzc{M})$) and each $g_{i}$ is a pure monomorphism, then this is also the cokernel in $\mathpzc{Filt}_{\textbf{PureMon}}(\mathpzc{M})$ (resp. $\overline{\mathpzc{Filt}}_{\textbf{PureMon}}(\mathpzc{M})$). 
\end{prop} \begin{defn} Let $\mathpzc{M}$ be a Koszul category. An object $((A)_{top},\alpha_{i},a_{i})$ of $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{M})$ is said to be \textbf{filtered cofibrant} if each of the maps $a_{i}:A_{i}\rightarrow A_{i+1}$ is a cofibration. A map $f:A\rightarrow B$ between filtered objects is said to be a \textbf{filtered cofibration} if it is an admissible monomorphism with filtered cofibrant cokernel, and at each level of the filtration it is a cofibration. \end{defn} If $\mathpzc{M}$ is a Koszul category then $\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{M})$ is a model category in which weak equivalences, cofibrations, and fibrations are defined degree-wise. \begin{prop} Let $\mathpzc{M}$ be a hereditary Koszul category. A map $f:A\rightarrow B$ in $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{M})$ is a filtered cofibration if and only if $gr(f)$ is a degree-wise admissible monomorphism with graded cofibrant cokernel. \end{prop} \begin{proof} Suppose $f$ is a filtered cofibration. By Proposition \ref{filteredcokernels}, $gr(f)$ is a degree-wise admissible monomorphism with graded cofibrant cokernel. The converse follows by an easy inductive argument using Corollary \ref{middlecofibrant}. \end{proof} \begin{cor}\label{twooutofthreefiltcofib} Let $\mathpzc{M}$ be a hereditary Koszul category. If $f:X\rightarrow Y$ and $g:Y\rightarrow Z$ are morphisms in $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{M})$ such that $g\circ f$ and $g$ are filtered cofibrations then so is $f$. \end{cor} For $1\le j\le k$ let $A^{j}=((A)_{top}^{j},\alpha^{j},a^{j})$ be in $\mathpzc{Filt}_{\textbf{RegMon}}(\mathpzc{M})$. 
We define an object $\bigotimes_{j=1}^{k}A^{j}$ of $\mathpzc{Filt}_{\textbf{RegMon}}(\mathpzc{M})$ by $$(\bigotimes_{j=1}^{k}A^{j})_{\infty}=\bigotimes_{j=1}^{k}A^{j}_{\infty}$$ and $$(\bigotimes_{j=1}^{k}A^{j})_{n}=Im(\bigoplus_{i_{1}+\ldots+i_{k}\le n}A^{1}_{i_{1}}\otimes\ldots\otimes A^{k}_{i_{k}}\rightarrow \bigotimes_{j=1}^{k}A^{j}_{\infty})$$ We recall the proof of the following result from \cite{kelly2016projective}. \begin{prop}\label{gradestronmon} For $1\le j\le k$ let $A^{j}=((A)_{top}^{j},\alpha^{j},a^{j})$ be a filtered object. \begin{enumerate} \item Suppose that for each $n$ the map $$\bigoplus_{i_{1}+\ldots+i_{k}=n}A^{1}_{i_{1}}\otimes\ldots\otimes A^{k}_{i_{k}}\rightarrow A^{1}_{\infty}\otimes\ldots\otimes A^{k}_{\infty}$$ is admissible. Then the map $\bigotimes_{j=1}^{k}\textrm{gr}(A^{j})\rightarrow\textrm{gr}\Bigl(\bigotimes_{j=1}^{k}A^{j}\Bigr)$ is an admissible epimorphism.\\ \item If in addition for each $1\le j\le k$ and each $0\le i<\infty$ each map $A^{j}_{i}\rightarrow A^{j}_{i+1}$ is a pure monomorphism, then the map $\bigotimes_{j=1}^{k}\textrm{gr}(A^{j})\rightarrow\textrm{gr}\Bigl(\bigotimes_{j=1}^{k}A^{j}\Bigr)$ is an isomorphism. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item Let $I_{n}$ denote the image of the map $\bigoplus_{i_{1}+\ldots+i_{k}=n}A^{1}_{i_{1}}\otimes\ldots\otimes A^{k}_{i_{k}}\rightarrow A^{1}_{\infty}\otimes\ldots\otimes A^{k}_{\infty}$. By the Obscure Lemma the map $I_{n}\rightarrow I_{n+1}$ is an admissible monomorphism. Moreover the map $\bigoplus_{i_{1}+\ldots+i_{k}=n}A^{1}_{i_{1}}\otimes\ldots\otimes A^{k}_{i_{k}}\rightarrow I_{n}$ is an admissible epimorphism. Hence the map $\bigoplus_{i_{1}+\ldots+i_{k}=n+1}A^{1}_{i_{1}}\otimes\ldots\otimes A^{k}_{i_{k}}\rightarrow I_{n+1}\big\slash I_{n}$ is an admissible epimorphism. The Obscure Lemma then implies the result. 
\item Suppose now that for each $1\le j\le k$ and each $0\le i<\infty$ the map $A^{j}_{i}\rightarrow A^{j}_{i+1}$ is a pure monomorphism. Equivalently $0\rightarrow A^{j}_{i}\rightarrow A^{j}_{i+1}\rightarrow\textrm{gr}_{i+1}(A^{j})\rightarrow 0$ is a pure exact sequence. By tensoring there is an induced $k$-dimensional chain complex which is exact along each axis. There is an exact sequence $$\bigoplus_{l=1}^{k}A^{1}_{i_{1}+1}\otimes\ldots\otimes A^{l}_{i_{l}}\otimes\ldots\otimes A^{k}_{i_{k}+1}\rightarrow A^{1}_{i_{1}+1}\otimes\ldots\otimes A^{k}_{i_{k}+1}\rightarrow\textrm{gr}_{i_{1}+1}(A^{1})\otimes\ldots\otimes\textrm{gr}_{i_{k}+1}(A^{k})\rightarrow 0$$ Moreover this is a pure exact sequence. Hence there is a pure exact sequence $$0\rightarrow\sum_{l=1}^{k}A^{1}_{i_{1}+1}\otimes\ldots\otimes A^{l}_{i_{l}}\otimes\ldots\otimes A^{k}_{i_{k}+1}\rightarrow A^{1}_{i_{1}+1}\otimes\ldots\otimes A^{k}_{i_{k}+1}\rightarrow\textrm{gr}_{i_{1}+1}(A^{1})\otimes\ldots\otimes\textrm{gr}_{i_{k}+1}(A^{k})\rightarrow 0$$ This completes the proof. \end{enumerate} \end{proof} We let $\overline{\mathpzc{Filt}}_{\textbf{PureMon}}^{K}(\mathpzc{M})$ denote the full subcategory of $\overline{\mathpzc{Filt}}_{\textbf{PureMon}}(\mathpzc{M})$ consisting of filtered objects $A$ such that for each $i\in\mathbb{N}$, $\textrm{gr}_{i}(A)$ is $K$-flat. \subsubsection{$\Sigma_{n}$-filtrations} For associativity reasons, in general this does not endow $\overline{\mathpzc{Filt}}_{\textbf{RegMon}}(\mathpzc{M})$ with a monoidal structure. However there is a natural $\Sigma_{k}$ action on $\bigotimes_{j=1}^{k}A^{j}$ which is functorial. This brings us to a discussion of $\Sigma_{n}$-filtrations. Note that there is an obvious equivalence of categories between the category $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}({}_{\Sigma_{n}}\mathpzc{Mod})$ of filtered $\Sigma_{n}$-modules and the category of $\Sigma_{n}$-modules in $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{Mod})$. 
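Concretely, unwinding this equivalence (our paraphrase): a $\Sigma_{n}$-module structure on a filtered object $((A)_{top},\alpha_{i},a_{i})$ amounts to $\Sigma_{n}$-actions on $(A)_{top}$ and on each $A_{i}$ for which all of the structure maps are equivariant, i.e. for every $\sigma\in\Sigma_{n}$ and every $i\in\mathbb{N}_{0}$ $$a_{i}\circ\sigma=\sigma\circ a_{i},\qquad\alpha_{i}\circ\sigma=\sigma\circ\alpha_{i}$$ and this is precisely the data of a filtration of the $\Sigma_{n}$-module $(A)_{top}$ by $\Sigma_{n}$-equivariant admissible monomorphisms.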
\begin{defn} An object $((A)_{top},\alpha_{i},a_{i})$ of $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}({}_{\Sigma_{n}}\mathpzc{Mod})$ is said to \textbf{have admissibly filtered coinvariants} if the maps $(\alpha_{i})_{\Sigma_{n}}$ and $(a_{i})_{\Sigma_{n}}$ are admissible monomorphisms. The full subcategory of $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}({}_{\Sigma_{n}}\mathpzc{Mod})$ consisting of filtered $\Sigma_{n}$-modules which have admissibly filtered coinvariants is denoted $\overline{\mathpzc{Filt}}_{{}_{\Sigma_{n}}\textbf{AdMon}}({}_{\Sigma_{n}}\mathpzc{Mod})$. \end{defn} The following is clear. \begin{prop} \begin{enumerate} \item Let $((A)_{top},\alpha_{i},a_{i})$ be an object of $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{Mod})$. Then the free object $(\Sigma_{n}\otimes(A)_{top},\Sigma_{n}\otimes\alpha_{i},\Sigma_{n}\otimes a_{i})$ has admissibly filtered coinvariants. \item $\overline{\mathpzc{Filt}}_{{}_{\Sigma_{n}}\textbf{AdMon}}({}_{\Sigma_{n}}\mathpzc{Mod})$ is closed under taking summands in $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}({}_{\Sigma_{n}}\mathpzc{Mod})$. \item If $((A)_{top},\alpha_{i},a_{i})$ is in $\overline{\mathpzc{Filt}}_{{}_{\Sigma_{n}}\textbf{AdMon}}({}_{\Sigma_{n}}\mathpzc{Mod})$ then the natural map $gr(A)_{\Sigma_{n}}\rightarrow gr(A_{\Sigma_{n}})$ is an isomorphism. \end{enumerate} \end{prop} In particular if $\mathpzc{M}$ is a ${\mathbb Q}$-Koszul category then any admissibly filtered $\Sigma_{n}$-module has admissibly filtered coinvariants. Indeed in this case every $\Sigma_{n}$-module is a retract of a free one. \subsection{Quasi-Categories and Localization of Relative Categories} As we shall see, Koszul duality is most naturally formulated using $(\infty,1)$-categories rather than model categories. For concreteness we will fix quasi-categories as our model for $(\infty,1)$-categories. 
\begin{defn} A \textbf{relative category} or a \textbf{category with weak equivalences} is a pair $(\mathpzc{C},\mathcal{W})$ where $\mathpzc{C}$ is a category and $\mathcal{W}$ is a wide subcategory of $\mathpzc{C}$ containing all isomorphisms, and satisfying the two-out-of-three property, namely if $f:A\rightarrow B$ and $g:B\rightarrow C$ are morphisms in $\mathpzc{C}$ and two of $f,g,g\circ f$ are in $\mathcal{W}$, then so is the third. \end{defn} \begin{rem} If $\mathpzc{M}$ is a homotopical category, and $\mathcal{W}$ is the wide subcategory of weak equivalences in $\mathpzc{M}$, then $(\mathpzc{M},\mathcal{W})$ is a relative category. \end{rem} To a relative category $(\mathpzc{C},\mathcal{W})$ we can associate a quasi-category $\textbf{C}$ by Dwyer-Kan localisation \cite{dwyer1980simplicial}. If $\mathpzc{C}$ is a combinatorial simplicial model category and $\mathcal{W}$ is its subcategory of weak equivalences, then $\textbf{C}$ is a locally presentable $(\infty,1)$-category by Proposition A.3.7.6 in \cite{lurie2006higher}. \begin{defn} If $(\mathpzc{C},\mathcal{W})$ and $(\mathpzc{C}',\mathcal{W}')$ are relative categories, then a functor $F:\mathpzc{C}\rightarrow\mathpzc{C}'$ is said to be a \textbf{relative functor} if $F(f)\in\mathcal{W}'$ whenever $f\in\mathcal{W}$. A \textbf{relative adjunction} between relative categories is an adjunction $$\adj{F}{\mathpzc{C}}{\mathpzc{C}'}{G}$$ such that $F$ and $G$ are both relative functors. A relative adjunction is said to be a \textbf{relative equivalence} if the unit and counit of the adjunction are both component-wise weak equivalences. \end{defn} Corollary 3.6 in \cite{dwyer1980calculating} says the following. \begin{thm}\label{relequivuc} Let $$\adj{F}{\mathpzc{C}}{\mathpzc{C}'}{G}$$ be a relative equivalence. 
Then there is an induced adjoint equivalence of $(\infty,1)$-categories $$\adj{\textbf{F}}{\textbf{C}}{\textbf{C}'}{\textbf{G}}$$ \end{thm} \subsubsection{Infinity Categories of Filtered and Graded Objects} Let $\mathpzc{E}$ be a monoidal exact category and $R$ a commutative monoid in $Ch(\mathpzc{E})$. Let $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$. If $\mathcal{W}$ is the wide subcategory of $\mathpzc{M}$ on quasi-isomorphisms then $(\mathpzc{M},\mathcal{W})$ is a homotopical category. Denote by $\textbf{M}$ the $(\infty,1)$-categorical localization of $\mathpzc{M}$ at $\mathcal{W}$. Consider the category $\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{M})$. Denote by $\mathcal{W}_{gr}$ the class of graded quasi-isomorphisms in $\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{M})$. Then $(\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{M}), \mathcal{W}_{gr})$ is also a homotopical category. Now consider the category $\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{M})$. If $\mathcal{W}_{f}$ is the class of filtered quasi-isomorphisms then $(\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{M}),\mathcal{W}_{f})$ is also a homotopical category. Denote the $(\infty,1)$-categorical localization of this homotopical category by $\overline{\textbf{Filt}}(\textbf{M})$. By results of Chapter 5 in \cite{kelly2016projective} the functors $(-)_{n}:\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{M})\rightarrow \mathpzc{M}$ for $0\le n<\infty$, $(-)_{top}:\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{M})\rightarrow \mathpzc{M}$, and $gr:\overline{\mathpzc{Filt}}_{\textbf{AdMon}}(\mathpzc{M})\rightarrow \mathpzc{Gr}(\mathpzc{M})$ all induce functors of $(\infty,1)$-categories after localization. Moreover the induced functor $\textbf{gr}:\overline{\textbf{Filt}}(\textbf{M})\rightarrow\textbf{Gr}(\textbf{M})$ also reflects weak equivalences. Later we will need the following useful result, inspired by part of the proof of Proposition 2.2.12 in \cite{DAGX}. 
\begin{prop}\label{inductgraded} Let $\textbf{N}$ be an $(\infty,1)$-category, $\mathpzc{M}$ as above, and $F:\textbf{N}\rightarrow\overline{\textbf{Filt}}(\textbf{M})$ a functor. If $\textbf{gr}\circ F$ and $(-)_{0}\circ F$ preserve sifted colimits then so does $F$. \end{prop} \begin{proof} It suffices to show that each $(-)_{n}\circ F$ preserves sifted colimits. The proof is an easy induction. By assumption it is true for $n=0$. We suppose it has been shown for $n=k$. Now there is a homotopy fibre sequence of functors $$(-)_{k}\circ F \rightarrow (-)_{k+1}\circ F \rightarrow gr_{k+1}\circ F$$ Since the left- and right-hand functors preserve sifted colimits, so does the middle one. \end{proof} \section{Homotopy Theory of Algebras}\label{algcoalghom} In this section we shall show that categories of algebras over certain operads in a Koszul category $\mathpzc{M}$ admit rich homotopy theories. \subsection{Model Structures on Categories of Algebras} In \cite{kelly2016projective} we gave conditions on model structures on categories of chain complexes $Ch(\mathpzc{E})$ under which the operads $\mathfrak{Ass},\mathfrak{Comm}$, and $\mathfrak{Lie}$ are admissible. Namely, we required that $\mathpzc{E}$ be enriched over ${\mathbb Q}$, that $Ch(\mathpzc{E})$ admit the small object argument, and that the model structure have generating acyclic cofibrations of the form $0\rightarrow D^{n}(F)$. Using \cite{white2017model} it is easy to prove a vast generalisation of this. In this section $\mathpzc{M}$ will be a combinatorial monoidal model category satisfying the monoid axiom. Let $\mathfrak{P}$ be an operad in $\mathpzc{M}$, and consider the free-forgetful adjunction $$\adj{\textrm{Free}_{\mathfrak{P}}(-)}{\mathpzc{M}}{\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})}{|-|_{\mathfrak{P}}}$$ Recall that the \textbf{transferred model structure} on $\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})$, if it exists, is the one for which weak equivalences (resp. 
fibrations) are maps $f:A\rightarrow B$ of algebras such that $|f|_{\mathfrak{P}}$ is a weak equivalence (resp. fibration). \begin{defn} An operad $\mathfrak{P}$ is said to be \textbf{admissible} if the transferred model structure exists on $\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})$. \end{defn} \begin{defn} Let $\mathpzc{M}$ be a combinatorial monoidal model category satisfying the monoid axiom. Denote by $\mathcal{L}_{\Sigma_{n}}^{t}$ the collection of maps $f$ in ${}_{\Sigma_{n}}\mathpzc{Mod}$ such that the underlying map $|f|$ is a trivial cofibration. $\mathpzc{M}$ is said to satisfy the $\Sigma_{n}$-\textbf{equivariant monoid axiom} if transfinite compositions of pushouts of maps of the form $f\otimes_{\Sigma_{n}}X$, for $f\in\mathcal{L}_{\Sigma_{n}}^{t}$ and $X\in {}_{\Sigma_{n}}\mathpzc{Mod}$, are contained in the class of weak equivalences. \end{defn} The utility of this technical assumption is the following result, which is Corollary C.5 in \cite{white2017model}. \begin{thm} If $\mathpzc{M}$ is a combinatorial monoidal model category satisfying the monoid axiom and the $\Sigma_{n}$-equivariant monoid axiom, then every operad in $\mathpzc{M}$ is admissible. \end{thm} \begin{prop} Let $\mathpzc{M}$ be a combinatorial monoidal model category satisfying the monoid axiom. Suppose that $\mathpzc{M}$ is enriched over ${}_{{\mathbb Q}}\mathpzc{Vect}$. Then $\mathpzc{M}$ satisfies the $\Sigma_{n}$-equivariant monoid axiom. \end{prop} \begin{proof} Let $g$ be a transfinite composition of pushouts of maps of the form $f\otimes_{\Sigma_{n}}X$ with $f\in\mathcal{L}_{\Sigma_{n}}^{t}$. Since $\mathpzc{M}$ is enriched over ${}_{{\mathbb Q}}\mathpzc{Vect}$, $f\otimes_{\Sigma_{n}}X$ is a retract of $f\otimes X$, and therefore $g$ is a retract of a transfinite composition of pushouts of maps of the form $f\otimes X$. The latter is a weak equivalence since $\mathpzc{M}$ satisfies the monoid axiom, and hence so is $g$. \end{proof} The proof of Theorem 4.3 in \cite{spitzweck2001operads} gives the following. \begin{prop}\label{underlyingKflat} Let $\mathpzc{M}$ be a Koszul category, and $\mathfrak{P}$ an admissible operad such that each $\mathfrak{P}(n)$ is cofibrant as a $\Sigma_{n}$-module. 
If $\mathfrak{g}$ is a cofibrant $\mathfrak{P}$-algebra then its underlying complex is cofibrant. In particular it is $K$-flat. \end{prop} In fact we believe that cofibrant can be replaced by $K$-flat everywhere in this proposition, though we have not checked the technical details. \subsubsection{Standard Cofibrations} With the assurance that we at least have some examples of admissible operads in our more general setup, let us analyse the transferred model structures more closely. In particular we will study certain classes of cofibrations, called standard cofibrations, in strong Koszul categories. Though the idea is the same, because complexes in a general exact category do not split, the discussion of standard cofibrations as defined in \cite{vallette2014homotopy} Section 2.4 is significantly more involved. In fact, we shall study much more general classes of maps. Let $\{G_{i}\}_{i\in\mathcal{I}}$ be a collection of objects in $\mathpzc{E}$ and $\{f_{l}\}_{l\in\mathcal{L}}$ a collection of admissible monomorphisms in $\mathpzc{E}$. 
Denote by $\mathfrak{G}$ the collection of morphisms $$\{S^{n}(G_{i})\rightarrow D^{n+1}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}\cup\{0\rightarrow D^{n}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}\cup\{S^{n}(f_{l})\}_{n\in{\mathbb Z},l\in\mathcal{L}}$$ Denote by $preCell(R;\mathfrak{G})$ the collection of maps obtained by transfinite composition of pushouts of maps in $$\{R\otimes S^{n}(G_{i})\rightarrow R\otimes D^{n+1}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}\cup\{0\rightarrow R\otimes D^{n}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}$$ and by $Cell(R;\mathfrak{G})$ the collection of maps obtained by transfinite composition of pushouts of maps in $$\{R\otimes S^{n}(G_{i})\rightarrow R\otimes D^{n+1}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}\cup\{0\rightarrow R\otimes D^{n}(G_{i})\}_{n\in{\mathbb Z},i\in\mathcal{I}}\cup\{R\otimes S^{n}(f_{l})\}_{n\in{\mathbb Z},l\in\mathcal{L}}$$ \begin{defn} Denote by $Sull_{\mathfrak{P}}(\mathpzc{M};\mathfrak{G})$ the class of maps of $\mathfrak{P}$-algebras which are obtained as a retract of a transfinite composition of pushouts of maps of the form $\mathfrak{P}(f)$ for $f\in preCell(R;\mathfrak{G})$, and by $Cof_{\mathfrak{P}}(\mathpzc{M};\mathfrak{G})$ the class of maps of $\mathfrak{P}$-algebras which are obtained as a retract of a transfinite composition of pushouts of maps of the form $\mathfrak{P}(f)$ for $f\in Cell(R;\mathfrak{G})$. \end{defn} Note that if $\mathpzc{E}={}_{k}\mathpzc{Vect}$, $k$ is a field of characteristic $0$, and $\mathfrak{P}=\mathfrak{Comm}$, then a map in $Sull_{\mathfrak{P}}(Ch(\mathpzc{E});\mathfrak{G})$ is a relative Sullivan algebra as in \cite{hess2007rational} Section 2. Let $V$ be an object of $\mathpzc{M}$, $A$ an object of $\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})$, and $\alpha:V\rightarrow A$ be a degree $-1$ derivation of $R$-modules. 
There is then an induced map of graded objects $$V\rightarrow A\rightarrow A\coprod\mathfrak{P}(V)$$ By Proposition \ref{coproder} there is a unique derivation of degree $-1$ $$d_{\alpha}:A\coprod\mathfrak{P}(V)\rightarrow A\coprod\mathfrak{P}( V)$$ whose restriction to $V$ is $\alpha$. We denote the algebra equipped with the derivation given by $d_{A}+d_{\alpha}+d_{V}$ by $A\coprod_{\alpha}\mathfrak{P}(V)$. \begin{prop} Suppose that $\alpha:V[-1]\rightarrow A$ is a morphism in $\mathpzc{M}$, i.e. it commutes with differentials. Then $A\coprod_{\alpha}\mathfrak{P}(V)$ is a chain complex. \end{prop} \begin{proof} The derivation $A\coprod\mathfrak{P}( V)\rightarrow A\coprod\mathfrak{P}( V)[1]$ is induced from the derivation $$d_{\mathfrak{P}}\circ Id_{A\oplus V}+d_{A}+ Id_{\mathfrak{P}}\circ_{(1)}(\alpha+d_{V}):\mathfrak{P}(A\oplus V)\rightarrow\mathfrak{P}(A\oplus V)[1]$$ Therefore it suffices to show that this derivation squares to $0$. This is a straightforward computation. \end{proof} In particular if $\alpha:V[-1]\rightarrow A$ commutes with differentials then $A\coprod_{\alpha}\mathfrak{P}(V)$ is naturally an object of $\mathpzc{M}$. The crucial lemma is the following, which is a generalisation of \cite{vallette2014homotopy} Lemma 2.7. \begin{lem}\label{standard} Let $f:V\rightarrow W\in preCell(R;\mathfrak{G})$ and let $A\in\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})$. Let $\alpha:W[-1]\rightarrow A$ be a morphism of $R$-modules. Then the induced map $$A\coprod_{\alpha\circ f}\mathfrak{P}(V)\rightarrow A\coprod_{\alpha}\mathfrak{P}(W)$$ is in $Sull_{\mathfrak{P}}(\mathpzc{M};\mathfrak{G})$. If $\mathfrak{G}$ is the cofibrancy data of a strong Koszul category from Definition \ref{pre-Koszul}, and $f:V\rightarrow W\in Cell(R;\mathfrak{G})$, then $$A\coprod_{\alpha\circ f}\mathfrak{P}(V)\rightarrow A\coprod_{\alpha}\mathfrak{P}(W)$$ is in $Cof_{\mathfrak{P}}(\mathpzc{M};\mathfrak{G})$. \end{lem} First let us prove some auxiliary results. 
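Before doing so, it may help to keep in mind the classical picture behind Lemma \ref{standard} (our paraphrase of the comparison with \cite{hess2007rational} made above, with the usual unitality conventions): when $\mathpzc{E}={}_{k}\mathpzc{Vect}$ for $k$ a field of characteristic $0$, $R=k$, and $\mathfrak{P}=\mathfrak{Comm}$, the algebra $A\coprod_{\alpha}\mathfrak{Comm}(V)$ is the free graded-commutative extension $$A\coprod_{\alpha}\mathfrak{Comm}(V)\cong(A\otimes\Lambda V,d),\qquad d|_{A}=d_{A},\qquad d(v)=d_{V}(v)+\alpha(v)\textrm{ for }v\in V$$ i.e. a cell attachment in the sense of relative Sullivan algebras.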
Let $(V_{\bullet},d_{V})$ and $(W_{\bullet},d_{W})$ be objects of $Ch(\mathpzc{E})$, and let $f:V_{\bullet}\rightarrow W_{\bullet}$ be a morphism of chain complexes. Suppose there are degree $-1$ derivations of $R$-modules $\nu:V_{\bullet}\rightarrow A_{\bullet}$ and $\omega:W_{\bullet}\rightarrow A_{\bullet}$ such that $\omega\circ f=\nu$, and both $$A\coprod_{\nu}\mathfrak{P}(V)\;\textrm{ and }A\coprod_{\omega}\mathfrak{P}(W)$$ are complexes. Then clearly the morphism $Id_{A}\coprod\mathfrak{P}(f)$ induces a morphism of chain complexes $$A\coprod_{\nu}\mathfrak{P}(V)\rightarrow A\coprod_{\omega}\mathfrak{P}(W)$$ Now, let $D:\mathcal{I}\rightarrow\mathpzc{M}$ be a diagram. For $i\in\mathcal{I}$ let $D(i)=V^{i}$. Suppose there is a degree $-1$ derivation $\alpha:\textrm{colim}V^{i}_{\bullet}\rightarrow A$. Composing with the maps $f_{i}:V^{i}\rightarrow \textrm{colim}V^{i}_{\bullet}$ gives degree $-1$ maps $\alpha_{i}=\alpha\circ f_{i}:V^{i}\rightarrow A$. Suppose that for each $i$, $A\coprod_{\alpha_{i}}\mathfrak{P}(V^{i})$ is a chain complex. \begin{prop}\label{transfinitestandard} There is an isomorphism of $\mathfrak{P}$-algebras $$\textrm{colim}(A\coprod_{\alpha_{i}}\mathfrak{P}(V^{i}))\cong A\coprod_{\alpha}\mathfrak{P}(\textrm{colim}V^{i})$$ \end{prop} \begin{proof} By the above remarks there is a map of algebras with derivation $\textrm{colim}(A\coprod_{\alpha_{i}}\mathfrak{P}(V^{i}))\rightarrow A\coprod_{\alpha}\mathfrak{P}(\textrm{colim}V^{i})$. To see that it is an isomorphism we may forget the differentials and $R$-module structure, in which case it reduces to the fact that coproducts and colimits commute. \end{proof} Let $f:V\rightarrow W$ and $\nu:V\rightarrow A$ be degree $0$ maps and let $\omega:W\rightarrow A$ be a degree $-1$ derivation of $R$-modules. There is an induced degree $-1$ derivation $\nu+\omega:\textrm{cone}(f)\rightarrow A$. 
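Here and below we use the standard explicit description of the cone (a choice of convention on our part; the sign is one common normalisation): $$\textrm{cone}(f)_{n}=W_{n}\oplus V_{n-1},\qquad d^{\textrm{cone}(f)}_{n}(w,v)=(d^{W}_{n}(w)+f_{n-1}(v),-d^{V}_{n-1}(v))$$ In particular a degree $-1$ map $\textrm{cone}(f)\rightarrow A$ is precisely the data of a degree $-1$ map $\omega:W\rightarrow A$ together with a degree $0$ map $\nu:V\rightarrow A$, which is how the derivation $\nu+\omega$ above is assembled.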
There is also a degree $-1$ derivation \begin{displaymath} \xymatrix{ V[1]\ar[r]^{(\nu,-f)} & A\oplus W\ar[r] & A\coprod_{-\omega}\mathfrak{P}( W) } \end{displaymath} which we denote by $\nu\cup (-f)$. \begin{prop}\label{standardcone} Suppose that $\omega:W\rightarrow A$ is a degree $-1$ derivation of $R$-modules such that the induced map $W[1]\rightarrow A$ commutes with differentials, and that $\nu$ satisfies $$\omega_{n}\circ f_{n}=d^{A}_{n}\circ\nu_{n}-\nu_{n-1}d_{n}^{V}$$ Then \begin{enumerate} \item $\nu\cup(-f)$ is a map of chain complexes. \item There is an isomorphism $$A\coprod_{\nu+\omega}\mathfrak{P}(\textrm{cone}(f))\cong (A\coprod_{-\omega}\mathfrak{P}(W))\coprod_{\nu\cup (-f)}\mathfrak{P}(V[1])$$ \end{enumerate} \end{prop} \begin{proof} The first part is a direct computation. For the second part, let us forget the differentials and $R$-module structure for the moment. Then we have \begin{align*} (A\coprod_{-\omega}\mathfrak{P}( W))\coprod_{\nu\cup (-f)}\mathfrak{P}( V[1]) \cong A\coprod\mathfrak{P}( W)\coprod\mathfrak{P}( V[1]) \cong A\coprod\mathfrak{P}(W\oplus V[1])\\ =A\coprod\mathfrak{P}(\textrm{cone}(f))=A\coprod_{\nu+\omega}\mathfrak{P}(\textrm{cone}(f)) \end{align*} We need to check that this isomorphism preserves the differentials. Again this is a direct computation. \end{proof} \begin{prop}\label{extracofibs} Let $\mathpzc{M}$ be strong Koszul, let $i:S^{n}(X)\rightarrow S^{n}(Y)$ be a cofibration in $Ch(\mathpzc{E})$, and let $\alpha:(R\otimes S^{n}(Y))[-1]\rightarrow A$ be a map of complexes. Then $p:A\coprod_{\alpha\circ (R\otimes i)}\mathfrak{P}(R\otimes S^{n}(X))\rightarrow A\coprod_{\alpha}\mathfrak{P}(R\otimes S^{n}(Y))$ is a cofibration. \end{prop} \begin{proof} Let \begin{displaymath} \xymatrix{ A\coprod_{\alpha\circ (R\otimes i)}\mathfrak{P}(R\otimes S^{n}(X))\ar[d]^{p}\ar[r]^{\;\;\;\;\;\;\;\;\;\;\;f} & M\ar[d]^{q}\\ A\coprod_{\alpha}\mathfrak{P}(R\otimes S^{n}(Y))\ar[r]^{\;\;\;\;\;\;\;\;\;\;\;g} & N } \end{displaymath} be a diagram with $q$ an acyclic fibration.
It clearly suffices to find a lift in the following diagram of complexes \begin{displaymath} \xymatrix{ A\oplus_{\alpha\circ R\otimes i}R\otimes S^{n}(X)\ar[d]\ar[r]^{\;\;\;\;\;\;\;\;\;\;\;f} & M\ar[d]^{q}\\ A\oplus_{\alpha}R\otimes S^{n}(Y)\ar[r]^{\;\;\;\;\;\;\;\;\;\;\;g} & N } \end{displaymath} But the map $A\oplus_{\alpha\circ R\otimes i}R\otimes S^{n}(X)\rightarrow A\oplus_{\alpha}R\otimes S^{n}(Y)$ is an admissible monomorphism whose cokernel is $R\otimes coker(S^{n}(X)\rightarrow S^{n}(Y))$, which is a cofibrant object. In particular it is a cofibration. \end{proof} \begin{proof}[Proof of Lemma \ref{standard}] By Proposition \ref{transfinitestandard} it is sufficient to show that this is the case for the maps $R\otimes S^{n}(P)\rightarrow R\otimes D^{n+1}(P)$, $0\rightarrow R\otimes D^{n}(P)$, and in the cofibrant case, $R\otimes S^{n}(X)\rightarrow R\otimes S^{n}(Y)$. First note that $R\otimes D^{n+1}(P)=\textrm{cone}(id_{R\otimes S^{n}(P)})$. Therefore by Proposition \ref{standardcone} and Proposition \ref{extracofibs} we reduce to showing that, given a degree $-1$ map $\alpha:R\otimes S^{n}(P)\rightarrow A$, the map $A\rightarrow A\coprod_{\alpha}\mathfrak{P}(R\otimes S^{n}(P))$ is in $Sull_{\mathfrak{P}}(\mathpzc{M};\mathfrak{G})$, or $Cof_{\mathfrak{P}}(\mathpzc{M};\mathfrak{G})$ in the second case. But we have the pushout diagram \begin{displaymath} \xymatrix{ \mathfrak{P}(R\otimes S^{n-1}(P))\ar[r]^{\;\;\;\;\gamma_{A}\mathfrak{P}(R\otimes \alpha[-1])}\ar@{>->}[d] & A\ar[d]\\ \mathfrak{P}(R\otimes D^{n}(P))\ar[r] & A\coprod_{\alpha}\mathfrak{P}(R\otimes S^{n}(P)) } \end{displaymath} This completes the proof. \end{proof} This result has numerous applications. For the purposes of this paper the important consequence is Proposition \ref{factorfibcofib} below. However, let us first use it to show that certain interesting classes of algebras are cofibrant. Following \cite{loday2012algebraic} B.6.13 we define triangulated quasi-free algebras.
\begin{defn} A $\mathfrak{P}$-algebra $A$ is said to be $\mathfrak{G}$-\textbf{quasi-free} if there is an object $V$ of $preCell(\mathfrak{G})$ such that, after forgetting differentials, the underlying graded algebra of $A$ is isomorphic to the underlying graded algebra of $\mathfrak{P}(R\otimes V)$. A $\mathfrak{G}$-quasi-free algebra $A$ is said to be $\mathfrak{G}$-\textbf{triangulated} if the underlying graded object of $V$ can be written as $V\cong\bigoplus_{i=0}^{\infty}V_{i}$, where each $V_{i}$ is in $\mathpzc{Gr}_{{\mathbb Z}}$, for $n\ge 1$ the restriction $d_{A}|_{\mathfrak{P}(R\otimes V_{n})}$ factors through $\mathfrak{P}(R\otimes\bigoplus_{i=0}^{n-1} V_{i})$, and $d_{A}|_{V_{0}}=0$. \end{defn} \begin{cor}\label{triangquasifree} A $\mathfrak{G}$-triangulated quasi-free $\mathfrak{P}$-algebra is a $\mathfrak{G}$-Sullivan model. \end{cor} \begin{proof} Let $A$ be $\mathfrak{G}$-triangulated. After forgetting differentials we may write $A=\mathfrak{P}(R\otimes V)\cong \mathfrak{P}(R\otimes \bigoplus_{i=0}^{\infty}V_{i})$. For $0\le n<\infty$ let $A_{n}=\mathfrak{P}(R\otimes \bigoplus_{i=0}^{n}V_{i})$. This is a subobject of $A$ in $\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})$. Moreover $A\cong lim_{\rightarrow}A_{n}$. Thus we just need to show that $A_{0}$ is cofibrant and that for each $n\ge 0$ the map $A_{n}\rightarrow A_{n+1}$ is a cofibration. $A_{0}$ is just the free algebra on $R\otimes V_{0}$, where $V_{0}$ is regarded as a complex in $Ch(\mathpzc{E})$ with trivial differential. Moreover as an object in $Ch(\mathpzc{E})$ $V_{0}$ is a direct sum of objects of the form $S^{n}(G)$ where $G\in\mathfrak{G}$. Hence $A_{0}$ is clearly a $\mathfrak{G}$-Sullivan model. Moreover, if we let $\alpha_{n+1}\defeq d|_{V_{n+1}}:V_{n+1}\rightarrow A_{n}$, then $A_{n+1}=A_{n}\coprod_{\alpha_{n+1}}\mathfrak{P}(R\otimes V_{n+1})$. This is a map in $Sull_{\mathfrak{P}}(\mathpzc{M};\mathfrak{G})$.
The map $0\rightarrow A$ is a transfinite composition of such maps and hence is also in $Sull_{\mathfrak{P}}(\mathpzc{M};\mathfrak{G})$. \end{proof} In particular quasi-free algebras on bounded below objects of $preCell(\mathfrak{G})$ are $\mathfrak{G}$-triangulated. Immediately we get the following result. \begin{cor}\label{quasifreecofib} If $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$, $R$ and $\mathfrak{P}$ are concentrated in non-negative degrees, $V$ is in $\underline{Gr}_{\mathbb{N}_{0}}(\mathfrak{G})$, and $A\in\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})$ is quasi-free on $V$, then $A$ is $\mathfrak{G}$-triangulated. \end{cor} \begin{proof} Write $V=\bigoplus V_{i}[i]$ where $V_{i}$ is an object of $\mathfrak{G}$. The differential $d_{A}|_{V_{n}[n]}$ must clearly factor through $\mathfrak{P}(R\otimes\bigoplus_{i=0}^{n-1}V_{i}[i])$. This proves the claim. \end{proof} In the setting of this corollary we say that $A$ is $\mathfrak{G}$-non-negatively graded. \subsection{The Quasi-Category of Algebras} Let $\mathpzc{M}$ be a combinatorial monoidal model category satisfying the monoid axiom, and $\mathfrak{P}$ an admissible operad in $\mathpzc{M}$. The category $\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})$ is a (combinatorial) model category when equipped with the transferred model structure. We denote the associated $(\infty,1)$-category by $\textbf{Alg}_{\mathfrak{P}}(\mathpzc{M})$. We may also regard $\mathfrak{P}$ as an $(\infty,1)$-operad in the $(\infty,1)$-category $\textbf{M}$. To this one can associate the $(\infty,1)$-category of $(\infty,1)$-algebras in $\textbf{M}$ over the $(\infty,1)$-operad $\mathfrak{P}$. This category is denoted $\textbf{Alg}_{\mathfrak{P}}(\textbf{M})$. For general model categories it is not known that the category $\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})$ always presents $\textbf{Alg}_{\mathfrak{P}}(\textbf{M})$, i.e.
that the $(\infty,1)$-categories $\textbf{Alg}_{\mathfrak{P}}(\mathpzc{M})$ and $\textbf{Alg}_{\mathfrak{P}}(\textbf{M})$ are equivalent. However \cite{pavlov2018admissibility} gives very general conditions under which this is true. The category $\mathpzc{Mod}_{\Sigma}(\mathpzc{M})$ may be regarded as a diagram category in $\mathpzc{M}$. Since $\mathpzc{M}$ is a cofibrantly generated monoidal model category, we may equip it with the projective model structure (see \cite{hirschhorn2009model} Section 11.6). Before introducing a technical definition, let us recall some notation from the theory of monoidal categories. Suppose that $f:X\rightarrow X'$ and $g:Y\rightarrow Y'$ are maps in $\mathpzc{M}$. The \textbf{pushout product} $f\Box g$ is defined to be the unique map $X'\otimes Y\coprod_{X\otimes Y}X\otimes Y'\rightarrow X'\otimes Y'$ determined by the maps $X'\otimes Y\rightarrow X'\otimes Y'$ and $X\otimes Y'\rightarrow X'\otimes Y'$. If $f$ is a map of (right) $\Sigma_{n}$-modules, and $g$ is a map of (left) $\Sigma_{n}$-modules, then the map $f\Box g$ is equipped with a diagonal action of $\Sigma_{n}$. We denote by $f\Box_{\Sigma_{n}}g$ the map of coinvariants. For example, if $s$ is a map in $\mathpzc{M}$ then we denote by $s^{\Box n}$ the pushout product of $s$ with itself $n$ times. This map is clearly equipped with an action of $\Sigma_{n}$. \begin{defn} An operad $\mathfrak{P}$ is said to be \textbf{rectifiably admissible} if for a projective cofibrant replacement $q:Q\mathfrak{P}\rightarrow\mathfrak{P}$ of the underlying $\Sigma$-module of $\mathfrak{P}$, any $n\in\mathbb{N}_{0}$, and any cofibration $s\in\mathpzc{M}$, the map $q(n)\Box_{\Sigma_{n}}s^{\Box n}$ is a weak equivalence. \end{defn} The following is (an immediate consequence of) Theorem 7.10 in \cite{pavlov2018admissibility}.
\begin{thm} If $\mathfrak{P}$ is rectifiably admissible, then the $(\infty,1)$-category $\textbf{Alg}_{\mathfrak{P}}(\mathpzc{M})$ is naturally equivalent to the $(\infty,1)$-category $\textbf{Alg}_{\mathfrak{P}}(\textbf{M})$ of $(\infty,1)$-algebras in $\textbf{M}$ over $\mathfrak{P}$ considered as an $(\infty,1)$-operad. \end{thm} Let us see when this is the case for Koszul categories $\mathpzc{M}$. \begin{prop} If $\mathfrak{P}$ is admissible, and the underlying $\Sigma$-module of $\mathfrak{P}$ is a retract of a free $\Sigma$-module on a $K$-flat graded object, then $\mathfrak{P}$ is rectifiably admissible. In particular if $\mathpzc{M}$ is ${\mathbb Q}$-Koszul then any operad is rectifiably admissible. \end{prop} \begin{proof} Let $q:Q\mathfrak{P}\rightarrow\mathfrak{P}$ be a projective cofibrant resolution. We may assume that $q$ is of the form $\Sigma\otimes \tilde{q}$, where $\tilde{q}:\tilde{Q}\rightarrow\tilde{\mathfrak{P}}$ is an acyclic fibration of graded objects, $\tilde{Q}$ is projectively cofibrant in $\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{M})$, and $\tilde{\mathfrak{P}}$ is $K$-flat in $\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{M})$. Let $s:X\rightarrow Y$ be a cofibration. Then $q(n)\Box_{\Sigma_{n}}s^{\Box n}\cong\tilde{q}(n)\Box s^{\Box n}$. Write $\tilde{s}(n)=s^{\Box n}:\tilde{X}\rightarrow\tilde{Y}$, which is a cofibration since $\mathpzc{M}$ is a monoidal model category. Consider the pushout \begin{displaymath} \xymatrix{ \tilde{Q}(n)\otimes \tilde{X}\ar[d]^{\tilde{q}\otimes id}\ar[r]^{id\otimes\tilde{s}} &\tilde{Q}(n)\otimes\tilde{Y}\ar[d]\\ \tilde{\mathfrak{P}}(n)\otimes\tilde{X} \ar[r] & \tilde{Q}(n)\otimes\tilde{Y}\coprod_{\tilde{Q}(n)\otimes \tilde{X}}\tilde{\mathfrak{P}}(n)\otimes\tilde{X} } \end{displaymath} Since $\tilde{Q}(n)$ and $\tilde{\mathfrak{P}}(n)$ are $K$-flat the left-hand vertical map is a weak equivalence. The top horizontal map is an admissible monomorphism.
By Proposition 7.5 in \cite{kelly2016projective} the right-hand vertical map is an equivalence. Now the map $\tilde{q}(n)\otimes Id:\tilde{Q}(n)\otimes\tilde{Y}\rightarrow\tilde{\mathfrak{P}}(n)\otimes\tilde{Y}$ is an equivalence, again since $\tilde{Q}(n)$ and $\tilde{\mathfrak{P}}(n)$ are $K$-flat. By the two-out-of-three property the map $\tilde{q}(n)\Box s^{\Box n}$ is an equivalence, as required. \end{proof} \subsubsection{Generation under Sifted Colimits}\label{siftedgeneration} The highly technical subtleties of the above discussion have a crucial consequence, in that they provide us with a very convenient set of generators for the $(\infty,1)$-category $\textbf{Alg}_{\mathfrak{P}}(\mathpzc{M})$, which we will now discuss. Let $\textbf{C}$ be a cocomplete $(\infty,1)$-category and $\textbf{C}_{0}$ a full subcategory. We denote by $\mathcal{P}_{\Sigma}(\textbf{C}_{0})$ the free cocompletion of $\textbf{C}_{0}$ by sifted colimits, i.e. by filtered colimits and geometric realisations. There is a natural functor $$\mathcal{P}_{\Sigma}(\textbf{C}_{0})\rightarrow\textbf{C}$$ Let $\textbf{T}$ be a monad on $\textbf{C}$ which preserves sifted colimits and let $\textbf{C}^{\textbf{T}}$ be its category of Eilenberg-Moore algebras. Consider the corresponding adjunction $$\adj{Free_{\textbf{T}}}{\textbf{C}}{\textbf{C}^{\textbf{T}}}{|-|_{\textbf{T}}}$$ Let $Free_{\textbf{T}}(\textbf{C}_{0})$ denote the full subcategory of $\textbf{C}^{\textbf{T}}$ spanned by free $\textbf{T}$-algebras on $\textbf{C}_{0}$. There is a commutative diagram \begin{displaymath} \xymatrix{ \mathcal{P}_{\Sigma}(\textbf{C}_{0})\ar[d]\ar[r] & \mathcal{P}_{\Sigma}(Free_{\textbf{T}}(\textbf{C}_{0}))\ar[d]\\ \textbf{C}\ar[r] & \textbf{C}^{\textbf{T}} } \end{displaymath} Suppose that the left-hand vertical map is essentially surjective. By Proposition 4.7.3.14 in \cite{lurie2017higher} every object in $\textbf{C}^{\textbf{T}}$ can be obtained as a colimit of a simplicial diagram of objects in the image of $Free_{\textbf{T}}$.
Therefore by the previous proposition every object in $\textbf{C}^{\textbf{T}}$ can be obtained as a sifted colimit of objects in $Free_{\textbf{T}}(\textbf{C}_{0})$. In particular the functor $\mathcal{P}_{\Sigma}(Free_{\textbf{T}}(\textbf{C}_{0}))\rightarrow\textbf{C}^{\textbf{T}}$ is essentially surjective. We are particularly interested in the case that $\textbf{C}$ is presented by a Kan complex enriched monoidal model category $\mathpzc{C}$ and $\mathfrak{P}$ is an admissible operad in $\mathpzc{C}$. This gives rise to a monadic Quillen adjunction $$\adj{\mathfrak{P}(-)}{\mathpzc{C}}{\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{C})}{|-|}$$ which by localization induces an adjunction of $(\infty,1)$-categories $$\adj{\mathfrak{P}(-)}{\textbf{C}}{\textbf{Alg}_{\mathfrak{P}}(\mathpzc{C})}{|-|}$$ If $\mathfrak{P}$ is rectifiably admissible then this is also a monadic adjunction. Since $\mathpzc{C}$ is cofibrantly generated, $\textbf{C}$ is generated under sifted colimits by some small subcategory $\textbf{C}_{0}$ of cofibrant objects in $\mathpzc{C}$. Therefore $\textbf{Alg}_{\mathfrak{P}}(\mathpzc{C})$ is generated under sifted colimits by the full subcategory of free $\mathfrak{P}$-algebras on objects in $\textbf{C}_{0}$. \begin{rem} The argument given above is a significant component of the one in \cite{hennion2015tangent} Proposition 1.2.2, which shows that the category of chain complexes of vector spaces over a field, and the category of Lie algebras over it, are in fact the formal completions of subcategories of certain compact objects under sifted colimits. \end{rem} For technical reasons in Section \ref{seccoopkosz} it will be useful to consider certain subcategories of $\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})$.
Let $\mathpzc{Alg}_{\mathfrak{P}}^{|c|}$ denote the full subcategory of $\mathpzc{Alg}_{\mathfrak{P}}$ consisting of algebras whose underlying object of $\mathpzc{M}$ is cofibrant, let $\mathpzc{Alg}_{\mathfrak{P}}^{c}$ denote the full subcategory of $\mathpzc{Alg}_{\mathfrak{P}}$ consisting of cofibrant algebras, and let $\mathpzc{Alg}_{\mathfrak{P}}^{|K|}$ denote the full subcategory of $\mathpzc{Alg}_{\mathfrak{P}}$ consisting of algebras whose underlying complex is $K$-flat. We regard them as relative categories in the obvious way. Fortunately, all these relative categories present the same $(\infty,1)$-category. \begin{prop}\label{infinitysame} The inclusions $$\mathpzc{Alg}_{\mathfrak{P}}^{c}\rightarrow \mathpzc{Alg}_{\mathfrak{P}}^{|c|}\rightarrow\mathpzc{Alg}_{\mathfrak{P}}^{|K|}\rightarrow\mathpzc{Alg}_{\mathfrak{P}}$$ all induce equivalences of $(\infty,1)$-categories. \end{prop} \begin{proof} This follows from an easy dual argument to Corollary 4.5 in \cite{nlab:simplicial_localization}. \end{proof} \subsection{Example: Commutative Algebras} Before proceeding to coalgebras and Koszul duality, let us conclude this section with a detailed look at the homotopy theory of commutative algebras in a ${\mathbb Q}$-Koszul category arising from an elementary exact category. We will make a connection with (affine) derived geometry and formal geometry by using our results above to analyse cotangent complexes. These results will prove useful later when we discuss operadic Koszul duality. \subsubsection{HA Contexts and the Cotangent Complex}\label{seccotangentcomplex} Recall that in \cite{toen2004homotopical} To{\"e}n and Vezzosi introduce an abstract categorical framework in which one can `do' homotopical algebra, namely a homotopical algebra context. Let us recall the (slightly modified) definition. \begin{defn}\label{Defn:HA context} Let $\mathpzc{M}$ be a combinatorial symmetric monoidal model category.
We say that $\mathpzc{M}$ is a \textbf{homotopical algebra context} (or HA context) if the following conditions hold for any $A\in\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$. \begin{enumerate} \item The model category $\mathpzc{M}$ is proper, pointed and for any two objects $X$ and $Y$ in $\mathpzc{M}$ the natural morphisms $$QX\coprod QY\rightarrow X\coprod Y\rightarrow RX\times RY$$ are equivalences. \item $Ho(\mathpzc{M})$ is an additive category. \item With the transferred model structure and monoidal structure $\otimes_{A}$, the category ${}_{A}\mathpzc{Mod}$ is a combinatorial, proper, symmetric monoidal model category. \item For any cofibrant object $M\in{}_{A}\mathpzc{Mod}$ the functor $$-\otimes_{A}M:{}_{A}\mathpzc{Mod}\rightarrow{}_{A}\mathpzc{Mod}$$ preserves equivalences. \item With the transferred model structures $\mathpzc{Alg}_{\mathfrak{Comm}}({}_{A}\mathpzc{Mod})$ and $\mathpzc{Alg}_{\mathfrak{Comm}_{nu}}({}_{A}\mathpzc{Mod})$ are combinatorial proper model categories. \item If $B$ is cofibrant in $\mathpzc{Alg}_{\mathfrak{Comm}}({}_{A}\mathpzc{Mod})$ then the functor $$B\otimes_{A}-:{}_{A}\mathpzc{Mod}\rightarrow{}_{B}\mathpzc{Mod}$$ preserves equivalences. \end{enumerate} \end{defn} \begin{thm}\label{HAcont} Let $(\mathpzc{E},\otimes,\underline{\textrm{Hom}},k)$ be a monoidal elementary exact category. Suppose that countable coproducts are admissibly coexact and countable products are admissibly exact. Let $R\in\mathpzc{Alg}_{\mathfrak{Comm}}(Ch(\mathpzc{E}))$. Then ${}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ is an HA context. \end{thm} By \cite{toen2004homotopical} Section 1.2 a homotopical algebra context has sufficient structure to define the relative cotangent complex of a map $f:A\rightarrow B$ in $\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$. Let us briefly recall the discussion here.
For a commutative monoid $B$ write $\mathpzc{Alg}_{\mathfrak{Comm}}^{aug}({}_{B}\mathpzc{Mod})\defeq \mathpzc{Alg}_{\mathfrak{Comm}}({}_{B}\mathpzc{Mod})\big\slash B$ for the category of augmented commutative $B$-algebras. There is an adjunction $$\adj{K}{\mathpzc{Alg}_{\mathfrak{Comm}^{nu}}({}_{B}\mathpzc{Mod})}{\mathpzc{Alg}_{\mathfrak{Comm}}^{aug}({}_{B}\mathpzc{Mod})}{I}$$ where $K$ is the trivial extension functor and $I$ sends an algebra $C$ to the kernel of the map $C\rightarrow B$. This is both an equivalence of categories and a Quillen equivalence of model categories. There is also a Quillen adjunction $$\adj{Q}{\mathpzc{Alg}_{\mathfrak{Comm}^{nu}}({}_{B}\mathpzc{Mod})}{{}_{B}\mathpzc{Mod}}{Z}$$ where $Q(C)$ is defined by the pushout \begin{displaymath} \xymatrix{ C\otimes_{B}C\ar[d]\ar[r] & C\ar[d]\\ \bullet\ar[r] & Q(C) } \end{displaymath} and $Z$ just equips a module $M$ with the trivial non-unital commutative monoid structure. Now given a map $f:A\rightarrow B$ in $\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$ we define the \textbf{relative cotangent complex} by $$\mathbb{L}_{B\big\slash A}\defeq\mathbb{L}Q\mathbb{R}I(B\otimes_{A}^{\mathbb{L}}B)$$ It is shown in \cite{toen2004homotopical} that $\mathbb{L}_{B\big\slash A}$ corepresents the functor of $(\infty,1)$-categories $${}_{B}\textbf{Mod}\rightarrow\textbf{sSet},\;M\mapsto Map_{\textbf{Alg}_{\mathfrak{Comm}}({}_{A}\mathpzc{Mod})\big\slash B}(B,B\ltimes M)$$ where $B\ltimes M$ is the square-zero extension of $B$ by $M$ (see Section \ref{secder}). Here $Map$ is the simplicial mapping space. We also write $\mathbb{L}_{B}\defeq\mathbb{L}_{B\big\slash k}$. Now let $C$ be any $A$-algebra and consider the category $\textbf{Alg}_{\mathfrak{Comm}}({}_{A}\mathpzc{Mod})\big\slash C$.
There is a functor $$\textbf{Alg}_{\mathfrak{Comm}}({}_{A}\mathpzc{Mod})\big\slash C\rightarrow{}_{C}\textbf{Mod},\; B\mapsto\mathbb{L}_{B\big\slash A}\otimes^{\mathbb{L}}_{B}C$$ It is left adjoint to the functor sending a $C$-module $M$ to the square-zero extension $C\ltimes M$. When $C=A=k$, so that $\textbf{Alg}_{\mathfrak{Comm}}({}_{A}\mathpzc{Mod})\big\slash k=\textbf{Alg}_{\mathfrak{Comm}}^{aug}$, we denote this functor by $\mathbb{L}_{0}$. We will make use of the following facts, which constitute Proposition 1.2.1.6 in \cite{toen2004homotopical}. \begin{prop}\label{cotangentfacts} \begin{enumerate} \item Let $f:A\rightarrow B$ and $g:B\rightarrow C$ be morphisms of algebras. Then there is a homotopy cofiber sequence in ${}_{C}\mathpzc{Mod}$ $$\mathbb{L}_{B\big\slash A}\otimes_{B}^{\mathbb{L}} C\rightarrow\mathbb{L}_{C\big\slash A}\rightarrow\mathbb{L}_{C\big\slash B}$$ \item If \begin{displaymath} \xymatrix{ A\ar[d]\ar[r] &B\ar[d]\\ A'\ar[r] &B' } \end{displaymath} is a homotopy pushout in $\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$ then the natural map $\mathbb{L}_{B\big\slash A}\otimes_{B}^{\mathbb{L}}B'\rightarrow\mathbb{L}_{B'\big\slash A'}$ is an equivalence. \end{enumerate} \end{prop} \begin{prop} There is a natural equivalence of functors $\mathbb{L}_{k\big\slash A}\cong \mathbb{L}_{0}(A)[1]$, where $\mathbb{L}_{0}(A)[1]$ is the suspension of $\mathbb{L}_{0}(A)$. \end{prop} \begin{proof} Consider the composition $k\rightarrow A\rightarrow k$. By the first part of Proposition \ref{cotangentfacts} we get a homotopy cofiber sequence $$\mathbb{L}_{0}(A)\rightarrow 0\rightarrow\mathbb{L}_{k\big\slash A}$$ so that $\mathbb{L}_{k\big\slash A}$ is the cofibre of $\mathbb{L}_{0}(A)\rightarrow 0$, i.e. the suspension $\mathbb{L}_{0}(A)[1]$. This proves the claim. \end{proof} Note that in the context of an elementary Koszul category, the suspension functor coincides with the shift functor. For cofibrant algebras we have the following result. The proof for vector spaces over a field is standard (for more general operads it can be found in Section 12.3.19, \cite{loday2012algebraic}), and goes through with minor modifications.
\begin{prop}\label{maximalideal} Let $A\rightarrow k$ be a cofibrant augmented algebra. Then $\mathbb{L}_{0}(A)\cong coker(I\otimes I\rightarrow I)$ where $I=Ker(A\rightarrow k)$ is the augmentation ideal. \end{prop} \begin{proof} Since $A$ is cofibrant we may assume everything is underived. We need to show that the functor sending $A$ to the $k$-module $coker(I\otimes I\rightarrow I)$ is left adjoint to the square-zero extension functor. Let $f:A\rightarrow k\ltimes M$ be a map of augmented algebras. This induces a map $I\rightarrow M$ of augmentation ideals which clearly descends to a map $\tilde{f}:coker(I\otimes I\rightarrow I)\rightarrow M$. Conversely suppose we are given a map $g: coker(I\otimes I\rightarrow I)\rightarrow M$. Consider the map of modules $A\cong k\coprod I\rightarrow k\coprod I\big\slash I^{2}\rightarrow k\coprod M$. This is in fact a map of algebras $A\rightarrow k\ltimes M$. These maps of hom-sets are mutually inverse, realising the adjunction. \end{proof} \begin{cor} If $A$ is a cofibrant augmented algebra then the unit of the adjunction $$\adj{\mathbb{L}_{0}}{\textbf{Alg}_{\mathfrak{Comm}}^{aug}}{{}_{k}\textbf{Mod}}{k\ltimes(-)}$$ is the natural map $$A\rightarrow k\ltimes coker(I\otimes I\rightarrow I)$$ \end{cor} \subsubsection{Homotopy Pushouts in Koszul Categories} As is evident from the results quoted in the previous section, knowing how to compute homotopy pushouts might help with computing relative cotangent complexes. Let us see how to do this in a projective ${\mathbb Q}$-Koszul category by using Lemma \ref{standard}. First let us give a general definition and technical proposition. \begin{defn} A map $f:A\rightarrow B$ of commutative algebras in a model category $\mathpzc{M}$ is said to be $K$-\textbf{flat} if $B$ is $K$-flat as an $A$-module.
\end{defn} \begin{prop}\label{htpypushoutKflatmodel} Suppose that $\mathpzc{M}$ is a monoidal model category such that the transferred model structure exists on $\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$, and that cofibrations between cofibrant objects in $\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$ are $K$-flat. Then the homotopy pushout of a diagram \begin{displaymath} \xymatrix{ A\ar[d]\ar[r] & B\\ C } \end{displaymath} is equivalent in $\mathpzc{M}$ (in fact as a $(B,C)$-bimodule) to $B\otimes_{A}^{\mathbb{L}}C$. In particular if $B\otimes_{A}^{\mathbb{L}}C\rightarrow B\otimes_{A}C$ is an equivalence then $B\otimes_{A}C$ is a homotopy pushout of the diagram. \end{prop} \begin{proof} Let \begin{displaymath} \xymatrix{ \tilde{A}\ar[d]\ar[r] & \tilde{B}\ar[d]\\ \tilde{C}\ar[r] & P } \end{displaymath} be a diagram presenting the homotopy pushout. There are equivalences $A\rightarrow\tilde{A}$, $B\rightarrow\tilde{B}$, and $C\rightarrow\tilde{C}$. Moreover we may assume that $\tilde{A},\tilde{B}$ and $\tilde{C}$ are cofibrant and that $\tilde{A}\rightarrow\tilde{B}$ is a cofibration. Then $$P\cong \tilde{B}\otimes_{\tilde{A}}\tilde{C}\cong \tilde{B}\otimes^{\mathbb{L}}_{\tilde{A}}\tilde{C}\cong B\otimes^{\mathbb{L}}_{A}C$$ \end{proof} In projective Koszul categories the conditions of Proposition \ref{htpypushoutKflatmodel} are satisfied. \begin{prop} Let $B$ be an object in $\mathpzc{M}$. Suppose that $B=lim_{\rightarrow_{\mathcal{I}}}B_{i}$ where $\mathcal{I}$ is a filtered category, and each $B_{i}\rightarrow B_{j}$ is a pure monomorphism with $K$-flat cokernel. Then $B$ is $K$-flat. \end{prop} \begin{proof} Let $f:M\rightarrow N$ be an equivalence in $\mathpzc{M}$.
Then $$Id_{B}\otimes^{\mathbb{L}} f\cong lim_{\rightarrow_{\mathcal{I}}}(Id_{B_{i}}\otimes^{\mathbb{L}} f)\cong lim_{\rightarrow_{\mathcal{I}}}(Id_{B_{i}}\otimes f)\cong Id_{B}\otimes f$$ so $Id_{B}\otimes f$ is an equivalence, as required. \end{proof} \begin{cor}\label{sullivanKflat} Let $\mathpzc{M}$ be a ${\mathbb Q}$-Koszul category, and let $f:A\rightarrow B$ be in $Sull_{\mathfrak{Comm}}(\mathpzc{M};\mathfrak{G})$, where $\mathfrak{G}=\{G_{i}\}_{i\in\mathcal{I}}$ consists of flat objects. Then $f$ is $K$-flat. \end{cor} \begin{proof} As an $A$-module, $B$ is a retract of $A\otimes_{\alpha}S(V)$ for some $K$-flat complex $V$. Without loss of generality we may assume that $B\cong A\otimes_{\alpha}S(V)$. Endow $A\otimes_{\alpha}S(V)$ with the filtration induced by the grading on $S(V)$ by $n$-th symmetric powers. This is a filtration by pure monomorphisms in the category ${}_{A}\mathpzc{Mod}(\mathpzc{M})$. The associated graded module is $A\otimes S(V)$. Since $V$ is $K$-flat, $S(V)$ is as well. Therefore $A\otimes S(V)$ is free on a $K$-flat object of $\mathpzc{M}$, and so is a $K$-flat object of ${}_{A}\mathpzc{Mod}(\mathpzc{M})$. This suffices to prove the claim. \end{proof} \begin{cor} If $\mathpzc{M}$ is projective ${\mathbb Q}$-Koszul then cofibrations in $\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$ are $K$-flat maps. \end{cor} The utility of this corollary is that in projective ${\mathbb Q}$-Koszul categories we can compute cotangent complexes very easily. \begin{prop}\label{cotangentqfree} Let $\mathpzc{M}$ be a projective ${\mathbb Q}$-Koszul category. Let $A$ be a $\mathfrak{G}$-Sullivan model where $\mathfrak{G}$ consists of flat objects. Then $\mathbb{L}_{0}(A)\cong I\big\slash I^{2}$. \end{prop} \begin{proof} We break the proof into several steps. First suppose that $A\cong S(V)$ is free on a $K$-flat complex $V$. Pick a cofibrant resolution $W\rightarrow V$ of $V$. Then $S(W)\rightarrow S(V)$ is a weak equivalence.
Thus $\mathbb{L}_{0}(S(V))\cong\mathbb{L}_{0}(S(W))\cong W\cong V$. Now let $A\in\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$ and suppose that $\mathbb{L}_{0}(A)\cong I\big\slash I^{2}$. Let $f:X\rightarrow Y$ be a map in $preCell(\mathfrak{G})$. Consider the pushout \begin{displaymath} \xymatrix{ S(X)\ar[d]\ar[r] & A\ar[d]\\ S(Y)\ar[r] & A' } \end{displaymath} Since $S(X)\rightarrow S(Y)$ is a $K$-flat map by Corollary \ref{sullivanKflat}, the diagram above is a homotopy pushout. Therefore the diagram \begin{displaymath} \xymatrix{ X\ar[d]\ar[r] & I\big\slash I^{2}\ar[d]\\ Y\ar[r] & \mathbb{L}_{0}(A') } \end{displaymath} is a homotopy pushout diagram in $\mathpzc{M}$. Since $X\rightarrow Y$ is an admissible monomorphism, this is in fact just presented by the normal pushout diagram by Proposition 7.5 in \cite{kelly2016projective}. The functor $(A\rightarrow k)\mapsto coker(I\otimes I\rightarrow I)$ is a left adjoint so commutes with colimits. Therefore $\mathbb{L}_{0}(A')\cong I'\big\slash I'^{2}$, where $I'$ is the augmentation ideal of $A'$. Now let $\lambda$ be an ordinal, and let $F:\lambda\rightarrow\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$ be a functor such that each algebra $A_{j}\defeq F(j)$ satisfies $\mathbb{L}_{0}(A_{j})\cong I_{j}\big\slash I_{j}^{2}$, and each map $A_{i}\rightarrow A_{j}$ is a pure monomorphism in $\mathpzc{M}$. The functor $lim_{\rightarrow}:\mathpzc{Fun}_{\textbf{PureMon}}(\lambda,\mathpzc{Alg}_{\mathfrak{Comm}})\rightarrow\mathpzc{Alg}_{\mathfrak{Comm}}$ is exact and commutes with colimits. Therefore $$\mathbb{L}_{0}(lim_{\rightarrow}A_{j})\cong lim_{\rightarrow}\mathbb{L}_{0}(A_{j})\cong lim_{\rightarrow}I_{j}\big\slash I_{j}^{2}\cong I\big\slash I^{2}$$ where $I$ is the augmentation ideal of $A$.
The last step follows since $\otimes$ commutes with colimits, and because the map $lim_{\rightarrow}I_{j}\rightarrow I$ is an isomorphism, which follows from Proposition \ref{3pure} and the fact that the functor $lim_{\rightarrow}:\mathpzc{Fun}_{\textbf{PureMon}}(\lambda,\mathpzc{M})\rightarrow\mathpzc{M}$ is exact. \end{proof} \subsubsection{Formal Completions and Cotangent Complexes} Let $A\in\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$ be $\mathfrak{G}$-triangulated, $A=lim_{\rightarrow_{n\ge 0}}S(\bigoplus_{i=0}^{n}V_{i})$. There is a morphism $S(V_{0})\rightarrow A$ in $\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$. Consider the commutative monoid $\hat{S}(V_{0})\defeq\prod_{n=0}^{\infty}Sym^{n}(V_{0})$. There is a morphism of $R$-modules $S(V_{0})\rightarrow\hat{S}(V_{0})$. We define the algebra $\hat{A}$ to be the pushout $\hat{A}\defeq\hat{S}(V_{0})\otimes_{S(V_{0})} A$. Note that in general this construction depends on the presentation of $A$ as a $\mathfrak{G}$-triangulated object. \begin{defn} Let $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ be a ${\mathbb Q}$-Koszul category. We say that an object $V$ of $\mathpzc{M}$ is \textbf{decent} if there is an equivalence $$\mathbb{L}_{\hat{S}(V)\big\slash S(V)}\otimes^{\mathbb{L}}_{\hat{S}(V)}R\cong 0$$ in $\mathpzc{M}$. \end{defn} \begin{prop} Let $\mathpzc{M}$ be a projective ${\mathbb Q}$-Koszul category, and let $A$ be a $\mathfrak{G}$-triangulated commutative algebra, where $\mathfrak{G}$ consists of flat objects. If $V_{0}$ is decent then the map $\mathbb{L}_{0}(A)\rightarrow\mathbb{L}_{0}(\hat{A})$ is an equivalence. \end{prop} \begin{proof} Consider the homotopy cofibre sequence $$\mathbb{L}_{\hat{A}\big\slash A}\otimes^{\mathbb{L}}_{\hat{A}}R\rightarrow\mathbb{L}_{0}(A)[1]\rightarrow\mathbb{L}_{0}(\hat{A})[1]$$ We claim that $\mathbb{L}_{\hat{A}\big\slash A}\otimes^{\mathbb{L}}_{\hat{A}}R\cong 0$, which would prove the result.
Now using Proposition \ref{completionhpush} and Proposition \ref{cotangentfacts} gives that $\mathbb{L}_{\hat{S}(V_{0})\big\slash S(V_{0})}\otimes^{\mathbb{L}}_{\hat{S}(V_{0})}\hat{A}\cong\mathbb{L}_{\hat{A}\big\slash A}$. Therefore $$\mathbb{L}_{\hat{S}(V_{0})\big\slash S(V_{0})}\otimes^{\mathbb{L}}_{\hat{S}(V_{0})}R\cong \mathbb{L}_{\hat{A}\big\slash A}\otimes^{\mathbb{L}}_{\hat{A}}R$$ Since $V_{0}$ is decent the term on the left is equivalent to $0$. \end{proof} \begin{prop}\label{completionhpush} Suppose that $\mathfrak{G}$ consists of flat objects. Then $\hat{A}$ is the homotopy pushout of the diagram \begin{displaymath} \xymatrix{ S(V_{0})\ar[r]\ar[d] & \hat{S}(V_{0})\\ A } \end{displaymath} \end{prop} \begin{proof} The map $S(V_{0})\rightarrow A$ is clearly in $Sull_{\mathfrak{Comm}}(\mathpzc{M};\mathfrak{G})$ and is therefore $K$-flat. By Proposition \ref{htpypushoutKflatmodel} the claim follows. \end{proof} Let us give our main example of this construction. Suppose that $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ where $R$ is non-negatively graded. Let $A$ be quasi-free on an object of the form $R\otimes V$, where $V$ is a $\mathfrak{G}$-non-negatively graded complex in $Ch(\mathpzc{E})$ such that each $V_{n}$ is in $\mathfrak{G}$. Then $A$ is $\mathfrak{G}$-non-negatively graded. After forgetting differentials, the underlying algebra of $A$ is $$S_{R}(R\otimes V)\cong R\otimes S(V)\cong R\otimes S(V_{0})\otimes S(V\big\slash V_{0})$$ where $S_{R}(R\otimes V)$ denotes the free commutative algebra taken in ${}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$. Then after forgetting differentials $$\hat{A}\cong \hat{S}_{R}(R\otimes V_{0})\otimes S(V\big\slash V_{0})$$ We claim that under certain circumstances this is isomorphic to $\hat{S}_{R}(R\otimes V)$. \begin{defn} Let $\mathpzc{M}$ be a complete monoidal category and let $\mathfrak{O}$ be a class of objects in $\mathpzc{M}$.
An object $V$ of $\mathpzc{M}$ is said to be \textbf{weakly} $\aleph_{1}$-\textbf{filtered relative to} $\mathfrak{O}$ if the canonical map $V\otimes\prod_{n=1}^{\infty}(W_{n})\rightarrow \prod_{n=1}^{\infty}(V\otimes W_{n})$ is an isomorphism for any countable collection $\{W_{n}\}$ of objects of $\mathfrak{O}$. If $\mathfrak{O}=Ob(\mathpzc{M})$ then $V$ is said to be \textbf{weakly }$\aleph_{1}$-\textbf{filtered}. \end{defn} The following is clear: \begin{prop} Let $\mathfrak{O}$ be a class of objects in a complete additive monoidal category $\mathpzc{M}$. \begin{enumerate} \item If $W$ is a summand of an object $V$ which is weakly $\aleph_{1}$-filtered relative to $\mathfrak{O}$ then $W$ is weakly $\aleph_{1}$-filtered relative to $\mathfrak{O}$. \item If $V$ is weakly $\aleph_{1}$-filtered relative to $\mathfrak{O}$, and $W$ is weakly $\aleph_{1}$-filtered relative to $V\otimes\mathfrak{O}\defeq \{V\otimes O:O\in\mathfrak{O}\}$, then $W\otimes V$ is weakly $\aleph_{1}$-filtered relative to $\mathfrak{O}$. \end{enumerate} \end{prop} \begin{example}\label{aleph1examples} \begin{enumerate} \item If $\mathpzc{M}={}_{R}\mathpzc{Mod}$ for $R$ a ring, then any $\aleph_{1}$-filtered colimit of finitely presented $R$-modules is weakly $\aleph_{1}$-filtered. In particular if $R$ is Noetherian, then any Noetherian $R$-module $V$ is weakly $\aleph_{1}$-filtered. Indeed any $R$-module is the colimit of its finitely generated $R$-submodules. The Noetherian condition on $R$ implies that such submodules are finitely presented, and hence weakly $\aleph_{1}$-filtered. The Noetherian condition on $V$ implies that this colimit is $\aleph_{1}$-filtered.
\item If $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ind(Ban_{R}))$ or $\mathpzc{M}={}_{R}\mathpzc{Mod}(CBorn_{R})$ for $R$ a Banach ring, then Corollary 3.65 in \cite{bambozzi2015stein} says that any binuclear bornological Fr\'{e}chet space is weakly $\aleph_{1}$-filtered relative to the class of all bornological Fr\'{e}chet spaces. \end{enumerate} \end{example} \begin{defn} Let $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$. For an object $V$ of $Ch(\mathpzc{E})$ we denote by $R\otimes V^{\otimes}\otimes V^{sym}$ the collection of objects in $\mathpzc{M}$: $\{R_{i}\otimes (V^{\otimes m})_{j}\otimes (Sym^{n}(V))_{k}:m,n\in\mathbb{N},i,j,k\in\mathbb{Z}\}$. \end{defn} \begin{prop}\label{completiontwoways} Let $A\in\mathpzc{Alg}_{\mathfrak{Comm}}(\mathpzc{M})$ be quasi-free on an object $V=\bigoplus_{i=0}^{\infty}V_{i}[i]\in\underline{Gr}_{\mathbb{N}_{0}}(\mathpzc{E})$. Consider the presentation of $A$ as a $\mathfrak{G}$-triangulated algebra $A=lim_{\rightarrow}A_{n}$ where the underlying graded algebra of $A_{n}$ is $S(R\otimes\bigoplus_{i=0}^{n}V_{i})$. Suppose that for all $0\le i<\infty$, $V_{i}$ is weakly $\aleph_{1}$-filtered relative to $R\otimes V^{\otimes}\otimes V^{sym}$. Then the natural map of graded objects $\hat{A}\rightarrow\hat{S}_{R}(R\otimes V_{0})\otimes S(V\big\slash V_{0})$ is an isomorphism. \end{prop} \begin{proof} Consider the graded object.
$$\hat{S}_{R}(R\otimes V)=\prod_{n=0}^{\infty}R\otimes Sym^{n}(V)\cong\prod_{n=0}^{\infty}R\otimes\bigoplus_{i+j=n}Sym^{i}(V_{0})\otimes Sym^{j}(V\big\slash V_{0})\cong\prod_{n=0}^{\infty}\bigoplus_{j=0}^{n}(R\otimes Sym^{n-j}(V_{0}))\otimes Sym^{j}(V\big\slash V_{0})$$ This is isomorphic to $$\bigoplus_{j=0}^{\infty}\prod_{n=j}^{\infty}((R\otimes Sym^{n-j}(V_{0}))\otimes Sym^{j}(V\big\slash V_{0}))$$ By the $\aleph_{1}$ condition this is isomorphic to $$\bigoplus_{j=0}^{\infty}(\prod_{n=j}^{\infty}R\otimes Sym^{n-j}(V_{0}))\otimes Sym^{j}(V\big\slash V_{0})\cong\bigoplus_{j=0}^{\infty}\hat{S}_{R}(R\otimes V_{0})\otimes Sym^{j}(V\big\slash V_{0})\cong\hat{S}_{R}(R\otimes V_{0})\otimes S(V\big\slash V_{0})$$ \end{proof} \section{Cooperadic Koszul Duality}\label{seccoopkosz} \subsection{Filtered Cooperads and Filtered Coalgebras} The homotopy theory of coalgebras is much more involved than that of algebras. For the purposes of Koszul duality we need to consider filtered cooperads and filtered coalgebras over them. In many cases of interest, such as $\mathfrak{C}=\mathfrak{coComm}^{nu}$, the filtration is induced by a \textbf{weight grading}, $\mathfrak{C}=\bigoplus_{n=0}^{\infty}\mathfrak{C}^{n}$. The category $\overline{\mathpzc{Filt}}_{\textbf{PureMon}}(\mathpzc{M})$ is not in general a symmetric monoidal category, but for our purposes this is not a problem. Rather we make the following definition. \begin{defn}\label{filtcoop} A \textbf{filtered cooperad} is an object $\mathfrak{C}=((\mathfrak{C})_{top},\gamma_{n},c_{n})$ of $\overline{\mathpzc{Filt}}_{{}_{\Sigma}\textbf{PureMon}}(\mathpzc{Mod}_{\Sigma}(\mathpzc{M}))$ together with a cooperad structure on $(\mathfrak{C})_{top}$ such that \begin{enumerate} \item $\mathfrak{C}\circ_{ns}\mathfrak{C}$ is exhaustively and admissibly filtered and has admissibly filtered coinvariants.
\item the maps $(\mathfrak{C})_{top}\rightarrow(\mathfrak{C})_{top}\circ(\mathfrak{C})_{top}$ and $(\mathfrak{C})_{top}\rightarrow I$ preserve filtrations, where $I$ is endowed with the trivial filtration. \end{enumerate} \end{defn} Note it follows that $\mathfrak{C}\circ\mathfrak{C}$ is also exhaustively filtered. For later use we will also define filtered operads. \begin{defn}\label{filtop} A \textbf{filtered operad} is an object $\mathfrak{P}=((\mathfrak{P})_{top},\gamma_{n},c_{n})$ of $\overline{\mathpzc{Filt}}_{{}_{\Sigma}\textbf{PureMon}}(\mathpzc{Mod}_{\Sigma}(\mathpzc{M}))$ together with an operad structure on $(\mathfrak{P})_{top}$ such that \begin{enumerate} \item $(\mathfrak{P})_{top}\circ_{ns}(\mathfrak{P})_{top}$ is exhaustively and admissibly filtered and has admissibly filtered coinvariants. \item the maps $(\mathfrak{P})_{top}\circ(\mathfrak{P})_{top}\rightarrow(\mathfrak{P})_{top}$ and $I\rightarrow(\mathfrak{P})_{top}$ are maps of filtered objects, where $I$ is endowed with the trivial filtration. \end{enumerate} \end{defn} Filtered cooperads and filtered operads obviously arrange into categories, which we denote by $\mathpzc{coOp}(\overline{\mathpzc{Filt}}_{\textbf{PureMon}}(\mathpzc{M}))$ and $\mathpzc{Op}(\overline{\mathpzc{Filt}}_{\textbf{PureMon}}(\mathpzc{M}))$ respectively. \begin{defn}\label{filtcoalg} Let $\mathfrak{C}=((\mathfrak{C})_{top},\gamma_{n},c_{n})$ be a filtered cooperad. A conilpotent \textbf{filtered} $\mathfrak{C}$-\textbf{coalgebra} is an object $((A)_{top},\alpha_{n},a_{n})$ of $\overline{\mathpzc{Filt}}_{\textbf{PureMon}}(\mathpzc{M})$, together with a conilpotent $(\mathfrak{C})_{top}$-coalgebra structure on $(A)_{top}$ such that \begin{enumerate} \item $(\mathfrak{C})_{top}\circ_{ns} (A)_{top}$ is exhaustively and admissibly filtered and has admissible coinvariants. \item the map $(A)_{top}\rightarrow(\mathfrak{C})_{top}\circ (A)_{top}$ is a map of filtered objects.
\end{enumerate} \end{defn} Filtrations on coalgebras are often induced by the filtration on $\mathfrak{C}$ in the following precise sense. If $C\in\mathpzc{coAlg}_{(\mathfrak{C})_{top}}(\mathpzc{M})$ then $C$ can be equipped with a canonical \textbf{induced filtration}, where $C_{n}$ is given by the following pullback \begin{displaymath} \xymatrix{ C_{n}\ar[d]\ar[r] & C\ar[d]^{\Delta}\\ \mathfrak{C}_{n}(C)\ar[r] & \hat{\mathfrak{C}}(C) } \end{displaymath} The following results are clear. \begin{prop} If the induced filtration is exhaustive then $C$ is conilpotent. \end{prop} \begin{prop} \begin{enumerate} \item If the underlying filtered $\Sigma$-module of $\mathfrak{C}$ is a retract of a free $\Sigma$-module on an object of $\overline{\mathpzc{Filt}}_{\textbf{SplitMon}}(\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{M}))$ then the first condition in Definition \ref{filtcoop} is superfluous. \item If the underlying $\Sigma$-module of $\mathfrak{C}$ is a retract of a free $\Sigma$-module on an object of $\overline{\mathpzc{Filt}}_{\textbf{SplitMon}}(\mathpzc{Gr}_{\mathbb{N}_{0}}(\mathpzc{M}))$, and the filtration on a $(\mathfrak{C})_{top}$-coalgebra $A$ arises from a grading, then the first condition of Definition \ref{filtcoalg} is automatically satisfied. \end{enumerate} \end{prop} Again conilpotent filtered coalgebras arrange into a category. We denote it by $\mathpzc{coAlg}^{conil}_{\mathfrak{C}}$. Note that there are obvious functors $(-)_{top}:\mathpzc{coOp}(\overline{\mathpzc{Filt}}_{\textbf{PureMon}}(\mathpzc{M}))\rightarrow\mathpzc{coOp}(\mathpzc{M})$ and $(-)_{top}:\mathpzc{coAlg}^{conil}_{\mathfrak{C}}\rightarrow\mathpzc{coAlg}^{conil}_{(\mathfrak{C})_{top}}(\mathpzc{M})$.
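As a hedged aside (this unwinding is our own illustration and is not used in the sequel), let us make the induced filtration concrete in the basic case $\mathfrak{C}=\mathfrak{coComm}^{nu}$ with its weight grading, for a conilpotent coalgebra $C$ in chain complexes over a field. Here $\mathfrak{C}_{n}(C)$ is the part of $\hat{\mathfrak{C}}(C)$ of weight at most $n$, and the pullback says that $C_{n}$ consists of those elements whose coproduct lands in weight at most $n$. In other words the induced filtration should recover the coradical filtration $$C_{n}=\ker\left(\bar{\Delta}^{n}:C\rightarrow\overline{C}^{\otimes(n+1)}\right)$$ where $\bar{\Delta}$ denotes the reduced coproduct. In particular $C_{1}$ is the object of primitives, and exhaustiveness of this filtration is precisely the usual formulation of conilpotency.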
The point of all these technical assumptions on filtered cooperads and filtered coalgebras is essentially that we can pass to associated graded objects without much hassle. \begin{prop} Let $\mathfrak{C}\in\mathpzc{coOp}(\overline{\mathpzc{Filt}}_{\textbf{PureMon}}(\mathpzc{M}))$ and $A\in\mathpzc{coAlg}^{conil}_{\mathfrak{C}}$. Then the natural maps $gr(\mathfrak{C})\circ gr(\mathfrak{C})\rightarrow gr(\mathfrak{C}\circ\mathfrak{C})$ and $gr(\mathfrak{C})\circ gr(A)\rightarrow gr(\mathfrak{C}\circ A)$ are isomorphisms. In particular $gr(\mathfrak{C})$ is a (graded) cooperad and $gr(A)$ is a (graded) $gr(\mathfrak{C})$-coalgebra. \end{prop} For a class of objects $\mathfrak{O}$ of $\mathpzc{M}$ we denote by $\mathpzc{coAlg}_{\mathfrak{C}}^{|\mathfrak{O}|}$ the full subcategory of $\mathpzc{coAlg}^{conil}_{\mathfrak{C}}$ consisting of coalgebras $A$ such that for each $n\in\mathbb{N}_{0}$, $gr_{n}(A)$ is in $\mathfrak{O}$. Typically $\mathfrak{O}$ will either be the class of $K$-flat objects or the class $c$ of cofibrant objects. In the former case we will denote this category by $\mathpzc{coAlg}_{\mathfrak{C}}^{|K|}$ and in the latter case by $\mathpzc{coAlg}_{\mathfrak{C}}^{|c|}$. Similarly for a class of objects $\mathfrak{O}$ of ${}\mathpzc{Mod}_{\Sigma}$ we define categories $\mathpzc{coOp}^{|\mathfrak{O}|}\overline{\mathpzc{Filt}}_{\textbf{PureMon}}(\mathpzc{M})$. By unwinding the definitions the following is clear: \begin{prop}\label{cofreefiltcoalg} Let $\mathfrak{O}$ be a class of objects in $\mathpzc{M}$ which is closed under taking tensor products. Let $\Sigma\otimes\mathfrak{O}$ denote the class of objects of ${}\mathpzc{Mod}_{\Sigma}$ consisting of $\Sigma$-modules which in each arity $n$ are free $\Sigma_{n}$-modules on an object of $\mathfrak{O}$. Let $(\mathfrak{C},\Delta)\in\mathpzc{coOp}^{|\Sigma\otimes\mathfrak{O}|}\overline{\mathpzc{Filt}}_{\textbf{PureMon}}(\mathpzc{M})$.
Then for any object $V$ of $\mathfrak{O}$, $\mathfrak{C}(V)$ is in $\mathpzc{coAlg}_{\mathfrak{C}}^{|\mathfrak{O}|}$. \end{prop} \subsection{The Bar-Cobar Adjunction} We now develop the homotopy theory of $\mathfrak{C}$-coalgebras, and see how it behaves under the bar-cobar adjunction. \begin{defn}\label{coalgmod} A morphism $f:C\rightarrow D$ of filtered $\mathfrak{C}$-coalgebras is said to be a \textbf{weak equivalence} (resp. \textbf{cofibration}) if the underlying map $|f|$ of filtered complexes is a weak equivalence (resp. cofibration). $f$ is said to be a \textbf{strict cofibration} if it is a cofibration and after forgetting differentials $|f|$ is a split monomorphism of filtered objects. $f$ is said to be a \textbf{fibration} if it has the right lifting property with respect to those maps which are both strict cofibrations and weak equivalences. \end{defn} Let $\mathfrak{P}$ be a filtered operad and $\mathfrak{C}$ a filtered cooperad. For the rest of this paper we shall assume that the following condition is satisfied. \begin{ass}\label{compatiblecompositefilt} For each $n\in\mathbb{N}_{0}$ and any $p_{1},q_{1},\ldots,p_{l},q_{l}\in\mathbb{N}_{0}$, when equipped with the tensor product filtration, the object $(\mathfrak{C}^{p_{1}}\circ_{ns}\mathfrak{P}^{q_{1}}\circ_{ns}\ldots\circ_{ns}\mathfrak{C}^{p_{l}}\circ_{ns}\mathfrak{P}^{q_{l}})(n)$ is in $\overline{\mathpzc{Filt}}^{K}_{\textbf{PureMon}}(\mathpzc{M})$ and has admissible coinvariants. \end{ass} The assumption is always satisfied if $\mathfrak{P}$ and $\mathfrak{C}$ are retracts of free $\Sigma$-modules on filtered objects whose filtrations arise from gradings.
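To indicate why gradings suffice, here is a sketch of the verification (our own, with the obvious conventions; we write $V^{k}$ for the weight $k$ piece): if the filtration on $V$ arises from a grading via $F_{n}V=\bigoplus_{k\le n}V^{k}$, then the tensor product filtration on $V\otimes W$ again arises from a grading, $$F_{n}(V\otimes W)=\bigoplus_{k+l\le n}V^{k}\otimes W^{l}$$ so each $F_{n}(V\otimes W)$ is a split subobject, and the filtration is admissible and exhaustive. Since $\circ_{ns}$ is built from tensor products and coproducts, the object $(\mathfrak{C}^{p_{1}}\circ_{ns}\mathfrak{P}^{q_{1}}\circ_{ns}\ldots\circ_{ns}\mathfrak{C}^{p_{l}}\circ_{ns}\mathfrak{P}^{q_{l}})(n)$ inherits such a splitting; and when the splittings can be chosen $\Sigma$-equivariantly, as for retracts of free $\Sigma$-modules, they descend to coinvariants, giving admissible coinvariants.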
It has the following immediate implication, using Proposition \ref{gradestronmon}. \begin{prop} For each $n\in\mathbb{N}_{0}$ and any $p_{1},q_{1},\ldots,p_{l},q_{l}\in\mathbb{N}_{0}$ the map $$gr(\mathfrak{C}^{p_{1}})\circ gr(\mathfrak{P}^{q_{1}})\circ \ldots\circ gr(\mathfrak{C}^{p_{l}})\circ gr(\mathfrak{P}^{q_{l}})\rightarrow gr(\mathfrak{C}^{p_{1}}\circ\mathfrak{P}^{q_{1}}\circ\ldots\circ\mathfrak{C}^{p_{l}}\circ\mathfrak{P}^{q_{l}})$$ is an isomorphism. \end{prop} Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a filtered twisting morphism, i.e. a degree $-1$ map of filtered objects such that $(\alpha)_{top}$ is a twisting morphism, where $\mathfrak{C}\in\mathpzc{coOp}(\overline{\mathpzc{Filt}}^{K}_{\textbf{PureMon}}(\mathpzc{M}))$. Further suppose that $\mathfrak{P}$ is an admissible operad which is $K$-flat as a $\Sigma$-module, and consider the category $\mathpzc{coAlg}^{|K|}_{\mathfrak{C}}$. Let $\mathpzc{coAlg}^{|K|,\alpha-adm}_{\mathfrak{C}}$ denote the subcategory of $\mathpzc{coAlg}^{|K|}_{\mathfrak{C}}$ consisting of those filtered coalgebras $C$ such that for any $n$ the filtration on $$(\mathfrak{P}\circ_{ns}\mathfrak{C}\circ_{ns}\mathfrak{P})(n)\otimes C^{\otimes n}$$ is exhaustive, admissible, and has admissible coinvariants. Note that if $\mathpzc{M}={}_{R}\mathpzc{Mod}(Ch(\mathpzc{E}))$ where $\mathpzc{E}$ is abelian then this last condition is automatic. \begin{example} Let $V$ be an object of $\mathpzc{M}$. If, after forgetting differentials and the coalgebra structure, a $\mathfrak{C}$-coalgebra $C$ is isomorphic to $\mathfrak{C}\circ V$ with the filtration $F_{n}\mathfrak{C}\circ V=(F_{n}\mathfrak{C})\circ V$, then $C$ is in $\mathpzc{coAlg}^{|K|,\alpha-adm}_{\mathfrak{C}}$. \end{example} This is in fact our main example. The following is clear, and is the main reason behind the technical definition.
\begin{prop} If $C\in\mathpzc{coAlg}^{|K|,\alpha-adm}_{\mathfrak{C}}$ then the maps \begin{enumerate} \item $gr(\mathfrak{P})\circ gr(C)\rightarrow gr(\mathfrak{P}\circ C)$ \item $gr(\mathfrak{C})\circ gr(\mathfrak{P})\circ gr(C)\rightarrow gr(\mathfrak{C}\circ \mathfrak{P}\circ C)$ \item $gr(\mathfrak{P})\circ gr(\mathfrak{C})\circ gr(\mathfrak{P})\circ gr(C)\rightarrow gr(\mathfrak{P}\circ\mathfrak{C}\circ \mathfrak{P}\circ C)$ \end{enumerate} are isomorphisms. \end{prop} We denote by $\Omega_{\alpha}^{filt}$ the composite functor \begin{displaymath} \xymatrix{ \mathpzc{coAlg}^{conil}_{\mathfrak{C}}\ar[r]^{(-)_{top}} & \mathpzc{coAlg}_{(\mathfrak{C})_{top}}\ar[r]^{\Omega_{\alpha}} & \mathpzc{Alg}_{\mathfrak{P}} } \end{displaymath} and by $B_{\alpha}^{filt}:\mathpzc{Alg}_{\mathfrak{P}}\rightarrow\mathpzc{coAlg}^{conil}_{\mathfrak{C}}$ the functor which sends a $\mathfrak{P}$-algebra $A$ to the coalgebra $B_{\alpha}A$ equipped with the filtration $\mathfrak{C}_{n}\circ A$. For good homotopical properties, we shall need to assume that our twisting morphism satisfies $\alpha|_{F_{0}\mathfrak{C}}=0$; we assume this throughout the rest of the paper. Our discussion in Section \ref{standard} allows us to generalise Proposition 2.8 of \cite{vallette2014homotopy}. \begin{prop}\label{factorfibcofib} Let $(C,\Delta_{C})$ and $(C',\Delta_{C'})$ be filtered $\mathfrak{C}$-coalgebras, and let $i:C'\rightarrow C$ be a strict filtered cofibration of coalgebras. Suppose that $gr(\Delta_{C})|_{\bigoplus_{n\ge 1}gr_{n}(C)}$ factors through $gr(C')$. Then $\Omega_{\alpha}(i)$ is a cofibration of $\mathfrak{P}$-algebras. \end{prop} \begin{proof} After forgetting differentials there is an isomorphism $C\cong C'\oplus E$ where $E$ is cofibrant.
$$\Omega_{\alpha}C\cong\mathfrak{P}(C')\coprod\mathfrak{P}(E)$$ Under the decomposition $C\cong C'\oplus E$, $d_{C}$ is the sum of three degree $-1$ maps $$d_{C'}:C'\rightarrow C',\;d_{E}:E\rightarrow E,\;\alpha:E\rightarrow C'$$ By the assumption on $gr(\Delta_{C})$ the composition \begin{displaymath} \xymatrix{ \beta:E\;\;\ar@{>->}[r] & C\ar[r]^{\Delta_{C}}&\mathfrak{C}(C)\ar[r]^{\alpha(C)} & \mathfrak{P}(C) } \end{displaymath} inducing the twisted differential on $\Omega_{\alpha}C$ factors through $\mathfrak{P}(C')$. Thus $\Omega_{\alpha}C'\rightarrow\Omega_{\alpha}C$ is given by the standard cofibration $\Omega_{\alpha}C'\rightarrowtail\Omega_{\alpha}C'\coprod_{\alpha+\beta}\mathfrak{P}(E)$, which is a cofibration by Lemma \ref{standard}. \end{proof} Now we are able to give analogues of Theorem 2.9 of \cite{vallette2014homotopy}. \begin{thm}\label{preQuillen} \begin{enumerate} \item $\Omega_{\alpha}^{filt}$ sends weak equivalences in $\mathpzc{coAlg}_{\mathfrak{C}}^{|K|,\alpha-adm}$ to weak equivalences in $\mathpzc{Alg}_{\mathfrak{P}}$. \item If $C\in\mathpzc{coAlg}_{\mathfrak{C}}^{|K|,\alpha-adm}$ then $\Omega_{\alpha}^{filt}(C)\in\mathpzc{Alg}^{|K|}_{\mathfrak{P}}$. \item If $A\in\mathpzc{Alg}_{\mathfrak{P}}^{|K|}$ then $B_{\alpha}^{filt}(A)\in\mathpzc{coAlg}_{\mathfrak{C}}^{|K|,\alpha-adm}$. If $f:A\rightarrow B$ is a weak equivalence in $\mathpzc{Alg}_{\mathfrak{P}}^{|K|}$ then $B_{\alpha}^{filt}(f)$ is a weak equivalence. \item If $\mathpzc{M}$ is hereditary Koszul and the underlying filtered $\Sigma$-module of $\mathfrak{C}$ is filtered cofibrant then $B_{\alpha}^{filt}$ sends cofibrations between objects in $\mathpzc{Alg}_{\mathfrak{P}}^{|c|}$ to cofibrations in $\mathpzc{coAlg}_{\mathfrak{C}}^{|K|,\alpha-adm}$. \item If $\mathpzc{M}$ is hereditary Koszul the cobar construction $\Omega_{\alpha}^{filt}$ sends strict cofibrations in $\mathpzc{coAlg}_{\mathfrak{C}}^{|K|,\alpha-adm}$ to cofibrations in $\mathpzc{Alg}_{\mathfrak{P}}$.
\end{enumerate} \end{thm} \begin{proof} \begin{enumerate} \item Let $f:C\rightarrow D$ be a filtered quasi-isomorphism of objects in $\mathpzc{coAlg}_{\mathfrak{C}}^{|K|,\alpha-adm}$. Consider the following filtration on $\Omega_{\alpha}(C)=(\mathfrak{P}(C),d_{1}+d_{2})$: $$F_{n}\Omega_{\alpha}C=\sum_{k\ge 1,\; n_{1}+\ldots+n_{k}\le n}\mathfrak{P}(k)\otimes_{\Sigma_{k}}(F_{n_{1}}C\otimes\ldots\otimes F_{n_{k}}C)$$ Recall that $d_{1}=d_{\mathfrak{P}}\circ Id_{C}+\mathfrak{P}\circ ' d_{C}$. Now $d_{1}$ preserves the filtration, and by inspecting the formula defining $d_{2}$ one sees that $d_{2}$ lowers the filtration. Thus $\textrm{gr}(\Omega_{\alpha}(C))\cong(\mathfrak{P}(\textrm{gr}(C)))$. By assumption $\textrm{gr}(f):\textrm{gr}(C)\rightarrow\textrm{gr}(D)$ is a weak-equivalence of graded objects. Hence $\textrm{gr}(\Omega_{\alpha}f)=\mathfrak{P}(\textrm{gr}(f))$ is a weak-equivalence. Hence $\Omega_{\alpha}f$ is a quasi-isomorphism. \item Let $C\in \mathpzc{coAlg}_{\mathfrak{C}}^{|K|}$. Consider the filtration on $\Omega_{\alpha}^{filt}(C)$ from the previous part. First note that $\textrm{gr}(\Omega_{\alpha}(C))\cong\mathfrak{P}(\textrm{gr}(C))$. Therefore it suffices to show that the underlying complex of $\mathfrak{P}(\textrm{gr}(C))$ is $K$-flat. But this is just a coproduct of tensor products of $K$-flat objects, so it is clearly $K$-flat. \item The filtration on the underlying graded object of $B_{\alpha}A$ is given by $$(B_{\alpha}A)_{n}=\mathfrak{C}_{n}\circ A$$ Its differential is given by the sum $d_{\mathfrak{C}}\circ Id_{A}+Id_{\mathfrak{C}}\circ' d_{A}+d_{2}$ where $d_{2}$ is the unique coderivation extending the map \begin{displaymath} \xymatrix{ \mathfrak{C}\circ A\ar[r]^{\alpha\circ Id_{A}} & \mathfrak{P}\circ A\ar[r]^{\gamma_{A}} & A } \end{displaymath} The formula defining this coderivation implies that $d_{2}$ lowers the filtration. By assumption $d_{\mathfrak{C}}\circ Id_{A}$ also lowers the filtration.
So $$\textrm{gr}(B_{\alpha}f)=gr(\mathfrak{C})(f):(gr(\mathfrak{C})(A),Id_{gr(\mathfrak{C})}\circ 'd_{A})\rightarrow(gr(\mathfrak{C})(B),Id_{gr(\mathfrak{C})}\circ 'd_{B})$$ which is a graded quasi-isomorphism between graded $K$-flat objects. \item Let $f:A\rightarrow B$ be a cofibration between $\mathfrak{P}$-algebras whose underlying objects of $\mathpzc{M}$ are cofibrant. We need to show that $gr(B_{\alpha}^{filt}(f))$ is a cofibration. But as above $gr(B_{\alpha}^{filt}(f))\cong gr(\mathfrak{C})(f)$. Since $\mathfrak{C}$ is filtered cofibrant, $gr(\mathfrak{C})$ is filtered cofibrant. Moreover $f$ has cofibrant domain and codomain. Hence the underlying map of $gr(\mathfrak{C})(f)$ is a coproduct of tensor products of cofibrations with cofibrant domain and codomain, so is a cofibration. \item $\Omega_{\alpha}$ preserves weak equivalences by definition. Now let $f:C\rightarrow D$ be a strict cofibration of $\mathfrak{C}$-coalgebras with cokernel $E$. For any $n\in\mathbb{N}$ consider the sub-coalgebra of $D$ defined by $$D^{[n]}=f(C)+F_{n-1}D\defeq Im(C\oplus F_{n-1}D\rightarrow D)$$ for $n\ge1$ and $$D^{[0]}=C$$ Let us first check that $D^{[n]}\rightarrow D^{[n+1]}$ is an admissible monomorphism with cofibrant cokernel. Let $g:D\rightarrow E$ be the cokernel of $f$. We claim that $\textrm{coker}(D^{[n]}\rightarrow D^{[n+1]})=\textrm{gr}_{n}(E)$. Since $E$ is a cofibrant object of $\overline{\mathpzc{Filt}}(\mathpzc{M})$, each $\textrm{gr}_{n}(E)$ is cofibrant, so this will prove the claim. Let us show that the map $C\oplus F_{n}D\rightarrow D$ is admissible. It is sufficient to show that it is admissible after forgetting the differentials. In this case $D\cong C\oplus E$ as filtered objects, and we have the following commutative diagram \begin{displaymath} \xymatrix{ C\oplus F_{n}D\ar[r]\ar[d]^{\sim} & D\ar[d]^{\sim}\\ C\oplus F_{n}C\oplus F_{n}E\ar[r] & C\oplus E } \end{displaymath} The bottom map is clearly admissible, so the top one is as well.
Consider the map $$(0,g_{n}):C\oplus F_{n}D\twoheadrightarrow \textrm{gr}_{n}(E)$$ The kernel $F_{n}C$ of the map $C\oplus F_{n}D\rightarrow D$ is contained in the kernel of $(0,g_{n})$, so we get a well-defined map $$C+F_{n}D\rightarrow \textrm{gr}_{n}(E)$$ which is an admissible epimorphism. We claim that its kernel is the inclusion $$D^{[n]}\rightarrow D^{[n+1]}$$ Again we may ignore differentials. Then in the direct sum decomposition this map corresponds to the inclusion $$C\oplus F_{n-1}E\hookrightarrow C\oplus F_{n}E$$ which clearly has cokernel equal to $\textrm{gr}_{n}(E)$. We endow $D^{[n]}$ with the following filtration. \[ F_{k}D^{[n]}= \begin{cases} D^{[k]} & \text{if $k\le n$} \\ D^{[n]} & \text{if $k> n$} \end{cases} \] \end{enumerate} We have shown that with this filtration $D^{[n]}$ is an object of $\mathpzc{coAlg}_{\mathfrak{C}}^{|K|}$. Moreover each of the maps $D^{[n]}\rightarrow D^{[n+1]}$ satisfies the conditions of Proposition \ref{factorfibcofib}. Therefore each of the maps $\Omega_{\alpha}D^{[n]}\rightarrow\Omega_{\alpha}D^{[n+1]}$ is a cofibration. Therefore the map $\Omega_{\alpha}C=\Omega_{\alpha}D^{[0]}\rightarrow\Omega_{\alpha}D$ is a transfinite composition of cofibrations, and is therefore a cofibration. \end{proof} \subsubsection{Restricted Bar-Cobar Adjunctions} Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a filtered twisting morphism. We denote by $\mathpzc{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}$ the full subcategory of $\mathpzc{coAlg}_{(\mathfrak{C})_{top}}^{conil}$ consisting of coalgebras which are in the image of the functor $(-)_{top}:\mathpzc{coAlg}_{\mathfrak{C}}^{|K|,\alpha-adm}\rightarrow\mathpzc{coAlg}^{conil}_{(\mathfrak{C})_{top}}$. Similarly we define the category $\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|,\alpha-adm}$. \begin{defn}\label{restrictedcoalgmod} Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a twisting morphism.
A morphism $f$ in $\mathpzc{coAlg}_{\mathfrak{C}}^{|{K}^{f}|,\alpha-adm}$ is said to be \begin{enumerate} \item an $\alpha$-\textbf{weak equivalence} if $\Omega_{\alpha}f$ is a weak equivalence in $\mathpzc{Alg}_{\mathfrak{P}}$. \item a \textbf{pre-cofibration} if there is a strict cofibration $\tilde{f}$ in $\mathpzc{coAlg}_{\mathfrak{C}}^{|K|,\alpha-adm}$ such that $(\tilde{f})_{top}=f$. \item a \textbf{cofibration} if it is a finite composition of pre-cofibrations. \item an $\alpha$-\textbf{fibration} if it has the right lifting property with respect to those maps which are both cofibrations and $\alpha$-weak equivalences. \end{enumerate} \end{defn} Note that if $\mathpzc{M}$ is projective Koszul then the condition that a cofibration be split after forgetting differentials is superfluous. The next result follows immediately from Theorem \ref{preQuillen}. \begin{prop}\label{basicallyQuillen} The bar-cobar adjunction restricts to adjunctions of relative categories. $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|,\alpha-adm}}{\mathpzc{Alg}_{\mathfrak{P}}^{|c|}}{B_{\alpha}}$$ $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}}{\mathpzc{Alg}_{\mathfrak{P}}^{|K|}}{B_{\alpha}}$$ If in addition $\mathpzc{M}$ is hereditary Koszul, it restricts to an adjunction of relative categories $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|,\alpha-adm}}{\mathpzc{Alg}_{\mathfrak{P}}^{c}}{B_{\alpha}}$$ Moreover in this case $\Omega_{\alpha}$ sends cofibrations to cofibrations, and $B_{\alpha}$ sends fibrations to $\alpha$-fibrations. \end{prop} \subsection{Koszul Morphisms} In this section we are going to discuss twisting morphisms $\alpha$, called Koszul morphisms, for which the relative adjunctions of Proposition \ref{basicallyQuillen} are relative equivalences.
Essentially we shall set up the necessary technical machinery so that we can generalise the proof in \cite{vallette2014homotopy} of Theorem 2.1 parts (1) and (3) (and also some of the proofs in \cite{hirsh2012curved}) to exact categories. As will become clear, parts of the proof go through with only minor modifications, while others require significant effort to generalise. We shall assume from now on that $\mathfrak{C}$ is coaugmented. Moreover we shall assume that there is an augmented operad $\tilde{\mathfrak{P}}\in\mathpzc{Op}(\overline{\mathpzc{Filt}}_{\textbf{PureMon}}^{K}(\mathpzc{M}))$ such that $\mathfrak{P}=(\tilde{\mathfrak{P}})_{top}$. The filtration is a technical device, and we will not need to consider filtrations on algebras over $\mathfrak{P}$. From now on we shall identify $\mathfrak{P}$ with $\tilde{\mathfrak{P}}$. Now let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a degree $-1$ map of filtered objects such that $(\alpha)_{top}$ is a twisting morphism. Then we may regard $\mathfrak{C}\circ_{\alpha}\mathfrak{P}$ and $\mathfrak{P}\circ_{\alpha}\mathfrak{C}$ as filtered complexes. Let $\overline{\mathfrak{C}}$ be the kernel of $\mathfrak{C}\rightarrow I$, and $\overline{\mathfrak{P}}$ the cokernel of $I\rightarrow\mathfrak{P}$. Then as filtered objects we have $\mathfrak{C}\cong I\oplus\overline{\mathfrak{C}}$ and $\mathfrak{P}\cong I\oplus\overline{\mathfrak{P}}$.
Then $$\mathfrak{C}\circ\mathfrak{P}\cong I\oplus\overline{\mathfrak{C}}\oplus \overline{\mathfrak{P}}\oplus\overline{\mathfrak{C}}\circ\overline{\mathfrak{P}}$$ and $$\mathfrak{P}\circ\mathfrak{C}\cong I\oplus\overline{\mathfrak{P}}\oplus \overline{\mathfrak{C}}\oplus\overline{\mathfrak{P}}\circ\overline{\mathfrak{C}}$$ In particular there are filtered maps $$\mathfrak{C}\circ\mathfrak{P}\rightarrow I,\;\;\;\; I\rightarrow \mathfrak{P}\circ\mathfrak{C}$$ and $$\mathfrak{P}\circ\mathfrak{C}\circ\mathfrak{P}\rightarrow\mathfrak{P}$$ \begin{prop} Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a twisting morphism. Suppose that it preserves filtrations and that $\alpha|_{F_{0}\mathfrak{C}}=0$, $d_{\mathfrak{C}}|_{F_{0}\mathfrak{C}}=0$, and $d_{\mathfrak{P}}|_{F_{0}\mathfrak{P}}=0$. Then the maps $$\mathfrak{C}\circ\mathfrak{P}\rightarrow I,\;\;\;\; I\rightarrow \mathfrak{P}\circ\mathfrak{C},\;\;\;\; \mathfrak{P}\circ\mathfrak{C}\circ\mathfrak{P}\rightarrow\mathfrak{P}$$ induce maps of complexes $$\mathfrak{C}\circ_{\alpha}\mathfrak{P}\rightarrow I,\;\;\;\; I\rightarrow \mathfrak{P}\circ_{\alpha}\mathfrak{C},\;\;\;\; \mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P}\rightarrow\mathfrak{P}$$ \end{prop} \begin{proof} We prove it for the first map, the second and third being similar. It suffices to show that $d_{\alpha}$ preserves the kernel of the augmentation. Since $\alpha$ preserves the filtrations it suffices to show that $d_{\mathfrak{C}\circ_{\alpha}\mathfrak{P}}|_{F_{0}\mathfrak{C}\circ_{\alpha}\mathfrak{P}}=0$. But this is clear from the assumptions. \end{proof} \begin{prop} Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a twisting morphism as above. Suppose that $\alpha|_{F_{0}\mathfrak{C}}=0$, $d_{\mathfrak{C}}|_{F_{0}\mathfrak{C}}=0$, and $d_{\mathfrak{P}}|_{F_{0}\mathfrak{P}}=0$. Then the following are equivalent. 
\begin{enumerate} \item $\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P}\rightarrow\mathfrak{P}$ is a filtered weak equivalence. \item $\mathfrak{C}\circ_{\alpha}\mathfrak{P}\rightarrow I$ is a filtered weak equivalence. \item $I\rightarrow \mathfrak{P}\circ_{\alpha}\mathfrak{C}$ is a filtered weak equivalence. \end{enumerate} \end{prop} \begin{proof} Let us show $1\Leftrightarrow 2$. Suppose that $\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P}\rightarrow\mathfrak{P}$ is a filtered weak equivalence. The map $gr(\mathfrak{C}\circ_{\alpha}\mathfrak{P})\rightarrow gr(I)$ is a retract of $gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P})\rightarrow gr(\mathfrak{P})$ and is therefore a graded weak equivalence. Conversely suppose that $gr(\mathfrak{C}\circ_{\alpha}\mathfrak{P})\rightarrow gr(I)$ is a graded weak equivalence. It suffices to show that the map $gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P})\rightarrow gr(\mathfrak{P})$ is a graded weak equivalence. Consider the filtration on $gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P})$ given by $$F_{n}gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P})=\sum_{k+l\le n}gr(\mathfrak{P})\circ gr_{k}(\mathfrak{C})\circ gr_{l}(\mathfrak{P})$$ The associated graded of this filtration is $gr(\mathfrak{P})\circ gr(\mathfrak{C}\circ_{\alpha}\mathfrak{P})$. Since the map $gr(\mathfrak{C}\circ_{\alpha}\mathfrak{P})\rightarrow gr(I)$ is a graded weak equivalence, the map $gr(\mathfrak{P})\circ gr(\mathfrak{C}\circ_{\alpha}\mathfrak{P})\rightarrow gr(\mathfrak{P})$ is a graded weak equivalence. Now let us show $1\Leftrightarrow 3$. Suppose that $\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P}\rightarrow\mathfrak{P}$ is a filtered weak equivalence. Consider the map $gr(\mathfrak{P})\rightarrow gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P})$.
The composition $gr(\mathfrak{P})\rightarrow gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P})\rightarrow gr(\mathfrak{P})$ is the identity. Therefore $gr(\mathfrak{P})\rightarrow gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P})$ is a graded weak equivalence. Now notice that the map $I\rightarrow gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C})$ is a retract of the map $gr(\mathfrak{P})\rightarrow gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P})$, and is therefore a graded weak equivalence. Conversely suppose that $I\rightarrow gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C})$ is a graded weak equivalence. Consider the filtration on $\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P}$ given on the underlying graded object by $$F_{n}gr(\mathfrak{P}\circ\mathfrak{C}\circ\mathfrak{P})=\sum_{k+l\le n}gr_{k}(\mathfrak{P})\circ gr_{l}(\mathfrak{C})\circ gr(\mathfrak{P})$$ The associated graded is $gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C})\circ gr(\mathfrak{P})$. Since the map $I\rightarrow gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C})$ is a graded weak equivalence, the map $gr(\mathfrak{P})\rightarrow gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C})\circ gr(\mathfrak{P})$ is as well. Hence the map $gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C})\circ gr(\mathfrak{P})\rightarrow gr(\mathfrak{P})$ is itself a weak equivalence. \end{proof} We are now ready to define Koszul morphisms. \begin{defn} A twisting morphism is said to be \textbf{Koszul} if \begin{enumerate} \item Assumption \ref{compatiblecompositefilt} is satisfied. \item $\mathfrak{P}$ is admissible and the differential $d_{\mathfrak{P}}$ lowers the filtration. \item $\mathfrak{C}$ is in $\mathpzc{coOp}^{|K|}$, and the differential lowers the filtration. \item $\alpha$ induces a morphism of filtered objects, and $\alpha|_{F_{0}\mathfrak{C}}=0$. 
\item $\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P}\rightarrow \mathfrak{P}$ is a filtered weak equivalence. \end{enumerate} \end{defn} On its face this looks somewhat different from the usual definition in \cite{vallette2014homotopy}. However when $\mathpzc{E}$ is the category of vector spaces over some field $k$, the definitions agree. Indeed in this case the first three conditions are automatic. \subsubsection{Examples of Koszul Morphisms} Before we proceed to Koszul duality for Koszul morphisms, let us make sure we have some examples. When $\mathpzc{E}$ is the category of vector spaces over some field $k$, there are many known examples. It turns out that we can bootstrap these. \begin{prop}\label{inducedkoszul} Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a Koszul morphism in $Ch({}_{k}\mathpzc{Vect})$. Let $\mathpzc{M}$ be a Koszul category with unit $R$ which is enriched over $k$. Then $$R[\alpha]:R[\mathfrak{C}]\rightarrow R[\mathfrak{P}]$$ is a Koszul morphism in $\mathpzc{M}$. \end{prop} \begin{proof} Together with Proposition \ref{preserveMC}, this follows immediately from the fact that $R\otimes(-)$ is a strong monoidal, kernel and cokernel preserving functor whose image consists of cofibrant objects. \end{proof} In particular if $k=\mathbb{Q}$, and $\kappa:\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}\rightarrow\mathfrak{Lie}$ is the canonical Koszul morphism in ${}_{{\mathbb Q}}\mathpzc{Mod}$ then we get a Koszul morphism \begin{displaymath} \xymatrix{ \mathfrak{S}^{c}\otimes_{H}(R\otimes\mathfrak{coComm})\ar[r]^{\cong} & R\otimes(\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm})\ar[r]^{\;\;\;\;\;\;\;\;\;\;\;\;\;R\otimes\kappa} & R\otimes\mathfrak{Lie} } \end{displaymath} in $\mathpzc{M}$. This generalises classical Koszul duality between Lie algebras and cocommutative coalgebras to arbitrary ${\mathbb Q}$-Koszul categories. 
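For orientation, the equivalence $1\Leftrightarrow 2\Leftrightarrow 3$ established above shows that condition (5) in the definition of a Koszul morphism can equivalently be tested on either of the Koszul complexes, i.e. by asking that either of the maps $$\mathfrak{C}\circ_{\alpha}\mathfrak{P}\rightarrow I\qquad\text{or}\qquad I\rightarrow\mathfrak{P}\circ_{\alpha}\mathfrak{C}$$ be a filtered weak equivalence. Over a field this recovers the familiar characterisation of Koszul morphisms via acyclicity of the Koszul complex, as in \cite{vallette2014homotopy}.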
\begin{prop}\label{Koszulfree} Let $V$ be an object in ${}\mathpzc{Mod}_{\Sigma}(\mathpzc{M})$ and consider the twisting morphism $$\alpha:\mathfrak{T}^{c}(V[1])\rightarrow V[1]\rightarrow V\rightarrow\mathfrak{T}(V)$$ from Example \ref{cofreetwist}. Let us suppose that $V=R\otimes V_{0}$ where $V_{0}$ is an object of ${}\mathpzc{Mod}_{\Sigma}(Ch(\mathpzc{E}))$ concentrated in homological degree $0$ and arity $1$. Finally suppose that $\mathpzc{M}$ is enriched over ${\mathbb Q}$. \begin{enumerate} \item If $V$ is $K$-flat then $\mathfrak{T}(V)$ has $K$-flat entries. \item The map $\mathfrak{T}^{c}(V[1])\circ_{\alpha}\mathfrak{T}(V)\rightarrow R$ is a homotopy equivalence. \item $\mathfrak{T}(V)$ and $\mathfrak{T}^{c}(V[1])$ are equipped with filtrations such that the differentials on $\mathfrak{T}(V)$ and $\mathfrak{T}^{c}(V[1])$ lower the filtration, and $\alpha$ preserves the filtration. Moreover $\mathfrak{T}^{c}(V[1])\in\mathpzc{coOp}^{|K|}$. \end{enumerate} In particular if $V$ is $K$-flat then $\alpha$ is a Koszul morphism. \end{prop} \begin{proof} The only non-trivial claim is the second one. By Proposition \ref{inducedkoszul} it suffices to prove the claim for $R=k$ where $k$ is the unit of $\mathpzc{E}$. The proof of Proposition 3.4.13 in \cite{loday2012algebraic} works mutatis mutandis in the setting of $Ch(\mathpzc{E})$. \end{proof} \begin{rem} We expect that without much effort, the condition that $\mathpzc{M}$ is enriched over ${\mathbb Q}$ and $V$ is concentrated in arity $1$ can be removed. However for our purposes we do not need such a general result. \end{rem} \subsection{An Equivalence of $(\infty,1)$-Categories} In this section we are going to prove our first version of Koszul duality. Namely we will show the following. \begin{thm}[Koszul Duality]\label{coopKoszuldual} Let $\mathpzc{M}$ be a Koszul category, and $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ a Koszul morphism in $\mathpzc{M}$. 
The bar-cobar adjunction induces an adjoint equivalence of $(\infty,1)$-categories $$\adj{\Omega_{\alpha}}{\textbf{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}}{\textbf{Alg}_{\mathfrak{P}}}{\textbf{B}_{\alpha}}$$ \end{thm} This shall in fact follow from the result below, using Proposition \ref{infinitysame} and Proposition \ref{relequivuc}. \begin{prop} Let $\alpha$ be a Koszul morphism. Then the following adjunctions of relative categories $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|,\alpha-adm}}{\mathpzc{Alg}_{\mathfrak{P}}^{|c|}}{B_{\alpha}}$$ $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}}{\mathpzc{Alg}_{\mathfrak{P}}^{|K|}}{B_{\alpha}}$$ are relative equivalences. If $\mathpzc{M}$ is hereditary Koszul then $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|,\alpha-adm}}{\mathpzc{Alg}_{\mathfrak{P}}^{c}}{B_{\alpha}}$$ is a relative equivalence. \end{prop} We have the following generalisation of \cite{vallette2014homotopy} Theorem 2.6 (2). Some of the techniques used in the following proposition are similar to those of \cite{hirsh2012curved} Proposition 5.1.5. \begin{prop}\label{unitacyclic} Let $\alpha$ be a Koszul morphism and $C\in\mathpzc{coAlg}_{\mathfrak{C}}^{|K|,\alpha-adm}$. Then there exists a filtration on $ B_{\alpha}\Omega_{\alpha}C$ such that the unit $\nu_{\alpha}(C):C\rightarrow B_{\alpha}\Omega_{\alpha}C$ is a filtered weak equivalence. \end{prop} \begin{proof} On underlying graded objects, the unit is the composition \begin{displaymath} \xymatrix{ C\ar[rr]^{\Delta_{C}\;\;\;\;\;\;\;\;\;\;} & & \mathfrak{C}(C)\cong\mathfrak{C}\circ I\circ C\ar@{>->}[r] & \mathfrak{C}\circ\mathfrak{P}\circ C } \end{displaymath} Both of these maps are clearly admissible monomorphisms. Let us now show that $\nu_{\alpha}(C)$ is an $\alpha$-weak equivalence. 
Consider the filtration on $\Omega_{\alpha}C$ given by $$F_{n}\Omega_{\alpha}C=\sum_{k\ge1,m+n_{1}+\ldots+n_{k}\le n}F_{m}\mathfrak{P}(k)\otimes_{\Sigma_{k}}(F_{n_{1}}C\otimes\ldots\otimes F_{n_{k}}C)$$ and the one on $\Omega_{\alpha}B_{\alpha}\Omega_{\alpha}C$ given by $$F_{n}\Omega_{\alpha}B_{\alpha}\Omega_{\alpha}C=\sum_{k\ge1,p+q+m+n_{1}+\ldots+n_{k}\le n}(F_{p}\mathfrak{P}\circ F_{q}\mathfrak{C}\circ F_{m}\mathfrak{P})(k)\otimes_{\Sigma_{k}}(F_{n_{1}}C\otimes\ldots\otimes F_{n_{k}}C)$$ The map $\Omega_{\alpha}(\nu_{\alpha}(C))$ preserves these filtrations. Passing to associated graded objects gives $$gr(\Omega_{\alpha}(\nu_{\alpha}(C))):gr(\Omega_{\alpha}C)\rightarrow gr(\Omega_{\alpha}B_{\alpha}\Omega_{\alpha}C)$$ The underlying object on the left-hand side is $gr(\mathfrak{P})\circ gr(C)$ and the underlying object on the right-hand side is $ gr(\mathfrak{P}\circ\mathfrak{C}\circ\mathfrak{P})\circ gr(C)$. Now consider the filtration on $gr(\Omega_{\alpha}C)$ given by $$F_{n}gr(\Omega_{\alpha}C)=\sum_{k\ge1,n_{1}+\ldots+n_{k}\le n}gr(\mathfrak{P})(k)\otimes_{\Sigma_{k}}(gr_{n_{1}}(C)\otimes\ldots\otimes gr_{n_{k}}(C))$$ and the filtration on $gr(\Omega_{\alpha}B_{\alpha}\Omega_{\alpha}C)$ given by $$F_{n}gr(\Omega_{\alpha}B_{\alpha}\Omega_{\alpha}C)=\sum_{k\ge1,n_{1}+\ldots+n_{k}\le n}gr(\mathfrak{P}\circ \mathfrak{C}\circ\mathfrak{P})(k)\otimes_{\Sigma_{k}}(gr_{n_{1}}(C)\otimes\ldots\otimes gr_{n_{k}}(C))$$ Then $gr(\Omega_{\alpha}(\nu_{\alpha}(C)))$ preserves these filtrations. The associated graded of the filtration on $gr(\Omega_{\alpha}C)$ is $gr(\mathfrak{P})\circ gr(C)$, and the associated graded of the filtration on $gr(\Omega_{\alpha}B_{\alpha}\Omega_{\alpha}C)$ is $gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P})\circ gr(C)$. Denote by $\tilde{gr}(\Omega_{\alpha}(\nu_{\alpha}(C)))$ the associated graded of $gr(\Omega_{\alpha}(\nu_{\alpha}(C)))$. 
The composite \begin{displaymath} \xymatrix{ gr(\mathfrak{P})\circ gr(C)\ar[rrr]^{\tilde{gr}(\Omega_{\alpha}(\nu_{\alpha}(C)))\;\;\;\;}& & &gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P})\circ gr(C)\ar[r] & gr(\mathfrak{P})\circ gr(C)} \end{displaymath} is the identity. The map $gr(\mathfrak{P}\circ_{\alpha}\mathfrak{C}\circ_{\alpha}\mathfrak{P})\circ gr(C)\rightarrow gr(\mathfrak{P})\circ gr(C)$ is a weak equivalence. By the $2$-out-of-$3$ property $\tilde{gr}(\Omega_{\alpha}(\nu_{\alpha}(C)))$ is an equivalence. Therefore $\Omega_{\alpha}(\nu_{\alpha}(C))$ is an equivalence. Now, with the stipulated filtrations, the map $\nu_{\alpha}C$ is a filtered retract of the map $\Omega_{\alpha}(\nu_{\alpha}C)$, which is a filtered weak equivalence, and is therefore a filtered weak equivalence itself. \end{proof} This also allows us to prove that $\alpha$-weak equivalences are contained in the class of quasi-isomorphisms, as in \cite{vallette2014homotopy} Proposition 2.5. \begin{prop} If $f:C\rightarrow D$ is an $\alpha$-weak equivalence in $\mathpzc{coAlg}_{\mathfrak{C}}^{|K|,\alpha-adm}$, then it is a quasi-isomorphism of the underlying complexes. \end{prop} \begin{proof} Consider the commutative diagram \begin{displaymath} \xymatrix{ C\ar[d]^{f}\ar[r]^{\nu_{C}} & B_{\alpha}\Omega_{\alpha}C\ar[d]^{B_{\alpha}\Omega_{\alpha}f}\\ D\ar[r]^{\nu_{D}} & B_{\alpha}\Omega_{\alpha}D } \end{displaymath} The top, bottom, and right-hand maps are quasi-isomorphisms by Proposition \ref{preQuillen} and Proposition \ref{unitacyclic}. Therefore by the two-out-of-three property $f$ is a quasi-isomorphism. \end{proof} As in \cite{vallette2014homotopy} Theorem 2.6 (1) we have the following. \begin{prop} Let $A\in\mathpzc{Alg}^{|K|}_{\mathfrak{P}}$. The counit $\epsilon_{\alpha}(A):\Omega_{\alpha}B_{\alpha}A\rightarrow A$ is an $\alpha$-weak equivalence. \end{prop} \begin{proof} After forgetting differentials, the underlying graded object of $\Omega_{\alpha}B_{\alpha}A$ is $\mathfrak{P}\circ\mathfrak{C}\circ A$. 
We filter it by $$F_{n}\Omega_{\alpha}B_{\alpha}A=\sum_{k\ge1, n_{1}+\ldots+n_{k}\le n}\mathfrak{P}(k)\otimes_{\Sigma_{k}}(F_{n_{1}}\mathfrak{C}(A)\otimes\ldots\otimes F_{n_{k}}\mathfrak{C}(A))$$ This induces a filtration of the chain complex $\Omega_{\alpha}B_{\alpha}A$. We regard $A$ as a filtered algebra with the constant filtration. The counit is a morphism of filtered complexes. Consider the composite associated graded map $$A\rightarrow (gr(\mathfrak{P})\circ_{gr(\alpha)}gr(\mathfrak{C}))\circ A\rightarrow A$$ The composite is the identity. The map $A\rightarrow (gr(\mathfrak{P})\circ_{gr(\alpha)}gr(\mathfrak{C}))\circ A$ is an equivalence since $\mathfrak{P}\circ_{\alpha}\mathfrak{C}$ is equivalent to $I$ and $A$ is $K$-flat. By the two-out-of-three property, the map $(gr(\mathfrak{P})\circ_{gr(\alpha)}gr(\mathfrak{C}))\circ A\rightarrow A$ is an equivalence, which completes the proof. \end{proof} Being an equivalence of $(\infty,1)$-categories, the functor $\textbf{B}_{\alpha}$ preserves sifted colimits. When $\mathpzc{M}$ is a Koszul category and $\mathfrak{P}$ is a rectifiably admissible operad, we saw that the category $\textbf{Alg}_{\mathfrak{P}}(\textbf{M})$ is generated under sifted colimits by free algebras on cofibrant objects. Thus computing $\textbf{B}_{\alpha}$ essentially reduces to computing what it does to free algebras. Let $V$ be a $K$-flat object of $\mathpzc{M}$. Consider the trivial $\mathfrak{C}$-coalgebra structure on $V$ given by $V\cong I\circ V\rightarrow(\overline{\mathfrak{C}}\circ V)\oplus (I\circ V)\cong\mathfrak{C}\circ V$. It is a filtered $\mathfrak{C}$-coalgebra when we equip $V$ with the trivial filtration, i.e. $F_{n}V=V$ for each $n$. This gives a functorial construction $tr_{\mathfrak{C}}:\mathpzc{M}\rightarrow\mathpzc{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}$. 
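To spell out the construction, note that all cooperations of positive weight vanish on $tr_{\mathfrak{C}}(V)$: the structure map factors through the unit summand, so the composite $$V\rightarrow\mathfrak{C}\circ V\rightarrow\overline{\mathfrak{C}}\circ V$$ is zero. For $\mathfrak{C}=\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}$, for instance, $tr_{\mathfrak{C}}(V)$ is simply $V$ viewed as a cocommutative coalgebra with vanishing reduced comultiplication.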
\begin{prop} There is a natural transformation of functors, which is an object-wise weak equivalence, $$B_{\alpha}\circ Free_{\mathfrak{P}}(-)\rightarrow tr_{\mathfrak{C}}$$ \end{prop} \begin{proof} The map $\mathfrak{C}\circ_{\alpha}\mathfrak{P}\rightarrow I$ induces a map of $\mathfrak{C}$-coalgebras $B_{\alpha}\circ Free_{\mathfrak{P}}(V)\rightarrow tr_{\mathfrak{C}}(V)$ which is natural in $V$. By the Koszul condition this is an equivalence. \end{proof} \subsection{Factorisation in $\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|,\alpha-adm}$ and the Model Structure} By Theorem \ref{coopKoszuldual} we have an abstract equivalence of $(\infty,1)$-categories. However in order to be able to do computations it would be convenient to have some sort of factorisation property for morphisms. We will need the following assumption. \begin{ass}\label{times12} Let $D\in\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|}$ and let $p:A\twoheadrightarrow\Omega_{\alpha}D$ be a fibration of $\mathfrak{P}$-algebras where $A\in\mathpzc{Alg}_{\mathfrak{P}}^{|c|}$. Then in the following pullback diagram \begin{displaymath} \xymatrix{ B_{\alpha}A\times_{B_{\alpha}\Omega_{\alpha}D}D\ar[d]^{j}\ar[r] & D\ar[d]^{\nu_{\alpha}D}\\ B_{\alpha}A\ar[r]^{B_{\alpha}p} & B_{\alpha}\Omega_{\alpha}D } \end{displaymath} the map $j$ is an $\alpha$-weak equivalence and an admissible monomorphism, and $B_{\alpha}A\times_{B_{\alpha}\Omega_{\alpha}D}D\in\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|}$. \end{ass} The underlying graded map of $B_{\alpha}A\rightarrow B_{\alpha}\Omega_{\alpha} D$ is a map of cofree coalgebras, so by a dual argument to \cite{spitzweck2001operads} Proposition 4.5 we can give an explicit description of the map $j$. Using this we expect to be able to prove the following. \begin{conj}\label{quasiabelianassumptions} Let $\mathpzc{E}$ be a monoidal $\textbf{AdMon}$-elementary exact category. 
Then for any twisting morphism $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ with $\mathfrak{C}$ being cofibrant in ${}\mathpzc{Mod}_{\Sigma}$, Assumption \ref{times12} holds. \end{conj} As we shall see later there are many interesting examples in which this assumption does not hold, so we do not assume it in general. For the moment we have it at least in the following important circumstance. Our proof is similar to the proof of Lemma B.1 in \cite{vallette2014homotopy}. \begin{prop} Let $\mathpzc{M}$ be hereditary Koszul. The twisting morphism $\kappa:\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}\rightarrow\mathfrak{Lie}$ satisfies Assumption \ref{times12}. \end{prop} \begin{proof} Let us begin by forgetting the differentials. Consider the exact sequence of Lie algebras \begin{displaymath} \xymatrix{ 0\ar[r] & K\ar@{>->}[r] & A\ar@{->>}[r]&\Omega_{\kappa}D=L(D)\ar[r] & 0 } \end{displaymath} It splits in the category of graded Lie algebras since the underlying graded Lie algebra is free on a cofibrant object. As graded Lie algebras we therefore get an isomorphism $A\cong K\oplus L(D)$. We can turn this into an isomorphism of dg-Lie algebras when we equip the right-hand side with the differential given by the sum of the differentials: $$d_{K}:K\rightarrow K,\; d_{\Omega_{\kappa}D}:L(D)\rightarrow L(D),\; d': L(D)\rightarrow K$$ Since the underlying graded coalgebra of $B_{\kappa}A$ is just the cofree coalgebra on $A$, we get isomorphisms $$B_{\kappa}A\cong B_{\kappa}K\times B_{\kappa}\Omega_{\kappa}D$$ and $$B_{\kappa}A\times_{B_{\kappa}\Omega_{\kappa}D}D\cong B_{\kappa}K\times D$$ where both have additional differentials coming from $d'$. Equip $K$ with the constant filtration $F_{i}K=K$ for all $0\le i\le\infty$, $\Omega_{\kappa}D$ with the filtration $$F_{n}\Omega_{\kappa}D=\sum_{k+n_{1}+\ldots+n_{k}\le n}\mathfrak{Lie}(k)\otimes_{\Sigma_{k}}F_{n_{1}}D\otimes\ldots\otimes F_{n_{k}}D$$ and $A$ with the direct sum filtration. 
Note that these are filtrations of algebras. Moreover the differential on $A$ preserves this filtration. We claim that $d'$ lowers the filtration. Indeed it suffices to show that $d'|_{F_{0}\Omega_{\kappa}D}=0$. But $F_{0}\Omega_{\kappa}D=0$. Now consider the filtrations on $B_{\kappa}A$ and $B_{\kappa}K$ induced by the ones on $A$ and $K$ respectively, and the tensor product filtration on $B_{\kappa}K\otimes D$. The product of any two cocommutative coalgebras $C$ and $D$ is $(C\oplus D\oplus C\otimes D)[-1]$. By Proposition \ref{unitacyclic} the map $\nu_{\kappa}D:D\rightarrow B_{\kappa}\Omega_{\kappa}D$ is a filtered quasi-isomorphism for the filtrations stipulated in the statement of that proposition. Since the filtrations on $B_{\kappa}K$, $\mathfrak{Lie}$, and $\mathfrak{coComm}^{nu}$ arise from gradings, and $\mathpzc{M}$ is ${\mathbb Q}$-Koszul, with this filtration $B_{\kappa}K\otimes D$ is in $\mathpzc{coAlg}_{\mathfrak{coComm}^{nu}}^{|c^{f}|,\alpha-adm}$. Also $gr(j)=gr(Id\otimes\nu_{\kappa}D)$. Now $B_{\kappa}K$ is filtered cofibrant. Therefore $id\otimes\nu_{\kappa}D$ is a filtered weak equivalence. Therefore by Proposition \ref{preQuillen} $j$ is an $\alpha$-weak equivalence. \end{proof} With the technology developed above much of the proof of Vallette's Theorem 2.1 (1) works in our setup, as we show below. \begin{thm}\label{coalgmodel} Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be Koszul. Suppose that Assumption \ref{times12} holds. Then in $\mathpzc{coAlg}^{|c^{f}|,\alpha-adm}_{\mathfrak{C}}$ every morphism can be factored into an admissible monomorphism followed by an $\alpha$-acyclic fibration, or an admissible monomorphism which is $\alpha$-acyclic followed by a fibration. 
If in addition $\mathpzc{M}$ is elementary and every object in $\mathpzc{M}$ is cofibrant, then the $\alpha$-weak equivalences, $\alpha$-fibrations, and $\alpha$-cofibrations define a model category structure on $\mathpzc{coAlg}^{|c^{f}|,\alpha-adm}_{\mathfrak{C}}$. \end{thm} \begin{proof}[Proof of Theorem \ref{coalgmodel}] Suppose that $f:C\rightarrow D$ is a morphism in $\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|}$. In $\mathpzc{Alg}_{\mathfrak{P}}$ we get the following commutative diagram \begin{displaymath} \xymatrix{ \Omega_{\alpha}C\ar[rr]^{\Omega_{\alpha}f}\ar@{>->}[dr]^{i} & & \Omega_{\alpha}D\\ & A\ar@{->>}[ur]^{p} & } \end{displaymath} where $i$ is a cofibration, $p$ a fibration, and one of them is a quasi-isomorphism. Applying $B_{\alpha}$ we get the following commutative diagram. \begin{displaymath} \xymatrix{ B_{\alpha}\Omega_{\alpha}C\ar[rr]^{B_{\alpha}\Omega_{\alpha}f}\ar@{>->}[dr]^{B_{\alpha}i} & & B_{\alpha}\Omega_{\alpha}D\\ & B_{\alpha}A\ar@{->>}[ur]^{B_{\alpha}p} & } \end{displaymath} and using the universal property we get the following commutative diagram \[ \begin{tikzcd}[row sep=2.5em] B_{\alpha}\Omega_{\alpha}C\arrow[rr,"B_{\alpha}\Omega_{\alpha}f"]\arrow[dr,swap,"B_{\alpha}i"] & & B_{\alpha}\Omega_{\alpha} D\\ & B_{\alpha}A \arrow[ur,"B_{\alpha}p"]& \\ C\arrow[uu,"\nu_{\alpha}C"]\arrow[rr,"f" near end]\arrow[dr,dotted,"\tilde{i}"] && D\arrow[uu,"\nu_{\alpha}D"]\\ & B_{\alpha}A\times_{B_{\alpha}\Omega_{\alpha}D}D\ar[uu,crossing over]\arrow[ur,"\tilde{p}"] & \end{tikzcd} \] We first show that $\tilde{i}$ is an admissible monomorphism and $\tilde{p}$ is a fibration. The map $\tilde{p}$ is a fibration since it is the pullback of $B_{\alpha}p$, which is a fibration by Proposition \ref{basicallyQuillen}. The map $B_{\alpha}i\circ\nu_{\alpha}C$ is given by the composite $\mathfrak{C}(i_{C})\circ\Delta_{C}$ where $i_{C}$ is the restriction of $i$ to $C$. It is clearly an admissible monomorphism. 
The composition $B_{\alpha}i\circ\nu_{\alpha}C$ is an admissible monomorphism. Hence $\tilde{i}$ is as well. Suppose that $i$ is a weak equivalence. Then $B_{\alpha}i$ is a weak equivalence by Proposition \ref{preQuillen}. By Assumption \ref{times12} the map $B_{\alpha}A\times_{B_{\alpha}\Omega_{\alpha}D}D\rightarrow B_{\alpha}A$ is a weak equivalence. By the two-out-of-three property $\tilde{i}$ is a weak equivalence. A similar proof shows that $\tilde{p}$ is a weak equivalence if $p$ is.

Now suppose Assumption \ref{times12} is satisfied, $\mathpzc{M}$ is elementary, and every object of $\mathpzc{M}$ is cofibrant. We claim that $\tilde{i}$ is a strict cofibration. Since $\mathpzc{M}$ is elementary and every object of $\mathpzc{M}$ is cofibrant, every admissible monomorphism splits. Thus it suffices to prove that $\tilde{i}$ is an admissible monomorphism. Now $\nu_{\alpha}C$ is a split monomorphism, and hence admissible. Since $i$ is a cofibration in $\mathpzc{Alg}_{\mathfrak{P}}^{|c|}$, $B_{\alpha}(i)$ is a strict cofibration and hence an admissible monomorphism. Therefore $B_{\alpha}i\circ\nu_{\alpha}C=j\circ\tilde{i}$ is a split monomorphism. Hence $\tilde{i}$ is a split monomorphism. Thus we get the factorisation axioms for a model category. It remains to prove the lifting property. Consider the following commutative diagram in $\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|,\alpha-adm}$ \begin{displaymath} \xymatrix{ E\ar@{>->}[d]^{c}\ar[r] & C\ar@{->>}[d]^{f}\\ F\ar@{-->}[ur]^{h}\ar[r] & D } \end{displaymath} where $c$ is a cofibration and $f$ is a fibration. The right lifting property for fibrations against acyclic cofibrations is built into the definition of fibration. Thus it remains to check the right lifting property for acyclic fibrations against cofibrations $c$. We may assume that $c$ is a strict cofibration. We suppose now that $f$ is a weak equivalence. 
By the previous part of the proof we may factor $f$ as $\tilde{p}\circ\tilde{i}$ where $\tilde{i}$ is a strict cofibration, $\tilde{p}$ is a fibration, and by the $2$-out-of-$3$ property, both are weak equivalences. Again by the definition of fibration, there is a lift in the following diagram \begin{displaymath} \xymatrix{ C\ar@{>->}[d]^{\tilde{i}}\ar[r]^{id_{C}} & C\ar@{->>}[d]^{f}\\ B_{\alpha}A\times_{B_{\alpha}\Omega_{\alpha}D}D\ar@{-->}[ur]^{r}\ar@{->>}[r]^{\;\;\;\;\;\tilde{p}} & D } \end{displaymath} It therefore remains to find a lift in the diagram \begin{displaymath} \xymatrix{ E\ar@{>->}[d]^{c}\ar[r] & B_{\alpha}A\times_{B_{\alpha}\Omega_{\alpha}D}D\ar@{->>}[d]^{\tilde{p}}\\ F\ar@{-->}[ur]\ar[r] & D } \end{displaymath} By the universal property of the pullback, it is sufficient to find a lift in the diagram \begin{displaymath} \xymatrix{ E\ar@{>->}[d]^{c}\ar[r] & B_{\alpha}A\ar[d]^{B_{\alpha}p}\\ F\ar[r]\ar@{-->}[ur] & B_{\alpha}\Omega_{\alpha}D } \end{displaymath} By adjunction we can instead consider the diagram \begin{displaymath} \xymatrix{ \Omega_{\alpha}E\ar@{>->}[d]^{\Omega_{\alpha}c}\ar[r] & A\ar[d]^{p}\\ F\ar[r]\ar@{-->}[ur] & B_{\alpha}\Omega_{\alpha}D } \end{displaymath} By Proposition \ref{preQuillen} $\Omega_{\alpha}c$ is a cofibration, and by assumption $p$ is an acyclic fibration. Using the model category structure on $\mathpzc{Alg}_{\mathfrak{P}}$ gives the required lifting. \end{proof} \subsection{Connective Koszul Duality} Classically \cite{quillen1969rational} Koszul duality was formulated for bounded complexes of ${\mathbb Q}$-vector spaces. In this section we consider bounded Koszul duality in the more general setting of connective Koszul categories. Let $\mathfrak{P}$ be a non-negatively graded operad, and $\mathfrak{C}$ a non-negatively graded co-operad. \begin{defn} An object $C$ of $\mathpzc{coAlg}_{(\mathfrak{C})_{top}}$ is said to be \textbf{connected} if $C$ is concentrated in positive degrees. 
\end{defn} If $C$ is a connected $\mathfrak{C}$-coalgebra, we define the \textbf{connected filtration} on $\Omega_{\alpha}(C)$ as follows. We set $$F_{p}\Omega_{\alpha}C=\bigoplus_{n\ge -p}\mathfrak{P}(n)\otimes_{\Sigma_{n}}|C|^{\otimes n}_{filt}$$ For $n\in\mathbb{Z}$ we denote by $\mathpzc{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm,\ge n}$ the full subcategory of $\mathpzc{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}$ whose underlying complexes are concentrated in degrees $\ge n$. We also denote by $\mathpzc{Alg}_{\mathfrak{P}}^{\ge n}$ the category of $\mathfrak{P}$-algebras concentrated in degrees $\ge n$. These are both relative categories in the obvious way. \begin{prop}\label{qisconn} Suppose that $f:C\rightarrow D$ is a map of connected cofibrant $(\mathfrak{C})_{top}$-coalgebras, that $\alpha|_{\mathfrak{C}(1)}$ vanishes, and that $f$ is a quasi-isomorphism. Then $\Omega_{\alpha}(f)$ is a weak equivalence. \end{prop} \begin{proof} We adapt the proof of \cite{loday2012algebraic} Lemma 11.2.3. Consider the connected filtrations on $\Omega_{\alpha}C$ and $\Omega_{\alpha}D$. The condition that $\alpha|_{\mathfrak{C}(1)}$ vanishes ensures that $d_{\alpha}$ lowers the filtration. The filtration is clearly bounded. Moreover $\textrm{gr}(\Omega_{\alpha}f)=\mathfrak{P}(f)$, which is a weak equivalence. \end{proof} Let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a Koszul morphism. There is an adjunction $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}_{\mathfrak{C}}^{\ge n}}{\mathpzc{Alg}_{\mathfrak{P}}^{\ge n}}{B_{\alpha}}$$ which restricts to an adjunction $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}^{|K^{f}|,\alpha-adm,\ge n}_{\mathfrak{C}}}{\mathpzc{Alg}_{\mathfrak{P}}^{|K|,\ge n}}{B_{\alpha}}$$ \begin{thm} Let $\mathpzc{M}$ be a strong Koszul category. The adjunction $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}^{|K^{f}|,\alpha-adm,\ge n}_{\mathfrak{C}}}{\mathpzc{Alg}_{\mathfrak{P}}^{|K|,\ge n}}{B_{\alpha}}$$ induces an adjoint equivalence of $(\infty,1)$-categories. 
$$\adj{\Omega_{\alpha}}{\textbf{coAlg}^{|K^{f}|,\alpha-adm,\ge n}_{\mathfrak{C}}}{\textbf{Alg}_{\mathfrak{P}}^{\ge n}}{\textbf{B}_{\alpha}}$$ Moreover $\textbf{coAlg}^{|K^{f}|,\alpha-adm,\ge n}_{\mathfrak{C}}$ is the localisation of $\mathpzc{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm,\ge n}$ at the class of quasi-isomorphisms. Finally if $\mathpzc{M}$ satisfies Assumption \ref{times12} then every morphism in $\mathpzc{coAlg}^{|K^{f}|,\alpha-adm,\ge n}_{\mathfrak{C}}$ factors as an admissible monomorphism which is a weak equivalence followed by an $\alpha$-fibration. If $\alpha$ satisfies Assumption \ref{times12}, every object in $\mathpzc{M}$ is cofibrant, and $\mathpzc{M}$ is elementary, then there is a model structure on $\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|,\alpha-adm,\ge n}$. \end{thm} \begin{proof} The first claim is clear, and the second follows immediately from Proposition \ref{qisconn}. For the final two claims all that remains is to observe that $\mathpzc{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm,\ge n}$ (resp. $\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|,\alpha-adm,\ge n}$) is closed under the factorisation procedure in $\mathpzc{coAlg}_{\mathfrak{C}}^{|K^{f}|}$ (resp. $\mathpzc{coAlg}_{\mathfrak{C}}^{|c^{f}|}$). \end{proof} Applied to the twisting morphism $\kappa:\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}\rightarrow\mathfrak{Lie}$ in the case $\mathpzc{E}={}_{k}\mathpzc{Vect}$ for $k$ a field of characteristic $0$, this recovers the famous result of Quillen \cite{quillen1969rational}. Indeed suppose $\mathpzc{M}$ is elementary and every object in $\mathpzc{M}$ is cofibrant. There is a model category structure on $\mathpzc{coAlg}_{\mathfrak{coComm}^{nu}}^{|c^{f}|,\alpha-adm,\ge r+1}$ such that the bar-cobar adjunction induces a Quillen equivalence. 
$$\adj{\Omega_{\kappa}}{\mathpzc{coAlg}_{\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}}^{|c^{f}|,\alpha-adm,\ge r}}{\mathpzc{Alg}_{\mathfrak{Lie}}^{|c|,\ge r}}{B_{\kappa}}$$ Consider the model category structure on $\mathpzc{coAlg}_{\mathfrak{coComm}^{nu}}^{|c^{f}|,\alpha-adm,\ge r+1}$ induced by the equivalence $$[1]:\mathpzc{coAlg}_{\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}}^{|c^{f}|,\alpha-adm,\ge r}\rightarrow \mathpzc{coAlg}_{\mathfrak{coComm}^{nu}}^{|c^{f}|,\alpha-adm,\ge r+1}$$ There is then a Quillen equivalence $$\adj{\Omega_{\kappa}\circ[-1]}{\mathpzc{coAlg}_{\mathfrak{coComm}^{nu}}^{|c^{f}|,\alpha-adm,\ge r+1}}{\mathpzc{Alg}_{\mathfrak{Lie}}^{|c|,\ge r}}{[1]\circ B_{\kappa}}$$ \section{Operadic Koszul Duality}\label{secopkosz} In this section we fix a Koszul twisting morphism $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ in a Koszul category $\mathpzc{M}$. We shall assume that $\mathpzc{M}$ is \textit{closed Koszul}, i.e. that $\mathpzc{M}$ is a closed monoidal model category. We are going to consider the functor $$\hat{C}_{\alpha}:\mathpzc{Alg}_{\mathfrak{P}}(\mathpzc{M})\rightarrow\mathpzc{Alg}_{\mathfrak{S}^{c}\otimes_{H}\mathfrak{C}^{\vee}}(\mathpzc{M})$$ defined in Section \ref{appcatalg}, which is given by the composition $[-1]\circ(-)^{\vee}\circ B_{\alpha}$. \subsection{Operadic Koszul Categories and Morphisms} For the functor $\hat{C}_{\alpha}$ to have nice properties we need some assumptions on both $\mathpzc{M}$ and $\alpha$. \begin{defn} A closed Koszul category $\mathpzc{M}$ is said to be \textbf{operadic} if any cofibrant object $X$ is finitely $K$-cotorsion and the functor $\prod_{n\in\mathbb{N}_{0}}$ preserves weak equivalences. \end{defn} \begin{example} Let $\mathpzc{M}$ be an elementary Koszul category which is closed. Then $\mathpzc{M}$ is operadic. Indeed, in an exact category with enough projectives, filtered projective limits are exact by the proof of Proposition \ref{boundedbelowKcotors}. 
It remains to check that cofibrant objects are finitely $K$-cotorsion. However in a closed monoidal model category the map $Hom(C,F)\rightarrow\mathbb{R}Hom(C,F)$ is an equivalence whenever $C$ is cofibrant and $F$ fibrant. In an elementary Koszul category every object is fibrant. \end{example} \begin{defn} A Koszul morphism $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ is said to be \textbf{operadic} if $(\mathfrak{S}^{c}\otimes_{H}\mathfrak{C})^{\vee}$ is a rectifiably admissible operad. \end{defn} From now on $\mathpzc{M}$ will be an operadic Koszul category and $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ will be an operadic Koszul morphism. \begin{prop} Let $\mathpzc{M}$ be an operadic Koszul category and $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ an operadic Koszul morphism. Then the functor $(-)^{\vee}:\mathpzc{coAlg}_{\mathfrak{C}}\rightarrow(\mathpzc{Alg}_{\mathfrak{C}^{\vee}})^{op}$ induces a functor of $(\infty,1)$-categories $$(-)^{\vee}:\textbf{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}(\mathpzc{M})\rightarrow\textbf{Alg}_{\mathfrak{C}^{\vee}}(\mathpzc{M})^{op}$$ \end{prop} By Theorem \ref{preQuillen} and Section \ref{Kcotorproj} there is an induced functor of $(\infty,1)$-categories $$\hat{\textbf{C}}_{\alpha}:\textbf{Alg}_{\mathfrak{P}}\rightarrow\textbf{Alg}_{\mathfrak{S}^{c}\otimes_{H}\mathfrak{C}^{\vee}}$$ We are going to prove the following. The approach is a generalisation of \cite{DAGX} Proposition 2.2.12. \begin{thm}\label{alphadjoint} The functor $\hat{\textbf{C}}_{\alpha}:\textbf{Alg}_{\mathfrak{P}}\rightarrow(\textbf{Alg}_{\mathfrak{S}^{c}\otimes_{H}\mathfrak{C}^{\vee}})^{op}$ admits a right adjoint $\textbf{D}_{\alpha}$. \end{thm} \begin{proof} Using Lurie's $(\infty,1)$-adjoint functor theorem and noting that $\textbf{Alg}_{\mathfrak{P}}$ is locally presentable, we need to show that $\hat{\textbf{C}}_{\alpha}$ preserves colimits. 
Since the functor $|-|:\textbf{Alg}_{\mathfrak{S}^{c}\otimes_{H}\mathfrak{C}^{\vee}}\rightarrow\textbf{M}$ is conservative, preserves and reflects limits, and preserves sifted colimits by Section \ref{siftedgeneration}, it suffices to show that $|-|\circ\hat{\textbf{C}}_{\alpha}:\textbf{Alg}_{\mathfrak{P}}\rightarrow(\textbf{M})^{op}$ preserves colimits. Let us first show that it preserves sifted colimits. Now we have a commutative diagram \begin{displaymath} \xymatrix{ \textbf{Alg}_{\mathfrak{P}}\ar[d]^{\textbf{B}_{\alpha}}\ar[r]^{\textbf{B}_{\alpha}} &\textbf{coAlg}^{|K^{f}|,\alpha-adm}_{\mathfrak{C}}\ar[r]^{(-)^{\vee}} &(\textbf{Alg}_{\mathfrak{S}^{c}\otimes_{H}\mathfrak{C}^{\vee}})^{op}\ar[d]^{|-|}\\ \textbf{coAlg}^{|K^{f}|,\alpha-adm}_{\mathfrak{C}} \ar[r]^{|-|}& \textbf{M} \ar[r]^{(-)^{\vee}} & (\textbf{M})^{op} } \end{displaymath} and both compositions are equal to $\hat{\textbf{C}}_{\alpha}$. The functor $(-)^{\vee}:\textbf{M}\rightarrow (\textbf{M})^{op}$ is a left adjoint so it preserves all colimits. Therefore we reduce to showing that $|-|\circ \textbf{B}_{\alpha}$ preserves sifted colimits. There is a factorisation of $|-|\circ \textbf{B}_{\alpha}$ \begin{displaymath} \xymatrix{ \textbf{Alg}_{\mathfrak{P}}\ar[r]^{\textbf{B}_{\alpha}} & \textbf{coAlg}^{|K^{f}|,\alpha-adm}_{\mathfrak{C}}\ar[rr]^{(-)_{top}} && \textbf{Filt}(\textbf{M})\ar[rr]^{(-)_{top}} & & \textbf{M} } \end{displaymath} where $(-)_{top}$ is the forgetful functor. The functor $(-)_{top}$ is colimit preserving. Therefore it remains to show that $|-|\circ (-)_{top}\circ \textbf{B}_{\alpha}$ preserves sifted colimits. Now $|-|\circ (-)_{0}\circ (-)_{top}\circ\textbf{B}_{\alpha}=|-|$, which preserves sifted colimits. By Proposition \ref{inductgraded} we finally reduce to showing that the composition $\textbf{gr}\circ (-)_{top}\circ |-|\circ \textbf{B}_{\alpha}$ is sifted colimit preserving. 
But this functor is equivalent to the composition \begin{displaymath} \xymatrix{ \textbf{Alg}_{\mathfrak{P}}\ar[r]^{|-|} & \textbf{M}\ar[r]^{|-|\circ\mathfrak{C}(-)} & \textbf{M} } \end{displaymath} All the functors in this composition preserve sifted colimits by Section \ref{siftedgeneration}, so we are done. It remains to show that $\hat{\textbf{C}}_{\alpha}$ sends finite coproducts to finite products. Now by Section \ref{siftedgeneration} the category $\textbf{Alg}_{\mathfrak{P}}$ is generated under sifted colimits by free objects $\mathfrak{P}(V)$ on cofibrant objects $V$. Thus it is enough to show that $\hat{\textbf{C}}_{\alpha}$ sends finite coproducts of the form $\mathfrak{P}(V)\coprod\mathfrak{P}(W)\cong\mathfrak{P}(V\oplus W)$ to products. But $$\hat{\textbf{C}}_{\alpha}(\mathfrak{P}(V))\cong(\mathfrak{C}\circ_{\alpha}\mathfrak{P}(V))^{\vee}\cong V^{\vee}$$ So if \begin{displaymath} \xymatrix{ \mathfrak{P}(0)\ar[r]\ar[d] & \mathfrak{P}(V)\ar[d]\\ \mathfrak{P}(W)\ar[r] & \mathfrak{P}(V\oplus W) } \end{displaymath} is a coproduct diagram in $\textbf{Alg}_{\mathfrak{P}}$ then applying $\hat{\textbf{C}}_{\alpha}$ gives the diagram \begin{displaymath} \xymatrix{ V^{\vee}\oplus W^{\vee}\ar[r]\ar[d] & V^{\vee}\ar[d]\\ W^{\vee}\ar[r] & 0 } \end{displaymath} which is a product diagram in $\textbf{Alg}_{\mathfrak{C}^{\vee}}$. \end{proof} Since we use the adjoint functor theorem, the proof of the existence of $\textbf{D}_{\alpha}$ is not constructive. However for Koszul duality between Lie algebras and commutative algebras we will give an interpretation of $\textbf{D}_{\alpha}$ in terms of the shifted tangent complex.
\subsection{Main Example: The Lie Operad} Recall that in the category ${}_{{\mathbb Q}}\mathpzc{Vect}$ of vector spaces over ${\mathbb Q}$ there is a Koszul morphism $$\kappa:\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}\rightarrow\mathfrak{Lie}$$ This follows from the fact, which we will not explore in detail here, that $\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}$ is the quadratic dual cooperad of $\mathfrak{Lie}$. See \cite{dehling2017weak} Section 2.1 for details. This duality also extends to monoidal elementary exact categories. Let us now study this example in greater detail, in particular its relevance to (affine) derived geometry. \subsubsection{The Shifted Tangent Complex} Recall the functor $\mathbb{L}_{0}:\textbf{Alg}^{aug}_{\mathfrak{Comm}}(\mathpzc{M})\rightarrow\textbf{M}$ defined in Section \ref{seccotangentcomplex}. The \textbf{tangent complex functor} is the composition $\mathbb{T}_{0}\defeq (-)^{\vee}\circ\mathbb{L}_{0}$. We shall abuse notation and write $\textbf{D}_{\kappa}:\textbf{Alg}^{aug}_{\mathfrak{Comm}}(\mathpzc{M})\rightarrow\textbf{Alg}_{\mathfrak{Lie}}(\mathpzc{M})^{op}$ for the composite $\textbf{D}_{\kappa}\circ I:\textbf{Alg}_{\mathfrak{Comm}}^{aug}(\mathpzc{M})\rightarrow \textbf{Alg}_{\mathfrak{Comm}^{nu}}(\mathpzc{M})\rightarrow\textbf{Alg}_{\mathfrak{Lie}}(\mathpzc{M})^{op}$. Here $I$ is the functor involved in the construction of the cotangent complex in Section \ref{seccotangentcomplex}. In classical Koszul duality for Lie algebras and commutative algebras the functor $$ |-|\circ\textbf{D}_{\kappa}:\textbf{Alg}^{aug}_{\mathfrak{Comm}}(Ch(\mathpzc{Vec}_{k}))\rightarrow\textbf{Alg}_{\mathfrak{Lie}}(\textbf{Ch}(\mathpzc{Vec}_{k}))^{op}\rightarrow\textbf{Ch}(\mathpzc{Vec}_{k})$$ is equivalent to the shifted tangent complex. We will now see that this works when $\mathpzc{M}$ is enriched over ${\mathbb Q}$. \begin{prop}\label{shiftedtanget} The functor $|-|\circ\textbf{D}_{\kappa}$ is naturally equivalent to the shifted tangent complex functor $\mathbb{T}_{0}[1]$.
\end{prop} \begin{proof} We show that $|-|\circ \textbf{D}_{\kappa}$ and $\mathbb{T}_{0}[1]$ are both right adjoint to the same functor. Now $|-|\circ\textbf{D}_{\kappa}$ is right adjoint to the functor $\hat{\textbf{C}}_{\kappa}\circ\mathfrak{Lie}(-)$ which is equivalent to the functor $R\ltimes(-)^{\vee}[-1]$. But this functor is left adjoint to $\mathbb{T}_{0}[1]$. \end{proof} \begin{prop}\label{doubledualunit} After forgetting to $\textbf{M}$, the unit $|\eta_{\mathfrak{g}}|:|\mathfrak{g}|\rightarrow|\textbf{D}_{\kappa}\circ\hat{\textbf{C}}_{\kappa}(\mathfrak{g})|$ factors through the map $|\mathfrak{g}|\rightarrow\mathbb{R}(|\mathfrak{g}|)^{\vee\vee}$. \end{prop} \begin{proof} Consider the map $\textbf{L}(|\mathfrak{g}|)\rightarrow\mathfrak{g}$. There is a commutative diagram \begin{displaymath} \xymatrix{ |\mathfrak{g}|\ar[r]\ar@{=}[d] & |\textbf{L}(|\mathfrak{g}|)|\ar[d]\ar[r] & |\textbf{D}_{\kappa}\circ\hat{\textbf{C}}_{\kappa}\circ\textbf{L}(|\mathfrak{g}|)|\ar[d]\\ |\mathfrak{g}|\ar@{=}[r]& |\mathfrak{g}|\ar[r] & |\textbf{D}_{\kappa}\circ\hat{\textbf{C}}_{\kappa}(\mathfrak{g})| } \end{displaymath} The top horizontal composition is the unit of the composite adjunction $$\adj{\hat{\textbf{C}}_{\kappa}\circ\textbf{L}}{\textbf{M}}{(\textbf{Alg}_{\mathfrak{Comm}}^{aug})^{op}}{|-|\circ\textbf{D}_{\kappa}}$$ which is the adjunction $$\adj{R\ltimes(-)^{\vee}[-1]}{\textbf{M}}{(\textbf{Alg}_{\mathfrak{Comm}}^{aug})^{op}}{\mathbb{T}_{0}[1]}$$ This can be written as the composition of the adjunctions $$\adj{(-)^{\vee}\circ[-1]}{\textbf{M}}{\textbf{M}^{op}}{[1]\circ (-)^{\vee}},\;\;\;\;\adj{R\ltimes(-)}{\textbf{M}}{(\textbf{Alg}_{\mathfrak{Comm}}^{aug}(\textbf{M}))^{op}}{\mathbb{L}_{0}}$$ In particular the unit of the composite adjunction factors through the unit of the first adjunction, which is $M\rightarrow\mathbb{R}(M)^{\vee\vee}$. \end{proof} \begin{defn} A Lie algebra $\mathfrak{g}$ is said to be \textbf{very passable} if it satisfies the following properties.
\begin{enumerate} \item $\mathfrak{g}$ is separable. \item $|\mathfrak{g}|$ is finitely $K$-cotorsion and $|\mathfrak{g}|^{\vee}$ is $\mathfrak{G}$-positively graded, where $\mathfrak{G}$ consists of flat objects. \item For each $0\le i<\infty$, $\mathfrak{g}_{i}^{\vee}$ is weakly $\aleph_{1}$-filtered relative to the class $R\otimes(|\mathfrak{g}_{-1}|^{\vee})^{\otimes}\otimes Sym(|\mathfrak{g}_{-1}|^{\vee})$. \end{enumerate} $\mathfrak{g}$ is said to be \textbf{very good} if in addition $|\mathfrak{g}|$ is homotopically reflexive. $\mathfrak{g}$ is said to be \textbf{passable} (resp. \textbf{good}) if it is equivalent to a very passable (resp. very good) algebra. \end{defn} Let us populate the class of very good algebras. Following \cite{hennion2015tangent} we define cellular finite Lie algebras. \begin{example}\label{goodthings} Say that a Lie algebra $\mathfrak{g}$ is cellular finite if there is a filtration $$0=L_{0}\rightarrow L_{1}\rightarrow\ldots\rightarrow L_{m}=\mathfrak{g}$$ where for each $0\le n<m$ there is a pushout diagram \begin{displaymath} \xymatrix{ L(R\otimes S^{m_{n}-1}(k^{p_{m_{n}}}))\ar[d]\ar[r] & L_{n}\ar[d]\\ L(R\otimes D^{m_{n}}(k^{p_{m_{n}}}))\ar[r] & L_{n+1} } \end{displaymath} with $m_{n}\le -1$. This is the definition in \cite{hennion2015tangent} Definition 1.3.7 of a very good Lie algebra. Then $\mathfrak{g}$ is very passable. Indeed the presentation of $\mathfrak{g}$ immediately implies that it is cofibrant, and hence finitely $K$-cotorsion. Its $R$-linear dual is of the form $\prod_{n\ge 1}R^{p_{m_{n}}}[-m_{n}]$. This is in fact isomorphic to $\bigoplus_{n\ge 1}R^{p_{m_{n}}}[-m_{n}]$. The underlying $R$-module is quasi-free on a bounded below cofibrant object, and hence is cofibrant. It is also $\mathfrak{G}$-non-negatively graded, where $\mathfrak{G}$ consists of flat objects. Therefore it is $K$-flat and finitely $K$-cotorsion.
If $R$ is cohomologically bounded as a complex, then $\mathfrak{g}$ is very good. \end{example} \begin{prop}\label{conditionfordeceny} Let $V$ be a flat object in $\mathpzc{E}$, regarded as a complex concentrated in degree $0$. Suppose that $V$ is weakly $\aleph_{1}$-filtered relative to the class $R\otimes V^{\otimes}\otimes Sym(V)$. Then $\hat{S}_{R}(R\otimes V)\otimes^{\mathbb{L}}_{S_{R}(R\otimes V)}R\cong \hat{S}_{R}(R\otimes V)\otimes_{S_{R}(R\otimes V)}R\cong R$. Hence the diagram \begin{displaymath} \xymatrix{ S_{R}(R\otimes V)\ar[d]\ar[r] & R\ar[d]\\ \hat{S}_{R}(R\otimes V)\ar[r] & R } \end{displaymath} is a homotopy pushout. In particular $V$ is decent. \end{prop} \begin{proof} Consider the Koszul resolution of $R$ from Proposition \ref{Koszulfree}, $$S_{R}^{c}(R\otimes V[1])\otimes_{\kappa} S_{R}(R\otimes V)\rightarrow R$$ This is a resolution by a $K$-flat object. Moreover regarding $R$ as a graded object concentrated in degree $0$, and with the tensor product grading induced from the gradings on $S_{R}^{c}(R\otimes V[1])$ and $S_{R}(R\otimes V)$, this is a graded equivalence. Tensoring with $\hat{S}_{R}(R\otimes V)$ gives the complex $$S_{R}^{c}(R\otimes V[1])\otimes_{\kappa} \hat{S}_{R}(R\otimes V)$$ We want to show that the map $S_{R}^{c}(R\otimes V[1])\otimes_{\kappa} \hat{S}_{R}(R\otimes V)\rightarrow R$ is an equivalence. By the $\aleph_{1}$-filtered assumptions, each graded piece of $S_{R}^{c}(R\otimes V)$ is $\aleph_{1}$-filtered relative to $R\otimes V^{\otimes}\otimes Sym(V)$. Therefore $$S_{R}^{c}(R\otimes V[1])\otimes_{\kappa} \hat{S}_{R}(R\otimes V)\cong\prod_{n=0}^{\infty}(S_{R}^{c}(R\otimes V[1])\otimes_{\kappa} S_{R}(R\otimes V))_{n}$$ Since countable products are exact, the map $\prod_{n=0}^{\infty}(S_{R}^{c}(R\otimes V[1])\otimes_{\kappa} S_{R}(R\otimes V))_{n}\rightarrow R$ is an equivalence, and we're done.
\end{proof} Recall the subalgebra $C_{\kappa}(\mathfrak{g})\rightarrow\hat{C}_{\kappa}(\mathfrak{g})$ considered in Proposition \ref{polykoszul} for $\mathfrak{g}$ separable. By Proposition \ref{conditionfordeceny}, Proposition \ref{completiontwoways} and Proposition \ref{completionhpush} we get the following. \begin{cor}\label{koszulpushout} Let $\mathfrak{g}$ be very passable. The map $\hat{S}(\mathfrak{g}_{-1}^{\vee})\otimes^{\mathbb{L}}_{S(\mathfrak{g}_{-1}^{\vee})}C_{\kappa}(\mathfrak{g})\rightarrow\hat{C}_{\kappa}(\mathfrak{g})$ is an equivalence. \end{cor} The abstract machinery we have set up allows us to generalise Lurie's \cite{DAGX} Lemma 2.3.5 and its proof. \begin{thm}\label{operadickoszul} Let $\mathpzc{M}$ be an elementary Koszul category. If $\mathfrak{g}$ is passable then the map $\mathbb{R}(|\mathfrak{g}|)^{\vee\vee}\rightarrow|\textbf{D}_{\kappa}\circ\hat{\textbf{C}}_{\kappa}(\mathfrak{g})|$ is an equivalence. In particular if $\mathfrak{g}$ is good then the unit is an equivalence. \end{thm} \begin{proof} Suppose that $\mathfrak{g}$ is passable. Without loss of generality we may assume that $\mathfrak{g}$ is very passable. It suffices to show that the map of complexes $\mathbb{R}(|\mathfrak{g}|)^{\vee\vee}\rightarrow\mathbb{T}_{0}(\hat{C}_{\kappa}(\mathfrak{g}))[1]$ is an equivalence. The proof of Proposition \ref{doubledualunit} shows that this is obtained from the map $$\mathbb{L}_{0}(\hat{C}_{\kappa}(\mathfrak{g}))\rightarrow|\mathfrak{g}^{\vee}|[-1]$$ by applying $\mathbb{R}Hom(-,R)$, so it suffices to show that this map is an equivalence. By Corollary \ref{koszulpushout} we have an equivalence $\mathbb{L}_{\hat{S}(\mathfrak{g}_{-1}^{\vee})\big\slash S(\mathfrak{g}_{-1}^{\vee})}\otimes^{\mathbb{L}}_{\hat{S}(\mathfrak{g}_{-1}^{\vee})}\hat{C}_{\kappa}(\mathfrak{g}) \cong\mathbb{L}_{\hat{C}_{\kappa}(\mathfrak{g})\big\slash C_{\kappa}(\mathfrak{g})}$.
In particular it suffices to show that $$\mathbb{L}_{0}(C_{\kappa}(\mathfrak{g}))\rightarrow|\mathfrak{g}^{\vee}|[-1]$$ is an equivalence. This follows from Proposition \ref{cotangentqfree}. \end{proof} \section{Examples, Applications, and Further Directions}\label{secexamples} \subsection{Algebraic Koszul Duality} Let $R$ be any differentially graded ring in $Ch(\mathpzc{Ab})$, and let $\alpha:\mathfrak{C}\rightarrow\mathfrak{P}$ be a Koszul morphism. There is an equivalence of $(\infty,1)$-categories $$\textbf{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}({}_{R}\mathpzc{Mod})\cong\textbf{Alg}_{\mathfrak{P}}({}_{R}\mathpzc{Mod})$$ Suppose now that $R=R_{0}$ is concentrated in degree $0$ and is a von Neumann regular ring (for example a field or, more generally, a semisimple ring). Then every object of $Ch({}_{R}\mathpzc{Mod})$ is $K$-flat and every monomorphism is pure. Moreover since ${}_{R}\mathpzc{Mod}$ is abelian, the conditions of Proposition \ref{gradestronmon} are always satisfied. In particular the categories $\mathpzc{coAlg}^{conil}_{(\mathfrak{C})_{top}}$ and $\mathpzc{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}$ coincide. There is therefore an equivalence of $(\infty,1)$-categories $$\textbf{coAlg}_{(\mathfrak{C})_{top}}^{conil}({}_{R}\mathpzc{Mod})\cong\textbf{Alg}_{\mathfrak{P}}({}_{R}\mathpzc{Mod})$$ Equip $Ch({}_{R}\mathpzc{Mod})$ with the projective model structure, so that it is an elementary Koszul category. For $R=R_{0}$ a field every object is cofibrant, and according to Lemma B.1 in \cite{vallette2014homotopy} every Koszul morphism satisfies Assumption \ref{times12}. Thus in this case the equivalence of $(\infty,1)$-categories arises from a Quillen equivalence $$\adj{\Omega_{\alpha}}{\mathpzc{coAlg}^{conil}_{(\mathfrak{C})_{top}}({}_{R}\mathpzc{Mod})}{\mathpzc{Alg}_{\mathfrak{P}}({}_{R}\mathpzc{Mod})}{B_{\alpha}}$$ which is Theorem 2.1 (1) and (2) of \cite{vallette2014homotopy}.
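For orientation, it may help to recall what this Quillen equivalence amounts to in the most classical case; the following formulas are standard (see for instance \cite{vallette2014homotopy}) and are recalled only as a sanity check, in the classical notation rather than that of the present paper. For an augmented dg algebra $A$ and a conilpotent dg coalgebra $C$ over a field, the cobar and bar constructions are $$\Omega C=(T(\bar{C}[-1]),d_{1}+d_{2}),\qquad BA=(T^{c}(\bar{A}[1]),d_{1}+d_{2})$$ where in each case $d_{1}$ is induced by the internal differential, while $d_{2}$ is induced by the comultiplication of $C$, respectively the multiplication of $A$.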
Let $R_{0}$ contain ${\mathbb Q}$, and consider the Koszul morphism $\kappa:\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}\rightarrow\mathfrak{Lie}$. Suppose that $R$ is cohomologically bounded. As we have mentioned, the cellular finite Lie algebras from Example \ref{goodthings} in this case include the very good algebras of \cite{hennion2015tangent}. Since $R_{0}$ is not required to be Noetherian, Theorem \ref{operadickoszul} in fact generalises \cite{hennion2015tangent} Lemma 1.4.12. If $R=R_{0}=k$ is a field, and $\mathfrak{g}$ is concentrated in negative degrees with each $\mathfrak{g}_{n}$ free of finite rank, then $\mathfrak{g}$ is very good. Indeed in these cases any complex of free objects of finite rank is split. In particular it is a coproduct of objects of the form $S^{n}(k^{m})$ and $D^{r}(k^{s})$, so it is cofibrant. This recovers Lurie's conditions in Lemma 2.3.5 of \cite{DAGX}. \subsection{Geometric Examples} \subsubsection{Sheaves on Spaces} Let $\mathcal{X}$ be a topological space and let $\mathcal{O}_{\mathcal{X}}$ be a sheaf of rings on $\mathcal{X}$. By \cite{gillespie2006flat}, when equipped with the flat model structure, $_{\mathcal{O}_{\mathcal{X}}}\mathpzc{Mod}$ is a pre-Koszul category. Thus we get an equivalence of $(\infty,1)$-categories $$\textbf{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}(Ch(_{\mathcal{O}_{\mathcal{X}}}\mathpzc{Mod}))\cong\textbf{Alg}_{\mathfrak{P}}(Ch(_{\mathcal{O}_{\mathcal{X}}}\mathpzc{Mod}))$$ Now flatness is a local condition. Therefore if $\mathcal{O}_{\mathcal{X}}$ is locally von Neumann regular (for example if $\mathcal{O}_{\mathcal{X}}$ is the constant sheaf associated to a field), every object in $Ch({}_{\mathcal{O}_{X}}\mathpzc{Mod})$ is cofibrant.
Once again we get an equivalence of $(\infty,1)$-categories $$\textbf{coAlg}_{(\mathfrak{C})_{top}}^{conil}(Ch({}_{\mathcal{O}_{\mathcal{X}}}\mathpzc{Mod}))\cong\textbf{Alg}_{\mathfrak{P}}(Ch({}_{\mathcal{O}_{\mathcal{X}}}\mathpzc{Mod}))$$ where $\textbf{coAlg}_{(\mathfrak{C})_{top}}^{conil}(Ch({}_{\mathcal{O}_{\mathcal{X}}}\mathpzc{Mod}))$ is the localisation of the entire category of conilpotent $(\mathfrak{C})_{top}$-coalgebras. \subsubsection{Quasi-coherent Sheaves on Stacks} Let $\mathcal{X}$ be an Artin stack with enough flat objects (for example if $\mathcal{X}$ is geometric), and let $\mathcal{O}_{\mathcal{X}}$ be its structure sheaf. By Theorem 8.1 in \cite{estrada2014derived}, $QCoh(\mathcal{X})$ is a pre-Koszul category. Therefore there is an equivalence of $(\infty,1)$-categories $$\textbf{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}(QCoh(\mathcal{X}))\cong\textbf{Alg}_{\mathfrak{P}}(QCoh(\mathcal{X}))$$ \subsection{Analytic Koszul Duality} Every category considered thus far has been abelian. Let us consider some quasi-abelian examples. Let $k$ be a Banach ring, and let $\mathpzc{E}$ denote either $Ind(Ban_{k})$, the formal completion of $Ban_{k}$ by inductive limits, or its full subcategory $CBorn_{k}$ of complete bornological spaces over $k$. For details about these monoidal elementary quasi-abelian categories see \cite{koren}, \cite{orenbambozzi}, \cite{bambozzi}, and \cite{kelly2016projective}. Since $\mathpzc{E}$ is monoidal elementary quasi-abelian, it is pre-Koszul. For $R\in\mathpzc{Alg}_{\mathfrak{Comm}}(Ch(\mathpzc{E}))$ let $\mathpzc{M}={}_{R}\mathpzc{Mod}$. Once more we get an equivalence of $(\infty,1)$-categories $$\textbf{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}(\mathpzc{M})\cong\textbf{Alg}_{\mathfrak{P}}(\mathpzc{M})$$ Now suppose that $R=k$ contains ${\mathbb Q}$, and consider the Koszul morphism $\kappa:\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}\rightarrow\mathfrak{Lie}$.
Let $\mathfrak{g}\in\mathpzc{Alg}_{\mathfrak{Lie}}(\mathpzc{M})$ be concentrated in negative degrees, and suppose that $|\mathfrak{g}|_{n}$ is a bornological dual nuclear Fr\'{e}chet space for each $n\in\mathbb{Z}$. We will show that $\mathfrak{g}$ is very passable, and give conditions such that it is very good. \begin{prop}\label{dnf} Let $F$ be a bornological dual nuclear Fr\'{e}chet space. \begin{enumerate} \item $F$ is finitely $K$-cotorsion. \item $F$ is reflexive. \item $F^{\vee}$ is $\aleph_{1}$-filtered relative to the class of bornological nuclear Fr\'{e}chet spaces. \item $F^{\vee}$ is nuclear. \end{enumerate} \end{prop} \begin{proof} The first claim follows from Proposition 2.12 in \cite{reconstruction}. Let $E$ be a nuclear Fr\'{e}chet space such that $F=(E^{\vee})^{b}$. By Proposition 1.13 in \cite{reconstruction} we have $$F^{\vee}=\underline{Hom}_{CBorn_{k}}((E^{\vee})^{b},k)=(\underline{Hom}_{\mathcal{T}_{c}}(E^{\vee},k))^{b}\cong E^{b}$$ Since $E$ is a nuclear Fr\'{e}chet space, $E^{b}$ is a nuclear bornological Fr\'{e}chet space. The third claim is a consequence of Corollary 3.65 in \cite{bambozzi2015stein}. Moreover, again using Proposition 1.13 in \cite{reconstruction} we once more have $F^{\vee\vee}=(E^{b})^{\vee}=(E^{\vee})^{b}=F$, using the fact that nuclear Fr\'{e}chet spaces are reflexive. \end{proof} \begin{prop} Let $\mathpzc{E}$ be an elementary exact category and let $X_{\bullet}$ be a complex in $Ch(\mathpzc{E})$ such that for sufficiently negative $n$ the map $Z_{n}X\rightarrow X_{n}$ is an admissible morphism, and such that both $B_{n}=Im(d_{n})$ and $X_{n}$ are $\mathfrak{O}$-cotorsion for all $n$. Then $X_{\bullet}$ is $\mathfrak{O}$-K-cotorsion. \end{prop} \begin{proof} We know the claim holds for bounded complexes by Proposition \ref{boundedKflat}. Since $\mathpzc{E}$ has enough projectives, projective limits in which connecting morphisms are admissible epimorphisms (Mittag-Leffler systems) are exact.
Indeed we may test exactness by passing to abelian groups using $Hom(P,-)$ for $P$ projective, and these send our projective systems to Mittag-Leffler systems. Now we write $X_{\bullet}=lim_{\rightarrow}\tau_{\ge n}X_{\bullet}$. Then $\underline{Hom}(X_{\bullet},O)\cong lim_{\leftarrow}\underline{Hom}(\tau_{\ge n}X_{\bullet},O)$. Each $\underline{Hom}(\tau_{\ge n}X_{\bullet},O)$ is acyclic by the first part. Now by the long exact sequence (\cite{Buehler} Section 12) each $Z_{n}X$ is $\mathfrak{O}$-cotorsion, and the sequence $$0\rightarrow\underline{Hom}(B_{n},O)\rightarrow\underline{Hom}(X_{n},O)\rightarrow\underline{Hom}(Z_{n}X,O)\rightarrow0$$ is short exact. Therefore the system $lim_{\leftarrow}\underline{Hom}(\tau_{\ge n}X_{\bullet},O)$ is a Mittag-Leffler system. Moreover in an elementary exact category we have $$\mathbb{R}\underline{Hom}(lim_{\rightarrow}\tau_{\ge n}X_{\bullet},O)\cong\mathbb{R}lim_{\leftarrow}\mathbb{R}\underline{Hom}(\tau_{\ge n}X_{\bullet},O)\cong lim_{\leftarrow}\underline{Hom}(\tau_{\ge n}X_{\bullet},O)$$ where the last equivalence follows from the fact that the projective system is Mittag-Leffler, and each $\tau_{\ge n}X_{\bullet}$ is finitely $\mathfrak{O}$-K-cotorsion. \end{proof} \begin{cor} Let $F_{\bullet}$ be a complex of dual nuclear Fr\'{e}chet spaces. Then $F_{\bullet}$ is finitely $K$-cotorsion and $K$-flat, and $F_{\bullet}^{\vee}$ is $K$-flat. \end{cor} \begin{prop}\label{BanachKcotors} Let $k$ be a spherically complete Banach field and let $E$ be a Banach space. Then $E$ is finitely $K$-cotorsion. \end{prop} \begin{proof} When $k$ is spherically complete it is injective as an object in the category of Banach spaces over $k$. Let $P_{\bullet}\rightarrow E$ be a projective resolution of $E$ in $Ind(Ban_{k})$. We may assume that each $P_{i}$ is a projective \textit{Banach space}. Since $k$ is an injective Banach space, we have equivalences $$Hom(E,k)\cong Hom(P_{\bullet},k)\cong \mathbb{R}Hom(E,k)$$ as required.
\end{proof} \begin{defn} A \textbf{bornological Fredholm complex} over a spherically complete Banach field $k$ is a complex $X_{\bullet}$ in $CBorn_{k}$ such that each $X_{n}$ is a topological bornological space and each map $d_{n}:X_{n}\rightarrow X_{n-1}$ has finite dimensional kernel and cokernel. \end{defn} \begin{prop} Let $k$ be spherically complete. A complex $X_{\bullet}$ is homotopically reflexive in each of the following cases. \begin{enumerate} \item $X_{\bullet}$ is a Fredholm complex which is cohomologically bounded in one direction. \item $X_{\bullet}$ is equivalent to a bounded complex of reflexive Banach spaces. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item Since the kernel and cokernel of the maps in the complex are finite dimensional, and $k$ is spherically complete, $X_{\bullet}$ is homotopy equivalent to its cohomology, which is a complex of the form $\bigoplus S^{n}(F_{n})$ for finite dimensional Banach spaces $F_{n}$. Clearly such complexes are cofibrant, have cofibrant dual, and are reflexive. \item This follows from Proposition \ref{BanachKcotors} and Proposition \ref{boundedbelowKcotors}. \end{enumerate} \end{proof} As a consequence we get the following. \begin{thm}\label{analyticopkoszul} Let $\mathfrak{g}$ be a Lie algebra in $Ch(CBorn_{k})$ concentrated in negative degrees such that each $|\mathfrak{g}_{n}|$ is a dual nuclear Fr\'{e}chet space. Then the map $|\mathfrak{g}|^{\vee\vee}\rightarrow |\textbf{D}_{\kappa}\circ\hat{\textbf{C}}_{\kappa}(\mathfrak{g})|$ is an equivalence. In particular the unit $\eta_{\mathfrak{g}}:\mathfrak{g}\rightarrow\textbf{D}_{\kappa}\circ\hat{\textbf{C}}_{\kappa}(\mathfrak{g})$ is an equivalence in the following cases. \begin{enumerate} \item $|\mathfrak{g}|$ is a Fredholm complex. \item $|\mathfrak{g}|$ is equivalent to a bounded complex of reflexive Banach spaces.
\end{enumerate} \end{thm} In future work we expect to be able to use this result to study analytic moduli spaces of instantons. \subsubsection{Non-Archimedean Banach Spaces} Suppose now that $R=k$ is a non-Archimedean Banach field. By Lemma 3.49 of \cite{bambozzi2015stein} every object in $CBorn^{nA}_{k}$ is flat. Therefore by Proposition \ref{boundedKflat} every complex in $Ch(CBorn^{nA}_{k})$ is $K$-flat, and every admissible monomorphism is pure. In this case we get the following result. \begin{thm} The bar-cobar construction induces an equivalence of $(\infty,1)$-categories $$\adj{\Omega_{\alpha}}{\textbf{coAlg}_{\mathfrak{C}}^{conil}(Ch(CBorn^{nA}_{k}))}{\textbf{Alg}_{\mathfrak{P}}(Ch(CBorn^{nA}_{k}))}{\textbf{B}_{\alpha}}$$ \end{thm} Finally we consider an exact, but not quasi-abelian, example. The wide subcategory $Ban^{nA,\le 1}_{k}\subset Ban^{nA}_{k}$ of non-Archimedean Banach spaces consists of maps with norm at most $1$. This is a closed symmetric monoidal quasi-abelian category. However there is another exact structure on this category, introduced in \cite{kelly2016projective}, called the \textit{strong exact structure}, which makes $Ban^{nA,\le 1}_{k}$ into a monoidal elementary exact category. Yet again we get the familiar equivalence of $(\infty,1)$-categories: $$\textbf{coAlg}_{\mathfrak{C}}^{|K^{f}|,\alpha-adm}(Ch(Ban^{nA,\le 1}_{k}))\cong\textbf{Alg}_{\mathfrak{P}}(Ch(Ban^{nA,\le 1}_{k}))$$ Suppose that $k$ is a spherically complete non-Archimedean Banach field containing ${\mathbb Q}$, and once more consider the Koszul morphism $\kappa:\mathfrak{S}^{c}\otimes_{H}\mathfrak{coComm}^{nu}\rightarrow\mathfrak{Lie}$. If $\mathfrak{g}\in\mathpzc{Alg}_{\mathfrak{Lie}}(Ch(Ban^{nA,\le1}_{k}))$ is concentrated in negative degrees with each $|\mathfrak{g}_{n}|$ free of finite rank and each $d_{n}$ admissible, then $\mathfrak{g}$ is very good.
Therefore the unit $\eta_{\mathfrak{g}}:\mathfrak{g}\rightarrow\textbf{D}_{\kappa}\circ\hat{\textbf{C}}_{\kappa}(\mathfrak{g})$ is an equivalence. Suppose that $\mathfrak{g}_{-1}$ is $n$-dimensional. The underlying space of $(B_{\kappa}(\mathfrak{g})[-1])_{0}$ is the subspace of the space of formal power series $k[[t_{1},\ldots,t_{n}]]$ consisting of power series $\sum_{I\in\mathbb{N}_{0}^{n}}a_{I}t^{I}$ with the condition $|a_{I}|\rightarrow 0$ as $|I|\rightarrow\infty$. This is a Banach space with norm $||\sum_{I\in\mathbb{N}_{0}^{n}}a_{I}t^{I}||=max_{I}|a_{I}|$. The coproduct is uniquely determined by $t_{i}\mapsto t_{i}\otimes 1+1\otimes t_{i}$. \subsection{Further Applications} In this final section we suggest further directions which we intend to pursue. \subsubsection{Non-symmetric Koszul Duality} Let $(\mathpzc{E},\otimes,k)$ be a non-symmetric monoidal additive category. All of the definitions of the previous sections have obvious non-symmetric analogues. Moreover by inspecting the proofs of our results so far, it is clear that the symmetric structure does not play a crucial role in any of them. Therefore analogous results should hold in the non-symmetric setting. \subsubsection{Coloured and Curved Koszul Duality} There are numerous generalisations of Koszul duality over fields. \textit{Curved Koszul duality} \cite{hirsh2012curved} is a version which works for (co)operads which are not necessarily (co)augmented. Coloured (co)operads encode algebraic structures which involve multiple objects, such as (co)operads and Lie-Rinehart algebras. \textit{Coloured Koszul duality}, also discussed in \cite{hirsh2012curved}, relates coalgebras over coloured cooperads and algebras over coloured operads. We expect that both of these stories generalise to exact categories.
\subsubsection{Chiral Koszul Duality} Chiral Koszul duality, established in the algebraic case by Francis and Gaitsgory in their seminal paper \cite{francis2012chiral}, gives an equivalence between factorisation algebras and chiral algebras on a complex algebraic variety. A major aspect of their work is that it generalises the notion of a vertex algebra and the chiral algebras of Beilinson and Drinfeld \cite{beilinson2004chiral} to higher dimensions. H\^{o} has also established some connective versions of chiral Koszul duality \cite{ho2016atiyah}. We expect to be able to give a proof of their results using our formalism, which would also generalise to other rings and to the analytic/bornological setting. \bibliographystyle{amsalpha}
\section{Related Work} \label{rw} In this section, we provide a brief review of the related work. \subsection{Learning in Dynamic Environments} One of the popular classic statistical models for handling dynamic time series-based streams is the autoregressive moving average (ARMA). However, ARMA, its integrated version ARIMA~\citep{ARIMA:05}, and other improved versions~\citep{TS:94,IM:16} rely heavily on the assumption that the error terms are i.i.d. drawn from a normal distribution with zero mean. To adapt to a nonstationary environment, a state-of-the-art approach~\citep{TS:15} learns sample weights according to training errors with the prior knowledge that fresh samples deserve larger weights than older ones. However, this approach cannot be applied to an online setting because it needs to calculate each new sample's weight by accessing the entire database. For online optimization methods, regret theory~\citep{RG:16} for measuring performance has been extensively studied. The dynamic regret~\citep{OSG:03} and its restricted form~\citep{DR:15} have been introduced to manage changing environments. A basic idea behind such regrets is to compare the learned expert's cumulative loss with that of a sequence of experts rather than the single best one. Along this line of study, adaptive learning for dynamic regret (Ader)~\citep{DR:18} considers multiple experts with various learning rates updated by online gradient descent (OGD)~\citep{OSG:03}, and the established upper bound matches the lower bound. Another independent line of work on dynamic regret in a nonstationary environment concerns the multi-armed bandit (MAB)~\citep{DR:15}, where the work in~\citep{DMAB:19} reveals how the statistical variance of the loss distributions affects the dynamic regret bound. However, these dynamic regrets depend on the number of distribution changes, which is usually unknown. When the sequence of samples is very long, the data distribution may have changed many times.
As a result, the loose bound cannot measure the learned expert's performance in the current interval. Another limitation is that the bound is inappropriate for analyzing experts learned on the fly because these regrets only act on observed samples. \subsection{Learning Theory for Data Streams} For non-i.i.d. processes, under the stationary and $\beta$-mixing assumptions, the early work~\citep{MVC:94} establishes a convergence rate in terms of the VC-dimension, and the work in~\citet{RM:08} presents data-dependent bounds in terms of the Rademacher complexity. By exploiting a specific learning algorithm's stability properties, generalization bounds for $\phi$-mixing and $\beta$-mixing sequences are provided in~\citet{PB:10}. However, the mixing assumption is hard to verify in practice. There are some attempts to relax the stationary and mixing assumptions. The uniform convergence under ergodicity sampling is shown in the work of~\citet{SES:10}. For an asymptotically stationary (mixing) process, a generalization error bound is derived in~\citet{DS:13} through the regret of an online algorithm, but their analysis depends on the assumption that the output of the online learning algorithm tends to be stable, which is invalid in a dynamic environment. In~\citet{GNS:14}, the guarantee of the learning rate for nonstationary mixing processes is given by a sub-sample selection technique with the Rademacher complexity. Further, the convergence rate for sequential samples without mixing or ergodicity assumptions is established in~\citet{SR:15} by applying the sequential Rademacher complexity, which can be bounded by a Dudley's integral in terms of the covering number~\citep{CN:02} through the chaining technique. In~\citet{TS:15}, a more general scenario of nonstationary and non-mixing processes is considered, and learning guarantees are proved under conditions on the discrepancies between distributions.
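To make the online-optimization baseline discussed above concrete, the following is a minimal, self-contained sketch of Zinkevich-style projected online gradient descent (OGD) on the squared loss. The toy stream, step size, and domain radius below are illustrative assumptions of ours, not taken from any of the cited papers.

```python
import numpy as np

def l2_project(w, R):
    """Project w onto the Euclidean ball of radius R (the bounded domain W)."""
    n = np.linalg.norm(w)
    return w if n <= R else (R / n) * w

def ogd(stream, w0, eta, R):
    """Projected OGD on the squared loss l(<w,x>, y) = (<w,x> - y)^2.

    `stream` yields (x, y) pairs; returns the per-round losses and the final w.
    """
    w = np.asarray(w0, dtype=float)
    losses = []
    for x, y in stream:
        pred = w @ x
        losses.append((pred - y) ** 2)      # suffer the loss, then update
        grad = 2.0 * (pred - y) * x         # gradient of the squared loss at w
        w = l2_project(w - eta * grad, R)   # gradient step followed by projection
    return np.array(losses), w

# Deterministic toy i.i.d. stream from a fixed linear model (one stationary interval).
rng = np.random.default_rng(0)
w_star = np.array([1.0, -0.5])
xs = rng.normal(size=(200, 2))
ys = xs @ w_star
losses, w_final = ogd(zip(xs, ys), w0=np.zeros(2), eta=0.05, R=5.0)
```

On such a stationary segment the average loss decays as the learner converges; in the dynamic-regret setting the comparator itself drifts, which is why methods such as Ader maintain multiple OGD experts with different learning rates instead of a single one.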
\section{Our CO$_2$ Method} \label{alg} After introducing several common assumptions, we present the proposed CO$_2$ method in detail. Inspired by the adaptive online learning algorithm MetaGrad~\citep{MG:16}, CO$_2$ maintains several offline experts for the corresponding offline intervals and an online expert for the current online interval, and then integrates all of these experts by a meta-expert. \subsection{Assumptions} \label{alg:1} We assume there are $G$ intervals: $G - 1$ offline intervals and an online interval. \begin{assumption} \label{as:1} Let $\mathcal{D}_g$ be the data distribution in the $g^{\text{th}}$ interval and $\mathcal{D}_{\mathcal{U}} = \bigcup_{g = 1}^G \mathcal{D}_g$ be the mixture distribution. The distributions in a multi-distributional data stream are nonstationary, i.e., \begin{align*} \mathcal{D}_g \neq \mathcal{D}_{g'}, \forall g,g' \in [G], g \neq g'. \end{align*} \end{assumption} \begin{assumption} \label{as:3} Every input sample $\mathbf{x}$ with label $y$, drawn i.i.d. from the distribution $\mathcal{D}_{G}$ of the online interval, has its norm in the Hilbert space upper bounded by a constant $D$: \begin{align*} \Vert \mathbf{x}\Vert \leq D, \forall (\mathbf{x},y) \thicksim \mathcal{D}_{G}. \end{align*} The eigendecomposition of the Hilbert-Schmidt operator is \begin{align*} \mathbb{E}_{(\mathbf{x},y) \thicksim \mathcal{D}_{G}}[\mathbf{x} \mathbf{x}^T] = \sum_{i = 1}^\infty \lambda_i \mathbf{u}_i\mathbf{u}_i^T \end{align*} where $(\mathbf{u}_i)_{i = 1}^\infty$ forms an orthonormal basis of the Hilbert space and $(\lambda_i)_{i = 1}^\infty$ corresponds to the eigenvalues in non-increasing order.
\end{assumption} \begin{assumption} \label{as:2} For any sample $(\mathbf{x},y) \thicksim \mathcal{D}_{\mathcal{U}}$, the hypothesis class is \begin{align*} \mathcal{H} \triangleq \{h: \mathbf{x} \mapsto \left< \mathbf{w}, \mathbf{x}\right> \, | \, \mathbf{w} \in \mathcal{W} , \Vert \mathbf{w}\Vert \leq R\} \end{align*} where the domain $\mathcal{W}$ bounded by $R$ is a convex subspace of a Hilbert space. \end{assumption} \begin{assumption} \label{as:4} For any sample $(\mathbf{x},y) \thicksim \mathcal{D}_{\mathcal{U}}$, the loss function family $\mathcal{L}$ with the hypothesis class $\mathcal{H}$ is bounded in the interval $[0,1]$: \begin{align*} \mathcal{L} \triangleq \{ (\mathbf{x},y) \mapsto l(h(\mathbf{x}),y ) \, | \, h \in \mathcal{H}, l(h(\mathbf{x}),y ) \in [0,1] \}. \end{align*} \end{assumption} \begin{assumption} \label{as:5} For any $(\mathbf{x},y) \thicksim \mathcal{D}_{\mathcal{U}}$ and all $\mathbf{w},\mathbf{w}' \in \mathcal{W}$, $l(\left<\cdot,\mathbf{x} \right>,y ) $ is convex and $\beta$-smooth over the domain $\mathcal{W}$:\\ \begin{equation*} \begin{aligned} \left\Vert \nabla l(\left<\mathbf{w},\mathbf{x} \right>,y ) - \nabla l(\left<\mathbf{w}',\mathbf{x} \right>,y )\right\Vert \leq \beta \left\Vert \mathbf{w} - \mathbf{w}' \right\Vert.\\ \end{aligned} \end{equation*} \end{assumption} \begin{remark} {\rm Although the whole data stream is dynamic, we make an i.i.d. data assumption in Assumption~\ref{as:3} for the online interval based on the assumption taken in Section~\ref{intro} that data distributions do not change drastically in practice. Assumption~\ref{as:4} is mild since we can ensure the loss function $l$ is nonnegative by adding a large constant. 
We assume the interval of the loss function is $[0,1]$ for convenience without loss of generality.} \end{remark} \begin{remark} {\rm Because the loss function $l$ is nonnegative as well as $\beta$-smooth, according to the self-bounding property~\citep{SM:10} of smooth functions and Assumption~\ref{as:4}, we obtain the following upper bound on the norm of the gradients of $l(\left<\cdot,\mathbf{x} \right>,y )$ for any $(\mathbf{x},y) \thicksim \mathcal{D}_{\mathcal{U}}$ and all $\mathbf{w} \in \mathcal{W}$: \begin{equation} \begin{aligned} \label{eq:func} \Vert \nabla l(\left<\mathbf{w},\mathbf{x} \right>,y ) \Vert \leq \sqrt{4 \beta \cdot l(\left<\mathbf{w},\mathbf{x} \right>,y )} \leq 2\sqrt{\beta}. \end{aligned} \end{equation} } \end{remark} \subsection{The CO$_2$ Working Mechanism} \begin{figure*} \centering \begin{adjustbox}{width=.8\textwidth,center} \pgfmathdeclarefunction{gauss}{2}{% \pgfmathparse{1/(#2*sqrt(2*pi))*exp(-((x-#1)^2)/(2*#2^2))}% } \begin{tikzpicture} [ scale = 0.5, node distance=1.9cm, MetaExpert/.style = {circle, draw=black, dash pattern={on 6pt off 3pt},align=center, anchor=east, inner sep=0, right color=green!30,left color=red!30,shading angle=-45,minimum size=0.9cm,anchor=base,font=\small}, OfflineExpert/.style = {circle, draw=black, densely dotted,align=center, anchor=east, inner sep=0, fill=red!30,minimum size=0.9cm,anchor=base,font=\small}, OnlineExpert/.style = {circle, draw=black, solid,align=center, anchor=east, inner sep=0, fill=green!30,minimum size=0.9cm,anchor=base,font=\small}, Sample/.style = {rectangle, draw, solid, align=center, anchor=east, inner sep=0, fill=yellow!30,minimum size=0.4cm,anchor=base,outer sep = 0,rounded corners=0,font=\small, outer sep=0}, ExpertSet/.style = {matrix,fill=cyan!20,draw=black,loosely dotted,align=center, outer sep = 0,inner sep = 2,rounded corners=.8ex, column sep=0.1cm, row sep=0.1cm}, OfflineInterval/.style = {matrix,fill=red!20,draw=black,densely dotted,align=center, outer sep = 0,inner sep = 
2.5,rounded corners=.8ex, column sep=0cm}, OnlineInterval/.style = {matrix,fill=green!20,draw=black,align=center, outer sep = 0,inner sep = 2.5,rounded corners=.8ex}, ] \node[OnlineInterval] (I1) { \node[Sample] {$\mathbf{x}_1$}; & \node {$\cdots$} ; & \node[Sample](RS) {$\mathbf{x}_t$}; & \node {$\quad$} ; & \node {$\quad$} ;\\}; \node[OfflineInterval, left of =I1,xshift = -1cm] (Ih1) { \node[Sample] {}; & \node {$\cdots$} ; & \node[Sample] {}; & \node {$\cdots$} ; & \node[Sample] {};\\}; \node[outer sep = 0,left of =Ih1](Ih2) {$\cdots$} ; \node[OfflineInterval, left of =Ih2] (Ih3) { \node[Sample] {}; & \node {$\cdots$} ; & \node[Sample] {}; & \node {$\cdots$} ; & \node[Sample] {};\\}; \node[OfflineInterval, below of=I1, yshift=0.7cm] (I2) { \node[Sample] {$\mathbf{x}_1$}; & \node {$\cdots$} ; & \node[Sample] {$\mathbf{x}_t$}; & \node {$\cdots$} ; & \node[Sample] {$\mathbf{x}_B$}; \\}; \node[OnlineExpert, above right of=I1,yshift=0.5cm](WtK) {$\mathbf{w}_t^K$}; \node[anchor=north,inner sep=.1cm,align=center,fill=white,right of=WtK] (How1) {CO$_2$}; \node[MetaExpert, below of=How1, yshift=0.05cm](Wt) {$\mathbf{w}_t$}; \node[OnlineExpert, below right of=I2, yshift=-0.5cm](WBK) {$\mathbf{w}_B^K$}; \node[anchor=north,inner sep=.1cm,align=center,fill=white,right of=WBK] (How2) {$L_\mathcal{\widetilde{S}}^{\gamma}(\mathbf{w})$}; \node[OfflineExpert, above of=How2](WK) {$\mathbf{w}^K$}; \node[ExpertSet, right of=WK, xshift = 1cm] (ES){ \node[OfflineExpert](W1) {$\mathbf{w}^1$}; \\ \node[OfflineExpert](W2) {$\mathbf{w}^2$}; \\ \node[outer sep = 3, rotate = 90](WS) {$\cdots$} ; \\ \node[OfflineExpert](WK1) {$\mathbf{w}^{K-1}$};\\}; \draw[->,draw=black] (Ih1.east) -- (I1.west); \draw[->,draw=black] (Ih2.east) -- (Ih1.west); \draw[->,draw=black] (Ih3.east) -- (Ih2.west); \draw[->,draw=black,double,dashed] (I1.south) -- node [anchor=east] {Turn to} (I2.north); \draw[->,draw=black] (I1.north) -- node[fill=white]{Update} (WtK); \draw[->,draw=black] (WtK) -- (How1); 
\draw[->,draw=black] (ES) -- (How1.east); \draw[->,draw=black] (How1) -- node[fill=white]{Integrate} (Wt); \draw[->,draw=black] (Wt) -- node [near start, anchor=south] {Predict} (RS); \draw[->,draw=black] (I2.south) -- node[fill=white]{Update} (WBK); \draw[->,draw=black] (WBK) -- (How2); \draw[->,draw=black] (ES) -- (How2.east); \draw[->,draw=black] (How2) -- node[fill=white]{Minimize} (WK); \draw[->,draw=black] (WK) -- node [anchor=south,fill=white] {Refine} (ES); \matrix [draw=black,anchor=west,below of=Ih3, xshift = 0.6cm, yshift = -0.1cm,font=\footnotesize,minimum size=0.3cm, outer sep = 0, inner sep = 1.5] { \node[Sample,anchor=west, minimum size=0.3cm] {}; & \node[anchor=west] {: Sample}; \\ \node[Sample,densely dotted,fill=red!20,rounded corners=.3ex,anchor=west, minimum size=0.3cm] {\qquad \quad}; & \node[anchor=west] {: Offline Interval}; \\ \node[Sample,fill=green!20,rounded corners=.3ex,anchor=west, minimum size=0.3cm] {\qquad \quad}; & \node[anchor=west] {: Online Interval}; \\ \node[OfflineExpert, anchor=west, minimum size=0.3cm] {} ; & \node[anchor=west] {: Offline Expert}; \\ \node[OnlineExpert, anchor=west, minimum size=0.3cm] {} ; & \node[anchor=west] {: Online Expert}; \\ \node[MetaExpert, anchor=west, dash pattern={on 2pt off 1pt},minimum size=0.3cm] {} ; & \node[anchor=west] {: meta-expert}; \\ \node[Sample,fill=cyan!20,loosely dotted,rounded corners=.3ex,anchor=west, minimum size=0.3cm] {\qquad \quad} ; & \node[anchor=west] {: Offline Expert Set}; \\ }; \begin{axis} [ xticklabels={}, yticklabels={}, x=0.2cm, y = 1cm, line width=0.5mm, no markers, samples=50,smooth, axis x line*=bottom, axis y line*=left, enlargelimits=upper, yshift = -0.75cm, xshift = -13.7cm, above of=Ih3, anchor=east] \addplot {gauss(-2,0.5)}; \end{axis} \begin{axis} [ xticklabels={}, yticklabels={}, x=0.2cm, y = 1cm, line width=0.5mm, no markers, samples=50,smooth, axis x line*=bottom, axis y line*=left, enlargelimits=upper, yshift = -0.9cm, xshift = -6.1cm, above of=Ih1, 
anchor=east] \addplot {gauss(0,0.75)}; \end{axis} \begin{axis} [ xticklabels={}, yticklabels={}, x=0.2cm, y = 1cm, line width=0.5mm, no markers, samples=50,smooth, axis x line*=bottom, axis y line*=left, enlargelimits=upper, yshift = -0.95cm, xshift = -0.3cm, above of=I1, anchor=east] \addplot {gauss(2,1)}; \end{axis} \end{tikzpicture} \end{adjustbox} \caption{The working process of the CO$_2$ method on a multi-distribution data stream.} \label{fg:fw} \end{figure*} Figure~\ref{fg:fw} illustrates the working process of CO$_2$ for dynamically learning multi-distributional data streams. There are $G - 1$ offline intervals and one online interval; each offline interval contains $B$ samples, and the online interval contains only $T~(T \in [B])$ samples. We have assumed that the distribution changes gradually, so the samples in each interval can be regarded as approximately drawn from a single distribution. Therefore, it is reasonable to set the maximal sample size $B$ as a hyperparameter even if the time between distribution changes is not constant and is usually unknown. Recall that the online interval becomes offline and a new out-of-distribution online interval emerges once $T = B$, so the number of intervals $G$ increases as new samples arrive. Accordingly, the total number of observed labeled samples is $B \cdot (G - 1) + T$. Let $K_{\text{max}}$ be a hyperparameter denoting the maximal number of experts maintained in CO$_2$. The actual number $K$ cannot be larger than the number of existing intervals because each offline interval generates only one offline expert; hence \begin{align*} K = \begin{cases} K_{\text{max}}, & \text{if $G \geq K_{\text{max}}$} \\ G, & \text{if $G < K_{\text{max}}$} \end{cases}. \end{align*} We cannot maintain all $G$ experts in CO$_2$ because its regret bound increases with $K$ (cf. Theorem~\ref{th:rg}). 
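The interval and expert bookkeeping above can be sketched in a few lines; the function names are illustrative, not from the paper:

```python
def num_experts(G, K_max):
    """Number of maintained experts: K = K_max if G >= K_max, else G."""
    return min(G, K_max)

def num_labeled_samples(G, B, T):
    """Total observed labeled samples: B * (G - 1) + T, with T in [1, B]."""
    assert 1 <= T <= B, "the online interval holds at most B samples"
    return B * (G - 1) + T
```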
\textit{In the interest of brevity, an expert and its corresponding advice are both denoted by its parameters $\mathbf{w}$.} Assume the $k^{\text{th}}~(k \in [K - 1])$ offline expert is $\mathbf{w}^k$ and the online expert is $\mathbf{w}_t^K$. We can simply assign the $k^{\text{th}}$ offline expert for the $t^{\text{th}}~(t \in [T])$ labeled sample in the online interval by $\mathbf{w}^k_t = \mathbf{w}^k$ because these offline experts, learned from previous offline intervals, are fixed during the online interval. For the $t^{\text{th}}$ labeled sample in the online interval, CO$_2$ first generates $K$ experts \begin{align*} \{\underbrace{\mathbf{w}^1_t, \ldots, \mathbf{w}^{K - 1}_t}_{\text{Offline Expert}}, \underbrace{\mathbf{w}^K_t}_{\text{Online Expert}} \} \end{align*} and then outputs an integrated expert $\mathbf{w}_t$. When $T = B$, the online interval becomes offline, and a new online interval appears. We generate the new offline expert $\mathbf{w}^K$ for the just-passed complete online interval and, if $G \geq K_{\text{max}}$, refresh the $K - 1$ offline experts according to their priorities. We will discuss how to set priorities in Section~\ref{pri}. The online expert trained on the fly for the new online interval can either reinitialize its parameters or inherit $\mathbf{w}^K_B$. Also, the prediction for each unlabeled sample is made based on the latest $\mathbf{w}_t$; such predictions are not counted, so they do not increase $t$. According to the definition of (agnostic) PAC learnability and the multi-distributional data stream setting, our concern is the generalization error of the output hypothesis from CO$_2$ with respect to the current distribution $\mathcal{D}_G$. We replace $\mathcal{D}_G$ with $\mathcal{D}$ and $l(\left<\mathbf{w},\mathbf{x}_t \right>,y_t )$ with $f_t(\mathbf{w})$ for brevity, where $f_t(\mathbf{w})$ can be regarded as the loss of the expert $\mathbf{w}$ for sample $(\mathbf{x}_t, y_t)$. 
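The per-sample procedure just described can be sketched with NumPy. The squared loss, the ball-shaped domain, and all function names are illustrative assumptions; the initialization, exponential weighting, and OGD rules follow Algorithm~\ref{alg:algorithm} and Eqs.~(\ref{eq:in}) and~(\ref{eq:update:a}) below:

```python
import numpy as np

def init_weights(K):
    # Initial weights alpha_1^k = (K+1) / ((K+1-k)(K+2-k)K); they sum to 1
    # and increase with k, so the online expert (k = K) starts heaviest.
    k = np.arange(1, K + 1)
    return (K + 1) / ((K + 1 - k) * (K + 2 - k) * K)

def project(w, R):
    # Projection onto a ball of radius R (the domain W is assumed a ball here).
    n = np.linalg.norm(w)
    return w if n <= R else w * (R / n)

def co2_round(offline, w_online, alpha, x, y, nu, eta, R):
    """One round: integrate the K experts, update weights, take an OGD step."""
    experts = offline + [w_online]                    # K expert advices
    w_t = sum(a * w for a, w in zip(alpha, experts))  # meta-expert output
    losses = np.array([(w @ x - y) ** 2 for w in experts])  # squared loss (assumed)
    alpha = alpha * np.exp(-nu * losses)              # exponential weighting
    alpha = alpha / alpha.sum()
    grad = 2 * (w_online @ x - y) * x                 # gradient for the online expert
    w_online = project(w_online - eta * grad, R)      # projected OGD step
    return w_t, w_online, alpha
```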
In the $G^{\text{th}}$ interval, we would like to learn an expert $\mathbf{w} \in \mathcal{W}$ with a small population risk with respect to the nonnegative loss function $l$ \begin{align} \label{PR} L_\mathcal{D}(\mathbf{w}) = \mathbb{E}_{(\mathbf{x},y) \thicksim \mathcal{D} } [l(\left<\mathbf{w},\mathbf{x} \right>,y )] \end{align} by minimizing the corresponding empirical risk using the proposed method: \begin{align} \label{ER} L_\mathcal{S}(\mathbf{w}) = \frac{1}{T} \sum_{t = 1}^T l(\left<\mathbf{w},\mathbf{x}_t \right>,y_t ) = \frac{1}{T} \sum_{t = 1}^T f_t(\mathbf{w}) \end{align} where $\mathcal{S} = \{ (\mathbf{x}_1, y_1),\ldots,(\mathbf{x}_T, y_T)\}$ is the data set consisting of the $T~(T \in [B])$ samples in the online interval, and we use $L_{\mathcal{\widetilde{S}}}(\mathbf{w})$ to denote the specific case when $T = B$. Let $\mathbf{w}^* \in \arg \min_{\mathbf{w} \in \mathcal{W}} L_\mathcal{D}(\mathbf{w})$ be an optimal solution to (\ref{PR}) and $\widehat{\mathbf{w}} \in \arg \min_{\mathbf{w} \in \mathcal{W}} L_\mathcal{S}(\mathbf{w})$ be an empirical minimizer of (\ref{ER}). Accordingly, $\widehat{\mathbf{w}}$ is the best fixed expert so far for the online interval when there are $T$ observed labeled samples. \begin{algorithm} \caption{CO$_2$} \label{alg:algorithm} \begin{algorithmic}[1] \STATE {\textbf{Input:} step size $\nu$, \\ \quad \quad \quad online expert $\mathbf{w}_1^{K}$ \\ \quad \quad \quad offline expert set $\{\mathbf{w}^1, \ldots, \mathbf{w}^{K - 1}\}$} \STATE Initialize $\alpha^1_1 < \alpha^2_1 < \cdots < \alpha^K_1$ according to Eq. 
(\ref{eq:in}) \FOR{t = 1,\ldots,T} \STATE Receive online expert $\mathbf{w}^K_t$ \STATE Assign offline expert $\mathbf{w}^k_t = \mathbf{w}^k, \forall k \in [K - 1]$ \STATE Output weighted average: $\mathbf{w}_t = \sum_{k = 1}^{K} \alpha_t^k \mathbf{w}^k_t$ \STATE Receive the loss function $f_t(\cdot)$ \STATE Update expert weights:\\ $\alpha^k_{t+1} = \frac{\alpha^k_t e^{-\nu f_t(\mathbf{w}^{k}_t)}}{\sum_{k' = 1}^{K} \alpha^{k'}_t e^{-\nu f_t(\mathbf{w}^{k'}_t)}}, \forall k \in [K]$ \STATE Send gradient $\nabla f_t(\mathbf{w}^{K}_t)$ to the online expert \ENDFOR \end{algorithmic} \end{algorithm} \subsubsection{Meta-expert} The meta-expert adjusts its strategy of integrating the $K$ experts according to the losses they receive on labeled samples, so the meta-expert adapts to the out-of-distribution samples in the online interval. For the online interval, we track the best expert~\citep{TB:95} based on the exponentially weighted average forecaster~\citep{PLG:06} by assigning a large weight to an expert with a small cumulative loss, and vice versa. Accordingly, the CO$_2$ method is summarized in Algorithm~\ref{alg:algorithm}. For each sample, the meta-expert adjusts all experts' weights according to their performance on the newest labeled sample. At iteration $t$ in the online interval, the meta-expert outputs a weighted average solution \begin{equation} \label{eq:wf} \mathbf{w}_t = \sum_{k = 1}^{K - 1} \alpha_t^k \mathbf{w}^k_t + \alpha_t^K \mathbf{w}^K_t = \sum_{k = 1}^{K} \alpha_t^k \mathbf{w}^k_t \end{equation} where $\alpha_t^k$ is the weight of the $k^{\text{th}}$ expert $\mathbf{w}^k_t$. To obtain a compact regret bound, to ensure that $\sum_{k = 1}^K \alpha_1^k = 1$, and to assign experts different weights according to their priorities, $\alpha_1^k$ is initialized, following the proof of Theorem~\ref{th:rg}, as \begin{equation} \begin{aligned} \label{eq:in} \alpha_1^k = \frac{K + 1}{(K + 1 -k)(K + 2 -k)K}. 
\end{aligned} \end{equation} Note that it is unnecessary to project $\mathbf{w}_t$ onto the domain $\mathcal{W}$: each expert satisfies $\mathbf{w}^k_t \in \mathcal{W}~(k \in [K])$ and the weighting in Eq.~(\ref{eq:wf}) is a convex combination, so the weighted average $\mathbf{w}_t$ remains in the convex domain $\mathcal{W}$. After obtaining the loss at iteration $t$, the $K$ weights are updated according to the exponential weighting scheme \begin{equation} \label{eq:update:a} \alpha^k_{t+1} = \frac{\alpha^k_t e^{-\nu f_t(\mathbf{w}^{k}_t)}}{\sum_{k' = 1}^{K} \alpha^{k'}_t e^{-\nu f_t(\mathbf{w}^{k'}_t)}} \end{equation} where $\nu = 4\sqrt{\frac{\ln K}{T}}$ is the step size, whose derivation can be found in the proof of Theorem~\ref{th:rg}. \subsubsection{Offline Expert} \label{pri} At the early stage, the labeled samples are too few to train a satisfactory online expert. To compensate for this disadvantage by exploiting knowledge from previous offline intervals, we extract knowledge by learning an offline expert for each online interval once all of its samples are available. Once the online expert has passed through an interval, that interval is coupled with its online expert and the previous offline experts, some of which may have been learned from similar distributions. Thus, we adaptively transfer their knowledge to the interval's offline expert. For each online interval, we calculate a new offline expert $\mathbf{w}^{K}$ once $T = B$ by taking advantage of the prior knowledge of the $K-1$ offline experts $\mathbf{w}^1_B, \ldots, \mathbf{w}^{K - 1}_B$ and the online expert $\mathbf{w}^K_B$. According to the strategy of the meta-expert, the expert performing best in this interval has the largest weight; therefore, the new offline expert $\mathbf{w}^{K}$ should be close to the best expert. 
Accordingly, we use the regularization term $\Omega(\mathbf{w}) = \left\Vert\mathbf{w} - \sum_{k = 1}^K \alpha_B^k \mathbf{w}^k_B \right\Vert_2^2$ to constrain the search space of $\mathbf{w}^{K}$, and we obtain $\mathbf{w}^{K}$ by \begin{equation} \begin{aligned} \label{eq:off} \mathbf{w}^K & = \arg \min_{\mathbf{w} \in \mathcal{W}} \frac{1}{B} \sum_{t = 1}^B f_t(\mathbf{w}) + \frac{\gamma}{2} \Omega(\mathbf{w}) = \arg \min_{\mathbf{w} \in \mathcal{W}} L_\mathcal{\widetilde{S}}^{\gamma}(\mathbf{w}) \end{aligned} \end{equation} where $\gamma$ is a hyperparameter controlling the effect of the prior knowledge and $T$ is set to $B$ in $\mathcal{\widetilde{S}}$. This adaptively weights the $K$ experts through $\sum_{k = 1}^K \alpha_B^k \mathbf{w}^k_B$ in $\Omega(\mathbf{w})$, because the gap $\Vert \mathbf{w}^K - \mathbf{w}^*\Vert$ is bounded by the weighted loss of the $K$ experts on the data set $\mathcal{\widetilde{S}}$, i.e., $\sum_{k = 1}^K \alpha_B^k L_\mathcal{\widetilde{S}}(\mathbf{w}_B^k)$, as proved in Theorem~\ref{th:hy2}. Note that the regularization $\Omega(\mathbf{w})$ and the hyperparameter $\gamma$ vary for training different offline experts since the regularization is related to the maintained $K$ experts ($K - 1$ offline experts and one online expert). For example, $\left( \Omega^g(\mathbf{w}), \gamma^g \right) (g \in [G])$ transfers the knowledge of the $K$ experts available at the end of the $g^{\text{th}}$ online interval. Taking the newest complete online interval as an example, we omit the superscript and use $\left( \Omega(\mathbf{w}), \gamma \right)$ to denote $\left( \Omega^G(\mathbf{w}), \gamma^G \right)$ for simplicity. The hyperparameter $\gamma$ relates to the performance of these $K$ experts on the complete online interval, that is, $\{ L_\mathcal{\widetilde{S}}(\mathbf{w}_B^1), \ldots, L_\mathcal{\widetilde{S}}(\mathbf{w}_B^K) \}$. 
Smaller loss values imply these experts perform better in this interval, and we thus set a larger value for $\gamma$ to utilize more prior knowledge. We set $\gamma \geq \left( \sum_{k = 1}^K \alpha_B^k L_\mathcal{\widetilde{S}}(\mathbf{w}_B^k) \right) / (4 R^2) $ to adapt to the performance of these experts. We will explain how to obtain the lower bound of the hyperparameter and analyze the benefit of the regularization term $\Omega(\mathbf{w})$ in Lemma~\ref{le:hy1} and Theorem~\ref{th:hy2}, respectively. After receiving the new offline expert $\mathbf{w}^K$, we first set priorities for all $K$ offline experts because their potential abilities for the next new online interval differ. Then we select $K - 1$ offline experts by eliminating the expert with the lowest priority and initialize the weights of the selected $K-1$ offline experts in the meta-expert according to their priorities, as shown in Eq.~(\ref{eq:in}). However, the mechanism of setting priorities does not affect our theoretical results for the CO$_2$ method. Thus, we do not dig into this problem in this paper but instead provide two simple mechanisms as references\footnote{Designing more specific mechanisms adaptively to establish sharper convergence rates is our future work.}. One naive solution is to maintain an expert queue in which the earliest-enqueued expert has the lowest priority. Accordingly, the latest offline expert is enqueued, and the oldest offline expert in the queue is removed. According to the multi-distribution data stream's assumptions, the distributions do not change drastically, and the information in one interval may still be valuable for the next. Therefore, another solution is to give the newest offline expert the highest priority and assign the previous $K-1$ offline experts' priorities according to their weights. 
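The two reference priority mechanisms can be sketched as follows; the function names and list-based representation are illustrative assumptions, not part of the paper:

```python
from collections import deque

def refresh_fifo(offline_experts, new_expert, K_max):
    """Naive queue: the oldest offline expert has the lowest priority and is
    dropped; the newest offline expert is enqueued."""
    q = deque(offline_experts, maxlen=K_max - 1)  # keep at most K_max - 1 experts
    q.append(new_expert)
    return list(q)

def refresh_by_weight(offline_experts, weights, new_expert, K_max):
    """Weight-based: the newest expert gets the highest priority; the previous
    experts are ranked by their final meta-expert weights, and the expert with
    the lowest weight is dropped."""
    ranked = sorted(zip(weights, offline_experts), key=lambda p: p[0], reverse=True)
    kept = [e for _, e in ranked[:K_max - 2]]     # drop the lowest-priority expert
    return kept + [new_expert]
```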
\subsubsection{Online Expert} To quickly adapt to changes of the data distribution, obtain knowledge of the online interval, and transfer it to the new offline expert once the online interval is complete, an online expert is learned for the online interval. As discussed above, to train an online expert for a new online interval, we can reinitialize its parameters randomly or inherit its solution from the just-passed complete online interval as a warm start. Recall that we can train the online expert with any off-the-shelf online optimization method on the fly. In this paper, we use the standard OGD~\citep{OSG:03} method as an instance because it is among the most common online optimization methods. On the online interval, the online expert submits its advice $\mathbf{w}^{K}_t$ to the meta-expert and receives the gradient $\nabla f_t(\mathbf{w}^{K}_t)$ to update its parameters: \begin{equation*} \mathbf{w}^{K}_{t + 1} = \Pi_{\mathcal{W}} [ \mathbf{w}^{K}_t - \eta_t \nabla f_t(\mathbf{w}^{K}_t) ] \end{equation*} where $\eta_t = \frac{D}{\sqrt{\beta t}}$ is the step size explained in the proof of Theorem~\ref{th:rg}, and $\Pi_{\mathcal{W}}$ is the projection operator onto the domain $\mathcal{W}$. \section{Main Results} \label{theory} In this section, we provide theoretical guarantees for CO$_2$ that match our expectations. Specifically, we analyze the properties of the regularization term $\Omega(\mathbf{w})$ and provide the regret and the generalization error of the output hypothesis. To exploit the convexity, smoothness, and nonnegativity of the loss function, the hypothesis class, the data distribution, and the regret, we involve the data-independent excess risk of $\widehat{\mathbf{w}}$, the Rademacher complexity of the hypothesis class $\mathcal{H}$ w.r.t. $\mathcal{D}$, and the regret to imply the generalization bound. 
\subsection{Regularization} The hyperparameter $\gamma$ for $\Omega(\mathbf{w})$ should be assigned a sufficiently large value to ensure the validity of the regularization. Otherwise, in an offline expert's objective function, the attention paid to this term is too little to transfer knowledge. We derive the lower bound of this hyperparameter by first deriving the upper bound of the regularization term. \begin{lemma} \label{le:hy1} The upper bound of the regularization term $\Omega(\mathbf{w})$ is $4R^2$. By the strong convexity of this function, we can also obtain \begin{align*} \Omega(\mathbf{w}^K) \leq \frac{\sum_{k = 1}^K \alpha_B^k L_{\mathcal{\widetilde{S}}}(\mathbf{w}_B^k)}{\gamma}, \end{align*} and we set $\gamma \geq \left( \sum_{k = 1}^K \alpha_B^k L_{\mathcal{\widetilde{S}}}(\mathbf{w}_B^k) \right) / (4 R^2)$ to ensure the validity of the regularization term. \end{lemma} Without taking the loss function $L_{\mathcal{\widetilde{S}}}(\cdot)$ and the hyperparameter $\gamma$ into consideration, we can obtain a $4R^2$ upper bound on the regularization term by convexity and Assumption~\ref{as:2}. When this regularization term is applied to propagate prior knowledge, the upper bound obtained from the loss function and the hyperparameter should be tighter than $4R^2$; otherwise, this term does not constrain the search space of the parameters. The following theorem shows the benefit of this regularization: it can narrow the gap between the minimizer $\mathbf{w}^K$ and the optimal solution $\mathbf{w}^*$ by adaptively exploiting the maintained $K$ experts. 
\begin{theorem} \label{th:hy2} By using $\mathbf{w}_B^1,\mathbf{w}_B^2,\ldots,\mathbf{w}_B^K$ as prior knowledge to obtain $\mathbf{w}^K$ from $L_{\mathcal{\widetilde{S}}}^{\gamma}(\mathbf{w})$, we have \begin{align*} \left\Vert \mathbf{w}^K - \mathbf{w}^*\right\Vert \leq \sqrt{2 \Omega(\mathbf{w}^*) + \frac{32\beta}{\gamma^2} + \frac{6 \sum_{k = 1}^K \alpha_B^k L_{\mathcal{\widetilde{S}}}(\mathbf{w}_B^k)}{\gamma}}. \end{align*} \end{theorem} Although it is impossible for us to obtain $\mathbf{w}^*$ since the distribution of this interval is unknown, we can obtain an approximate solution by using the regularization term $\Omega(\mathbf{w})$. If the optimal solution is close to the weighted average $\sum_{k = 1}^K \alpha_B^k \mathbf{w}^k_B$, the value of $\Omega(\mathbf{w}^*)$ and the upper bound on the difference $\left\Vert \mathbf{w}^K - \mathbf{w}^*\right\Vert$ are small. Although it is also impossible for us to measure $\Omega(\mathbf{w}^*)$, we can measure the weighted term $\sum_{k = 1}^K \alpha_B^k L_{\mathcal{\widetilde{S}}}(\mathbf{w}_B^k)$ in the above upper bound, where $L_{\mathcal{\widetilde{S}}}(\mathbf{w}_B^k)$ is the empirical error of the $k^{\text{th}}$ expert on the latest interval. As a result, we know that the empirical minimizer $\mathbf{w}^K$ of $L_{\mathcal{\widetilde{S}}}^{\gamma}(\mathbf{w})$ approaches the optimal solution $\mathbf{w}^*$ of the original problem $L_\mathcal{D}(\mathbf{w})$ if the experts considered in the regularization term $\Omega(\mathbf{w})$ are effective on the latest interval. To sharpen this bound, the weights of experts with small empirical errors should be larger, and the design of the meta-expert meets this need. Therefore, we conclude that $\mathbf{w}^K$ should be close to those experts with small empirical errors in the domain $\mathcal{W}$. This conclusion leads to the design of the regularization term $\Omega(\mathbf{w})$. 
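The coarse bound $\Omega(\mathbf{w}) \leq 4R^2$ from Lemma~\ref{le:hy1} holds because both $\mathbf{w}$ and the convex combination $\sum_{k=1}^K \alpha_B^k \mathbf{w}^k_B$ lie in a ball of radius $R$, so their distance is at most $2R$. A quick numerical check of this fact (a sketch with illustrative dimensions, assuming a ball-shaped domain $\mathcal{W}$):

```python
import numpy as np

rng = np.random.default_rng(0)
R, K, d = 2.0, 5, 8                      # radius, experts, dimension (illustrative)

def sample_in_ball(radius, dim):
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v) * radius * rng.uniform()

for _ in range(1000):
    experts = [sample_in_ball(R, d) for _ in range(K)]
    alpha = rng.dirichlet(np.ones(K))    # nonnegative weights summing to 1
    w = sample_in_ball(R, d)
    omega = np.linalg.norm(w - sum(a * e for a, e in zip(alpha, experts))) ** 2
    assert omega <= 4 * R ** 2 + 1e-9    # Omega(w) <= 4 R^2
```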
\subsection{Regret Bound} The following regret measures the performance of CO$_2$: \begin{align*} \text{Regret}_{\text{CO$_2$}} = \sum_{t = 1}^T f_t(\mathbf{w}_t) - \min_{\mathbf{w} \in \mathcal{W}} \sum_{t = 1}^T f_t(\mathbf{w}). \end{align*} However, it is hard to bound this regret directly because the output $\mathbf{w}_t$ depends on a meta-expert, an online expert, and $K - 1$ offline experts. Therefore, we decompose the regret into two parts: $\text{Regret}_\text{ME}$ w.r.t. the meta-expert and $\text{Regret}_\text{KE}$ w.r.t. the maintained experts. Further, we can bound $\text{Regret}_\text{KE}$ by $\text{Regret}_\text{OE}$, which corresponds to the online expert. Therefore, we can bound $\text{Regret}_{\text{CO$_2$}}$ by bounding $\text{Regret}_\text{ME}$ and $\text{Regret}_\text{OE}$ separately: \begin{equation} \begin{aligned} \label{th:sp} \text{Regret}_{\text{CO$_2$}} & = \text{Regret}_\text{ME} + \text{Regret}_\text{KE}\\ & \leq \text{Regret}_\text{ME} + \text{Regret}_\text{OE}, \end{aligned} \end{equation} where \begin{align*} \text{Regret}_\text{ME} & = \sum_{t = 1}^T f_t(\mathbf{w}_t) - \min_{k \in [K]} \sum_{t = 1}^T f_t(\mathbf{w}^k_t),\\ \text{Regret}_\text{KE} & = \min_{k \in [K]} \sum_{t = 1}^T f_t(\mathbf{w}^k_t) - \sum_{t = 1}^T f_t(\widehat{\mathbf{w}}), \\ \text{Regret}_\text{OE} & = \sum_{t = 1}^T f_t(\mathbf{w}^K_t) - \sum_{t = 1}^T f_t(\widehat{\mathbf{w}}). \end{align*} We notice that the first term $\text{Regret}_\text{ME}$ in Eq.~(\ref{th:sp}) is the regret minimized in the MAB problem~\citep{MAB:13} against the best expert among the set of $K$ experts. Also, $\text{Regret}_\text{OE}$ is the regret of a standard online convex optimization problem~\citep{OSG:03}, measured against the best fixed expert. The inequality above holds because the online expert is itself one of the $K$ experts, so the best of the $K$ experts performs at least as well as the online expert. 
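The decomposition in Eq.~(\ref{th:sp}) is an identity plus the inequality $\text{Regret}_\text{KE} \leq \text{Regret}_\text{OE}$; a quick sanity check on arbitrary loss values (all numbers illustrative, with the empirical minimizer's losses standing in for $f_t(\widehat{\mathbf{w}})$):

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 50, 4
expert_losses = rng.uniform(size=(T, K))  # f_t(w_t^k): loss of expert k at round t
meta_losses = rng.uniform(size=T)         # f_t(w_t): loss of the integrated expert
erm_losses = rng.uniform(size=T)          # f_t(w_hat): loss of the empirical minimizer

regret_me = meta_losses.sum() - expert_losses.sum(axis=0).min()
regret_ke = expert_losses.sum(axis=0).min() - erm_losses.sum()
regret_oe = expert_losses[:, -1].sum() - erm_losses.sum()  # expert K = online expert
regret_co2 = meta_losses.sum() - erm_losses.sum()

assert np.isclose(regret_co2, regret_me + regret_ke)  # the identity
assert regret_ke <= regret_oe                          # min over K <= K-th term
```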
Besides, there is no meaningful regret for the offline experts since they are pre-given and their parameters do not change after receiving the loss $f_t(\cdot)$. Specifically, we have the following theorem. \begin{theorem} \label{th:rg} The CO$_2$ method with step sizes $\{ \nu = 4\sqrt{\frac{\ln K}{T}}, \eta_t = \frac{D}{\sqrt{\beta t}}, t \in [T] \}$ guarantees the following regret for all $ 1 \leq T \leq B$, \begin{align*} \text{Regret}_{\text{CO$_2$}} \leq \sqrt{T \ln K} + \text{Regret}_\text{KE} \end{align*} in the general case and \begin{align*} \text{Regret}_{\text{CO$_2$}} \leq \sqrt{T \ln K} + 6D\sqrt{T\beta} \end{align*} in the worst case, and the number of experts $K$ and samples $T$ should satisfy \begin{align*} K \leq 2 \exp \left(6D \sqrt{\beta} - \frac{\text{Regret}_\text{KE}}{\sqrt{T}} \right) \end{align*} to ensure that the advice from CO$_2$ gives an equivalent or better result than that from its online expert. \end{theorem} Accordingly, the regret of CO$_2$ for the online interval is $O(\sqrt{T})$, which is consistent with that of the chosen online expert. However, CO$_2$ works better, i.e., $\text{Regret}_{\text{CO$_2$}} \leq \text{Regret}_{\text{OE}}$, if $K$ and $T$ satisfy the condition in Theorem~\ref{th:rg}. In theory, we have $\text{Regret}_\text{KE} \leq \text{Regret}_\text{OE} \leq 6D\sqrt{T\beta}$. As discussed in Section~\ref{intro}, the offline experts are better than the online expert when their corresponding data distributions approximately match, or when the number of observed labeled samples in the current interval is limited. In that case, the first inequality is strict, and the bound on $K$ is a positive value. On the other hand, the number of samples in an interval, $T \leq B$, should not be too large. Although the bound on $K$ depends on $\text{Regret}_\text{KE}$, it is impossible to bound this term without further assumptions because the $K$ experts are trained from different data sets. 
Fortunately, it is unnecessary to set $K$ strictly according to this condition. We can apply CO$_2$ if we believe that the best offline expert outperforms the online expert by at least the meta-regret, i.e., $6D\sqrt{T\beta} - \text{Regret}_\text{KE} \geq \sqrt{T \ln K}$. The assumption is mild since we can set a small $K$ (like 2 or 3) even without prior knowledge. An intuitive understanding is as follows: if $K$ is too large, it is difficult for the meta-expert to derive effective advice because of the dilution effect of the weak experts; if $B$ is too large, the samples in an interval may come from various distributions, and the assumption about the setting may not hold. \subsection{Excess Risk Bound} The CO$_2$ performance is measured by the excess risk: \begin{align*} L_\mathcal{D}(\overline{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^* ) \end{align*} where $\overline{\mathbf{w}} = \frac{1}{T} \sum_{t = 1}^T \mathbf{w}_t$ is the averaged output solution of the online interval. To derive an algorithmic bound, we introduce the intermediate term $L_\mathcal{D}(\widehat{\mathbf{w}})$ because $\widehat{\mathbf{w}}$, as an empirical minimizer of $L_\mathcal{S}(\cdot)$, is necessary for analyzing the regret. Taking a divide-and-conquer approach, we have \begin{equation} \begin{aligned} \label{eq:all:1} & L_\mathcal{D}(\overline{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^*)\\ \leq & \frac{1}{T} \sum_{t = 1}^T L_\mathcal{D}(\mathbf{w}_t) - L_\mathcal{D}(\mathbf{w}^*) \\ = &\frac{1}{T} \sum_{t = 1}^T L_\mathcal{D}(\mathbf{w}_t) - L_\mathcal{D}(\widehat{\mathbf{w}}) + L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^*). \\ \end{aligned} \end{equation} The inequality is due to the convexity of $L_\mathcal{D}(\cdot)$, which implies $ L_\mathcal{D}( \frac{1}{T} \sum_{t = 1}^T \mathbf{w}_t) \leq \frac{1}{T} \sum_{t = 1}^T L_\mathcal{D}(\mathbf{w}_t)$. 
The regret of CO$_2$ is applied to imply the upper bound of $\frac{1}{T} \sum_{t = 1}^T L_\mathcal{D}(\mathbf{w}_t) - L_\mathcal{D}(\widehat{\mathbf{w}})$ in Lemma~\ref{le:erm:rg}. The residual term $L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^*)$ is an excess risk of empirical risk minimization (ERM)~\citep{SL:98}, and it is natural to separate this term into two equal parts, as shown in Lemma~\ref{le:erm:di}. In order to incorporate more information (such as the loss function properties, the hypothesis class, and the data distribution) and derive a more precise bound, we bound the residual term by combining two results based on different assumptions and techniques, as shown in Lemmas~\ref{le:erm:di} and~\ref{le:erm:dd}, respectively. We first use the regret of CO$_2$ to imply a part of the excess risk of CO$_2$ and then combine the obtained result with the others, conditional on other assumptions. \begin{lemma} \label{le:erm:rg} Following Theorem~\ref{th:rg}, with probability at least $1 - \delta$, we have \begin{equation*} \begin{aligned} \frac{1}{T} \sum_{t = 1}^T L_\mathcal{D}(\mathbf{w}_t) - L_\mathcal{D}(\widehat{\mathbf{w}}) \leq \frac{\sqrt{\ln K} + 6D\sqrt{\beta} + 4 \log(4 / \delta)}{\sqrt{T}}.\\ \end{aligned} \end{equation*} \end{lemma} Although~\citet{OL:16} and~\citet{DS:13} have made preliminary steps towards connecting optimization with statistical learning theory, we couple the intermediate result implied by the regret with the following results to derive a more specific convergence rate, rather than implying the excess risk by the regret directly. The remainder $L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^*)$ is no longer algorithmic since neither $\widehat{\mathbf{w}}$ nor $\mathbf{w}^*$ depends on the specific optimization algorithm used. Below, taking a data perspective, we derive data-independent and data-dependent bounds for it. 
Next, we present the excess risk bound of $\widehat{\mathbf{w}}$ under the conditions about the properties of the loss function. \begin{lemma} \label{le:erm:di} Exploiting the convexity, smoothness, and nonnegativity conditions of the loss function family $\mathcal{L}$, with probability at least $1 - \delta$, we have \begin{align*} L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^*) \leq & \frac{\left(12\beta R^2 + 4R \sqrt{\beta} \right)\log(4 / \delta) }{T} + 4R \sqrt{\frac{2 \beta \log(4/\delta)}{T}} \\ & + \frac{L_{\mathcal{D}}(\widehat{\mathbf{w}}) - L_{\mathcal{D}}(\mathbf{w}^*)}{2}. \end{align*} \end{lemma} The above excess risk bound is data-independent, which means it ignores the information carried by the data. Further, to obtain a data-dependent excess risk bound, we use the Rademacher complexity~\citep{RC:02}. Rather than indirectly bounding the Rademacher complexity through the empirical Rademacher complexity with covering numbers and the fat-shattering dimension, we follow the advanced analysis for norm-regularized hypothesis classes~\citep{LRC:18} to obtain a sharp bound directly. \begin{lemma} \label{le:rademacher} Let $\mathcal{S}$ be a set of i.i.d. samples drawn from the distribution $\mathcal{D}$. The Rademacher complexity $\mathcal{R}$ of the hypothesis class $\mathcal{H}$ w.r.t. the distribution $\mathcal{D}$ at the online interval is bounded as \begin{equation*} \begin{aligned} \mathcal{R}_\mathcal{D}(\mathcal{H}) \leq R\sqrt{\frac{1}{T} \sum_{i = 1}^{\infty} \left( D^2 \wedge \frac{e \lambda_i}{T} \right)} + \frac{DR\sqrt{e}}{T}. \end{aligned} \end{equation*} \end{lemma} Based on the above measure and the self-bounding property of smooth functions~\citep{SM:10}, we can derive the following data-dependent generalization bound.
\begin{lemma} \label{le:erm:dd} Exploiting the hypothesis class $\mathcal{H}$ and the distribution $\mathcal{D}$ of the observed data at the online interval, with probability at least $1 - \delta$, we have \begin{equation*} \begin{aligned} L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^*) \leq 42\sqrt{6 \beta} \log^{\frac{3}{2}}(64T) \mathcal{R}_{\mathcal{D}}(\mathcal{H}) + 3 \sqrt{\frac{\log(4 / \delta)}{T}}.\\ \end{aligned} \end{equation*} \end{lemma} The first method, shown in Lemma~\ref{le:erm:di}, considers properties of the loss function $l$, while the second method, shown in Lemma~\ref{le:erm:dd}, considers the hypothesis class as well as the data distribution. The upper bound in Lemma~\ref{le:erm:di} is not complete since the term $(L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^*)) / 2$ appears on the right-hand side; that is, this upper bound only controls half of the excess risk. Although we could obtain a complete bound by rearranging the obtained result, we choose to bound the other half by applying the result in Lemma~\ref{le:erm:dd}. In this way, we obtain a complete upper bound that takes into account all conditions used in the two lemmas. By substituting the results of Lemmas~\ref{le:erm:rg},~\ref{le:erm:di},~\ref{le:rademacher} and~\ref{le:erm:dd} into the excess risk bound framework in Eq.~(\ref{eq:all:1}), we obtain the excess risk of $\overline{\mathbf{w}}$. Because $\overline{\mathbf{w}}$ depends on the outputs of the proposed CO$_2$ method, the following generalization error bound is algorithmic.
\begin{theorem} \label{th:all} Exploiting the loss function properties (convexity, smoothness, and nonnegativity) of $\mathcal{L}$, the hypothesis class $\mathcal{H}$, the data distribution $\mathcal{D}$ and the regret of CO$_2$, with probability at least $1 - \delta$, we have \begin{equation*} \begin{aligned} L_\mathcal{D}(\overline{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^*) \leq & \frac{\left(12\beta R^2 + 4R \sqrt{\beta} \right)\log(16 / \delta) }{T} \\ & + \frac{28R\sqrt{\beta} \log^{\frac{3}{2}}(64T)}{T} \left(\sqrt{ \sum_{i = 1}^{\infty} \left( T D^2 \wedge e \lambda_i \right)} + D\sqrt{e}\right) \\ & + \frac{\left( 6R \sqrt{\beta} + 2\right) \sqrt{\log(16/\delta)} + 4 \log(8 / \delta) + \sqrt{\ln K} + 6D\sqrt{\beta}}{\sqrt{T}}. \\ \end{aligned} \end{equation*} \end{theorem} The convergence rate for the generalization error is $O(1 / \sqrt{T})$, which is consistent with that in stationary and non-algorithmic cases~\citep{CO:08,RC:16}. The convergence rate is directly related to the sample complexity, and the result is algorithmic. This result raises an immediate question: can we use fewer samples to achieve a desirable generalization error if the employed off-the-shelf online optimization method achieves a better regret? Unfortunately, the answer is negative. The intuition is that the bottleneck does not lie in the optimization method. As shown above, our target Eq.~(\ref{eq:all:1}) is decomposed into three parts bounded by three lemmas, where each step reflects at least one bottleneck: \begin{itemize} \item Lemma~\ref{le:erm:rg}: Although we can exploit strong convexity to improve the regret from $O(\sqrt{T})$ to $O(\log T)$~\citep{SC:14}, so that its corresponding part in the upper bound of the excess risk becomes $O(\log T / T)$, the part corresponding to the meta-expert is still $O(1 / \sqrt{T})$, and converting the regret into an excess risk bound introduces an $O(1 / \sqrt{T})$ rate.
\item Lemma~\ref{le:erm:di}: Although we can obtain a better bound of $O(1 / T)$ as shown in~\citep{ERM:17}, stronger assumptions are required, such as the loss function being strongly convex or the value of the minimal risk approaching zero. \item Lemma~\ref{le:erm:dd}: The convergence rate of the first term in the upper bound, related to the Rademacher complexity, is $O(\log T / T)$, but an $O(1 / \sqrt{T})$ convergence rate of the second term is inevitable if we want to introduce the Rademacher complexity to account for both the hypothesis class and the data distribution. \item The divide-and-conquer framework in Eq.~(\ref{eq:all:1}): As discussed above, the intermediate term $L_\mathcal{D}(\widehat{\mathbf{w}})$ is required for obtaining an algorithmic bound, but it causes an indirect bottleneck, since Lemmas~\ref{le:erm:di} and~\ref{le:erm:dd} (which lead to an $O(1 / \sqrt{T})$ rate) are necessary to analyze the non-algorithmic part in Eq.~(\ref{eq:all:1}). \end{itemize} This section has presented the generalization error analysis for the multi-distribution dynamic setting. We establish an $O(1 / \sqrt{T})$ rate for this setting, which is consistent with that in the stationary and non-algorithmic cases. The bound reflects the best result achievable so far without additional assumptions.
\begin{figure*} \centering \includegraphics[width=0.9\textwidth]{GeneratedSamples.pdf} \caption{Synthetic samples in ten different intervals.} \label{fig:samples} \end{figure*} \begin{figure*}[t] \centering \subfigure{ \includegraphics[width=0.3\textwidth]{syntheticLoss.pdf} \includegraphics[width=0.3\textwidth]{ijcnnLoss.pdf} \includegraphics[width=0.3\textwidth]{codLoss.pdf} } \subfigure{ \includegraphics[width=0.3\textwidth]{syntheticRegret.pdf} \includegraphics[width=0.3\textwidth]{ijcnnRegret.pdf} \includegraphics[width=0.3\textwidth]{codRegret.pdf} } \caption{Regret and loss of CO$_2$ and OGD methods.} \label{fig:res} \end{figure*} \section{Empirical Analysis} In this section, we present an empirical analysis to support our proposed theory and model. \subsection{Experimental Settings} We consider the problem of binary classification on multi-distributional data streams and compare the proposed CO$_2$ method with the OGD method on a synthetic dataset and two real-world datasets (i.e., ijcnn and cod-rna) from the LIBSVM repository~\cite{LIBSVM:11}. On the synthetic dataset, in each interval, independent and identically distributed samples in the same class are drawn from the same two-dimensional Gaussian distribution; we randomly select two different means to construct two different Gaussian distributions with the same covariance $\mathbf{I}_2$ for generating samples with different ground truth labels. We slightly change the two means by adding different Gaussian noise before sampling in the next interval, so as to model dynamic data streams. We construct $15$ intervals in our experiment, and the samples from the first $10$ intervals are presented in Fig.~\ref{fig:samples}. We find that the data distributions change gradually, which matches the assumption of our dynamic setting. Similarly, for the two real-world datasets ijcnn and cod-rna, we randomly divide the entire dataset into $15$ intervals.
After accessing all samples in one interval, we construct two different Gaussian distributions, and Gaussian noise is applied to the remaining samples in the other intervals, where the noise added to samples of the same class is drawn from the same distribution. Note that the two distributions for sampling noise are reconstructed for each interval to ensure the dynamic nature of the data stream. For fair comparisons, we set the maximal number of maintained experts $K_{\text{max}} = 5$ in CO$_2$ without loss of generality. \subsection{Experimental Results} Although the convergence rate for the generalization error of CO$_2$ is $O(1 / \sqrt{T})$, which is consistent with the result of OGD in stationary and non-algorithmic cases, the theoretical analyses in Theorem~\ref{th:rg} and Theorem~\ref{th:all} reveal that the proposed CO$_2$ method outperforms OGD in dynamic environments. To compare the two methods, we measure the regret $\sum_{t = 1}^T f_t(\mathbf{w}_t) - \sum_{t = 1}^T f_t(\mathbf{w^*})$ and the loss of each sample $f_T(\mathbf{w}_T)$ in the online interval; the main results are summarized in Fig.~\ref{fig:res}. We observe that CO$_2$ has significantly lower loss than OGD in the early stage, where only a few samples have been received. The main reason is that CO$_2$ can apply offline expert knowledge to fill the gap caused by insufficient training samples. Also, the regret of CO$_2$ is small when $T$ is sufficiently large, and the regret gap between CO$_2$ and OGD tends to widen as $T$ increases. This is because CO$_2$ adapts to out-of-distribution samples in the online interval by adjusting the strategy of integrating the $K - 1$ offline experts and the online expert after receiving the loss of each sample.
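The drifting two-Gaussian synthetic stream described in the experimental settings can be sketched as follows; the initial means, drift scale, and interval sizes are illustrative assumptions rather than the exact experimental values.

```python
import numpy as np

# Sketch of the synthetic multi-distributional stream: per interval, each
# class is drawn from a 2-D Gaussian with covariance I_2, and both class
# means drift by Gaussian noise between intervals.
def make_stream(n_intervals=15, n_per_class=100, drift=0.5, seed=0):
    rng = np.random.default_rng(seed)
    mean_pos, mean_neg = rng.normal(size=2), rng.normal(size=2) + 3.0
    stream = []
    for _ in range(n_intervals):
        xs = np.vstack([
            rng.normal(mean_pos, 1.0, size=(n_per_class, 2)),
            rng.normal(mean_neg, 1.0, size=(n_per_class, 2)),
        ])
        ys = np.hstack([np.ones(n_per_class), -np.ones(n_per_class)])
        stream.append((xs, ys))
        # drift the means slightly for the next interval
        mean_pos = mean_pos + rng.normal(scale=drift, size=2)
        mean_neg = mean_neg + rng.normal(scale=drift, size=2)
    return stream

stream = make_stream()
assert len(stream) == 15 and stream[0][0].shape == (200, 2)
```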
\section{Conclusion} In this paper, we consider a more general and realistic scenario of nonstationary non-mixing \textit{multi-distributional data streams}, in which several offline intervals with various distributions exist in addition to an out-of-distribution online interval. A novel online optimization method named Coupling Online-Offline learning (CO$_2$) is proposed, which applies a meta-expert to adaptively couple the offline experts learned from the previous offline intervals and the online expert being trained on the fly in the online interval. We provide theoretical guarantees for CO$_2$ in terms of knowledge transfer, regret, and generalization. To obtain a high-probability data-dependent bound on the generalization error of the output hypothesis of CO$_2$, we derive a specific excess risk bound by considering the loss function properties, the hypothesis class, the data distribution, and the regret of CO$_2$. Exploiting other assumptions and techniques to break through the bottlenecks in the generalization bound forms part of our future work. \section{Appendix - Supplementary Analysis} \label{supp} In this section, we present the proofs of all the theorems and lemmas. Our analysis follows several advanced techniques, including the self-bounding property of smooth functions~\citep{SM:10}, the analysis of adaptive online optimization methods with multiple learning rates~\citep{MG:16}, the connection between agnostic PAC learning and online convex optimization~\citep{OL:16}, empirical risk minimization for stochastic convex optimization~\citep{ERM:17}, and the bound on the Rademacher complexity for norm-regularized hypothesis classes~\citep{LRC:18}.
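Before the proofs, the exponentially weighted update at the heart of the meta-expert analysis (see the proof of Theorem~\ref{th:rg}) can be sketched as follows; the uniform prior, losses, and learning rate below are illustrative assumptions (the paper uses a telescoping prior).

```python
import math

# Sketch of the exponentially weighted meta-expert update: expert k keeps
# a weight proportional to alpha_1^k * exp(-nu * cumulative loss of k).
def meta_weights(prior, loss_rounds, nu):
    cum = [0.0] * len(prior)
    for losses in loss_rounds:                 # per-round losses in [0, 1]
        cum = [c + l for c, l in zip(cum, losses)]
    raw = [a * math.exp(-nu * c) for a, c in zip(prior, cum)]
    z = sum(raw)
    return [r / z for r in raw]                # normalized weights alpha^k

w = meta_weights([0.5, 0.5], [[0.9, 0.1]] * 10, nu=1.0)
assert abs(sum(w) - 1.0) < 1e-12
assert w[1] > w[0]                             # lower-loss expert dominates
```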
\subsection{Proof of Lemma~\ref{le:hy1}} According to the property of strong convexity~\citep[Lemma 13.5.2]{ML:14}, we know $L_{\mathcal{\widetilde{S}}}^{\gamma}(\mathbf{w})$ is $\gamma$-strongly convex since $L_{\mathcal{S}}(\mathbf{w})$ is convex and $\frac{\gamma}{2} \Omega(\mathbf{w})$ is $\gamma$-strongly convex. Accordingly, we have \begin{align*} \frac{\gamma}{2} \Omega \left(\mathbf{w}^K \right) & \leq L_{\mathcal{\widetilde{S}}}\left(\sum_{k = 1}^K \alpha_B^k \mathbf{w}_B^k \right) + \frac{\gamma}{2} \Omega\left(\sum_{k = 1}^K \alpha_B^k \mathbf{w}^k_B\right) - L_{\mathcal{\widetilde{S}}}\left(\mathbf{w}^K\right) - \frac{\gamma}{2} \Omega\left(\mathbf{w}^K\right)\\ & \leq \sum_{k = 1}^K \alpha_B^k L_{\mathcal{\widetilde{S}}}\left(\mathbf{w}_B^k\right) + 0 - 0 - \frac{\gamma}{2} \Omega\left(\mathbf{w}^K\right).\\ \end{align*} Above, the first inequality is due to the strong convexity property that $\frac{\gamma}{2} \Vert x - x^*\Vert_2^2 \leq f(x) - f(x^*)$~\citep[Lemma 13.5.3]{ML:14} and the fact that $\mathbf{w}^K$ is an empirical minimizer of $L_{\mathcal{\widetilde{S}}}^{\gamma}(\mathbf{w})$; the second inequality uses the convexity of $L_{\mathcal{\widetilde{S}}}(\cdot)$ and the condition that $L_{\mathcal{\widetilde{S}}}(\mathbf{w}^K) \geq 0$ as assumed. \subsection{Proof of Theorem~\ref{th:hy2}} By using the fact that $\mathbf{w}^*$ minimizes $L_\mathcal{D}(\mathbf{w})$ over the domain $\mathcal{W}$, we have \begin{align} \label{eq:hy2:1} L_\mathcal{D}(\mathbf{w}^*) - L_\mathcal{D}(\mathbf{w}^K) \leq 0. \end{align} According to the property of strong convexity~\citep[Lemma 13.5.2]{ML:14}, $L_\mathcal{D}(\mathbf{w}) + \frac{\gamma}{2} \Omega(\mathbf{w})$ is $\gamma$-strongly convex because the former term is convex and the latter term is $\gamma$-strongly convex.
Following the definition of strongly convex functions, we have \begin{equation} \begin{aligned} \label{eq:hy2:2} \frac{\gamma}{2} \left\Vert \mathbf{w}^K - \mathbf{w}^*\right\Vert_2^2 \leq & L_\mathcal{D}(\mathbf{w}^*) + \frac{\gamma}{2} \Omega(\mathbf{w}^*) - L_\mathcal{D}(\mathbf{w}^K) - \frac{\gamma}{2} \Omega(\mathbf{w}^K)\\ & + \left< \nabla L_\mathcal{D}(\mathbf{w}^K) + \gamma \left( \mathbf{w}^K - \sum_{k = 1}^K \alpha_B^k \mathbf{w}^k_B\right) , \mathbf{w}^K - \mathbf{w}^*\right>.\\ \end{aligned} \end{equation} To upper bound the last term above, we have \begin{equation} \begin{aligned} \label{eq:hy2:3} & \left< \nabla L_\mathcal{D}\left(\mathbf{w}^K\right) + \gamma \left( \mathbf{w}^K - \sum_{k = 1}^K \alpha_B^k \mathbf{w}^k_B\right) , \mathbf{w}^K - \mathbf{w}^*\right>\\ \leq & \left( \left\Vert \nabla L_\mathcal{D}(\mathbf{w}^K) \right\Vert + \gamma \left\Vert \mathbf{w}^K - \sum_{k = 1}^K \alpha_B^k \mathbf{w}^k_B \right\Vert \right) \left\Vert \mathbf{w}^K - \mathbf{w}^*\right\Vert\\ \leq & \frac{4\left\Vert \nabla L_\mathcal{D}\left(\mathbf{w}^K\right) \right\Vert^2_2}{2 \gamma} + \frac{\gamma \left\Vert \mathbf{w}^K - \mathbf{w}^*\right\Vert^2_2}{2 \cdot 4} + \gamma \left( \frac{4 \left\Vert \mathbf{w}^K - \sum_{k = 1}^K \alpha_B^k \mathbf{w}^k_B \right\Vert_2^2}{2} + \frac{\left\Vert \mathbf{w}^K - \mathbf{w}^*\right\Vert^2_2}{2 \cdot 4} \right)\\ \end{aligned} \end{equation} where the first inequality uses the Cauchy-Schwarz inequality and the second inequality uses the weighted Young's inequality $\left<a,b\right> \leq \frac{1}{2\epsilon} \Vert a \Vert_2^2 + \frac{\epsilon}{2} \Vert b \Vert_2^2$ with a suitable $\epsilon > 0$. Substituting Eqs. (\ref{eq:hy2:1}) and (\ref{eq:hy2:3}) into Eq.
(\ref{eq:hy2:2}), we have \begin{align*} \left\Vert \mathbf{w}^K - \mathbf{w}^*\right\Vert^2_2 \leq & 2 \Omega\left(\mathbf{w}^*\right) + \frac{8}{\gamma^2}\left\Vert \nabla L_\mathcal{D}\left(\mathbf{w}^K\right) \right\Vert^2_2 + 6 \Omega\left(\mathbf{w}^K\right)\\ \leq & 2 \Omega\left(\mathbf{w}^*\right) + \frac{32L}{\gamma^2} + \frac{6 \sum_{k = 1}^K \alpha_B^k L_{\mathcal{\widetilde{S}}}\left(\mathbf{w}_B^k\right)}{\gamma} \end{align*} where the second inequality is owing to Eq. (\ref{eq:func}) and Lemma~\ref{le:hy1}. \subsection{Proof of Theorem~\ref{th:rg}} As shown in Eq. (\ref{th:sp}), the analysis is divided into two parts. First, we bound the $\text{Regret}_\text{ME}$ of the meta-expert: the difference between the total cost it has incurred and that of the best among the $K$ experts. Then, we bound the $\text{Regret}_\text{OE}$ of the online expert: the difference between the total cost it has incurred and that of the empirical minimizer. Based on the previous study~\citep[Theorem 2.2]{PLG:06}, we define $W_j = \sum_{k = 1}^K \alpha_1^k e^{- \nu \sum_{t = 1}^j f_t(\mathbf{w}_t^k)}$ and lower bound the related quantity as \begin{equation} \begin{aligned} \label{eq:rg:1} \ln \frac{W_T}{W_0} = & \ln \left( \frac{\sum_{k = 1}^K \alpha_1^k e^{- \nu \sum_{t = 1}^T f_t(\mathbf{w}_t^k)}}{\sum_{k = 1}^K \alpha_1^k} \right)\\ = & \ln\left( \sum_{k = 1}^K \alpha_1^k e^{- \nu \sum_{t = 1}^T f_t(\mathbf{w}_t^k)}\right) - \ln\left( \sum_{k = 1}^K \alpha_1^k\right) \\ \geq & \ln \max_{k \in [K]} \left( \alpha_1^k e^{-\nu \sum_{t = 1}^T f_t(\mathbf{w}_t^k)}\right) - \ln\left( \sum_{k = 1}^K \alpha_1^k\right) \\ = & -\nu \min_{k \in [K]} \left(\sum_{t = 1}^T f_t(\mathbf{w}_t^k) + \frac{1}{\nu}\ln\frac{1}{\alpha_1^k} \right) - \ln\left( \sum_{k = 1}^K \alpha_1^k\right). \\ \end{aligned} \end{equation} On the other hand, for $j \in [T]$ and $k \in [K]$, we can use the updating rule of $\alpha_j^k$ defined in Eq.
(\ref{eq:update:a}) to obtain \begin{equation} \begin{aligned} \label{eq:rg:ho1} \ln \frac{W_j}{W_{j - 1}} = &\ln \left( \frac{\sum_{k = 1}^K \alpha_1^k e^{- \nu \sum_{t = 1}^j f_t(\mathbf{w}_t^k)}}{\sum_{k = 1}^K \alpha_1^k e^{- \nu \sum_{t = 1}^{j - 1} f_t(\mathbf{w}_t^k)}} \right) \\ = & \ln \left( \sum_{k = 1}^{K} \frac{\alpha_1^k e^{- \nu \sum_{t = 1}^{j - 1} f_t(\mathbf{w}_t^k)}}{\sum_{k = 1}^K \alpha_1^k e^{- \nu \sum_{t = 1}^{j - 1} f_t(\mathbf{w}_t^k)}} e^{- \nu f_j(\mathbf{w}_j^k)}\right) \\ = & \ln \left( \sum_{k = 1}^{K} \alpha_j^k e^{- \nu f_j(\mathbf{w}_j^k)}\right). \\ \end{aligned} \end{equation} To bound the above result further, we require Hoeffding's lemma. \begin{lemma}\citep[Lemma 2.2]{PLG:06} \label{le:Hoeffding} Let $X$ be a random variable with $a \leq X \leq b$, then for any $s \in \mathbb{R}$, \begin{align*} \ln E [e^{s X}] \leq s E[X] + \frac{s^2 (b - a)^2}{8}. \end{align*} \end{lemma} Recalling Assumption~\ref{as:4} that $0 \leq f_t(\cdot) \leq 1$ and combining that with Lemma~\ref{le:Hoeffding}, we have \begin{equation} \begin{aligned} \label{eq:rg:ho2} \ln \left( \sum_{k = 1}^{K} \alpha_j^k e^{- \nu f_j(\mathbf{w}_j^k)}\right) \leq - \nu \sum_{k = 1}^K \alpha_j^k f_j(\mathbf{w}_j^k) + \frac{\nu^2 \left(1 - 0\right)^2}{8} \leq - \nu f_j(\mathbf{w}_j) + \frac{\nu^2}{8}\\ \end{aligned} \end{equation} where the last inequality is owing to Jensen's inequality. Substituting Eq.~(\ref{eq:rg:ho2}) into Eq.~(\ref{eq:rg:ho1}) and accumulating the result from $j = 1$ to $j = T$, we have \begin{equation} \begin{aligned} \label{eq:rg:2} \sum_{j = 1}^T \ln \frac{W_j}{W_{j - 1}} = \ln \frac{W_T}{W_0} \leq - \nu \sum_{t = 1}^T f_t(\mathbf{w}_t) + \frac{T\nu^2}{8}. \end{aligned} \end{equation} By combining Eq. (\ref{eq:rg:1}) with Eq.
(\ref{eq:rg:2}), we have \begin{align*} \sum_{t = 1}^T f_t(\mathbf{w}_t) - \min_{k \in [K]} \left(\sum_{t = 1}^T f_t(\mathbf{w}_t^k) + \frac{1}{\nu}\ln\frac{1}{\alpha_1^k} \right) \leq \frac{T \nu}{8} + \ln\left( \sum_{k = 1}^K \alpha_1^k\right)\\ \end{align*} which implies \begin{equation} \begin{aligned} \label{eq:rg:3} \sum_{t = 1}^T f_t(\mathbf{w}_t) - \min_{k \in [K]} \sum_{t = 1}^T f_t(\mathbf{w}_t^k) \leq \frac{T \nu}{8} + \ln\left( \sum_{k = 1}^K \alpha_1^k\right) + \max_{k \in [K]} \frac{1}{\nu}\ln\frac{1}{\alpha_1^k}. \end{aligned} \end{equation} With Eq. (\ref{eq:in}), we have \begin{equation} \begin{aligned} \label{eq:rg:as} \sum_{k = 1}^K \alpha_1^k = \frac{K + 1}{K}\left[ \left( \frac{1}{K } - \frac{1}{K + 1}\right) + \cdots + \left( \frac{1}{1} - \frac{1}{2}\right)\right] = \frac{K + 1}{K} \cdot \frac{K }{K + 1 } = 1, \end{aligned} \end{equation} and for any $2 \leq k \leq K$, we have \begin{align*} \frac{K}{K + 1} \left( \alpha_1^{k} - \alpha_1^{k - 1} \right) = \left(\frac{1}{K + 1 - k} - \frac{1}{K + 2 - k}\right) - \left(\frac{1}{K + 2 - k} - \frac{1}{K + 3 - k}\right) \geq 0, \end{align*} so $\{\alpha_1^{k}\}$ is in ascending order and \begin{equation} \begin{aligned} \label{eq:rg:ab} \max_{k \in [K]}\ln\frac{1}{\alpha_1^k} = \ln\frac{1}{\alpha_1^1} = 2 \ln K. \end{aligned} \end{equation} Substituting Eqs. (\ref{eq:rg:as}) and (\ref{eq:rg:ab}) into Eq. (\ref{eq:rg:3}), we have \begin{align} \label{eg:rg:mee} \text{Regret}_\text{ME} \leq \frac{T \nu}{8} + \frac{2}{\nu}\ln K, \end{align} and minimizing the right-hand side over $\nu$ yields \begin{align} \label{eq:rg:v} \nu = 4 \sqrt{\frac{\ln K}{T}}. \end{align} Substituting Eqs.~(\ref{eq:in}) and~(\ref{eq:rg:v}) into Eq.~(\ref{eg:rg:mee}), we have \begin{equation} \begin{aligned} \label{eq:rg:me} \text{Regret}_\text{ME} \leq \sqrt{T \ln K}. \end{aligned} \end{equation} We apply standard online gradient descent (OGD) to optimize the online expert $\mathbf{w}^K_t$.
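As a numeric sanity check of the meta-expert derivation above, the following sketch verifies that the telescoping prior (assumed here to be $\alpha_1^k = \frac{K+1}{K}(\frac{1}{K+1-k} - \frac{1}{K+2-k})$, consistent with Eq.~(\ref{eq:rg:as})) sums to one with $\ln(1/\alpha_1^1) = 2\ln K$, and that the tuned $\nu$ minimizes $\frac{T\nu}{8} + \frac{2}{\nu}\ln K$ with minimum value $\sqrt{T \ln K}$; the values of $K$ and $T$ are arbitrary.

```python
import math

# Assumed telescoping prior; K and T are arbitrary illustrative values.
K, T = 4, 100
alpha = [(K + 1) / K * (1 / (K + 1 - k) - 1 / (K + 2 - k))
         for k in range(1, K + 1)]
assert abs(sum(alpha) - 1.0) < 1e-12                          # sums to 1
assert abs(math.log(1 / alpha[0]) - 2 * math.log(K)) < 1e-12  # 2 ln K

g = lambda nu: T * nu / 8 + 2 * math.log(K) / nu   # bound as function of nu
nu_star = 4 * math.sqrt(math.log(K) / T)           # tuned learning rate
assert abs(g(nu_star) - math.sqrt(T * math.log(K))) < 1e-9
assert all(g(nu_star) <= g(0.01 * i) + 1e-12 for i in range(1, 200))
```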
Following the previous result~\citep[Theorem 3.1]{OL:16} and setting the step size $\eta_t = \frac{D}{\sqrt{\beta t}}$, we have \begin{equation} \begin{aligned} \label{eq:rg:oe} \text{Regret}_\text{OE} \leq 6D\sqrt{T \beta}. \end{aligned} \end{equation} We obtain $\text{Regret}_{\text{CO$_2$}}$ by combining Eq.~(\ref{eq:rg:me}) with Eq.~(\ref{eq:rg:oe}). According to Eq. (\ref{th:sp}) and the requirement that CO$_2$ should surpass its online expert, we obtain the upper bound of $K$ by solving the following inequality \begin{align*} \sqrt{T \ln K} + \text{Regret}_\text{KE} \leq 6D\sqrt{T \beta}. \end{align*} \subsection{Proof of Lemma~\ref{le:erm:rg}} To proceed, we introduce the following norm concentration inequality. \begin{lemma}\citep[Proposition 1]{CM:09} \label{eq:cm:e} Let $\xi$ be a random variable on $(Z,\rho)$ with values in a Hilbert space, randomly drawn according to $\rho$ and satisfying $\Vert \xi \Vert \leq M < \infty$. Then, for any $0 < \delta < 1$, with probability at least $1 - \delta$, \begin{align*} \left\Vert \frac{1}{m}\sum_{i = 1}^m \left( \xi_i - \mathbb{E}[\xi_i]\right) \right\Vert \leq \frac{4M}{\sqrt{m}} \log \frac{2}{\delta}. \end{align*} \end{lemma} Using Lemma~\ref{eq:cm:e} with the bounds $\Vert \frac{1}{T} \sum_{t = 1}^T f_t(\mathbf{w}_t) \Vert \leq 1$ and $\Vert \frac{1}{T} \sum_{t = 1}^T f_t(\mathbf{w}^*) \Vert \leq 1$ implied by Assumption~\ref{as:4}, with probability at least $1 - \delta$, we have \begin{equation} \begin{aligned} \label{eq:all:2} \frac{1}{T} \sum_{t = 1}^T L_\mathcal{D}(\mathbf{w}_t) & \leq \frac{1}{T} \sum_{t = 1}^T f_t(\mathbf{w}_t) + \frac{4 \log(2 / \delta)}{\sqrt{T}} \\ \frac{1}{T} \sum_{t = 1}^T f_t(\widehat{\mathbf{w}}) & \leq \frac{1}{T} \sum_{t = 1}^T L_\mathcal{D}(\widehat{\mathbf{w}}) + \frac{4 \log(2 / \delta)}{\sqrt{T}}. \\ \end{aligned} \end{equation} Putting the two results in Eq.
(\ref{eq:all:2}) together, with probability at least $1 - \delta$, we have \begin{equation} \begin{aligned} \label{eq:all:3} \frac{1}{T} \sum_{t = 1}^T L_\mathcal{D}(\mathbf{w}_t) - L_\mathcal{D}(\widehat{\mathbf{w}}) \leq & \frac{1}{T} \left( \sum_{t = 1}^T f_t(\mathbf{w}_t) - \sum_{t = 1}^T f_t(\widehat{\mathbf{w}})\right) + \frac{4 \log(4 / \delta)}{\sqrt{T}}\\ \leq & \frac{\sqrt{\ln K} + 6D\sqrt{\beta}}{\sqrt{T}} + \frac{4 \log(4 / \delta)}{\sqrt{T}}\\ \end{aligned} \end{equation} where the first inequality combines the two bounds in Eq.~(\ref{eq:all:2}) via a union bound together with Hoeffding's inequality~\citep[Theorem 2.8]{CMB:13}, and the second inequality follows from Theorem~\ref{th:rg}. \subsection{Proof of Lemma~\ref{le:erm:di}} Our analysis is based on the techniques used in~\citep{ERM:17}. For simplicity, we denote $\dot{b}_t = \nabla f_t(\mathbf{w}^*)$ and $\ddot{b}_t = \nabla f_t(\widehat{\mathbf{w}}) - \nabla f_t(\mathbf{w}^*)$, so that \begin{equation} \begin{aligned} & \nabla L_\mathcal{S}(\mathbf{w}^*) = \frac{1}{T} \sum_{t = 1}^T \dot{b}_t \\ & \nabla L_\mathcal{D}(\mathbf{w}^*) = \mathbb{E} [\dot{b}]\\ & \nabla L_\mathcal{S}(\widehat{\mathbf{w}}) - \nabla L_\mathcal{S}(\mathbf{w}^*) = \frac{1}{T} \sum_{t = 1}^T \ddot{b}_t \\ & \nabla L_\mathcal{D}(\widehat{\mathbf{w}}) - \nabla L_\mathcal{D}(\mathbf{w}^*) = \mathbb{E} [\ddot{b}].\\ \end{aligned} \end{equation} By the Karush-Kuhn-Tucker (KKT) conditions~\citep[Theorem 2.2]{OL:16} for the convex functions $L_\mathcal{S}(\cdot)$ and $L_\mathcal{D}(\cdot)$, we have \begin{equation} \begin{aligned} & \left<\nabla L_\mathcal{S}(\widehat{\mathbf{w}}), \mathbf{w} - \widehat{\mathbf{w}}\right> \geq 0, \\ & \left<\nabla L_\mathcal{D}(\mathbf{w}^*), \mathbf{w} - \mathbf{w}^*\right> \geq 0, \forall \mathbf{w} \in \mathcal{W}. \\ \end{aligned} \end{equation} We first upper bound the excess risk $L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^*)$ by two terms of the same form and then derive the upper bounds of the two terms by similar methods.
\begin{equation} \begin{aligned} \label{eq:erm1:1} L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^*) \leq & \left< \nabla L_\mathcal{D}(\widehat{\mathbf{w}}) , \widehat{\mathbf{w}} - \mathbf{w}^*\right>\\ = & \left< \mathbb{E} [\ddot{b}] + \mathbb{E} [\dot{b}] , \widehat{\mathbf{w}} - \mathbf{w}^*\right> \\ = & \left< \mathbb{E} [\ddot{b}] - \frac{1}{T} \sum_{t = 1}^T \ddot{b}_t + \mathbb{E} [\dot{b}] + \frac{1}{T} \sum_{t = 1}^T \ddot{b}_t , \widehat{\mathbf{w}} - \mathbf{w}^*\right> \\ \leq & \left< \mathbb{E} [\ddot{b}] - \frac{1}{T} \sum_{t = 1}^T \ddot{b}_t + \mathbb{E} [\dot{b}] - \frac{1}{T} \sum_{t = 1}^T \dot{b}_t , \widehat{\mathbf{w}} - \mathbf{w}^*\right> \\ \leq & \underbrace{ \left \Vert \mathbb{E} [\ddot{b}] - \frac{1}{T} \sum_{t = 1}^T \ddot{b}_t \right\Vert \left\Vert \widehat{\mathbf{w}} - \mathbf{w}^* \right\Vert }_{:=B_1} + \underbrace{\left\Vert \mathbb{E} [\dot{b}] - \frac{1}{T} \sum_{t = 1}^T \dot{b}_t \right\Vert \left\Vert \widehat{\mathbf{w}} - \mathbf{w}^* \right\Vert}_{:=B_2}. \end{aligned} \end{equation} In the above, the first inequality is owing to the convexity of $L_{\mathcal{D}}(\cdot)$ over the domain $\mathcal{W}$; the second inequality uses the KKT condition $\left<\nabla L_\mathcal{S}(\widehat{\mathbf{w}}), \widehat{\mathbf{w}} - \mathbf{w}^*\right> \leq 0$ together with the identity $\frac{1}{T} \sum_{t = 1}^T \ddot{b}_t = \nabla L_\mathcal{S}(\widehat{\mathbf{w}}) - \frac{1}{T} \sum_{t = 1}^T \dot{b}_t$; the third inequality uses the triangle and Cauchy-Schwarz inequalities. Note that $B_1$ and $B_2$ have the same structure. To bound the variance terms in $B_1$ and $B_2$, we introduce the following norm concentration inequality in a Hilbert space.
\begin{lemma}\citep[Lemma 2]{CM:07} \label{le:cm} Let $\xi$ be a random variable on $(Z,\rho)$ with values in a Hilbert space, assume $\Vert \xi \Vert \leq M < \infty$ holds almost surely, denote $\sigma^2(\xi) = \mathbb{E}(\Vert \xi \Vert^2)$, and let $\{\xi_i\}_{i = 1}^m$ be independent random draws from $\rho$. For any $0 < \delta < 1$, with confidence $1 - \delta$, \begin{align*} \left\Vert \frac{1}{m}\sum_{i = 1}^m \left( \xi_i - \mathbb{E}[\xi_i]\right) \right\Vert \leq \frac{2 M \log(2/\delta)}{m} + \sqrt{\frac{2 \sigma^2(\xi) \log(2/\delta)}{m}}. \end{align*} \end{lemma} To use this lemma to upper bound $B_1$ and $B_2$, we need the bounds for $\Vert \dot{b}_t \Vert$, $\mathbb{E}\Vert \dot{b} \Vert^2_2$, $\Vert \ddot{b}_t \Vert$, and $\mathbb{E}\Vert \ddot{b} \Vert^2_2$. From Eq. (\ref{eq:func}), we have \begin{equation} \begin{aligned} \label{eq:erm1:2} \Vert \dot{b}_t \Vert \leq 2 \sqrt{\beta}, \quad \mathbb{E}\Vert \dot{b} \Vert^2_2 \leq 4 \beta.\\ \end{aligned} \end{equation} With Assumption~\ref{as:5}, we have \begin{align*} \Vert \ddot{b}_t \Vert \leq \beta \Vert \widehat{\mathbf{w}} - \mathbf{w}^* \Vert. \end{align*} Based on that, by using the properties of smooth functions~\citep[Theorem 2.1.5]{Nes:04}, we have \begin{align} \label{eq:erm1:3} \Vert \ddot{b}_t \Vert_2^2 \leq 2 \beta \left(f_t(\widehat{\mathbf{w}}) - f_t(\mathbf{w}^*) - \left<\nabla f_t(\mathbf{w}^*) , \widehat{\mathbf{w}} - \mathbf{w}^* \right> \right). \end{align} Taking the expectation on both sides, we have \begin{equation} \begin{aligned} \label{eq:erm1:4} \mathbb{E} \Vert \ddot{b} \Vert_2^2 & \leq 2 \beta \left(L_{\mathcal{D}}(\widehat{\mathbf{w}}) - L_{\mathcal{D}}(\mathbf{w}^*) - \left<\nabla L_{\mathcal{D}}(\mathbf{w}^*) , \widehat{\mathbf{w}} - \mathbf{w}^* \right> \right) \leq 2 \beta \left(L_{\mathcal{D}}(\widehat{\mathbf{w}}) - L_{\mathcal{D}}(\mathbf{w}^*) \right),\\ \end{aligned} \end{equation} where the last inequality applies Eq.
(\ref{eq:erm1:1}), more precisely the KKT condition $\left<\nabla L_{\mathcal{D}}(\mathbf{w}^*), \widehat{\mathbf{w}} - \mathbf{w}^*\right> \geq 0$, to the convex function $L_{\mathcal{D}}(\cdot)$. Based on Lemma~\ref{le:cm}, we establish the uniform convergence of $\frac{1}{T} \sum_{t = 1}^T \dot{b}_t$ to $\mathbb{E} [\dot{b}]$ and of $\frac{1}{T} \sum_{t = 1}^T \ddot{b}_t$ to $\mathbb{E} [\ddot{b}]$. By using Eqs. (\ref{eq:erm1:3}) and (\ref{eq:erm1:4}) with Lemma~\ref{le:cm}, with probability at least $1 - \delta$, we have \begin{equation} \begin{aligned} \label{eq:erm1:5} B_1 \leq & \frac{2\beta \Vert \widehat{\mathbf{w}} - \mathbf{w}^* \Vert_2^2 \log(2 / \delta) }{T} + 2 \Vert \widehat{\mathbf{w}} - \mathbf{w}^* \Vert \sqrt{\frac{\beta \left( L_{\mathcal{D}}(\widehat{\mathbf{w}}) - L_{\mathcal{D}}(\mathbf{w}^*) \right) \log(2 / \delta) }{T}}\\ \leq & \frac{3\beta \Vert \widehat{\mathbf{w}} - \mathbf{w}^* \Vert_2^2 \log(2 / \delta) }{T} + \frac{L_{\mathcal{D}}(\widehat{\mathbf{w}}) - L_{\mathcal{D}}(\mathbf{w}^*)}{2} \\ \leq & \frac{12\beta R^2 \log(2 / \delta) }{T} + \frac{L_{\mathcal{D}}(\widehat{\mathbf{w}}) - L_{\mathcal{D}}(\mathbf{w}^*)}{2},\\ \end{aligned} \end{equation} where the second inequality uses Young's inequality and the third inequality is owing to Assumption~\ref{as:2}, which implies $\Vert \widehat{\mathbf{w}} - \mathbf{w}^* \Vert \leq 2R$. By using the same method as above with Eq. (\ref{eq:erm1:2}), with probability at least $1 - \delta$, we have \begin{equation} \begin{aligned} \label{eq:erm1:6} B_2 \leq \frac{4R \sqrt{\beta} \log(2/\delta)}{T} + 4R \sqrt{\frac{2 \beta \log(2/\delta)}{T}}. \end{aligned} \end{equation} We complete the proof by substituting Eqs. (\ref{eq:erm1:5}) and (\ref{eq:erm1:6}) into Eq. (\ref{eq:erm1:1}).
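The standard smoothness (self-bounding) inequality behind Eq.~(\ref{eq:erm1:3}), $\Vert \nabla f(x) - \nabla f(y)\Vert^2 \leq 2\beta\left(f(x) - f(y) - \left<\nabla f(y), x - y\right>\right)$, can be checked numerically; the $\beta$-smooth convex quadratic and the sample points below are illustrative assumptions.

```python
import numpy as np

# Numeric check of the smoothness (co-coercivity) inequality on an
# illustrative beta-smooth convex quadratic f(w) = 0.5*||A w - b||^2.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 4))
b = rng.normal(size=8)
beta = float(np.linalg.eigvalsh(A.T @ A).max())   # smoothness constant

f = lambda w: 0.5 * float(np.sum((A @ w - b) ** 2))
grad = lambda w: A.T @ (A @ w - b)

for _ in range(100):
    x, y = rng.normal(size=4), rng.normal(size=4)
    lhs = float(np.sum((grad(x) - grad(y)) ** 2))
    rhs = 2 * beta * (f(x) - f(y) - float(grad(y) @ (x - y)))
    assert lhs <= rhs + 1e-8
```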
\subsection{Proof of Lemma~\ref{le:rademacher}} By using the definition of the Rademacher complexity~\citep{RC:02}, we have \begin{align*} \mathcal{R}_{\mathcal{D}}(\mathcal{W}) = \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \left[ \sup_{\mathbf{w} \in \mathcal{W}} \frac{1}{T} \sum_{t = 1}^T \sigma_t \left< \mathbf{w},\mathbf{x}_t\right> \right]. \end{align*} Let $\Gamma_i = \left<\sum_{t = 1}^T \sigma_t\mathbf{x}_t, \mathbf{u}_i\right>/T$ for any $i \geq 1$, and let $\theta \geq 1$ be a truncation level. By using advanced techniques for Rademacher complexities~\citep{LRC:18}, we have \begin{align*} & \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \left[ \sup_{\mathbf{w} \in \mathcal{W}} \left< \mathbf{w},\frac{1}{T} \sum_{t = 1}^T \sigma_t \mathbf{x}_t\right> \right] \\ = & \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \left[ \sup_{\mathbf{w} \in \mathcal{W}} \left< \mathbf{w}, \sum_{i = 1}^\infty \left< \frac{1}{T} \sum_{t = 1}^T \sigma_t \mathbf{x}_t, \mathbf{u}_i\right>\mathbf{u}_i \right> \right]\\ \leq & \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \left[ \sup_{\mathbf{w} \in \mathcal{W}} \left< \mathbf{w}, \sum_{i = 1}^\theta \Gamma_i \mathbf{u}_i \right> \right] + \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \left[ \sup_{\mathbf{w} \in \mathcal{W}} \left< \mathbf{w}, \sum_{i = \theta + 1}^\infty \Gamma_i\mathbf{u}_i \right> \right]\\ \end{align*} The above inequality is owing to the subadditivity of the supremum.
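The Khintchine-Kahane-type bound invoked later in this proof (Lemma~\ref{le:KKI} with $p = 1$), $\mathbb{E}_\sigma \Vert \sum_i \sigma_i v_i \Vert \leq \sqrt{\sum_i \Vert v_i \Vert^2}$, can be verified exactly by enumerating all sign patterns; the vectors below are arbitrary illustrative choices.

```python
import itertools
import numpy as np

# Exact-enumeration check of the Khintchine-Kahane bound (p = 1):
#   E_sigma || sum_i sigma_i v_i || <= sqrt(sum_i ||v_i||^2).
rng = np.random.default_rng(2)
vs = rng.normal(size=(4, 3))                  # four vectors in R^3

signs = itertools.product([-1.0, 1.0], repeat=4)
# exact expectation over all 2^4 Rademacher sign patterns
lhs = np.mean([np.linalg.norm(np.array(s) @ vs) for s in signs])
rhs = np.sqrt(np.sum(vs ** 2))
assert lhs <= rhs + 1e-12
```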
Based on the Cauchy-Schwarz inequality, we have the following upper bounds for the last two terms \begin{equation*} \begin{aligned} \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \left[ \sup_{\mathbf{w} \in \mathcal{W}} \left< \mathbf{w}, \sum_{i = 1}^\theta \Gamma_i \mathbf{u}_i \right> \right] = & \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \left[ \sup_{\mathbf{w} \in \mathcal{W}} \left< \sum_{i = 1}^\theta \sqrt{\lambda_i} \left<\mathbf{w}, \mathbf{u}_i\right>\mathbf{u}_i, \sum_{i = 1}^\theta \frac{1}{\sqrt{\lambda_i}} \Gamma_i\mathbf{u}_i \right> \right]\\ \leq & \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \left[ \sup_{\mathbf{w} \in \mathcal{W}} \left \Vert \sum_{i = 1}^\theta \sqrt{\lambda_i} \left<\mathbf{w}, \mathbf{u}_i\right>\mathbf{u}_i \right \Vert_2 \left\Vert \sum_{i = 1}^\theta \frac{1}{\sqrt{\lambda_i}} \Gamma_i\mathbf{u}_i \right\Vert_2 \right]\\ \leq & \underbrace{\sup_{\mathbf{w} \in \mathcal{W}} \sqrt{ \sum_{i = 1}^\theta \lambda_i \left<\mathbf{w}, \mathbf{u}_i\right>^2}}_{:= U_1} \underbrace{\mathbb{E}_{\mathbf{S},\mathbf{\sigma}}\sqrt{ \sum_{i = 1}^\theta \frac{1}{\lambda_i} \Gamma_i^2 }}_{:= U_2}, \end{aligned} \end{equation*} and \begin{align*} \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \left[ \sup_{\mathbf{w} \in \mathcal{W}} \left< \mathbf{w}, \sum_{i = \theta + 1}^\infty \Gamma_i\mathbf{u}_i \right> \right] \leq \underbrace{ \sup_{\mathbf{w} \in \mathcal{W}} \left\Vert \mathbf{w} \right\Vert \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \left[ \left\Vert \sum_{i = \theta + 1}^\infty \Gamma_i\mathbf{u}_i \right\Vert \right]}_{:= U_3}.\\ \end{align*} \subsubsection{Bounding $U_1$:} Enlarging $U_1$ by replacing $\theta$ with $\infty$, we have the following upper bound \begin{align*} U_1 \leq \sup_{\mathbf{w} \in \mathcal{W}} \sqrt{ \sum_{i = 1}^\infty \lambda_i \left<\mathbf{w}, \mathbf{u}_i\right>^2} = \sup_{\mathbf{w} \in \mathcal{W}} \sqrt{\left<\mathbf{w} \mathbf{w}^T, \mathbb{E}[\mathbf{x} \mathbf{x}^T]\right>} \leq DR,
\end{align*} where the last inequality uses Assumptions~\ref{as:2} and~\ref{as:3}. \subsubsection{Bounding $U_2$:} By Jensen's inequality, we have \begin{align*} U_2 \leq & \sqrt{ \sum_{i = 1}^\theta \mathbb{E}_{\mathbf{S},\mathbf{\sigma}}\frac{1}{\lambda_i} \left< \frac{1}{T} \sum_{t = 1}^T \sigma_t \mathbf{x}_t, \mathbf{u}_i\right>^2 }\\ = & \sqrt{ \sum_{i = 1}^\theta \mathbb{E}_{\mathbf{S},\mathbf{\sigma}}\frac{1}{\lambda_i T^2} \sum_{t = 1}^T \sum_{t' = 1}^T\sigma_t\sigma_{t'}\left< \mathbf{x}_t, \mathbf{u}_i\right>\left< \mathbf{x}_{t'}, \mathbf{u}_i\right> }\\ = & \sqrt{\sum_{i = 1}^\theta \frac{1}{\lambda_i T} \left< \frac{1}{T}\sum_{t = 1}^T \mathbb{E}_{\mathbf{S}}\left[\mathbf{x}_t\mathbf{x}_t^T\right],\mathbf{u}_i\mathbf{u}_i^T\right> }\\ = & \sqrt{\sum_{i = 1}^\theta \frac{1}{\lambda_i T} \left< \sum_{j = 1}^\infty \lambda_j \mathbf{u}_j\mathbf{u}_j^T,\mathbf{u}_i\mathbf{u}_i^T\right> }\\ = & \sqrt{\sum_{i = 1}^\theta \frac{1}{\lambda_i T} \lambda_i} = \sqrt{\frac{\theta}{T}}. \end{align*} \subsubsection{Bounding $U_3$:} Recalling the definition of $\Gamma_i$, we have \begin{equation} \begin{aligned} \label{eq:u3:1} U_3 \leq & R \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \sqrt{ \left\Vert \sum_{i = \theta + 1}^\infty \left< \frac{1}{T} \sum_{t = 1}^T \sigma_t \mathbf{x}_t, \mathbf{u}_i\right>\mathbf{u}_i \right\Vert^2_2 },\\ \end{aligned} \end{equation} where the inequality follows from our assumption $\sup_{\mathbf{w} \in \mathcal{W}} \left\Vert \mathbf{w} \right\Vert \leq R$. To bound it further, we introduce the Khintchine-Kahane inequality. \begin{lemma}\citep{KKI:12} \label{le:KKI} Let $\mathcal{H}$ be an inner-product space with the induced norm $\Vert \cdot \Vert_{\mathcal{H}}$, $v_1, \ldots ,v_n \in \mathcal{H}$ and $\sigma_1, \ldots, \sigma_n$ i.i.d. Rademacher random variables.
Then, for any $p \geq 1$, we have \begin{align*} \mathbb{E} \Vert \sum_{i = 1}^n \sigma_i v_i\Vert_{\mathcal{H}}^p \leq \left( c \sum_{i = 1}^n \Vert v_i\Vert_{\mathcal{H}}^2 \right)^{\frac{p}{2}} \end{align*} where $c := \max\{1, p-1\}$. The inequality also holds with $p$ in place of $c$. \end{lemma} With Lemma~\ref{le:KKI}, we have \begin{equation} \begin{aligned} \label{eq:u3:2} \mathbb{E}_{\mathbf{S},\mathbf{\sigma}} \sqrt{ \left\Vert \sum_{i = \theta + 1}^\infty \left< \frac{1}{T} \sum_{t = 1}^T \sigma_t \mathbf{x}_t, \mathbf{u}_i\right>\mathbf{u}_i \right\Vert^2_2 } \leq & \frac{1}{\sqrt{T}} \mathbb{E}_{\mathbf{S}} \sqrt{ \frac{1}{T} \sum_{t = 1}^T\left\Vert \sum_{i = \theta + 1}^\infty \left<\mathbf{x}_t, \mathbf{u}_i\right>\mathbf{u}_i \right\Vert^2_2 } \\ = & \frac{1}{\sqrt{T}} \mathbb{E}_{\mathbf{S}} \sqrt{ \frac{1}{T} \sum_{t = 1}^T \sum_{i = \theta + 1}^\infty \left<\mathbf{x}_t, \mathbf{u}_i\right>^2 }.\\ \end{aligned} \end{equation} Although we can bound Eq. (\ref{eq:u3:2}) further according to the following consequence of Assumption~\ref{as:3} \begin{equation} \begin{aligned} \label{eq:u3:xx} \sum_{i = \theta + 1}^\infty \left<\mathbf{x}_t, \mathbf{u}_i\right>^2 = \left<\sum_{i = \theta + 1}^\infty \left<\mathbf{x}_t, \mathbf{u}_i\right> \mathbf{u}_i, \mathbf{x}_t\right> \leq \left<\mathbf{x}_t, \mathbf{x}_t\right> \leq D^2, \end{aligned} \end{equation} we pursue a tighter bound by introducing the Rosenthal-Young inequality. \begin{lemma}\citep[Lemma 3]{RYI:12} \label{le:RYI} Let the independent nonnegative random variables $X_1, \ldots ,X_n$ satisfy $X_i \leq B < +\infty$ almost surely for all $i = 1, \ldots ,n$. If $q \geq \frac{1}{2}$ and $c_q := (2 q e)^q$, then the following holds \begin{align*} \mathbb{E} \left( \frac{1}{n} \sum_{i = 1}^n X_i\right)^q \leq c_q \left[ \left( \frac{B}{n}\right)^q + \left( \frac{1}{n} \sum_{i = 1}^n \mathbb{E} X_i\right)^q \right]. \end{align*} \end{lemma} By combining Eq.
(\ref{eq:u3:xx}) with Lemma~\ref{le:RYI}, we have \begin{equation} \begin{aligned} \label{eq:u3:3} \mathbb{E}_{\mathbf{S}} \sqrt{ \frac{1}{T} \sum_{t = 1}^T \sum_{i = \theta + 1}^\infty \left<\mathbf{x}_t, \mathbf{u}_i\right>^2 } \leq & \sqrt{e} \left( \frac{D}{\sqrt{T}} + \sqrt{ \frac{1}{T} \sum_{i = \theta + 1}^\infty \sum_{t = 1}^T \mathbb{E}_{\mathbf{S}}\left<\mathbf{x}_t, \mathbf{u}_i\right>^2 }\right)\\ = & D\sqrt{\frac{e}{T}} + \sqrt{\frac{e}{T} \sum_{i = \theta + 1}^\infty \lambda_i}. \end{aligned} \end{equation} To obtain the upper bound of $U_3$, we combine Eqs. (\ref{eq:u3:1}), (\ref{eq:u3:2}) and (\ref{eq:u3:3}). To sum up, we can complete the proof by \begin{align*} \mathcal{R}_{\mathcal{D}}(\mathcal{W}) \leq & U_1 \cdot U_2 + U_3 \\ \leq & DR \sqrt{\frac{\theta}{T}} + \frac{R}{T}\sqrt{e \sum_{i = \theta + 1}^\infty \lambda_i} + \frac{DR\sqrt{e}}{T}\\ \leq & R\sqrt{\frac{1}{T} \left( D^2 \theta + \sum_{i = \theta + 1}^\infty \frac{e \lambda_i}{T} \right)} + \frac{DR\sqrt{e}}{T}\\ = & R\sqrt{\frac{1}{T} \left( \sum_{i = 1}^{\theta} D^2 + \sum_{i = \theta + 1}^\infty \frac{e \lambda_i}{T} \right)} + \frac{DR\sqrt{e}}{T}.\\ \end{align*} In the above, the first inequality combines the decomposition of $\mathcal{R}_{\mathcal{D}}(\mathcal{W})$ with the bounds on $U_1$, $U_2$ and $U_3$ obtained above; the second inequality follows from the inequality of arithmetic and geometric means for nonnegative numbers: $2 \sqrt{xy} \leq x + y$, or equivalently $\sqrt{x} + \sqrt{y} \leq \sqrt{2x + 2y}$.
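The elementary inequality invoked in the last step can be checked mechanically; the random draws below are arbitrary.

```python
import numpy as np

# For x, y >= 0: 2*sqrt(x*y) <= x + y, and equivalently
# sqrt(x) + sqrt(y) <= sqrt(2x + 2y).
rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.uniform(0.0, 100.0, size=2)
    assert 2 * np.sqrt(x * y) <= x + y + 1e-9
    assert np.sqrt(x) + np.sqrt(y) <= np.sqrt(2 * x + 2 * y) + 1e-9
```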
Note that the above bound for $\mathcal{R}_{\mathcal{D}}(\mathcal{W})$ holds for any positive integer $\theta \geq 1$; we obtain the tightest result by minimizing the upper bound, that is \begin{align*} \mathcal{R}_{\mathcal{D}}(\mathcal{W}) \leq & R\sqrt{\frac{1}{T} \min_{\theta \in \mathbb{N}} \left(\sum_{i = 1}^{\theta} D^2 + \sum_{i = \theta + 1}^\infty \frac{e \lambda_i}{T} \right)} + \frac{DR\sqrt{e}}{T}\\ = & R\sqrt{\frac{1}{T} \sum_{i = 1}^{\infty} \left( D^2 \wedge \frac{e \lambda_i}{T} \right)} + \frac{DR\sqrt{e}}{T}, \end{align*} where the equality holds because the sequence of eigenvalues $\{ \lambda_i \}$ is arranged in non-increasing order. \subsection{Proof of Lemma~\ref{le:erm:dd}} Following a divide-and-conquer strategy, we split our target into three more tractable terms and bound each of them separately, \begin{equation} \begin{aligned} \label{eq:erm2:1} L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{D}(\mathbf{w}^*) = \left( L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{S}(\widehat{\mathbf{w}}) \right) + \left( L_\mathcal{S}(\widehat{\mathbf{w}}) - L_\mathcal{S}(\mathbf{w}^*) \right) + \left( L_\mathcal{S}(\mathbf{w}^*) - L_\mathcal{D}(\mathbf{w}^*) \right). \end{aligned} \end{equation} Recall that $l$ is a nonnegative function. To bound the generalization error $L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{S}(\widehat{\mathbf{w}})$ by the Rademacher complexity for this nonnegative function, we need the following two lemmas. \begin{lemma}\citep[Theorem 26.5]{ML:14} \label{le:data_bound} Assume that the loss function $l(h,z)$ is bounded by $b$ for all $z$ and $h \in \mathcal{H}$, and that $S$ is an $m$-sample data set. With probability at least $1 - \delta$, for all $h \in \mathcal{H}$, \begin{align*} L_{\mathcal{D}}(h) - L_{\mathcal{S}}(h) \leq 2 \mathcal{R}(l\circ\mathcal{H}) + b \sqrt{\frac{2 \ln(2 / \delta)}{m}}.
\end{align*} \end{lemma} \begin{lemma}\citep[Lemma 2.2]{SM:10} \label{le:contraction} For a nonnegative $\beta$-smooth loss $l$ bounded by $b$ and any function class $\mathcal{H}$, the Rademacher complexity on an $m$-sample data set satisfies \begin{align*} \mathcal{R}(l \circ \mathcal{H}) \leq 21\sqrt{6 \beta b} \log^{\frac{3}{2}}(64m) \mathcal{R}(\mathcal{H}). \end{align*} \end{lemma} With Assumption~\ref{as:4}, we know $\vert l( \left<\mathbf{w},\mathbf{x} \right>, y) \vert \leq 1$. By combining Lemmas~\ref{le:data_bound} and~\ref{le:contraction} (with $b = 1$) under the condition that $\widehat{\mathbf{w}} \in \mathcal{W}$, with probability at least $1 - \delta$, we have \begin{equation} \begin{aligned} \label{eq:erm2:5} L_\mathcal{D}(\widehat{\mathbf{w}}) - L_\mathcal{S}(\widehat{\mathbf{w}}) \leq & 2 \mathcal{R}_{\mathcal{D}}(l\circ\mathcal{W}) + \sqrt{\frac{2 \log(2 / \delta) }{T}}\\ \leq & 42\sqrt{6 \beta} \log^{\frac{3}{2}}(64T) \mathcal{R}_{\mathcal{D}}(\mathcal{W}) + \sqrt{\frac{2 \log(2 / \delta) }{T}}. \end{aligned} \end{equation} Note that the above bound does not use any special property of $\widehat{\mathbf{w}}$. We exploit the optimality of $\widehat{\mathbf{w}}$ and the data-independence of $\mathbf{w}^*$ to bound the next two terms. Since $\widehat{\mathbf{w}}$ is an empirical minimizer of $L_\mathcal{S}(\mathbf{w})$ over the domain $\mathcal{W}$, we have \begin{align} \label{eq:erm2:2} L_\mathcal{S}(\widehat{\mathbf{w}}) - L_\mathcal{S}(\mathbf{w}^*) \leq 0. \end{align} Because $\mathbf{w}^*$ is independent of the data set $\mathcal{S}$ and $f_1(\mathbf{w}^*), \ldots ,f_T(\mathbf{w}^*)$ is a sequence of i.i.d. random variables, we have $\mathbb{E}_{\mathcal{S}'} [L_{\mathcal{S}'}(\mathbf{w}^*) ] = L_\mathcal{D}(\mathbf{w}^*)$. With Assumption~\ref{as:4}, we know that $\mathbb{P}[0 \leq f_t(\mathbf{w}^*) \leq 1] = 1$ for every $t \in [T]$.
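Independence and boundedness are exactly the hypotheses that a Hoeffding-type bound requires. A Monte Carlo sketch with stand-in Bernoulli losses (all concrete values below are our illustrative choices, not from the paper):

```python
import numpy as np

# Empirically, the fraction of runs in which the empirical risk exceeds the
# population risk by more than sqrt(log(2/delta)/(2T)) stays below delta,
# as Hoeffding's inequality guarantees for i.i.d. losses in [0, 1].
rng = np.random.default_rng(5)
T, delta, trials = 200, 0.1, 2000
mu = 0.5                                     # population risk of the stand-in loss
eps = np.sqrt(np.log(2 / delta) / (2 * T))

losses = (rng.random((trials, T)) < mu)      # 0/1 losses with mean mu
violations = ((losses.mean(axis=1) - mu) > eps).mean()
assert violations <= delta
```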
Hoeffding's inequality~\citep[Theorem 2.8]{CMB:13} implies that with probability at least $1 - \delta$, we have \begin{align} \label{eq:erm2:3} L_\mathcal{S}(\mathbf{w}^*) - L_\mathcal{D}(\mathbf{w}^*)\leq \sqrt{\frac{\log(2 / \delta)}{2 T}}. \end{align} We complete the proof by substituting Eqs. (\ref{eq:erm2:2}), (\ref{eq:erm2:3}) and (\ref{eq:erm2:5}) into Eq. (\ref{eq:erm2:1}) and applying a union bound over the two high-probability events.
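The eigenvalue-truncation identity used at the end of the proof of Lemma~\ref{le:rademacher} can also be verified numerically; the decay profile of the eigenvalues below is an arbitrary illustration.

```python
import numpy as np

# For a non-increasing sequence (lambda_i),
#   min_theta ( sum_{i <= theta} D^2 + sum_{i > theta} c * lambda_i )
#     = sum_i min(D^2, c * lambda_i),
# which is the step that turns the theta-dependent bound into a minimum.
D, c = 0.5, 3.0
lam = np.sort(np.random.default_rng(2).exponential(1.0, size=50))[::-1]

lhs = min(theta * D**2 + c * lam[theta:].sum() for theta in range(len(lam) + 1))
rhs = np.minimum(D**2, c * lam).sum()
assert np.isclose(lhs, rhs)
```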
\section{Introduction} An array containing symbols $\symb{0}$ and $\symb{1}$ is given. We would like to determine which of the two symbols $\symb{0}$ and $\symb{1}$ appears more often in this array. The challenge is to perform this task in a local, uniform and decentralized fashion, that is, by means of a cellular automaton. A cellular automaton solving this problem is to receive the input array as its initial configuration and to end by reaching a consensus, that is, by turning every symbol in the array into the majority symbol. All computations must be done on the same array with no additional symbols. If we require the cellular automaton to solve the task for all odd-sized finite arrays with periodic boundary conditions (i.e., arrays indexed by a ring $\ZZ_n$ or a $d$-dimensional torus $\ZZ_n^d$, where $n$ is odd), then no perfect solution exists~\cite{LanBel95} (see also~\cite{BusFatMaiMar13}). Indeed, the effect of an isolated $\symb{1}$ deep inside a large region of $\symb{0}$'s will soon disappear, hence its removal from the starting configuration should not affect the end result. However, removing such an isolated $\symb{1}$ could shift the balance of the majority from $\symb{1}$ to $\symb{0}$ in a borderline case. Here, we consider a variant of the problem on infinite arrays, and focus on the one-dimensional case. We ask for a cellular automaton that classifies a randomly chosen configuration (say, using independent biased coin flips) according to its density \emph{almost surely} (i.e., with probability~$1$). We relax the notion of classification to allow computations that take infinitely long: we only require that the content of each site is eventually turned into the majority symbol and remains so forever, but we allow the fixation time to depend on the site. 
Almost sure classification of random initial configurations is closely related to the question of stability of cellular automata trajectories against noise and the notion of ergodicity for probabilistic cellular automata. Constructing a cellular automaton with at least two distinct trajectories that remain distinguishable in presence of positive Bernoulli noise is far from trivial. Toom~\cite{Too74,Too80} produced a family of examples in two dimensions. Each of Toom's cellular automata has two or more distinct fixed points that are stable against noise: in presence of sufficiently small (but positive) Bernoulli noise, the cellular automaton starting from each of these fixed points remains close to that fixed point for an indefinite amount of time. The noisy version of each of these cellular automata is thus non-ergodic in that it has more than one invariant measure. The most well-known of Toom's examples is the so-called \emph{NEC} rule (NEC standing for North, East, Center). The NEC rule replaces the symbol at each site with the majority symbol among the site itself and its north and east neighbors. Combining the combinatorial properties of the NEC rule and well-known results from percolation theory, Bu\v{s}i\'c, Fat\`es, Mairesse and Marcovici~\cite{BusFatMaiMar13} showed that the NEC cellular automaton also solves the classification problem: starting from a random Bernoulli configuration with parameter $p$ on $\ZZ^2$ (i.e., using independent coin flips with probability $p$ of having $\symb{1}$ at each site), the cellular automaton converges almost surely to the uniform configuration $\unif{\symb{0}}$ if $p<1/2$ and to $\unif{\symb{1}}$ if $p>1/2$. The situation in dimension one is more complicated. No one-dimensional cellular automaton with binary alphabet is known to classify Bernoulli random configurations. 
Moreover, Toom's examples do not extend to one dimension; the only example of a one-dimensional cellular automaton with distinct stable trajectories in presence of noise is a sophisticated construction due to G\'acs~\cite{Gac86,Gac01} based on error-correction and self-simulation, which uses a huge number of symbols per site. There are however candidate cellular automata in one dimension that are suspected to both classify Bernoulli configurations and to remain bi-stable in presence of noise. The oldest, most studied candidate is the \emph{GKL} cellular automaton, introduced by G\'acs, Kurdyumov and Levin~\cite{GacKurLev78}. Another candidate with similar properties and the same degree of simplicity is the \emph{modified traffic} cellular automaton studied by K\r{u}rka~\cite{Kur03} and Kari and Le Gloannec~\cite{KarGlo12}. Both of these automata have the important property that they ``wash out finite islands of errors'' on either of the two uniform configurations~$\unif{\symb{0}}$ and $\unif{\symb{1}}$~\cite{GonMae92,KarGlo12}. In other words, each of the two uniform configurations~$\unif{\symb{0}}$ and $\unif{\symb{1}}$ is a fixed point that attracts all configurations that differ from it at no more than finitely many sites. Incidentally, this same property is also shared among Toom's cellular automata, and is crucial (but not sufficient) for their noise stability and density classification properties. A cellular automaton that washes out finite islands of errors also washes out infinite sets of errors that are sufficiently sparse. In this context, a set should be considered sparse if it can be covered with disjoint finite islands that are washed out before sensing the effect of (or having an effect on) one another. It turns out that a Bernoulli random configuration with sufficiently small parameter is sparse with probability~$1$.
The proof is via a beautiful and relatively simple argument that goes back to G\'acs~\cite{Gac86,Gac01}, who used it to take care of the probabilistic part of his result. The author has learned this argument in a more streamlined form from a recent paper of Durand, Romashchenko and Shen~\cite{DurRomShe12}, who used it in the context of aperiodic tilings. Given its simplicity and potential, we shall repeat this argument below. An immediate consequence of the sparseness of low-density Bernoulli sets is that any cellular automaton that washes out finite islands of errors on~$\unif{\symb{0}}$ and~$\unif{\symb{1}}$ (e.g., GKL and modified traffic) almost surely classifies a Bernoulli random configuration correctly, as long as the Bernoulli parameter $p$ is close to either $0$ or $1$. It remains open whether the same classification occurs for all values of $p$ in $(0,1/2)\cup(1/2,1)$. \subsection{Terminology} Let us proceed by fixing the terminology and formulating the problem more precisely. By a \emph{configuration}, we shall mean an infinite array of symbols $x_i$ chosen from an alphabet $S$ that are indexed by integers $i\in\ZZ$, or equivalently, a function $x:\ZZ\to S$. The evolution of a cellular automaton is obtained by iterating a transformation $\Phi:S^\ZZ\to S^\ZZ$ on a starting configuration $x:\ZZ\to S$. The transformation $x\mapsto \Phi x$ is carried out by applying a \emph{local update rule} $f$ simultaneously on every site so that the new symbol at site $i$ reads $(\Phi x)_i\isdef f(x_{i-r},x_{i-r+1},\ldots,x_{i+r})$. We call the sites $i-r,i-r+1,\ldots,i+r$ the \emph{neighbors} of site $i$ and refer to $r$ as the neighborhood \emph{radius} of the cellular automaton. The \emph{density} of a symbol $a$ in a configuration $x$ is not always well-defined or non-ambiguous. We take as the definition, \begin{align} \rho_{a}(x) &\isdef \lim_{N\to\infty} \frac{\abs{\{i\in[-N,N]: x_i=a\}}}{2N+1} \end{align} when the limit exists. 
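On a finite window, this definition can be checked empirically for a coin-flip configuration; the window size and bias below are our illustrative choices.

```python
import numpy as np

# Empirical density of symbol 1 over the window [-N, N] of a Bernoulli(p)
# random configuration; by the law of large numbers it concentrates around p.
rng = np.random.default_rng(3)
p, N = 0.3, 100_000
window = rng.random(2 * N + 1) < p           # symbols at sites -N, ..., N
density = window.mean()
assert abs(density - p) < 0.01               # 0.01 is many standard deviations here
```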
According to the law of large numbers, the density of a symbol $a$ in a Bernoulli random configuration is almost surely the same as the probability of occurrence of $a$ at each site. Formally, if $X$ is a random configuration $\ZZ\to S$ in which the symbol at each site is chosen independently of the others, taking value $a$ with probability $p(a)$, then $\xPr\{\rho_a(X)=p(a)\}=1$. When $S=\{\symb{0},\symb{1}\}$, we simply write $\rho(x)\isdef\rho_{\symb{1}}(x)$ for the density of $\symb{1}$'s in $x$. We say that a cellular automaton $\Phi:\{\symb{0},\symb{1}\}^\ZZ\to\{\symb{0},\symb{1}\}^\ZZ$ \emph{classifies} a configuration $x:\ZZ\to\{\symb{0},\symb{1}\}$ according to density if $\Phi^t x\to\unif{\symb{0}}$ or $\Phi^t x\to\unif{\symb{1}}$ as $t\to\infty$, depending on whether $\rho(x)<1/2$ or $\rho(x)>1/2$. The notation $\unif{a}$ is used to denote a uniform configuration with symbol $a$ at each site. For us, the meaning of the convergence of a sequence of configurations $x^{(1)},x^{(2)},\ldots$ to another configuration $x$ is \emph{site-wise eventual agreement}: for each site $i$, there must be an index $n_i$ after which all the following configurations in the sequence agree with $x$ on the content of site $i$. (Formally, $x^{(n)}_i=x_i$ for all $n\geq n_i$.) This is the concept of convergence in the product topology of $S^\ZZ$, which is a compact and metric topology. \section{Eroder Property} Let us describe two candidates that are suspected to solve the density classification problem in one dimension: the cellular automaton of G\'acs, Kurdyumov and Levin and the modified traffic rule. Both cellular automata are defined on binary configurations $\ZZ\to\{\symb{0},\symb{1}\}$ and have neighborhood radius $3$. 
The cellular automaton of G\'acs, Kurdyumov and Levin~\cite{GacKurLev78} (\emph{GKL} for short) is defined by the transformation \begin{align} (\Phi x)_i &\isdef \begin{cases} \maj(x_{i-3}, x_{i-1}, x_i) & \text{if $x_i=\symb{0}$,} \\ \maj(x_i, x_{i+1}, x_{i+3}) & \text{if $x_i=\symb{1}$,} \end{cases} \end{align} where $\maj(a,b,c)$ denotes the majority symbol among $a,b,c$. The \emph{modified traffic} cellular automaton~\cite{Kur03,KarGlo12} is defined as a composition of two simpler automata: the traffic automaton followed by a smoothing filter. The \emph{traffic} automaton transforms a configuration by replacing every occurrence of $\symb{1}\symb{0}$ with $\symb{0}\symb{1}$. The follow-up filter replaces the $\symb{1}$ in every occurrence of $\symb{0}\symb{0}\symb{1}\symb{0}$ with~$\symb{0}$, and symmetrically, turns the $\symb{0}$ in every occurrence of $\symb{1}\symb{0}\symb{1}\symb{1}$ into a~$\symb{1}$. Sample space-time diagrams of the GKL and the modified traffic automata are depicted in Figure~\ref{fig:space-time:sample}. \begin{figure} \begin{center} \begin{tabular}{ccc} \begin{minipage}[c]{0.49\textwidth} \centering \includegraphics[scale=0.125]{figures/GKL-300x150-bmp-x.pdf} \end{minipage} & & \begin{minipage}[c]{0.49\textwidth} \centering \includegraphics[scale=0.125]{figures/mTraffic-300x150-bmp-x.pdf} \end{minipage} \\ (a) GKL & & (b) modified traffic \end{tabular} \end{center} \caption{ Finding the majority in a biased coin-flip configuration. Time goes downwards. } \label{fig:space-time:sample} \end{figure} Note that both GKL and modified traffic have the following symmetry: exchanging $\symb{0}$ with $\symb{1}$ and right with left leaves the cellular automaton unchanged. The uniform configurations $\unif{\symb{0}}$ and $\unif{\symb{1}}$ are fixed points of both GKL and modified traffic automata. 
The following theorem states that both automata wash out finite islands of errors on either of the two uniform configurations $\unif{\symb{0}}$ and $\unif{\symb{1}}$. This is sometimes called the \emph{eroder property}. For the GKL automaton, the eroder property was proved by Gonzaga de~S\'a and Maes~\cite{GonMae92}; for modified traffic, the result is due to Kari and Le~Gloannec~\cite{KarGlo12}. Let us write $\diff(x,y)\isdef \{i\in\ZZ: x_i\neq y_i\}$ for the set of sites at which two configurations $x$ and $y$ differ. We call $x$ a \emph{finite perturbation} of $z$ if $\diff(z,x)$ is a finite set. \begin{theorem}[Eroder property~\cite{GonMae92,KarGlo12}] Let $\Phi$ be either the GKL or the modified traffic cellular automaton. For every finite perturbation $x$ of $\unif{\symb{0}}$, there is a time $t$ such that $\Phi^t x=\unif{\symb{0}}$. If $\diff(\unif{\symb{0}},x)$ has diameter at most $n$ (i.e., covered by an interval of length $n$), then $\Phi^{2n} x = \unif{\symb{0}}$. The analogous statement about finite perturbations of $\unif{\symb{1}}$ holds by symmetry. \end{theorem} Let us emphasize that many simple cellular automata have the eroder property on some uniform configuration. For instance, the cellular automaton $\Phi:\{\symb{0},\symb{1}\}^\ZZ\to \{\symb{0},\symb{1}\}^\ZZ$ defined by $(\Phi x)_i\isdef x_{i-1} \land x_i \land x_{i+1}$ washes out finite islands on the uniform configuration $\unif{\symb{0}}$. What is remarkable about GKL and modified traffic is the fact that they have the eroder property on \emph{two distinct} configurations $\unif{\symb{0}}$ and~$\unif{\symb{1}}$. This double eroder property may lead one to guess that these two cellular automata could indeed classify Bernoulli configurations according to density or that the trajectories of the fixed points $\unif{\symb{0}}$ and $\unif{\symb{1}}$ are stable in presence of small but positive noise. \section{Washing Out Sparse Sets} In this section, we consider a slightly more general setting. 
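Both rules, together with the eroder property stated above, can be checked directly by simulation. The sketch below runs the automata on a large ring, which behaves like the infinite line as long as no information crosses the periodic boundary within the simulated time; the ring size, island placement and function names are our choices.

```python
import numpy as np

def maj(a, b, c):
    # majority symbol among three binary arrays
    return ((a + b + c) >= 2).astype(int)

def gkl(x):
    """One GKL step on a ring; np.roll(x, k)[i] equals x[i-k]."""
    m3, m1 = np.roll(x, 3), np.roll(x, 1)    # x_{i-3}, x_{i-1}
    p1, p3 = np.roll(x, -1), np.roll(x, -3)  # x_{i+1}, x_{i+3}
    return np.where(x == 0, maj(m3, m1, x), maj(x, p1, p3))

def traffic(x):
    """Replace every occurrence of 10 with 01."""
    y = x.copy()
    y[(x == 1) & (np.roll(x, -1) == 0)] = 0  # the 1 of each 10 pair
    y[(x == 0) & (np.roll(x, 1) == 1)] = 1   # the 0 of each 10 pair
    return y

def modified_traffic(x):
    """Traffic followed by the smoothing filter (0010 -> 0000, 1011 -> 1111)."""
    y = traffic(x)
    z = y.copy()
    z[(y == 1) & (np.roll(y, 2) == 0) & (np.roll(y, 1) == 0)
      & (np.roll(y, -1) == 0)] = 0           # the 1 inside 0010
    z[(y == 0) & (np.roll(y, 1) == 1) & (np.roll(y, -1) == 1)
      & (np.roll(y, -2) == 1)] = 1           # the 0 inside 1011
    return z

n = 101
rng = np.random.default_rng(4)
flip = lambda v: (1 - v)[::-1]               # exchange 0 <-> 1 and right <-> left
for step in (gkl, modified_traffic):
    # The two uniform configurations are fixed points.
    assert np.array_equal(step(np.zeros(n, int)), np.zeros(n, int))
    assert np.array_equal(step(np.ones(n, int)), np.ones(n, int))
    # The 0 <-> 1 / left <-> right symmetry commutes with one step.
    x = (rng.random(n) < 0.5).astype(int)
    assert np.array_equal(step(flip(x)), flip(step(x)))
    # Eroder property: an island of diameter 5 on the zero background
    # is washed out within 2 * 5 steps.
    x = np.zeros(n, int)
    x[10:15] = 1
    for _ in range(10):
        x = step(x)
    assert not x.any()
```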
We assume that $\Phi:S^\ZZ\to S^\ZZ$ is a cellular automaton that washes out finite islands of errors on a configuration $z$ in linear time; that is, there is a constant $m$ such that $\Phi^{ml} x = \Phi^{ml} z$ for any finite perturbation $x$ of $z$ for which $\diff(z,x)$ has diameter at most~$l$. For GKL and modified traffic, $z$ can be either $\unif{\symb{0}}$ or $\unif{\symb{1}}$, which are fixed points (hence $\Phi^{ml} z=z$), and the constant $m$ can be chosen to be $2$. The above eroder property automatically implies that $\Phi$ also washes out (possibly infinite) sets of errors that are ``sparse enough''. Indeed, an island of errors which is well separated from the rest of the errors will disappear before sensing or affecting the rest of the error set. We are interested in an appropriate notion of ``sparseness'' for $\diff(z,x)$ that guarantees the attraction of the trajectory of $x$ towards the trajectory of $z$. To elaborate further, let us denote the neighborhood radius of $\Phi$ by~$r$. Consider an arbitrary configuration $x$ and think of it as a perturbation of $z$ with errors occurring at sites in $\diff(z,x)$. Let $I\subseteq\ZZ$ be an interval of length $l$ such that $x$ agrees with $z$ on a margin of width $2rml$ around $I$, that is, $x_j=z_j$ for $j\in\ZZ\setminus I$ within distance $2rml$ from $I$. We call such an interval an \emph{isolated island} (of errors) on $x$. Let $y$ be a configuration obtained from $x$ by \emph{erasing} the errors on $I$, that is, by replacing $x_i$ with $z_i$ for each $i\in I$. (Note on terminology: we shall use ``erasure'' to refer to this abstract construction of one configuration from another, and reserve the word ``washing'' for what the cellular automaton does.)
Observe that within $ml$ steps, the distinction between $x$ and $y$ disappears and we have $\Phi^{ml}x=\Phi^{ml}y$ (see Figure~\ref{fig:washing:isolated}).% \begin{figure} \begin{center} \begin{tikzpicture}[>=stealth',shorten >=0.5pt,shorten <=0.5pt] \useasboundingbox (-5,-1.5) rectangle (5,1); \fill[fill=gray!10] (-4,0) -- (-2.5,-1.5) -- (2.5,-1.5) -- (4,0) -- cycle; \draw[help lines] (-4,0) -- (-2.5,-1.5) -- (2.5,-1.5) -- (4,0) -- cycle; \fill[fill=gray!60] (-1,0) -- (1,0) -- (2.5,-1.5) -- (-2.5,-1.5) -- cycle; \draw[help lines] (-5,0) -- (5,0); \draw[ultra thick, line cap=round] (-1,0) -- (1,0); \draw[<->,very thin] (-4.3,0) -- (-4.3,-1.5) node[midway,left] {$ml$}; \draw[<->,very thin] (-1,0.2) -- (1,0.2) node[midway,above] {$l$}; \draw[<->,very thin] (-4,0.2) -- (-2.5,0.2) node[midway,above] {$rml$}; \draw[<->,very thin] (-2.5,0.2) -- (-1,0.2) node[midway,above] {$rml$}; \draw[<->,very thin] (1,0.2) -- (2.5,0.2) node[midway,above] {$rml$}; \draw[<->,very thin] (2.5,0.2) -- (4,0.2) node[midway,above] {$rml$}; \end{tikzpicture} \end{center} \caption{ Forgetting an isolated region of errors. } \label{fig:washing:isolated} \end{figure} Namely, the island $I$ is washed out before time $ml$ and the sites in $\diff(z,x)\setminus I=\diff(z,y)\setminus I$ do not get a chance to feel the distinction between $x$ and $y$. We find that erasing an isolated island of length at most $l$ from $x$ does not affect whether the trajectory of $x$ is attracted towards the trajectory of $z$ or not. Neither does erasing several (possibly infinitely many) isolated islands of length $\leq l$ at the same time. 
On the other hand, erasing some isolated islands from $x$ makes the error set $\diff(z,x)$ sparser and possibly turns larger portions of $\diff(z,x)$ into isolated islands (see Figure~\ref{fig:washing:sparse}).% \begin{figure} \begin{center} \begin{tikzpicture} \clip (-6,-2) rectangle (6,0.5); \fill[fill=gray!60] (-4,0) -- (-5.5,-1.5) -- (-0.5,-1.5) -- (-2,0) -- cycle; \fill[fill=gray!60] (1.5,0) -- (-0.375,-1.875) -- (6.375,-1.875) -- (4.5,0) -- cycle; \fill[fill=gray!60] (-0.25,0) -- (-0.625,-0.375) -- (0.625,-0.375) -- (0.25,0) -- cycle; \fill[fill=gray!60] (-5.8,0) -- (-6.25,-0.45) -- (-4.75,-0.45) -- (-5.2,0) -- cycle; \draw[help lines] (-6,0) -- (6,0); \draw[ultra thick, line cap=round] (-4,0) -- (-2,0); \draw[ultra thick, line cap=round] (1.5,0) -- (4.5,0); \draw[ultra thick, line cap=round] (-0.25,0) -- (0.25,0); \draw[ultra thick, line cap=round] (-5.8,0) -- (-5.2,0); \end{tikzpicture} \end{center} \caption{ Washing out a sparse set of errors. } \label{fig:washing:sparse} \end{figure} Hence, we can perform the erasure procedure recursively, by first erasing the isolated islands of length $1$, then erasing the isolated islands of length $2$, then erasing the isolated islands of length $3$ and so forth. In this fashion, we obtain a sequence $x^{(0)}, x^{(1)}, x^{(2)}, \ldots$ with $x^{(0)}=x$ and $\diff(z,x^{(l)})\supseteq \diff(z,x^{(l+1)})$ obtained by successive erasure of isolated islands. We say that the error set $\diff(z,x)$ is \emph{sparse} if all errors are eventually erased, that is, if $\bigcap_l \diff(z,x^{(l)})=\varnothing$. However, this notion of sparseness still does not guarantee the attraction of the trajectory of $x$ towards the trajectory of $z$. (The trajectory of $x$ is considered to be \emph{attracted} towards the trajectory of $z$ if for each site $i$, there is a time $t_i$ such that $(\Phi^t x)_i=(\Phi^t z)_i$ for all time steps $t\geq t_i$. If $\Phi z=z$, this attraction becomes equivalent to the convergence $\Phi^t x\to z$.) 
Note that it is quite possible that all errors are eventually washed out from $x$ (hence, their information is lost) but the washing out procedure for larger and larger islands affects a given site $i$ indefinitely, so that $(\Phi^t x)_i\neq(\Phi^t z)_i$ for infinitely many time steps $t$ (see Figure~\ref{fig:washing:non-attracting}).% \begin{figure} \begin{center} \begin{tikzpicture} \clip (-6,-4.25) rectangle (6,0.5); \fill[fill=gray!60] (-0.2,0) -- (-0.425,-0.225) -- (0.325,-0.225) -- (0.1,0) -- cycle; \fill[fill=gray!60] (0.6,0) -- (-0.15,-0.75) -- (2.35,-0.75) -- (1.6,0) -- cycle; \fill[fill=gray!60] (-3.3,0) -- (-4.8,-1.5) -- (0.2,-1.5) -- (-1.3,0) -- cycle; \fill[fill=gray!60] (3.5,0) -- (-0.25,-3.75) -- (12.25,-3.75) -- (8.5,0) -- cycle; \draw[help lines] (-6,0) -- (6,0); \draw[ultra thick, line cap=round] (-0.2,0) -- (0.1,0); \draw[ultra thick, line cap=round] (0.6,0) -- (1.6,0); \draw[ultra thick, line cap=round] (-3.3,0) -- (-1.3,0); \draw[ultra thick, line cap=round] (3.5,0) -- (8.5,0); \draw[dashed, help lines] (0,0.25) -- (0,-4); \end{tikzpicture} \end{center} \caption{ Washing out but not attracting. } \label{fig:washing:non-attracting} \end{figure} To clarify this possibility, note that an isolated island of length $l$ can affect the state of sites within distance $rml$ up to time $ml$ (see Figure~\ref{fig:washing:isolated}). Let us denote by $A_l\isdef\diff(z,x^{(l-1)})\setminus\diff(z,x^{(l)})$ the union of isolated islands of length $l$ that are erased from $x^{(l-1)}$ during the $l$'th stage of the erasure procedure. The only possibility for a site $i$ to have a value other than $(\Phi^t z)_i$ at time $t$ is that site $i$ is within distance $rml$ from $A_l$ for some $l$ satisfying $ml>t$. In this case, we say that $i$ is within the \emph{territory} of such $A_l$.
A sufficient condition for the attraction of the trajectory of $x$ towards the trajectory of $z$ is that the error set $\diff(z,x)$ is sparse, and furthermore, each site $i$ is within the territory of $A_l$ for at most finitely many values of $l$. If this condition is satisfied, we say that the error set $\diff(z,x)$ is \emph{strongly sparse}. In summary, the trajectory of $x$ is attracted towards the trajectory of $z$ if $\diff(z,x)$ is \emph{strongly sparse}. \section{Sparseness} The notion of (strong) sparseness described in the previous section can be formulated and studied without reference to cellular automata, and that is what we are going to do now. This notion is of independent interest, as it commonly arises in error correcting scenarios. More sophisticated applications appear in~\cite{Gac86,Gac01} and~\cite{DurRomShe12}. Our exposition is close to that of~\cite{DurRomShe12}. We refer to a finite interval $I\subseteq\ZZ$ as an \emph{island}. Let $k$ be a fixed positive integer. The \emph{territory} (or the \emph{interaction range}) of an island $I$ of length $l$ is the set of sites $i\in\ZZ$ that are within distance $kl$ from $I$. We denote the territory of $I$ by $R(I)$. Two disjoint islands $I$ and $I'$ of lengths $l$ and $l'$, where $l\leq l'$, are considered \emph{well separated} if $I'\cap R(I)=\varnothing$, that is, if the larger island does not intrude the territory of the smaller one. A set $E\subseteq\ZZ$ is said to be \emph{sparse} if it can be covered by a family $\family{I}$ of (disjoint) pairwise well-separated islands. A sparse set is \emph{strongly sparse} if the cover $\family{I}$ can be chosen so that each site $i$ is in the territory of at most finitely many elements of $\family{I}$. Note that for $k\isdef 2rm$, we get essentially the same notion of sparseness as in the previous section. 
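This erasure procedure can be prototyped directly on finite error sets, assuming all sites outside the listed set are error-free. The sketch below follows the stage-by-stage construction (at stage $l$, intervals of length $l$ whose territory contains no other remaining error are erased); the function name and the small examples are ours, and we only consider islands starting at an error site, a simplification in the spirit of the minimality argument used in the proof of Theorem~\ref{thm:sparseness:Bernoulli}.

```python
def erasure_stages(errors, k, max_len=64):
    """Map each error site to the stage l at which it is erased (None if it
    survives all stages up to max_len).  At stage l, every interval of
    length l that intersects the remaining errors and whose territory (all
    sites within distance k*l) contains no other remaining error is erased."""
    remaining = set(errors)
    stage = {e: None for e in errors}
    for l in range(1, max_len + 1):
        erased = set()
        for a in sorted(remaining):          # islands starting at an error site
            island = set(range(a, a + l))
            lo, hi = a - k * l, a + l - 1 + k * l
            if any(lo <= e <= hi for e in remaining if e not in island):
                continue                     # territory intruded: not isolated
            erased |= remaining & island
        for e in erased:
            stage[e] = l
        remaining -= erased
    return stage

k = 2
# Two well-separated single errors disappear at stage 1.
assert erasure_stages({0, 10}, k) == {0: 1, 10: 1}
# Two adjacent errors block each other at stage 1 but go together at stage 2.
assert erasure_stages({0, 1}, k) == {0: 2, 1: 2}
# A third error far enough away is erased first, at stage 1.
assert erasure_stages({0, 1, 8}, k) == {0: 2, 1: 2, 8: 1}
```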
Indeed, let $\family{I}_l$ be the sub-family of $\family{I}$ containing all islands of length at most $l$, and denote by $E_l\isdef E\setminus\bigcup_{I\in\family{I}_l}I$ the subset of $E$ obtained by erasing the islands of length at most~$l$. Then, every island $I\in\family{I}$ having length~$l$ is \emph{isolated} in $E_{l-1}$, because its territory is not intruded by $E_{l-1}\setminus I$. The new notion of strong sparseness might be slightly more restrictive, as we define the territory by the constant $k=2rm$ rather than $k/2=rm$, but the arguments below are not sensitive to this distinction. The most basic observation about sparseness is its monotonicity. \begin{proposition}[Monotonicity] Any subset of a (strongly) sparse set is (strongly) sparse. \end{proposition} One expects a ``small'' set to be sparse. The following theorem due to Levin~\cite{Lev00} is an indication of this intuition. \begin{theorem}[Sparseness of small sets~\cite{Lev00}] There are constants $\varepsilon,c\in(0,1)$ depending on the sparseness parameter $k$ such that every periodic set $E\subseteq\ZZ$ with period $n$ and at most $c\, n^\varepsilon$ elements per period is strongly sparse. \end{theorem} The reverse intuition is misleading: a sparse set does not need to be ``small''. In fact, there are sparse sets with density arbitrarily close to $1$. The existence of such sets is demonstrated by Kari and Le~Gloannec~\cite{KarGlo12}, and in special cases, was also noted by Levin~\cite{Lev00} and K\r{u}rka~\cite{Kur03}. \begin{theorem}[Large sparse sets~\cite{KarGlo12}] There are periodic subsets of $\ZZ$ with density arbitrarily close to $1$ that are strongly sparse. \end{theorem} It immediately follows that the set of possible densities of strongly sparse (periodic) subsets of $\ZZ$ is dense in $[0,1]$.
A more important corollary is a strengthening of the impossibility result of Land and Belew~\cite{LanBel95} for cellular automata with \emph{linear-time} eroder property: for any such automaton, there are configurations $x$ with density $\rho(x)$ close to any number in $[0,1]$ that are incorrectly classified. The main result of interest for us is the sparseness of sufficiently biased Bernoulli random sets. \begin{theorem}[Sparseness of Bernoulli sets~\cite{Gac86,Gac01,DurRomShe12}] \label{thm:sparseness:Bernoulli} A Bernoulli random set $E\subseteq\ZZ$ with parameter $p$ is almost surely strongly sparse as long as $p<(2k)^{-2}$, where $k$ is the sparseness parameter. \end{theorem} \begin{proof} For a set $E\subseteq\ZZ$, we recursively construct a family $\family{I}$ of pairwise well-separated islands as a candidate for covering $E$. The family $\family{I}$ will be divided into sub-families $\family{J}_l$ consisting of islands of length $l$, and $E_l$ will be the set obtained by erasing the selected islands of length at most $l$ from $E$. Let $E_0\isdef E$. For $l\geq 1$, recursively define $\family{J}_l$ as the family of islands $I\subseteq\ZZ$ of length $l$ that intersect $E_{l-1}$ and are isolated in $E_{l-1}$ (i.e., $E_{l-1}\setminus I$ does not intersect the territory of $I$), and set $E_l\isdef E_{l-1}\setminus\bigcup_{I\in\family{J}_l}I$. Let $\family{I}\isdef\bigcup_l\family{J}_l$. To see that the elements of $\family{I}$ are pairwise well separated, let us first argue that every island $I\in\family{J}_l$ is minimal, in that, it is the smallest interval containing $I\cap E_{l-1}$. Indeed, let $J\subseteq I$ be the smallest island containing $I\cap E_{l-1}$, and assume that $\abs{J}<l$. Then, the endpoints of $J$ must be in $E_{l-1}$. Therefore, every island $I'\in\family{J}_{l'}$ with $l'<l$ must have been at distance more than $kl'$ from $J$, for otherwise, $I'$ would not have been isolated in $E_{l'-1}$. 
In particular, for $l'$ satisfying $\abs{J}\leq l' <l$, the island $J$ has distance more than $k\abs{J}$ from every $I'\in\family{J}_{l'}$. Since the distance between $J$ and $E_{l-1}\setminus I$ is also more than $kl\geq k\abs{J}$, it follows that $J$ is isolated in $E_{\abs{J}-1}$. On the other hand, $J$ intersects $E_{\abs{J}-1}$, because it intersects $E_{l-1}$ and $E_{l-1}\subseteq E_{\abs{J}-1}$. We find that $J\in\family{J}_{\abs{J}}$, which is a contradiction, since the points of $J$ would then have been erased before step~$l$. Therefore, $I$ is minimal. The well-separation of two islands $I\in\family{J}_l$ and $I'\in\family{J}_{l'}$ with $l\leq l'$ follows from the minimality of $I'$. We conclude that the elements of $\family{I}$ are also well separated. Now, let $E$ be a Bernoulli random configuration with parameter $p$. We choose an appropriate sequence $0<l_1<l_2<l_3<\cdots$ (to be specified more explicitly below) and observe whether a site $u$ is in $E_{l_n}$. We will show that the probability that site $u$ is in $E_{l_n}$ is double-exponentially small, that is, $\xPr(u\in E_{l_n})\leq\alpha^{2^n}$ for some $\alpha<1$. Let $u$ be an arbitrary site. In order for $u$ to be in $E_{l_n}$, it is necessary that $u$ is also in $E_{l_n-1}$, and furthermore, $u$ is not covered by any island in $\family{J}_{l_n}$. Therefore, $E_{l_{n-1}}$ (which includes $E_{l_n-1}$) must contain two elements $u_{\symb{0}}\isdef u$ and $u_{\symb{1}}$ that are farther than $l_n/2$ from each other but no farther than $(k+1/2)l_n$ from each other (see Figure~\ref{fig:explanation-tree:children}).
\begin{figure} \begin{center} \begin{tikzpicture}[>=stealth',shorten >=0.5pt,shorten <=0.5pt] \draw[help lines] (-4,0) -- (4,0); \fill (0,0) circle (2pt) node[below=1ex] {$u_{\symb{0}}=u$}; \fill (3,0) circle (2pt) node[below=1ex] {$u_{\symb{1}}$}; \foreach \x in {-4, -1, 1, 4} \draw[help lines] (\x,0.1) -- (\x,-0.1); \draw[->,very thin] (0,0.3) -- (1,0.3) node[midway,above] {$l_n/2$}; \draw[->,very thin] (0,0.3) -- (-1,0.3) node[midway,above] {$l_n/2$}; \draw[<->,very thin] (-4,0.3) -- (-1,0.3) node[midway,above] {$kl_n$}; \draw[<->,very thin] (1,0.3) -- (4,0.3) node[midway,above] {$kl_n$}; \end{tikzpicture} \end{center} \caption{ Evidence for $u\in E_{l_n}$ in $E_{l_{n-1}}$ (see the proof of Theorem~\ref{thm:sparseness:Bernoulli}). } \label{fig:explanation-tree:children} \end{figure} In a similar fashion, in order for $u_{\symb{0}}$ and $u_{\symb{1}}$ to be in $E_{l_{n-1}}$, the set $E_{l_{n-2}}$ must contain elements $u_{\symb{0}\symb{0}}\isdef u_{\symb{0}}$, $u_{\symb{0}\symb{1}}$, $u_{\symb{1}\symb{0}}\isdef u_{\symb{1}}$ and $u_{\symb{1}\symb{1}}$ such that \begin{align} \frac{1}{2}l_{n-1} &< d(u_{\symb{0}\symb{0}},u_{\symb{0}\symb{1}}) \leq \Big(k+\frac{1}{2}\Big)l_{n-1} \;,\\ \frac{1}{2}l_{n-1} &< d(u_{\symb{1}\symb{0}},u_{\symb{1}\symb{1}}) \leq \Big(k+\frac{1}{2}\Big)l_{n-1} \;. \end{align} Repeating this procedure, we find a binary tree of depth $n$ with leaves in $E_0=E$ that provides evidence for the presence of $u$ in $E_{l_n}$. We call such a tree an \emph{explanation tree}. Thus, in order to have $u\in E_{l_n}$, there must be at least one explanation tree for it. We estimate the probability of the existence of an explanation tree for $u\in E_{l_n}$. Let $T=(u,u_{\symb{0}},u_{\symb{1}}, u_{\symb{0}\symb{0}}, u_{\symb{0}\symb{1}}, \ldots, u_{\symb{1}\symb{1}\cdots\symb{0}},u_{\symb{1}\symb{1}\cdots\symb{1}})$ be a \emph{candidate} explanation tree, that is, a tree with the right distances between the nodes.
To simplify the estimation, we choose the lengths $l_1,l_2,\ldots$ in such a way as to ensure that the leaves of $T$ are distinct elements of $\ZZ$. A sufficient condition for the distinctness of the leaves of $T$ is that for each $m$, \begin{align} \frac{1}{2}l_m &\geq 2\Big(k+\frac{1}{2}\Big)(l_{m-1}+l_{m-2}+\cdots+l_1) \;. \end{align} This would guarantee that the two subtrees descending from each node do not intersect. We choose $l_m\isdef (4k+3)^{m-1}$, which is a solution of the above system of inequalities. A candidate tree $T$ is an explanation tree for $u\in E_{l_n}$ if and only if all its leaves are in $E$. Whether or not each leaf $u_w$ of $T$ is in $E$ is determined by a biased coin flip with probability $p$ of falling in $E$. With the above choice of $l_m$, the events $u_w\in E$ for different leaves of $T$ are independent. It follows that $T$ is an explanation tree for $u\in E_{l_n}$ with probability $p^{2^n}$. Let us now estimate the number of candidate trees of depth $m$. Denote this number by $f_m$. Observe that $f_m$ satisfies the recursive inequality \begin{align} f_m &\leq 2k l_m\, f_{m-1}^2 \end{align} with $f_0\isdef 1$. Indeed, $2k l_m$ counts the number of possible positions for $u_{\symb{1}}$ and $f_{m-1}^2$ counts the number of possibilities for the two subtrees. Letting $g_m\isdef\log f_m$, we have \begin{align} g_m &\leq a\,m + b + 2g_{m-1} \;, \end{align} where $a\isdef\log(4k+3)$ and $b\isdef\log 2k-\log(4k+3)$. Expanding the last recursion we get \begin{align} g_m &\leq 2^m(2b + a\sum_{i=0}^m\frac{i}{2^i}) \\ &\leq 2^m(2b + a\sum_{i=0}^\infty\frac{i}{2^i}) \\ &= 2^{m+1}(a+b) \;. \end{align} Therefore, \begin{align} f_m &\leq (2k)^{2^{m+1}} \;. \end{align} By the sub-additivity of the probabilities, we find that the probability of the existence of at least one explanation tree for $u\in E_{l_n}$ satisfies \begin{align} \xPr(u\in E_{l_n}) &\leq p^{2^n} f_n \leq \alpha^{2^n} \;, \end{align} where $\alpha\isdef p(2k)^2$.
Since $p<(2k)^{-2}$, we get $\alpha<1$. The probability that a given site $u\in\ZZ$ is in $E$ but is not covered by $\family{I}$ (i.e., never erased) is \begin{align} \xPr(u\in \bigcap_l E_l) &= \xPr(u\in \bigcap_n E_{l_n}) = \lim_{n\to\infty} \xPr(u\in E_{l_n}) = \lim_{n\to\infty} \alpha^{2^n} = 0 \;. \end{align} Since $\ZZ$ is countable, we find, by sub-additivity, that $\xPr(\bigcap_l E_l\neq\varnothing)=0$, which means that $E$ is sparse with probability~$1$. That $E$ is strongly sparse with probability~$1$ follows by the Borel-Cantelli argument. Namely, the event that a site $u$ is in the territory of infinitely many islands $I\in\family{I}$ can be expressed as $\bigcap_m\bigcup_{n\geq m}\{d(u,E_{l_n})\leq kl_n\}$. (Note that an island covering a site in $E_{l_n}$ has length greater than $l_n$.) The probability that $u$ is within distance $kl_n$ from $E_{l_n}$ satisfies \begin{align} \xPr\big(d(u,E_{l_n})\leq kl_n\big) &\leq (2k l_n+1)\alpha^{2^n} = (2k(4k+3)^{n-1}+1)\alpha^{2^n} \;. \end{align} Therefore, \begin{align} \xPr\Big(\bigcup_{n\geq m}\{d(u,E_{l_n})\leq kl_n\}\Big) &\leq \sum_{n\geq m} (2k(4k+3)^{n-1}+1)\alpha^{2^n} < \infty \;. \end{align} It follows that \begin{align} \xPr\Big(\bigcap_m\bigcup_{n\geq m}\{d(u,E_{l_n})\leq kl_n\}\Big) &\leq \lim_{m\to\infty} \sum_{n\geq m} (2k(4k+3)^{n-1}+1)\alpha^{2^n} = 0 \;. \end{align} Using again the countability of $\ZZ$, we find that, with probability~$1$, no site $u$ is in the territory of more than finitely many islands $I\in\family{I}$. That is, $E$ is almost surely strongly sparse. \qed \end{proof} Theorem~\ref{thm:sparseness:Bernoulli}, along with a standard application of monotonicity, shows that when the Bernoulli parameter is varied, a non-trivial phase transition occurs.
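The double-exponential decay of $\alpha^{2^n}$ makes the Borel--Cantelli series in the proof converge extremely fast. The following short numeric sketch (our own illustration, with arbitrarily chosen parameters; not part of the paper) evaluates the truncated tail $\sum_{n\geq m}(2k(4k+3)^{n-1}+1)\alpha^{2^n}$.

```python
def tail(alpha, k, m, n_max=40):
    """Truncated Borel-Cantelli tail sum_{n=m}^{n_max} (2k(4k+3)^(n-1)+1) * alpha^(2^n).

    For alpha < 1 the factor alpha^(2^n) decays double-exponentially and
    quickly underflows to 0.0, so a modest n_max suffices.
    """
    return sum((2 * k * (4 * k + 3) ** (n - 1) + 1) * alpha ** (2 ** n)
               for n in range(m, n_max + 1))
```

For $\alpha=1/2$ and $k=1$, the tail already drops below $10^{-5}$ at $m=5$, consistent with the limit being $0$.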
\begin{corollary}[Phase transition] \label{cor:sparseness:Bernoulli:phase-transition} There is a critical value $p_\critical\in(0,1]$ depending on the sparseness parameter $k$ such that a Bernoulli random set $E\subseteq\ZZ$ with parameter $p$ is almost surely strongly sparse if $p<p_\critical$ and is almost surely not strongly sparse if $p>p_\critical$. \end{corollary} \begin{proof} First, observe that the (strong) sparseness of $E$ is a translation-invariant event (i.e., for $a\in\ZZ$, the sparseness of $a+E$ is equivalent to the sparseness of $E$). Therefore, by ergodicity, the probability that a Bernoulli random set is (strongly) sparse is either~$0$ or~$1$. The presence of a threshold value $p_\critical\in[0,1]$ (possibly $0$) is a standard consequence of monotonicity. Indeed, let $U_i, i\in\ZZ$ be a collection of independent random variables with uniform distribution on the real interval $[0,1]$. For $p\in [0,1]$, define a set $E^{(p)}\isdef\{i\in\ZZ: U_i<p\}$. Then, $E^{(p)}$ is a Bernoulli random set with parameter $p$, and the collection of sets $E^{(p)}$ is increasing in $p$. Let $p_\critical\isdef\sup\{p: \text{$E^{(p)}$ is almost surely (strongly) sparse}\}$. By monotonicity, the set $E^{(p)}$ is almost surely (strongly) sparse for $p<p_\critical$ and is almost surely not (strongly) sparse for $p>p_\critical$. Finally, we know from Theorem~\ref{thm:sparseness:Bernoulli} that $p_\critical>0$. \qed \end{proof} \section{Restricted Classification} Let us state the claimed result of this paper explicitly as a corollary of Theorem~\ref{thm:sparseness:Bernoulli} and the discussions in the previous sections. \begin{corollary}[Restricted classification] \label{cor:classification:restricted} Let $\Phi:\{\symb{0}, \symb{1}\}^\ZZ\to\{\symb{0}, \symb{1}\}^\ZZ$ be a cellular automaton that washes out finite islands of errors on either of the two uniform configurations $\unif{\symb{0}}$ and $\unif{\symb{1}}$ in linear time. 
Namely, suppose that there is a constant $m$ such that for every finite perturbation $x$ of $\unif{\symb{0}}$ for which $\diff(\unif{\symb{0}},x)$ has diameter at most $l$, we have $\Phi^{ml}x=\unif{\symb{0}}$, and similarly for $\unif{\symb{1}}$. Then, there is a constant $p_\critical\in(0,1/2]$ such that $\Phi$ classifies a Bernoulli random configuration with parameter $p\in [0,p_\critical)\cup(1-p_\critical,1]$ almost surely correctly. \end{corollary} For GKL and modified traffic, we have $k=2rm=12$. Therefore, Theorem~\ref{thm:sparseness:Bernoulli} only guarantees correct classification if the Bernoulli parameter $p$ is within distance $(2k)^{-2}=24^{-2}\approx 0.0017$ from either $0$ or $1$. \section{Discussion} We conclude with a few comments and questions. Corollary~\ref{cor:classification:restricted} shows that the asymptotic behaviour of the GKL and modified traffic automata starting from a Bernoulli random configuration undergoes a phase transition: the cellular automaton converges to $\unif{\symb{0}}$ for $p$ close to $0$ and to $\unif{\symb{1}}$ for $p$ close to $1$. It remains open whether the transition occurs precisely at $p=1/2$, or if there are other transitions in between. The result of Bu\v{s}i\'c et al.{}~\cite{BusFatMaiMar13} shows that the transition in the NEC cellular automaton is unique and happens precisely at $p=1/2$. Another open issue is the behaviour of the GKL and modified traffic automata on random configurations with non-Bernoulli distributions. One might expect the sparseness argument to extend to measures that are sufficiently mixing. For instance, it should be possible to show the same kind of classification on a Markov random configuration that has density close to $0$ or~$1$. It would also be interesting to see if the sparseness method can be applied to probabilistic cellular automata that are suggested for the density classification task.
Fat\`es~\cite{Fat13} has introduced a parametric family of one-dimensional probabilistic cellular automata with a density classification property: for every $n\in\NN$ and $\varepsilon>0$, there is a setting of the parameter such that the automaton classifies a periodic configuration with period $n$ with probability at least $1-\varepsilon$. Does the majority-traffic rule of Fat\`es with a fixed parameter classify sufficiently biased Bernoulli random configurations? A two-dimensional candidate would be the noisy version of the nearest-neighbor majority rule, in which the noise occurs only when there is no consensus in the neighborhood. Finally, given its various applications, one might try to study the notion of sparseness in a more systematic fashion, trying to capture more details about the transition. It is curious that the notion of sparseness of Bernoulli random sets supports a hierarchy of phase transitions, even in one dimension where the standard notion of percolation fails. \subsubsection*{Acknowledgments.} Research supported by ERC Advanced Grant 267356-VARIS of Frank den Hollander. I would like to thank Jarkko Kari for suggesting this problem and for discussions that led to this paper. \bibliographystyle{splncs03}
\section*{Introduction} In their biologically active form, RNA molecules are folded in fairly well defined three dimensional structures \cite{PDB}. These structures are strongly constrained by the pairing of conjugate bases along the sequence, but depend also on the ionic strength of the solution \cite{MD}. It has proved very useful to describe the pairing of RNA in terms of secondary structures and pseudoknots \cite{PRB}. These structural elements can be viewed as motifs which appear repeatedly in the folds. The main structural motifs of secondary structures are helical duplexes, single stranded regions, hairpin stems, hairpin loops, bulges and internal loops, junctions and multiloops (see table \ref{secondary}). It is convenient at this stage to introduce some standard graphical representations of RNA structures. In the {\it linear representation}, one writes the base sequence on an oriented straight line, starting from the 5' to the 3' end. By replacing the straight line by a closed circle one obtains the {\it circular representation}. The pairing of two bases is represented by a dotted line, or colored line, joining the two bases in the upper side of the straight 5'-to-3' line. In the case of a circular representation, pairings are drawn inside the circle. This representation associates a unique diagram to any set of base pairings of RNA. \begin{table} \centering \includegraphics[width=0.8\textwidth]{TableOne.eps} \caption{Examples of basic RNA secondary structure motifs. From top to bottom: a single strand (PDB 283D \cite{pdb_283d}), a helical duplex (PDB 405d \cite{pdb_405d}), a hairpin stem and loop (PDB 1e4p \cite{pdb_1e4p}), a bulge (PDB 1r7w \cite{pdb_1r7w}), a multiloop (PDB 1kh6 \cite{pdb_1kh6}). From left to right: spacefill view, three-dimensional structure, secondary structure motif. 
The pictures are made with MolPov \cite{MolPov}, Jmol \cite{Jmol} and PovRay \cite{PovRay}.} \label{secondary} \end{table} In the circular and linear representation, a diagram represents a secondary structure if it involves only pairings which do not cross \cite{W_sec}. In table \ref{secstruc} (top row), we show a secondary structure, together with its two representations (linear and circular in the fourth and fifth column, respectively). Similarly, a diagram contains a pseudoknot if it contains pairings which do cross (see, e.g., the bottom row in table \ref{secstruc}). \begin{table} \centering \includegraphics[width=0.9\textwidth]{TableTwo.eps} \caption{Top row: example of a secondary structure motif (a helix, PDB 1a51 \cite{pdb_1a51}). Bottom row: an example of a common RNA H-pseudoknot (PDB 1a60 \cite{pdb_1a60}). From left to right: spacefill view, three-dimensional structure, secondary structure (and base-pairings from RNAView \cite{RNAview}), linear representation and circular representation. The non-planar pairings (crossing arcs) are emphasized in red.} \label{secstruc} \end{table} There are quite a few methods to predict secondary structures. Energy-based methods have proven to be the most reliable (as, e.g., \cite{zuker,ViennaPackage}). They assign some energy to the base pairings and some entropy to the loops and bulges. In addition, they take into account stacking energies, and assign precise weights to specific patterns (tetraloops, multiloops, etc.) \cite{david}. The lowest free energy folds are obtained either by dynamic programming algorithms \cite{zuker2}, or by computing the partition function of the RNA molecule \cite{partition}. The main drawback of these energy-based methods is that they deal solely with secondary structures and cannot take into account pseudoknots in a systematic way. There are several computer programs that attempt to predict RNA-folding with pseudoknots, but the problem is still mostly unsolved (see, e.g.
\cite{PseudoPrograms,P1,P2,P3,P4,P5,P6,P7,P8}; the list is not exhaustive). There exists, however, a novel approach: in order to include the pseudoknots, the RNA folding problem has been formulated in terms of a sophisticated mathematical theory, namely a quantum matrix field theory \cite{OZ}. These types of field theories were first introduced in particle physics, more precisely in Quantum Chromodynamics, in order to model the theory of strong interactions \cite{thooft}. Since then, these field theories have been used in many mathematical problems, such as combinatorics, number theory, etc. (for a recent review see \cite{MatrixModels}). They involve a parameter $N$, the linear dimension of the $N\times N$ matrices, which can be used as an expansion parameter for the theory (large $N$ expansion) \cite{thooft}. In the RNA folding problem, the matrix field theory can be expanded diagrammatically in various parameters. The simplest expansion is in terms of the number of pairings and can easily be represented in terms of diagrams. These diagrams, which are the usual Feynman diagrams of quantum field theory, can be viewed as the set of all the possible pairings of the RNA, with the correct corresponding Boltzmann weights \cite{OZ,Z}. Another possible expansion is in powers of $1/N$. As was shown in previous papers \cite{OZ,VO}, this expansion relies on a topological number called the genus, which characterizes the pairing. As we shall see, the genus of a diagram is defined by its embedding on a two-dimensional surface. It is the minimal number of handles that the surface should have so that the diagram can be drawn on the surface without crossing. Secondary structures correspond to zero genus, that is, planar structures: they can be drawn on a sphere without crossing. The simplest pseudoknots, such as the ``H-pseudoknot'' (see table \ref{secstruc}) or the kissing hairpin, correspond to genus 1: they can be drawn on a torus without crossing.
This classification of RNA structures allows us to completely grasp the topological complexity of a pseudoknot with a single integer number, the genus. It can be viewed as a kind of ``quantum'' number. It is reminiscent of the superfold families, such as CATH or SCOP \cite{protein}, which have proven so useful in protein structure classification. In the literature other possible classifications of RNA structures with pseudoknots have been proposed, such as the ones in, e.g., \cite{others}. However, the one we propose in this paper is the only one that is purely topological, i.e., independent of any three-dimensional embedding, and which is based only on the classical topological expansion of closed bounded surfaces. This is also the reason why this expansion can be derived mathematically with standard tools of combinatorial topology. We believe that such a mathematical framework can be exploited far beyond the simple classification of RNA pseudoknots, and could be applied also for RNA-folding predictions \cite{VO}. In this work, however, we restrict ourselves to the problem of classifying known RNA-structures. In the following, we shall define more precisely the genus for a given diagram, and show how it can be simply calculated. We then present an analysis of the genii of two main databases which contain RNA structures, namely PSEUDOBASE \cite{Pseudobase} and the wwPDB (the Worldwide Protein Data Bank which contains some RNAs). The RNA structures in the latter are also listed in the RNABase database \cite{RNAbase}, which we also used as a reference database. We find that RNAs of sizes up to about 250 have a genus smaller than 2, whereas long RNAs, such as ribosomal RNA, may have a genus up to 18. \section*{Materials and Methods} \underline{The genus}. The topological classification of RNA secondary structures with pseudoknots that we propose is based on the concept of topological {\it genus}. We first review the definition of genus of a given diagram.
Consider a diagram representing a pairing in the linear representation. The matrix field theory representation of the problem suggests representing a pairing not by a single dotted line, but rather by a double line (which should never be twisted) \cite{OZ,thooft}. Therefore, a unique diagram in the double line representation corresponds to each dotted-line diagram. Some examples are shown in fig.\ref{double}. \begin{figure}[hbpt] \centering \includegraphics[width=0.48\textwidth]{double22.eps} \caption{A schematic view of the double line representation (right) of a generic linear representation of pairings (left). The example a) (top) represents a couple of stacked base-pairs, and b) (bottom) represents an H-pseudoknot embedded in a hairpin.} \label{double} \end{figure} Each double line diagram is characterized by its number of double lines (i.e. the number of pairings of the diagram) which we denote by $P$, and by its number of loops denoted $L$, which is the total number of closed loops made with the (single) lines of the diagram. For instance, in fig.\ref{double} (bottom) and in fig.\ref{ex1}, the diagram has $P=3$ double lines and $L=1$ loop. The genus of the diagram is the integer defined by \[ g=\frac{P-L}{2} \] \begin{figure}[hbpt] \centering \includegraphics[width=0.48\textwidth]{ex1_bis2.eps} \caption{This diagram represents a pseudoknot with genus $g=1$ since it has $P=3$ double lines and $L=1$ loop.} \label{ex1} \end{figure} It is related to the Euler characteristic of the diagram, and is a topological invariant of the diagram. Its geometrical interpretation is quite simple. Consider a sphere with $g$ handles: a sphere with 0 handles is a sphere, a sphere with one handle is topologically equivalent to a torus, a sphere with 2 handles is topologically equivalent to a double-torus, etc. (see fig.\ref{gex}).
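The formula $g=(P-L)/2$ can be computed mechanically from a list of base pairs. Below is a Python sketch (our own illustration; it is not code from the paper): the backbone is closed into a circle, the boundary loops of the double-line diagram are counted as cycles of a permutation, and the loop running through the closed $5'$--$3'$ ends is discarded to match the convention used here.

```python
def genus(pairs):
    """Genus of an RNA chord diagram via g = (P - L)/2.

    `pairs` lists base-paired positions as tuples (i, j) of integer labels
    ordered along the backbone; unpaired bases do not affect the genus and
    may simply be omitted.
    """
    pts = sorted(p for pair in pairs for p in pair)
    idx = {p: n for n, p in enumerate(pts)}  # relabel as 0..2P-1
    P = len(pairs)
    invol = {}
    for i, j in pairs:
        invol[idx[i]], invol[idx[j]] = idx[j], idx[i]
    # Boundary loops of the double-line diagram: cycles of the permutation
    # sending x to the pairing partner of its cyclic successor on the
    # closed backbone circle.
    perm = {x: invol[(x + 1) % (2 * P)] for x in range(2 * P)}
    seen, loops = set(), 0
    for x in range(2 * P):
        if x not in seen:
            loops += 1
            while x not in seen:
                seen.add(x)
                x = perm[x]
    L = loops - 1  # discard the loop through the closed 5'-3' ends
    return (P - L) // 2
```

With this helper, the H-pseudoknot with pairs $(1,3)$ and $(2,4)$ gets genus $1$, while the nested variant $(1,4)$, $(2,3)$ gets genus $0$.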
\begin{figure}[hbpt] \centering \framebox{ \includegraphics[width=0.60\textwidth]{gex.eps} \put(-291,215){g=0} \put(-268,150){g=1} \put(-210,113){g=2} \put(-120,65){g=3}} \caption{First few terms of the topological expansion of closed oriented surfaces: the term $g=0$ is a sphere, $g=1$ is a torus, $g=2$ is a double torus and so forth.} \label{gex} \end{figure} The genus $g$ of a diagram is the minimum number of handles a sphere must have in order to be able to draw the diagram on it without any crossing. The precise way to do so is unambiguously defined only when the diagram does not have open dangling lines, such as the $5'$ or $3'$ ends. Therefore it is important to connect the ends, as is done in the circular representation. However, it is more convenient to close the two ends {\it below} the backbone-line, which results in drawing the pairing arcs all on the exterior of the backbone-circle. In that way it is simple to see how the embedding of a pseudoknotted RNA structure on a high-genus surface works. Mathematically speaking, the circle of the RNA-backbone (when the $5'$ and $3'$ ends are connected) becomes the boundary of a hole or {\it puncture} on the surface, and the arcs corresponding to the RNA base-pairs are drawn on the surface without that hole. In fig.\ref{ge0}, we show explicit examples of diagrams having different genus. As can be seen, a diagram with genus 0 is planar, in that it can be drawn on the sphere without crossing, and corresponds to a secondary structure. More generally, it was shown in \cite{OZ} that the secondary structure diagrams are all the planar diagrams with $g=0$. Likewise, in fig.\ref{ge0} one sees also how diagrams with non-zero genus $g\neq0$ can be drawn without any crossing on a surface with $g$ handles.
\begin{figure}[hbpt] \centering \framebox{ \includegraphics[width=0.80\textwidth]{Picture2.eps} \put(-360,100){g=0},\put(-280,100){g=1},\put(-190,100){g=2},\put(-90,100){g=3}} \caption{Any RNA circular diagram can be drawn on a closed surface with a suitable number of ``handles'' (the genus). For the sake of simplicity, in this figure all helices and set of pairings on the surfaces are schematically identified only by their color. Note that the circle of the RNA-backbone (in green) topologically corresponds to a hole (or puncture) on the surface.} \label{ge0} \end{figure} Clearly, different diagrams can have the same genus. Thus, in order to further simplify the classification, we first note that adding a line of pairing parallel to an existing one does not change the genus of the diagram, since it increases by one the number of pairing lines, and increases by one the number of loops of the diagram. Therefore, all diagrams with parallel pairings are equivalent topologically. We will thus use a reduced representation of the diagrams, where each pairing line can be replaced by any number of parallel pairings as in fig.\ref{parallel}. \begin{figure}[hbpt] \centering \framebox{ \includegraphics[width=0.50\textwidth]{parallel.eps} } \put(-115,20){$\rightarrow$} \put(-115,75){$\rightarrow$}\put(-115,150){$\rightarrow$} \caption{The genus of a diagram does not change by identifying a stack of paired bases with a single {\it effective} base-pair.} \label{parallel} \end{figure} With this convention, it has been shown in \cite{poz} that there are exactly 8 topologies of pseudoknots of genus 1, see fig.\ref{genus1}. Those topologies can be uniquely identified also as a) ABAB, b) ABACBC, c) ABCABC, d) ABCADBCD, where each letter A,B, etc. indicates a specific helix (or set of helices) along the RNA-backbone from the $5'$ end to the $3'$ end. 
Note that one recognizes the standard H-pseudoknot (ABAB) and the kissing hairpin (ABACBC) (diagrams a) and b) on the left of fig.\ref{genus1}, respectively). Among the 8 pseudoknots of genus 1, four are quite common in the databases (the rows a) and b) of fig.\ref{genus1}), two are very rare (the row c) of fig.\ref{genus1}), and the remaining two have not been reported as of yet. We will discuss these pseudoknots in more detail in the next section. \begin{figure}[hbpt] \centering\framebox{ \includegraphics[width=0.70\textwidth]{genus1.eps} \put(-333,215){a)} \put(-333,153){b)} \put(-333,88){c)} \put(-333,15){d)} } \caption{These are the only 8 types of irreducible pseudoknots with genus $g=1$.} \label{genus1} \end{figure} Let us stress again that the genus captures the topological complexity of the pseudoknots. It is not simply related to the number of crossings, or of pairings. It depends on the intrinsic complexity of the pseudoknot. This complexity itself depends on what kind of pairings are considered. This is, of course, a matter of convention. Before discussing the statistics of the genus of pseudoknots from the databases, let us address this question. As discussed in \cite{West}, there are many possible non-canonical bonds between base-pairs. We emphasize that our classification of RNA structures according to their genus is well defined and possible even when including non-canonical bonds, or more general definitions of RNA-binding interactions (as long as such interactions are binary). The larger the number of pairings, the higher the genus of the structure might be. However, the weaker bonds, such as the Hoogsteen bonds, or even the wobble pairs, do not form the structure; they merely stabilize a structure already formed by canonical pairings. Therefore, in the following, we shall consider only Watson-Crick pairs between conjugate bases and G-U wobble pairs. \underline{Irreducibility and nesting}. In many cases, the genus of a diagram is an additive quantity.
For instance, if we consider a succession of two H-pseudoknots (see fig.\ref{irrnonirr}, left), each one has genus 1, and the total genus of the diagram is 2. In order to characterize the intrinsic complexity of a pseudoknot, it is thus desirable to define the notion of {\it irreducibility}. A diagram is said to be irreducible if it cannot be broken into two disconnected pieces by cutting a single line. The diagram on the left of fig.\ref{irrnonirr} is reducible, whereas the one on the right of fig.\ref{irrnonirr} is irreducible. Any diagram can thus be decomposed in a unique way into irreducible parts. It is obvious that the genus of a non-irreducible diagram is the sum of the genii of its irreducible components. \begin{figure}[hbpt] \centering \framebox{ \includegraphics[width=0.48\textwidth]{irr.eps} \put(-130,100){\vector(-1,-1){10}} \put(-100,100){\vector(1,-1){10}} } \framebox{\includegraphics[width=0.38\textwidth]{nonirr.eps} } \caption{Example of a reducible pseudoknot (left) and an irreducible one (right). The reducible pseudoknot can be split in two disconnected parts, as shown, by cutting the backbone only once. The total genus is the sum of the genii of the two components (in this example the total genus is 2).} \label{irrnonirr} \end{figure} Similarly, if one considers the diagram of fig.\ref{nestedandnot} (left), its genus is equal to 2. It is composed of an H pseudoknot, embedded inside another H pseudoknot. A diagram is said to be embedded or {\it nested} in another if it can be removed by cutting two lines while the rest of the diagram stays connected in a single component. The diagram on the left of fig.\ref{nestedandnot} is nested, whereas the one on the right is not. It is clear that the genus of a nested diagram is the sum of the genii of its nested components.
As a result, to any non-nested diagram of genus $g$ there corresponds a nested diagram of the same genus, obtained by adding a pairing line between the first base and the last base of the diagram. For instance, the 8 diagrams of genus 1 in fig.\ref{genus1} can be decomposed in 4 non-nested diagrams (left column) and 4 nested diagrams (right column). Therefore, there are only 4 irreducible non-nested diagrams (a,b,c,d) of genus 1. As we shall see in the next section, pseudoknots (a) and (b) are quite common, pseudoknot (c) has been seen but is rare, and pseudoknot (d) has not yet been seen. In the following, a pseudoknot which is irreducible and non-nested is said to be {\it primitive}. Clearly, all RNA structures can be constructed from primitive pseudoknots. The primitive diagram for secondary structures is obviously a single pairing. \begin{figure}[hbpt] \centering \framebox{ \includegraphics[width=0.40\textwidth]{nested.eps} \put(-135,75){\vector(1,-1){10}} } \framebox{ \includegraphics[width=0.40\textwidth]{unnested.eps} } \caption{An example of a nested diagram (left) and a non-nested one (right). A nested diagram can be disconnected in two components by cutting the backbone at two points.} \label{nestedandnot} \end{figure} \section*{Results and Discussion} \underline{Analysis of databases}. There are several databases containing RNA structures. We have analyzed two of them, namely Pseudobase \cite{Pseudobase} and the wwPDB \cite{PDB} (modulo the RNAbase database \cite{RNAbase}). \subsection*{Pseudobase} \noindent Pseudobase is a database containing 246 pseudoknots at the time of writing this work. These pseudoknots have been deposited and validated by several research groups. They are subsegments of larger RNA sequences, and are displayed in bracket form using several symbols (see fig.\ref{bracket}).
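Bracket strings of this kind are straightforward to convert into a list of base pairs. The Python sketch below (a hypothetical helper, not part of Pseudobase) matches each bracket family with its own stack, so crossings between different families, i.e. pseudoknots, are preserved.

```python
BRACKETS = {'(': ')', '[': ']', '{': '}', '<': '>'}

def parse_brackets(s):
    """Convert a multi-symbol bracket string into a sorted list of base pairs.

    Positions are 1-based.  Any character that is not a bracket (e.g. ':'
    or '.') marks an unpaired base.  Each bracket family is matched with
    its own stack, independently of the others, which is exactly what
    allows crossing pairs, i.e. pseudoknots.  Assumes well-formed input.
    """
    stacks = {opener: [] for opener in BRACKETS}
    closer = {c: o for o, c in BRACKETS.items()}
    pairs = []
    for pos, ch in enumerate(s, start=1):
        if ch in stacks:
            stacks[ch].append(pos)       # opening bracket: remember position
        elif ch in closer:
            pairs.append((stacks[closer[ch]].pop(), pos))  # close innermost
    return sorted(pairs)
```

For instance, the string "((::[[::))::]]" yields the crossing pairs of a (doubled) H-pseudoknot, while "((::))" yields two nested pairs.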
As an example, we show below one of the pseudoknots from Pseudobase (accession number PKB210)
\begin{verbatim}
CGCUGCACUGAUCUGUCCUUGGGUCAGGCGGGGGAAGGCAACUUCCCAGGGGGCAACCCCGAACCGCAGCAGCGAC
((((((::(((:::[[[[[[[::))):((((((((((::::)))))):((((::::)))):::)))):))))))::
AUUCACAAGGAA
:::::]]]]]]]
\end{verbatim}
\begin{figure}[hbpt] \centering \includegraphics[width=0.48\textwidth]{bracket2.eps} \caption{The {\it bracket} notation is commonly used for representing RNA secondary structures with simple pseudoknots. One stem of the pseudoknot is represented by parentheses, and the other stem by brackets. A dot ``.'' indicates a free base.} \label{bracket} \end{figure} A simple analysis shows that this is an H pseudoknot, of the type ABAB. Likewise, we analyzed all the 246 pseudoknots of Pseudobase and found that: \begin{itemize} \item there are 238 H pseudoknots (or nested H pseudoknots) of the ABAB type with genus 1 \item there are 6 kissing hairpin pseudoknots (or nested) of the ABACBC type with genus 1 \item there is 1 pseudoknot of the type ABCABC (number PKB71) with genus 1 \item there is 1 pseudoknot of the type ABCDCADB (number PKB75) with genus 2 \end{itemize} Note that the pseudoknot PKB71, from the regulatory region of the alpha ribosomal protein operon (from \emph{E.~coli}), is the unique example of the ABCABC pseudoknot in Pseudobase. Its structure is \cite{Pseudobase,pkb71}:
\begin{verbatim}
UGUGCGUUUCCAUUUGAGUAUCCUGAAAACGGGCUUUUCAGCAUGGAACGUACAUAUUAAAUAGUAGGAGUGC
(((((((:(((((::::::::[[[[::::[[[[::::{{{{:))))))))))))):::::::::::::::::
AUAGUGGCCCGUAUAGCAGGCAUUAACAUUCCUGA
:::::::]]]]:::::]]]]:::::::::::}}}}
\end{verbatim}
Its irreducible structure is given in figure \ref{genus1} (third from the top, on the left). However, looking at sequence alignments, it is very likely that at least 20 other RNA sequences in the EMBL database \cite{embl} contain pseudoknots of this kind (A. Mondrag\'on, A. Torres-Larios and K.K.
Swinger, Department of Biochemistry, Molecular Biology and Cell Biology, Northwestern University, Evanston, IL: {\it private communication}). \subsection*{The wwPDB databank} The worldwide Protein Data Bank (wwPDB) is a collection of databases comprising mostly crystallographic and NMR structures of proteins \cite{PDB}. In addition, as of today, it contains approximately 850 structures containing at least one RNA molecule. Among these structures, there are about 300 single RNA structures, 200 containing several RNA fragments, 30 RNA/DNA complexes, 250 RNA/protein complexes and 60 transfer RNAs. Among these 850 structures, there are about 650 structures which obviously have genus 0 (very short sequences, or single- or double-stranded RNA helices). The number of bases ranges from 22 (2g1w.pdb), an H pseudoknot, to 2999 (chain 3 of 1s1i.pdb), which has genus 15. We have analyzed the remaining 200 structures according to the following scheme: \begin{itemize} \item removal of non-RNA molecules and extraction of the molecule of interest \item search for all pairings using the program RNAview \item selection of relevant pairings (Watson-Crick and G-U wobble) \item computation of the genus of the corresponding diagram \end{itemize} Our results can be summarized in the following way: \begin{itemize} \item Transfer RNAs, which are among the smallest RNAs (length of 78), are made of a single primitive pseudoknot (irreducible and non-nested) of genus 1 (a kissing-hairpin) nested inside an arch (see fig.\ref{trna}) \begin{figure}[hbpt] \centering \framebox{ \includegraphics[width=0.30\textwidth]{1evv.eps} \includegraphics[width=0.40\textwidth]{tRNAJmol.eps} \put(-23,80){\tiny Hairpin} \put(-20,175){\tiny Hairpin} \put(-20,140){\tiny Kissing} } \caption{A typical tRNA (PDB 1evv, \cite{pdb_1evv}).
It has genus 1, that of a kissing hairpin pseudoknot.} \label{trna} \end{figure} \item Larger RNAs, such as ribosomal RNA 50s subunits (length larger than 2000), have total genii less than 18. For an RNA with a non-designed random sequence of length $L$ and without steric constraints, the typical genus should be $L/4$ \cite{voz_prl}, which in the present case would be around 500. Even by including steric constraints \cite{vroz}, the genus would be around $2000\times0.14 \simeq 280$. In addition, if we analyze these sequences in terms of primitive pseudoknots, we find that most of the structures are built from very simple primitive blocks, with genii 1 or 2, nested inside a more complex pseudoknot, of genus smaller than 8. In fig.\ref{genre1}, we show an RNA of genus 7 and of length 2825 (the B chain of 1vou.pdb \cite{1vou_1vp0}) made of 3 H-pseudoknots and 3 kissing hairpins, nested inside a large kissing hairpin. In fig.\ref{genre4}, we display an RNA of genus 9 and of length 2825 (the B chain of 1vp0.pdb, of the 50s subunit of \emph{E.~coli} \cite{1vou_1vp0}), which is made of 3 H-pseudoknots and 2 kissing hairpins, nested in a primitive pseudoknot of genus 4. \begin{figure}[hbpt] \centering \framebox{ \includegraphics[width=0.44\textwidth]{1voubis.eps} } \framebox{ \includegraphics[width=0.44\textwidth]{1vou_explained.eps} } \caption{The B chain of PDB 1vou is an RNA of genus 7 and of length 2825 bases. On the right, the outermost primitive arc structure is the pseudoknot type b) of the second column in fig.\ref{genus1}, which has genus 1.
Such a primitive structure is decorated by 6 additional simple pseudoknots of type $H$ and $K$ (type a) and b) in the first column of fig.\ref{genus1}, respectively).} \label{genre1} \end{figure} \begin{figure}[hbpt] \centering \framebox{ \includegraphics[width=0.45\textwidth]{1vp0bis.eps} } \framebox{ \includegraphics[width=0.45\textwidth]{1vp0_explained.eps} \put(-95,70){\vector(0,-1){10}} } \caption{The B chain of PDB 1vp0 is an RNA of genus 9 and of length 2825 bases. The outermost primitive structure is similar to the one of fig.\ref{genre1}, with a more complex decoration on the right-hand part. There, a complex pseudoknot with genus 4 is included. Five simple $H$ and $K$ pseudoknots complete the full decoration.} \label{genre4} \end{figure} \item There is no hierarchical nesting of the pseudoknots: the general structure observed in all RNAs of the PDB is that of several low-genus primitive pseudoknots in series, nested inside a possibly higher-genus ``scaffold pseudoknot''. We show in fig.\ref{genre1} one example of decomposition of a structure (1vou.pdb, a 50s subunit of \emph{E.~coli}). \item In fig.\ref{distr1} (left), we plot the distribution of genii as a function of the length of the RNA. As mentioned before, the genii are much lower than what is expected for random sequences, and this is a manifestation of the specific design of RNA. \item In fig.\ref{distr1} (right), we plot a histogram of the statistics of primitive pseudoknots in the PDB. We see that the genus of primitive pseudoknots is small, typically 1 or 2, and that the probability of observing large genii is very small. This reflects the fact that complex pseudoknots are built from many small primitive pseudoknots with low genii.
\begin{figure}[hbpt] \centering \framebox{ \includegraphics[width=0.48\textwidth]{distr1.eps} \put(-228,130){$g$} \put(0,-7){$L$} \quad \includegraphics[width=0.48\textwidth]{distr2.eps} \put(-239,132){$f$} \put(-5,-5){$L$}} \caption{On the left: total genus as a function of the number of bases in the RNA molecule. The interpolating dashed line emphasizes an overall linear behavior. On the right: histogram distribution of the number $n$ of primitive pseudoknots as a function of their genus $g$ for all RNA molecules in the wwPDB database.} \label{distr1} \end{figure} \end{itemize} We conclude by reporting in table \ref{finaltable} the sorted list of all the PDB files with non-zero genus, according to our classification. Note that our statistical analysis is biased by the inherent bias of the PDB: the PDB sometimes contains many structures of the same molecule, and thus those utilized for the statistical analysis are not independent. \section*{Conclusion} We have shown that RNA structures can be characterized by a topological number, namely their genus. This genus is 0 for secondary structures (planar structures), and non-zero for pseudoknots. We have shown how the complexity of the RNA structure can be analyzed in terms of so-called ``primitive pseudoknots''. Any complex RNA structure can be uniquely decomposed into a sequence of primitive pseudoknots, concatenated in series and nested. A survey of the existing RNA structures shows that even for large RNAs ($\approx$ 3 kb), the genus remains small (smaller than 18), and natural RNAs have a genus which is much smaller than that of paired structures obtained from random sequences. By capturing the intrinsic complexity of the structure, the genus provides a natural and powerful classification of RNA. Finally, a statistical study shows that complex RNA structures are built from primitive pseudoknots of low genus (genus 0, 1 or 2), and that the most complex primitive pseudoknots have genus 13.
In a forthcoming work, we will show how this concept of genus can be utilized to actually predict the folded structure of RNA molecules. \begin{table} \scriptsize \centering \begin{tabular}{|c|l|} \hline total genus & PDB file accession number \\ \hline \hline 1 & 1b23, 1c0a, 1e8O, 1ehz, 1eiy, 1euq, 1euy, 1f7u, 1f7v, 1fcw, 1ffy, 1fir, 1g59, 1gix-B, 1gix-C, 1grz, \\ & 1gtr, 1i9v, 1il2, 1j1u, 1jgo-D, 1jgp-D, 1jgq-D, 1kpd, 1kpy, 1kpz, 1l2x, 1l3d, 1mj1, 1mzp, 1n77, \\ & 1o0b, 1o0c, 1qf6, 1qrs, 1qrt, 1qru, 1qtq, 1qu2, 1qu3, 1ser, 1sz1, 1tn2, 1tra, 1ttt, 1u6b-B, 1x8w,\\ & 1yfg, 1yg3, 1ymo, 1zzn-B, 2a43, 2a64, 2csx, 2fk6, 2g1w, 2tpk, 2tra, 437d, 4tna, 4tra, 6tna,\\ & 1asy-R, 1asy-S, 1asz-R, 1asz-S \\ \hline 2 & 1cx0, 1ddy, 1drz, 1et4, 1exd, 1ffz, 1fg0, 1fka, 1pnx, 1sj3, 1sj4, 1sjf, 1u8d, 1vbx, 1vby, 1vbz, \\ & 1vc0, 1vc5, 1vc6, 1vc7, 1y0q, 1y26, 1y27, 1yoq, 2a2e \\ \hline 3 & 1i97, 1n34, 1s1h, 1voz, 1yl4-A \\ \hline 4 & 1ibm, 1fjg, 1hnw, 1hnx, 1hnz, 1hr0, 1i95, 1ibk, 1ibl, 1n32, 1n33, 1q86, 1vov, 1vox, 1xmo, \\ & 1xmq-A, 1xnr, 1j5e\\ \hline 5 & 1i94, 1i96, 1n36, 1voq, 1vos, 1xnq, 2avy, 2aw7, 2aw7-A \\ \hline 6 & 1pns, 1voy-B\\ \hline 7 & 1c2w, 1vou-B, 1yl3-A \\ \hline 8 & 1ffk-0, 1vow-B \\ \hline 9 & 1vp0-B, 2aw4-B\\ \hline 10 & 1njm, 1njn, 1njo, 1njp, 2awb-B \\ \hline 11 & 1k01, 1p9x, 1pnu, 1pny \\ \hline 12 & 1j5a, 1jzx, 1jzy, 1jzz, 1nwx, 1nwy-0, 1sm1-0, 1xbp-0, 1y69-0 \\ \hline 13 & 1nkw-0, 1ond, 2d3o \\ \hline 14 & 1jj2, 1k73, 1k8a-A, 1k9m-A, 1kc8, 1kd1, 1kqs-0, 1m1k, 1m90, 1n8r, 1nji, 1q7y, 1q82, 1qvf-0,\\ & 1s72, 1vq4-0, 1vq5-0, 1vq7-0, 1vq8-0, 1vq9-0, 1vqk, 1vql, 1vql-0, 1vqm, 1vqn, 1vqo-0, 1vqp-0, \\ & 1yhq-0, 1yi2-0, 1yij-0, 1yit-0, 1yj9-0, 1yjn-0, 1yjw-0, 2aar \\ \hline 15 & 1q81, 1qvg, 1s1i-3, 1vq6-0 \\ \hline 16 & - \\ \hline 17 & 2aw4-B \\ \hline 18 & 2awb-B \\ \hline \end{tabular} \caption{List of the PDB files we considered in this paper, according to their total genus.
The notation $xxxx-y$ indicates the chain number $y$ in the PDB file accession number $xxxx$.} \label{finaltable} \end{table} \section*{Acknowledgements} \noindent This work was supported in part by the National Science Foundation under Grant No. PHY 99-07949 and Grant No. DMR 04-14446, and by the European program MEIF-CT-2003-501547. G.V. acknowledges Professor Monica Olvera de la Cruz (Northwestern University) for support and stimulating discussions.
\section{Introduction} How much influence does the personality of a \gls{ceo} have on their company's performance? The personal news and antics of famous \glspl{ceo} like Elon Musk, Jeff Bezos, or Bill Gates make headlines, and their personalities sometimes generate a cult-like following. But what measurable effect do they really have? The \textit{upper echelons theory} \cite{Hambrick1984} suggests that the personalities of \glspl{ceo} are also reflected in the organizational outcomes of their companies. However, presumably due to the lack of labeled data, no supervised models exist to detect \glspl{ceo}' personalities from text and infer their effect on the financial performance of companies. In this paper, we close this research gap by presenting the first Transformer-based model to predict the impact of \glspl{ceo}' \gls{mbti} personality on financial risk. Ideally, personality is assessed with self-reported questionnaires. However, it is technically infeasible to request executives such as Elon Musk to fill out targeted pen-and-paper questionnaires. We were therefore motivated to explore crowd-sourced data. This approach is supported by past research showing that observer reports are an inexpensive and valid alternative to self-reports \citep{VAZIRE2006472}, as they usually agree with them \citep{doi:10.1177/0956797618810000}, and are particularly suitable for the assessment of top management personality \citep{https://doi.org/10.1111/j.1468-2389.2007.00371.x}. The dominant personality model is the Big 5, which represents personality on a continuum along the dimensions \textit{openness}, \textit{conscientiousness}, \textit{extraversion}, \textit{agreeableness}, and \textit{neuroticism} \cite{McCrae1992}. The available data source we use lacks Big 5 ratings, so, as a proxy, we explore the \gls{mbti} \citep{Briggs-Myers1995}, which has been shown to correlate along the main dimensions with the Big 5 \citep{McCrae1989, Furnham1996, Furnham2003}.
This model represents personality via the categories \textit{extraversion--introversion}, \textit{sensing--intuition}, \textit{thinking--feeling}, and \textit{judging--perceiving}. Addressing methodological criticism of the \gls{mbti} \cite{McCrae1989}, we \begin{itemize} \item explore an alternative \gls{mbti} representation as a vector of continuous values (\S\ref{data:pers}); \item find a high internal and external validity of this measure (\S\ref{met:iaa}); \item show that it can be predicted from text (\S\ref{res:pers_pred}); \item and demonstrate that it is predictive of financial risk (\S\ref{res:risk}). \end{itemize} Overall, our findings lend empirical support to the \textit{upper echelons theory} of management. \section{Background and Related Work} Various personality measures exist in the literature. This section describes the personality model we explore (\gls{mbti}), the de-facto standard model (Big 5), and approaches to predict both representations of personality from text. \subsection{MBTI} The \gls{mbti} is named after Katherine Cook Briggs and Isabel Briggs Myers. They developed it based on the work of the analytical psychologist Carl Jung \cite{Briggs-Myers1995}. The \gls{mbti} classifies personality as a binary choice along each of the following axes: \begin{itemize} \item \textit{extraversion} vs.\ \textit{introversion} (E--I): describing an outward- or inward-oriented social attention; \item \textit{sensing} vs.\ \textit{intuition} (S--N): information processing based on perceivable/known facts or conceptualization and imagination; \item \textit{thinking} vs.\ \textit{feeling} (T--F): decision-making based on logic and rationality or emotions and empathy; \item \textit{judging} vs.\ \textit{perceiving} (J--P): quick judgement and organized action or observation and improvisation on-the-go. \end{itemize} Combined, the four labels form one of 16 personality types (e.g., ``ENTJ'').
The \gls{mbti} is widely used in human resources management and by laypeople as a tool for self-exploration. Psychological literature, however, has called assumptions of the \gls{mbti} into question. For example, \newcite{McCrae1989} find no evidence that personality can be binarized or distinguished into 16 different types. In addition, they find moderate to strong correlations between the \gls{mbti} and the Big 5 \cite{Mccrae2010}, which is described in greater detail below (\S\ref{big5}). We re-assess these correlations in our dataset and explore a continuous representation of the \gls{mbti} in line with the Big 5. \paragraph{MBTI Prediction from Text} In a literature study on text-based personality detection and a subsequent annotation study, \newcite{Stajner2020, Stajner2021} conclude that predicting the \gls{mbti} from textual data is a difficult task. They hypothesize that this is due to the theoretical and qualitative origin of the index, which distinguishes it from the empirical and quantitative Big 5. In particular, the dimensions \textit{sensing} vs.\ \textit{intuition} (S--N) and \textit{judging} vs.\ \textit{perceiving} (J--P) depend on behavioral rather than linguistic signals \citep[p. 6291]{Stajner2020}. In a field survey of project managers, \newcite{Cohen2013} show that managers are significantly more often of the \textit{intuitive} (N) and \textit{thinking} (T) type than the general population. We observe a similar pattern in our dataset (\S\ref{data:pers}, Figure \ref{fig:label_dist}). Classifying the \gls{mbti} of Twitter users based on count-based features, gender, and tweet $n$-grams, \newcite{Plank2015} outperform a majority class baseline for the E--I and the T--F dimensions. \newcite{Gjurkovic2018} predict the self-reported \gls{mbti} of Redditors with \gls{svm} and \gls{mlp} models based on linguistic and activity-level features.
Their model outperforms a majority class baseline across all dimensions with the best results for E--I, followed by S--N, J--P, and T--F. We compare the best-performing approaches identified by prior \gls{mbti} prediction studies ($n$-grams and \gls{liwc} dictionaries with \glspl{svm} and \glspl{mlp}) to Transformer architectures. Furthermore, we consider a different domain (spoken financial disclosures) and perform a regression instead of a classification. \subsection{Big 5} \label{big5} The Big 5 is the established psychometric model. Here, personality is represented as a continuum along the five axes \textit{openness}, \textit{conscientiousness}, \textit{extraversion}, \textit{agreeableness}, and \textit{neuroticism} \cite{McCrae1992}. \paragraph{Big 5 Prediction from Text} As part of the \textit{myPersonality} project, \newcite{Kosinski2015} find that liked Facebook pages predict Big 5, IQ, and other personal characteristics to varying degrees. \newcite{Mairesse2007} create a text-based Big 5 prediction tool based on student essays and speech recordings. \newcite{https://doi.org/10.1002/smj.2974} show that CEOs' Big 5 personalities moderate the relationship between CEO compensation and risk-taking. \newcite{Hrazdil2020} use \textsc{IBM Watson Personality Insight} to predict the Big 5 of C-level executives in earnings calls and find that an executive's personality is associated with their risk tolerance and company audit fees. \newcite{doi:10.5465/amj.2018.0626} find that CEO Big 5 traits are related to perceived firm risk and shareholder value. Another finding is that CEO \textit{conscientiousness} moderates the effect of financial risk on returns positively, while the opposite holds for \textit{extraversion} and \textit{neuroticism}. In contrast to these approaches, we focus on the \gls{mbti} rather than the Big 5.
We create the first supervised model to predict \glspl{ceo}' \gls{mbti} personality from text by collecting a new dataset of crowd-annotated \gls{mbti} profiles. This sets us apart from prior work using unsupervised approaches trained on out-of-domain corpora. \section{Personality Prediction} Using transcribed speech data as an input, we predict the \gls{mbti} personality of \glspl{ceo} via text regression. The following sheds light on the dataset collection and validation, methodology, and results. \subsection{Dataset Curation} \label{data:pers_pred} For this task, we collect data from two sources: (1) text data and (2) crowd-sourced personality data. \paragraph{Text Data} We obtain 88K earnings call transcripts spanning years 2002--2020 from \textsc{Refinitiv Eikon}.\footnote{\url{https://eikon.thomsonreuters.com/index.html}} Earnings calls are quarterly teleconferences consisting of a scripted presentation and a spontaneous \gls{qa} session, in which \glspl{ceo} such as Elon Musk answer open questions of banking analysts. Due to the improvised nature of these answers, earnings calls are particularly suitable for detecting personal style \citep{doi:10.1177/0001839217712240}. Figure \ref{fig:earnings_call} shows an excerpt of Tesla's Q1 earnings call in 2020. \begin{figure}[!t] \begin{dialogue} \footnotesize \speak{Elon Musk (CEO)} Thank you. So Q1 ended up being a strong quarter despite many challenges in the final few weeks. This is the first time we have achieved positive GAAP net income in a seasonally weak first quarter. Even with all the challenges, we achieved a 20\% automotive gross margin, excluding regulatory credits, while ramping 2 major products. What we've learned from this is that\textemdash we've obviously learned a lot here. 
\end{dialogue} \caption{Excerpt of Tesla's Q1 2020 earnings call.} \label{fig:earnings_call} \end{figure} Given the dialogue nature of the calls, we need to map utterances to individual \glspl{ceo} as we are not interested in the personality of the analysts. We identify \gls{ceo} names with regular expressions and minimal preprocessing (e.g., stripping middle name initials or titles). Next, we require a match with the executive database \textsc{Compustat Execucomp} for age and gender data (\S\ref{met:risk}),\footnote{\url{https://wrds-www.wharton.upenn.edu}} reducing our initial sample to 22K calls and 1.7K \glspl{ceo}. For these, we retrieve all of their utterances in the presentation and the \gls{qa} session of the calls. \paragraph{Personality Data} \label{data:pers} We obtain \gls{mbti} personality labels for the \glspl{ceo} from \textsc{Personality Database},\footnote{\url{https://www.personality-database.com/}} which provides crowd-sourced personality profiles for celebrities, managers, and other noteworthy people. While each profile features vote results for the four dimensions of the \gls{mbti}, a minority also contains results for the Big 5. We find that 32 \glspl{ceo} (e.g., Elon Musk and Steve Jobs) from our earnings call sample have at least three \gls{mbti} votes available. The minimum, maximum, and mean votes per \gls{ceo} are 3, 1.8K, and 140, respectively. These \glspl{ceo} participate in a total of 736 earnings calls. Table \ref{tab:stats} gives the descriptive statistics of the merged text--personality data, and Table \ref{tab:mbti} contains example \glspl{ceo} from our dataset across the \gls{mbti}. 
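The name matching described above can be sketched with a couple of regular expressions; the patterns and function name are our own illustration, not the exact preprocessing used:

```python
import re

def normalize_ceo_name(raw):
    """Strip common titles and middle-name initials so that a transcript
    speaker label can be matched against an executive database such as
    Execucomp (illustrative only, not the paper's exact pipeline)."""
    name = re.sub(r"\b(?:Dr|Mr|Ms|Mrs)\.?\s+", "", raw)  # titles
    name = re.sub(r"\b[A-Z]\.\s*", "", name)             # initials like "R."
    return " ".join(name.split())
```

In practice, such normalized names would be joined against the executive database to retain only calls where the speaker can be identified unambiguously.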
\begin{table*}[!t] \centering \footnotesize \begin{tabular}{ll} \toprule MBTI & CEO Examples\\ \midrule Extraversion & Steve Jobs (Apple), Lisa Su (AMD), Mary Barra (General Motors) \\ Introversion & Rupert Murdoch (Fox), Mark Zuckerberg (Facebook), Sheldon Adelson (Las Vegas Sands) \\ \midrule Sensing & Jack Dorsey (Twitter), John Schnatter (Papa John's), Marcus Lemonis (Camping World)\\ Intuition & Marissa Mayer (Yahoo), Bob Iger (Disney), Evan Spiegel (Snap) \\ \midrule Thinking & Elon Musk (Tesla), Tim Cook (Apple), Steve Ballmer (Microsoft)\\ Feeling & Sundar Pichai (Google), Howard Schultz (Starbucks), Naveen Jain (Infospace) \\ \midrule Judging & Jeff Bezos (Amazon), Larry Ellison (Oracle), Martha Stewart (Martha Stewart Living)\\ Perceiving & Larry Page (Alphabet), Martin Shkreli (Retrophin), Donald Trump (Trump Entertainment)\\ \bottomrule \end{tabular} \caption{CEO examples for each MBTI dimension from our dataset.} \label{tab:mbti} \end{table*} \begin{table}[!t] \centering \footnotesize \begin{adjustbox}{max width=\linewidth} \begin{tabular}{lS[table-format=7.0, round-mode=off]S[table-format=4.2, round-mode=places, round-precision=2]S[table-format=2.0, round-mode=off]S[table-format=4.0, round-mode=off]} \toprule Unit & $\Sigma_{x}$ & $\bar{x}$ & $\textrm{min}_{x}$ & $\textrm{max}_{x}$ \\ \midrule utterances & 13183 & 17.911684782608695 & 2 & 124\\ sentences & 111781 & 151.8763586956522 & 2 & 563 \\ tokens & 2526473 & 3432.7078804347825 & 22 & 9968\\ \bottomrule \end{tabular} \end{adjustbox} \caption{Statistics of the \gls{ceo}--call data considered for the personality prediction. Sums ($\Sigma_{x}$), averages ($\bar{x}$), minima ($\textrm{min}_{x}$), and maxima ($\textrm{max}_{x}$) are computed across all earnings calls ($n = 736$).} \label{tab:stats} \end{table} Instead of representing each personality as one of 16 types, we represent each personality profile as a vector of 4 continuous variables ranging from 0 to 1, based on the crowd-sourced votes. 
We normalize the votes for the right-hand side of a scale $s$ by the total votes: \begin{equation} \mathrm{personality}_{s} = \frac{\mathrm{votes}_{1, s}}{\mathrm{votes}_{0, s} + \mathrm{votes}_{1, s}}. \end{equation} For example, for the E--I scale, we divide the votes for introversion (I) by the total votes for E and I. The resulting number is thus the likelihood of the \gls{ceo} being introverted rather than extraverted. This representation is similar to the Big 5 model (excluding the \textit{neuroticism} dimension) and allows for a more granular representation of personality than the usual operationalization of the \gls{mbti}. Figure \ref{fig:label_dist} shows the distributions of the continuous labels obtained in this way. Most CEOs in our sample are rather \textit{extraverted}, \textit{intuitive}, \textit{thinking}, and \textit{judging} (Figure \ref{fig:label_dist}), which corresponds to the ENTJ ``Decisive Strategist'' \gls{mbti} type.\footnote{\url{https://eu.themyersbriggs.com/en/tools/MBTI/MBTI-personality-Types/ENTJ}} \paragraph{Internal Validation} \label{met:iaa} To assess the validity of the crowd-sourced votes, we analyze the inter-annotator agreement between the \gls{mbti} raters of the 32 \glspl{ceo} (Table \ref{tab:agreement}). While $p_{a}$ is high with values ranging between ca.\ 80 and 90\%, Krippendorff's $\alpha$ \cite{Krippendorff2013} yields only slight to moderate values between 0.14 and 0.43. \newcite{frequency_distribution} call this phenomenon the ``frequency distribution paradox,'' where highly skewed label distributions combined with high percentage agreements can lead to low values of $\alpha$. As measures robust to this undesirable property, they suggest the Brennan--Prediger coefficient $\kappa_{\textrm{bp}}$ \cite{brennan-prediger} and Gwet's $\gamma$ \cite{Gwet2008}, which in our case yield a high IAA between 0.60 and 0.88.
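Both the continuous labels and the chance-robust agreement coefficient are straightforward to compute; a sketch in Python (function names are ours; for two categories the Brennan--Prediger coefficient reduces to $(p_a - 1/2)/(1/2)$):

```python
def mbti_scores(votes):
    """Continuous MBTI profile from crowd votes: per scale, the share of
    votes for the right-hand pole (e.g. I on the E--I scale).

    votes: dict like {"E--I": (left_votes, right_votes), ...}"""
    return {s: right / (left + right) for s, (left, right) in votes.items()}

def brennan_prediger(p_a, q=2):
    """Brennan--Prediger coefficient for q categories: chance agreement
    is fixed at 1/q, making the measure robust to skewed labels."""
    return (p_a - 1.0 / q) / (1.0 - 1.0 / q)
```

With the percentage agreements of Table \ref{tab:agreement}, for instance, a $p_a$ of about 0.8745 for E--I yields a coefficient of about 0.75, matching the reported $\kappa_{\textrm{bp}}$.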
\begin{figure}[!t] \centering \includegraphics[width=\linewidth]{gfx/label_dist.pdf} \caption{Label distributions for all \glspl{ceo} considered in the personality prediction ($n = 32$) across the \gls{mbti} dimensions \textit{extraversion--introversion} (E--I), \textit{sensing--intuition} (S--N), \textit{thinking--feeling} (T--F), and \textit{judging--perceiving} (J--P).} \label{fig:label_dist} \end{figure} \begin{table}[!t] \centering \footnotesize \begin{tabular}{lS[round-mode=places, round-precision=2]S[round-mode=places, round-precision=2]S[round-mode=places, round-precision=2]S[round-mode=places, round-precision=2]} \toprule MBTI & $p_{a}$ & $\alpha$ & $\kappa_{\textrm{bp}}$ & $\gamma$\\ \midrule E--I & 87.45383 & 0.39865 & 0.74908 & 0.75633\\ S--N & 80.2042 & 0.42543 & 0.60408 & 0.6197 \\ T--F & 83.33429 & 0.13636 & 0.66669 & 0.7103 \\ J--P & 90.6237 & 0.17012 & 0.81247 & 0.88128\\ \bottomrule \end{tabular} \caption{\gls{iaa} per \gls{mbti} dimension in terms of percentage agreement ($p_a$), Krippendorff's $\alpha$, Brennan--Prediger coefficient ($\kappa_{\textrm{bp}}$), and Gwet's $\gamma$.} \label{tab:agreement} \end{table} \paragraph{External Validation} \label{met:corr} \begin{figure} \centering \includegraphics[width=\linewidth]{gfx/correl_matrix.pdf} \caption{Correlation of MBTI (y-axis) and Big 5 (x-axis) scales for all profiles on the \textsc{Personality Database} with at least three votes ($n = 2.2$K).} \label{fig:corr} \end{figure} To get a notion of external validity, we construct a correlation matrix between the crowd-based \gls{mbti} and Big 5 votes of \textit{all} 2.2K profiles with at least three votes available on \textsc{Personality Database} (Figure \ref{fig:corr}).
According to \newcite{McCrae1989} and subsequent work \cite{Furnham1996, Furnham2003}, strong correlations should exist between MBTI \textit{introversion} and Big 5 \textit{extraversion} ($r = -0.74$) as well as between MBTI \textit{intuition} and Big 5 \textit{openness} ($r = 0.72$). Furthermore, moderate correlations should exist between MBTI \textit{feeling} and Big 5 \textit{agreeableness} ($r = 0.44$) and between MBTI \textit{perceiving} and Big 5 \textit{conscientiousness} ($r = -0.49$). Our results confirm the findings of \newcite{McCrae1989} with similar correlations in the first two rows and stronger correlations in the third and fourth rows. This is most likely due to our increased sample size ($n = 2.2$K vs.\ $n=267$). \subsection{Methodology} \label{met:pers_pred} For each of the 32 \glspl{ceo} appearing in 736 CEO--call instances, we compare sparse approaches suggested by past literature to Transformer architectures for a regression of \gls{mbti} personality.\footnote{The supplementary material contains our implementation and the earnings call identifiers. Using those, our corpus can be re-assembled from \textsc{Refinitiv Eikon}, \textsc{Seeking Alpha}, or alternative sources.} \paragraph{Data Split} We apply an 80:10:10 split to our data to obtain separate training ($n = 568$), validation ($n = 84$), and test sets ($n = 84$). To avoid overfitting, we use \texttt{sklearn}'s \texttt{GroupShuffleSplit} with the \gls{ceo} names as group splitting criterion, i.e., we split the data such that no \gls{ceo} present in the training data appears in the validation or test data. \paragraph{Normalization} Given the highly skewed distributions, after the train--validation--test split, we apply a Box-Cox transformation \cite{Box1964} to $y$ with the following formula: \begin{equation} y(\lambda) = \begin{cases} \frac{y^{\lambda} -1}{\lambda} & \text{for } \lambda \neq 0, \\ \ln(y) & \text{for } \lambda = 0. 
\end{cases} \end{equation} We obtain $\lambda$ via maximum-likelihood estimation. The resulting transformation makes the four label distributions more Gaussian-like by stabilizing variance. \paragraph{Transformers} We explore cased-vocabulary BERT\textsubscript{base} (12-layer, 768-hidden, 12-heads, 109M parameters) \cite{Devlin2019} and RoBERTa\textsubscript{base} (12-layer, 768-hidden, 12-heads, 125M parameters) \cite{liu2019roberta} models with a linear regression head. The models are trained with a maximum sequence length of 512 and a sliding window approach. We determine the training batch size and learning rate by running a Bayesian optimization over the grid of batch sizes $b \in \{32, 64, 128, 256\}$ and learning rates $l \in [0, \num{5e-5}]$.\footnote{Final hyperparameter choices and results on our validation set can be found in Appendices \ref{app:params} and \ref{app:valid}.} We train each model for up to 10 epochs with early stopping and a patience of one epoch. For each of the four \gls{mbti} dimensions, we evaluate 40 combinations of hyperparameters and select the model with minimal loss on the validation set. Unlike the \gls{mse} loss, which is implemented by default in the \kern1pt\scalerel*{\includegraphics{gfx/huggingface.pdf}}{\textrm{\textbigcircle}}\kern3pt Transformers \cite{wolf-etal-2020-transformers} regressors, we minimize the L1 loss (also called the \gls{mae} loss), which is less sensitive to outliers. \paragraph{Sparse Methods} We also explore the sparse representations suggested by \newcite{Plank2015} and \newcite{Gjurkovic2018}. These include \gls{tfidf} vectors with $n$-grams of length $n \in \{1,2,3\}$ and dictionary features across all dimensions of \gls{liwc} 2015 \cite{liwc2015} fed into \gls{svm} and three-layer \gls{mlp} regressors.
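A minimal sketch of such a sparse baseline in scikit-learn (hyperparameters and toy data are illustrative, not the tuned configuration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# TF-IDF over word uni- to trigrams, fed into a support vector regressor
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),
    SVR(),
)

# Toy stand-ins for CEO utterances and continuous MBTI scores
docs = [
    "we delivered strong growth this quarter",
    "we saw a challenging and uncertain quarter",
    "growth was strong again across all segments",
]
labels = [0.8, 0.2, 0.7]

model.fit(docs, labels)
preds = model.predict(docs)
```

The same pipeline shape accommodates the dictionary-feature variant by swapping the vectorizer, which is what makes the feature--algorithm grid search over combinations straightforward.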
We compare all possible feature--algorithm combinations with respect to their average \gls{mae} on the validation set and select the combination with the lowest error (\gls{svm} with trigram \gls{tfidf}). \paragraph{Evaluation} The final model performance is evaluated by inspecting the correlation and error between test set ground truth and predictions. As measures, we explore the linear correlation coefficient (i.e., Pearson's $r$) and the rank correlation coefficients Spearman's $\rho$ and Kendall's $\tau$. Instead of linear relationships, the latter two measure monotonic relationships and are more robust to outliers. In addition, we consider the error measure \gls{mae}, which is the minimized loss function of the Transformers. In case of a tie, we give precedence to $\tau$, as this measure is least sensitive to outliers and particularly suited for small sample sizes. \subsection{Results and Discussion} \label{res:pers_pred} The results of the personality prediction task are depicted in Table \ref{tab:personality_prediction}. An \gls{svm} performs competitively, especially for the dimensions E--I ($\tau = 0.44$) and S--N ($\tau = 0.20$). While the \gls{svm} outperforms BERT for all dimensions except J--P, RoBERTa achieves the best results in most cases. The largest correlations across all models are achieved for the \textit{extraversion--introversion} (E--I) scale with strong linear and rank correlations for the RoBERTa regressor ($r = 0.70$, $\rho = 0.66$). This result is not surprising, as distinguishing between \textit{extra-} and \textit{introverted} \glspl{ceo} based on linguistic style should be comparatively easy. This is followed by the \textit{sensing--intuition} (S--N) scale with moderate to strong correlations ($r = 0.45$, $\rho = 0.53$) and the \textit{judging--perceiving} (J--P) scale with weak to moderate correlations ($r = 0.40$, $\rho = 0.36$).
The worst results are obtained for the \textit{thinking--feeling} (T--F) scale, with the \gls{svm} and RoBERTa obtaining correlations of around zero and BERT even obtaining weak to moderate negative correlations. There are several possible explanations for this: Conceptually, it could be the case that this dimension simply cannot be captured by analyzing linguistic data. Furthermore, the predictive power could be low due to the comparatively small sample size. Lastly, we hypothesize that the skewness of the label distribution, which was the highest across all \gls{mbti} dimensions for the T--F scale (Figure \ref{fig:label_dist}), has contributed to the weak performance. This warrants further research exploring whether our findings hold for larger datasets with less skewed label distributions. \newcite{Stajner2020} hypothesize that the S--N and J--P dimensions should theoretically make for the worst candidates in a text-based personality prediction task since they capture behavioral rather than linguistic dimensions of personality. Although our regressors perform worse on these dimensions than on the \textit{extraversion--introversion} dimension, they still achieve moderate to strong correlations, showing that even the more latent dimensions of personality can be predicted from text.
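The measures used for model selection and evaluation above (Pearson's $r$, Spearman's $\rho$, Kendall's $\tau$, and \gls{mae}) can be computed with standard SciPy routines. A minimal sketch on made-up values (not our data), illustrating why the rank correlations are more robust to outliers:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau


def evaluate(y_true, y_pred):
    """Correlation and error measures used for model comparison."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    r, _ = pearsonr(y_true, y_pred)      # linear correlation
    rho, _ = spearmanr(y_true, y_pred)   # monotonic (rank) correlation
    tau, _ = kendalltau(y_true, y_pred)  # rank correlation, robust for small samples
    mae = np.mean(np.abs(y_true - y_pred))
    return r, rho, tau, mae


# toy example: predictions preserve the ranking but contain one large outlier
y_true = [0.1, 0.2, 0.4, 0.8]
y_pred = [0.0, 0.1, 0.5, 3.0]
r, rho, tau, mae = evaluate(y_true, y_pred)
# rho = tau = 1.0 (perfect ranking), while r < 1 and the MAE is inflated
```

The outlier leaves $\rho$ and $\tau$ untouched but degrades $r$ and \gls{mae}, which is the motivation for breaking ties by $\tau$.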
\begin{table}[!t] \centering \begin{adjustbox}{max width=\linewidth} \begin{tabular}{llS[table-format=0.2, round-mode=places, round-precision=2]S[table-format=0.2, round-mode=places, round-precision=2]S[table-format=0.2, round-mode=places, round-precision=2]S[table-format=0.2, round-mode=places, round-precision=2]S[table-format=0.2, round-mode=places, round-precision=2]S[table-format=0.2, round-mode=places, round-precision=2]} \toprule {MBTI} & {Model} & {$r$} & {$\rho$} & {$\tau$} & {MAE} \\ \midrule & {SVM} & 0.568662426675648 & 0.5763365629649215 & 0.44230523605421745 & 0.38489306550492086 \\ {E--I} & {BERT} & 0.39332014599462595 & 0.3530214183872957 & 0.22322244094600896 & 0.5916627037556272\\\ & {RoBERTa} & \bfseries 0.7024211144120937 & \bfseries 0.6560886362838613 & \bfseries 0.5184854047554954 & \bfseries 0.3399837404126423\\ \midrule & {SVM} & 0.3167056509864913 & 0.36126396939739447 & 0.19774769732877542 & 0.2993978962939358 \\ {S--N} & {BERT} & 0.07721196397108031 & 0.2252101029128621 & 0.15826184472206342 & 0.4603169380373541\\ & {RoBERTa} & \bfseries 0.4454403260717685 & \bfseries 0.5294907077266214 & \bfseries 0.3779815084207027 & \bfseries 0.2764512316645347\\ \midrule & {SVM} & \bfseries 0.026398736793045528 & -0.11797500914450072 & -0.07785942403691712 & \bfseries 0.370752470172705\\ {T--F} & {BERT} & -0.4681990990588438 & -0.4065905762455734 & -0.2746254998456951 & 0.4110043965575551 \\ & {RoBERTa} & 0.00661499083662595 & \bfseries -0.09765070508904866 & \bfseries -0.06808627457621623 & 0.3859379094323412\\ \midrule & {SVM} & -0.046005799451036056 & 0.040052999129474944 & 0.02388257214115645 & \bfseries 0.3541108767453241\\ {J--P} & {BERT} & 0.3851371830298552 & \bfseries 0.3811454994720046 & \bfseries 0.250607790334535 & 0.5217501833637788\\ & {RoBERTa} & \bfseries 0.4015906232987949 & 0.3637491606567208 & 0.2079375947756688 & 0.364233414180257\\ \bottomrule \end{tabular} \end{adjustbox} \caption{Correlation results of the personality regression 
task. \gls{ceo} personality is predicted across the \gls{mbti} dimensions \textit{extraversion--introversion} (E--I), \textit{sensing--intuition} (S--N), \textit{thinking--feeling} (T--F), and \textit{judging--perceiving} (J--P). \gls{svm} is trained on trigram \gls{tfidf} vectors; BERT\textsubscript{base} and RoBERTa\textsubscript{base} are trained on text. Best results in bold.} \label{tab:personality_prediction} \end{table} \paragraph{Qualitative Analysis} As a brief qualitative analysis, we use Shapley Additive Explanations (SHAP) developed by \citet{NIPS2017_8a20a862} to visualize the personality predictions for an exemplary text snippet across the four \gls{mbti} dimensions with heatmaps (Figure \ref{fig:shap_examples}). The analyzed personality is Elon Musk, who, according to the crowd votes, scores high on E--I (\textit{introversion}) and on S--N (\textit{intuitive}), low on T--F (\textit{thinking}), and medium on J--P (\textit{judging/perceiving}). Particularly interesting are the results for T--F (Figure \ref{fig:shap_examples_tf}), where statements conveying factual content are associated with increased T, and interpretative statements (e.g., ``[e]ven with all the challenges'') with increased F.
\begin{figure*}[!t] \centering \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=0.9\textwidth]{gfx/ei_cropped.png} \caption{Result of the E--I regressor.} \label{fig:shap_examples_ei} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=0.9\textwidth]{gfx/sn_cropped.png} \caption{Result of the S--N regressor.} \label{fig:shap_examples_sn} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=0.9\textwidth]{gfx/tf_cropped.png} \caption{Result of the T--F regressor.} \label{fig:shap_examples_tf} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=0.9\textwidth]{gfx/jp_cropped.png} \caption{Result of the J--P regressor.} \label{fig:shap_examples_jp} \end{subfigure} \caption{Example snippet from our dataset (uttered by Elon Musk in Tesla's Q1 2020 earnings call) with SHAP heatmap across the \gls{mbti}. Red indicates a positive and blue a negative influence on the prediction.} \label{fig:shap_examples} \end{figure*} \section{Risk Regression} \label{risk_regression} According to \textit{upper echelons theory} \cite{Hambrick1984}, strategic choices and performance measures of organizations can be predicted by characteristics of their top management. As a use case for our personality prediction task, we explore whether we can find empirical support for this theory. We hypothesize that having a different personality to most \glspl{ceo} (i.e., ENTJ, see Figure \ref{fig:label_dist} and \newcite{Cohen2013}) should translate into increased financial risk. 
\subsection{Dataset Curation} \label{data:fin} As a basis for the risk regression task, we take the sample of 22K earnings calls and merge it with data obtained from the databases \textsc{CRSP}, \textsc{IBES}, and \textsc{Compustat Execucomp}, which we access via \textsc{WRDS}.\footnote{\url{https://wrds-www.wharton.upenn.edu}} To measure risk, we calculate the stock return volatility in the business week following each call as a label. We use the sample standard deviation of logarithmic stock returns, which yields a more robust measure. As features, we incorporate a comprehensive set of risk proxies suggested by \newcite{price12} and \newcite{Theil2019}.\footnote{We initially also considered including a market volatility index (VIX), but decided against it as its low explanatory power and high variance inflation factor (VIF) indicated redundancy of this variable \cite{vif}.} Furthermore, we include \gls{ceo} age and gender to control for possible confounding effects (e.g., being introverted could have a different effect for male than for female \glspl{ceo}). Definitions of all used controls are given in Table \ref{tab:finance}.
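The label construction described above (sample standard deviation of logarithmic stock returns over the post-call business week) can be sketched as follows; the prices and the five-day window are illustrative assumptions, not values from our data:

```python
import numpy as np


def realized_volatility(prices):
    """Sample standard deviation (ddof=1) of logarithmic returns."""
    prices = np.asarray(prices, dtype=float)
    log_returns = np.diff(np.log(prices))
    return log_returns.std(ddof=1)


# hypothetical closing prices in the business week following a call
post_call_prices = [100.0, 101.5, 99.8, 102.2, 101.0]
vola = realized_volatility(post_call_prices)  # the risk label for this call
```

Log returns are preferred here over simple returns because they are additive over time and less distorted by large price moves.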
\begin{table}[!t] \centering \footnotesize \begin{tabularx}{\linewidth}{p{0.15\linewidth}p{0.75\linewidth}} \toprule Feature & Definition \\ \midrule Age & CEO age on the call date\\ Gender & CEO gender\\ Past Vola & Standard deviation of logarithmic returns in the business quarter before the call\\ Size & Market value of the firm, i.e., the number of outstanding shares times stock price one day before the call\\ Volume & Stock trading volume on the call date\\ Leverage & Total liabilities divided by assets\\ Spread & Difference between the stock's bid and ask price on the call date\\ BTM & Book-to-Market = book value of the firm divided by market value\\ SUE & Mean absolute deviation of analysts' earnings-per-share forecasts from the actual value in the preceding quarter\\ ROA & Return on Assets, i.e., net income divided by assets\\ Industry & Fama--French 12 industry dummies\\ Time & Year--quarter dummies\\ \bottomrule \end{tabularx} \caption{Controls used in the risk regression task. BTM is calculated following \cite{Fama2001} and firms with a negative value are removed. Size, BTM, and volume are log1p-transformed.} \label{tab:finance} \end{table} \subsection{Methodology} \label{met:risk} We use the best-performing personality prediction model (RoBERTa) to infer the personality of the 1.7K unlabelled \glspl{ceo} present in the 22K calls. Together with the financial covariates (see above), the predicted \gls{ceo} \gls{mbti} is then used to explain short-term stock return volatility following the calls with multiple linear regression.\footnote{The supplementary material contains our dataset and implementation.} Volatility is the most common financial risk measure, and its prediction is an essential task for firm valuation and financial decision-making. Importantly, ``risk'' is a purely descriptive concept in finance, as it measures the fluctuation of stock returns. 
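The regression setup can be sketched with plain least squares on synthetic, noiseless data; the effect sizes below are assumptions for illustration, and the real analysis additionally includes the controls of Table \ref{tab:finance}, fixed effects, and $t$-statistics (e.g., via statsmodels):

```python
import numpy as np


def zscore(x):
    """z-standardize a variable (mean 0, sample standard deviation 1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)


# synthetic toy data with assumed effect sizes (not our estimates)
past_vola = np.array([1.0, -0.5, 0.3, -1.2, 0.8, -0.4])
mbti_ei = np.array([0.2, 1.0, -0.7, 0.5, -1.1, 0.1])
vola = 0.4 * past_vola + 0.03 * mbti_ei

# design matrix: intercept plus z-standardized regressors
X = np.column_stack([np.ones(len(vola)), zscore(past_vola), zscore(mbti_ei)])
y = zscore(vola)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[1], beta[2]: standardized coefficients of past volatility and the E--I score
```

Standardizing both sides makes coefficients comparable across regressors, which is how the relative effect sizes in Table \ref{tab:riskreg} should be read.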
\subsection{Results and Discussion} \label{res:risk} The results of this risk regression task are shown in Table \ref{tab:riskreg}. We find that the first three MBTI dimensions are significantly associated with risk following the call. This significance is high ($p \leq 0.001$) for E--I and T--F. The direction of this association behaves as expected: a \gls{ceo} communicating in an \textit{introverted} and \textit{feeling} manner is associated with increased risk ($\beta_{i} = 0.03$, $\beta_{f} = 0.10$), while an \textit{intuitive} communication is associated with decreased risk ($\beta_{s} = -0.02$). Notably, these results are robust to age- and gender-fixed effects. Although seemingly small, the size of the personality effect (i.e., the coefficient height) is in line with that observed by related work \citep{doi:10.5465/amj.2018.0626}. It is to be expected that fundamentals such as past risk or firm size have a stronger impact on future risk than, e.g., \gls{ceo} extraversion. Remarkably, T--F has the third-largest impact ($\beta_{f} = 0.10$) out of all considered features. Though only weakly correlated with the ground truth (Table \ref{tab:personality_prediction}), the results suggest that the predictions for this scale contain a strong economic signal for risk regression.
\sisetup{ table-format=2.3, round-integer-to-decimal = true, group-digits = true, group-minimum-digits = 4, group-separator = {\,}, table-align-text-pre = false, table-align-text-post = false, input-signs = + -, input-symbols = {*} {**} {***}, input-open-uncertainty = , input-close-uncertainty = , retain-explicit-plus } \begin{table}[!t] \centering \footnotesize \begin{tabular}{lS[table-format=2.2, round-mode=places, round-precision=2, table-space-text-pre={**}, table-space-text-post={-**}]S[table-format=2.2, round-mode=places, round-precision=2,table-space-text-pre={**}, table-space-text-post={-**}]} \toprule Feature & {\textsc{Fin}} & {\textsc{Fin} + \textsc{MBTI}} \\ \midrule E--I & & 0.0317$^{***}$\\ & & (5.007) \\ S--N & & -0.0168$^{**}$ \\ & & (-2.688) \\ T--F & & 0.1010$^{***}$ \\ & & (13.673) \\ J--P & & -0.0016 \\ & & (-0.220) \\ \addlinespace[4pt] Age & & -0.0052\\ & & (0.377) \\ Gender & & -0.0185 \\ & & (-0.748) \\ \addlinespace[4pt] Past Vola & 0.4352$^{***}$& 0.4257$^{***}$ \\ & (45.801) & (44.724) \\ Size & -0.1840$^{***}$ & -0.1920$^{***}$ \\ & (-19.065) & (-19.826) \\ Volume & 0.0445$^{***}$ & 0.0450$^{***}$ \\ & (5.282) & (5.360) \\ Leverage & -0.0573$^{***}$ & -0.0460$^{***}$ \\ & (-8.675) & (-6.883) \\ Spread & 0.0271$^{***}$ & 0.0257$^{***}$ \\ & (4.304) & (4.097) \\ BTM & -0.0421$^{***}$ & -0.0207$^{***}$ \\ & (-6.220) & (-2.916) \\ SUE & -0.0023 & -0.0041 \\ & (-0.411) & (-0.732) \\ ROA & -0.0012 & 0.0027 \\ & (-0.207) & (0.455) \\ \addlinespace[4pt] $n$ & {21,787} & {21,787} \\ Adj. $R^{2}$ & {33.40\%} & {34.00\%} \\ \bottomrule \addlinespace[3pt] \multicolumn{3}{c}{\footnotesize $^{*} p \leq 0.05$, $^{**} p \leq 0.01$, $^{***} p \leq 0.001$}\\ \end{tabular} \caption{Results of the risk regression with $z$-standardized coefficients and $t$-statistics in parentheses. The sample consists of 22K earnings calls spanning 1.7K firms and years 2002--2020. Regressions include fixed effects for industry and time. 
\textsc{Fin} is a model with just the financial features (defined in \S\ref{data:fin}) and \textsc{Fin} + \textsc{MBTI} is a joint model including the MBTI (E--I, S--N, T--F, and J--P) along with CEO age and gender.} \label{tab:riskreg} \end{table} In sum, these results provide new empirical evidence to support the \textit{upper echelons theory}. We show that situational aspects of \gls{ceo} personality, predicted with our MBTI regressor, also reflect firm performance measured by stock return volatility, the most common financial risk measure. \section{Ethical Considerations} \label{sec:ethics} In the following, we discuss possible biases and environmental considerations. \paragraph{Social Desirability Bias} Past literature has shown that some Big 5 personalities are more socially desirable than others, which paves the way to discrimination: Overall, it is socially desirable to score low on \textit{neuroticism} (an omitted scale in the \gls{mbti}) and high on \textit{conscientiousness} and \textit{agreeableness}. To a lesser extent, it is socially desirable to score high on \textit{extraversion} and \textit{openness} \cite[Table 2]{Ones1996}. For the MBTI, in contrast, there exist no ``bad'' personality traits. As shown in \S\ref{met:corr}, however, the Big 5 and the MBTI correlate. Therefore, the points raised about social desirability, albeit to a lesser extent, should apply here, too. \paragraph{Sample Biases} Critically, our gold standard consists of just 32 \glspl{ceo} of large American (mostly tech) companies. While these companies (Alphabet, Facebook, Apple, etc.) constitute a large share of the American market, this renders the personality prediction model less applicable to non-American, small, or non-tech companies. Only four (i.e., 12.5\%) of the 32 \glspl{ceo} are female. While this gender ratio is twice as high as that of the S\&P 500 \cite{catalyst}, this highlights that the findings of this study might generalize poorly to non-male \glspl{ceo}. 
In addition, as shown in \S\ref{data:pers}, Figure \ref{fig:label_dist}, \glspl{ceo} as a social cohort share a distinct distribution of personality traits, which is why we argue that the \gls{mbti} regressors should only be applied with caution, if at all, to non-executive samples. \paragraph{Energy Consumption} Training neural models can have substantial financial and environmental costs \citep{Strubell2019}, which motivates us to discuss the computational efficiency of the Transformers. Using an NVIDIA Tesla P100 GPU, we run a hyperparameter optimization over 40 configurations per \gls{mbti} dimension for both BERT and RoBERTa. The average power consumption is 200W and the optimization takes ca.\ 16 hours, i.e., 3.2 kilowatt hours (kWh) with an electricity cost of 40 cents per model.\footnote{Calculations assume the average U.S. electricity rate of 12.55 cents per kWh as of 15 November 2021: \url{https://www.electricchoice.com/electricity-prices-by-state}} Labeling the 22K earnings call instances with no available ground truth takes ca.\ 4.5 hours and 140W, i.e., 0.63 kWh of GPU time and 8 cents, respectively. Training time of the \gls{svm} with trigram \gls{tfidf} is negligible (ca.\ 2 minutes on a quad-core processor with 8GB RAM). Whether the performance increases of the Transformers over a sparse method justify the added computational costs should be considered carefully on a case-by-case basis. \section{Conclusion and Future Work} We present the first text regression approach for predicting the \gls{mbti} personality of \glspl{ceo}. Although past research has contested the possibility of predicting \gls{mbti} from purely textual data, we observe moderate to strong correlations with the ground truth for three out of four dimensions.
In a risk regression task, we demonstrate that\textemdash consistent with the \textit{upper echelons theory}\textemdash the predicted \gls{ceo} personality is significantly associated with financial risk in the form of stock return volatility. Qualitatively, extraverted, intuitive, and thinking \glspl{ceo} seem to incur less financial risk. In the future, we plan to model the personality prediction task as a multi-task learning problem, in which one single regressor is trained to predict all four MBTI dimensions at once. In addition, it would be interesting to incorporate speech signals of executives (e.g., voice modulation, tonality, and silence) into the personality predictions. \section*{Acknowledgments} We would like to thank Amanda Cercas Curry, Federico Bianchi, Tommaso Fornaciari, and Anne Lauscher for their helpful feedback on an earlier version of this paper. Furthermore, we are grateful to all other members of MilaNLP Lab at Bocconi University for the fruitful discussions.
\section{Introduction} Differential equations are used for an enormous variety of applications, including industrial design and weather prediction. In fact, many of the main applications of supercomputers are in the form of large systems of differential equations \cite{super}. Therefore quantum algorithms for solving differential equations would be extraordinarily valuable. A quantum algorithm for differential equations was proposed in Ref.\ \cite{Leyton08}, but that algorithm had very poor scaling in the time. The complexity of the simulation scaled exponentially in the number of time-steps over which to perform the simulation. The algorithm in Ref.\ \cite{Leyton08} may have been overly ambitious, because it aimed to solve nonlinear differential equations. A more natural application for quantum computers is \emph{linear} differential equations. This is because quantum mechanics is described by linear differential equations. We find that, when we restrict to linear differential equations, it is possible to obtain an algorithm that is far more efficient than that proposed in Ref.\ \cite{Leyton08}. We consider first-order linear differential equations. Using standard techniques, any linear differential equation with higher-order derivatives can be converted to a first-order linear differential equation with larger dimension. A first-order ordinary differential equation may be written as \begin{equation} \dot x(t) = A(t)x(t) + b(t), \end{equation} where $x$ and $b$ are $N_x$-component vectors, and $A$ is an $N_x\times N_x$ matrix. Classically, the complexity of solving the differential equation must be at least linear in $N_x$. The goal of the quantum algorithm is to solve the differential equation in time $O(\poly\log N_x)$. Quantum mechanics is described by differential equations of this form, except they are homogeneous ($b(t)=0$), and $A(t)=iH(t)$, where $H(t)$ is Hermitian. 
This means that the solutions in quantum mechanics only include oscillating terms, whereas more general differential equations have solutions that may grow or decay exponentially. Quantum algorithms for simulating quantum mechanical systems have been extensively studied \cite{Lloyd96,Aharonov03,Childs04,Berry07,Childs08,Berry09,Wiebe10}. Classical physics is described by more general differential equations. Large systems of ordinary differential equations are produced by discreti{\s}ation of partial differential equations. Many equations in physics are linear partial differential equations, where the time derivative depends linearly on spatial derivatives and the value of a quantity at some point in physical space. Examples include Stokes equations (for creeping fluid flow), the heat equation, and Maxwell's equations. Discreti{\s}ation of the partial differential equation on a mesh of points results in an ordinary differential equation with a very large value of $N_x$. In the case where $A$ and $b$ are time independent, one can find the equilibrium solution of the differential equation by solving \begin{equation} A x = -b. \end{equation} A quantum algorithm for this problem was given by Harrow, Hassidim and Lloyd \cite{Harrow09}, with runtime that is polynomial in $\log(N_x)$ and the condition number of $A$. Ambainis has reported development of an improved algorithm \cite{Ambainis10}, though this algorithm has not yet been released. We consider the more difficult case of solving the time evolution under linear differential equations, rather than just the equilibrium solution. We find that this case can also be solved using a modification of the method of Harrow, Hassidim and Lloyd. \section{Trotter formula approach} Before explaining that approach, we first describe an approach using Trotter formulae, and the drawback to that approach. This will not be described rigorously, because it is not our main proposal for solving differential equations.
The homogeneous case, where $b=0$, is analogous to Hamiltonian evolution. If $A$ is antiHermitian, then we can take $A=iH$, where $H$ is a Hermitian Hamiltonian. Evolution under this Hamiltonian can be solved by methods considered in previous work \cite{Berry07,Berry09}. Another case that can be considered is where $A$ is Hermitian. In this case, the eigenvalues of $A$ are real, and $A$ can be diagonali{\s}ed in the form $A=V D V^{-1}$, where $D$ is a real diagonal matrix and $V$ is unitary. The formal solution is then, for $A$ independent of time, $x(t)=V e^{D(t-t_0)} V^{-1} x(t_0)$. The differential equation can be solved using a similar method to that used in Ref.\ \cite{Harrow09}. The value of $x$ is encoded in a quantum state as \begin{equation} \ket{x} = {\cal N}_x \sum_{j=1}^{N_x} x^{[j]} \ket{j}, \end{equation} where $\ket{j}$ are computational basis states of the quantum computer, $x^{[j]}$ are the components of the vector $x$, and ${\cal N}_x$ is a normali{\s}ation constant. The state can be written in a basis corresponding to the eigenvectors of $A$: \begin{equation} \ket{x} = \sum_{j} \lambda_j \ket{\lambda_j}. \end{equation} Using methods for Hamiltonian simulation, $iA$ can be simulated. By using phase estimation, if the state is an eigenstate $\ket{\lambda_j}$, then the eigenvalue $\lambda_j$ can be determined. Given maximum eigenvalue $\lambda_{\rm max}$, we would change the amplitude by a factor of $e^{(t-t_0)(\lambda_j-\lambda_{\rm max})}$. See Ref.\ \cite{Harrow09} for the method of changing the amplitude. If this is done coherently, then the final state will encode $x(t)$. For more general differential equations, $A$ will be neither Hermitian nor antiHermitian. In this case, one can break $A$ up into Hermitian ($A_{H}$) and antiHermitian ($A_{aH}$) components. The evolution under each of these components can be simulated individually, and the overall evolution simulated by combining these evolutions via the Trotter formula. 
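This splitting can be illustrated classically. The following sketch (first-order Trotter product via SciPy's matrix exponential, on a small random matrix; sizes and step counts are illustrative) shows the product of the two component evolutions converging to $e^{A t}$ as the number of steps grows:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = 0.5 * rng.normal(size=(4, 4))  # generic real matrix: neither Hermitian nor antiHermitian
A_H = (A + A.T) / 2                # Hermitian part
A_aH = (A - A.T) / 2               # antiHermitian part


def trotter(A_H, A_aH, t, n):
    """First-order Trotter approximation of expm((A_H + A_aH) * t) using n steps."""
    dt = t / n
    step = expm(A_H * dt) @ expm(A_aH * dt)
    return np.linalg.matrix_power(step, n)


exact = expm(A * 1.0)
err_50 = np.linalg.norm(trotter(A_H, A_aH, 1.0, 50) - exact)
err_100 = np.linalg.norm(trotter(A_H, A_aH, 1.0, 100) - exact)
# doubling the step count roughly halves the first-order Trotter error
```

This only checks the splitting itself; the exponential amplitude decay discussed below is a property of the quantum implementation, not of this classical product formula.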
The drawback to this approach is that it appears to give a complexity that increases exponentially with the time interval $\Delta t = t-t_0$ (though the complexity is still greatly improved over Ref.\ \cite{Leyton08}). If $A$ were just Hermitian, then the eigenvector (or eigenspace) corresponding to the largest eigenvalue would not decay, and the system would end up in that state. Therefore the amplitude would not drop below the amplitude on the eigenspace corresponding to the largest eigenvalue. That is not the case when $A$ is a more general matrix, because usually the maximum real part of an eigenvalue of $A$ will be strictly less than the maximum eigenvalue of $A_H$. The amplitude must therefore decay exponentially, because we must use the maximum eigenvalue of $A_H$ in simulating evolution under $A_H$. The result of this is that the complexity of the simulation will scale exponentially in the time that the differential equation needs to be simulated over, $\Delta t$. The scaling will be considerably improved over that in Ref.\ \cite{Leyton08}, but it is desirable to obtain scaling that is polynomial in $\Delta t$. Another drawback is that this approach does not enable simulation of inhomogeneous differential equations. \section{Linear systems approach} To avoid this problem we propose an approach based on the algorithm for solving linear systems from Ref.\ \cite{Harrow09}. The trick is to encode the solution of the differential equation at different times using the one state. That is, we wish to obtain the final state proportional to \begin{equation} \label{eq:fineq} \ket{\psi} := \sum_{j=0}^{N_t} \ket{t_j} \ket{x_j}. \end{equation} The number $N_t$ is the number of time steps, $t_j$ is the time $t_0+j\dt$, where $\dt$ is the time interval in the discreti{\s}ation of the differential equation, $x_j$ is the approximation of the value of $x$ at time $t_j$, and $\Delta t$ is the total time interval over which the differential equation is to be solved. 
We use the subscript $j$ to index the vectors, and superscript for components of these vectors. Once this state has been created, the state encoding the solution at the final time $t_0+\Delta t$ can be approximated by measuring the register encoding the time and getting that time. Just using this method, the probability of obtaining the final time is small ($1/(N_t+1)$). To obtain a significant probability of success, one can add times beyond $t_0+\Delta t$ where $x$ is constant. We take $x$ to be constant for $t_0+\Delta t$ to $t_0+2\Delta t$, so $N_t=2\Delta t/\dt$. Then any measurement result for the time in this interval will give the state corresponding to the solution. By this method, the probability of success can be boosted significantly, without changing the scaling for $N_t$. To numerically solve differential equations, the simplest method is the Euler method, which discreti{\s}es the differential equation as \begin{equation} \frac{x_{j+1}-x_j}{\dt} = A(t_j) x_j +b(t_j). \end{equation} For times after $t_0+\Delta t$, we set $x_{j+1}=x_j$ to ensure that $x$ is constant. The Euler method yields an error that scales as $O(\dt^2)$ for a single time step. Therefore, we expect that the error in the total simulation is $O(N_t \dt^2)=O(\Delta t^2/N_t)$. To achieve error bounded by $\epsilon$, we can take $N_t = O(\Delta t^2/\epsilon)$. To show these scalings rigorously requires additional constraints on the problem. In particular, to rigorously bound the error it is necessary that the eigenvalues of $A(t_j)$ have no positive real part. Otherwise the error can grow exponentially. In cases where $A(t_j)$ does have an eigenvalue with positive real part, one can simply subtract a multiple of the identity, and rescale the solution. Note that $\epsilon$ is the error in the solution of the differential equation, and is distinct from error in the solution of linear systems. 
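A quick classical check of the Euler discretization and its error scaling, on a stable scalar problem with constant $A$ and $b$ (the numerical values are illustrative):

```python
import numpy as np


def euler_solve(A, b, x0, T, n_steps):
    """Forward Euler iterates x_{j+1} = x_j + dt*(A x_j + b) for scalar A, b."""
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + dt * (A * x + b)
    return x


A, b, x0, T = -1.0, 0.5, 1.0, 2.0
# closed-form solution x(T) = e^{AT}(x0 + A^{-1} b) - A^{-1} b for constant A, b
exact = np.exp(A * T) * (x0 + b / A) - b / A
err_coarse = abs(euler_solve(A, b, x0, T, 100) - exact)
err_fine = abs(euler_solve(A, b, x0, T, 200) - exact)
# halving dt roughly halves the global error: O(dt) = O(Delta t^2 / N_t)
```

Note that the eigenvalue here ($A=-1$) has negative real part, matching the stability condition stated above; for eigenvalues with positive real part the error bound would grow exponentially.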
More generally, linear multistep methods have the form \cite{Butcher,Hairer} \begin{equation} \label{eq:multi} \sum_{\ell=0}^{k} \alpha_\ell x_{j+\ell} = \dt \sum_{\ell=0}^{k}\beta_\ell [A(t_{j+\ell}) x_{j+\ell}+b(t_{j+\ell})]. \end{equation} Multistep methods can be chosen such that the error is of higher order in $\dt$, but there is the problem that the method may not be stable. That is, even if the exact solution of the differential equation is bounded, the solution of the difference equation may be unbounded. To examine the stability, one defines the generating polynomials \begin{equation} \rho(\zeta)=\sum_{j=0}^k \alpha_j \zeta^j, \qquad \sigma(\zeta) = \sum_{j=0}^k \beta_j \zeta^j. \end{equation} The stability can be examined via the roots of the equation \begin{equation} \label{eq:stpol} \rho(\zeta)-\mu \sigma(\zeta) = 0. \end{equation} One defines the set $S$ by \begin{equation} S := \left\{ \mu\in {\mathbb{C}}; \begin{array}{*{20}l} {{\rm all~roots~} \zeta_j(\mu) {\rm~of~} \eqref{eq:stpol} {\rm~satisfy~} |\zeta_j(\mu)|} \le 1 \\ {{\rm multiple~roots~satisfy~} |\zeta_j(\mu)|< 1} \\ \end{array} \right\}. \end{equation} $S$ is called the stability domain or stability region of the multistep method. In addition, if the roots of $\sigma(\zeta)$ all satisfy $|\zeta|\le 1$, and repeated roots satisfy $|\zeta|<1$, then the method is said to be stable at infinity. A linear multistep method is said to be order $p$ if it introduces local errors $O(\dt^{p+1})$. This means that, if it is applied with exact starting values to the problem $\dot x = t^q$ ($0\le q \le p$), it integrates the problem without error. A linear multistep method has order $p$ if and only if \cite{Butcher} \begin{equation} \rho(e^h)-h\sigma(e^h) = O(h^{p+1}). \end{equation} A useful property of linear multistep methods is for them to be $A$-stable \cite{Hairer,Dahlquist}. 
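The root condition defining $S$ can be checked numerically for a given $\mu$. For instance, for the forward Euler rule one has $\rho(\zeta)=\zeta-1$ and $\sigma(\zeta)=1$, so \eqref{eq:stpol} has the single root $\zeta=1+\mu$. A sketch (which, for simplicity, omits the stricter requirement on multiple roots):

```python
import numpy as np


def in_stability_region(mu, alphas, betas, tol=1e-12):
    """Root condition |zeta_j(mu)| <= 1 for rho(zeta) - mu*sigma(zeta) = 0.

    alphas, betas hold the coefficients alpha_0..alpha_k and beta_0..beta_k;
    the stricter condition on multiple roots is not checked here."""
    coeffs = np.asarray(alphas, float) - mu * np.asarray(betas, float)
    roots = np.roots(coeffs[::-1])  # np.roots expects the highest-degree coefficient first
    return bool(np.all(np.abs(roots) <= 1 + tol))


# forward Euler: x_{j+1} - x_j = dt*(A x_j + b), i.e. alpha = (-1, 1), beta = (1, 0)
alpha, beta = [-1.0, 1.0], [1.0, 0.0]
# the root is zeta = 1 + mu, so mu = -1 lies in S while mu = 1 does not
```

For a higher-order $k$-step method one would pass the corresponding $\alpha_\ell$, $\beta_\ell$ from \eqref{eq:multi} and sweep $\mu$ over the wedge $S_\alpha$ to verify $A(\alpha)$-stability numerically.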
\begin{definition} A linear multistep method is called $A$-stable if $S \supset \mathbb{C}^-$, i.e., if \begin{equation} {\rm Re}\, \lambda \le 0 \implies \text{numerical solution for } \dot x = \lambda x \text{ is bounded.} \end{equation} \end{definition} This definition means that, if the solution of the differential equation is bounded, then the approximation given by the multistep method is bounded as well. For a scalar differential equation, the multistep method is bounded whenever $\lambda$ is in the left half of the complex plane. The Euler method is $A$-stable, but it is not possible to construct arbitrary order $A$-stable multistep methods. The second Dahlquist barrier is that an $A$-stable multistep method must be of order $p\le 2$ \cite{Hairer,Dahlquist}. As we wish to consider higher-order multistep methods, we relax the condition and require that the linear multistep method is $A(\alpha)$-stable \cite{Hairer,Widlund}. \begin{definition} A linear multistep method is $A(\alpha)$-stable, $0<\alpha<\pi/2$, if \begin{equation} S \supset S_\alpha = \{\mu ; |\arg(-\mu)| < \alpha, \mu \ne 0 \}. \end{equation} \end{definition} This definition means that, in the case of a scalar differential equation, the multistep method is bounded whenever $\lambda$ is within a wedge in the left half of the complex plane. For a vector differential equation, the eigenvalues of $A$ should be within this wedge. It is known that, for any $\alpha<\pi/2$ and $k\in \mathbb{N}$, there is an $A(\alpha)$-stable linear $k$-step method of order $p=k$ \cite{Grigoreff,Butcher}. The error in the total solution of the differential equation will be $O(N_t (\Delta t)^{p+1})$. In order to obtain a rigorous result, we specialise to the case that $A$ and $b$ are independent of time. The relevant bound is given in Theorem 7.6 in Chapter V of Ref.\ \cite{Hairer}. \begin{theorem} \label{thm2} Suppose a linear multistep method is of order $p$, $A(\alpha)$-stable and stable at infinity. 
If the matrix $A$ is diagonali{\s}able (i.e.\ there exists a matrix $V$ such that $V^{-1}AV=D=\diag(\lambda_1,\ldots,\lambda_n)$) with eigenvalues satisfying \begin{equation} |\arg(-\lambda_i)|\le \alpha \qquad {\rm for}~i=1,\ldots,N_x, \end{equation} then there exists a constant $M$ (depending only on the method) such that for all $\dt>0$ the global error satisfies \begin{equation} \|x(t_m)-x_m\| \le M \kappa_V \left( \max_{0\le j< k} \| x(t_j)-x_j \| + \dt^p \int_{t_0}^{t_m} \|x^{(p+1)}(\xi)\|d\xi\right), \end{equation} where $\kappa_V=\|V\| \cdot \|V^{-1}\|$ is the condition number of $V$. \end{theorem} Here the superscript with round brackets denotes repeated derivative. We can use this result to show a lemma on the scaling of the error. \begin{lemma} \label{lem:ersca} Suppose a linear multistep method is of order $p$, $A(\alpha)$-stable and stable at infinity. If the matrix $A$ is diagonali{\s}able (i.e.\ there exists a matrix $V$ such that $V^{-1}AV=D=\diag(\lambda_1,\ldots,\lambda_n)$) with eigenvalues satisfying \begin{equation} |\arg(-\lambda_i)|\le \alpha \qquad {\rm for}~i=1,\ldots,N_x, \end{equation} and $b$ is constant, then the global error satisfies \begin{equation} \|x(t_m)-x_m\| = O\left( \kappa_V^2 (\|x_{\rm init}\| + \|b\|/\|A\|)\left[ \kappa_V (\dt \|A\|)^2 + m(\dt \|A\|)^{p+1} \right] \right), \end{equation} where $\kappa_V=\|V\| \cdot \|V^{-1}\|$ is the condition number of $V$. \end{lemma} \begin{proof} The linear multistep method requires a starting method to obtain the values of $x_j$ for $0<j<k$. The term $\max_{0\le j< k} \| x(t_j)-x_j \|$ arises from the inaccuracy in this procedure. One can simply use the Euler method, in which case the error is $O(\dt^2)$. It is also possible to use higher-order starting methods, but there is not a convenient rigorous result that can be used. To determine the error in the Euler method, one can simply use Theorem \ref{thm2}.
Because $k=1$ and $p=1$ for the Euler method, and for the initial point there is zero error ($x(t_0)=x_0$), Theorem \ref{thm2} gives \begin{equation} \|x(t_j)-x_j\| \le M_E \kappa_V \dt\int_{t_0}^{t_j}\|x^{(2)}(\xi)\|d\xi, \end{equation} where $M_E$ is the constant for the Euler method. We consider this expression for $0\le j < k$, and obtain \begin{equation} \max_{0\le j< k}\|x(t_j)-x_j\| \le M_E \kappa_V \dt^2 (k-1) \max_{\xi\in[t_0,t_0+(k-1)\dt]}\|x^{(2)}(\xi)\|. \end{equation} In using these results, it is necessary to place upper bounds on the values of $\|x^{(p+1)}(\xi)\|$ and $\|x^{(2)}(\xi)\|$. In general these will depend on the value of $b(t)$, and its time-dependence. These quantities are well behaved if $b$ is constant, in which case the exact solution is \begin{equation} x(t) = e^{A(t-t_0)}(x_{\rm init} + A^{-1} b) - A^{-1} b. \end{equation} Then \begin{equation} x^{(\ell)}(t) = e^{A(t-t_0)}(A^\ell x_{\rm init} + A^{\ell-1} b), \end{equation} so \begin{align} \|x^{(\ell)}(t)\| &= \|V e^{D(t-t_0)} V^{-1}(A^\ell x_{\rm init} + A^{\ell-1} b)\| \nn &\le \kappa_V (\|A\|^\ell \|x_{\rm init}\| + \|A\|^{\ell-1}\|b\|) \ . \end{align} In the first line we have used the diagonalisation of $A$, and in the second line we have used the condition that $|\arg(-\lambda_i)|\le \alpha$, which ensures that ${\rm Re}\,\lambda_i\le 0$ and hence $\|e^{D(t-t_0)}\|\le 1$. Using Theorem \ref{thm2}, the error is bounded as \begin{align} \|x(t_m)-x_m\| &\le M \kappa_V \left( \max_{0\le j< k} \| x(t_j)-x_j \| + \dt^p \int_{t_0}^{t_m} \|x^{(p+1)}(\xi)\|d\xi\right) \nn &\le M \kappa_V \left[ M_E \kappa_V^2 \dt^2 k (\|x_{\rm init}\| + \|b\|/\|A\|)\|A\|^2 + \dt^p (t_m-t_0) \kappa_V (\|x_{\rm init}\| + \|b\|/\|A\|)\|A\|^{p+1}\right] \nn &= O\left( \kappa_V^2 (\|x_{\rm init}\| + \|b\|/\|A\|)\left[ \kappa_V (\dt \|A\|)^2 + m(\dt \|A\|)^{p+1} \right] \right) .
\end{align} \end{proof} This result means that, disregarding the dependence on many of the quantities, and omitting the error due to the starting method, the error scales as $O((\|A\|\dt)^p \|A\| \Delta t)$ for total time $\Delta t$. To achieve error bounded by $\epsilon$, we then use \begin{equation} \label{eq:nosteps} N_t = O\left( \frac{(\|A\|\Delta t)^{1+1/p}}{\epsilon^{1/p}} \right). \end{equation} That is, the number of time steps required is close to linear in the time. Given a linear multistep method, it is straightforward to encode this method as a linear system \begin{equation} \matr{\vec x} = {\vec b}. \end{equation} Here $\vec x$ is the vector of blocks $x_j$, $\vec b$ is a vector containing the initial condition and the blocks $b\dt$, and ${\cal A}$ is a matrix describing the initial condition and discretised differential equation. As an example, the equations for the Euler method and $A$ and $b$ independent of time may be expressed as \begin{equation} \label{eq:exmat} \left[ {\begin{array}{*{20}c} \openone & 0 & 0 & 0 & 0 \\ {-(\openone + A\dt)} & \openone & 0 & 0 & 0 \\ 0 & {-(\openone + A\dt)} & \openone & 0 & 0 \\ 0 & 0 & -\openone & \openone & 0 \\ 0 & 0 & 0 & -\openone & \openone \\ \end{array}} \right]\left[ {\begin{array}{*{20}c} {x_0 } \\ {x_1 } \\ {x_2 } \\ {x_3 } \\ {x_4 } \\ \end{array}} \right] = \left[ {\begin{array}{*{20}c} {x_{\rm in} } \\ b\dt \\ b\dt \\ 0 \\ 0 \\ \end{array}} \right]. \end{equation} Each entry of $\matr$ is a block of the dimension of $A$, and each entry of $\vec x$ and $\vec b$ is a block of the dimension of $x$. The first row sets the initial value, $x_0=x_{\rm in}$. The next rows give $x_{j+1}-(x_j+A x_j \dt)=b\dt$, corresponding to the discretisation of the differential equation via the Euler method. The final rows indicate equations where $x_{j+1}-x_j=0$. This is for the times where $x$ is required to be constant.
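As a concrete check of this encoding, the following sketch (illustrative only; NumPy and the small values chosen for $A$, $b$, $x_{\rm in}$ and the step size are assumptions, not taken from the paper) builds the block matrix of the Euler example above and verifies that solving the linear system reproduces direct Euler time stepping.

```python
import numpy as np

# Build the 5x5-block matrix of the Euler example above: row 0 fixes the
# initial value, rows 1-2 are Euler steps, rows 3-4 hold x constant.
# A, b, x_in and dt are arbitrary illustrative values.
A = np.array([[-1.0, 0.2], [0.0, -0.5]])
b = np.array([0.1, 0.0])
x_in = np.array([1.0, 2.0])
dt = 0.01
n, Nt = A.shape[0], 4

I = np.eye(n)
M = np.zeros((n * (Nt + 1), n * (Nt + 1)))
rhs = np.zeros(n * (Nt + 1))
M[0:n, 0:n] = I                        # x_0 = x_in
rhs[0:n] = x_in
for j in (1, 2):                       # x_j - (1 + A dt) x_{j-1} = b dt
    M[n*j:n*(j+1), n*j:n*(j+1)] = I
    M[n*j:n*(j+1), n*(j-1):n*j] = -(I + A * dt)
    rhs[n*j:n*(j+1)] = b * dt
for j in (3, 4):                       # x_j - x_{j-1} = 0
    M[n*j:n*(j+1), n*j:n*(j+1)] = I
    M[n*j:n*(j+1), n*(j-1):n*j] = -I

x_blocks = np.linalg.solve(M, rhs).reshape(Nt + 1, n)

# Compare against direct Euler iteration of the differential equation.
x = x_in.copy()
for _ in range(2):
    x = x + A @ x * dt + b * dt
assert np.allclose(x_blocks[2], x) and np.allclose(x_blocks[4], x)
```

The same loop structure extends directly to the general higher-order blocks given below.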
More generally, we use the Euler method as a starting method, then continue with a higher-order method, then for the final rows again have $x_{j+1}-x_j=0$ to ensure that all the final values are equal. Therefore, we set the blocks of $\matr$ as, for $N_t\ge 2k$, \begin{equation} \label{eq:explicit} \begin{array}{*{20}l} \matr_{j,j} = \openone, & 0\le j < k, \quad N_t/2 < j \le N_t, \\ \matr_{j,j-1} = -(\openone + A\dt), & 1\le j < k, \\ \matr_{j,j-k+\ell} = \alpha_\ell \openone - \beta_\ell A \dt, & k\le j \le N_t/2, \quad 0\le \ell\le k, \\ \matr_{j,j-1} = -\openone, & N_t/2 < j \le N_t. \\ \end{array} \end{equation} We will always require $N_t\ge 2k$ when using $\matr$, because otherwise there are not enough time steps to start the linear multistep method. We also set the blocks of $\vec b$ as \begin{equation} \label{eq:explicitb} \begin{array}{*{20}l} b_{0} = x_{\rm in}, & \\ b_{j} = b\dt, & 1 \le j < k, \\ b_{j} = \sum_{\ell=0}^k \beta_\ell b\dt, & k\le j \le N_t/2, \\ b_{j} = 0, & N_t/2< j \le N_t. \\ \end{array} \end{equation} We require $A$, $b$, and $x_{\rm in}$ to be sparse, with no more than $s$ nonzero elements in any row or column. We assume that the oracles are of the same form as in Ref.\ \cite{Berry09}. That is, the oracle for $A$ is a unitary operator acting as \begin{equation} O_A \ket{j,\ell}\ket{z} = \ket{j,\ell} \ket{z\oplus A^{[j,\ell]}}. \end{equation} Note that a superscript in square brackets denotes indexing within the block $A$. We also require an oracle for the sparseness, that locates the nonzero elements. Given a function $f(j,\ell)$ that gives the row index of the $\ell$th nonzero element in column $j$, we require a unitary oracle \begin{equation} O_F \ket{j,\ell} = \ket{j,f(j,\ell)}. \end{equation} Because $A$ is not Hermitian, we require a similar oracle to give the positions of the nonzero elements in a given row. We also require oracles to give the values and locations of non-zero elements for $b$ and $x_{\rm in}$.
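To illustrate this sparse-access model, a classical stand-in might look as follows; the dictionary-based storage and function names are hypothetical, and only the query interface mirrors the oracles $O_A$ and $O_F$ (here returning values and row indices directly, rather than acting unitarily).

```python
# Classical stand-in for the sparse-access model: the matrix A is available
# only through value queries (oracle_A) and sparsity queries (oracle_f),
# which give the l-th nonzero element of column j and its row index.
cols = {                       # column -> list of (row, value); here s = 2
    0: [(0, -1.0), (1, 0.5)],
    1: [(1, -2.0)],
    2: [(0, 0.3), (2, -0.7)],
}

def oracle_f(j, l):
    """Row index of the l-th nonzero element in column j (like O_F)."""
    return cols[j][l][0]

def oracle_A(j, l):
    """Value of the l-th nonzero element in column j (like O_A)."""
    return cols[j][l][1]

def matvec(x):
    """Compute A x using only the two access functions, column by column."""
    y = [0.0] * len(x)
    for j in range(len(x)):
        for l in range(len(cols[j])):
            y[oracle_f(j, l)] += oracle_A(j, l) * x[j]
    return y

# A is 3x3 with at most 2 nonzero entries per column.
assert matvec([1.0, 0.0, 0.0]) == [-1.0, 0.5, 0.0]
```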
These oracles ensure that the initial state corresponding to $\vec b$ can be prepared efficiently. Alternatively, it is also possible to consider $b$ and $x_{\rm in}$ such that the efficient preparation procedure of Ref.\ \cite{Grover02} can be used. A linear system of equations can be solved using the algorithm of Ref.\ \cite{Harrow09} with complexity $\tilde O(\log(N)s^4\kappa^2/\epsilon_L)$, where $\kappa$ is the condition number of the matrix $\matr$, and $\epsilon_L$ is the allowable error. (Note that the power of $s$ should be 4, not 2 as given in Ref.\ \cite{Harrow09}.) We use the symbol $\epsilon_L$ to indicate the allowable error for the solution of the linear systems, which is distinct from the allowable error for the solution of the differential equation. The scaling can be improved if the method reported in Ref.\ \cite{Ambainis10} is used (the method is not given there, but is reported as being in a manuscript in preparation). The scaling reported there is $O(\kappa^{1+o(1)}\log^cN)$, which does not include the scaling in $s$ or $\epsilon_L$. \section{Bounding the condition number} To determine the complexity in either case it is necessary to determine the value of the condition number $\kappa$. To bound this condition number we first determine bounds on the norms $\|\matr\|$ and $\|\matr^{-1}\|$. \begin{lemma} \label{norm1} The matrix $\matr$, with blocks given by Eq.\ \eqref{eq:explicit}, satisfies $\|\matr\|=O(1)$ provided $\dt=O(1/\|A\|)$. \end{lemma} \begin{proof} To determine the upper bound on $\|\matr\|$, we express $\matr$ as a sum of block-diagonal matrices, and use the triangle inequality. Let us define $\matr^{\{\ell\}}$ to be the block diagonal matrix with all entries zero, except \begin{equation} \matr^{\{\ell\}}_{j,j-\ell} = \matr_{j,j-\ell}.
\end{equation} We then have \begin{equation} \matr = \sum_{\ell=0}^k \matr^{\{\ell\}} , \end{equation} so, via the triangle inequality, \begin{equation} \|\matr\| \le \sum_{\ell=0}^k \|\matr^{\{\ell\}}\|. \end{equation} The norm of a block-diagonal matrix is just the maximum norm of the blocks, so we find \begin{align} \|\matr^{\{0\}}\| &\le \max(1,|\alpha_k|+|\beta_k| \dt \|A\|), \nn \|\matr^{\{1\}}\| &\le \max(1 + \dt\|A\|,|\alpha_{k-1}|+|\beta_{k-1}| \dt \|A\|), \nn \|\matr^{\{\ell\}}\| &\le |\alpha_{k-\ell}|+|\beta_{k-\ell}| \dt \|A\|, \qquad 1<\ell\le k. \end{align} Because we require that $\dt=O(1/\|A\|)$, each of these norms is $O(1)$, and hence the overall norm is $O(1)$. \end{proof} \begin{lemma} \label{norm2} Suppose that the multistep method is $A(\alpha)$-stable, the matrix $A$ may be diagonalised as $A=VDV^{-1}$, and the eigenvalues of $A$ all satisfy $|\arg(-\lambda_i)|\le \alpha$. Then the matrix $\matr$, with blocks given by Eq.\ \eqref{eq:explicit}, satisfies $\|\matr^{-1}\|=O(N_t\kappa_V)$, where $\kappa_V$ is the condition number of $V$. \end{lemma} \begin{proof} To upper bound $\|\matr^{-1}\|$, we use a method analogous to that used to bound the error in Ref.\ \cite{Hairer}. As in the condition for Theorem \ref{thm2}, we assume that $A$ may be diagonalised as \begin{equation} A = V D V^{-1}. \end{equation} Note that $A$ need not be Hermitian, so $V$ need not be unitary. If we define ${\cal V}$ to be the block matrix with $V$ on the diagonal, and ${\cal D}$ to be the matrix corresponding to $\matr$ except with $A$ replaced with $D$, then $\matr={\cal V}{\cal D}{\cal V}^{-1}$. We obtain \begin{equation} \|\matr^{-1}\|\le \|{\cal V}\|\cdot\|{\cal D}^{-1}\|\cdot\|{\cal V}^{-1}\| = \kappa_V \|{\cal D}^{-1}\|. \end{equation} To bound $\|\matr^{-1}\|$ we therefore just need to bound $\|{\cal D}^{-1}\|$. The matrix ${\cal D}$ corresponds to the linear multistep solution of decoupled scalar differential equations.
That is, taking $z=V^{-1}x$, the differential equation becomes $N_x$ decoupled differential equations \begin{equation} \dot z^{[j]}(t) = \lambda_j z^{[j]}(t) + [V^{-1}b]^{[j]}. \end{equation} The matrix ${\cal D}$ gives decoupled linear multistep solutions of each of these differential equations. It may therefore be written in block-diagonal form, with each block corresponding to the solution of one of these decoupled equations. The value of $\|{\cal D}^{-1}\|$ can therefore be bounded by the maximum of the norm of the inverse of each of these blocks. To bound the norm of the inverse, we can take ${\cal D}\vec z = \vec y$, and determine a bound on the norm of $\vec z$ for a given norm of $\vec y$. We can determine this by separately examining the uncoupled blocks in ${\cal D}$. For each of these blocks (labelled by $j$) we have the linear multistep equation, for $m=0,\ldots,N_t/2-k$, \begin{equation} \label{eq:multier} \sum_{i=0}^k (\alpha_i - \dt \lambda_j \beta_i) z_{m+i}^{[j]} = y_{m+k}^{[j]}. \end{equation} We also have, for the initial condition, $z_{0}^{[j]} = y_{0}^{[j]}$, and for the Euler method as the starting method with $0\le m<k-1$, \begin{equation} z_{m+1}^{[j]} - (1 + \dt \lambda_j) z_{m}^{[j]} = y_{m+1}^{[j]}. \end{equation} For the end of the simulation, we have for $N_t/2\le m<N_t$, \begin{equation} z_{m+1}^{[j]} - z_{m}^{[j]} = y_{m+1}^{[j]}. \end{equation} We can see that Eq.\ \eqref{eq:multier} is equivalent to Eq.\ (7.11) in the method used to bound the error in Ref.\ \cite{Hairer}. We identify $z_{m+i}^{[j]}$ as equivalent to $e_{m+i}$ in Ref.\ \cite{Hairer}, and $y_{m+k}^{[j]}$ as equivalent to $\delta_h(x_m)$ in Ref.\ \cite{Hairer}. As in that method, we can define \begin{align} E_m &:= (z_{m+k-1}^{[j]},\ldots,z_{m+1}^{[j]},z_{m}^{[j]})^T, \nn \Delta_m &:= (y_{m+k}^{[j]}/(\alpha_k-\dt \lambda_j\beta_k),0,\ldots,0)^T.
\end{align} As the problem is equivalent to that considered in Ref.\ \cite{Hairer}, the result given in Eq.\ (7.24) of that reference holds: \begin{equation} \|E_{m+1}\| \le M\left(\|E_0\|+\sum_{\ell=0}^m \|\Delta_\ell\| \right), \end{equation} where $M$ is a constant depending only on the method. Using the definition of $\Delta_\ell$ gives \begin{align} \|E_{m+1}\| &\le M\left(\|E_0\|+\sum_{\ell=0}^{m} |y_{\ell+k}^{[j]}|/|\alpha_k-\dt \lambda_j\beta_k| \right) \nn &\le M\left(\|E_0\|+\sum_{\ell=0}^{m} |y_{\ell+k}^{[j]}|/|\alpha_k| \right). \end{align} In the last line we have used the fact that $A(\alpha)$-stability implies $\alpha_k \beta_k>0$; since ${\rm Re}\,\lambda_j \le 0$, this gives ${\rm Re}(\dt \lambda_j\beta_k/\alpha_k)\le 0$, so $|\alpha_k - \dt \lambda_j\beta_k|\ge|\alpha_k|$ and hence $|\alpha_k - \dt \lambda_j\beta_k|^{-1}\le |\alpha_k|^{-1}$. For the starting method, we have used the Euler method, and the result is simpler. For the Euler method, $E_m$ and $\Delta_m$ are scalars, and are just $z_m^{[j]}$ and $y_{m+1}^{[j]}$. The corresponding result is therefore, for $0< m < k$, \begin{align} |z_m^{[j]}| &\le M_E\left(|z_0^{[j]}|+\sum_{\ell=0}^{m-1} |y_{\ell+1}^{[j]}| \right) \nn &= M_E\sum_{\ell=0}^{m} |y_{\ell}^{[j]}|. \end{align} Here $M_E$ is the corresponding constant for the Euler method, and in the second line we have used $z_0^{[j]}=y_0^{[j]}$. For the end of the simulation, we have for $N_t/2\le m<N_t$, $z_{m+1}^{[j]} - z_{m}^{[j]} = y_{m+1}^{[j]}$, so \begin{equation} |z_m^{[j]}| \le |z_{N_t/2}^{[j]}| + \sum_{\ell=N_t/2+1}^m | y_\ell^{[j]} |. \end{equation} Now we can bound the norm of $E_0$ as \begin{align} \|E_0\| &\le \sum_{m=0}^{k-1} |z_m^{[j]}| \nn &\le M_E k \sum_{\ell=0}^{k-1} |y_{\ell}^{[j]}|. \end{align} We can use this result to bound $|z_{m}^{[j]}|$ as, for $k\le m \le N_t/2$, \begin{align} |z_{m}^{[j]}| &\le \| E_{m+1-k}\| \nn & \le M\left(\|E_0\|+\sum_{\ell=0}^{m-k} |y_{\ell+k}^{[j]}|/|\alpha_k| \right) \nn & \le M\left(M_E k \sum_{\ell=0}^{k-1} |y_{\ell}^{[j]}|+\sum_{\ell=k}^{m} |y_{\ell}^{[j]}|/|\alpha_k| \right) .
\end{align} For convenience we define the quantity \begin{equation} M_T := \max(MM_Ek,M/|\alpha_k|,M_E,1). \end{equation} Then we find that, for all $0\le m\le N_t$, \begin{equation} |z_{m}^{[j]}| \le M_T \sum_{\ell=0}^{m} |y_{\ell}^{[j]}|. \end{equation} Hence we can determine an overall upper bound on the norm of $z^{[j]}$ as \begin{align} \|z^{[j]}\|^2 &\le M_T^2\sum_{m=0}^{N_t} \left(\sum_{\ell=0}^{m} |y_{\ell}^{[j]}|\right)^2 \nn &\le M_T^2 N_t^2 \|y^{[j]}\|^2. \end{align} Summing over $j$, this then gives \begin{align} \|\vec z\| \le M_T N_t \|\vec y\|. \end{align} This result means that $\|{\cal D}^{-1}\|\le M_TN_t$, where $M_T$ depends only on the method. This then bounds the norm of $\matr^{-1}$ as \begin{equation} \|\matr^{-1}\| = O(N_t\kappa_V). \end{equation} \end{proof} We can now use these results to bound the condition number of $\matr$. \begin{theorem} \label{thm:conthm} Suppose that the multistep method is $A(\alpha)$-stable, the matrix $A$ may be diagonalised as $A=VDV^{-1}$, the eigenvalues of $A$ all satisfy $|\arg(-\lambda_i)|\le \alpha$, and $\dt=O(1/\|A\|)$. Then the matrix $\matr$, with blocks given by Eq.\ \eqref{eq:explicit}, has condition number $\kappa=O(N_t\kappa_V)$, where $\kappa_V$ is the condition number of $V$. \end{theorem} \begin{proof} The condition number is given by the formula \begin{equation} \kappa = \left(\max_{\vec x} \frac{\| \matr{\vec x} \|}{\|\vec x\|}\right) \left(\max_{\vec x} \frac{\|\vec x\|}{\| \matr{\vec x} \|}\right) = \|\matr\| \cdot \|\matr^{-1}\|. \end{equation} The conditions of this theorem ensure that the conditions of Lemmas \ref{norm1} and \ref{norm2} hold. Therefore we can use the bounds on $\|\matr\|$ and $\|\matr^{-1}\|$ from those lemmas to obtain $\kappa=O(N_t \kappa_V)$. \end{proof} This result for the condition number can be explained in a more intuitive way. Each value of $y_i^{[j]}$ is equivalent to an excitation of the differential equation at a single time. 
Therefore $\vec z$ is close to the solution of the differential equation with each of those individual excitations. An excitation cannot cause a growing solution, because of the stability condition. This means that the worst that an excitation can do is cause a solution that is displaced by a proportional amount for the remaining time. Therefore the norm of $\vec z$ cannot be more than a factor of $N_t$ times the norm of $\vec y$. \section{Algorithm for solving linear systems} We can use the bound on the condition number to estimate the complexity of the quantum algorithm. We will first explain the scaling in a simple way, then give a more rigorous result. Using Eq.\ \eqref{eq:nosteps} for the number of time steps, we have (ignoring dependence on many of the quantities) \begin{equation} \kappa = O\left( \frac{(\|A\|\Delta t)^{1+1/p}}{\epsilon^{1/p}} \right). \end{equation} Using this expression in the result for the complexity of solving linear systems from Ref.\ \cite{Harrow09} gives \begin{equation} \tilde O(\log(N)s^4(\|A\|\Delta t)^{2+2/p}/(\epsilon^{2/p}\epsilon_L)). \end{equation} If the technique of Ref.\ \cite{Ambainis10} can be used, then the scaling can be improved to \begin{equation} O(\log^c(N)(\|A\|\Delta t)^{1+1/p}/\epsilon^{1/p}). \end{equation} There is a question of whether the scaling in Ref.\ \cite{Harrow09} can be improved because we consider a specific application of the solution of linear systems. The scaling obtained in Ref.\ \cite{Harrow09} is based on a worst-case scenario, that $\|\vec b\|$ scales as $\Lambda_{\rm max} \|\vec x\|$. It is easily shown that $\|\vec x\|$ scales as $\sqrt{N_t}$, provided the magnitude of the solution of the differential equation does not vary greatly in time.
In contrast, the magnitude of $\vec b$ is given by \begin{align} \|\vec b\|^2 &= \|x_{\rm in}\|^2 + (k-1)\|b\|^2\dt^2 + (N_t/2-k+1) \|b\|^2 \dt^2 \left( \sum_{\ell=0}^k \beta_\ell \right)^2 \nn &\le \|x_{\rm in}\|^2 +(k-1)\|b\|^2\dt^2 + \dt \Delta t \|b\|^2 \left( \sum_{\ell=0}^k \beta_\ell \right)^2. \end{align} In the case that we use the Euler method, then $\dt\propto 1/\Delta t$, so $\|\vec b\|=O(1)$. In that case it is possible to solve the linear systems more efficiently than the worst-case scaling given by Ref.\ \cite{Harrow09}. However, here we are concerned with using higher-order linear multistep methods. For these methods the scaling of $\dt\,\Delta t$ is close to $N_t$. In that case $\|\vec b\|$ has similar scaling to $\Lambda_{\rm max} \|\vec x\|$, so there is not a significant advantage to using this approach to improve on the result in \cite{Harrow09}. Therefore we will just use the result stated in Ref.\ \cite{Harrow09}. Before obtaining the overall scaling, another factor that needs to be considered is the scaling for creating the state encoding $\vec b$. This is because the algorithm of Ref.\ \cite{Harrow09} uses an amplitude amplification approach, which requires the preparation of this state at each step. Therefore, the overall complexity is multiplied by the complexity of performing this state preparation. We assume that we have oracles that give the elements of $x_{\rm in}$ and $\vec b$ in the computational basis, and that these vectors are $s$-sparse. We find the following result. \begin{lemma} \label{lem:prep} The state encoding $\vec b$, \begin{equation} \ket{\vec b} \propto \sum_{j,\ell} \vec b_j^{[\ell]} \ket{j,\ell}, \end{equation} can be prepared using $O(\sqrt{s}+\log(N_t))$ calls to the oracles for $x_{\rm in}$ and $b$, provided the normalisations of $x_{\rm in}$ and $b$ are known. \end{lemma} \begin{proof} For this preparation we can assume that the normalisations of $x_{\rm in}$ and $b$ are known.
That is because this normalisation can be determined with complexity $O(s)$, and need only be determined once. Because the overall complexity of the simulation is greater than this, it can just be assumed that the normalisation is known. A state of dimension $s$ can be prepared with complexity $O(\sqrt{s})$ using the method of Ref.\ \cite{Grover00}. As discussed above, we assume an oracle of the form used in Ref.\ \cite{Berry09} for the sparseness. This means that it only requires one oracle call to prepare an $s$-sparse state from a dimension $s$ state with the same coefficients \cite{Berry09}. Therefore the complexity of preparing an $s$-sparse state is also $O(\sqrt{s})$. Let us encode the state in three registers (which may themselves be composed of multiple qubits). The first is one qubit encoding $\ket{0}$ for the times $t_1,\ldots,t_{N_t/2}$, and $\ket{1}$ for the times $t_0,t_{N_t/2+1},\ldots,t_{N_t}$. The second register provides the remainder of the encoding of the time. The third register is of dimension $N_x$, and encodes $b$ or $x_{\rm in}$. By performing a rotation on the first qubit, based on the normalisations of $x_{\rm in}$ and $b$, we obtain the correct relative weighting of the two time ranges. Then, conditional on the qubit being in the state $\ket{0}$, we can prepare a superposition of times $t_1,\ldots,t_{N_t/2}$ in the second register, as well as a state encoding $b$ in the third register. Conditional on the qubit being in the state $\ket{1}$, we can prepare the time $t_0$ in the second register, and $x_{\rm in}$ in the third register. The complexity of these controlled state preparations will be $O(\sqrt{s})$. The complexity of preparing the superposition of times can be made $O(\log(N_t))$ simply by choosing $N_t$ to be a power of two (which does not change the scaling).
\end{proof} We now translate the complexity of solving linear systems into the complexity of obtaining a state corresponding to the solution of the differential equation. \begin{theorem} \label{thm:final} Suppose that the multistep method is of order $p$ and $A(\alpha)$-stable, the matrix $A$ may be diagonalised as $A=VDV^{-1}$, the eigenvalues of $A$ all satisfy $|\arg(-\lambda_i)|\le\alpha$, and \begin{align} \label{eq:varcon} \max_{t\in[t_0,t_0+\Delta t]}\|x(t)\| &= O(\|x(t_0+\Delta t)\|), \\ \label{eq:xfbig} \epsilon &= o(\|x_{\rm in}\|). \end{align} Then a state encoding the solution of the differential equation at time $t_0+\Delta t$ may be obtained to within trace distance $\epsilon$ using \begin{equation} \tilde O\left(\log(N_x)s^{9/2} (\|A\|\Delta t)^{5/2}\kappa_V^{23/4}(\|x_{\rm in}\|+\|b\|/\|A\|)^{5/4}\|x(t_0+\Delta t)\|/\epsilon^{9/4}\right) \end{equation} calls to the oracles for $A$, $b$, and $x_{\rm in}$. \end{theorem} \begin{proof} There are two main issues that we need to take into account in determining the complexity of obtaining the solution of the differential equation. The first is the probability of obtaining the correct time, and the second is the relation of the allowable error for solving the differential equation to the allowable error for the solution of the linear systems. To obtain the correct final state, one would need to measure the time in the ancilla register, and obtain a value in the range $t_0+\Delta t$ to $t_0+2\Delta t$. This probability can, in principle, be small or even zero. However, the conditions of the theorem ensure that it is not. The probability of success is given by \begin{equation} p_{\rm time} = \frac{\sum_{j=N_t/2}^{N_t} \|x_j\|^2}{\braket{\psi}{\psi}}.
\end{equation} The normalisation of the state is given by \begin{align} \label{eq:norm} \braket{\psi}{\psi} &= \sum_{j=0}^{N_t} \|x_j\|^2 \nn &= \sum_{j=0}^{N_t} [\|x(t_0+\dt j)\|+O(\epsilon)]^2 \nn &= O\left(N_t\max_{t\in[t_0,t_0+\Delta t]}\|x(t)\|^2\right) \nn &= O\left(N_t\|x(t_0+\Delta t)\|^2\right). \end{align} Here we have bounded the error in the state at all times by $\epsilon$. This is because we choose parameters that bound the error at time $t_0+\Delta t$ by $\epsilon$. The bound on the error increases monotonically with time, so the error at earlier times will also be bounded by $\epsilon$. We have also used the conditions \eqref{eq:varcon} and \eqref{eq:xfbig} to obtain that $\epsilon=o(\|x(t_0+\Delta t)\|)$. Using this result for $\braket{\psi}{\psi}$ we obtain \begin{align} p_{\rm time} &= \Omega\left( \frac{\sum_{j=N_t/2}^{N_t} [\|x(t_0+\Delta t)\|+O(\epsilon)]^2}{N_t\|x(t_0+\Delta t)\|^2} \right) \nn &= \Omega(1). \end{align} Therefore the probability of obtaining the correct time does not change the scaling of the complexity. The other main issue that needs to be addressed to obtain the overall scaling is the scaling in the error. The $\epsilon_L$ used in the expression for the solution of the linear systems is the error in the state encoding all times, not the error in the estimate of $x(t_0+\Delta t)$ obtained (for which we use $\epsilon$). To determine the relationship between $\epsilon$ and $\epsilon_L$, we use the bound on the normalisation of the state. In solving for the state $\ket{\psi}$, we obtain the state coefficients approximating \begin{equation} x(t_0+\Delta t)/\sqrt{\braket{\psi}{\psi}}. \end{equation} Using the bound on the normalisation in Eq.\ \eqref{eq:norm}, if the error in the coefficients is no more than $\epsilon_L$, the error in the solution of the differential equation will be $O(\epsilon_L\sqrt{N_t}\,\|x(t_0+\Delta t)\|)$.
Therefore, the error in the solution of the differential equation will be bounded by $\epsilon$ if we take \begin{equation} \epsilon_L = \Theta\left(\epsilon/[\sqrt{N_t}\|x(t_0+\Delta t)\|]\right). \end{equation} Using this value of $\epsilon_L$ in the scaling from Ref.\ \cite{Harrow09}, together with $\kappa=O(N_t \kappa_V)$ from Theorem \ref{thm:conthm}, and the complexity of state preparation from Lemma \ref{lem:prep}, the number of oracle calls is \begin{equation} \tilde O(\log(N_x)s^{9/2} N_t^{5/2}\kappa_V^2\|x(t_0+\Delta t)\|/\epsilon). \end{equation} Note that we can omit $\log(N_t)$ from Lemma \ref{lem:prep}, because the $\tilde O$ notation omits logarithmic factors. For the same reason, we have replaced $N=N_x N_t$ with $N_x$. We use a value of $N_t$ that is sufficient to ensure that the error is no greater than $\epsilon$, and that $\dt=O(1/\|A\|)$, which is a condition needed to use Theorem \ref{thm:conthm}. If we take \begin{equation} N_t = \Theta \left( \|A\|\Delta t \sqrt{\frac{\kappa_V^3(\|x_{\rm in}\|+\|b\|/\|A\|)}{\epsilon}} + (\|A\|\Delta t)^{1+1/p} \left( \frac{\kappa_V^2(\|x_{\rm in}\|+\|b\|/\|A\|)}{\epsilon} \right)^{1/p} \right), \end{equation} then using Lemma \ref{lem:ersca}, the error will be bounded by $\epsilon$. In addition, because $\epsilon=o(\|x_{\rm in}\|)$, we obtain $N_t = \Omega (\|A\|\Delta t)$, which ensures that $\dt=O(1/\|A\|)$. We can simplify the result by taking \begin{equation} N_t = \Theta \left( (\|A\|\Delta t)^{1+1/p} \sqrt{\frac{\kappa_V^3(\|x_{\rm in}\|+\|b\|/\|A\|)}{\epsilon}} \right). \end{equation} Then the overall scaling of the number of black-box calls is \begin{equation} \tilde O\left(\log(N_x)s^{9/2} (\|A\|\Delta t)^{(5/2)(1+1/p)}\kappa_V^{23/4}(\|x_{\rm in}\|+\|b\|/\|A\|)^{5/4}\|x(t_0+\Delta t)\|/\epsilon^{9/4}\right). \end{equation} Because we are using the $\tilde O$ notation, which omits sublinear terms, we can omit $1/p$ in giving the result.
\end{proof} This result is somewhat conservative, because we have included the term due to the error in starting the linear multistep method. If we assume that this error is negligible, then we obtain \begin{equation} \tilde O\left(\log(N_x)s^{9/2} (\|A\|\Delta t)^{(5/2)(1+1/p)}\kappa_V^{2+5/p}(\|x_{\rm in}\|+\|b\|/\|A\|)^{5/(2p)}\|x(t_0+\Delta t)\|/\epsilon^{1+5/(2p)}\right). \end{equation} This has the same scaling in $\|A\|\Delta t$, but improved scaling in other quantities. \section{Conclusions} A quantum computer may be used to solve sparse systems of linear differential equations, provided the result may be encoded in a quantum state, rather than given explicitly. By encoding the differential equation as a linear system, and using the algorithm of Ref.\ \cite{Harrow09} for solving linear systems, the complexity is (including only scaling in $\|A\|$ and $\Delta t$), \begin{equation} \tilde O\left((\|A\|\Delta t)^{5/2}\right). \end{equation} This improves upon previous results for nonlinear differential equations, which had complexity exponential in $\Delta t$ \cite{Leyton08}. This algorithm has an enormous range of possible applications, because large systems of differential equations are ubiquitous in science and engineering. In particular they arise from the discretisation of partial differential equations. These results are for constant coefficients, because that enables an analytic error analysis. This approach can also be used to solve linear differential equations with time-dependent coefficients, though the error analysis will be more difficult. An interesting question is how the complexity will scale if the method referred to in Ref.\ \cite{Ambainis10} can be used. That reference refers to work in preparation providing an improved scaling for solving linear systems. The scaling is close to linear in the condition number, which would improve the scaling in $\|A\|\Delta t$ for the solution of linear differential equations.
Another interesting direction for future research is the problem of nonlinear differential equations. It is likely that the exponential scaling obtained in Ref.\ \cite{Leyton08} is fundamental, because quantum mechanics is linear. However, it may potentially be possible to improve the constant of the exponential scaling. \acknowledgements The author is grateful for enlightening discussions with Andrew Childs and Jon Tyson.
\section{Radial Velocity} \label{sec:1} The radial velocity of a star with $N$ companions is given by $ v (t) = \gamma + v_0 (t) $, where $ \gamma $ is a drift due to the global shift of the system center of mass, and\cite{Hilditch_2001} \begin{equation} v_0 (t) = \sum_{j=1}^{N} K_j \left( e_j \cos \omega_j + \cos (\omega_j + \nu_j ) \right) \ . \label{eq01} \end{equation} For each companion $j$, $ e_j $ is the eccentricity, $ \omega_j $ the longitude of the perihelion, $ \nu_j = \nu_j (t) $ the true anomaly at the date $t$, and \begin{equation} K_j = n_j a_j \, \frac{m_j}{\cal M} \, \sin I_j \, (1-e_j^2)^{-1/2} \ \label{eq02} \end{equation} the amplitude of radial velocity variations. $ n_j $ is the mean motion, $ a_j $ the semi-major axis, $ I_j $ the inclination of the orbital plane with respect to the line of sight, $ m_j $ the mass and $ {\cal M} $ the total mass of the system. The orbital period of each companion is given by $ P_j = 2 \pi / n_j $. \subsection{Elliptic Expansions} There is no explicit expression for the true anomaly $ \nu_j (t) $, but making use of the Kepler equation we can expand it in a power series in $ e_j $, such that\cite{Murray_Dermott_1999}: \begin{equation} \mathrm{e}^{i \nu_j} = \sum_{k=-\infty}^{+\infty} C_k (e_j) \, \mathrm{e}^{i k M_j} \ , \label{eq05} \end{equation} where $ M_j = n_j (t-T_{0j}) $ is the mean anomaly, $ T_{0j} $ the date for the passage at the perihelion and \begin{equation} C_k (e_j) = \frac{1}{2\pi} \int_{0}^{2\pi} \left( \cos E - e_j + i \sqrt{1-e_j^2} \sin E \right) \mathrm{e}^{-i k (E - e_j \sin E)} \, d E \ .
\label{eq06} \end{equation} To the fifth order in eccentricity, the coefficients $ C_k (e_j) $ for $ k=1$ and $ k=2$ become: \begin{equation} C_1 (e_j) = 1 - e_j^2 + \frac{7}{64} e_j^4 \ , \label{eq19} \end{equation} \begin{equation} C_2 (e_j) = \left( 1 - \frac{5}{4} e_j^2 + \frac{17}{48} e_j^4 \right) e_j \ . \label{eq20} \end{equation} Replacing expression (\ref{eq05}) in (\ref{eq01}) we can finally rewrite the radial velocity of the star as the real part of: \begin{equation} v_0 (t) = \sum_{j=1}^{N} K_j \, \mathrm{e}^{i \omega_j} \sum_{k = 1}^{+\infty} C_k (e_j) \, \mathrm{e}^{-i k n_j T_{0j}} \, \mathrm{e}^{i k n_j t} \ . \label{eq07} \end{equation} \section{Fourier Analysis} \label{sec:2} In this paper we will use an ordinary Fourier transform of the radial velocity, \begin{equation} F (\phi) = \frac{1}{2 \pi} \int_{-\infty}^{+\infty} v (t) \mathrm{e}^{-i \phi t} \, d t \ , \label{eq03} \end{equation} but other frequency analyses that make use of weight functions to ensure a better convergence with the data are possible. However, the calculations become more complicated, and harder to follow. Notice also that in the real case we cannot compute the transform using the previous expression, because we are restricted to a discrete number of observations $ N_\mathrm{obs} $ in a time span $ [0,T] $. Thus, \begin{equation} F (\phi) \approx \frac{1}{T} \int_{0}^{T} v (t) \, \mathrm{e}^{-i \phi t} \, d t \approx \frac{1}{T} \sum_{k=2}^{N_\mathrm{obs}} v_k \, \mathrm{e}^{-i \phi t_k} (t_k-t_{k-1}) \ , \label{eq04} \end{equation} where $ v_k $ is the star radial velocity measured for the date $ t_k $. \subsection{Determination of $\gamma$} Replacing $ v(t) = \gamma + v_0(t) $ in expression (\ref{eq03}) with $ \phi = 0 $ we get: \begin{equation} \gamma = F(0) \ .
\label{eq08} \end{equation} It is then possible to estimate $ \gamma $ using $ \phi = 0 $ in expression (\ref{eq04}). Once we have $ \gamma $ it is preferable to subtract its value from the data $ v_k $ and then continue the Fourier analysis (Eqs.\ref{eq03},\ref{eq04}) with the expression of $ v_0 (t) $ (Eq.\ref{eq07}). \subsection{Determination of $n_j$} The orbital frequency $ n_1 $ corresponding to the companion with the largest amplitude $ K_1 $ is given by the frequency $ \phi $ corresponding to the highest peak in the power spectrum, that is, \begin{equation} n_1 : \quad \forall \phi \ , \; | F(n_1) | \ge | F (\phi) | \ . \label{eq09} \end{equation} After finding $ n_1 $ it is easy to determine the remaining orbital parameters (see next section). Once the orbit of the first companion is completely established, it is recommended to subtract its contribution from the data $ v_k $ and then continue the Fourier analysis (Eqs.\ref{eq03},\ref{eq04}) with the expression of \begin{equation} v_0 - K_1 \left(e_1 \cos \omega_1 + \cos (\omega_1 + \nu_1) \right) \ . \label{eq10} \end{equation} We then repeat this procedure for the $ N - 2 $ remaining companions of the star. Thus, the $ n_j $ orbital frequencies are always given by the highest peak in the spectrum (Eq.\ref{eq09}) after subtracting the signal from the already detected companions (Eq.\ref{eq10}). \subsection{Determination of the remaining orbital parameters} Replacing expression (\ref{eq07}) in (\ref{eq03}) with $ \phi = n_j $ and $ \phi = 2 n_j $ we have \begin{equation} F(n_j) = K_j \mathrm{e}^{i \omega_j} C_1 (e_j) \, \mathrm{e}^{-i n_j T_{0j}} \ , \label{eq11} \end{equation} \begin{equation} F(2 n_j) = K_j \mathrm{e}^{i \omega_j} C_2 (e_j) \, \mathrm{e}^{-i 2 n_j T_{0j}} \ , \label{eq12} \end{equation} where the quantities $ F(n_j) $ and $ F(2 n_j) $ can be computed from the data using expression (\ref{eq04}). 
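As an illustration of the procedure described above, the following Python sketch (ours, not part of the original text; all parameter values are invented for illustration) builds a noiseless circular-orbit signal, estimates $\gamma$ from $F(0)$ as in (\ref{eq08}), subtracts it from the data, and then locates $n_1$ as the highest peak of $|F(\phi)|$ over a frequency grid, using the discrete sum (\ref{eq04}).

```python
import cmath
import math

def F(phi, times, values):
    """Discrete estimate of the Fourier transform, following eq. (4):
    F(phi) ~ (1/T) * sum_k v_k * exp(-i*phi*t_k) * (t_k - t_{k-1})."""
    T = times[-1] - times[0]
    total = 0j
    for k in range(1, len(times)):
        total += values[k] * cmath.exp(-1j * phi * times[k]) * (times[k] - times[k - 1])
    return total / T

# Synthetic circular orbit (e = 0), i.e. v(t) = gamma + K*cos(omega + n*t),
# with invented parameters in arbitrary units.
gamma_true, K_true, n_true, omega = 3.0, 1.0, 2.0, 0.7
times = [0.05 * k for k in range(900)]
values = [gamma_true + K_true * math.cos(omega + n_true * t) for t in times]

gamma_est = F(0.0, times, values).real          # eq. (8): gamma = F(0)
residual = [v - gamma_est for v in values]      # subtract gamma, as in the text

# eq. (9): n_1 maximizes |F(phi)| over a frequency grid
grid = [0.02 * j for j in range(1, 400)]
n_est = max(grid, key=lambda phi: abs(F(phi, times, residual)))
print(f"gamma estimate: {gamma_est:.3f}, n estimate: {n_est:.2f}")
```

On this synthetic signal the drift and the orbital frequency are recovered to within the grid resolution; with real, unevenly sampled data the same two steps apply unchanged, since (\ref{eq04}) does not require uniform sampling.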
Multiplying the above expressions by their conjugates, we get \begin{equation} | F(n_j) | = K_j | C_1 (e_j) | \quad \mathrm{and} \quad | F(2 n_j) | = K_j | C_2 (e_j) | \ , \label{eq13} \end{equation} which gives an implicit condition for the eccentricity, \begin{equation} f(e_j) = \frac{| C_2 (e_j) |}{| C_1 (e_j) |} = \frac{| F(2 n_j) |}{| F(n_j) |} \ ,\label{eq14} \end{equation} where $ e_j $ can be determined using the bisection method or Newton's method\cite{Press_etal_1992}. After determining $ e_j $ it is now straightforward to compute $ K_j $ from Eqs.(\ref{eq13}): \begin{equation} K_j = \frac{| F(n_j) |}{| C_1 (e_j) |} = \frac{| F(2 n_j) |}{| C_2 (e_j) |} \ . \label{eq15} \end{equation} From expressions (\ref{eq11}) and (\ref{eq12}) we finally have \begin{equation} \mathrm{e}^{i n_j T_{0j}} = \frac{F(n_j) \, C_2 (e_j)}{F(2 n_j) \, C_1 (e_j)} \label{eq16} \end{equation} and \begin{equation} \mathrm{e}^{i \omega_j} = \frac{F(n_j)}{K_j C_1 (e_j)} \, \mathrm{e}^{i n_j T_{0j}} = \frac{F^2(n_j) \, C_2 (e_j)}{K_j F(2 n_j) \, C_1^2 (e_j)} \ . \label{eq17} \end{equation} \section{Conclusion} For a single companion of a star, we are able to determine its orbital parameters directly from the observational data by computing the FFTs for three different frequencies, namely $ F(0) $, $ F(n) $ and $ F(2n) $. We chose $ n $ and $ 2 n $, but according to expression (\ref{eq07}) we could have chosen any frequency that is a multiple of $ n $. However, unless the eccentricity is extremely high, these two frequencies correspond to the highest peaks produced by the companion in the spectrum and are therefore easier to identify. Moreover, if the eccentricity is close to zero (which is often the case for ``hot Jupiters'' and close binaries), $ F(kn) \approx 0 $, except for $ k = 0 $ and $ k = 1 $. 
In this case $ \omega_j $ and $ T_{0j} $ cannot be determined, but it is still possible to establish the position of the planet in the orbit $ \lambda_j = \omega_j - n_j T_{0j} $ as \begin{equation} \mathrm{e}^{i \lambda_j} = \frac{F(n_j)}{K_j C_1 (e_j)} \ . \label{eq21} \end{equation} The orbital parameters determined with our method have errors that are proportional to the precision of the instrument and inversely proportional to the number of data points, since a large number of points improves the convergence between expressions (\ref{eq03}) and (\ref{eq04}). The agreement between the Fourier parameters and the true parameters can be improved if we perform a $ \chi^2 $ minimization after determining the orbit of each companion. This procedure should be fast using a standard method such as the Levenberg-Marquardt algorithm\cite{Press_etal_1992}, since the Fourier parameters are already close to the minimum value of $ \chi^2 $. Even though the FFT method is established for Keplerian orbits, it also works on realistic systems for which planet-planet interactions are weak. Indeed, this method has already been tested with success in the determination of the orbital parameters of three different planetary systems\cite{Correia_etal_2005,Lovis_etal_2006,Pepe_etal_2006}, where we obtained the same results as other classical alternative methods.
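As a numerical round-trip check of the method (ours, not part of the original text), the sketch below evaluates $C_k(e)$ by direct quadrature of (\ref{eq06}), fabricates $F(n)$ and $F(2n)$ from (\ref{eq11}) and (\ref{eq12}) with invented parameters, and then recovers $e$ and $K$ by bisection on (\ref{eq14}) followed by (\ref{eq15}). The bracket $[0,0.9]$ assumes a moderate eccentricity, for which the ratio $|C_2|/|C_1|$ increases with $e$.

```python
import cmath
import math

def C(k, e, N=2000):
    """C_k(e) from eq. (6), by a uniform Riemann sum; the integrand is smooth
    and 2*pi-periodic, so the sum converges very quickly."""
    s = 0j
    for m in range(N):
        E = 2 * math.pi * m / N
        s += (math.cos(E) - e + 1j * math.sqrt(1 - e * e) * math.sin(E)) \
             * cmath.exp(-1j * k * (E - e * math.sin(E)))
    return s / N

def recover_e_and_K(F_n, F_2n, iters=60):
    """Invert eq. (14) by bisection, then compute K from eq. (15)."""
    ratio = abs(F_2n) / abs(F_n)
    lo, hi = 1e-6, 0.9        # assumes a moderate eccentricity
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if abs(C(2, mid)) / abs(C(1, mid)) < ratio:
            lo = mid
        else:
            hi = mid
    e = 0.5 * (lo + hi)
    return e, abs(F_n) / abs(C(1, e))

# Round trip: build F(n), F(2n) from eqs. (11)-(12) with chosen parameters
K_true, e_true, omega, nT0 = 2.5, 0.3, 0.7, 1.1
F_n = K_true * cmath.exp(1j * omega) * C(1, e_true) * cmath.exp(-1j * nT0)
F_2n = K_true * cmath.exp(1j * omega) * C(2, e_true) * cmath.exp(-2j * nT0)
e_rec, K_rec = recover_e_and_K(F_n, F_2n)
print(f"e = {e_rec:.4f}, K = {K_rec:.4f}")
```

The same quadrature also illustrates the remark above about small eccentricities: for $e \approx 0$ one finds $|C_k(e)| = O(e^{k-1})$, so only the $k=1$ harmonic survives in practice.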
\section{Introduction and statement of results} Cluster algebras are a family of combinatorially-defined commutative algebras which were introduced by Fomin and Zelevinsky at the turn of the millennium to axiomatize and generalize patterns appearing in the study of dual canonical bases in Lie theory \cite{FZ02}. Since their introduction, cluster algebras have been discovered in the rings of functions on many important spaces, such as semisimple Lie groups, Grassmannians, flag varieties, and Teichm\"uller spaces \cite{BFZ05,Sco06,GLS08,GSV05}.\footnote{A more interesting and morally correct statement is that each of these spaces possesses a stratification such that each stratum naturally has a cluster algebra in its ring of functions.} In each of these examples, the cluster algebra is realized as the coordinate ring of a smooth variety. This makes it all the more surprising that the varieties associated to general cluster algebras can be singular; in fact, they can possess such nightmarish pathologies as a non-Noetherian singularity \cite{MulLA}. Various approaches have been introduced to mitigate this. \begin{itemize} \item Restricting to a subclass of cluster algebras with potentially better behavior: acyclic cluster algebras \cite{BFZ05}, locally acyclic cluster algebras \cite{MulLA,BMRS15}, or cluster algebras with a maximal green sequence \cite{BDP14,MulMGS}. \item Replacing the cluster algebra by a closely-related algebra with potentially better behavior: upper cluster algebras \cite{BFZ05,BMRS15}, the span of convergent theta functions \cite{GHKK}, or \textbf{lower bound algebras} \cite{BFZ05}. 
\end{itemize} In this note, we study the algebraic and geometric behavior of lower bound algebras.\footnote{More specifically, we consider lower bound algebras \emph{defined by a quiver} in the body of the paper, and consider the more general context of \emph{geometric type} in Appendix \ref{section: skew}.} \subsection{Lower bound algebras} Lower bound algebras were introduced in \cite{BFZ05} as a kind of `lazy approximation' of a cluster algebra, in the following sense. A cluster algebra is defined to be the subalgebra of a field of rational functions generated by a (usually infinite) set of \emph{cluster variables}, produced by a recursive procedure called \emph{mutation}. A lower bound algebra is defined to be the subalgebra generated by truncating this process at a specific finite set of steps. The resulting algebra is contained in the associated cluster algebra and is manifestly finitely generated. A lower bound algebra is constructed from an \textbf{ice quiver} $\mathsf{Q}$: this is a quiver (i.e. a finite directed graph) without loops or directed $2$-cycles, in which each vertex is designated \textbf{unfrozen} or \textbf{frozen}. As a matter of convenience, we assume the vertices of $\mathsf{Q}$ have been indexed by the numbers $\{1,2,...,n\}$. To each unfrozen vertex $i$, we associate a pair of monomials $p_i^+,p_i^-\in \mathbb{Z}[x_1,x_2,...,x_n]$ as follows. \begin{equation} p_i^+ \coloneqq \prod_{\stackrel{\text{arrows }a\in \mathsf{Q}}{\text{source}(a)=i}}x_{\text{target}(a)},\hspace{1cm} p_i^- \coloneqq \prod_{\stackrel{\text{arrows }a\in \mathsf{Q}}{\text{target}(a)=i}}x_{\text{source}(a)} \end{equation} Each vertex then determines a Laurent polynomial $x_i'$, called the \textbf{adjacent cluster variable at $i$}, which is defined by the following formula.\footnote{This is an abuse of terminology. 
Technically speaking, a frozen vertex $i$ should not have an adjacent cluster variable $x_i'$, and instead we should include $x_i^{-1}$ as a generator (though this latter step is a matter of some debate). We are streamlining the process by calling the inverse $x_i^{-1}$ `the adjacent cluster variable at $i$'.} \begin{equation}\label{eq: mutation} x_i' \coloneqq \left\{\begin{array}{cc} x_i^{-1}(p_i^++p_i^-) & \text{if $i$ is unfrozen} \\ x_i^{-1} & \text{if $i$ is frozen} \end{array}\right\} \end{equation} The \textbf{lower bound algebra} $\L(\mathsf{Q})$ defined by $\mathsf{Q}$ is the subalgebra of $\mathbb{Z}[x_1^{\pm1},...,x_n^{\pm1}]$ generated by the variables $x_1,x_2,...,x_n$ and the adjacent cluster variables $x_1',x_2',...,x_n'$. \begin{figure}[h!t] \begin{tikzpicture}[scale=1.5] \node[mutable] (1) at (-1,0) {$1$}; \node[mutable] (2) at (0,0) {$2$}; \node[mutable] (3) at (1,.5) {$3$}; \node[frozen] (4) at (1,-.5) {$4$}; \draw[-angle 90,relative,out=15,in=165] (1) to (2); \draw[-angle 90,relative,out=-15,in=-165] (1) to (2); \draw[-angle 90] (2) to (3); \draw[-angle 90] (2) to (4); \end{tikzpicture} \caption{An ice quiver (the unique frozen vertex is depicted as a square)} \label{fig: mutationexample} \end{figure} \begin{ex} Consider the ice quiver $\mathsf{Q}$ in Figure \ref{fig: mutationexample}. The four adjacent cluster variables are below. \[ x_1' = \frac{x_2^2+1}{x_1},\;\;\; x_2'=\frac{x_3x_4+x_1^2}{x_2},\;\;\; x_3'=\frac{1+x_2}{x_3},\;\;\; x_4'=\frac{1}{x_4} \] \end{ex} \subsection{Relations in $\L(\mathsf{Q})$} We first consider the problem of finding relations among the generators of $\L(\mathsf{Q})$. Each adjacent cluster variable satisfies a \textbf{defining relation} immediately from its definition. \begin{equation} \forall i \text{ unfrozen} ,\; \;\; x_i'x_i = (p_i^++p_i^-) \end{equation} \begin{equation} \forall i \text{ frozen} ,\; \;\;x_i'x_i =1 \end{equation} A more interesting class of relations is given by the following proposition. 
\begin{prop}[The cycle relations]\label{prop: cyclerels} For each directed cycle of unfrozen vertices $v_1\rightarrow v_2 \rightarrow \cdots \rightarrow v_k\rightarrow v_{k+1}=v_1$ in $\mathsf{Q}$, the following \textbf{cycle relation} holds. \begin{equation}\label{eq: cyclerel} \sum_{\stackrel{S\subset \{1,2,...,k\}}{S\cap (S+1)=\emptyset}} (-1)^{|S|}\left(\prod_{i\in S} \frac{p_{v_i}^+p_{v_{i+1}}^-}{x_{v_i}x_{v_{i+1}}}\right)\left(\prod_{i\not\in S\cup (S+1)} x'_{v_i}\right) = \prod_{i=1}^k\frac{p_{v_i}^+}{x_{v_i}}+ \prod_{i=1}^k\frac{p_{v_i}^-}{x_{v_i}} \end{equation} \end{prop} \noindent We note that the expressions on either side reduce to polynomials in the generators, despite the presence of fractions. Also note that choosing a different initial vertex $v_1$ in the same directed cycle does not change the corresponding cycle relation. \begin{figure}[h!t] \begin{tikzpicture}[scale=1] \node[mutable] (1) at (90:1) {$1$}; \node[mutable] (2) at (-30:1) {$2$}; \node[mutable] (3) at (210:1) {$3$}; \draw[-angle 90] (1) to (2); \draw[-angle 90] (2) to (3); \draw[-angle 90] (3) to (1); \end{tikzpicture} \caption{An ice quiver (no frozen vertices)} \label{fig: 3cycle} \end{figure} \begin{rem} A quiver $\mathsf{Q}$ is called \textbf{acyclic} if it has no directed cycles of unfrozen vertices. The unifying theme of this paper is the use of the cycle relations to generalize results about $\L(\mathsf{Q})$ which were previously known only when $\mathsf{Q}$ is acyclic (that is, when there are no cycle relations). \end{rem} \begin{ex} Let $\mathsf{Q}$ be the ice quiver in Figure \ref{fig: 3cycle}. The three adjacent cluster variables are \[ x_1'= \frac{x_2+x_3}{x_1},\;\;\; x_2'=\frac{x_3+x_1}{x_2},\;\;\; x_3'=\frac{x_1+x_2}{x_3} \] The defining relations here are obtained by clearing the denominators above. A non-trivial directed $3$-cycle starting at any vertex determines the cycle relation \[ x_1'x_2'x_3' - x_1'-x_2'-x_3'=2\] which may be verified by direct computation. 
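That direct computation can be automated. The following Python sketch (ours, not from the paper) evaluates the relation at several exact rational points, using the fact that for this quiver $x_i' = (x_{i+1} + x_{i-1})/x_i$ with indices taken modulo $3$.

```python
from fractions import Fraction

# Quiver of Figure 2: the directed 3-cycle 1 -> 2 -> 3 -> 1, no frozen vertices.
# Here p_i^+ = x_{i+1} and p_i^- = x_{i-1}, so x_i' = (x_{i+1} + x_{i-1}) / x_i.
def check_cycle_relation(x1, x2, x3):
    y1 = (x2 + x3) / x1
    y2 = (x3 + x1) / x2
    y3 = (x1 + x2) / x3
    return y1 * y2 * y3 - y1 - y2 - y3   # should equal 2 identically

for triple in [(1, 1, 1), (2, 3, 5), (7, 11, 13)]:
    x1, x2, x3 = map(Fraction, triple)
    print(triple, check_cycle_relation(x1, x2, x3))
```

Exact rational arithmetic (rather than floats) is used so that equality with $2$ can be tested literally; of course, agreement at finitely many points is only evidence, while the proposition gives the identity of rational functions.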
\end{ex} \subsection{A presentation of $\L(\mathsf{Q})$} We may ask whether there are other relations in $\L(\mathsf{Q})$ that are not an immediate consequence of the preceding relations; or more concretely, whether the defining relations and the cycle relations generate the entire ideal of relations among the generators. Explicitly, we consider the homomorphism of rings\footnote{The $y$-variables introduced here have no relation to the \emph{$y$-variables} or \emph{coefficient variables} introduced in \cite{FZ07}.} \[ \pi :\mathbb{Z}[x_1,x_2,...,x_n,y_1,y_2,...,y_n]\longrightarrow \mathbb{Z}[x_1^{\pm1},x_2^{\pm1},...,x_n^{\pm1} ]\] \[ \forall i, \;\;\; \pi(x_i) = x_i,\;\;\; \pi(y_i)=x_i'\] The image of this homomorphism is $\L(\mathsf{Q})$, and so $K_\mathsf{Q} \coloneqq \ker(\pi)$ is the \textbf{ideal of relations} among the generators of $\L(\mathsf{Q})$, where each adjacent cluster variable $x_i'$ has been replaced by the abstract variable $y_i$. The homomorphism $\pi$ descends to an isomorphism \[ \mathbb{Z}[x_1,x_2,...,x_n,y_1,y_2,...,y_n]/K_\mathsf{Q}\garrow{\sim} \L(\mathsf{Q}) \] We will say a directed cycle $v_1\rightarrow v_2 \rightarrow \cdots \rightarrow v_k\rightarrow v_{k+1}=v_1$ is \textbf{vertex-minimal} if no vertex appears more than once and there is no directed cycle whose vertex set is a proper subset of $\{v_1,v_2,...,v_k\}$. \begin{thm}\label{thm: relations} The ideal of relations $K_\mathsf{Q}$ is generated by the following elements. 
\begin{itemize} \item For each unfrozen vertex $i$, \begin{equation}\label{eq: mutationrel} y_ix_i - p_i^+-p_i^- \end{equation} \item For each frozen vertex $i$, \begin{equation}\label{eq: inverserel} y_ix_i-1 \end{equation} \item For each vertex-minimal directed cycle of unfrozen vertices $v_1\rightarrow v_2 \rightarrow \cdots \rightarrow v_k\rightarrow v_{k+1}=v_1$, \begin{equation}\label{eq: cyclerelpoly} \sum_{\stackrel{S\subset \{1,2,...,k\}}{S\cap (S+1)=\emptyset}} (-1)^{|S|}\left(\prod_{i\in S} \frac{p_{v_i}^+p_{v_{i+1}}^-}{x_{v_i}x_{v_{i+1}}}\right)\left(\prod_{i\not\in S\cup (S+1)} y_{v_i}\right) - \prod_{i=1}^k\frac{p_{v_i}^+}{x_{v_i}}- \prod_{i=1}^k\frac{p_{v_i}^-}{x_{v_i}} \end{equation} which simplifies to a polynomial. \end{itemize} \end{thm} \noindent The theorem is true without the vertex-minimal condition, which is used here to reduce the set of relations. \begin{ex}\label{ex:3vertexgrobner} Let $\mathsf{Q}$ be the ice quiver in Figure \ref{fig: 3cycle}. By Theorem \ref{thm: relations}, $\L(\mathsf{Q})$ is isomorphic to the quotient of $\mathbb{Z}[x_1,x_2,x_3,y_1,y_2,y_3]$ by the ideal $K_\mathsf{Q}$ generated by the following $4$ relations. \[ K_\mathsf{Q} =\langle y_1x_1-x_2-x_3,y_2 x_2-x_3-x_1, y_3x_3-x_1-x_2, y_1y_2y_3-y_1-y_2-y_3-2\rangle \] \end{ex} \subsection{A Gr\"obner basis for $K_\mathsf{Q}$} We prove Theorem \ref{thm: relations} by means of a stronger result, that the given generators are a Gr\"obner basis for the ideal of relations $K_\mathsf{Q}$. Recall that, given a polynomial ring with a monomial order $<$, a \textbf{Gr\"obner basis} of an ideal $I$ is a generating set $\{g_1,g_2,...,g_k\}$ of $I$ satisfying the additional condition that $\{\text{in}_<(g_1),\text{in}_<(g_2),...,\text{in}_<(g_k)\}$ is a generating set of $\text{in}_<(I)$. 
The monomial orders relevant to us are those in which the $y$-variables are much more expensive than the $x$-variables, that is $\mathbf{x}^{\mathbf{\alpha}}\mathbf{y}^{\mathbf{\beta}}> \mathbf{x}^{\mathbf{\gamma}}\mathbf{y}^{\mathbf{\delta}}$ whenever $\sum_i \beta_i$ is larger than $\sum_i \delta_i$. An example of such a monomial order is the lexicographical order where the variables are ordered by \[ y_1>y_2>\cdots>y_n>x_1>x_2>\cdots>x_n. \] \begin{thm}\label{thm: grobner} For any monomial order of $\mathbb{Z}[x_1,x_2,...,x_n,y_1,y_2,...,y_n]$ in which all of the $y$-variables are much more expensive than all of the $x$-variables, the polynomials given in Theorem \ref{thm: relations} are a Gr\"obner basis for $K_\mathsf{Q}$. Consequently, the initial ideal $\text{in}_< K_\mathsf{Q}$ is a squarefree monomial ideal with generating set \[ \{x_iy_i \mid 1\leq i\leq n\}\cup \{ y_{v_1}y_{v_2}\cdots y_{v_k}\mid v_1\rightarrow v_2\rightarrow \cdots \rightarrow v_k\rightarrow v_{k+1} = v_1 \textrm{ is a vertex-minimal cycle in }\mathsf{Q}\}. \] \end{thm} \begin{rem} When $\mathsf{Q}$ is acyclic, Theorem \ref{thm: grobner} specializes to Corollary 1.17 in \cite{BFZ05}. The proof of Theorem \ref{thm: grobner} given in Section \ref{sect:generators} uses \cite[Corollary 1.17]{BFZ05} in an essential way, so our proof is not independent of the original result. \end{rem} \subsection{Simplicial complexes and Cohen-Macaulayness of lower bound algebras} From here on, we work over a field $\mathbb{K}$, and consider the $\mathbb{K}$-algebra $\mathcal{L}(\mathsf{Q})\otimes_\mathbb{Z}\mathbb{K}$. We will still refer to this algebra as a lower bound algebra and, though this is a slight abuse of notation, will simply denote it by $\mathcal{L}(\mathsf{Q})$. Similarly, we let $K_\mathsf{Q}$ denote the associated lower bound ideal, so that it is the kernel of the map $\pi:\mathbb{K}[x_1,\dots,x_n,y_1,\dots,y_n]\rightarrow \mathcal{L}(\mathsf{Q})$. 
To a squarefree monomial ideal $I\subseteq \mathbb{K}[z_1,\dots,z_n]$, one can associate a simplicial complex $\Delta$ on the vertex set $\{z_1,\dots,z_n\}$. This simplicial complex is called the \textbf{Stanley-Reisner complex} and is defined as follows: $\{z_{i_1},\dots, z_{i_r}\}$ is a face of $\Delta$ if and only if the monomial $z_{i_1}\cdots z_{i_r}\notin I$. Observe that the minimal non-faces of $\Delta$ are in one-to-one correspondence with a minimal generating set of $I$. For further information on Stanley-Reisner complexes, see the textbook \cite[Chapter 1]{MillerSturmfels}. Whenever the Stanley-Reisner complex is a simplicial ball or a sphere,\footnote{More precisely, we mean the geometric realization of the simplicial complex is homeomorphic to a ball or sphere, respectively. Whenever we refer to a simplicial complex as a topological object, we more precisely mean its geometric realization.} the corresponding face ring $\mathbb{K}[z_1,\dots, z_n]/I$ is a Cohen-Macaulay ring \cite{Munkres}. Furthermore, when $I = \textrm{in}_< J$ for some ideal $J\subseteq \mathbb{K}[z_1,\dots, z_n]$, we may also conclude that $J$ itself is Cohen-Macaulay (see e.g. \cite[Proposition 3.1]{BC03}). \begin{ex} We continue Example \ref{ex:3vertexgrobner} and observe that the initial ideal $\textrm{in}_< K_\mathsf{Q}$ (for a term order $<$ as in Theorem \ref{thm: grobner}) is $\langle x_1y_1, x_2y_2, x_3y_3, y_1y_2y_3\rangle$. The facets (i.e. the maximal faces) of the associated Stanley-Reisner complex are precisely those $\{z_1,z_2,z_3\}$ where $z_i$ is either $x_i$ or $y_i$ and at least one of $z_1, z_2, z_3$ is an $x_i$. This Stanley-Reisner complex is readily seen to be a simplicial ball, and is pictured in Figure \ref{fig:3vertexball}. 
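This can also be double-checked by brute force. The Python sketch below (ours, not from the paper) enumerates the faces of the complex directly from the minimal non-faces, counts facets, and confirms that the Euler characteristic is $1$, as it must be for a $2$-ball.

```python
from itertools import combinations

# Stanley-Reisner complex of <x1y1, x2y2, x3y3, y1y2y3>: a subset of the
# vertex set is a face iff it contains no generator's support.
vertices = ["x1", "x2", "x3", "y1", "y2", "y3"]
nonfaces = [{"x1", "y1"}, {"x2", "y2"}, {"x3", "y3"}, {"y1", "y2", "y3"}]

def is_face(subset):
    s = set(subset)
    return not any(nf <= s for nf in nonfaces)

faces = [f for r in range(1, len(vertices) + 1)
         for f in combinations(vertices, r) if is_face(f)]
facets = [f for f in faces
          if not any(set(f) < set(g) for g in faces)]
# Euler characteristic of the geometric realization: sum over nonempty faces
euler = sum((-1) ** (len(f) - 1) for f in faces)

print(len(facets), "facets; Euler characteristic =", euler)
```

The count agrees with the description above: the $7$ facets are the triangles $\{z_1,z_2,z_3\}$ with $z_i \in \{x_i, y_i\}$, excluding $\{y_1,y_2,y_3\}$, and $6 - 12 + 7 = 1$.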
\begin{figure} \begin{tikzpicture} \begin{scope} \draw[fill=blue!20] (0,0) circle (2); \clip (0,0) circle (2); \draw (90:1) circle (1.732); \draw (-30:1) circle (1.732); \draw (210:1) circle (1.732); \end{scope} \node[draw,circle,fill=white] at (90:1) {$x_1$}; \node[draw,circle,fill=white] at (210:1) {$x_2$}; \node[draw,circle,fill=white] at (330:1) {$x_3$}; \node[draw,circle,fill=white] at (150:2) {$y_1$}; \node[draw,circle,fill=white] at (30:2) {$y_2$}; \node[draw,circle,fill=white] at (270:2) {$y_3$}; \end{tikzpicture} \caption{The Stanley-Reisner complex for $\langle x_1y_1, x_2y_2, x_3y_3, y_1y_2y_3 \rangle$} \label{fig:3vertexball} \end{figure} \end{ex} This example generalizes, and we are able to conclude that all lower bound algebras are Cohen-Macaulay. \begin{thm}\label{thm:CM} Let $\mathsf{Q}$ be a quiver with $n$ vertices, let $K_\mathsf{Q}\subseteq \mathbb{K}[x_1,\dots,x_n,y_1,\dots,y_n]$ be its ideal of relations, and let $<$ be any monomial order where the $y$-variables are much more expensive than the $x$-variables. Let $\Delta_\mathsf{Q}$ be the Stanley-Reisner complex of the squarefree monomial ideal $\textrm{in}_< K_\mathsf{Q}$. \begin{enumerate} \item If $\mathsf{Q}$ is acyclic, then $\Delta_\mathsf{Q}$ is a simplicial sphere. \item If $\mathsf{Q}$ is not acyclic, then $\Delta_\mathsf{Q}$ is a simplicial ball. \end{enumerate} \end{thm} \noindent We show additional properties of $\Delta_\mathsf{Q}$. If $\mathsf{Q}$ is acyclic, then $\Delta_\mathsf{Q}$ is the boundary of a cross-polytope. In both cases, $\Delta_\mathsf{Q}$ satisfies the stronger condition of \emph{vertex-decomposability}. Details are in Section \ref{sect:combinatorics}. \begin{cor} For any $\mathsf{Q}$ the lower bound algebra $\L(\mathsf{Q})$ over a field $\mathbb{K}$ is Cohen-Macaulay. 
\end{cor} \begin{rems} \begin{enumerate} \item We prove Theorem \ref{thm:CM} by way of a more general result, which gives a larger class of simplicial complexes which are automatically vertex-decomposable balls (see Theorem \ref{homeo}). \item When $\mathsf{Q}$ is acyclic, $\L(\mathsf{Q})$ was already known to be Cohen-Macaulay; specifically, \cite[Corollary 1.17]{BFZ05} implies that $\L(\mathsf{Q})$ is a complete intersection, and, consequently, that it is Cohen-Macaulay. \end{enumerate} \end{rems} \subsection{Normality of lower bound algebras} Our last main result is the normality of the $\mathbb{K}$-algebra $\L(\mathsf{Q})$. \begin{thm}\label{thm:normalityOfLowerBounds} Every lower bound cluster algebra defined by a quiver is normal. \end{thm} Since $\L(\mathsf{Q})= \mathbb{K}[x_1,...,x_n,y_1,...,y_n]/K_\mathsf{Q}$ is finitely-generated, Serre's Criterion reduces normality to a pair of geometric conditions on the variety $\mathbb{V}(K_\mathsf{Q})$. \begin{itemize} \item (R1) The variety $\mathbb{V}(K_\mathsf{Q})$ has no codimension\--$1$ singularities. \item (S2) Any regular function on an open subset in $\mathbb{V}(K_\mathsf{Q})$ with codimension\--$2$ complement extends to a regular function on all of $\mathbb{V}(K_\mathsf{Q})$. \end{itemize} The Cohen-Macaulay property implies the S2 condition, and so normality of $\L(\mathsf{Q})$ reduces to proving there are no codimension\--$1$ singularities. As with Cohen-Macaulayness, this geometric question will be reduced to Stanley-Reisner combinatorics, along with a result of Knutson-Lam-Speyer \cite[Proposition 8.1]{KLS-Richardson}. See Section \ref{sect:normality} for further information. \begin{rem} Like our other results, Theorem \ref{thm:normalityOfLowerBounds} is only new in the case of non-acyclic $\mathsf{Q}$. In the acyclic setting, $\L(\mathsf{Q})$ is equal to its upper cluster algebra\footnote{This was proven with an additional assumption in \cite[Thm. 
1.18]{BFZ05}, and without said assumption in \cite{MulAU}.}, which is a normal domain by \cite[Prop. 2.1]{MulLA}. \end{rem} \subsection*{Structure of paper} Section \ref{section: algebra} considers relations in $\L(\mathsf{Q})$ and proves the associated results: Proposition \ref{prop: cyclerels}, Theorem \ref{thm: relations} and Theorem \ref{thm: grobner}. Section \ref{sect:combinatorics} introduces the relevant combinatorial tools, leading to the proof of Theorem \ref{thm:CM}. Section \ref{sect:normality} addresses normality, proving Theorem \ref{thm:normalityOfLowerBounds}. The paper concludes with a pair of appendices which frame the scope of the paper. Appendix \ref{section: singularity} considers the singularities of lower bound algebras directly, and provides an example to suggest this is a difficult problem. Appendix \ref{section: skew} explains how the results of the paper can be extended to \emph{skew-symmetrizable} lower bound algebras, which are more general but also somewhat less intuitive. \subsection*{Acknowledgements} This paper is the result of a summer 2015 REU project at the University of Michigan. This project was supported by Karen Smith's NSF grant DMS-1001764. We are also grateful to Sergey Fomin for numerous helpful comments on an early draft of this note. \section{Presentations and Gr\"obner Bases}\label{section: algebra} \subsection{Choice graphs and cycle relations} \noindent To every directed cycle of $\mathsf{Q}$ there corresponds a relation in $\L(\mathsf{Q})$. Let $\mathsf{Q}$ have a directed cycle $v_1 \to v_2 \to \cdots \to v_{k-1} \to v_k \to v_1$. By giving an alternate presentation for the product \begin{equation}\label{Product} \prod_{i=1}^k x_{v_i}' = \prod_{i=1}^k x_{v_i}^{-1} (p_{v_i}^+ + p_{v_i}^-), \end{equation} we acquire a nontrivial relation that holds in $\L(\mathsf{Q})$. It is our goal to expand the right-hand product as a sum, and from this, Proposition \ref{prop: cyclerels} will follow. 
Each term of the expansion of this product represents a choice, for each $i$, of either $p_{v_i}^+$ or $p_{v_i}^-$ from $x_{v_i}^{-1}(p_{v_i}^+ + p_{v_i}^-)$. Therefore, to each term, we associate a directed graph with $\mathbb{Z}/k\mathbb{Z}$ as its vertex set and $\{(i,i\pm1)\}_{i=1}^k$ as its set of arrows, where the sign of $\pm$ corresponds to the abovementioned choice of $p_{v_i}^+$ (corresponding to the positive sign because $x_{v_{i+1}}$ divides $p_{v_i}^+$) or $p_{v_i}^-$ (negative sign, since $x_{v_{i-1}}$ divides $p_{v_i}^-$). Call these graphs the \textbf{choice graphs} of the terms of the expansion of \eqref{Product}, and let $\mathfrak C$ denote the set of all choice graphs. Formally, we write the correspondence between choice graphs and terms as a function \begin{equation} M: \mathfrak C \to \{\text{monomials in }\mathbb{Z}[x_1^{\pm1},\dots,x_n^{\pm1}]\},\quad M(\mathsf g) = \prod_{i=1}^k x_{v_i}\1p_{v_i}^{\mathrm{sign}_{\mathsf g}(i)}, \end{equation} where \begin{equation} \mathrm{sign}_{\mathsf g}(i) = \begin{cases} + & \text{if } (i,i+1) \in \mathsf g,\\ - & \text{if } (i,i-1) \in \mathsf g. \end{cases} \end{equation} \begin{ex} Let $\mathsf{Q}$ be the quiver on $\mathbb{Z}/6\mathbb{Z}$ whose set of arrows is $\{(j,j+1)\}_{j=1}^6$. The choice graph in Figure \ref{fig: choice graph} represents the term \[ (x_{1}^{-1} p_{1}^+)(x_{2}^{-1} p_{2}^+)(x_{3}^{-1} p_{3}^-)(x_{4}^{-1} p_{4}^-)(x_{5}^{-1} p_{5}^-)(x_{6}^{-1} p_{6}^-) = x_{1}^{-1} p_{1}^+ \frac{p_{2}^+ p_{3}^-}{x_3 x_2} p_4^- \frac{p_5^-}{x_4} \frac{p_6^-}{x_5} x_{6}^{-1}. 
\] \end{ex} \begin{figure} \begin{tikzpicture}[scale=1] \node[mutable] (1) at (120:1) {$1$}; \node[mutable] (2) at (60:1) {$2$}; \node[mutable] (3) at (0:1) {$3$}; \node[mutable] (4) at (300:1) {$4$}; \node[mutable] (5) at (240:1) {$5$}; \node[mutable] (6) at (180:1) {$6$}; \draw[-angle 90] (2) to [bend left] (3); \draw[-angle 90] (1) to (2); \draw[-angle 90] (5) to (4); \draw[-angle 90] (4) to (3); \draw[-angle 90] (6) to (5); \draw[-angle 90] (3) to [bend left] (2); \end{tikzpicture} \caption{The choice graph for $(x_{1}^{-1} p_{1}^+)(x_{2}^{-1} p_{2}^+)(x_{3}^{-1} p_{3}^-)(x_{4}^{-1} p_{4}^-)(x_{5}^{-1} p_{5}^-)(x_{6}^{-1} p_{6}^-)$} \label{fig: choice graph} \end{figure} By our construction of $\mathfrak C$ and $M$, we have that \eqref{Product} may be written as \begin{equation}\label{eq: choice sum} \prod_{i=1}^k x_{v_i}^{-1} (p_{v_i}^+ + p_{v_i}^-) = \sum_{\mathsf g \in \mathfrak C} M(\mathsf g), \end{equation} and so it is our goal to expand the sum on the right of \eqref{eq: choice sum}. The following proposition gives us such an expansion. \begin{prop}\label{cycle} We may expand \eqref{eq: choice sum} as follows: \begin{equation}\label{cyclerel-eq} \prod_{i=1}^k x_{v_i}^{-1} (p_{v_i}^+ + p_{v_i}^-) = \prod_{i=1}^k \frac{p_{v_i}^+}{x_{v_{i-1}}} + \prod_{i=1}^k \frac{p_{v_i}^-}{x_{v_{i+1}}} + \sum_{\stackrel{\emptyset \neq S\subset \{1,2,...,k\}}{S\cap (S+1)=\emptyset}}(-1)^{|S|+1} \left(\prod_{i\in S} \frac{p_{v_i}^+}{x_{v_{i+1}}}\frac{p_{v_{i+1}}^-}{x_{v_i}}\right) \left(\prod_{i \notin S \cup (S+1)} x_{v_i}'\right) \end{equation} \end{prop} \begin{proof} \indent Note that a choice graph has a directed 2-cycle if and only if its associated term has a factor of the form $(x_{v_i}^{-1} p_{v_i}^+)(x_{v_{i+1}}^{-1} p_{v_{i+1}}^-)$, which is a monomial $\frac{p_{v_i}^+}{x_{v_{i+1}}} \frac{p_{v_{i+1}}^-}{x_{v_i}}$ in the variables $x_1,\dots,x_n$ since $x_{v_{i+1}}$ divides $p_{v_i}^+$ and $x_{v_i}$ divides $p_{v_{i+1}}^-$. 
Also notice that no two 2-cycles may share vertices, since each vertex meets the tail of one and only one arrow. It follows that a choice graph may have at most $\lfloor \frac k2 \rfloor$ 2-cycles. Finally notice that a 2-cycle may only exist on adjacent vertices. Now, let $\mathfrak S_j$ denote the collection of subsets $S$ of $\{1,\dots,k\}$ of size $j\geq1$ such that $S \cap (S+1) = \emptyset$, and let $\mathfrak S = \bigcup_{j=1}^{\lfloor \frac k2 \rfloor} \mathfrak S_j$. These subsets $S$ correspond to the ``left endpoints'' of the 2-cycles in choice graphs with $j$ 2-cycles.\\ \indent Let $C(S)$ be the set of all choice graphs with a 2-cycle on every pair $\{i,i+1\}$ for $i \in S \in \mathfrak S$, and let $C_0$ be the set of choice graphs with no 2-cycles, then we have \begin{equation}\label{eq: separation} \mathfrak C = C_0 \cup \left(\bigcup_{S \in \mathfrak S_1}C(S)\right) \cup \left(\bigcup_{S \in \mathfrak S_2}C(S)\right) \cup \cdots \cup \left(\bigcup_{S \in \mathfrak S_{\lfloor \frac k2 \rfloor}} C(S)\right). \end{equation} Since every $\mathsf g \in C(S)$ has a pair of arrows $(i,i+1), (i+1,i)$ for every $i \in S$, we have \begin{equation}\label{eq: partial choice} \sum_{\mathsf g \in C(S)} M(\mathsf g) = \left(\prod_{i\in S} \frac{p_{v_i}^+}{x_{v_{i+1}}}\frac{p_{v_{i+1}}^-}{x_{v_i}}\right) \left(\prod_{i \notin S \cup (S+1)} x_{v_i}^{-1}\left(p_{v_i}^+ + p_{v_i}^-\right)\right). \end{equation} We also have \[ \sum_{\mathsf g \in C_0} M(\mathsf g) = \prod_{i=1}^k \frac{p_{v_i}^+}{x_{v_{i-1}}} + \prod_{i=1}^k \frac{p_{v_i}^-}{x_{v_{i+1}}}, \] since there are precisely two choice graphs with no 2-cycles, corresponding to a consistent choice of either $+$ or $-$. We wish to use \eqref{eq: separation} to write \eqref{eq: choice sum} as a sum with summands of the form \eqref{eq: partial choice}. Such a sum must have precisely one term corresponding to each member of $\mathfrak C$. 
However, there is a certain amount of overcounting involved in \eqref{eq: separation}, since for any $S \in \mathfrak S$, we have \begin{equation}\label{overcounting} C(S) \subseteq C(S \smallsetminus \{i\}) \quad \text{for every } i \in S. \end{equation} We therefore proceed iteratively by way of the inclusion-exclusion principle. We first include a summand corresponding to $C(S)$ for every $S \in \mathfrak S_1$: \[ T_1 = \sum_{S \in \mathfrak S_1} \sum_{\mathsf g \in C(S)} M(\mathsf g). \] As (\ref{overcounting}) shows, for every $S \in \mathfrak S_2$, the summand $T_1$ contains two terms corresponding to each element of $C(S)$. Therefore, we now exclude a summand for each $C(S)$, $S \in \mathfrak S_2$: \[ T_2 = T_1 - \sum_{S \in \mathfrak S_2} \sum_{\mathsf g \in C(S)} M(\mathsf g). \] Again, (\ref{overcounting}) shows us that, for every $S \in \mathfrak S_3$, the summand $T_2$ excludes one term too many for each element of $C(S)$. We therefore define $T_3$ accordingly, and continue the process of inclusion and exclusion until we obtain \[ T_{\lfloor \frac k2 \rfloor} = \sum_{j=1}^{\lfloor \frac k2 \rfloor}\left((-1)^{j+1} \sum_{S \in \mathfrak S_j} \sum_{\mathsf g \in C(S)} M(\mathsf g)\right). 
\] Finally, we must include a term of $\sum_{\mathsf g \in C_0} M(\mathsf g)$ corresponding to $C_0$, and we conclude that \begin{align*} \sum_{\mathsf g \in \mathfrak C} M(\mathsf g) &= \sum_{\mathsf g \in C_0} M(\mathsf g) + T_{\lfloor \frac k2 \rfloor} \\ \prod_{i=1}^k x_{v_i}^{-1} (p_{v_i}^+ + p_{v_i}^-) &= \prod_{i=1}^k \frac{p_{v_i}^+}{x_{v_{i-1}}} + \prod_{i=1}^k \frac{p_{v_i}^-}{x_{v_{i+1}}} + \sum_{j=1}^{\lfloor \frac k2 \rfloor}\left((-1)^{j+1} \sum_{S \in \mathfrak S_j} \left(\prod_{i\in S} \frac{p_{v_i}^+}{x_{v_{i+1}}}\frac{p_{v_{i+1}}^-}{x_{v_i}}\right) \left(\prod_{i \notin S \cup (S+1)} \!\!\!x_{v_i}'\right)\right) \\ &= \prod_{i=1}^k \frac{p_{v_i}^+}{x_{v_{i-1}}} + \prod_{i=1}^k \frac{p_{v_i}^-}{x_{v_{i+1}}} + \sum_{\stackrel{\emptyset \neq S\subset \{1,2,...,k\}}{S\cap (S+1)=\emptyset}} \left((-1)^{|S|+1} \left(\prod_{i\in S} \frac{p_{v_i}^+}{x_{v_{i+1}}}\frac{p_{v_{i+1}}^-}{x_{v_i}}\right) \left(\prod_{i \notin S \cup (S+1)} \!\!\!x_{v_i}'\right)\right). \end{align*} \end{proof} Proposition \ref{prop: cyclerels} follows, since we now have \begin{align*} \prod_{i=1}^k x_{v_i}^{-1} (p_{v_i}^+ + p_{v_i}^-) - \sum_{\stackrel{\emptyset \neq S\subset \{1,2,...,k\}}{S\cap (S+1)=\emptyset}} \left((-1)^{|S|+1} \left(\prod_{i\in S} \frac{p_{v_i}^+}{x_{v_{i+1}}}\frac{p_{v_{i+1}}^-}{x_{v_i}}\right) \left(\prod_{i \notin S \cup (S+1)} \!\!\!x_{v_i}'\right)\right)&= \prod_{i=1}^k \frac{p_{v_i}^+}{x_{v_{i-1}}} + \prod_{i=1}^k \frac{p_{v_i}^-}{x_{v_{i+1}}} \\ \sum_{\stackrel{S\subset \{1,2,...,k\}}{S\cap (S+1)=\emptyset}} \left((-1)^{|S|} \left(\prod_{i\in S} \frac{p_{v_i}^+}{x_{v_{i+1}}}\frac{p_{v_{i+1}}^-}{x_{v_i}}\right) \left(\prod_{i \notin S \cup (S+1)} \!\!\!x_{v_i}'\right)\right) &= \prod_{i=1}^k\frac{p_{v_i}^+}{x_{v_{i-1}}}+ \prod_{i=1}^k\frac{p_{v_i}^-}{x_{v_{i+1}}}. 
\end{align*} \subsection{Generators for $K_\mathsf{Q}$}\label{sect:generators} Now, in addition to the \textbf{defining polynomials} $y_ix_i - (p_i^+ + p_i^-)$, $y_ix_i - 1$ given by the defining relations (\ref{eq: mutationrel}) and (\ref{eq: inverserel}), we have by Proposition \ref{cycle} that the ideal of relations $K_\mathsf{Q}$ also contains the \textbf{cycle polynomials}. We define the cycle polynomials in $\mathbb{Z}[x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_n]$ to be those polynomials coming from vertex-minimal directed cycles whose images under $\pi$ vanish by virtue of (\ref{cyclerel-eq}). That is, for every vertex-minimal directed cycle of unfrozen vertices $v_1 \to v_2 \to \cdots \to v_k \to v_1$ in $\mathsf{Q}$, we have the cycle polynomial \[ \sum_{\stackrel{S\subset \{1,2,...,k\}}{S\cap (S+1)=\emptyset}} (-1)^{|S|}\left(\prod_{i\in S} \frac{p_{v_i}^+}{x_{v_{i+1}}}\frac{p_{v_{i+1}}^-}{x_{v_i}}\right)\left(\prod_{i\not\in S\cup (S+1)} y_{v_i}\right) - \prod_{i=1}^k\frac{p_{v_i}^+}{x_{v_{i-1}}}- \prod_{i=1}^k\frac{p_{v_i}^-}{x_{v_{i+1}}}. \] Note again that this expression reduces to a polynomial in the $x$- and $y$-variables because each $x_{v_{i+1}}$ divides $p_{v_i}^+$ and each $x_{v_i}$ divides $p_{v_{i+1}}^-$. In Table 1, we present the cycle polynomials given by some basic ice quivers. 
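As a sanity check of this formula, consider the oriented $3$-cycle $1 \to 2 \to 3 \to 1$ with all vertices unfrozen, so that (reading indices modulo $3$) $p_i^+ = x_{i+1}$ and $p_i^- = x_{i-1}$. The only sets $S$ with $S \cap (S+1) = \emptyset$ are $\emptyset$ and the three singletons. The singleton $S = \{i\}$ contributes $-\frac{p_i^+}{x_{i+1}}\frac{p_{i+1}^-}{x_i}\,y_{i+2} = -y_{i+2}$, while the two subtracted products each telescope to $1$:
\[ \prod_{i=1}^3 \frac{p_i^+}{x_{i-1}} = \frac{x_2}{x_3}\cdot\frac{x_3}{x_1}\cdot\frac{x_1}{x_2} = 1 \quad \text{and} \quad \prod_{i=1}^3 \frac{p_i^-}{x_{i+1}} = \frac{x_3}{x_2}\cdot\frac{x_1}{x_3}\cdot\frac{x_2}{x_1} = 1. \]
The cycle polynomial is therefore $y_1y_2y_3 - y_1 - y_2 - y_3 - 2$, in agreement with the first entry of Table 1.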
\begin{table} \begin{center} \caption{Cycle polynomials in several examples} \begin{tabular}{| >{\centering\arraybackslash}m{2in} | >{\centering\arraybackslash}m{2in} |} \hline\vspace{.1cm} \begin{tikzpicture}[scale=1] \node[mutable] (1) at (90:1) {$1$}; \node[mutable] (2) at (330:1) {$2$}; \node[mutable] (3) at (210:1) {$3$}; \draw[-angle 90] (1) to (2); \draw[-angle 90] (2) to (3); \draw[-angle 90] (3) to (1); \end{tikzpicture} & $y_1y_2y_3 - y_1 - y_2 - y_3 - 2$ \\ \hline\vspace{.1cm} \begin{tikzpicture}[scale=1] \node[mutable] (1) at (135:1) {$1$}; \node[mutable] (2) at (45:1) {$2$}; \node[mutable] (3) at (315:1) {$3$}; \node[mutable] (4) at (225:1) {$4$}; \draw[-angle 90] (1) to (2); \draw[-angle 90] (2) to (3); \draw[-angle 90] (3) to (4); \draw[-angle 90] (4) to (1); \end{tikzpicture} & $y_1y_2y_3y_4 - y_1y_2 - y_1y_4 - y_2y_3 - y_3y_4$ \\ \hline\vspace{.1cm} \begin{tikzpicture}[scale=1] \node[mutable] (1) at (90:1) {$1$}; \node[mutable] (2) at (18:1) {$2$}; \node[mutable] (3) at (306:1) {$3$}; \node[mutable] (4) at (234:1) {$4$}; \node[mutable] (5) at (162:1) {$5$}; \draw[-angle 90] (1) to (2); \draw[-angle 90] (2) to (3); \draw[-angle 90] (3) to (4); \draw[-angle 90] (4) to (5); \draw[-angle 90] (5) to (1); \end{tikzpicture} & $y_1y_2y_3y_4y_5 - y_1y_2y_3 - y_1y_2y_5 - y_2y_3y_4 - y_3y_4y_5 + y_1 + y_2 + y_3 + y_4 + y_5 - 2$ \\ \hline\vspace{.1cm} \begin{tikzpicture}[scale=1] \node[mutable] (2) {$2$}; \node[mutable] (1) [above left of=2] {$1$}; \node[mutable] (4) [above right of=2] {$4$}; \node[mutable] (3) [below right of=2] {$3$}; \node[mutable] (5) [below left of=2] {$5$}; \draw[-angle 90] (1) to (2); \draw[-angle 90] (2) to (3); \draw[-angle 90] (3) to (4); \draw[-angle 90] (4) to (2); \draw[-angle 90] (2) to (5); \draw[-angle 90] (5) to (1); \end{tikzpicture} & $y_1y_2y_5 - y_1x_3 - y_2 - y_5x_4 - x_3 - x_4$ and $y_2y_3y_4 - y_2 - y_3x_1 - y_4x_5 - x_1 - x_5$ \\ \hline \end{tabular} \end{center} \label{table: polynomials} \end{table} \\ \indent We now 
obtain a presentation for the ideal of relations $K_\mathsf{Q}$ and for the initial ideal $\In_< K_\mathsf{Q}$, where $<$ is a monomial order in which the $y$-variables are much more expensive than the $x$-variables. This presentation will suffice to prove Theorems \ref{thm: relations} and \ref{thm: grobner}. We first require the following standard lemma. \begin{lem}\label{basislemma} Let $J$ and $L$ be ideals in a polynomial ring. Suppose that $J \subseteq L$ and $\In_< J = \In_< L$. Then $J = L$. \end{lem} \begin{proof} Let $G$ be a Gr\"obner basis for $J$ and let $f \in L$. Since $\In_< G$ generates $\In_< L$, dividing $f$ by $G$ gives a remainder of $0$, and so $f \in J$. \end{proof} \begin{thm}\label{basisthm} Given an ice quiver $\mathsf{Q}$ on $n$ vertices, the defining polynomials together with the cycle polynomials form a Gr\"obner basis for $K_\mathsf{Q} = \ker \pi$, where \[ \pi :\mathbb{Z}[x_1,x_2,...,x_n,y_1,y_2,...,y_n]\longrightarrow \mathbb{Z}[x_1^{\pm1},x_2^{\pm1},...,x_n^{\pm1} ]\] \[ \forall i, \;\;\; \pi(x_i) = x_i,\;\;\; \pi(y_i)=x_i'.\] \end{thm} \begin{proof} Let $J$ be the ideal of $\mathbb{Z}[x_1,x_2,\dots,x_n,y_1,y_2,\dots,y_n]$ that is generated by the set $G$ of defining and cycle polynomials. We know that $J \subseteq K_\mathsf{Q}$, and therefore that $\In_< J \subseteq \In_< K_\mathsf{Q}$. Let $M$ be the monomial ideal generated by the initial terms of the polynomials in $G$. We know that $M \subseteq \In_< J \subseteq \In_< K_\mathsf{Q}$, so we would like to show that $\In_< K_\mathsf{Q} \subseteq M$. Assume (for the purpose of contradiction) that there is some $f\in K_\mathsf{Q}$ such that $\In_<(f)\not\in M$.
We may write (assume all $a$, $b$, and $\lambda$ non-zero for simplicity) \begin{equation}\label{form} \In_<(f)= \lambda x_{i_1}^{a_{i_1}} x_{i_2}^{a_{i_2}} \cdots x_{i_k}^{a_{i_k}} y_{j_1}^{b_{j_1}} y_{j_2}^{b_{j_2}} \cdots y_{j_\ell}^{b_{j_\ell}}. \end{equation} Note that $\{j_1,j_2,...,j_\ell\}$ cannot contain the indices of a directed cycle of unfrozen vertices. Otherwise, it would also contain the indices of a vertex-minimal directed cycle, and $\In_<(f)$ would be a multiple of the initial term of a cycle polynomial, contradicting the assumption that $\In_<(f)\not\in M$. Let $Y\subset [n]$ be the indices of unfrozen vertices which are not in $\{j_1,j_2,...,j_\ell\}$, and let $\mathsf{Q}'$ be the ice quiver obtained by freezing the vertices in $\mathsf{Q}$ indexed by $Y$. By the preceding observation, $\mathsf{Q}'$ is an acyclic quiver. There is a natural inclusion \[ \L(\mathsf{Q})\hookrightarrow \L(\mathsf{Q}') \] induced by inclusions into $\mathbb{Z}[x_1^{\pm1},...,x_n^{\pm1}]$. This inclusion may be lifted to a ring homomorphism \[ \mu: \mathbb{Z}[x_1,...,x_n,y_1,...,y_n]\rightarrow \mathbb{Z}[x_1,...,x_n,y_1,...,y_n]\] \[ \mu(x_i)=x_i,\;\;\; \mu(y_i) = \left\{\begin{array}{cc} y_i(p_i^++p_i^-) & \text{if }i\in Y \\ y_i & \text{otherwise} \end{array}\right.\] with the property that $\pi'\circ \mu = \pi$, where $\pi'$ is the map \[ \mathbb{Z}[x_1,x_2,...,x_n,y_1,y_2,...,y_n]\longrightarrow \mathbb{Z}[x_1^{\pm1},x_2^{\pm1},...,x_n^{\pm1} ]\] defined by $\mathsf{Q}'$ instead of $\mathsf{Q}$. Each of the variables appearing in the initial term of $f$ is fixed by $\mu$. In lower-order terms of $f$, $\mu$ may introduce monomials in $x$; however, this will never create a term greater than $\In_<(f)$. Hence, \[ \In_<(f) = \In_<(\mu(f)) \in \In_<(K_{\mathsf{Q}'}).\] Since $\mathsf{Q}'$ is acyclic, it was shown in \cite[Corollary 1.17]{BFZ05} that $\In_<(K_{\mathsf{Q}'})$ is generated by $\{x_iy_i \mid i\in [n]\}$.
Hence, $\In_<(f)$ is a multiple of $x_iy_i$ for some $i$. However, this implies that $\In_<(f)$ is a multiple of the initial term of the $i$th defining polynomial in $K_\mathsf{Q}$, contradicting the assumption that $\In_<(f)\not\in M$. It follows that $\In_<(K_\mathsf{Q})\subset M$. Consequently, $\In_<(J)=\In_<(K_\mathsf{Q})$ and, by the preceding lemma, $J=K_\mathsf{Q}$. Furthermore, since $\In_<(G)$ generates $\In_<(K_\mathsf{Q})$, the set $G$ is a Gr\"obner basis for $K_\mathsf{Q}$. \end{proof} \section{Simplicial Complexes and Cohen-Macaulayness}\label{sect:combinatorics} \noindent Now that we have obtained a generating set for $\In_< K_{\mathsf{Q}}$, we can explicitly construct the Stanley-Reisner complex of $\In_< K_{\mathsf{Q}}$. We first consider a larger class of simplicial complexes, defined as follows: \begin{defn} Let $S = \{1,\dots,n\}$, let $\mathscr{C}$ be a collection $\{C_1,\dots,C_k\}$ of subsets of $S$, and let $Y \subseteq S$. Define the simplicial complex $\Delta(S,\mathscr{C},Y)$ on the set $\{x_i ~|~ i \in S\} \cup \{y_i ~|~ i \in Y\}$ by the rule\footnote{If a subset $C_j$ is not contained in $Y$, we may simply ignore the condition $\{y_i\}_{i\in C_j} \not\subseteq F$, which is vacuously true because $y_i$ is not defined for $i\not\in Y$.} \[ F \in \Delta(S,\mathscr{C},Y) \iff \forall i \, \{x_i,y_i\} \not \subseteq F \text{ and } \forall j \, \{y_i\}_{i \in C_j} \not \subseteq F. \] \end{defn} Since every facet of $\Delta(S,\mathscr{C},Y)$ is of the form $\{z_1,\dots,z_n\}$, where $z_i$ is either $x_i$ or $y_i$, we see that $\Delta(S,\mathscr{C},Y)$ is always a pure simplicial complex.
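To illustrate the definition, take $n = 3$, $S = Y = \{1,2,3\}$, and $\mathscr{C} = \{\{1,2,3\}\}$. The facets of $\Delta(S,\mathscr{C},Y)$ are then the seven sets
\[ \{x_1,x_2,x_3\},\; \{y_1,x_2,x_3\},\; \{x_1,y_2,x_3\},\; \{x_1,x_2,y_3\},\; \{y_1,y_2,x_3\},\; \{y_1,x_2,y_3\},\; \{x_1,y_2,y_3\}, \]
that is, all sets $\{z_1,z_2,z_3\}$ with $z_i \in \{x_i,y_i\}$ other than $\{y_1,y_2,y_3\}$. Identifying $x_i$ with $e_i$ and $y_i$ with $-e_i$ in $\mathbb{R}^3$, these are seven of the eight facets of the boundary of an octahedron, so this complex is a $2$-ball; if instead $\mathscr{C} = \emptyset$, the eighth facet is restored and the complex becomes a $2$-sphere.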
Note that for any quiver $\mathsf{Q}$ on vertex set $S$, where $\mathscr{C}$ is the collection of sets of vertices of vertex-minimal directed cycles on $\mathsf{Q}$, we have by Theorem \ref{basisthm} that \begin{equation}\label{idealform} \In_< K_{\mathsf Q} = \left \langle x_1y_1,\dots,x_ny_n, \prod_{i \in C_1}y_i,\dots,\prod_{i \in C_k}y_i \right \rangle, \quad C_j \subseteq S, \end{equation} and the Stanley-Reisner complex of $\In_< K_{\mathsf Q}$ is precisely $\Delta(S,\mathscr{C},S)$. \begin{rem} Whenever $\{i\}\in \mathscr{C}$, there is no vertex of the form $y_i$ in the simplicial complex $\Delta(S,\mathscr{C},Y)$. Such confusing notation is necessary for later induction. A vertex in $\Delta(S,\mathscr{C},Y)$ of the form $y_i$ will be called a \textbf{$y$-vertex}. \end{rem} We now recall some definitions. Given a simplicial complex $\Delta$ and a vertex $v$ of $\Delta$, the \textbf{link} of $v$ is the set \[\link_\Delta(v) \coloneqq \{F \in \Delta ~|~ F \not\ni v \text{ and } F \cup \{v\} \in \Delta\}, \] and the \textbf{deletion} of $v$ is the set \[ \del_{\Delta}(v) \coloneqq \overline{\{F \in \Delta ~|~ F \cup \{v\} \notin \Delta\}}, \] where the bar denotes closure, so that $\del_{\Delta}(v)$ is a simplicial complex. We call a vertex $v$ of a simplicial complex $\Delta$ a \textbf{shedding vertex} of $\Delta$ if no face of $\link_\Delta(v)$ is a facet of $\del_\Delta(v)$. Finally, we recall the (recursive) notion of vertex-decomposability: a simplicial complex $\Delta$ is \textbf{vertex-decomposable} if it is a simplex, or if it has a shedding vertex $v$ such that both $\link_\Delta(v)$ and $\del_\Delta(v)$ are vertex-decomposable (see \cite{BilleraProvan}, also \cite{BjornerWachs}). It is our goal to prove the following theorem, from which Theorem \ref{thm:CM} will follow. 
\begin{thm}\label{homeo} The complex $\Delta(S,\mathscr{C},Y)$ is always homeomorphic to a vertex-decomposable $(n-1)$-ball, except when $\mathscr{C} = \emptyset$ and $Y=S$, in which case $\Delta(S,\mathscr{C},Y)$ is homeomorphic to a vertex-decomposable $(n-1)$-sphere. \end{thm} Note that the case where $\mathscr{C} = \emptyset$ and $Y=S$ is precisely the case in which $\{y_j\}_{j \in S}$ is a face of $\Delta(S,\mathscr{C},Y)$. We first characterize the link and the deletion in $\Delta(S,\mathscr{C},Y)$ for any vertex of the form $y_i$ for $i\in Y$. \begin{prop}\label{link} For a $y$-vertex $y_i$ in $\Delta(S,\mathscr{C},Y)$, we have $\link_{\Delta(S,\mathscr{C},Y)}(y_i) = \Delta(S^i,\mathscr{C}^i,Y^i)$, where $S^i \coloneqq S \smallsetminus \{i\}$, $\mathscr{C}^i \coloneqq \{C_j \cap S^i ~|~ C_j \in \mathscr{C}\}$, and $Y^i \coloneqq Y \cap S^i$. \end{prop} \begin{proof} We first show $\mathtt{Link} \coloneqq \link_{\Delta(S,\mathscr{C},Y)}(y_i) \subseteq \Delta(S^i,\mathscr{C}^i,Y^i)$. Since $\mathtt{Link}$ is a subcomplex of $\Delta(S,\mathscr{C},Y)$, no face of $\mathtt{Link}$ contains $\{x_j,y_j\}$ for any $j$, since $\Delta(S,\mathscr{C},Y)$ is defined so as never to contain any such face. Since no face of $\mathtt{Link}$ may contain either $x_i$ or $y_i$, we see that $\mathtt{Link}$ is a simplicial complex on $\{x_j ~|~ j \in S^i\} \cup \{y_j ~|~ j \in Y^i\}$. Finally, were some $F \in \mathtt{Link}$ to contain $\{y_j\}_{j \in C_\ell^i}$ for some $\ell$, then $F \cup \{y_i\}$ would contain $\{y_j\}_{j \in C_\ell}$, contradicting $F \cup \{y_i\} \in \Delta(S,\mathscr{C},Y)$. We now have that $\mathtt{Link} \subseteq \Delta(S^i,\mathscr{C}^i,Y^i)$.\\ \indent We now show $\Delta(S^i,\mathscr{C}^i,Y^i) \subseteq \mathtt{Link}$. Consider some $F \in \Delta(S^i,\mathscr{C}^i,Y^i)$. Clearly $F \not\ni y_i$, so suppose that $F \cup \{y_i\} \notin \Delta(S,\mathscr{C},Y)$.
Then either $\{x_i,y_i\} \subseteq F \cup \{y_i\}$ or $\{y_j\}_{j \in C_\ell} \subseteq F \cup \{y_i\}$ for some $\ell$. The former case is impossible since $x_i$ is not a vertex of $\Delta(S^i,\mathscr{C}^i,Y^i)$. The latter case implies $\{y_j\}_{j \in C_\ell^i} \subseteq F$, contradicting the definition of $\Delta(S^i,\mathscr{C}^i,Y^i)$. Therefore we must have $F \cup \{y_i\} \in \Delta(S,\mathscr{C},Y)$ for every face $F$ of $\Delta(S^i,\mathscr{C}^i,Y^i)$, from which it follows that $\Delta(S^i,\mathscr{C}^i,Y^i) \subseteq \mathtt{Link}$. We conclude that $\Delta(S^i,\mathscr{C}^i,Y^i) = \mathtt{Link}$. \end{proof} \begin{prop}\label{del} For a $y$-vertex $y_i$ in $\Delta(S,\mathscr{C},Y)$, we have $\del_{\Delta(S,\mathscr{C},Y)}(y_i) = \Delta(S,\mathscr{C},Y^i)$, where $Y^i$ is defined as above. \end{prop} \begin{proof} We first show $\mathtt{Del} \coloneqq \del_{\Delta(S,\mathscr{C},Y)}(y_i) \subseteq \Delta(S,\mathscr{C},Y^i)$. Since no face of $\mathtt{Del}$ may contain $y_i$, we see that $\mathtt{Del}$ is a simplicial complex on $\{x_j ~|~ j \in S\} \cup \{y_j ~|~ j \in Y^i\}$. Since $\mathtt{Del}$ is a subcomplex of $\Delta(S,\mathscr{C},Y)$, no face of $\mathtt{Del}$ contains either $\{x_j,y_j\}$ for any $j$ or $\{y_j\}_{j \in C_\ell}$ for any $\ell$. Therefore $\mathtt{Del} \subseteq \Delta(S,\mathscr{C},Y^i)$.\\ \indent Since $\Delta(S,\mathscr{C},Y^i)$ has $x_i$ as a vertex but not $y_i$, we have by the definition of $\Delta(S,\mathscr{C},Y^i)$ that every facet of $\Delta(S,\mathscr{C},Y^i)$ contains $x_i$. Consider an arbitrary facet $F$ of $\Delta(S,\mathscr{C},Y^i)$. Since $x_i \in F$, we cannot have $F \cup \{y_i\} \in \Delta(S,\mathscr{C},Y)$, and so $F \in \mathtt{Del}$. Therefore $\Delta(S,\mathscr{C},Y^i) \subseteq \mathtt{Del}$, and so we conclude that $\mathtt{Del} = \Delta(S,\mathscr{C},Y^i)$. \end{proof} We may now observe an important relationship between links and deletions that arises in our case. The following result shows that any vertex $y_i$ is a shedding vertex.
Note that in the case where $\mathscr{C} = \emptyset$ and $Y=S$, every vertex $y_i$ must be a shedding vertex, because every facet of its deletion contains $x_i$ and so no face of its link is a facet of its deletion. \begin{lem}\label{boundary} Except in the case where $\mathscr{C} = \emptyset$ and $Y=S$, the complex $\Delta(S^i,\mathscr{C}^i,Y^i)$ is properly contained in the boundary complex $\partial \Delta(S,\mathscr{C},Y^i)$. \end{lem} \begin{proof} Since $\Delta(S^i,\mathscr{C}^i,Y^i) \subseteq \Delta(S,\mathscr{C},Y^i)$ and $\Delta(S,\mathscr{C},Y^i)$ is pure, it follows that every facet of $\Delta(S^i,\mathscr{C}^i,Y^i)$ is contained in at least one facet of $\Delta(S,\mathscr{C},Y^i)$. Now we show that each facet of $\Delta(S^i,\mathscr{C}^i,Y^i)$ is contained in exactly one facet of $\Delta(S,\mathscr{C},Y^i)$. Observe that the facets of $\partial \Delta(S,\mathscr{C},Y^i)$ are characterized as the codimension 1 faces $\{z_j\}_{j \ne k}$, $k \in \{1,\dots,n\}$, where each $z_j$ is either $x_j$ or $y_j$, such that exactly one of either $\{z_j\}_{j \ne k} \cup \{x_k\}$ or $\{z_j\}_{j \ne k} \cup \{y_k\}$ lies in $\Delta(S,\mathscr{C},Y^i)$. Every facet of $\Delta(S^i,\mathscr{C}^i,Y^i)$ is of the form $\{z_j\}_{j \ne i}$, and it always happens that $\{z_j\}_{j \ne i} \cup \{x_i\} \in \Delta(S,\mathscr{C},Y^i)$ and $\{z_j\}_{j \ne i} \cup \{y_i\} \notin \Delta(S,\mathscr{C},Y^i)$. Therefore $\Delta(S^i,\mathscr{C}^i,Y^i) \subseteq \partial \Delta(S,\mathscr{C},Y^i)$.\\ \indent We now must show that this containment is proper. We have two cases. Either $\{y_j\}_{j \ne i}$ is a face of $\Delta(S,\mathscr{C},Y^i)$ or it is not. If it is, then it must lie on $\partial \Delta(S,\mathscr{C},Y^i)$, because $\{y_j\}_{j \ne i} \cup \{x_i\}$ is a face of $\Delta(S,\mathscr{C},Y^i)$, while $\{y_j\}_{j \ne i} \cup \{y_i\}$ is not a face of $\Delta(S,\mathscr{C},Y^i)$ since either $\mathscr{C} \ne \emptyset$ or $Y \ne S$.
If $\{y_j\}_{j \ne i}$ is not a face of $\Delta(S,\mathscr{C},Y^i)$, then there must be some $C \in \mathscr{C}$ not containing $i$ such that no other member of $\mathscr{C}$ is a subset of $C$. Then, for any $k \in C$, we have that $F = \{x_j\}_{j \notin C} \cup \{y_j\}_{j \in C\smallsetminus\{k\}}$ is a face of $\Delta(S,\mathscr{C},Y^i)$. This face $F$ lies on $\partial \Delta(S,\mathscr{C},Y^i)$, since $F \cup \{x_k\} \in \Delta(S,\mathscr{C},Y^i)$ but $F \cup \{y_k\} \notin \Delta(S,\mathscr{C},Y^i)$. Since both $\{y_j\}_{j \ne i}$ and $F$ contain $x_i$, neither is a face of $\Delta(S^i,\mathscr{C}^i,Y^i)$. Therefore there is always an element of $\partial \Delta(S,\mathscr{C},Y^i)$ that is not in $\Delta(S^i,\mathscr{C}^i,Y^i)$, and so the containment $\Delta(S^i,\mathscr{C}^i,Y^i) \subseteq \partial \Delta(S,\mathscr{C},Y^i)$ is proper. \end{proof} By the previous lemma and the remarks above, we see that every $\Delta(S,\mathscr{C},Y)$ is vertex-decomposable, because the $y$-vertices are always shedding vertices, and any complex without $y$-vertices is a simplex. The remainder of the proof is to strengthen this argument to prove that these simplicial complexes are balls or spheres, as appropriate. \begin{proof}[Proof of Theorem \ref{homeo}] First, we prove that $\Delta(S,\mathscr{C},Y)$ is a vertex-decomposable $(n-1)$-ball when $Y\neq S$ or $\mathscr{C}\neq \emptyset$, by induction on the number of $y$-vertices. If there are no $y$-vertices in $\Delta(S,\mathscr{C},Y)$ (that is, $\{i\}\in \mathscr{C}$ for all $i\in Y$), then $\Delta(S,\mathscr{C},Y)$ is just one simplex on $n$ vertices, which is homeomorphic to an $(n-1)$-ball. Assume the inductive hypothesis holds whenever there are fewer than $m$ $y$-vertices, and assume that $\Delta(S,\mathscr{C},Y)$ has $m$-many $y$-vertices. 
Choose a vertex $y_i$ in $\Delta(S,\mathscr{C},Y)$, and define \[\mathtt{Link} \coloneqq \link_{\Delta(S,\mathscr{C},Y)}(y_i) = \Delta(S^i,\mathscr{C}^i,Y^i)\text{ and } \mathtt{Del} \coloneqq \del_{\Delta(S,\mathscr{C},Y)}(y_i) = \Delta(S,\mathscr{C},Y^i)\] We observe that both $\mathtt{Link}$ and $\mathtt{Del}$ satisfy the inductive hypothesis; this is clear when $Y\neq S$. If $Y=S$, then by assumption $\mathscr{C}\neq\emptyset$. Since $y_i$ is a vertex of $\Delta(S,\mathscr{C},Y)$, we also know that $\{i\}\not\in \mathscr{C}$. It follows that $\mathscr{C}^i\neq\emptyset$, and so $\mathtt{Link}$ still satisfies the inductive hypothesis. Therefore, $\mathtt{Link}$ is a vertex-decomposable $(n-2)$-ball and $\mathtt{Del}$ is a vertex-decomposable $(n-1)$-ball. As a consequence, the cone $\mathtt{Cone}$ from $y_i$ on $\link_{\Delta(S,\mathscr{C},Y)}(y_i)$ is a vertex-decomposable $(n-1)$-ball. By Lemma \ref{boundary}, $\mathtt{Cone}$ and $\mathtt{Del}$ meet at the proper subset $\mathtt{Link}$ of $\partial \mathtt{Del}$, which is a vertex-decomposable $(n-2)$-ball. Therefore, $\Delta(S,\mathscr{C},Y) = \mathtt{Cone} \cup \mathtt{Del}$ is a vertex-decomposable $(n-1)$-ball, completing the induction. The remaining case is $\Delta(S,\emptyset,S)$. Consider the mapping $\{x_1,\dots,x_n,y_1,\dots,y_n\} \to \mathbb{R}^n$ given by $x_i \mapsto e_i$ and $y_i \mapsto -e_i$, where $\{e_1,\dots,e_n\}$ is the standard basis for $\mathbb{R}^n$. This mapping induces a bijection between the faces of $\Delta(S,\emptyset,S)$ and the faces of the cross-polytope (i.e. orthoplex) on the vertices $\{e_1,\dots,e_n,-e_1,\dots,-e_n\}$. Since this figure is homeomorphic to an $(n-1)$-sphere, so must be $\Delta(S,\emptyset,S)$. \end{proof} As noted, by Theorem \ref{basisthm} we have that the initial ideal of any lower bound ideal is of the form \eqref{idealform}. 
Therefore, by Theorem \ref{homeo}, the Stanley-Reisner complex of the initial ideal of any lower bound ideal is homeomorphic to either a ball or a sphere. It follows that all lower bound algebras over a field are Cohen-Macaulay, and so Theorem \ref{thm:CM} holds. \section{Normality of lower bound algebras}\label{sect:normality} In this section, we prove that all lower bound algebras defined from a quiver are normal. As explained in the introduction, the case where $\mathsf{Q}$ is acyclic follows immediately because $\mathcal{L}(\mathsf{Q})$ is equal to its upper cluster algebra, and is therefore normal. So, for the remainder of the section, we assume that $\mathsf{Q}$ contains a cycle. In this case, our proof of normality relies on a very slight adaptation of \cite[Proposition 8.1]{KLS-Richardson}. \begin{prop}\label{prop:normalityInGeneral}(cf. \cite[Proposition 8.1]{KLS-Richardson}) Fix a monomial order $<$ on the polynomial ring $\mathbb{K}[z_1,\dots,z_n]$. Let $X$ and $Y_1,\dots, Y_r$ be (reduced and irreducible) closed affine subvarieties of $\mathbb{A}^n$, where each of $Y_1,\dots, Y_r$ is of codimension $1$ in $X$. Assume that, with respect to the term order $<$, each of $X$ and $Y_1,\dots, Y_r$ Gr\"obner degenerates to a Stanley-Reisner scheme. Then, if \begin{enumerate} \item the Stanley-Reisner complex of $\In_<X$ is a simplicial ball; \item the Stanley-Reisner complex of each $\In_<Y_i$ lies entirely on the boundary sphere $\partial \Delta_X$; and \item $X\smallsetminus (Y_1\cup Y_2\cup \cdots \cup Y_r)$ is normal, \end{enumerate} then $X$ is normal. \end{prop} We need the following standard result to prove this proposition. It is very similar to \cite[Proposition 3.1 (b)]{BC03}; we provide the necessary modifications in the proof below. \begin{lem} Fix a monomial order $<$ on the polynomial ring $S \coloneqq \mathbb{K}[z_1,\dots,z_n]$. Let $X$ and $Y$ be irreducible affine subvarieties of $\mathbb{A}^n$, and assume that $Y$ is of codimension $1$ in $X$.
If $\textrm{in}_< X$ is generically regular along each irreducible component of $\textrm{in}_< Y$, then $X$ is generically regular along $Y$. \end{lem} \begin{proof} Let $X = \mathbb{V}(I)$ and $Y = \mathbb{V}(J)$ for $I,J\subseteq \mathbb{K}[z_1,\dots, z_n]$. Pick a weight vector $\lambda$ such that $\textrm{in}_\lambda I = \textrm{in}_< I$ and $\textrm{in}_\lambda J = \textrm{in}_< J$. Let $f = \sum_i a_im_i$, where each $a_i\in \mathbb{K}$ and each $m_i$ is a monomial. Let $\textrm{hom}_{\lambda}(f)$ denote the $\lambda$-homogenization of $f$ inside $S[t]$, that is, \[\textrm{hom}_\lambda(f) \coloneqq \sum a_im_it^{\lambda(f)-\lambda(m_i)},\] where $\lambda(f)$ denotes the highest $\lambda$-weight of any monomial in $f$, and $\lambda(m_i)$ is the $\lambda$-weight of the monomial $m_i$. Let $\textrm{hom}_\lambda I$ denote the $\lambda$-homogenization of the ideal $I$, that is, $\textrm{hom}_\lambda I \coloneqq \langle \textrm{hom}_\lambda(f) \mid f\in I \rangle.$ It is a standard fact that $A \coloneqq S[t]/\textrm{hom}_{\lambda} I$ is a free $\mathbb{K}[t]$-module and that $A/\langle t\rangle \cong S/\textrm{in}_{\lambda} I$ (see, e.g., \cite[Proposition 2.4]{BC03} or \cite[Theorem 15.17]{Eis95}). Now, by assumption, $\textrm{in}_< X$ is generically regular along each irreducible component of $\textrm{in}_< Y$. That is, the localization of $S/\textrm{in}_{<} I$ at any minimal prime of $\textrm{in}_< J$ is a regular local ring. Thus, by the above facts, we have that the localization of $A/\langle t\rangle$ at any minimal prime of $(\textrm{hom}_\lambda J+\langle t\rangle)$ is a regular local ring. Observe that $A$ is positively graded. Let $\frak{m}$ denote the maximal ideal generated by the indeterminates $z_1,...,z_n,t$, and let $A_{\frak{m}}$ denote the localization at $\frak{m}$.
Because $A/\langle t \rangle$ localized at any minimal prime $\frak{p}$ of $(\textrm{hom}_\lambda J+\langle t\rangle)$ is regular, so too is $A_\frak{m}/\langle t\rangle$ localized at any non-trivial $A_\frak{m} \frak{p}$, and the non-trivial $A_\frak{m}\frak{p}$ are precisely the minimal primes of $(\textrm{hom}_\lambda J+\langle t\rangle)$ as an ideal in $A_\frak{m}/\langle t\rangle$. Now we can use the proof of \cite[Lemma 3.2]{BC03} to get that the localization of $A_{\frak{m}}$ at the height-$1$ prime ideal $\textrm{hom}_\lambda J$ is regular. The second half of the proof of \cite[Proposition 3.1 (b)]{BC03} then gives that the localization of $S/I$ at $J$ is regular. \end{proof} We now prove Proposition \ref{prop:normalityInGeneral}. \begin{proof}[Proof of Proposition \ref{prop:normalityInGeneral}] We follow the proof given in \cite[Proposition 8.1]{KLS-Richardson}. To show that $X$ is normal, we need to show that $X$ is $R1$ and $S2$. Since $\Delta_X$ is a simplicial ball by assumption (i), it follows that $X$ is Cohen-Macaulay, and hence $S2$. To show that $X$ is $R1$, first observe that, by assumption (iii), if $\frak{p}\subseteq S/I$ is a prime ideal of height $\leq 1$ which \emph{is not} the generic point of any $Y_i$, then $(S/I)_{\frak{p}}$ is regular. The remaining primes in $S/I$ which have height $\leq 1$ are the generic points of the various $Y_i\subseteq X$. It therefore remains to show that $X$ is generically regular along each irreducible subvariety $Y_i$. By assumption (ii), we have that $\textrm{in}_< X$ is generically regular along each irreducible component of each $\textrm{in}_< Y_i$. Thus, by the lemma, we get that $X$ is generically regular along $Y_i$. \end{proof} To use Proposition \ref{prop:normalityInGeneral} to prove that all lower bound algebras are normal, we need to show that the hypotheses (ii) and (iii) always hold for quivers with cycles. We start with (ii). 
To show the desired result, we use results of Knutson from \cite{Knu09}\footnote{The statement given here is less general than the one that appears in \cite[Theorem 2]{Knu09} and \cite[Lemma 6]{Knu09}. We also change the hypotheses of \cite[Theorem 2]{Knu09}; however, this is harmless, as the proof goes through in exactly the same way.}. \begin{thm}\label{thm:Knutson}(cf. Theorem 2, Lemma 6, Corollary 2 of \cite{Knu09}) Let $f \in \mathbb{Z}[z_{1}, \ldots, z_{n}]$ be a polynomial with the property that, for each prime $p$, $f^{p-1} (\text{mod }p)$ has a unique term divisible by $z_1^{p-1}z_2^{p-1}\cdots z_n^{p-1}$, and let $<$ be a term order of $\mathbb{Z}[z_1,\dots,z_n]$ for which $\textrm{in}_<f = z_1z_2\cdots z_n$. Denote by $\mathcal{J}$ the smallest set of ideals that contains the ideal $\langle f \rangle$ and such that \begin{enumerate} \item if $I_{1}, I_{2} \in \mathcal{J}$, then $I_{1} + I_{2}, I_{1} \cap I_{2} \in \mathcal{J}$; and \item if $I \in \mathcal{J}$ and $J$ is a primary component of $I$, then $J \in \mathcal{J}$. \end{enumerate} Then, over any field $\mathbb{K}$, every ideal $J \in \mathcal{J}$ is a radical ideal and the initial ideal of every $J \in \mathcal{J}$ with respect to $<$ is a squarefree monomial ideal. Furthermore, for any $I_1$ and $I_2$ in $\mathcal{J}$, \[\textrm{in}_<(I_1\cap I_2) = \textrm{in}_< I_1 \cap \textrm{in}_< I_2,\textrm{ and }\textrm{in}_<(I_1+I_2) = \textrm{in}_< I_1+\textrm{in}_< I_2.\] \end{thm} We will make use of this theorem in the case where $f = \prod_{i=1}^n (x_iy_i-p_i^+-p_i^-)$. Here we take $<$ to be a weighting of the variables where the $y$-variables are much more expensive than the $x$-variables. Observe that $f$ and $<$ satisfy the assumptions of Theorem \ref{thm:Knutson} and the ideal \[ I_\mathsf{Q} := \langle x_iy_i-p_i^+-p_i^-\mid 1\leq i\leq n \rangle \] lies in the collection of ideals $\mathcal{J}$ from the theorem. Consequently, $I_\mathsf{Q}$ is radical.
We may then write an irredundant prime decomposition \begin{equation}\label{primeDecompositionl} I_\mathsf{Q} = K_\mathsf{Q}\cap P_1\cap\cdots \cap P_r \end{equation} where each $P_i$ is a minimal prime, and $K_\mathsf{Q}$ is the lower bound ideal \cite[Lemma 5.7]{BMRS15}. Consequently, each $P_i+K_\mathsf{Q}\in \mathcal{J}$ and so each $P_i+K_\mathsf{Q}$ is radical and degenerates to a squarefree monomial ideal. \begin{prop}\label{prop:boundarySphere} Let $\mathsf{Q}$ be a quiver with a directed cycle, so that there is at least one prime $P_i$ in \eqref{primeDecompositionl}. With respect to a term order where the $y$-variables are much more expensive than the $x$-variables, each prime component of $P_i+K_\mathsf{Q}$ Gr\"obner degenerates to the Stanley-Reisner ideal of a sub-simplicial complex of $\partial \Delta_{K_\mathsf{Q}}$. Furthermore, $\textrm{in}_< ((P_1\cap\cdots \cap P_r) + K_\mathsf{Q})$ is the Stanley-Reisner ideal of the entire boundary $\partial \Delta_{K_\mathsf{Q}}$. \end{prop} \begin{proof} By Theorem \ref{thm:Knutson}, we have that $P_i+K_\mathsf{Q}$ is radical and Gr\"obner degenerates to a squarefree monomial ideal. Let \[ K_\mathsf{Q}+P_i = \cap_J J \] be a decomposition of $K_\mathsf{Q}+P_i$ into minimal primes. By Theorem \ref{thm:Knutson}, each $\textrm{in}_< J$ is a squarefree monomial ideal. Applying the second part of Theorem \ref{thm:Knutson} and translating into the language of simplicial complexes yields the equality \[ \Delta_{K_\mathsf{Q}+P_i} = \Delta_{K_\mathsf{Q}}\cap \Delta_{P_i} = \cup_J \Delta_J \] where $\Delta_J$ denotes the Stanley-Reisner complex of $\textrm{in}_< J$. To prove the first claim of the proposition, we must show that every face of each $\Delta_J$ is contained in the boundary sphere of the simplicial ball $\Delta_{K_\mathsf{Q}}$. So, suppose otherwise, and let $F$ be a face of some $\Delta_J$ which is not contained in the boundary $\partial \Delta_{K_\mathsf{Q}}$. Assume that $F$ is a maximal such face.
We claim that $F$ must be a facet of $\Delta_{P_i}$. To prove this claim, we first apply Theorem \ref{thm:Knutson} to the prime decomposition in equation \eqref{primeDecompositionl} to get \begin{equation}\label{eq:simplicialEquality} \textrm{in}_< I = \textrm{in}_< (K_\mathsf{Q})\cap \textrm{in}_<(P_1)\cap\cdots \cap \textrm{in}_<(P_r) \end{equation} which, after translating into the language of simplicial complexes, says that $\Delta_{K_\mathsf{Q}}$ and every $\Delta_{P_i}$ are contained in the Stanley-Reisner complex associated to $\textrm{in}_< I = \langle x_iy_i \mid 1\leq i\leq n \rangle$, which can be geometrically realized as the $(n-1)$-dimensional boundary sphere of a cross-polytope on $2n$ vertices. Decompose this simplicial sphere into the union of two $(n-1)$-dimensional simplicial balls: \[ \Delta_I = \Delta_{K_\mathsf{Q}}\cup C, \textrm{ where } C \coloneqq \overline{\Delta_I\smallsetminus \Delta_{K_\mathsf{Q}}}. \] Observe that, by construction, $\Delta_{K_\mathsf{Q}}\cap C$ is the boundary sphere of $\Delta_{K_\mathsf{Q}}$. Now, suppose that $F$ is not a facet of $\Delta_{P_i}$. Then there is a vertex $z$ such that $F\cup\{z\}$ is a face of $\Delta_{P_i}$. Then, using the decomposition of $\Delta_I$, we see that either $F\cup\{z\}$ is contained in $\Delta_{K_\mathsf{Q}}$, or it is contained in $C$. If $F\cup \{z\}\subseteq \Delta_{K_\mathsf{Q}}$, we contradict the maximality of $F$. If $F\cup\{z\}\subseteq C$, we contradict the assumption that $F$ is not contained in the boundary of $\Delta_{K_\mathsf{Q}}$ (since $\Delta_{K_\mathsf{Q}}$ and $C$ only intersect along the boundary of $\Delta_{K_\mathsf{Q}}$). Thus, our maximal face $F$ must be a facet of $\Delta_{P_i}$, which, since $P_i$ is prime, must have dimension one less than $\mathrm{dim}(S/P_i)$. But this is not possible because $\mathrm{dim}(S/J)$ is strictly smaller than $\mathrm{dim}(S/P_i)$.
To obtain the last statement, we translate equality (\ref{eq:simplicialEquality}) into the language of simplicial complexes to see that the union $\cup_{i=1}^r \Delta_{P_i}$ necessarily contains the boundary sphere $\partial \Delta_{K_\mathsf{Q}}$. Thus, so does \[\Delta_{(P_1\cap\cdots \cap P_r)+K_\mathsf{Q}} = \Delta_{(P_1\cap\cdots\cap P_r)}\cap \Delta_{K_\mathsf{Q}} = \cup_{i=1}^r (\Delta_{P_i}\cap \Delta_{K_\mathsf{Q}}). \] But, as already shown, each $\Delta_{P_i}\cap \Delta_{K_\mathsf{Q}}$ is contained inside of the boundary sphere of $\Delta_{K_\mathsf{Q}}$ and so we are done. \end{proof} We next show that (iii) of Proposition \ref{prop:normalityInGeneral} holds for lower bound algebras. \begin{prop}\label{prop:openSetIsNormal} Let $\mathbb{V}(K_\mathsf{Q})$ denote the lower bound variety of a quiver $\mathsf{Q}$. Then $\mathbb{V}(K_\mathsf{Q})\smallsetminus \mathbb{V}(P_1\cap P_2 \cap \cdots \cap P_r)$ is normal\footnote{We note that $\mathbb{V}(P_1\cap P_2\cap\cdots\cap P_r)$ is empty when $\mathsf{Q}$ is acyclic.}. \end{prop} \begin{proof} Consider $q\in \mathbb{V}(K_\mathsf{Q})$, and define the (possibly empty) set \[ S_q \coloneqq \{ i\in \{1,2,...,n \} \mid x_i(q) =0 \}. \] The vertices indexed by $S_q$ must be unfrozen, since frozen $x$-variables are invertible. First, assume $S_q$ does not contain a directed cycle, and consider the open set \[ U_q \coloneqq \{ q'\in \mathbb{V}(K_\mathsf{Q}) \mid \forall i\not\in S_q,\;x_i(q')\neq 0\}. \] The coordinate ring of $U_q$ is the localization of $\L(\mathsf{Q})$ at the set of $x$-variables which are not in $S_q$; hence, it is isomorphic to the lower bound algebra of the ice quiver $\mathsf{Q}^\dagger$ obtained by freezing the vertices not in $S_q$. This ice quiver is \emph{acyclic}, and so the lower bound algebra coincides with the upper cluster algebra \cite{BFZ05}, which is normal \cite{MulLA}. Hence, $\mathbb{V}(K_\mathsf{Q})$ is normal at $q$.
Next, assume $S_q$ contains a directed cycle, and consider the affine space \[ \mathbb{W}_q \coloneqq \{ q' \in \mathbb{K}^{2n} \mid \forall i, \;x_i(q')=x_i(q),\text{ and } \forall i \not\in S_q,\; y_i(q')=y_i(q)\}. \] This contains $q$ and is contained in $\mathbb{V}(K_\mathsf{Q}\cap P_1 \cap \cdots \cap P_r)$. Since $S_q$ contains a directed cycle, there is some cycle polynomial whose leading term is a product of $y$-variables whose indices are contained in $S_q$, and which hence cannot vanish identically on $\mathbb{W}_q$. Hence, $\mathbb{W}_q\not\subset \mathbb{V}(K_\mathsf{Q})$. By the irreducibility of $\mathbb{W}_q$, we have that $q\in\mathbb{W}_q\subset \mathbb{V}(P_1\cap \cdots\cap P_r)$, and so $q\not\in \mathbb{V}(K_\mathsf{Q})\smallsetminus \mathbb{V}(P_1\cap \cdots \cap P_r)$. \end{proof} We can now prove that all lower bound algebras are normal. \begin{proof}[Proof of Theorem \ref{thm:normalityOfLowerBounds}] We have already treated the case when $\mathsf{Q}$ is acyclic (i.e.\ $\mathcal{L}(\mathsf{Q})$ equals its associated upper cluster algebra, and is hence normal). So assume that $\mathsf{Q}$ contains a cycle. Let $K_\mathsf{Q}$ be the relevant lower bound ideal, and let $P_1,\dots, P_r\subseteq S$ be minimal primes of $\langle x_iy_i-p_i^+-p_i^-\mid 1\leq i\leq n \rangle$ such that \[ \langle x_iy_i-p_i^+-p_i^-\mid 1\leq i\leq n \rangle = K_\mathsf{Q}\cap P_1\cap\cdots \cap P_r. \] Now apply Proposition \ref{prop:normalityInGeneral} with $X = \mathbb{V}(K_\mathsf{Q})$, and $Y_1,\dots,Y_s$ the irreducible components of the various $\mathbb{V}(P_j + K_\mathsf{Q})$, $1\leq j\leq r$. Observe that item (i) of Proposition \ref{prop:normalityInGeneral} follows from Theorem \ref{thm:CM}, item (ii) follows from Proposition \ref{prop:boundarySphere}, and item (iii) holds by Proposition \ref{prop:openSetIsNormal}. \end{proof}
\section{Conclusion} \label{sec:conclusion} In this work, we show the limitations of the current sequence-level learning objective in captioning tasks from both theoretical and empirical aspects. From the theoretical aspect, this objective is equivalent to maximizing the generalized precision of the predicted caption set, which ignores the recall side. From the empirical aspect, models trained by this objective receive low scores on proxy measurements of recall. To overcome the above limitations, we propose adding a sequence-level exploration term to maximize the diversity of generated captions, a proxy measurement of recall. It encourages the model to explore more captions that are different in syntax but semantically coherent with the groundtruth in training. Extensive experiments on both image and video captioning tasks show that the proposed objective leads to a win-win solution that consistently performs better on both the precision and recall sides. \section{Experiment} \label{sec:expr} In this section, we first introduce the experiment setup. Then we report the performance of the model trained by our proposed objective on standard precision-side evaluation metrics in the image captioning task and the video captioning task respectively. Finally, we discuss the model behavior on both the precision and recall sides. \subsection{Experiment Setup} For the image captioning task, we use the MSCOCO dataset \cite{mscoco}, which is one of the largest image caption datasets and contains more than 120K images crawled from Flickr. Each image is annotated with 5 reference captions. We use the public split \cite{karpathy} for experiments. For the video captioning task, we use the TGIF dataset \cite{tgif}, which is one of the largest video caption datasets and contains 100K animated GIFs collected from Tumblr and 120K caption sentences. We use the official split \cite{tgif} for experiments. 
For images, we use ResNet152 \cite{resnet} pretrained on ImageNet \cite{imagenet} and apply spatial mean pooling to get a $2048$-dim feature vector. For videos, we also use ResNet152 \cite{resnet} for a fair comparison with other works, rather than a stronger CNN such as I3D \cite{i3d}. We apply spatial-temporal mean pooling to get a $2048$-dim feature vector. For simplicity, we do not fine-tune the features on the caption datasets. We tune the hyper-parameter $\alpha$ in eq~\eqref{eq:problem} among $0.25$, $0.5$ and $0.75$ on the validation set and set it to $0.75$. We find that $0.75$ is a quite stable value for reaching the best performance across different datasets. \subsection{Image Captioning} We first study the contribution of our proposed objective by comparing it to training our model with the original sequence-level learning loss (SLL) and sequence-level learning with maximum-entropy regularization (SLL-ME) \cite{entropy_rl}. The weight of the maximum-entropy regularization in SLL-ME is tuned among $10^{-1}$, $10^{-2}$, $10^{-3}$ and set to $10^{-2}$ for the best performance. Both the network architecture and the input features are the same across SLL, SLL-ME and SLL-SLE (ours). We use beam search with a width of $5$ in the test stage. As shown in the middle block of table~\ref{tab:image_caption}, our model SLL-SLE improves over SLL and SLL-ME significantly on all metrics. The improvement of SLL-SLE over SLL-ME on all metrics (Meteor: $0.2$, CIDEr: $1.8$, SPICE: $0.2$) is much larger than the improvement of SLL-ME over SLL (Meteor: $0.0$, CIDEr: $0.6$, SPICE: $0.1$). This shows that the typical maximum-entropy regularization does not help to solve the issue of the original sequence-level objective in the captioning task. Our proposed sequence-level exploration is effective in guiding the model to explore more plausible captions in training, and consequently SLL-SLE generates more accurate captions in test. 
In the last block of table~\ref{tab:image_caption}, we also include the results of the SLL, SLL-ME and SLL-SLE objectives when combined with an attention architecture. Again, a similar trend is observed: SLL-SLE improves over SLL and SLL-ME significantly. We also compare our proposed model to various state-of-the-art (SOTA) models with different network architectures trained by either the word-level cross-entropy loss or the sequence-level learning objective. For the word-level XE loss, we compare to the NIC model \cite{google}, Adaptive \cite{adaptive_attention}, and Top-Down attention \cite{bottom_up}. For the sequence-level learning objective (SLL), we compare to self-critical learning (SCST:FC \& SCST:Att2in) \cite{self_critique} and Top-Down attention \cite{bottom_up}. As shown in table~\ref{tab:image_caption}, the proposed objective leads to better performance on all metrics over all SOTA models. \begin{table}[!t] \centering \caption{Performance improvement on image captioning: * means bottom-up region features are used with the attention architecture} \begin{tabular}{cccc}\toprule Method & Meteor & CIDEr & SPICE \\\cmidrule(lr){1-1}\cmidrule(lr){2-4} NIC \cite{google} & $23.7$ & $85.5$ & NA\\ Adaptive \cite{adaptive_attention} & $26.6$ & $108.5$ & NA\\ SCST:FC \cite{self_critique} & $25.5$ & $106.3$ & NA\\ SCST:Att2in \cite{self_critique} & $26.3$ & $111.4$ & NA\\ Top-Down-XE \cite{bottom_up} & $26.1$ & $105.4$ & $19.2$ \\ Top-Down-SLL \cite{bottom_up} & $26.5$ & $111.1$ & $20.2$ \\ \midrule SLL & $26.8$ & $115.0$ & $20.0$ \\ SLL-ME & $26.8$ & $115.6$ & $20.1$ \\ SLL-SLE (ours) & $\mathbf{27.0}$ & $\mathbf{117.2}$ & $\mathbf{20.3}$ \\\midrule SLL* & $26.6$ & $117.2$ & $19.4$ \\ SLL-ME* & $26.7$ & $117.9$ & $19.5$ \\ SLL-SLE* (ours) & $\mathbf{27.0}$ & $\mathbf{119.6}$ & $\mathbf{19.9}$ \\\bottomrule \end{tabular} \label{tab:image_caption} \end{table} \subsection{Video Captioning} Similarly, we first compare our proposed objective with the original sequence-level learning loss (SLL) and 
sequence-level learning with maximum-entropy regularization (SLL-ME). As we fix the hyper-parameter across datasets for our method (SLL-SLE), we also fix the hyper-parameter (the weight of the maximum-entropy regularization) in SLL-ME and set it to $10^{-2}$, the same as on the MSCOCO dataset. We use beam search with a width of $5$ in the test stage. As shown in the last three rows of table~\ref{tab:video_caption}, our model SLL-SLE again improves over SLL and SLL-ME significantly on all metrics. Actually, SLL-ME performs worse than SLL on all metrics, which indicates that the maximum-entropy regularization is not stable across datasets and may even deteriorate the performance in some captioning tasks. Our model SLL-SLE improves over SLL by $0.6$ on Meteor, $2.7$ on CIDEr and $0.6$ on SPICE with the same hyper-parameter setting as on MSCOCO. This shows that the proposed sequence-level exploration term is stable and robust across datasets and helpful to the model performance in general. \begin{table} \centering \caption{Performance improvement on video captioning} \begin{tabular}{cccc}\toprule Method & METEOR & CIDEr & SPICE \\\cmidrule(lr){1-1}\cmidrule(lr){2-4} Official\cite{tgif} & $16.7$ & $31.6$ & NA \\ Show-adapt\cite{show_adapt} & $16.2$ & $29.8$ & NA\\ \midrule SLL & $17.8$ & $45.9$ & $15.9$ \\ SLL-ME & $18.2$ & $48.1$ & $16.0$ \\ SLL-SLE (ours) & $\mathbf{18.8}$ & $\mathbf{50.8}$ & $\mathbf{16.6}$ \\\bottomrule \end{tabular} \label{tab:video_caption} \end{table} We also compare our proposed model to various state-of-the-art (SOTA) models on the video captioning task. The TGIF dataset comes with an official baseline (Official) \cite{tgif} trained by the word-level cross-entropy loss. Show-adapt \cite{show_adapt} leverages both TGIF and other datasets in training. Comparing our implementation of the baseline model SLL to these models, we see that it performs better than both, which indicates that SLL is already a very strong baseline. 
This further suggests that the improvement over SLL is not trivial. \begin{table}[!t] \centering \caption{Comparison of models trained by XE, SLL, SLL-ME, our SLL-SLE on both precision and diversity sides (MSCOCO dataset): (rs) denotes random sampling decoding and (bs) denotes beam search decoding} \label{tab:SC_set} \setlength{\tabcolsep}{3pt} \begin{tabular}{ccccc}\toprule \multirow{2}{*}{Method} & precision & \multicolumn{3}{c}{recall}\\ & CIDEr & Div1 ($\uparrow$) & Div2 ($\uparrow$) & mBleu4 ($\downarrow$) \\\cmidrule(lr){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-5} XE (rs) & $74.2$ & $0.57$ & $0.78$ & $0.06$ \\ SLL (rs) & $114.6$ & $0.25$ & $0.32$ & $0.81$ \\ SLL-ME (rs) & $115.1$ & $0.25$ & $0.33$ & $0.80$ \\ SLL-SLE (rs) & $115.9$ & $0.29$ & $0.40$ & $0.68$ \\\midrule XE (bs) & $102.5$ & $0.27$ & $0.35$ & $0.80$ \\ SLL (bs) & $115.0$ & $0.26$ & $0.35$ & $0.78$\\ SLL-ME (bs) & $115.6$ & $0.26$ & $0.34$ & $0.79$ \\ SLL-SLE (bs) & $\mathbf{117.2}$ & $\mathbf{0.27}$ & $\mathbf{0.36}$ & $\mathbf{0.76}$ \\\midrule VAE\cite{vae} (bs) & $100.0$ & NA & NA & NA \\ GAN\cite{caption_gan} (rs) & NA & $0.41$ & $0.55$ & $0.51$\\ GAN\cite{caption_gan} (bs) & NA & $0.34$ & $0.44$ & $0.70$ \\\bottomrule \end{tabular} \end{table} \subsection{Discussion of Model Behavior on Precision and Recall} We study the model behavior on the precision and recall sides for the following objectives: cross-entropy (XE), sequence-level learning (SLL), sequence-level learning with maximum entropy (SLL-ME), and our SLL-SLE. On the precision side, we use the CIDEr metric as it is shown to have a good correlation with human judgement. On the recall side, we use the diversity metrics Div1, Div2 and mBleu \cite{caption_gan} as proxy measurements. To calculate the diversity metrics, we adopt two decoding strategies following \cite{caption_gan}. The first decoding strategy is to sample $5$ captions from the model for each image (rs). The second decoding strategy is to use beam search to obtain the top $5$ captions from the model for each image (bs). 
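Since beam search with a width of $5$ is used throughout the test stage, a minimal sketch of the procedure may be useful. The toy bigram "model" and function names below are hypothetical, not part of our implementation:

```python
import math

def beam_search(step_probs, beam_width=5, max_len=4, eos="</s>"):
    """Generic beam search: step_probs(prefix) -> {word: prob}.
    At each step, keeps the beam_width prefixes with the highest
    cumulative log-probability; finished hypotheses carry over."""
    beams = [((), 0.0)]  # (prefix tuple, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix and prefix[-1] == eos:   # already finished
                candidates.append((prefix, score))
                continue
            for word, prob in step_probs(prefix).items():
                candidates.append((prefix + (word,), score + math.log(prob)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return [(" ".join(prefix), score) for prefix, score in beams]

# A toy deterministic "caption model" over a tiny vocabulary.
def toy_step(prefix):
    table = {
        (): {"a": 0.9, "the": 0.1},
        ("a",): {"man": 0.6, "dog": 0.4},
        ("a", "man"): {"riding": 0.7, "</s>": 0.3},
    }
    return table.get(prefix, {"</s>": 1.0})
```

Here `beam_search(toy_step)` returns the hypotheses sorted by score, with `a man riding </s>` on top; a width of $1$ would reduce to greedy decoding.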
The reported CIDEr is the average of the CIDEr scores of the $5$ sampled captions. As shown in table~\ref{tab:SC_set}, compared to SLL and SLL-ME, the proposed objective SLL-SLE performs better not only on the precision side but also on the recall side under both the random sampling and beam search decoding strategies. Compared to XE, SLL-SLE improves on both the precision and recall aspects under the beam search decoding strategy. We also list the VAE's and GAN's performance on the precision and recall aspects for reference. \begin{figure}[!t] \centering \begin{subfigure}{.85\linewidth} \includegraphics[width=\linewidth]{pic/mscoco_case} \end{subfigure} \caption{Case study of model behavior on precision and recall by sampling strategy in decoding} \label{fig:diversity} \vspace{-5pt} \end{figure} Figure~\ref{fig:diversity} shows that the proposed objective can generate diverse and high-quality captions with the sampling strategy. The quality of the captions generated by the XE model is not good. The SLL model has limited diversity and keeps generating almost the same caption with the sampling strategy. \section{Introduction} \label{sec:introduction} Captioning is one of the core tasks in the vision and language fields. The input is an image or video and the output is a descriptive sentence. In terms of the output structure, the descriptive sentence is a sequence, which is more complex than the output of classification and detection tasks and therefore poses a challenge for the learning objective in captioning tasks. Furthermore, there exist multiple correct captions for the same input, and it is impossible to enumerate all the correct captions when collecting the groundtruth. These two unique properties, the sequence structure and multiple correct groundtruth captions, make captioning tasks difficult and worth special treatment for their own learning/training objective. 
\begin{figure} \centering \includegraphics[width=.9\linewidth]{pic/introduction} \caption{Illustration of the limitations of current sequence-level learning: $5$ captions randomly sampled from the model \cite{self_critique} are almost identical, which indicates that the model is not likely to have high recall. } \label{fig:introduction} \end{figure} Most caption models \cite{google, self_critique,bottom_up} are based on the encoder-decoder architecture, and we only discuss training objectives associated with this architecture. The original training objective is the cross-entropy loss \cite{google}, which provides word-level supervision. To be specific, the decoder is fed with the word from the groundtruth caption at each step and predicts the word at the next step. Thus, the decoder is trained to focus on the correctness of predicting each word separately. However, at each step in the test stage, the decoder is fed with the word predicted at the previous step rather than the groundtruth word. This leads to a gap between training and test and limits the performance at test time. Later, the sequence-level learning objective was proposed to address this gap \cite{mixer, self_critique}. In this objective, the quality of the caption is evaluated by a score only after the whole sentence has been generated by the decoder, and that score is used to guide the model training. That is, the decoder predicts the word at each step based on the word predicted at the last step in both the training and test stages. The sequence-level learning objective \cite{mixer, self_critique} is shown to improve performance significantly on most evaluation metrics, such as CIDEr \cite{cider}, METEOR \cite{meteor} and SPICE \cite{spice}, compared to the cross-entropy loss. In this paper, we show the limitations of the current sequence-level learning objective from both theoretical and empirical aspects, despite its success in captioning tasks. 
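The train-test gap described above can be illustrated with a toy deterministic "decoder" that consumes its own predictions: one early off-distribution prediction derails every later step. The bigram table and function below are hypothetical and only meant to make the compounding-error behavior concrete:

```python
# Hypothetical next-word lookup standing in for a trained decoder.
bigram = {"<s>": "a", "a": "man", "man": "rides", "rides": "</s>"}

def free_running_decode(first_input, steps=3):
    """Test-time decoding: each step consumes the previous *prediction*,
    not the groundtruth word used during word-level training."""
    out, w = [], first_input
    for _ in range(steps):
        w = bigram.get(w, "</s>")  # unknown context -> the model gives up
        out.append(w)
        if w == "</s>":
            break
    return out
```

Starting from the in-distribution context `<s>` yields `['a', 'man', 'rides']`, while a single unfamiliar first token (e.g. `'the'`) immediately truncates the output — a failure mode that purely word-level, teacher-forced training never exposes the model to.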
From the theoretical aspect, we show that the current objective is equivalent to optimizing the precision side of the predicted caption set. The standard precision is defined based on the set membership of an element, where the set membership function outputs a 0-1 value for a caption, describing whether or not the caption belongs to a set. We relax the 0-1 set membership function used in the precision calculation to a real-valued output within the range $[0, 1]$. The relaxed set membership function describes the confidence of a caption belonging to a set. In this way, we show that the current sequence-level learning objective is equivalent to maximizing the generalized precision with the relaxed set membership function, and that it overlooks the recall side of the problem. From the empirical aspect, we show that the model trained by the current sequence-level learning objective tends to cover very few different captions in its predictions and gets low scores on recall-related metrics. As illustrated in figure~\ref{fig:introduction}, we randomly sample $5$ sentences from the model and the resulting $5$ sentences are almost identical. To overcome the limitations of the current sequence-level learning objective, we propose to add a sequence-level exploration term to boost recall. In this exploration term, we maximize the difference between the generated captions (sequence-level) of the same input. One example of a difference measurement could be the edit distance. In the context of the captioning task, the proposed exploration term corresponds to maximizing the diversity \cite{caption_gan} of the generated captions. Furthermore, we show that diversity is a proxy measurement of recall for captioning. In training, this term encourages the model to explore more different captions. Such sequence-level exploration is different from the typical maximum-entropy exploration regularization \cite{entropy_rl} that is put on the policy in reinforcement learning. 
Typical maximum-entropy exploration regularization maximizes the uncertainty of the policy at each step. That is, given the generated words up to step $t$, it maximizes the uncertainty of the next word. We call this word-level exploration. In summary, the contributions of this work are:\\* 1) We show the limitations of the current sequence-level learning objective for the captioning task from both theoretical and empirical aspects. \\* 2) We propose a new learning objective for the captioning task which adds a sequence-level exploration term to boost recall. \\* 3) The derived solution from the proposed objective achieves better performance on various standard evaluation metrics of the precision side. It also improves the performance on recall-related metrics. \section{Limitations of Current Sequence-level Learning} \label{sec:limitation} In this section, we show the limitations of current sequence-level learning for the captioning task from both theoretical and empirical aspects. Theoretically, we show that the objective function of current sequence-level training is equivalent to optimizing the generalized precision with a relaxed set membership function on the predicted captions. Empirically, we show that the model trained by current sequence-level learning tends to generate very few different captions for the same input and does not get a high score on recall-related metrics. \subsection{Limitation from theory} We first relax the set membership function in the standard precision measurement for the captioning task. Then we show that the objective of current sequence-level learning actually optimizes the generalized precision with the relaxed set membership function in the context of the captioning task. Suppose that the space of all possible sentences is $\mathcal{Y}$, the groundtruth sentence set of an input (image / video) $x_i$ is $Y$, and the predicted sentence set of that input by the captioning model is $\widetilde{Y}$. 
Then the precision is defined by: \begin{align} Precision(Y, \widetilde{Y}) &= \frac{|Y \cap \widetilde{Y}|}{|\widetilde{Y}|}\nonumber\\ &= \frac{\sum_{y \in \mathcal{Y}}\delta[y \in Y] \delta[y \in \widetilde{Y}]}{\sum_{y \in \mathcal{Y}}\delta[y \in \widetilde{Y}]}\nonumber\\ &= \sum_{y \in \mathcal{Y}} \delta[y \in Y] \underbrace{\frac{\delta[y \in \widetilde{Y}]}{\sum_{y' \in \mathcal{Y}}\delta[y' \in \widetilde{Y}]}}_{p(y \in \widetilde{Y})}\nonumber\\ &= \sum_{y \in \mathcal{Y}} \delta[y \in Y] p(y \in \widetilde{Y})\label{eq:precision} \end{align} The summation in eq~\eqref{eq:precision} contains two terms: $\delta[y\in Y]$ and $p(y \in \widetilde{Y}) =\frac{\delta[y \in \widetilde{Y}]}{\sum_{y' \in \mathcal{Y}}\delta[y' \in \widetilde{Y}]}$. In the $\delta[y \in Y]$ term, the $\delta$ function checks whether or not the caption $y$ belongs to the groundtruth sentence set $Y$. In the $p(y \in \widetilde{Y})$ term, the $\delta$ function checks whether or not the caption $y$ belongs to the predicted sentence set $\widetilde{Y}$. For the $\delta[y \in Y]$ term, we relax the binary-valued $\delta$ function to a real-valued function $\Delta(y, Y)$ with output in the range $[0, 1]$: \begin{equation} \label{eq:first} \delta[y \in Y] \to \Delta(y, Y) \end{equation} $\Delta(y, Y)$ indicates the likelihood of each individual $y$ being within the set $Y$ and is a relaxed set membership function. One natural choice for $\Delta(y, Y)$ is to use the evaluation metric normalized by its maximum value. As all the current evaluation metrics in the captioning task are bounded, they can be normalized properly. For simplicity, we assume that we are dealing with an evaluation metric $\Delta(y, Y)$ that has already been normalized. The term $p(y \in \widetilde{Y})$ can be interpreted as the chance of the sentence $y$ being within the set $\widetilde{Y}$. 
Note that the value of $\delta[y \in \widetilde{Y}]$ is 0-1, which represents whether the captioning model considers sentence $y$ as correct or not. Correspondingly, $p(y \in \widetilde{Y})$ can only take the value $0$ if $y \notin \widetilde{Y}$ or $\frac{1}{|\widetilde{Y}|}$ if $y \in \widetilde{Y}$. It does not cover the whole range $[0, 1]$ of a probability. If we again relax the 0-1 membership function $\delta[y \in \widetilde{Y}]$ to a real-valued confidence, $p(y \in \widetilde{Y})$ can cover the whole range $[0, 1]$ of a probability. After the relaxation, $p(y \in \widetilde{Y})$ is actually the probability of the caption $y$ from the captioning model. Thus, by using the relaxed set membership function, we replace $p(y \in \widetilde{Y})=\frac{\delta[y \in \widetilde{Y}]}{\sum_{y' \in \mathcal{Y}}\delta[y' \in \widetilde{Y}]}$ with $p_\theta(y|x_i)$, which is the probability from the captioning model: \begin{equation} \label{eq:second} p(y \in \widetilde{Y}) = \frac{\delta[y \in \widetilde{Y}]}{\sum_{y' \in \mathcal{Y}} \delta[y' \in \widetilde{Y}]} \to p_\theta(y|x_i) \end{equation} Substituting $\delta[y \in Y]$ and $p(y \in \widetilde{Y})$ in eq~\eqref{eq:precision} by \eqref{eq:first} and \eqref{eq:second} respectively, we get the generalized precision (GP) for the captioning task: \begin{equation} \label{eq:gp} GP(Y, \theta|x_i) = \sum_{y \in \mathcal{Y}} \Delta(y, Y) p_\theta(y|x_i) \end{equation} We can use the generalized precision $GP$ to rewrite the original sequence-level learning objective for the captioning task. 
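The generalized precision in eq~\eqref{eq:gp} is simply the expected relaxed-membership score under the model distribution. The toy sketch below makes this concrete; the unigram-overlap $\Delta$ is a hypothetical stand-in for a normalized metric such as CIDEr, and the tiny caption distribution is invented for illustration:

```python
def delta(y, refs):
    """Hypothetical relaxed membership Delta(y, Y) in [0, 1]:
    best unigram precision of y against any reference caption."""
    words = set(y.split())
    return max(len(words & set(r.split())) / len(words) for r in refs)

def generalized_precision(p_model, refs):
    """GP(Y, theta | x) = sum_y Delta(y, Y) * p_theta(y | x)."""
    return sum(delta(y, refs) * p for y, p in p_model.items())

refs = ["a man rides a horse"]
p_model = {"a man rides a horse": 0.5,  # Delta = 1.0
           "a dog runs": 0.3,           # Delta = 1/3 (only "a" overlaps)
           "blue sky": 0.2}             # Delta = 0.0
```

With this distribution the generalized precision is $0.5\cdot 1 + 0.3\cdot\tfrac{1}{3} + 0.2\cdot 0 = 0.6$; note that the model could score just as well by putting all of its mass on the single exact caption, which is precisely the recall blindness discussed in this section.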
Setting $\Delta(y, Y)$ as the reward, the original objective is to maximize the expected return: \begin{equation} \label{eq:return} J(\theta) = \sum_{i = 1}^n \mathbb{E}_{p_\theta(y|x_i)}\Delta(y, Y) \end{equation} Comparing eq~\eqref{eq:return} with the generalized precision measurement defined in eq~\eqref{eq:gp}, we see that they are exactly the same: \begin{align} \label{eq:equal} \begin{split} J(\theta) &= \sum_{i=1}^n \sum_{y \in \mathcal{Y}} \Delta(y, Y) p_\theta(y|x_i)\\ &= \sum_{i=1}^n GP(Y, \theta|x_i) \end{split} \end{align} This means that the sequence-level learning objective only optimizes the precision side of the captions predicted by the captioning model. However, there exist multiple correct answers for the same input $x_i$, which means that the recall side should also be taken into account when training the captioning model. On the contrary, the original objective totally overlooks the recall side of the problem. \subsection{Limitation from empirical results} Complementary to the theoretical analysis above, we also measure the precision and recall sides of the model trained by the current sequence-level learning objective. The precision side can be measured by the standard evaluation metrics in captioning tasks, such as METEOR \cite{meteor} and SPICE \cite{spice}. As it is not possible to collect all the correct answers for an input $x_i$, directly computing recall is not feasible. Instead, we use the set level diversity metrics \cite{caption_gan} \emph{Div-1}, \emph{Div-2} and \emph{mBleu} as proxy measurements of recall. The set level diversity metrics are defined on a set of captions, $\widetilde{Y}$, corresponding to the same input $x_i$. \begin{itemize} \item \emph{Div-1}: the ratio of the number of unique unigrams in $\widetilde{Y}$ to the number of words in $\widetilde{Y}$. Higher is more diverse. \item \emph{Div-2}: the ratio of the number of unique bigrams in $\widetilde{Y}$ to the number of words in $\widetilde{Y}$. Higher is more diverse. 
\item \emph{mBleu}: the Bleu score is computed between each caption in $\widetilde{Y}$ and the rest. The mean of these Bleu scores is the mBleu score. Lower is more diverse. \end{itemize} To report the set level diversity metrics, we sample $5$ captions from the model for each input. Correspondingly, when calculating the precision metric CIDEr, we average the CIDEr scores of the $5$ sampled captions. Here is the reasoning for why the above diversity metrics are related to recall. The standard recall is defined by: \begin{align} \begin{split} Recall(Y, \widetilde{Y}) &= \frac{|Y \cap \widetilde{Y}|}{|Y|}\\ & \propto |Y \cap \widetilde{Y}|\\ & \propto |\widetilde{Y}|\, Precision(Y, \widetilde{Y}) \end{split} \end{align} When the precision is fixed, we see that the recall is proportional to the size of the predicted set $\widetilde{Y}$. To compare the recall at the same precision level, we can instead compare the size of the predicted caption set from the model. In this way, any measurement of the size of the set $\widetilde{Y}$ can be considered a proxy measurement of recall. Directly measuring the size of $\widetilde{Y}$ by the number of captions is not meaningful if we are allowed to sample infinitely many times from the model. A more meaningful way to measure the size of $\widetilde{Y}$ is: \emph{given a fixed number of samples, calculate the difference between the sampled captions}. And this is exactly the quantity defined in the set level diversity metrics. 
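A minimal sketch of the three set level diversity metrics follows. \emph{Div-1} and \emph{Div-2} are implemented as defined above, while for \emph{mBleu} a simplified unigram-precision stand-in replaces the real BLEU score (an actual evaluation would use BLEU-4):

```python
def div_n(captions, n):
    """Div-n: number of unique n-grams in the caption set,
    divided by the total word count of the set."""
    ngrams, total_words = set(), 0
    for cap in captions:
        words = cap.split()
        total_words += len(words)
        ngrams.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return len(ngrams) / total_words

def mbleu(captions):
    """mBleu sketch: each caption is scored against the rest, then averaged.
    Unigram precision is a simplified stand-in for the BLEU score."""
    scores = []
    for i, cap in enumerate(captions):
        refs = captions[:i] + captions[i + 1:]
        words = cap.split()
        hits = sum(any(w in r.split() for r in refs) for w in words)
        scores.append(hits / len(words))
    return sum(scores) / len(scores)

caps = ["a man riding a horse", "a man riding a horse", "a man on a horse"]
```

For this low-diversity set, Div-1 is $5/15$, Div-2 is $6/15$, and the mBleu stand-in is $14/15$, matching the intuition that near-identical samples score poorly on the diversity (recall) side.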
\begin{table}[!t] \centering \caption{Comparison between the word-level cross-entropy loss (XE) and sequence-level learning (SLL) on the precision and recall sides} \label{tab:empirical} \begin{tabular}{ccccc}\toprule \multirow{2}{*}{Method} & Precision & \multicolumn{3}{c}{Recall} \\ & CIDEr ($\uparrow$) & Div1 ($\uparrow$) & Div2 ($\uparrow$) & mBleu4 ($\downarrow$) \\\cmidrule(lr){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-5} XE & 74.2 & $0.57$ & $0.78$ & $0.06$ \\ SLL & 114.6 & $0.25$ & $0.32$ & $0.81$ \\\bottomrule \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=\linewidth]{pic/scc_example} \caption{Illustration of $5$ captions sampled from models given the same input: XE is the model trained by the cross-entropy objective and SLL is the model trained by the sequence-level learning objective. } \label{fig:empirical} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{pic/scc_p} \caption{Illustration of the peak width of the caption distribution $p(y|x)$ based on empirical results of the sequence-level learning objective} \label{fig:illustrate_p} \end{figure} As shown in table~\ref{tab:empirical}, compared to the word-level cross-entropy (XE) loss, sequence-level learning (SLL) leads to a large performance drop on the recall side, though it improves the metrics on the precision side significantly. This is further illustrated by the examples shown in figure~\ref{fig:empirical}. In this example, the $5$ randomly sampled captions are almost identical for the model trained by the sequence-level learning (SLL) objective, while this is not an issue for the model trained by the word-level cross-entropy (XE) objective. We explain this observation by the peak width of the distribution. As illustrated in figure~\ref{fig:illustrate_p}, suppose we project the captions to a one-dimensional space and the width of the line segment containing the semantically coherent captions for an input $x_i$ is $\sigma$. 
Based on the empirical results observed in this section, the peak width of the model trained by the SLL objective must be much smaller than $\sigma$, so that most sampled sentences for input $x_i$ are almost identical. However, the peak width of an ideal model should be similar to $\sigma$. In this case, the samples from the model are likely to cover the semantically coherent space and consequently get a high score on recall. \section{Acknowledgement} We would like to express our great appreciation to Shiwan Zhao for insightful discussions and valuable suggestions. This work was partially supported by the National Natural Science Foundation of China (No. 61772535) and the Beijing Natural Science Foundation (No. 4192028). {\small \bibliographystyle{ieee_fullname} \section{Related Work} \label{sec:related} The dominant neural network architecture for the captioning task is based on the encoder-decoder framework \cite{bahdanau2014neural}. Early works \cite{google, baidu, s2s} use a convolutional neural network as the encoder and a recurrent neural network with LSTM cells \cite{lstm} as the decoder. In the image captioning task, Xu et al. \cite{spatial_attention} proposed spatial attention, which selects relevant image regions to generate image descriptions. In the video captioning task, Yao et al. \cite{temporal_attention} proposed temporal attention, which extends the attention mechanism in the temporal direction. After that, different variants of the attention mechanism were proposed to further improve performance, such as attention on semantic concepts \cite{semantic_attention,pan2017video,cvpr2} and adaptive attention on visual and linguistic contexts \cite{adjusted_attention,adaptive_attention,cvpr1}. The latest variation on the attention mechanism is the up-down attention \cite{bottom_up}, which enables attention to be calculated at the level of objects and other salient image regions. 
In addition to the attention mechanism, researchers have also proposed other modifications to the neural network architecture. Pan et al. \cite{pan2015hierarchical} utilized a hierarchical encoder to learn better visual representations. The original objective function \cite{google, baidu} used in the captioning task is the cross-entropy loss, which applies word-level supervision. To be specific, in training, the model is fed with the groundtruth word at each step, and the supervision monitors whether the model outputs the correct next word. We call such supervision word-level supervision. However, in the test stage, the model is fed with the word it predicted at the last step rather than the groundtruth word. This is known as the train-test gap in sequence prediction tasks. Bengio et al. \cite{scheduled_sampling} proposed scheduled sampling, a curriculum learning approach, to minimize this gap. Later, sequence-level training was proposed by Ranzato et al. \cite{mixer} to systematically address this issue. Different from word-level supervision, sequence-level learning evaluates the sentence only after the whole sentence has been generated. The sentence is evaluated by a reward measuring its semantic coherence with the groundtruth caption. The reward is usually set to an evaluation metric that has a high correlation with human judgement. Rennie et al. \cite{self_critique} further improved sequence-level learning by introducing a special baseline into the reward, namely the score of the caption greedily decoded from the current model. The sequence-level training objective has been widely used in captioning tasks to achieve state-of-the-art performance \cite{bottom_up,discriminative,qjin1,qjin2}. \section{Solution} \label{sec:solution} We first propose a new objective function to address the limitations of the current sequence-level learning objective shown in the last section. Then we derive the optimization procedure for this new objective function. 
Finally, we describe the network architecture and training details of our implementation. \subsection{Objective Function} Since diversity is a proxy measurement of recall, we introduce an additional diversity term into the original sequence-level learning objective function to cover the recall side of the problem: \begin{align} \label{eq:problem} \begin{split} \text{max}_\theta: &\alpha \underbrace{\sum_{y \in \mathcal{Y}} \Delta(y, y_i) p_\theta(y|x_i)}_{\text{precision}} + \\ & (1-\alpha) \underbrace{\sum_{y \in \mathcal{Y}}\sum_{y' \in \mathcal{Y}} d(y, y') p_\theta(y|x_i) p_\theta(y'|x_i)}_{\text{diversity}} \end{split} \end{align} In this objective function, $x_i$ is the input image or video, $y_i$ is the groundtruth caption, and $y$ and $y'$ are any two captions in the caption space $\mathcal{Y}$ that can be sampled from the caption model. $p_\theta(y|x_i)$ is the conditional probability given by the caption model. \\* $\bullet$ $\Delta(y, y_i)$ in the precision term measures the semantic coherence between caption $y$ and the groundtruth caption $y_i$. It is equivalent to $\Delta(y, Y)$ when there is only one groundtruth caption $y_i$ of input $x_i$. It encourages the model to put more probability mass $p_\theta(y|x_i)$ on captions that are semantically coherent with the groundtruth. Example choices for $\Delta(y, y_i)$ are METEOR, CIDEr, and SPICE, which are known to correlate well with human judgements. \\* $\bullet$ $d(y, y')$ in the diversity term measures the syntactic difference between two captions. It encourages the model to explore more ways to express the same semantic meaning. Example choices for $d(y, y')$ are edit distance or BLEU3/4, which measure differences in sentence structure. 
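As a concrete illustration, the objective in eq~\eqref{eq:problem} can be estimated by Monte Carlo from a handful of sampled captions. The following is a minimal sketch, in which \texttt{delta} and \texttt{dist} are placeholders for the chosen $\Delta$ and $d$ rather than implementations of any specific metric:

```python
def objective_estimate(samples, delta, dist, alpha):
    """Monte Carlo estimate of the precision + diversity objective.

    samples: captions drawn i.i.d. from p_theta(y | x_i)
    delta:   y -> Delta(y, y_i), coherence with the groundtruth caption
    dist:    (y, y') -> d(y, y'), syntactic distance between two captions
    alpha:   trade-off weight between the two terms
    """
    s = len(samples)
    # Precision term: E[Delta(y, y_i)] estimated by the sample mean.
    precision = sum(delta(y) for y in samples) / s
    # Diversity term: E[d(y, y')] estimated over all sample pairs.
    diversity = sum(dist(y, yp) for y in samples for yp in samples) / (s * s)
    return alpha * precision + (1 - alpha) * diversity
```

Note that when all samples are identical the diversity estimate is zero, which is exactly the degenerate, low-recall behaviour the added term penalizes.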
The diversity term differs from the standard maximum-entropy regularization used in reinforcement learning \cite{entropy_rl}, which is placed on the \emph{policy} as $\mathbb{H}(p_\theta(w_j|w_{<j}, x_i))$ and maximizes the uncertainty of the next word $w_j$ given the past words $w_{<j}$. The diversity term introduced here is placed directly on captions, which are \emph{trajectories} in reinforcement learning terms. Furthermore, we use a distance $d$ rather than the entropy of captions, which avoids the intractable estimation of the partition function $Z$ that would require summing over the probabilities of all captions. Using a distance $d$ also gives us the flexibility to plug in any measurement of difference in sentence structure. Thus, compared to standard maximum-entropy regularization, the diversity term has a more direct effect on encouraging the model to explore different captions and is more flexible in the choice of syntactic difference measurement. Putting the precision and diversity terms together, the proposed objective function encourages the model to \emph{explore captions that differ in syntax but are semantically coherent with the groundtruth caption $y_i$ of input $x_i$.} The hyper-parameter $\alpha$ balances the precision and diversity terms. \subsection{Optimization} We first show that the precision term of the objective function can be optimized directly with the REINFORCE algorithm \cite{rl}. We then show that the diversity term can be handled with a variation of the technique used in the REINFORCE algorithm. Finally, we derive the surrogate loss and a complete algorithm for our objective function. Following optimization convention, we always minimize the objective function. 
Thus, we take the negation of the objective function in eq~\eqref{eq:problem} and decompose it into two parts: \begin{align} \begin{split} L(\theta) &= \alpha L_1(\theta) + (1-\alpha) L_2 (\theta)\\ L_1(\theta) &= -\sum_{y \in \mathcal{Y}} \Delta(y, y_i) p_\theta(y|x_i)\\ L_2(\theta) &= -\sum_{y \in \mathcal{Y}} \sum_{y' \in \mathcal{Y}} d(y, y') p_\theta(y|x_i) p_\theta(y'|x_i) \end{split} \end{align} \emph{1. Solution to $L_1(\theta)$: } We can rewrite $L_1$ as an expectation: \begin{align} \begin{split} L_1(\theta)&=-\sum_{y \in \mathcal{Y}} \Delta(y, y_i) p_\theta(y|x_i)\\ &=-\mathbb{E}_{p_\theta (y|x_i)} [\Delta(y, y_i)] \end{split} \end{align} We can use REINFORCE \cite{rl} to calculate its gradient: \begin{align} \begin{split} \nabla L_1(\theta) &= -\mathbb{E}_{p_\theta (y|x_i)}[\Delta(y, y_i)\nabla \log p_\theta(y|x_i)]\\ &\approx -\Delta(\widetilde{y}, y_i) \nabla \log p_\theta(\widetilde{y}|x_i) \end{split} \end{align} The second line is a Monte Carlo estimate with a single caption $\widetilde{y}$ sampled from the model. \\* \emph{2. Solution to $L_2(\theta)$: } We can also rewrite $L_2$ as an expectation: \begin{align} \begin{split} L_2(\theta) &= -\sum_{y \in \mathcal{Y}}\sum_{y' \in \mathcal{Y}} d(y, y') p_\theta(y|x_i) p_\theta(y'|x_i)\\ &= -\mathbb{E}_{p_\theta(y|x_i)}\mathbb{E}_{p_\theta(y'|x_i)} [d(y, y')] \end{split} \end{align} Here two expectations are involved. Applying REINFORCE to the outer and inner expectations respectively gives: \begin{align} \begin{split} \nabla L_2(\theta) &= -\mathbb{E}_{p_\theta (y'|x_i)}\Big[ \mathbb{E}_ {p_\theta(y|x_i) } \big[d(y, y') \nabla \log p_\theta(y|x_i)\big]\Big] \\ & -\mathbb{E}_{p_\theta (y|x_i)}\Big[ \mathbb{E}_{p_\theta (y'|x_i)} \big[d(y, y')\nabla \log p_\theta(y'|x_i)\big]\Big] \end{split} \end{align} Approximating this by Monte Carlo sampling leads to the following procedure: we sample $s$ captions $\widetilde{y}_1, \dots, \widetilde{y}_s$ and calculate their pairwise distances. 
Summing the contributions of all samples, the estimated gradient is: \begin{equation} \nabla L_2 (\theta) \approx -\frac{2}{s^2} \sum_{j=1}^s \big( \sum_{k=1}^s d(\widetilde{y}_j, \widetilde{y}_k) \nabla \log p_\theta(\widetilde{y}_j|x_i) \big) \end{equation} \emph{3. Complete solution: } In the standard policy gradient of reinforcement learning, the multiplier before $\nabla \log p_\theta (\widetilde{y}_j|x_i)$ represents the reward. In the gradient of $L_2$, the multiplier for each sample $\widetilde{y}_j$ is $\sum_{k=1}^s d(\widetilde{y}_j, \widetilde{y}_k)$, the sum of sample $\widetilde{y}_j$'s distances to the other samples of input $x_i$. This aligns exactly with our formulation of $L_2$, the diversity term. The multiplier can be viewed as a ``reward'' that involves multiple samples of the input $x_i$ jointly, whereas the standard reward uses each sample separately. Finally, we wrap all the gradients of $L(\theta)$ into the following surrogate loss of the entire stochastic computation graph \cite{stochastic_computation_graph}: \begin{align} \mathcal{L}(\theta) = &\frac{1}{s}\sum_{j=1}^s\mathcal{L}^j(\theta) \label{eq:surrogate_loss}\\ \mathcal{L}^j(\theta) = & -\alpha \Delta(\widetilde{y}_j, y_i) \log p_\theta(\widetilde{y}_j|x_i) \label{eq:surrogate_loss_sample}\\ &-(1-\alpha) \frac{2}{s} \sum_{k=1}^s d(\widetilde{y}_j, \widetilde{y}_k) \log p_\theta(\widetilde{y}_j|x_i) \nonumber \end{align} Following the standard procedure for sequence-level learning in the captioning task, we first train the model with the word-level cross-entropy loss and then switch to this surrogate loss. Algorithm~\ref{alg:alg} summarizes the entire training process. 
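In an automatic differentiation framework, eq~\eqref{eq:surrogate_loss} amounts to weighting each sampled caption's log-probability by a per-sample multiplier. A framework-agnostic sketch (log-probabilities, $\Delta$ scores, and pairwise distances are assumed to be computed elsewhere):

```python
def surrogate_loss(logps, deltas, D, alpha):
    """Surrogate loss whose gradient matches the derivation above.

    logps[j]  = log p_theta(y_j | x_i) for sampled caption y_j
    deltas[j] = Delta(y_j, y_i), coherence with the groundtruth
    D[j][k]   = d(y_j, y_k), pairwise syntactic distances
    """
    s = len(logps)
    loss = 0.0
    for j in range(s):
        # Per-sample multiplier: precision reward plus diversity "reward".
        mult = alpha * deltas[j] + (1 - alpha) * (2.0 / s) * sum(D[j])
        loss += -mult * logps[j]
    return loss / s
```

Differentiating $-m_j \log p_\theta(\widetilde{y}_j|x_i)$ with the multiplier $m_j$ treated as a constant reproduces the REINFORCE-style gradients derived above.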
\begin{algorithm} \caption{Training algorithm of sequence-level exploration} \label{alg:alg} \begin{algorithmic}[1] \For{epoch in [0, M)} \State train by cross-entropy loss \EndFor \For{epoch in [M, N)} \For{each instance $x_i$} \State sample $s$ captions $\widetilde{y}_1, \dots, \widetilde{y}_s$ \For{each sample $\widetilde{y}_j$} \State calculate $\mathcal{L}^j(\theta)$ as in eq~\eqref{eq:surrogate_loss_sample} \EndFor \State calculate surrogate loss $\mathcal{L}(\theta)$ as in eq~\eqref{eq:surrogate_loss} \State update parameter $\theta$ by stochastic gradient descent \EndFor \EndFor \end{algorithmic} \end{algorithm} \subsection{Network Architecture and Training Details} Our proposed objective and solution are compatible with any captioning model that follows the encoder-decoder architecture \cite{google}. The encoder depends on the input (image or video) and will be specified in the experiment section. The decoder is an RNN with LSTM cells and a hidden dimension of $512$. We add one fully connected layer after the encoder to reduce the dimension to $512$; at step $0$, the hidden state is initialized with the output of this layer. We use the CIDEr metric to calculate $\Delta(y, y_i)$ and BLEU3 + BLEU4 to calculate $d(y, y')$ in eq~\eqref{eq:surrogate_loss}. We set the number of samples $s$ to $5$. To reduce the variance introduced by the Monte Carlo sampling step when estimating the gradient, we follow the standard practice of using baselines. For the gradient of the precision term, we set the baseline to the CIDEr score of the caption greedily decoded from the model, following \cite{sc}. For the gradient of the diversity term, we set the baseline to $\frac{1}{s^2}\sum_{k=1}^s \sum_{j=1}^s d(\widetilde{y}_j, \widetilde{y}_k)$, the average of all pairwise distances between sampled captions. We use the ADAM optimizer.
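The two baselines can be folded into the per-sample multipliers before they weight the log-probabilities. A sketch of this bookkeeping (the greedy-decoding score \texttt{b\_greedy} and all scores/distances are assumed to be given; the exact scaling of the diversity baseline is our reading of the description above, not a prescribed formula):

```python
def baselined_multipliers(deltas, D, alpha, b_greedy):
    """Per-sample reward multipliers with variance-reducing baselines:
    precision baseline = score of the greedily decoded caption (b_greedy),
    diversity baseline = average of all pairwise sample distances."""
    s = len(deltas)
    b_div = sum(sum(row) for row in D) / (s * s)   # mean pairwise distance
    mults = []
    for j in range(s):
        div_reward = sum(D[j]) / s                 # sample j's mean distance
        mults.append(alpha * (deltas[j] - b_greedy)
                     + (1 - alpha) * 2.0 * (div_reward - b_div))
    return mults
```

Subtracting a baseline leaves the expected gradient unchanged but shrinks the magnitude of the per-sample multipliers, reducing Monte Carlo variance.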
\section{Introduction} It is commonly believed that the hard X-ray emission in solar flares is produced by bremsstrahlung of energetic electrons in dense layers of the solar atmosphere (Brown 1971; Tandberg-Hanssen \& Emslie 1988). It is also known that this scenario still has several unresolved drawbacks, as summarized in the paper by Brown et al. (1990). For example, the bremsstrahlung mechanism generating the hard X-ray bursts has a very low efficiency, and therefore huge electron beam fluxes $E_\mathrm{F}$ = 10$^{9}$ - 10$^{12}$ ergs s$^{-1}$ cm$^{-2}$ are required to explain the observed X-ray fluxes (Hoyng et al. 1978). This means that at the acceleration site in the low corona, with a relatively low density ($n_\mathrm{e} \sim$ 10$^9$ cm$^{-3}$), a substantial part of all plasma electrons needs to be accelerated. Furthermore, these electron beams represent huge electric currents that have to be neutralized by return currents. The return current is a natural part of any beam-plasma system (van den Oord 1990). The beam-plasma interaction has been studied for a long time, starting with the paper by Bohm \& Gross (1949). While the first 1-D models considered the electrostatic aspects of this interaction (two-stream instability, generation of Langmuir waves, and quasi-linear relaxation of the beam; see e.g. Melrose 1980, Birdsall \& Langdon 1985, Benz 1993, Karlick\'y 1997, and the references therein), newer 3-D studies include the return current and electromagnetic effects, which lead to many further instabilities (Weibel, filamentation, oblique, Bell, Buneman, and so on; see Karlick\'y 2009, Bret 2009). (Remark: The Weibel instability in the sense used here and in the paper by Nishikawa et al. (2008) is also known as the filamentation instability (Bret 2009).) 
To cover all these processes, especially the inductive processes neutralizing the total electric current, in the present study we use a general and fully self-consistent (basic plasma physics) approach -- 3-D electromagnetic particle-in-cell (PIC) modelling. All the above-mentioned processes necessarily modify the electron distribution function in the flare X-ray source. Moreover, contrary to simple models, which generally predict a high anisotropy of electrons and X-rays, the observed hard X-ray directivities were found to be low (e.g. Kane 1983). Furthermore, Kontar \& Brown (2006) found a low anisotropy of the electron distribution function in the X-ray source by separating the reflected X-ray emission from the direct one. They concluded that conventional solar flare models with downward beaming are excluded. In the present paper we want to demonstrate the importance of the above-mentioned processes for the evolution of the beam-plasma system with the return current. Our aim is to show their effects on the anisotropy of the electron distribution function in this system and thus on the directivity of the corresponding X-ray emission. Using the 3-D electromagnetic PIC model, for the first time in the study of X-ray directivity, we compute the evolution of the beam-plasma system with the return current depending on the magnetic field in the beam propagation direction. Then, assuming that the resulting electron distribution functions generate X-ray bremsstrahlung, we calculate the directivity of the associated X-ray emission. (For a detailed analysis of the instabilities and waves produced in the studied beam-plasma system, see Karlick\'y et al. 2008, Karlick\'y 2009, Karlick\'y and B\'arta 2009.) The layout of the paper is as follows: In Section 2 we outline our model. The results of computations of the electron distribution functions with the return current are shown in Section 3. In Section 4 we present the corresponding X-ray directivities. 
Finally, in Section 5 the results are discussed and conclusions given. \section{Model} \begin{table}[t] \begin{minipage}[t]{\columnwidth} \caption{Model parameters.} \label{tab4} \centering \renewcommand{\footnoterule}{} \begin{tabular}{ccccc} \hline Model & $m_\mathrm{i}$/$m_\mathrm{e}$ & $n_\mathrm{b}$/$n_\mathrm{e}$ & $v_\mathrm{b}/c$ & $\omega_\mathrm{ce}/\omega_\mathrm{pe}$ \\ \hline A & 16 & 1/8 & 0.666 & 0.0 \\ B & 16 & 1/8 & 0.666 & 0.1 \\ C & 16 & 1/8 & 0.666 & 0.5 \\ D & 16 & 1/8 & 0.666 & 0.7 \\ E & 16 & 1/8 & 0.666 & 1.0 \\ F & 16 & 1/8 & 0.666 & 1.3 \\ G & 1 & 1/8 & 0.666 & 0.0 \\ H & 1 & 1/8 & 0.666 & 1.3 \\ I & 100 & 1/8 & 0.666 & 0.0 \\ J & 100 & 1/8 & 0.666 & 1.3 \\ K & 16 & 1/8 & 0.333 & 0.0 \\ L & 16 & 1/8 & 0.333 & 1.3 \\ M & 16 & 1/40 & 0.666 & 0.0 \\ N & 16 & 1/8 & 0.234\footnote{mean velocity of the power-law beam distribution} & 0.0 \\ O & 16 & 1/8 & 0.234\footnote{mean velocity of the power-law beam distribution} & 1.3 \\ \hline \end{tabular} \end{minipage} \end{table} For our study we used a 3-D (3 spatial and 3 velocity components) relativistic electromagnetic PIC code (Buneman 1993). The system sizes are $L_x$ = 45$\Delta$, $L_y$ = 45$\Delta$, and $L_z$ = 600$\Delta$ (where $\Delta$ is the grid size). For a basic set of models we initiated a spatially homogeneous electron-proton plasma with the proton-electron mass ratio $m_\mathrm{p}/m_\mathrm{e}$=16 (Models A-F, and K-O in Table 1). This is unrealistic and it was chosen to shorten the proton skin depth and computations. Nevertheless, the ratio is still sufficient to well separate the dynamics of electrons and protons. For comparison we added models with the mass ratio $m_\mathrm{p}/m_\mathrm{e}$=1 and 100 (Models G-J in Table 1). The electron thermal velocity is $v_{T\mathrm{e}}$ = 0.06 $c$ (the corresponding temperature is $T_\mathrm{e}$ = 21.4 MK), where $c$ is the speed of light. In all models, 160 electrons and 160 protons per cube grid were used. 
The plasma frequency is $\omega_\mathrm{pe}$ = 0.05 and the electron Debye length is $\lambda_\mathrm{D}$ = 0.6 $\Delta$. In the models with the proton-electron mass ratio $m_\mathrm{p}/m_\mathrm{e}$=16, the electron and proton skin depths are $\lambda_\mathrm{ce}$ = 10 $\Delta$ and $\lambda_\mathrm{ci}$ = 40 $\Delta$, respectively. \begin{table} \begin{minipage}[t]{\columnwidth} \caption{The real spatial and time scales as a function of the chosen plasma density $n_\mathrm{e}$.} \label{catalog} \centering \renewcommand{\footnoterule}{} \begin{tabular}{ccccc} \hline $n_\mathrm{e}$ & $\omega_\mathrm{pe}$ & t = 200/$\omega_\mathrm{pe}$ & $\lambda_\mathrm{D}$ & 1/$\nu_0$ \\ (cm$^{-3}$) & (s$^{-1}$) & (s) & (cm) & (s) \\ \hline 10$^8$ & 5.64 $\times$ 10$^8$ & 3.55 $\times$ 10$^{-7}$ & 3.19 & 3.61 \\ 10$^9$ & 1.78 $\times$ 10$^{9}$ & 1.12 $\times$ 10$^{-7}$ & 1.01 & 0.36 \\ 10$^{10}$ & 5.64 $\times$ 10$^{9}$ & 3.55 $\times$ 10$^{-8}$ & 0.32 & 0.03 \\ 10$^{11}$ & 1.78 $\times$ 10$^{10}$ & 1.12 $\times$ 10$^{-8}$ & 0.10 & 0.003 \\ \hline \end{tabular} \end{minipage} \end{table} Then, we included one monoenergetic beam, homogeneous throughout the numerical box (see Models A-M). Note that, owing to physical and numerical simplicity and to the propagation effect by which faster electrons outrun slower ones, in most cases we consider monoenergetic electron beams, even though power-law distributions are used in the interpretation of solar flare hard X-rays. The power-law distributions are derived as mean distributions over the whole X-ray source for much longer timescales than those considered in the present study. In much smaller flare volumes and on much shorter timescales, the monoenergetic beam is a reasonable choice. Nevertheless, in Models N and O we added computations with the beam having a power-law distribution function. 
To show effects of instabilities distinctly we chose its power-law index (in the velocity space) as 1.5, and the low-velocity cutoff of 0.09 $c$. \begin{figure}[b] \begin{center} \epsfig{file=12616fg1.ps, width=8 cm} \end{center} \caption{The electron distribution functions in Model B at four different times: at the initial state (a), at $\omega_\mathrm{pe} t$ = 40 (b), at $\omega_\mathrm{pe} t$ = 100 (c), and $\omega_\mathrm{pe} t$ = 200 (d). Crosses correspond to $f(v_{z})$, dotted and dashed lines display $f(v_x)$ and $f(v_y)$, respectively. Note that $f(v_x)$ and $f(v_y)$ overlap. The single cross in the part a) at $v/c$ = 0.666 denotes the monoenergetic electron beam.} \label{fig1} \end{figure} To keep the total current zero in these models in the initial states, we shifted the background plasma electrons in the velocity space (i.e. we initiated the return current) according to the relation $v_\mathrm{d} = - v_\mathrm{b} n_\mathrm{b}/n_\mathrm{e}$, where $v_\mathrm{b}$ is the velocity of the electron beam, $n_\mathrm{b}$ and $n_\mathrm{e}$ are the beam and background plasma densities (for this type of initiation see Niemiec et al. 2008). The beam velocity was chosen to be $v_\mathrm{b}/c$ = 0.666 or 0.333 (in the $z$ direction), see Table 1. The ratio of the beam and plasma densities was taken as $n_\mathrm{b}/n_\mathrm{e}$ = 1/8 (Models A-L and N-O), and $n_\mathrm{b}/n_\mathrm{e}$ = 1/40 (Model M). \begin{figure*}[t] \begin{center} \epsfig{file=12616fg2.ps, width=14 cm} \end{center} \caption{The electron distribution functions at $\omega_\mathrm{pe} t$ = 200 as a function of the magnetic field in Models A-F with $\omega_\mathrm{ce}/\omega_\mathrm{pe}$ = 0.0, 0.1, 0.5, 0.7, 1.0, and 1.3, respectively. Notation is the same as in Fig.~\ref{fig1}.} \label{fig2} \end{figure*} Because computations in the PIC models are dimensionless, the results are valid for a broad range of plasma densities. 
The real time and spatial scales are given by specifying the plasma density. Table 2 summarizes the temporal and spatial scales (the interval of computations t = 200/$\omega_\mathrm{pe}$ and the Debye length) for plasma densities in the 10$^8$-10$^{11}$ cm$^{-3}$ range. The processes under study are very fast; the collisional processes are much slower, see the collision-free time (1/$\nu_0$) in Table 2. The numerical system size is small (45$\Delta$ $\times$ 45$\Delta$ $\times$ 600$\Delta$ = 75 $\lambda_\mathrm{D}$ $\times$ 75 $\lambda_\mathrm{D}$ $\times$ 1000 $\lambda_\mathrm{D}$; for a plasma density of e.g. $n_\mathrm{e}$ = 10$^{9}$ cm$^{-3}$ this gives 76 cm $\times$ 76 cm $\times$ 1010 cm). Since periodic boundary conditions are used, in reality the studied problem is infinite in space. The beam density and the corresponding beam energy flux are given by the chosen plasma density $n_\mathrm{e}$, the ratio $n_\mathrm{b}/n_\mathrm{e}$ (1/8 or 1/40), and the beam velocities (see Table 1). For example, for $n_\mathrm{e}$ = 10$^9$ cm$^{-3}$, $n_\mathrm{b}/n_\mathrm{e}$ = 1/8, and $v_\mathrm{b}$ = 0.666 $c$, the beam density is $n_\mathrm{b}$ = 1.25 $\times$ 10$^8$ cm$^{-3}$ and the beam energy flux is $E_\mathrm{flux}$ = 4.55 $\times$ 10$^{11}$ ergs s$^{-1}$ cm$^{-2}$. Because we want to study the influence of the magnetic field, in the models we consider several values of the ratio of the electron-cyclotron and electron-plasma frequencies ($\omega_\mathrm{ce}/\omega_\mathrm{pe}$ = 0.0, 0.1, 0.5, 0.7, 1.0, and 1.3 ~-- see Table 1). Note that in the space close to the flare acceleration site in the low corona the plasma density is relatively low. Thus, for the huge electron beam fluxes required for an explanation of the observed X-ray bursts, such high ratios of $n_\mathrm{b}/n_\mathrm{e}$ are needed. In all models, the periodic boundary conditions were used. 
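The entries of Table 2 and the quoted beam parameters follow from standard CGS expressions, $\omega_\mathrm{pe} = (4\pi n_\mathrm{e} e^2/m_\mathrm{e})^{1/2} \approx 5.64\times10^{4}\sqrt{n_\mathrm{e}}$ rad s$^{-1}$, $\lambda_\mathrm{D} \approx 6.9\,(T_\mathrm{e}/n_\mathrm{e})^{1/2}$ cm, and the non-relativistic kinetic energy flux $\frac{1}{2} n_\mathrm{b} m_\mathrm{e} v_\mathrm{b}^3$. A back-of-envelope check (the numerical coefficients are the standard plasma-formulary values; this is not part of the PIC code):

```python
import math

m_e, c = 9.109e-28, 3.0e10            # electron mass [g], speed of light [cm/s]

def plasma_scales(n_e, T_e=2.14e7):
    """Plasma frequency [rad/s] and electron Debye length [cm], CGS."""
    omega_pe = 5.64e4 * math.sqrt(n_e)       # (4 pi n_e e^2 / m_e)^(1/2)
    lambda_D = 6.9 * math.sqrt(T_e / n_e)    # electron Debye length
    return omega_pe, lambda_D

# Reproduce two rows of Table 2 for T_e = 21.4 MK.
for n_e, w_ref, ld_ref in [(1e8, 5.64e8, 3.19), (1e9, 1.78e9, 1.01)]:
    omega_pe, lambda_D = plasma_scales(n_e)
    assert abs(omega_pe / w_ref - 1) < 0.01
    assert abs(lambda_D / ld_ref - 1) < 0.01

# Beam example: n_e = 1e9 cm^-3, n_b/n_e = 1/8, v_b = 0.666 c.
n_e = 1e9
n_b = n_e / 8                          # = 1.25e8 cm^-3
v_b = 0.666 * c
v_d = -v_b * n_b / n_e                 # return-current drift: zero net current
E_flux = 0.5 * n_b * m_e * v_b**3      # kinetic energy flux [erg s^-1 cm^-2]
assert abs(n_b * v_b + n_e * v_d) < 1e-6 * n_b * v_b   # current neutrality
assert abs(E_flux / 4.55e11 - 1) < 0.02                # matches the quoted value
```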
\section{Results of 3-D PIC simulations} As an illustration of the time evolution of the electron distribution function in the beam-plasma system with the return current, Fig.~1 shows this evolution for Model B. As can be seen, due to the two-stream instability (Michailovskij 1975), a plateau of the distribution function $f(v_z)$ (in the beam propagation direction) forms on the beam side. Moreover, a small part of the electrons even increased their energy through their interaction with the generated Langmuir waves. Simultaneously, the distribution functions $f(v_x)$ and $f(v_y)$, i.e. the distribution functions in the directions perpendicular to that of the beam propagation, are strongly heated. This is due to the Weibel instability (Weibel 1959; see also Nishikawa et al. 2006). To demonstrate how the magnetic field influences the resulting electron distribution function, Fig.~2 presents the distribution functions for six values of the ratio of the electron-cyclotron and electron-plasma frequencies ($\omega_\mathrm{ce}/\omega_\mathrm{pe}$ = 0.0, 0.1, 0.5, 0.7, 1.0, and 1.3 -- Models A-F, Table 1). It is evident that with increasing ratio $\omega_\mathrm{ce}/\omega_\mathrm{pe}$, the role of the Weibel instability is more and more reduced, and the distribution functions in the directions perpendicular to the beam propagation, $f(v_x)$ and $f(v_y)$, are less heated. On the other hand, the formation of the return current becomes more and more one-dimensional and a more extended tail on the return current side is formed (compare Models A and F in Fig.~2; see also Karlick\'y et al. 2008; Karlick\'y 2009). In Fig.~3 the same results are expressed in terms of the electron distribution functions depending on the electron energies. 
Although this type of description is more common in flare research, the distribution functions in \begin{figure*}[t] \begin{center} \epsfig{file=12616fg3.ps, width=0.8\textwidth} \end{center} \caption{The electron distribution functions in electron energies (thick lines) at $\omega_\mathrm{pe} t$ = 200 as a function of the magnetic field in Models A-F with $\omega_\mathrm{ce}/\omega_\mathrm{pe}$ = 0.0, 0.1, 0.5, 0.7, 1.0, and 1.3, respectively. For comparison in each panel the initial electron plasma distribution is added (thinner lines).} \label{fig3} \end{figure*} \begin{figure*}[!t] \begin{center} \epsfig{file=12616fg4.ps, width=0.8\textwidth} \end{center} \caption{Time evolution of the ratio of the electron kinetic parallel and perpendicular energies $E_\mathrm{par}/E_\mathrm{perp}$ as a function of the magnetic field in Models A-F with $\omega_\mathrm{ce}/\omega_\mathrm{pe}$ = 0.0, 0.1, 0.5, 0.7, 1.0, and 1.3, respectively.} \label{fig4} \end{figure*} velocity space presented in Fig.~2 carry more information than those in Fig.~3 and thus they are more physically relevant in describing the studied processes. The ratio of the electron kinetic energies in the direction parallel and perpendicular to that of beam propagation, which expresses the ``anisotropy'' of the system, is shown in Fig.~4. The ratio of energies is defined as: \begin{eqnarray} \frac{E_\mathrm{par}}{E_\mathrm{perp}} = \frac{\sum_{i=1}^n \frac{1}{2} m_\mathrm{e} v_{iz}^2}{\sum_{i=1}^n \frac{1}{4} m_\mathrm{e} (v_{ix}^2 + v_{iy}^2)}, \end{eqnarray} where $n$ is the number of electrons in the whole numerical box. As can be seen in Fig.~4, the collisionless (wave-particle) processes very rapidly decrease the ``anisotropy'' on time scales shorter than $\omega_\mathrm{pe} t\approx 50$. This process is faster and more efficient for lower magnetic fields. 
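The ratio defined above can be evaluated directly from sampled particle velocities; a minimal sketch (note that the factor $1/4$ makes the denominator a per-degree-of-freedom energy, so an isotropic distribution gives a ratio of 1):

```python
def anisotropy_ratio(vx, vy, vz):
    """E_par / E_perp from particle velocity components; m_e cancels in
    the ratio. The 1/4 factor averages the two perpendicular degrees of
    freedom, so isotropy yields a ratio of exactly 1."""
    e_par = 0.5 * sum(v * v for v in vz)
    e_perp = 0.25 * sum(x * x + y * y for x, y in zip(vx, vy))
    return e_par / e_perp

# Isotropic spread in all three directions -> ratio 1.
assert abs(anisotropy_ratio([1.0, -1.0], [1.0, -1.0], [1.0, -1.0]) - 1.0) < 1e-12
# Beam-dominated case: most energy along z -> large ratio.
assert abs(anisotropy_ratio([0.1, -0.1], [0.1, -0.1], [1.0, -1.0]) - 100.0) < 1e-9
```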
While the ending ratio is $E_\mathrm{par}/E_\mathrm{perp} \approx$ 9 for Model F ($\omega_\mathrm{ce}/\omega_\mathrm{pe}$ = 1.3), in Model A ($\omega_\mathrm{ce}/\omega_\mathrm{pe}$ = 0.0) this ratio is only $E_\mathrm{par}/E_\mathrm{perp} \approx$ 2. In Fig.~5 a comparison of models with three different mass ratios ($m_\mathrm{p}/m_\mathrm{e}$ = 1, 16, 100) and two values of the ratio $\omega_\mathrm{ce}/\omega_\mathrm{pe}$ (0.0 and 1.3) is made. While in the cases with $m_\mathrm{p}/m_\mathrm{e}$ = 1 (the electron-positron plasma) the strong heating of the distribution functions $f(v_x)$ and $f(v_y)$ can be seen even for the strong magnetic field ($\omega_\mathrm{ce}/\omega_\mathrm{pe}$ = 1.3), for the proton-electron plasma the resulting $f(v_x)$ and $f(v_y)$ for $m_\mathrm{p}/m_\mathrm{e}$ = 16 and 100 do not differ significantly. Note that in the model with $m_\mathrm{p}/m_\mathrm{e}$ = 100 the proton skin depth is greater than the system sizes $L_x$ and $L_y$. \begin{figure}[t] \begin{center} \epsfig{file=12616fg5.ps, width=8 cm} \end{center} \caption{The electron distribution functions at $\omega_\mathrm{pe} t$ = 200 as a function of the mass ratio: $m_\mathrm{i}/m_\mathrm{e}$ = 1 -- two upper plots, $m_\mathrm{i}/m_\mathrm{e}$ = 16 -- two middle plots, and $m_\mathrm{i}/m_\mathrm{e}$ = 100 -- two bottom plots for two values of $\omega_\mathrm{ce}/\omega_\mathrm{pe}$ = 0.0 (left column) and 1.3 (right column). Notation is the same as in Fig.~\ref{fig1}.} \label{fig5} \end{figure} We also compared the evolution of the electron distribution functions in Models A and F with Models K and L, i.e. the models with a lower initial beam velocity ($v_\mathrm{b}/c$ = 0.333). We found that only the extent of the return-current tail in Model L is shorter than that in Model F. It is a natural consequence of the greater beam velocity in Model F than in Model L. Furthermore, it was found that Model M gave qualitatively the same results as Model A. In Figs. 
6 and 7 the electron distribution functions in Models N and O, i.e. the models with the power-law beam and with two different ratios of the electron-cyclotron and electron-plasma frequencies ($\omega_\mathrm{ce}/\omega_\mathrm{pe}$ = 0.0 and 1.3), are shown. Because these models are not subject to the bump-on-tail instability, there are no significant changes in the distribution $f(v_z)$ on the beam distribution side. On the other hand, the Weibel instability plays a role, especially in the case without the magnetic field (Model N). Once again, in Model N the plasma is heated in the direction perpendicular to that of beam propagation, whereas in Model O the return current is formed by the extended distribution tail. \begin{figure}[t] \begin{center} \epsfig{file=12616fg6.ps, width=8 cm} \end{center} \caption{The electron distribution functions in Model N with the power-law beam at four different times: at the initial state (a), at $\omega_\mathrm{pe} t$ = 40 (b), at $\omega_\mathrm{pe} t$ = 100 (c), and $\omega_\mathrm{pe} t$ = 200 (d). Notation is the same as in Fig.~\ref{fig1}.} \label{fig6} \end{figure} \begin{figure}[t] \begin{center} \epsfig{file=12616fg7.ps, width=8 cm} \end{center} \caption{The electron distribution functions in Model O with the power-law beam at four different times: at the initial state (a), at $\omega_\mathrm{pe} t$ = 40 (b), at $\omega_\mathrm{pe} t$ = 100 (c), and $\omega_\mathrm{pe} t$ = 200 (d). Notation is the same as in Fig.~\ref{fig1}.} \label{fig7} \end{figure} \section{Directivity of X-ray emission} \begin{figure*}[t] \begin{center} \epsfig{file=12616fg8a.eps, width=6 cm} \epsfig{file=12616fg8b.eps, width=6 cm} \epsfig{file=12616fg8c.eps, width=6 cm} \end{center} \caption{The X-ray directivity at several energies for $f(\vek{v})$ corresponding to Models A and F at $\omega_\mathrm{pe}t=200$ and the X-ray directivity in the initial state for Models A-F (the case of simple beaming). 
The horizontal solid line represents the isotropic case, the dashed vertical line denotes the viewing angle for a limb source.} \label{fig8} \end{figure*} \begin{figure*}[t] \begin{center} \epsfig{file=12616fg9a.eps, width=6 cm} \epsfig{file=12616fg9b.eps, width=6 cm} \end{center} \caption{The electron directivity at several energies for Models A and F at $\omega_\mathrm{pe}t=200$. The horizontal solid line represents the isotropic case, the dashed vertical line denotes the viewing angle for a limb source. The corresponding X-ray directivities are shown in Fig.~\ref{fig8}.} \label{fig9} \end{figure*} Knowing the electron distribution function $f(\vek{v})$, the instantaneous X-ray bremsstrahlung, i.e. the so-called thin-target emission (e.g. Brown et al. 2003), can be calculated. To account for the anisotropy of $f(\vek{v})$, we considered the angle-dependent electron-ion bremsstrahlung cross-section $Q(\epsilon,E,\Theta)$, differential in the electron energy $E$ and the solid angle of the incoming electron, where $\epsilon$ is the photon energy and $\Theta$ is the angle between the electron pre-collision velocity and the direction of the photon emission (Gluckstern \& Hull 1953). We used the expression for $Q(\epsilon, E,\Theta)$ given in the Appendix of Massone et al. (2004), which includes the Elwert (1939) Coulomb correction. The cross-section was evaluated using \verb|hsi_reg_ge_angle_cross.pro| available in the Solar Software. Figure~\ref{fig8} shows the X-ray directivity, i.e. the ratio of the angle-dependent spectrum $I(\epsilon, \theta)$ to the angle-integrated photon spectrum $I(\epsilon)=\frac{1}{4\pi}\int_{\Omega} I(\epsilon,\theta,\phi)\ \mathrm{d}\Omega$, where $\theta$ and $\phi$ are the polar and azimuthal angles, respectively, and $\Omega$ is the solid angle. The $z$-axis of the coordinate system is chosen to be along the beam propagation direction. 
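The normalization in this definition can be made concrete: with axial symmetry, the $\phi$-integral gives $I(\epsilon)=\frac{1}{2}\int_0^\pi I(\epsilon,\theta)\sin\theta\,\mathrm{d}\theta$, so isotropic emission yields a directivity of 1 at every angle. A numerical sketch of this averaging with toy emission profiles (the profiles are illustrative stand-ins, not the computed bremsstrahlung):

```python
import math

def directivity(I, n=4000):
    """For an axially symmetric source, the angle-averaged spectrum is
    I_mean = (1/4pi) Int I dOmega = (1/2) Int_0^pi I(theta) sin(theta) dtheta.
    Returns theta -> I(theta) / I_mean (midpoint rule for the integral)."""
    h = math.pi / n
    mean = 0.5 * sum(I((k + 0.5) * h) * math.sin((k + 0.5) * h) * h
                     for k in range(n))
    return lambda theta: I(theta) / mean

iso = directivity(lambda th: 3.7)                     # isotropic emission
assert abs(iso(1.0) - 1.0) < 1e-3                     # directivity 1 everywhere

beamed = directivity(lambda th: 1.0 + math.cos(th))   # forward-peaked toy profile
assert abs(beamed(0.0) - 2.0) < 1e-3                  # enhanced along the beam
assert abs(beamed(math.pi) - 0.0) < 1e-3              # suppressed backward
```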
Note that due to the axial symmetry of the problem around the $z$-axis, the photon spectrum is also independent of $\phi$, so $I(\epsilon, \theta)=I(\epsilon,\theta,\phi)$. Assuming that the beam propagates along the local normal towards the photosphere, Fig.~\ref{fig8} displays the variation of the X-ray directivity with the viewing angle: the cases $\cos\theta=1$ and $\cos\theta= -1$ correspond to the forward (towards the photosphere) and backward (towards an Earth observer when the X-ray source is at the disc centre) emission, while $\cos\theta=0$ denotes emission in the perpendicular direction (an X-ray source placed on the solar limb). The behaviour of the X-ray directivity is closely related to the corresponding electron distribution. Comparing Models A and F at the time $\omega_\mathrm{pe} t$ = 200 with Models A-F in the initial state (i.e. the case of simple beaming) in Fig.~\ref{fig8}, it can be seen that the directivity values, especially in the backward direction, move closer to 1 (the isotropic case). The global directivity therefore decreased during the evolution of the electron distribution. Furthermore, the directivity values at $\cos\theta=0$ in Model A are closer to the isotropic case than those in Model F. This is due to the strong heating of the plasma in the direction perpendicular to the beam propagation, caused by the Weibel instability in Model A (the case with zero magnetic field). We also defined the electron directivity $f(E,\theta)/f(E)$, analogously to the X-ray one. Models A and F at time $\omega_\mathrm{pe} t$ = 200 are presented in Fig.~\ref{fig9} and illustrate, from another viewpoint, the electron distribution characteristics discussed in Section~3 (Fig.~\ref{fig2}). Comparing these electron directivities, we can see that they differ more distinctly than the corresponding X-ray directivities (Fig.~\ref{fig8}). 
Such a difference is caused by the strong smoothing effect of the bremsstrahlung cross-section. We also calculated the X-ray directivities for Models K-L and N-O. They show the same trends as in the comparison of the plots in Fig.~8, but the changes are less pronounced owing to the smaller changes of the $f(\vek{v})$ anisotropy in these models. \section{Discussion and conclusions} Varying the ratio of the electron-cyclotron and electron-plasma frequencies $\omega_\mathrm{ce}/\omega_\mathrm{pe}$, we found that the magnetic field influences the evolution of the electron distribution function in the electron beam -- plasma system with a return current. While for small magnetic fields ($\omega_\mathrm{ce}/\omega_\mathrm{pe}$ $\leq$ 0.1) the electron distribution function becomes broad in the direction perpendicular to the beam propagation due to the Weibel instability and the return current is formed by the electrons in a broad, shifted bulk of the distribution, for stronger magnetic fields ($\omega_\mathrm{ce}/\omega_\mathrm{pe}$ $\geq$ 1) the distribution is more extended in the beam-propagation direction and the return current is formed by the electrons in an extended distribution tail. Assuming a magnetic field and electron density of $B$ = 100 G and $n_\mathrm{e}$ = 10$^{11}$ cm$^{-3}$, relevant to solar flares, the ratio of the electron-cyclotron and electron-plasma frequencies is $\omega_\mathrm{ce}/\omega_\mathrm{pe}$ = 0.1. In such conditions the Weibel instability plays a role, but it is reduced for a higher magnetic field. The evolution is also influenced by the two-stream instability. Besides the formation of the plateau of the electron distribution on the electron beam side, the simultaneously generated Langmuir waves even accelerate a small part of the electrons. 
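The ratio quoted above for flare conditions follows from $\omega_\mathrm{ce} = eB/(m_\mathrm{e}c) \approx 1.76\times10^{7}\,B$ rad s$^{-1}$ ($B$ in G) and $\omega_\mathrm{pe} \approx 5.64\times10^{4}\sqrt{n_\mathrm{e}}$ rad s$^{-1}$ (CGS); a one-line check:

```python
import math

B, n_e = 100.0, 1e11                      # magnetic field [G], density [cm^-3]
omega_ce = 1.76e7 * B                     # electron-cyclotron frequency, e B / (m_e c)
omega_pe = 5.64e4 * math.sqrt(n_e)        # electron-plasma frequency
assert abs(omega_ce / omega_pe - 0.1) < 0.005   # matches the quoted ratio ~ 0.1
```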
The collisionless processes cause a very fast decrease of the ratio of the electron kinetic parallel and perpendicular (with respect to the beam propagation direction) energies and lead to a decrease of the ``anisotropy'' of the system. Thus, the distribution function rapidly deviates from that with simple beaming. This can also be expressed by a decrease of the directivity of the associated X-ray bremsstrahlung emission. This fact agrees with the statement of Kontar \& Brown (2006) that conventional solar flare models with a simple downward beaming should be excluded. An additional aspect of the present study is that the inclusion and physical necessity of the return current in the beam -- plasma system resolves the problem of the number of electrons needed for the acceleration of a dense electron beam in the corona, where the density is relatively low. The return current simply carries the same amount of electrons as in the electron beam back to the acceleration site. However, the return current does not have the same distribution function as the initially injected beam. Variations of the X-ray directivity obtained in our models are of a level comparable to those in the electron beam propagation models by Langer \& Petrosian (1977, Fig.~1) and Leach \& Petrosian (1983, Fig.~4). However, there is an important difference between our model and the models by \cite{LangerPetrosian77} and \cite{LeachPetrosian83}. We treat only the collisionless processes, which were neglected in the previous studies. Due to the very short time scales in our computations, no effects of longer beam propagation or collisional scattering are included in the electron beam evolution. Therefore, the similar level of X-ray directivities suggests that a level of isotropisation of the electron distribution function comparable to that caused by the collisional processes can be produced by the studied wave-particle processes on much shorter time scales. 
Moreover, it means that these fast processes should not be neglected in X-ray directivity studies. Our study is not aimed at a direct comparison with observations, mainly due to the large difference between simulated and observationally available time scales. Nevertheless, the paper by Kontar \& Brown (2006) allows us to compare our simulations with their derived ratio of downward-to-upward electron distributions, $F_\mathrm{d}(E)/F_\mathrm{u}(E)$. The comparison reveals an agreement between the inferred $F_\mathrm{d}(E)/F_\mathrm{u}(E)$ and Model F within the confidence interval up to $\sim$~50~keV. At higher energies, our models predict a directivity higher than that obtained from observations. The results presented here could be appropriate for low-density parts of flare loops where the collisionless processes are dominant. Furthermore, one may consider them as input into simulations (on much longer time scales) which treat the propagation of the beam in an environment where Coulomb collisions play a significant role, such as the transition region and the chromosphere. Since all these processes (collisionless on long time scales, collisional and even ionization processes in the background plasma) lead to further isotropisation of the particle distribution, we speculate that the resulting electron distribution and X-ray directivity would be much closer to the isotropic case, as was recently found from X-ray observations (Kontar \& Brown, 2006). \begin{acknowledgements} All computations were performed on the parallel computer OCAS (Ond\v{r}ejov Cluster for Astrophysical Simulations, see http://wave.asu.cas.cz/ocas). This research was supported by the grant IAA300030701 (GA \v{C}R) and the research project AV0Z10030501 (Astronomical Institute). The authors thank the referee for constructive comments that improved the paper. \end{acknowledgements}
\section{Introduction} \label{sec1} Storage systems are under continuous cyber attacks like ransomware, which have become endemic. It is extremely important to protect these systems using the most advanced tools in key distribution. The Shamir secret sharing scheme~\cite{s}, adapted to the cyber challenges of the present, is one such advanced tool. The secret may consist of a large file, and the pieces of information distributed to the participants may change continuously in order to enhance security; hence, the whole process requires very fast encoding and decoding algorithms. The purpose of this paper is to present some methods achieving this goal. In the Shamir secret sharing scheme, a secret symbol $D$ is protected by distributing $n-1$ symbols among $n-1$ different participants. Given $k<n$, the secret symbol can be reconstructed from any $k$ of the $n-1$ shared symbols by an interpolation process. However, knowledge of any $k-1$ symbols gives no information about the secret $D$. It was observed~\cite{msa} that the Shamir scheme is equivalent to implementing an $[n,k]$ MDS code~\cite{ms} such that the secret is one of the data symbols (for example, the first symbol, a convenient assumption for implementation, as we will see in the next section), the remaining $k-1$ data symbols are random symbols, and the $n-k$ parity symbols are obtained by encoding the $k$ data symbols into the given MDS code. Then, the $n-1$ symbols excluding the secret symbol are distributed among $n-1$ participants. Any $k$ participants can then reconstruct the secret symbol by performing erasure correction, but fewer than $k$ participants are unable to do so. We will present a modification of the Shamir scheme in which the parity symbols are not assigned to participants, but are known by everybody. Only the $k-1$ data symbols excluding the secret symbol are assigned. 
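To make the classical scheme concrete before modifying it, the following is a minimal, hypothetical sketch of Shamir's original construction (polynomial interpolation over a small prime field); the field size, secret value and coefficients are illustrative choices of ours, not taken from this paper:

```python
# Textbook Shamir sketch: a degree-(k-1) polynomial over a prime field,
# with the secret as the constant term; any k shares recover it.
P = 257                                  # a small illustrative prime field
secret, coeffs = 42, [42, 7, 13]         # f(x) = 42 + 7x + 13x^2, so k = 3

def f(x):
    return (coeffs[0] + coeffs[1] * x + coeffs[2] * x * x) % P

shares = [(x, f(x)) for x in range(1, 6)]   # n - 1 = 5 participants

def reconstruct(pts):
    # Lagrange interpolation at x = 0 over GF(P).
    s = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

print(reconstruct(shares[:3]))   # 42: any k = 3 shares suffice
```

Any two shares, on the other hand, are consistent with every possible constant term, which is the information-theoretic secrecy property mentioned above.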
The method makes the encoding faster: the parities are independent of each other, no linear system needs to be solved, and they can even be computed in parallel. The decoding is as fast as that of the traditional Shamir scheme. The encoding and decoding can be made even faster by using array codes based on XOR operations, a feature that has been used in RAID-type architectures~\cite{Cor+04}. The paper is structured as follows: in Section~\ref{sec2}, we describe the modified Shamir scheme and discuss its advantages during encoding. In particular, we illustrate this modified scheme with RS codes. In Section~\ref{sec3}, we consider the advantages of using the modified Shamir scheme of Section~\ref{sec2} with array codes as opposed to RS codes. In particular, we illustrate the ideas with Generalized EVENODD codes~\cite{bbv}. In Section~\ref{sec4}, we address other possibilities, such as adapting the modified Shamir scheme to Generalized Row-Diagonal Parity (GRDP) codes~\cite{b,f} and identifying cases in which some participants report incorrect symbols. We divide those cases into two categories: one in which some participants are traitors and deliberately present the wrong symbol, and another in which a few errors are involuntary. In the second case, we propose mitigation by using array codes with local properties. \section{Modified Shamir secret sharing scheme} \label{sec2} Assume that $D_0$ is a secret symbol, that there are $k-1$ participants, and that we want this secret symbol to be recoverable whenever $k-r$ participants are present, where $r<k$, while a gathering of at most $k-r-1$ participants provides no information about $D_0$. In this section, we will assume that $D_0$ is a symbol in a finite field $GF(q)$~\cite{ms} (for simplicity, we assume that $q$ is a power of 2 throughout the paper, although this assumption is not necessary). Assume that the $k-1$ participants are each assigned a random symbol, say $D_i\in GF(q)$, $1\leq i\leq k-1$. 
Let ${\cal C}$ be a $[k,k-r]$ MDS code over $GF(q)$ (for example, an RS code), and $H$ an $r\times k$ parity-check matrix of ${\cal C}$. Construct a new $[k+r,k]$ code ${\cal C}'$ whose parity-check matrix is the $r\times (k+r)$ (systematic) matrix \begin{eqnarray} \label{eq1} H'&\mbox{$\,=\,$} &(H\,|\,I_r), \end{eqnarray} where $I_r$ is the $r\times r$ identity matrix. It is well known that if $r\leq 3$, then code ${\cal C}'$ is MDS~\cite{ms}, but this is not the case for $r\geq 4$. However, it does not matter if ${\cal C}'$ is not MDS: the result will be valid for any $r<k$. Indeed, assume that $k-r$ participants are present, say, those holding $$D_{j_0},D_{j_1},\ldots,D_{j_{k-r-1}},\quad {\rm where}\quad 1\leq j_0<j_1<\cdots <j_{k-r-1}\leq k-1,$$ while the symbols $D_0,D_{i_1},D_{i_2},\ldots,D_{i_{r-1}}$ are missing, where $$1\leq i_1<i_2<\cdots <i_{r-1}\leq k-1\quad {\rm and}\quad \{j_0,j_1,\ldots ,j_{k-r-1}\}\cup \{0,i_1,\ldots ,i_{r-1}\}\mbox{$\,=\,$} \{0,1,\ldots,k-1\}.$$ The missing $r$ symbols can be recovered if the corresponding $r\times r$ submatrix of the parity-check matrix $H'$ is invertible. Specifically, assume that $H'\mbox{$\,=\,$} (\underline{c}_0,\underline{c}_1,\ldots,\underline{c}_{k+r-1})$, where $\underline{c}_i$ is column $i$ of $H'$; hence, we have to show that the $r\times r$ submatrix $H_r$ of $H'$ given by $H_r\mbox{$\,=\,$} (\underline{c}_0,\underline{c}_{i_1},\underline{c}_{i_2},\ldots,\underline{c}_{i_{r-1}})$ is invertible. Since $i_{r-1}\leq k-1$, by~(\ref{eq1}), this submatrix is also a submatrix of $H$, and since $H$ is the parity-check matrix of an MDS code, then $H_r$ is invertible~\cite{ms}. In particular, symbol $D_0$, which corresponds to the secret, can be recovered. This is impossible if fewer than $k-r$ participants are present. Obtaining the parity symbols using the parity-check matrix $H'$ according to~(\ref{eq1}) is very simple, since matrix $H'$ is in systematic form. 
Specifically, $$(D_k,D_{k+1},\ldots,D_{k+r-1})\mbox{$\,=\,$} H (D_0,D_1,\ldots,D_{k-1})^T,$$ a process that is very fast for codes such as RS codes (it is equivalent to computing $r$ syndromes in an RS code). Encoding in a regular RS code is a special case of the decoding and thus involves solving a linear system of $r$ equations with $r$ unknowns. This is not the case for the systematic encoding of code ${\cal C}'$, since the parities are computed independently and no linear system needs to be solved; they may even be computed in parallel. The resulting code is not MDS when $r\geq 4$ and ${\cal C}$ is an RS code, but in our case it does not matter, since the erasures are in the data and, as we have seen, the system is always solvable. \begin{example} \label{ex1} {\rm Consider the finite field $GF(8)$ with primitive polynomial~\cite{ms} $1+x+x^3$. Let $k\mbox{$\,=\,$} 7$ and $r\mbox{$\,=\,$} 4$, so, according to the description above, let ${\cal C}$ be a $[7,3]$ RS code over $GF(8)$ with parity-check matrix $$ H\mbox{$\,=\,$}\left( \begin{array}{ccccccc} 1&1&1&1&1&1&1\\ 1&\alpha &\alpha^2&\alpha^3&\alpha^4&\alpha^5&\alpha^6\\ 1&\alpha^2 &\alpha^4&\alpha^6&\alpha^8&\alpha^{10}&\alpha^{12}\\ 1&\alpha^3 &\alpha^6&\alpha^9&\alpha^{12}&\alpha^{15}&\alpha^{18}\\ \end{array} \right). $$ Also, according to~(\ref{eq1}), ${\cal C}'$ is the $[11,7]$ code whose parity-check matrix $H'$ is $$ \left( \begin{array}{ccccccccccc} 1&1&1&1&1&1&1&1&0&0&0\\ 1&\alpha &\alpha^2&\alpha^3&\alpha^4&\alpha^5&\alpha^6&0&1&0&0\\ 1&\alpha^2 &\alpha^4&\alpha^6&\alpha^8&\alpha^{10}&\alpha^{12}&0&0&1&0\\ 1&\alpha^3 &\alpha^6&\alpha^9&\alpha^{12}&\alpha^{15}&\alpha^{18}&0&0&0&1\\ \end{array} \right). $$ Next, assume that the secret is the symbol $D_0\mbox{$\,=\,$}\alpha^2$ and the 6 participants are assigned the symbols $D_1\mbox{$\,=\,$}\alpha^3$, $D_2\mbox{$\,=\,$}\alpha$, $D_3\mbox{$\,=\,$} 1$, $D_4\mbox{$\,=\,$} 0$, $D_5\mbox{$\,=\,$}\alpha^6$ and $D_6\mbox{$\,=\,$}\alpha^3$. 
The first step is computing the parity symbols as $$(D_7,D_8,D_9,D_{10})\mbox{$\,=\,$} H\left( \begin{array}{c} \alpha^2\\\alpha^3\\\alpha \\ 1\\ 0\\ \alpha^6\\ \alpha^3 \end{array} \right)\mbox{$\,=\,$} (\alpha , 0,\alpha^5,\alpha^2), $$ i.e., $D_7\mbox{$\,=\,$}\alpha$, $D_8\mbox{$\,=\,$} 0$, $D_9\mbox{$\,=\,$}\alpha^5$ and $D_{10}\mbox{$\,=\,$}\alpha^2$. The parity symbols $D_7$, $D_8$, $D_9$ and $D_{10}$ are known by all the participants. Now, assume that we have $k-r\mbox{$\,=\,$} 3$ participants, say, $D_2$, $D_3$ and $D_5$, who want to compute $D_0$. The parity-check matrix $H'$ gives the following system of 4 equations with 4 unknowns: \begin{eqnarray*} D_0\oplus D_1\oplus D_4\oplus D_6&=&S_0\\ D_0\oplus \alpha D_1\oplus \alpha^4D_4\oplus \alpha^6D_6&=&S_1\\ D_0\oplus \alpha^2D_1\oplus \alpha^8D_4\oplus \alpha^{12}D_6&=&S_2\\ D_0\oplus \alpha^3D_1\oplus \alpha^{12}D_4\oplus \alpha^{18}D_6&=&S_3, \end{eqnarray*} where $S_0\mbox{$\,=\,$} D_2\oplus D_3\oplus D_5\oplus D_7\mbox{$\,=\,$}\alpha^2$, $S_1\mbox{$\,=\,$} \alpha^2D_2\oplus \alpha^3D_3\oplus\alpha^5D_5\oplus D_8\mbox{$\,=\,$} \alpha^4$, $S_2\mbox{$\,=\,$} \alpha^4D_2\oplus \alpha^6D_3\oplus \alpha^{10}D_5\oplus D_9\mbox{$\,=\,$} 1$ and $S_3\mbox{$\,=\,$} \alpha^6D_2\oplus \alpha^9D_3\oplus \alpha^{15}D_5\oplus D_{10}\mbox{$\,=\,$} 0$. We need to solve the system above only for the secret symbol $D_0$. 
For example, using Cramer's rule, we have \begin{eqnarray*} D_0&=&\frac{ \det\left( \begin{array}{cccc} \alpha^2&1&1&1\\ \alpha^4&\alpha & \alpha^4&\alpha^6\\ 1&\alpha^2 & \alpha^8&\alpha^{12}\\ 0&\alpha^3 & \alpha^{12}&\alpha^{18} \end{array} \right) }{\det\left( \begin{array}{cccc} 1&1&1&1\\ 1&\alpha & \alpha^4&\alpha^6\\ 1&\alpha^2 & \alpha^8&\alpha^{12}\\ 1&\alpha^3 & \alpha^{12}&\alpha^{18} \end{array} \right)}\;\mbox{$\,=\,$}\; \alpha^2, \end{eqnarray*} since the determinant of the numerator is $\alpha$, while the determinant of the denominator is a Vandermonde determinant, which is equal to $$(1\oplus \alpha) (1\oplus \alpha^4)(1\oplus \alpha^6)(\alpha\oplus \alpha^4)(\alpha\oplus \alpha^6)(\alpha^4\oplus \alpha^6)\mbox{$\,=\,$} \alpha^6.$$ We will next see a more efficient method for computing the erased symbol $D_0$. \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} } \end{example} Example~\ref{ex1} illustrates the simplicity of the encoding method: each parity symbol is the syndrome of the $k$ data symbols with respect to the parity-check matrix $H$. For RS codes, there is ample literature on how to efficiently compute the syndromes. Regarding the decoding, mainly the computation of the secret symbol $D_0$, we next describe a method similar to the one presented in~\cite{br}. Indeed, assume the conditions described above with the codes ${\cal C}$ and ${\cal C}'$, where the $r$ erased symbols are $D_0,D_{i_1},D_{i_2},\ldots,D_{i_{r-1}}$, the symbols corresponding to the $k-r$ participants that are present are $D_{j_0},D_{j_1},\ldots,D_{j_{k-r-1}}$, while the parity symbols are $D_{k},D_{k+1},\ldots,D_{k+r-1}$. 
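Before turning to the decoding, the encoding of Example~\ref{ex1} can be checked mechanically. The script below is an illustrative sketch of ours (not the paper's code): $GF(8)$ elements are stored as 3-bit integers in which bit $t$ is the coefficient of $x^t$, using the primitive polynomial $1+x+x^3$.

```python
# Build exponent/log tables for GF(8) with alpha a root of x^3 + x + 1.
EXP = [1] * 7                       # EXP[i] = alpha^i
for i in range(1, 7):
    v = EXP[i - 1] << 1             # multiply by alpha
    if v & 0b1000:
        v ^= 0b1011                 # reduce modulo x^3 + x + 1
    EXP[i] = v
LOG = {EXP[i]: i for i in range(7)}

def mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 7]

# Data of Example 1: D0 = alpha^2 (the secret), D1..D6 the assigned symbols.
D = [EXP[2], EXP[3], EXP[1], 1, 0, EXP[6], EXP[3]]

# Parity D_{7+u} is the u-th syndrome: row u of H is (alpha^{uj})_{j=0..6}.
parities = [0, 0, 0, 0]
for u in range(4):
    for j, d in enumerate(D):
        parities[u] ^= mul(EXP[(u * j) % 7], d)

print([LOG.get(x, '-inf') for x in parities])  # exponents: [1, '-inf', 5, 2]
```

The printed exponents correspond to $D_7=\alpha$, $D_8=0$, $D_9=\alpha^5$ and $D_{10}=\alpha^2$, agreeing with the values computed in the example.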
Moreover, we assume that ${\cal C}$ is a (shortened) RS code with parity-check matrix \begin{eqnarray} \label{eq2} H\mbox{$\,=\,$} \left( \begin{array}{ccccc} 1&1&1&\ldots &1\\ 1&\alpha &\alpha^2&\ldots &\alpha^{k-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&\alpha^{r-1} &\alpha^{2(r-1)}&\ldots &\alpha^{(k-1)(r-1)} \end{array} \right). \end{eqnarray} We are interested in computing only $D_0$. The syndrome $S_u$, $0\leq u\leq r-1$, is given by \begin{eqnarray} \nonumber S_u&=& \left(\bigoplus_{v=0}^{k-r-1}\alpha^{uj_v}D_{j_v}\right)\oplus D_{k+u}\\ \label{eq3} &=&\bigoplus_{s=0}^{r-1}\alpha^{ui_s}D_{i_s}. \end{eqnarray} Define the polynomials of degree at most $r-1$ \begin{eqnarray} \label{eq4} S(x)&=& S_0\oplus S_1x\oplus\cdots\oplus S_{r-1}x^{r-1} \end{eqnarray} and \begin{eqnarray} \nonumber G(x)&=&(x\oplus\alpha^{i_1})(x\oplus\alpha^{i_2})\cdots (x\oplus\alpha^{i_{r-1}})\\ \label{eq5} &=&g_{r-1}\oplus g_{r-2}x\oplus\cdots\oplus g_{0}x^{r-1}. \end{eqnarray} Notice that $G(1)\mbox{$\,=\,$}(1\oplus\alpha^{i_1})(1\oplus\alpha^{i_2})\cdots (1\oplus\alpha^{i_{r-1}})$ while $G(\alpha^{i_s})\mbox{$\,=\,$} 0$ for $1\leq s\leq r-1$. Then, assuming $i_0\mbox{$\,=\,$} 0$, by~(\ref{eq3}), (\ref{eq4}) and~(\ref{eq5}), we have \begin{eqnarray} \nonumber \bigoplus_{u=0}^{r-1}S_ug_{r-u-1}&=&\bigoplus_{u=0}^{r-1}\left( \bigoplus_{s=0}^{r-1}\alpha^{ui_s}D_{i_s}\right)g_{r-u-1}\\ \nonumber &=&\bigoplus_{s=0}^{r-1}D_{i_s}\left(\bigoplus_{u=0}^{r-1} g_{r-u-1}\alpha^{ui_s}\right)\\ \nonumber &=&\bigoplus_{s=0}^{r-1}D_{i_s}G(\alpha^{i_s})\\ \label{eq6} &=&D_{0}\prod_{s=1}^{r-1}(1\oplus\alpha^{i_s}). \end{eqnarray} Thus, by~(\ref{eq5}) and~(\ref{eq6}) \begin{eqnarray} \label{eq7} D_{0}&=&\frac{\bigoplus_{u=0}^{r-1}S_ug_{r-u-1}}{\prod_{s=1}^{r-1}(1\oplus\alpha^{i_s})} \mbox{$\,=\,$} \frac{\bigoplus_{u=0}^{r-1}S_ug_{r-u-1}}{\bigoplus_{u=0}^{r-1}g_u}, \end{eqnarray} since by~(\ref{eq5}), $G(1)\mbox{$\,=\,$}\prod_{s=1}^{r-1}(1\oplus\alpha^{i_s})\mbox{$\,=\,$}\bigoplus_{u=0}^{r-1}g_u$. 
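As an illustration, formula~(\ref{eq7}) can be implemented in a few lines. The sketch below is our own illustrative code, run on the data of Example~\ref{ex1}: participants $D_2$, $D_3$, $D_5$ plus the public parities recover the secret $D_0$.

```python
# GF(8) tables with primitive polynomial x^3 + x + 1 (bit t = coeff of x^t).
EXP = [1] * 7
for i in range(1, 7):
    v = EXP[i - 1] << 1
    if v & 0b1000:
        v ^= 0b1011
    EXP[i] = v
LOG = {EXP[i]: i for i in range(7)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def inv(a):
    return EXP[(7 - LOG[a]) % 7]

r = 4
D = {2: EXP[1], 3: 1, 5: EXP[6]}            # shares that are present
parities = [EXP[1], 0, EXP[5], EXP[2]]      # public parities D7..D10
erased = [0, 1, 4, 6]                       # secret and absent shares

# Syndromes S_u, as in (eq. 3).
S = []
for u in range(r):
    s = parities[u]
    for j, d in D.items():
        s ^= mul(EXP[(u * j) % 7], d)
    S.append(s)

# G(x) from (eq. 5); coeffs[t] is the coefficient of x^t, i.e. g_{r-1-t}.
coeffs = [1]
for i in erased[1:]:
    nxt = [0] * (len(coeffs) + 1)
    for t, c in enumerate(coeffs):
        nxt[t + 1] ^= c                     # c * x
        nxt[t] ^= mul(c, EXP[i % 7])        # c * alpha^i
    coeffs = nxt

num = 0
for u in range(r):
    num ^= mul(S[u], coeffs[u])             # sum of S_u g_{r-u-1}
den = 0
for c in coeffs:
    den ^= c                                # G(1)
secret = mul(num, inv(den))
print(LOG[secret])                          # 2, i.e. D0 = alpha^2
```

Note that only the single erasure $D_0$ is computed; the other erased symbols are never reconstructed.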
Both the numerator and the denominator in~(\ref{eq7}) can be easily computed. \begin{example} \label{ex2} {\rm Let us revisit Example~\ref{ex1} and find $D_0$ using~(\ref{eq7}). Using the syndromes obtained in Example~\ref{ex1} and~(\ref{eq4}), we obtain \begin{eqnarray*} S(x)&=& \alpha^2\oplus\alpha^4x\oplus x^2. \end{eqnarray*} From Example~\ref{ex1}, we have $i_1\mbox{$\,=\,$} 1$, $i_2\mbox{$\,=\,$} 4$ and $i_3\mbox{$\,=\,$} 6$, so, by~(\ref{eq5}), we obtain \begin{eqnarray*} G(x)&=& (x\oplus\alpha )(x\oplus\alpha^4)(x\oplus\alpha^6)\mbox{$\,=\,$}\alpha^4\oplus\alpha^6x\oplus x^2\oplus x^3, \end{eqnarray*} i.e., $g_3\mbox{$\,=\,$}\alpha^4$, $g_2\mbox{$\,=\,$}\alpha^6$, $g_1\mbox{$\,=\,$} 1$ and $g_0\mbox{$\,=\,$} 1$ in~(\ref{eq5}). Hence, the numerator in~(\ref{eq7}) is given by \begin{eqnarray*} g_3S_0\oplus g_2S_1\oplus g_1S_2\oplus g_0S_3\mbox{$\,=\,$} \alpha^6\oplus\alpha^3\oplus 1\mbox{$\,=\,$}\alpha^5, \end{eqnarray*} while the denominator equals \begin{eqnarray*} G(1)&=&\alpha^4\oplus\alpha^6\oplus 1\oplus 1\mbox{$\,=\,$} \alpha^3, \end{eqnarray*} so $D_0\mbox{$\,=\,$} \alpha^2$, which coincides with the value obtained in Example~\ref{ex1}. \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} } \end{example} \section{Use of array codes in the modified Shamir secret sharing scheme} \label{sec3} The purpose of using array codes in RAID-type architectures~\cite{bbbm,Cor+04} was to replace finite field operations, which usually require a look-up table, by XOR operations. In an application like the Shamir scheme described in Section~\ref{sec2}, if the size of the secret is very large, an RS code has to be applied multiple times. Array codes like the ones described in~\cite{b,bbbm,bbv,br,Cor+04,f} can have symbols (which correspond to columns in the array) of size $p-1$, where $p$ is a prime number. 
Certainly, $p$ can be as large as needed, while large symbols in an RS code require a large look-up table in the corresponding finite field and may not be practical. An example of an MDS array code is given by Blaum-Roth (BR) codes~\cite{br}. We are not the first to point out the usefulness of array codes in the context of the Shamir secret sharing scheme. For example, in~\cite{wd}, the use of BR codes is proposed. Given an odd prime number $p$, the codewords of a $[p,k]$ BR code consist of $(p-1)\times p$ arrays such that, when appending a zero row to such an array in the code, making it a $p\times p$ array, the lines of slope $i$ (with a toroidal topology), $0\leq i\leq p-k-1$, have even parity. For example, the first four rows of the $5\times 5$ array below are in a $[5,2]$ BR code: the horizontal lines (slope 0), the lines of slope 1 and the lines of slope 2 all have even parity. In the left array, we illustrate in bold the second line of slope 1, while in the right array, we illustrate in bold the third line of slope 2 (we assume that the individual symbols in the arrays are bits, although they can have any size; neither is it necessary that the number of columns be a prime number, since some columns may be assumed to be zero). $$\begin{array}{cc} \begin{array}{|c|c|c|c|c|} \hline 1& {\bf 0}& 1& 1& 1\\ \hline {\bf 0}& 0& 0& 1& 1\\ \hline 1& 0& 1& 0& {\bf 0}\\ \hline 1& 1& 0& {\bf 0}& 0\\ \hline\hline 0& 0& {\bf 0}& 0& 0\\ \hline \end{array} & \begin{array}{|c|c|c|c|c|} \hline 1& {\bf 0}& 1& 1& 1\\ \hline 0& 0& 0& {\bf 1}& 1\\ \hline {\bf 1}& 0& 1& 0& 0\\ \hline 1& 1& {\bf 0}& 0& 0\\ \hline\hline 0& 0& 0& 0& {\bf 0}\\ \hline \end{array} \end{array} $$ An equivalent algebraic definition of BR codes (and a very convenient one for decoding) is that they are RS codes over the ring of polynomials modulo $M_p(x)\mbox{$\,=\,$} 1\oplus x\oplus x^2\oplus\cdots\oplus x^{p-1}$~\cite{br}. 
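The defining parity constraints are easy to verify mechanically. The following sketch (ours, purely illustrative) checks the $5\times 5$ array above against the $[5,2]$ BR code definition:

```python
# Check that every toroidal line of slope 0, 1 or 2 has even parity.
p = 5
A = [[1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1],
     [1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0],
     [0, 0, 0, 0, 0]]              # the appended zero row

def line_parity(slope, i):
    # Line i of the given slope collects the entries A[(i - slope*j) mod p][j].
    return sum(A[(i - slope * j) % p][j] for j in range(p)) % 2

ok = all(line_parity(s, i) == 0 for s in range(3) for i in range(p))
print(ok)                                         # True
print(any(line_parity(3, i) for i in range(p)))   # True: slope 3 is unconstrained
```

The last line shows that lines of slope 3 are not constrained by the $[5,2]$ code, consistent with the range $0\leq i\leq p-k-1$ in the definition.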
The parity-check matrix $H$ of such a code (shortened to $k$ columns) is given by~(\ref{eq2}). Let us point out that the polynomial $M_p(x)$ may not be irreducible (for example, $M_5(x)$ is irreducible but $M_7(x)\mbox{$\,=\,$} (1\oplus x\oplus x^3)(1\oplus x^2\oplus x^3)$). However, the code is always MDS~\cite{br}. In order to apply our particular version of the Shamir scheme as described in Section~\ref{sec2}, we need to consider the parity-check matrix $H'$ as given by~(\ref{eq1}), while $H$ is given by~(\ref{eq2}). The resulting code is a generalization of the EVENODD code~\cite{bbbm} and has different names in the literature: generalized EVENODD code~\cite{bbv}, independent parity (IP) code~\cite{bbv} or Blaum-Bruck-Vardy code~\cite{hsl}. The MDS condition of these codes has been extensively studied for $r\geq 4$~\cite{bbv,hsl}, but for our purpose the modified Shamir scheme will always work for $r<k$, as in the case of the RS codes studied in Section~\ref{sec2}. Notice that for these generalized EVENODD codes, the horizontal lines always have even parity, while the lines of slope $i$, $1\leq i\leq r-1$, may have either even or odd parity: the special line of slope $i$ starting in the last bit of the first column (which is 0 and not written) determines the parity of all the other lines of slope $i$~\cite{bbv}. So, the encoding is very fast and convenient. 
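The encoding rule just described can be sketched in a few lines of code. The script below is our own illustrative sketch; the bit columns it uses are those of the example that follows, with $p\mbox{$\,=\,$} 5$, $k\mbox{$\,=\,$} 5$ data columns and $r\mbox{$\,=\,$} 3$ parities.

```python
# Generalized EVENODD encoding sketch. Each column has p - 1 = 4 bits plus
# an imaginary zero bit at the bottom.
p, k, r = 5, 5, 3
data = [[1, 0, 1, 1], [0, 0, 0, 1], [0, 1, 0, 0], [0, 1, 1, 1], [0, 0, 0, 1]]
cols = [c + [0] for c in data]     # append the imaginary zero bit

parity_cols = []
for u in range(r):
    # Adjuster S_u: parity of the special line of slope u through (p-1, 0);
    # for u = 0 that line is the (zero) imaginary row, so S_0 = 0.
    S = 0
    for j in range(k):
        S ^= cols[j][(p - 1 - u * j) % p]
    # Row i of parity column u: S_u xor the slope-u line through (i, 0).
    col = []
    for i in range(p - 1):
        bit = S
        for j in range(k):
            bit ^= cols[j][(i - u * j) % p]
        col.append(bit)
    parity_cols.append(col)

print(parity_cols)   # [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1]]
```

The three printed columns reproduce the parity columns of the example below; note that each parity column is computed from the data columns alone, independently of the others.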
\begin{example} \label{ex3} {\rm The following array corresponds to a generalized EVENODD code with $p\mbox{$\,=\,$} 5$ and 3 parities: $$\begin{array}{cc} \begin{array}{|c|c|c|c|c||c|c|c|} \hline 1& 0& 0& 0&\hspace{-.1cm} {\bf 0}\hspace{-.1cm}& 1& 0& 0\\ \hline 0& 0& 1&\hspace{-.1cm} {\bf 1}\hspace{-.1cm}& 0& 0& 1& 0\\ \hline 1& 0& \hspace{-.1cm}{\bf 0}\hspace{-.1cm}& 1& 0& 0& 0& 1\\ \hline 1&\hspace{-.1cm} {\bf 1}\hspace{-.1cm}& 0& 1& 1& 0& 0& 1\\ \hline\hline \hspace{-.1cm}{\bf 0}\hspace{-.1cm}& 0& 0& 0& 0& 0& 0& 0\\ \hline \end{array} & \begin{array}{|c|c|c|c|c||c|c|c|} \hline 1& 0& \hspace{-.1cm} {\bf 0}\hspace{-.1cm}& 0& 0& 1& 0& 0\\ \hline 0& 0& 1& 1& \hspace{-.1cm} {\bf 0}\hspace{-.1cm}& 0& 1& 0\\ \hline 1& \hspace{-.1cm} {\bf 0}\hspace{-.1cm}& 0& 1& 0& 0& 0& 1\\ \hline 1& 1& 0& \hspace{-.1cm} {\bf 1}\hspace{-.1cm}& 1& 0& 0& 1\\ \hline\hline \hspace{-.1cm} {\bf 0}\hspace{-.1cm}& 0& 0& 0& 0& 0& 0& 0\\ \hline \end{array} \end{array} $$ In the array on the left we illustrate in bold the entries of the special line of slope 1 starting at the bottom of the first column. It has an even number of ones, so all the diagonals must have even parity, which is determined by the second parity column (the first parity column corresponds to horizontal parity, so it has always even parity). Similarly, in the array on the right, we illustrate in bold the entries corresponding to the special line of slope 2 starting at the bottom of the first column. In this case, the number of 1s of this special line is odd, so all the lines of slope 2 must have odd parity, and this is reflected in the last parity column. Notice that the parities are independent of each other, so, for that reason, these codes are also called Independent Parity (IP) codes~\cite{bbv}. Denote the 8 columns in the array as $(\underline{c}_0,\underline{c}_1,\ldots,\underline{c}_7)$ and assume that the secret is $\underline{c}_0$, while the parities are $\underline{c}_5$, $\underline{c}_6$ and $\underline{c}_7$. 
The four data columns $\underline{c}_1$, $\underline{c}_2$, $\underline{c}_3$ and $\underline{c}_4$ are assigned to participants, while the three parity columns are known by everybody. Assume that $k-r\mbox{$\,=\,$} 2$ participants get together, say, $\underline{c}_2$ and $\underline{c}_4$. Then, symbols $\underline{c}_0$ (the secret), $\underline{c}_1$ and $\underline{c}_3$ are erased, and we have to use $\underline{c}_2$, $\underline{c}_4$, $\underline{c}_5$, $\underline{c}_6$ and $\underline{c}_7$ to retrieve them. We proceed similarly to the method described in Section~\ref{sec2} for RS codes. The first step is computing the syndromes using the parity-check matrix $H'$: \begin{eqnarray*} S_0&=&\underline{c}_2\oplus\underline{c}_4\oplus\underline{c}_5\\ S_1&=&\alpha^2\underline{c}_2\oplus\alpha^4\underline{c}_4\oplus\underline{c}_6\\ S_2&=&\alpha^4\underline{c}_2\oplus\alpha^8\underline{c}_4\oplus\underline{c}_7. \end{eqnarray*} Notice that as a function of $\alpha$, from the array above, $\underline{c}_2\mbox{$\,=\,$} \alpha$, $\underline{c}_4\mbox{$\,=\,$} \alpha^3$, $\underline{c}_5\mbox{$\,=\,$} 1$, $\underline{c}_6\mbox{$\,=\,$} \alpha$ and $\underline{c}_7\mbox{$\,=\,$}\alpha^2\oplus\alpha^3$, where $M_5(\alpha)\mbox{$\,=\,$} 0$. Hence, $\alpha^4\mbox{$\,=\,$} 1\oplus\alpha\oplus\alpha^2\oplus\alpha^3$ and $\alpha^5\mbox{$\,=\,$} 1$, and the syndromes can be easily calculated as $S_0\mbox{$\,=\,$} 1\oplus\alpha\oplus\alpha^3$, $S_1\mbox{$\,=\,$} \alpha\oplus\alpha^2\oplus\alpha^3$ and $S_2\mbox{$\,=\,$} 1\oplus\alpha\oplus\alpha^2\oplus\alpha^3\mbox{$\,=\,$}\alpha^4$. Thus, by~(\ref{eq4}), \begin{eqnarray*} S(x)&=&(1\oplus\alpha\oplus\alpha^3)\oplus (\alpha\oplus\alpha^2\oplus\alpha^3)x\oplus \alpha^4x^2. 
\end{eqnarray*} Next, using~(\ref{eq5}), since $i_1\mbox{$\,=\,$} 1$ and $i_2\mbox{$\,=\,$} 3$ \begin{eqnarray*} G(x)&=&(x\oplus\alpha)(x\oplus\alpha^3)\\ &=&\alpha^4\oplus (\alpha\oplus\alpha^3)x\oplus x^2, \end{eqnarray*} i.e., in~(\ref{eq5}), $g_2\mbox{$\,=\,$} \alpha^4$, $g_1\mbox{$\,=\,$} \alpha\oplus\alpha^3$ and $g_0\mbox{$\,=\,$} 1$. Next, we compute the left-hand side of~(\ref{eq6}), which gives \begin{eqnarray*} S_0g_2\oplus S_1g_1\oplus S_2g_0&=& \alpha\oplus\alpha^3. \end{eqnarray*} Using~(\ref{eq6}), we have to solve \begin{eqnarray*} (1\oplus\alpha)(1\oplus\alpha^3)D_0&=& \alpha\oplus\alpha^3. \end{eqnarray*} Let $(1\oplus\alpha^3)D_0\mbox{$\,=\,$} X$; then we have to solve first \begin{eqnarray} \label{eq72} (1\oplus\alpha)X&=& \alpha\oplus\alpha^3, \end{eqnarray} which can be done using the following lemma~\cite{br}: \begin{lemma} \label{l1} {\rm Assume that we want to solve $(1\oplus\alpha^j)X(\alpha)\mbox{$\,=\,$} Y(\alpha)$ over the ring of polynomials modulo $M_p(x)$, where $p$ is prime, $1\leq j\leq p-1$, $Y(\alpha)\mbox{$\,=\,$} \bigoplus_{i=0}^{p-2}y_i\alpha^i$ is given and $X(\alpha)\mbox{$\,=\,$} \bigoplus_{i=0}^{p-2}x_i\alpha^i$. Then, for $1\leq u\leq p-1$, \begin{eqnarray} \label{eq71} x_{\mbox{$\langle$} -uj-1\mbox{$\rangle$}}&=&x_{\mbox{$\langle$} -(u-1)j-1\mbox{$\rangle$}}\oplus\hat{y}_{\mbox{$\langle$} -(u-1)j-1\mbox{$\rangle$}}, \end{eqnarray} where given any integer $m$, $\mbox{$\langle$} m\mbox{$\rangle$}$ denotes the unique integer $v$, $0\leq v\leq p-1$, such that $v\equiv m\;(\bmod\;p)$ (for example, for $p=5$, $\mbox{$\langle$} -2\mbox{$\rangle$}=3$), $y_{p-1}\mbox{$\,=\,$} 0$ and $\hat{y}_j\mbox{$\,=\,$} y_j\oplus\bigoplus_{i=0}^{p-1}y_i$. \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} } \end{lemma} Applying recursion~(\ref{eq71}) in Lemma~\ref{l1} to~(\ref{eq72}), we obtain $X\mbox{$\,=\,$} \alpha\oplus\alpha^2$. 
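The recursion~(\ref{eq71}) is mechanical enough to sketch in code. The following is an illustrative implementation of ours (not taken from~\cite{br}); it reproduces the step just performed:

```python
# Solve (1 xor alpha^j) X = Y over the ring of polynomials modulo M_p(x).
# y is the list of coefficients y_0..y_{p-1}, using the representative with
# y_{p-1} = 0; the returned list x likewise satisfies x_{p-1} = 0.
def solve(j, y, p):
    total = 0
    for b in y:
        total ^= b
    yhat = [b ^ total for b in y]          # the adjusted coefficients y-hat
    x = [0] * p                            # starting point: x_{p-1} = 0
    for u in range(1, p):
        prev = (-(u - 1) * j - 1) % p
        x[(-u * j - 1) % p] = x[prev] ^ yhat[prev]
    return x

# The step just performed: (1 xor alpha) X = alpha xor alpha^3, with p = 5.
print(solve(1, [0, 1, 0, 1, 0], 5))        # [0, 1, 1, 0, 0] = alpha xor alpha^2
```

The same routine handles the remaining division of the example, with $j\mbox{$\,=\,$} 3$.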
Next, we have to solve $(1\oplus\alpha^3)D_0\mbox{$\,=\,$} \alpha\oplus\alpha^2$. Again, using recursion~(\ref{eq71}), we obtain $D_0\mbox{$\,=\,$} 1\oplus\alpha^2\oplus\alpha^3$, which coincides with column $\underline{c}_0$ of the array above, corresponding to the secret. \hskip 3pt \hbox{\vrule width4pt depth2pt height6pt} } \end{example} \section{Other possibilities and conclusions} \label{sec4} The Shamir scheme can have other implementations as well. Another array code that can be used is the GRDP code~\cite{b,Cor+04,f}. The GRDP code has a minimal number of encoding operations, so it is very convenient for the modified Shamir scheme described in Sections~\ref{sec2} and~\ref{sec3}. A $GRDP(p,r)$ code consists of the arrays $(a_{i,j})_{\substack{0\leq i\leq p-1 \\ 0\leq j\leq p+r-2}}$, such that \begin{eqnarray} \label{eq8} a_{i,p-1}&=&\bigoplus_{j=0}^{p-2}a_{i,j}\\ \label{eq9} a_{i,p-1+u}&=&\bigoplus_{j=0}^{p-1}a_{\mbox{$\langle$} i-uj\mbox{$\rangle$},j}\quad {\rm for}\quad 0\leq i\leq p-2\quad {\rm and}\quad 1\leq u\leq r-1, \end{eqnarray} where $\mbox{$\langle$} m\mbox{$\rangle$}$ was defined in Lemma~\ref{l1}. For example, according to~(\ref{eq8}) and~(\ref{eq9}), the following is an array in $GRDP(5,3)$: $$ \begin{array}{|c|c|c|c|c|c|c|} \hline 1&1&1&1&0&1&0\\\hline 1&0&0&1&0&1&0\\\hline 0&0&1&0&1&0&0\\\hline 1&0&0&0&1&0&1\\\hline\hline 0&0&0&0&0&0&0\\\hline \end{array} $$ Column $p-1+u$, $1\leq u\leq r-1$, contains the parity of the lines of slope $u$ in the $p\times p$ array, computed using the horizontal parity (hence, the parities are not independent as in the generalized EVENODD code described in Section~\ref{sec3}), and excluding the line starting at location $p-1$ of the first column. The encoding is simpler than that of the generalized EVENODD codes, since the lines of slope $u$, $1\leq u\leq r-1$, all have even parity and the parities of the lines starting at location $p-1$ of the first column do not need to be computed. 
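As with the previous codes, the defining constraints can be checked mechanically. The sketch below (ours, purely illustrative) verifies the $GRDP(5,3)$ array above: for $p\mbox{$\,=\,$} 5$, column 4 is the row parity of the data columns, and columns 5 and 6 hold the parities of the slope-1 and slope-2 lines, skipping the line through entry $(4,0)$.

```python
p, r = 5, 3
A = [[1, 1, 1, 1, 0, 1, 0],
     [1, 0, 0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1, 0, 0],
     [1, 0, 0, 0, 1, 0, 1],
     [0, 0, 0, 0, 0, 0, 0]]

# Horizontal parity, as in (eq. 8).
row_ok = True
for i in range(p):
    bit = 0
    for j in range(p - 1):
        bit ^= A[i][j]
    row_ok &= (bit == A[i][p - 1])

# Slope parities over columns 0..p-1 (data plus row parity); the line with
# index p-1 of each slope passes through (p-1, 0) and is the one left out.
slope_ok = True
for u in range(1, r):
    for i in range(p - 1):
        bit = 0
        for j in range(p):
            bit ^= A[(i - u * j) % p][j]
        slope_ok &= (bit == A[i][p - 1 + u])

print(row_ok and slope_ok)   # True
```

Because the slope parities reuse the row-parity column, they need fewer XOR operations than the independent-parity encoding, which is the minimality advantage mentioned above.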
From the above discussion, the modified Shamir scheme with GRDP codes is as follows: let the secret be a symbol of length $p-1$, where $p$ is prime; then take $p-2$ random symbols of length $p-1$, and encode these $p-1$ symbols into a $GRDP(p,r)$ code. The $p-2$ random symbols together with the horizontal parity symbol are distributed to $p-1$ participants, while the $r-1$ parity symbols corresponding to lines of slope $u$, $1\leq u\leq r-1$, are known by all the participants. Then, if any $p-r$ participants share their symbols, $r$ erasures can be corrected by the code. It has been shown~\cite{b,hhsl} that a GRDP code is MDS if and only if a corresponding generalized EVENODD code is also MDS. This property also helps with the decoding in the recovery of the secret: once the transformation is established, there are efficient methods to decode the generalized EVENODD code~\cite{b,hhsl} that can be used in our context. As pointed out in~\cite{msa}, since an MDS code can correct errors together with erasures, the Shamir scheme can handle cases in which a number of participants, for a variety of reasons, incorrectly report their symbols. Specifically, an $[n,k]$ MDS code can correct $s$ errors together with $t$ erasures as long as $2s+t\leq n-k$~\cite{ms}. In the Shamir scheme, this means that if $n-t$ participants get together and $s$ of them report the wrong symbol, then the secret can be recovered as long as $2s+t\leq n-k$. This approach also works for our modified Shamir scheme: in this case the $r$ parity symbols are known by everybody, the secret is the first symbol and the remaining $k-1$ symbols are distributed among participants. If $k-t$ participants share their symbols but $s$ of them provide the wrong symbol, then the secret can be recovered as long as $2s+t\leq r$. The decoding of RS codes containing both errors and erasures is well known. 
However, there is no known efficient decoding algorithm correcting more than three errors for array codes such as BR codes. For example, an efficient algorithm correcting one error and any number of erasures was presented in~\cite{br}. Efficient algorithms correcting two and three errors with any number of erasures can be found in~\cite{bbv}. Beyond that, the problem is open, though correction of up to three errors may be enough for most applications of the Shamir scheme. The inaccuracy of sharing symbols with other participants, as stated above, may be due to a few different reasons. One such cause involves a traitor among the participants, who may exploit the information from the other participants either to have sole access to the secret or to sabotage the entire enterprise. Provided that there is enough redundancy, the scheme for correcting errors and erasures prevents this scenario, allowing for the identification of up to $s$ traitors. However, such a scheme is costly if the participant providing erroneous information did not have nefarious purposes. The information may have been corrupted by a few erroneous or erased bits through normal noise during transmission of the symbol. Recently, an expansion of the BR, generalized EVENODD and GRDP codes was presented~\cite{bh}. In these expansions, the arrays have column size $p$ as opposed to $p-1$. The expanded codes continue to be MDS, but each column is in a cyclic code with generator polynomial $(1\oplus x)g(x)$, where $g(x)$ divides $1\oplus x^p$. If the cyclic code has minimum distance $d$, then $s$ bits in error together with $t$ erased bits can be corrected in every column as long as $2s+t\leq d-1$. Hence, a few errors and erasures can be corrected locally in each column of the array without invoking the other columns. The full power of the code is reserved for cases in which traitors deliberately misrepresent the column they had been assigned. 
A further generalization, extending the expanded BR codes to powers of prime numbers, was obtained in~\cite{wh}. We presented a decoding algorithm that obtains the secret through repeated recursions. There are more efficient decoding algorithms that reduce the number of recursions when all the erasures are to be obtained, mainly through the LU factorization of Vandermonde matrices~\cite{wh}. For our purpose, however, we only need to obtain one erasure, the one corresponding to the secret. The modified Shamir secret sharing scheme presented in this paper consists of assigning $k-1$ random data symbols to participants (excluding the secret), while the parity symbols are independent of each other and known by everyone. This method simplifies the encoding, since computing the parity does not require solving a system of linear equations and can be done in parallel, while the decoding remains the same. We studied this modified scheme with RS and with array codes. By using array codes with local properties, we showed that the cases in which participants report their symbols with involuntary errors can be mitigated.
\section{Introduction} One of the tasks of the Large Hadron Collider (LHC) is to discover new physics. Many models of new physics predict one or more charged Higgs bosons. For example, in the 2-Higgs Doublet Model (2HDM), there are two complex Higgs doublets. After Electro-Weak Symmetry Breaking (EWSB), there are five physical Higgs bosons: two neutral CP-even scalars ($h$ and $H$, with $m_h < m_H$, where $m_{h/H}$ is the light (heavy) Higgs boson mass), a CP-odd pseudoscalar ($A$) and two charged Higgs states ($H^{\pm}$). Thus, if a charged Higgs boson is found, it will be clear evidence of such new physics. In the 2HDM, a $Z_2$ symmetry is introduced into the Yukawa sector in order to avoid too large effects of Flavour Changing Neutral Currents (FCNCs). Depending on the $Z_2$ charge assignment of the Higgs doublets and the fermion flavours, four basic scenarios are defined, which are known as Types-I/II/X/Y of the 2HDM~\cite{Branco:2011iw}. In this note, we focus on $H^\pm\to W^\pm + 4\gamma$ signals in the 2HDM Type-I possibly emerging at the LHC from $H^\pm h$ production and decay, wherein both $h$ states decay via $h\to\gamma\gamma$~\cite{Wang:2021pxc}. We assume that the heavy scalar $H$ is the observed CP-even SM-like Higgs boson, whose properties are consistent with the LHC measurements, and that $h$ is lighter than 125 GeV. We study the properties of a light charged Higgs boson, i.e., one satisfying the condition $M_{H^{\pm}} < M_{t} + M_{b}$. The decay modes of this charged Higgs boson are dominated by $H^{\pm} \to W^{\pm} h$, while the dominant decay mode of the light neutral Higgs boson $h$ is $h \to \gamma \gamma$. This emerges over the parameter space where $h$ is almost fermiophobic, i.e., $\cos\alpha/\sin\beta \to 0$. In the Type-I scenario, the $hq\bar{q}$ coupling is proportional to $\cos\alpha/\sin\beta$.
Since $\cos\alpha=\sin\beta \sin(\beta-\alpha)+\cos\beta \cos(\beta-\alpha)$, when $\sin(\beta-\alpha)$ is negative and $\cos(\beta-\alpha)$ is positive, $\cos\alpha$ vanishes for a particular value of $\tan\beta$. Thus, in the corresponding (fermiophobic) limit, $h$ decay modes into SM fermions can be highly suppressed (so that, as intimated, $h\to\gamma\gamma$ becomes dominant). Specifically, we focus on the process $pp \rightarrow H^{\pm}h \rightarrow W^{\pm*}hh \rightarrow l^{\pm}\nu +4\gamma$, i.e., on the aforementioned $W+4\gamma$ final state. According to the parton level analysis in \cite{Arhrib:2017wmo}, this signature is essentially background free and can lead to a sizable significance when the integrated luminosity is large enough. In this note, we aim at confirming this result, obtained at the parton level, through a thorough detector level simulation. We start by performing a scan over the parameter space of the Type-I scenario aimed at maximising the yield of the discussed $W+4\gamma$ signature. Such a scan is based on the latest constraints from both theoretical consistency and experimental bounds by using 2HDMC~\cite{Eriksson:2009ws}, HiggsSignals-2.6.0~\cite{Bechtle:2020uwn} and HiggsBounds-5.9.0~\cite{Bechtle:2020pkv}. For the light $h$ state, we vary $m_{h}$ from 20 GeV to about 80 GeV, so that it is always lighter than 125 GeV and lighter than the $H^{\pm}$ mass too. The mass of the charged Higgs boson varies from 91 GeV to 170 GeV. In the decay $H^{\pm} \to W^{\pm}h$, the $W^{\pm}$ can be either on-shell or off-shell: if $M_{H^{\pm}} < M_{W^{\pm}} + M_{h}$, the charged Higgs boson decays to a soft lepton through an off-shell $W^{\pm}$ boson. Also notice that the parameter $\sin(\beta-\alpha)$ is constrained by the SM-like Higgs boson measurements at the LHC, requiring $-0.3<\sin(\beta-\alpha)<0$, while $\tan\beta$ is in the range [7,20]. Our results show that the cross section $\sigma$ of our signal process in these regions can reach its maximum value.
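This cancellation is easy to verify numerically. The following check (with an illustrative value of $\sin(\beta-\alpha)$, not one of our scan points) confirms that choosing $\tan\beta=-\cos(\beta-\alpha)/\sin(\beta-\alpha)$ drives $\cos\alpha$, and hence the Type-I $hq\bar{q}$ coupling $\cos\alpha/\sin\beta$, to zero:

```python
# Numeric check of the fermiophobic limit: for negative sin(beta-alpha),
# tan(beta) = -cos(beta-alpha)/sin(beta-alpha) makes cos(alpha) vanish.
# The value of sin(beta-alpha) below is illustrative only.
import math

sba = -0.1                     # sin(beta - alpha) < 0, as in our scan
cba = math.sqrt(1 - sba**2)    # cos(beta - alpha) > 0
tanb = -cba / sba              # tan(beta) enforcing cos(alpha) = 0

beta = math.atan(tanb)
alpha = beta - math.asin(sba)  # since beta - alpha = asin(sba)

# identity: cos(alpha) = sin(beta) sin(beta-alpha) + cos(beta) cos(beta-alpha)
lhs = math.cos(alpha)
rhs = math.sin(beta) * sba + math.cos(beta) * cba
coupling = math.cos(alpha) / math.sin(beta)  # Type-I h-fermion coupling
```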
In Fig.~\ref{f:mh_br}, we show the scatter plots in the plane of $\cos\alpha/\sin\beta$ (left) or $m_{h}$ (right, where the correlation to $\cos\alpha/\sin\beta$ is gauged in colour) and Branching Ratios ($BR$s) of the $h$ state, limited to the $b\bar b$ and $\gamma\gamma$ channels. In Fig.~\ref{f:mh_cx}, in the left (right) panel, we show scatter plots to demonstrate the dependence of $\sigma(pp\rightarrow H^{\pm}h)$ ($BR(H^{\pm} \rightarrow W^{\pm}h)$) on the parameters $m_h$ and $m_{H^\pm}$. From these plots, we selected 14 Benchmark Points (BPs), which are listed in Tab.~\ref{t_bp_para}. We also deliberately choose $m_h$ and $m_{H^{\pm}}$ to cover the parameter space as much as possible. Most of the BPs lie over the region $m_{h}>62.5$ GeV, i.e., above half of the SM-like Higgs boson mass. This is because, when the experimental constraints from HiggsBounds and HiggsSignals are taken into account, much of the parameter space with $m_{h}<62.5$ GeV is ruled out. For BP4--BP10, we implicitly require the $W^{\pm}$ boson to be off-shell.
\begin{figure}[ht] \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/hxx_decay.png} \end{center} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/hgaga_mh.png} \end{center} \end{minipage} \caption{Scatter plots in $m_h$ and $cos\alpha/sin\beta$ for $BR(h\rightarrow\gamma\gamma, b\bar b)$ are shown.}\label{f:mh_br} \end{figure} \begin{figure}[ht] \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/crosssection_Hh.png} \end{center} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/crosssection_Wh.png} \end{center} \end{minipage} \caption{Scatter plots in $m_h$ and $m_{H^{\pm}}$ for $pp\rightarrow H^{\pm}h$ and $BR(H^{\pm} \rightarrow W^{\pm}h)$ are shown.}\label{f:mh_cx} \end{figure} \begin{table} \begin{center} \begin{tabular}{| c| c| c| c| c| c| c| c| c|} \hline para&$M_h$&$M_A$&$M_{H^{\pm}}$&$sin(\beta-\alpha)$&$tan\beta$&$M_{12}^{2}$&$\sigma_{13}$ [fb]&$\sigma_{14}$ [fb]\\ \hline BP1&25.57&72.39&111.08&-0.074&13.58&11.97&101.40&112.55\\ \hline BP2&35.12&111.24&151.44&-0.075&13.32&16.66&167.75&186.20\\ \hline BP3&45.34&162.07&128.00&-0.136&7.57&80.96&10.76&11.93\\ \hline BP4&53.59&126.09&91.49&-0.127&8.00&51.16&27.05&29.88\\ \hline BP5&63.13&85.59&104.99&-0.056&18.09&190.24&179.31&198.61\\ \hline BP6&65.43&111.43&142.15&-0.087&11.52&325.36&174.49&194.30\\ \hline BP7&67.82&79.83&114.09&-0.111&8.94&326.32&177.72&197.23\\ \hline BP8&69.64&195.73&97.43&-0.111&8.86&357.10&196.04&217.18\\ \hline BP9&73.18&108.69&97.34&-0.122&8.06&594.64&193.56&214.57\\ \hline BP10&84.18&115.26&148.09&-0.067&14.82&473.88&61.92&68.98\\ \hline BP11&68.96&200.84&155.40&-0.112&8.64&531.46&62.02&69.14\\ \hline BP12&71.99&91.30&160.10&-0.104&9.74&472.22&58.99&65.80\\ \hline BP13&74.09&102.49&163.95&-0.092&10.56&503.74&55.58&62.04\\ \hline BP14&81.53&225.76&168.69&-0.101&9.75&501.29&51.85&57.91\\ \hline \end{tabular} \end{center} 
\caption{Input parameters and parton level cross sections (in fb) corresponding to the selected BPs are tabulated. All masses are in GeV and for all points $M_H$ = 125 GeV. Here, $\sigma_{13/14}$ denotes the cross section of $pp\to W+4\gamma$ at $\sqrt{s}=13/14$ TeV.}\label{t_bp_para} \end{table} Then, we present a detailed Monte Carlo (MC) analysis at the detector level to examine the feasibility of detecting the signal at center-of-mass energies of 13 TeV and 14 TeV at the LHC. The SM background processes include $W^\pm +4j0\gamma$, $W^\pm+3j1\gamma$, $W^\pm+2j2\gamma$, $W^\pm+1j3\gamma$ and $W^\pm+0j4\gamma$, where one or more jets have a certain probability to fake a photon. The MC events are generated by MadGraph5$\_{\rm aMC@NLO}$-2.8.2~\cite{Alwall:2014hca} with the following parton level cuts: \begin{eqnarray} |\eta(l,j,\gamma)|<2.5, \quad p_T(j,\gamma,l)>10 ~\text{GeV}, \quad \Delta R(l,j,\gamma)>0.5, \quad \textrm{MET} > 5 ~\text{GeV}. \end{eqnarray} The hadronisation and detector simulation are performed by using Pythia-8~\cite{Sjostrand:2006za} and Delphes-3.4.2~\cite{deFavereau:2013fsa}, where the anti-$k_t$ jet algorithm with jet parameter $\Delta R=0.5$ is adopted in FastJet~\cite{Cacciari:2011ma} and the fake photon rate for a jet is taken as $0.001$. After the event pre-selection, it is found that the background events are indeed negligible when compared with the possible signal events for our BPs. Thus, the significance for the $W+4\gamma$ signature only depends on the signal cross section and the integrated luminosity ${L}$, and can be computed by using the relation $\frac{N_S}{\sqrt{N_S+N_B}}\sim \sqrt{N_S} \sim \sqrt{\sigma \times {L}}$. The final significances for each BP are listed in Tab.~\ref{t_significance}.
\begin{table} \begin{center} \begin{tabular}{|c | c| c| c| c| c| c| c| } \hline BPs & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline $\sigma_{13 \text{TeV}}$ &12.1& 23.7& 6.7 & 9.4 & 27.4 & 32.6 & 29.2 \\ \hline $\sigma_{14 \text{TeV}}$& 12.5& 24.4 &7.0 & 9.8 & 28.4 & 33.9 & 30.3 \\ \hline BPs & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\ \hline $\sigma_{13 \text{TeV}}$ & 25.2 & 23.9 & 20.8 & 20.2 & 20.3 & 19.9 & 19.9 \\ \hline $\sigma_{14 \text{TeV}}$& 26.2 & 24.8 & 21.8 & 21.1 & 21.0 & 20.8 & 20.8 \\ \hline \end{tabular} \end{center} \caption{The significances for all 14 BPs at the LHC are tabulated, where the luminosity is assumed to be 300 fb$^{-1}$ at both $\sqrt{s} =13$ and $14$ TeV. } \label{t_significance} \end{table} By studying the acceptance efficiency $\epsilon_{\rm det}$ with the 14 BPs at the detector level, we have introduced two sets of cuts at the parton level: $p_{T}^{\gamma} > 10 ~\textrm{GeV}$ and $p_{T}^{\ell}>20 ~\textrm{GeV}$ plus $p_{T}^{\gamma} > 20 ~\textrm{GeV}$ and $p_{T}^{\ell}>10 ~\textrm{GeV}$. We provide the fiducial efficiencies for these two sets through the relation \begin{equation} \epsilon= \sigma(\textrm{cuts}) \times \epsilon_{\rm det} / \sigma(\textrm{no cuts}). \end{equation} We apply the efficiencies to the signal events and obtain the results shown in Fig.~\ref{f_effi}.
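The two relations above, the background-free significance estimate and the fiducial efficiency, can be expressed as simple helpers; the numerical inputs below are illustrative and do not reproduce the tabulated values, which fold in the actual detector efficiencies:

```python
# Sketch of the significance and fiducial-efficiency relations used above.
# With negligible background, N_S/sqrt(N_S+N_B) ~ sqrt(N_S) ~ sqrt(sigma*L).
import math

def n_events(sigma_fb, lumi_fb_inv, efficiency=1.0):
    # expected signal yield for cross section sigma (fb), luminosity L (fb^-1)
    return sigma_fb * lumi_fb_inv * efficiency

def significance(n_s, n_b=0.0):
    return n_s / math.sqrt(n_s + n_b)

def fiducial_efficiency(sigma_cuts_fb, eps_det, sigma_nocuts_fb):
    # epsilon = sigma(cuts) * eps_det / sigma(no cuts)
    return sigma_cuts_fb * eps_det / sigma_nocuts_fb
```

For a background-free search, `significance(n_events(sigma, 300.0, eps))` reduces to the square root of the expected signal yield.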
\begin{figure}[!t] \begin{center} \end{center} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/effi_detectorA_13.pdf} \end{center} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/effi_detectorB_13.pdf} \end{center} \end{minipage} \begin{center} \end{center} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/effi_detectorA_14.pdf} \end{center} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/effi_detectorB_14.pdf} \end{center} \end{minipage} \caption{Fiducial efficiency $\epsilon$ for detecting the signal via the $\ell\nu_\ell+4\gamma$ signature at detector level for the two sets of cuts provided, when $\sqrt{s}=13$ TeV (top) as well as $\sqrt{s}=14$ TeV (bottom) and $L=$ 300 fb$^{-1}$.}\label{f_effi} \end{figure} The predicted significances for both energies and the given luminosity over the ($M_{h}, M_{H^{\pm}}$) plane are shown in Fig.~\ref{f_sig_13}. The significances are obtained by convoluting the signal production cross sections with cuts and acceptance efficiencies at detector level. For each point on the ($M_{h}, M_{H^{\pm}}$) plane, $tan\beta$ and $sin(\beta-\alpha)$ are allowed to vary in order to find the maximum significance. The predicted significances are then mapped over the $(sin(\beta-\alpha),tan\beta)$ plane, which are shown in Fig.~\ref{f_para_sig_13}. \begin{figure}[t!] 
\begin{center} \end{center} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/significance_13_1.pdf} \end{center} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/significance_13_2.pdf} \end{center} \end{minipage} \begin{center} \end{center} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/significance_14_1.pdf} \end{center} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/significance_14_2.pdf} \end{center} \end{minipage} \caption{The predicted significances over the ($M_h, M_{H^\pm}$) plane for the two sets of cuts are shown, when $\sqrt{s}=13$ TeV (top) as well as $\sqrt{s}=14$ TeV (bottom) and $L=$ 300 fb$^{-1}$.}\label{f_sig_13} \end{figure} \begin{figure}[h!] \begin{center} \end{center} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/para_significance_13_1.pdf} \end{center} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/para_significance_13_2.pdf} \end{center} \end{minipage} \begin{center} \end{center} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/para_significance_14_1.pdf} \end{center} \end{minipage} \begin{minipage}{0.4\textwidth} \begin{center} \includegraphics[height=4cm]{figures/para_significance_14_2.pdf} \end{center} \end{minipage} \caption{The predicted significances over the ($\sin(\beta-\alpha), \tan\beta$) plane for the two sets of cuts are shown, when $\sqrt{s}=13$ TeV (top) as well as $\sqrt{s}=14$ TeV (bottom) and $L=$ 300 fb$^{-1}$.}\label{f_para_sig_13} \end{figure} In summary, we have examined the feasibility of the signature $W^{\pm}+4\gamma$, where the $W^\pm$ decays leptonically into electrons and muons, from the associated production of the charged Higgs boson and the lightest neutral Higgs state in the Type-I scenario of the 2HDM (i.e., via $pp \to
H^\pm h\to W^{\pm(*)}hh\to \ell\nu_\ell + 4\gamma$) at the LHC with a collision energy of $\sqrt{s}=13$ and $14~\text{TeV}$ and an integrated luminosity of $L= 300$ $\mathrm{fb}^{-1}$. We have performed an MC analysis at the detector level, including parton shower, hadronisation and heavy flavour decays. By doing so, we have confirmed a previous study done at the parton level. As we have shown, even after taking into account background events with both real and fake photons (from jets), the signal is essentially background free, so that significances only depend upon the signal cross sections and the integrated luminosity. We have also provided some reliable estimates for the detector efficiency and associated heat maps which can expedite an estimate of the signal significance over the relevant Type-I parameter space, which could be useful for LHC working groups. Finally, for more thorough experimental analyses, we have also chosen 14 BPs, where the $W^\pm$ boson can be either on-shell or off-shell, depending on the mass difference $M_{H^\pm}-M_{h}$. \vspace{6pt} \authorcontributions{All authors have contributed in equal parts to all aspect of this research.} \funding{The work of AA, RB, MK and BM is supported by the Moroccan Ministry of Higher Education and Scientific Research MESRSFC and CNRST Project PPR/2015/6. The work of SM is supported in part through the NExT Institute and the STFC Consolidated Grant No. ST/L000296/1. Y. W. is supported by the `Scientific Research Funding Project for Introduced High-level Talents' of the Inner Mongolia Normal University Grant No. 2019YJRC001 and the scientific research funding for introduced high-level talents of Inner Mongolia of China. Q.-S. Yan's work is supported by the Natural Science Foundation of China Grant No.
11875260.} \institutionalreview{Not applicable.} \informedconsent{Not applicable.} \dataavailability{Not applicable.} \conflictsofinterest{The authors declare no conflict of interest.} \end{paracol} \reftitle{References}
\section{The Arctic Ocean} The Arctic Ocean is subject to changes such as the shrinking sea ice extent and the additional thinning of the ice. Warm pulses of water from the Atlantic Ocean are circulating in the Arctic basins. Water from the Pacific Ocean transports internal energy (``oceanic heat'') through Bering Strait into the Arctic Ocean. Changes of sea ice extent, thickness and volume are observed by remote sensing and by drifting buoys with measuring equipment inside \cite{Perovich}. Monitoring the state of the water below the ice is technically difficult and requires international efforts in terms of oceanographic surveys with ice-breakers or multiyear observatories deployed at the seafloor. \begin{figure*}[t!] \includegraphics[width=\textwidth]{fgAwi_FsCtd.pdf}\\ \vspace{-0.6cm} \caption{Temperature field of the upper 800\,m of Fram Strait at a latitude of 78\,$^\circ$50'\,N. The data was acquired during a cruise with RV~Polarstern between June~24th and July~11th, 2011. White areas correspond to the sea floor. The bold perpendicular lines symbolize the longitudes of the mooring lines; a degree of longitude corresponds to roughly 20\,km here.} \label{FigCtd} \end{figure*} The approach to gain synoptic (i.e.\,large-scale) insights into the Arctic Ocean exchanges is to monitor the in- and outflow of water through Fram Strait. Although climatological scales are defined over periods of 30~years and more according to the World Meteorological Organization (\href{http://wmo.int}{WMO}), it seems possible to set up preliminary energy and volume balances: there are only four key gateways to the Arctic Ocean where water and sea ice leave or enter \cite{ChangingArctic}. The Fram Strait is the only deep-sea opening of the Arctic Ocean to the other oceans; all other gateways are shallower and/or narrower, see Tab.\,\ref{TabGateways}.
\begin{table}[b] \begin{tabular*}{\columnwidth}{@{\extracolsep\fill}p{3cm}ccc} \hline Arctic gateway & width & max.\,depth& net volume flux \\ \hline \hline Barents Sea Opening&350\,km& 500\,m& $2.0\pm2.9$\,Sv \\ \hline Fram Strait & 300\,km &2\,600\,m& $-2.0\pm2.7$\,Sv\\ \hline Bering Strait & 86\,km & 50\,m & $0.8\pm 0.2$\,Sv \\ \hline Nares Strait & 40\,km &220\,m &$0.57\pm 0.3$\,Sv\\ Lancaster Sound & 40\,km &125\,m &$-0.7$\,Sv \\ Cardigan Strait & 8\,km &180\,m &$-0.3$\,Sv \\ \hline \end{tabular*} ~\vspace{-0.3cm}% \caption{Gateways to the Arctic Ocean; the last three are the largest of the Canadian Archipelago. 1~Sverdrup (Sv) = $10^6 {\rm m}^3/{\rm s}$. Emptying Lake Constance (German: ``Bodensee'') within a day corresponds to 0.56\,Sv. Estimates from \cite{ChangingArctic}; positive values: inflow into the Arctic Ocean.} \label{TabGateways} \end{table} \section{The Fram Strait time series} The two main currents of Fram Strait are a northward inflow of water from the Atlantic (next to Spitsbergen) on the one hand and a southward outflow of polar freshwater (next to Greenland) on the other hand. The East Greenland Current (EGC) can be recognized as the (dark blue) cold patch above 200\,m at the left side of Fig.\,\ref{FigCtd}. The water there moves southwards (out of the paper plane) and carries sea ice out of the Arctic Ocean. The \emph{West Spitsbergen Current} (WSC) is the patch with temperatures above $2^\circ$\,C (red, yellow, light-green) on the right side of Fig.\,\ref{FigCtd}, east of $5^\circ$\,E. The WSC is a branch of the North Atlantic Current, which is the northern extension of the Gulf Stream. The WSC is the warmest water mass entering the Arctic Ocean. Since our measurements started in 1997, the temperature of the Atlantic water has increased at a rate of around $1^\circ$\,C per 10\,years \cite{ChangingArctic}.
Motivated by the importance of the Fram Strait for the Arctic climate, the Alfred-Wegener-Institute for Polar- and Marine Research (AWI) maintains a transect of moorings at $78^\circ50'$\,N in collaboration with the Norwegian Polar Institute. The transect ends at the shelf breaks at $6^\circ52'$\,W and $8^\circ40'$\,E respectively, see Fig.~\ref{FigCtd} and Fig.~\ref{FigBathy}. Instruments (from the companies Aanderaa and Falmouth Scientific Instruments) provide point measurements of horizontal velocities, flow direction, temperature and salinity. However, the salinity measurements of the current meters are not trustworthy since the conductivity cell is not pumped. This causes much slower response times than needed to cope with the high flow speeds in Fram Strait. Besides, the sensors fail frequently due to growing bio-films. Therefore, many gaps exist within the salinity time series. The additionally available data from a few upward-looking Acoustic Doppler current profilers (ADCP) near the surface are also neglected as their time series are far shorter. \begin{figure*}[htbp] \includegraphics[width=0.91\textwidth]{fgAwi_Fs_T_01d.pdf}\\ \vspace{-1cm}% \resizebox{\textwidth}{1cm}{\textcolor{white}{\rule{1in}{2in}}}\\ \vspace{-1.5cm} \includegraphics[width=0.91\textwidth]{fgAwi_Fs_T_07d.pdf}\\ \vspace{-1cm}% \resizebox{\textwidth}{1cm}{\textcolor{white}{\rule{1in}{2in}}}\\ \vspace{-1.5cm} \includegraphics[width=0.91\textwidth]{fgAwi_Fs_T_21d.pdf}\\ \vspace{-0.4cm} \caption{Temperature correlation coefficients between nearest and 2nd-nearest neighbors of all time series since 2003, for daily means (top), with a 7-day filter (middle) and with a 21-day filter (bottom). The bullets represent the average instrument position (depth and longitude); for this reason the mooring lines are not straight as sketched in Fig.~\ref{FigCtd}.} \label{FigTemp} \end{figure*} Data is recorded at 2 to 5 different depth levels every hour. The lowest level usually is located approximately 10\,m above the sea floor.
If applicable, there are instruments at 1500\,m depth, around 800\,m and 300\,m depth. The uppermost sensor is located around 60\,m below the sea surface to avoid losses of instruments by drifting ice keels. Recently, fish trawling became more frequent with the retreating ice edge; the fishing nets are a threat for our instruments. In-situ data from observational oceanography are assimilated by ocean modelers. A major product for them are time series of de-tided daily averages, where the effect of tides is excluded by filtering out semi-diurnal and diurnal tidal constituents. The analysis is based on the daily means, accepting that any additional post-processing of the measurements (e.g.\,depth-correction due to flow-induced dives or manual spike-removal) happens inside a black box. For our purposes these are only minor changes which do not change the main statements. \afterpage{\clearpage} \section{Constructing a correlation network} The method suggested here should work for any set of spatio-temporal time series. In Fram Strait, the network of point measurements has 66~nodes with three time series per node~$x$: the temperature~$T_x$, the meridional velocity~$v_x$ and the zonal velocity~$u_x$ at a given position~$x \in \{1,\ldots,66\}$. Each $x$ can be mapped onto a longitudinal position and a depth level. Every time series is constructed by merging snippets from the individual maintenance periods. \begin{figure*}[t!] \includegraphics[width=0.91\textwidth]{fgAwi_Fs_M_10d.pdf}\\ \vspace{-1cm}% \resizebox{\textwidth}{1cm}{\textcolor{white}{\rule{1in}{2in}}}\\ \vspace{-1.5cm} \includegraphics[width=0.91\textwidth]{fgAwi_Fs_Z_10d.pdf}\\ \vspace{-0.4cm} \caption{Correlation graph for meridional (top) and zonal (bottom) velocities with a 10-day filter for the complete time series since 2003.} \label{FigVel} \end{figure*} The Spearman correlation coefficient~$\rho$ between any nearest or second-nearest neighbors is calculated for the complete time series since 2003.
Working with daily averages, only correlations between observables at the same time instant are considered. Pairwise complete observables are taken into account, i.e.\,whenever both time series have a valid value at a given point in time. Looking at \emph{all} different pairs of correlations is a standard consistency check and therefore $66^2=4356$ correlation coefficients have to be evaluated. New is the concise and intuitive representation as a network of (2nd)~nearest neighbors, which adds up to only 205~correlation coefficients. One expects high correlations even between non-neighboring pairs of point measurements because the dynamics of the water is driven by the atmosphere and the ocean upstream. The new method is to use a multi-scale approach which does not look at the value of the correlation itself, but at how the correlation increases with increasing filter size. First, it has to be noted that correlations between neighboring instruments are high in the vertical and significantly lower in the horizontal direction, simply because of the different resolutions in both directions. Within a 2\,km mooring line there are up to five instruments while there is only a mooring line roughly every 20\,km. The higher vertical correlations (between velocities within the same mooring line) are usually interpreted as a sign of a prevailing barotropic structure of the flow, i.e.\,the flow is mainly steered by the bottom topography and does not depend on the depth. Moving towards longer filtering times, the zonal correlation within the (sub)surface widens towards the West. This is expected when the upper ocean is well-mixed by the atmosphere. In particular, mixing takes place where the ocean surface is only rarely covered by sea ice. The strong correlations at the bottom between $2$ and $4^\circ$\,East are most probably induced by the topographic steering of the Knippovich Ridge, whose northern foothills are located at these longitudes, see Fig.\,\ref{FigBathy}.
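The construction described above, Spearman's $\rho$ on pairwise-complete daily series with a link drawn whenever $\rho$ exceeds a threshold, can be sketched in a few lines of pure Python (gaps are encoded as None; the threshold and node lists are placeholders):

```python
# Sketch of the correlation-network construction: Spearman's rho on
# pairwise-complete time series, links above a threshold. Illustrative only.
def ranks(v):
    # rank transform with average ranks for ties
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2.0 + 1.0
        i = j + 1
    return r

def spearman(x, y):
    # pairwise-complete: keep only instants where both series are valid
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    xs, ys = ranks([p[0] for p in pairs]), ranks([p[1] for p in pairs])
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

def links(series, neighbor_pairs, threshold=0.6):
    # series: dict node -> time series; neighbor_pairs: (2nd) nearest neighbors
    return [(a, b) for a, b in neighbor_pairs
            if spearman(series[a], series[b]) > threshold]
```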
Besides, there are vertical correlations at the correct depth and longitudinal position where the Atlantic Water is expected to recirculate. The recirculation manifests itself in the form of coherent southward flow. A hint is the warm patch ($T \geq 4^\circ$C) between $\pm 3^\circ$E in Fig.\,\ref{FigCtd} (REC). The long-term averaged velocity field (not shown here) supports this assumption. The effect of the filtering time on the pattern is small for the correlation network constructed from the water velocities. Therefore, without loss of generality, a 10-day boxcar filter is applied in Fig.\,\ref{FigVel}. \section{Multiscale correlation analysis.} In oceanography, a water mass is ``an identifiable water volume with a common history which may be traced back to some source area''. Traditionally, water masses are defined by (potential) temperature and salinity, although additional properties may also be taken into account. Ideally, these are conservative, in the sense that they are only modified by mixing, and neither biological activity nor chemical degradation influences the parameter. In Fram Strait, no reliable salinity measurements for the whole time series exist. Thus, a pure temperature criterion has been introduced to identify different water masses: whenever the temperature is above a threshold of $1^\circ$C the water is assumed to originate in the Atlantic \cite{FramSeriesA}. Here, a statistical method to identify water masses is presented. This approach can also be helpful for other networks of time series. The correlation pattern between the 2nd and 3rd mooring line from the East is independent of the variable, see Fig.\,\ref{FigTemp} and \ref{FigVel}. It is exactly the location of the central core of the West Spitsbergen Current (WSC), which is consistent with the temperature criterion for identifying the water masses, compare Fig.\,\ref{FigCtd}.
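The multiscale effect, correlations growing with the filter width once independent high-frequency noise is averaged out, can be reproduced on synthetic data (here with a Pearson coefficient for brevity instead of the Spearman coefficient used above; all series are artificial):

```python
# Synthetic illustration of the multiscale step: boxcar-filter two daily
# series with increasing window lengths and compare their correlation.
import math, random

def boxcar(v, w):
    # moving average with window length w (valid part only)
    return [sum(v[i:i + w]) / w for i in range(len(v) - w + 1)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(1)
common = [math.sin(0.05 * t) for t in range(400)]   # shared slow signal
a = [c + random.gauss(0, 1.0) for c in common]      # plus independent
b = [c + random.gauss(0, 1.0) for c in common]      # daily noise
rho = {w: pearson(boxcar(a, w), boxcar(b, w)) for w in (1, 7, 21)}
# filtering suppresses the independent noise, so rho tends to grow with w
```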
It is even more interesting to identify regions where correlations are lower and the absence of links persists when changing the filtering time. All correlations grow slowly with increasing filtering intervals since the filtering operation itself introduces correlations. However, the effect of the growing filter window also provides oceanographic information. For instance, the missing vertical links in the Western part of the Strait (for all variables, but more pronounced for the temperature-correlation network) are a clear sign of the decoupling of the ocean from the atmosphere which occurs due to frequent sea ice cover in this region. For the water velocities in Fig.\,\ref{FigVel}, the otherwise prominent vertical correlations disappear between the 2nd and 3rd layer of instruments. Exactly in those regions the two boundary currents tend to have their vertical limits. \section{Conclusion} Traditional methods to classify flow regimes rely on numerically expensive interpolation schemes. The drawback is that values (for temperature and water velocity) have to be produced for the complete vertical field across the gateway. The interpolation is based on sparse measurements every 20\,km, which necessarily comes along with large error bars for transport estimates; see \cite{FemSect} for a reliable method based on finite elements. A new climate network has been introduced which allows one to identify water masses and other spatially consistent flow regimes using simple maths. The nodes are the instrument positions, given as longitude-depth coordinates. Links are established if the correlation between the time series of two nodes is bigger than a certain threshold. The new approach allows one to quantify regions of interest without any prior knowledge about oceanographic phenomena. Where links are absent within the correlation network, increasing the spatial resolution of the transect may help to gain precision. One crucial region is the longitudinal resolution at 6$\pm1^\circ$\,E.
The new method quantifies the common knowledge that the error of the flux estimates actually originates in the uncertainty of the western boundary of the West Spitsbergen Current (WSC). \section{Data sources} \label{SecDataSources} Eventually the complete data set will be freely available, the following table summarizes the state in June 2012. {\tiny \begin{longtable}{p{1cm}p{1.55cm}p{1.55cm}p{0.85cm}p{0.85cm}p{2.2cm}} \hline snippet & begin & end & $~^\circ W$ & $~^\circ N$ & doi:10.1594/ \\ \hline F1-7 & 2004-07-20 & 2005-08-17 & 8.664 & 78.832 & \href{http://dx.doi.org/10.1594/pangaea.778856}{pangaea.778856}\\ F1-8 & 2005-08-17 & 2006-08-21 & 8.664 & 78.832 & \href{http://dx.doi.org/10.1594/pangaea.778857}{pangaea.778857}\\ F1-10 & 2007-09-12 & 2008-07-05 & 8.674 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777564}{pangaea.777564}\\ F1-11 & 2008-07-07 & 2009-07-04 & 8.667 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777565}{pangaea.777565}\\ F2-6 & 2002-08-02 & 2003-09-22 & 8.330 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.780554}{pangaea.780554}\\ F2-8 & 2004-07-20 & 2005-08-17 & 8.327 & 78.836 & \href{http://dx.doi.org/10.1594/pangaea.778867}{pangaea.778867}\\ F2-9 & 2005-08-18 & 2006-08-21 & 8.327 & 78.836 & \href{http://dx.doi.org/10.1594/pangaea.778868}{pangaea.778868}\\ F2-10 & 2006-08-23 & 2007-09-11 & 8.327 & 78.836 & \href{http://dx.doi.org/10.1594/pangaea.777574}{pangaea.777574}\\ F2-11 & 2007-09-28 & 2008-07-05 & 8.329 & 78.835 & \href{http://dx.doi.org/10.1594/pangaea.777575}{pangaea.777575}\\ F2-12 & 2008-07-07 & 2009-07-04 & 8.333 & 78.840 & \href{http://dx.doi.org/10.1594/pangaea.777576}{pangaea.777576}\\ F2-13 & 2009-07-05 & 2010-07-01 & 8.334 & 78.840 & \href{http://dx.doi.org/10.1594/pangaea.777577}{pangaea.777577}\\ F2-14 & 2010-07-03 & 2011-06-25 & 8.334 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777578}{pangaea.777578}\\ F3-6 & 2003-09-26 & 2004-07-19 & 7.994 & 78.834 & 
\href{http://dx.doi.org/10.1594/pangaea.778908}{pangaea.778908}\\ F3-7 & 2004-07-20 & 2005-08-17 & 7.992 & 78.838 & \href{http://dx.doi.org/10.1594/pangaea.778869}{pangaea.778869}\\ F3-8 & 2005-08-18 & 2006-08-22 & 7.992 & 78.839 & \href{http://dx.doi.org/10.1594/pangaea.778870}{pangaea.778870}\\ F3-9 & 2006-08-23 & 2007-09-11 & 7.992 & 78.839 & \href{http://dx.doi.org/10.1594/pangaea.777583}{pangaea.777583}\\ F3-10 & 2007-09-28 & 2008-07-05 & 8.001 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777579}{pangaea.777579}\\ F3-11 & 2008-07-07 & 2009-07-03 & 8.000 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777580}{pangaea.777580}\\ F3-12 & 2009-07-05 & 2010-07-01 & 8.001 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777581}{pangaea.777581}\\ F3-13 & 2010-07-03 & 2011-06-25 & 8.000 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777582}{pangaea.777582}\\ F4-6 & 2003-09-26 & 2004-07-18 & 7.000 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.778871}{pangaea.778871}\\ F4-7 & 2004-07-18 & 2005-08-17 & 7.000 & 78.836 & \href{http://dx.doi.org/10.1594/pangaea.778872}{pangaea.778872}\\ F4-8 & 2005-08-18 & 2006-08-23 & 7.002 & 78.836 & \href{http://dx.doi.org/10.1594/pangaea.778873}{pangaea.778873}\\ F4-9 & 2006-08-27 & 2007-09-16 & 7.010 & 78.839 & \href{http://dx.doi.org/10.1594/pangaea.777588}{pangaea.777588}\\ F4-10 & 2007-09-12 & 2008-07-05 & 6.997 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777584}{pangaea.777584}\\ F4-11 & 2008-07-07 & 2009-07-03 & 7.000 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777585}{pangaea.777585}\\ F4-12 & 2009-07-04 & 2010-07-01 & 7.000 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777586}{pangaea.777586}\\ F4-13 & 2010-07-04 & 2011-06-19 & 7.006 & 78.835 & \href{http://dx.doi.org/10.1594/pangaea.777587}{pangaea.777587}\\ F5-6 & 2003-10-02 & 2004-07-17 & 6.002 & 78.832 & \href{http://dx.doi.org/10.1594/pangaea.778874}{pangaea.778874}\\ F5-7 & 2004-07-19 & 2005-08-18 & 6.000 & 78.832 & 
\href{http://dx.doi.org/10.1594/pangaea.778875}{pangaea.778875}\\ F5-8 & 2005-08-23 & 2006-08-28 & 6.003 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.778876}{pangaea.778876}\\ F5-9 & 2006-08-28 & 2007-09-11 & 6.003 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777593}{pangaea.777593}\\ F5-10 & 2007-09-12 & 2008-07-05 & 6.000 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777589}{pangaea.777589}\\ F5-11 & 2008-07-12 & 2009-07-03 & 6.000 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777590}{pangaea.777590}\\ F5-12 & 2009-07-05 & 2010-07-01 & 6.000 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777591}{pangaea.777591}\\ F5-13 & 2010-07-04 & 2011-06-25 & 6.000 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777592}{pangaea.777592}\\ F6-7 & 2003-09-27 & 2004-07-17 & 5.021 & 78.830 & \href{http://dx.doi.org/10.1594/pangaea.778877}{pangaea.778877}\\ F6-8 & 2004-07-19 & 2005-08-23 & 5.022 & 78.830 & \href{http://dx.doi.org/10.1594/pangaea.778878}{pangaea.778878}\\ F6-9 & 2005-08-26 & 2006-08-25 & 5.022 & 78.830 & \href{http://dx.doi.org/10.1594/pangaea.778879}{pangaea.778879}\\ F6-10 & 2006-08-28 & 2007-09-11 & 5.022 & 78.831 & \href{http://dx.doi.org/10.1594/pangaea.777594}{pangaea.777594}\\ F6-11 & 2007-09-13 & 2008-07-11 & 5.002 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777595}{pangaea.777595}\\ F6-12 & 2008-07-12 & 2009-07-03 & 5.004 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777596}{pangaea.777596}\\ F6-13 & 2009-07-06 & 2010-07-02 & 5.004 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777597}{pangaea.777597}\\ F6-14 & 2010-07-02 & 2011-06-27 & 5.000 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777598}{pangaea.777598}\\ F7-6 & 2004-07-22 & 2005-08-23 & 4.000 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.778880}{pangaea.778880}\\ F7-7 & 2005-08-26 & 2006-08-24 & 4.000 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.778881}{pangaea.778881}\\ F7-8 & 2006-08-29 & 2008-07-11 & 4.000 & 78.834 & 
\href{http://dx.doi.org/10.1594/pangaea.777600}{pangaea.777600}\\ F7-9 & 2008-07-15 & 2010-07-10 & 3.997 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777601}{pangaea.777601}\\ F7-10 & 2010-07-11 & 2011-06-27 & 4.000 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777599}{pangaea.777599}\\ F8-7 & 2004-07-22 & 2005-08-29 & 2.801 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.778882}{pangaea.778882}\\ F8-8 & 2005-08-31 & 2006-08-29 & 2.802 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.778883}{pangaea.778883}\\ F8-9 & 2006-08-29 & 2008-07-15 & 2.801 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777604}{pangaea.777604}\\ F8-10 & 2008-07-18 & 2010-07-10 & 2.805 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777602}{pangaea.777602}\\ F8-11 & 2010-07-11 & 2011-06-27 & 2.799 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777603}{pangaea.777603}\\ F9-6 & 2004-08-21 & 2005-08-29 & -0.812 & 78.839 & \href{http://dx.doi.org/10.1594/pangaea.778884}{pangaea.778884}\\ F9-7 & 2005-08-30 & 2006-09-07 & -0.811 & 78.838 & \href{http://dx.doi.org/10.1594/pangaea.778885}{pangaea.778885}\\ F9-8 & 2006-09-08 & 2008-07-20 & -0.811 & 78.839 & \href{http://dx.doi.org/10.1594/pangaea.777605}{pangaea.777605}\\ F9-9 & 2008-07-21 & 2010-07-18 & -0.782 & 78.837 & \href{http://dx.doi.org/10.1594/pangaea.777606}{pangaea.777606}\\ F10-6 & 2003-09-30 & 2004-08-21 & -2.001 & 78.832 & \href{http://dx.doi.org/10.1594/pangaea.778858}{pangaea.778858}\\ F10-7 & 2004-08-24 & 2005-08-29 & -2.001 & 78.831 & \href{http://dx.doi.org/10.1594/pangaea.778859}{pangaea.778859}\\ F10-9 & 2006-09-09 & 2008-07-21 & -2.050 & 78.821 & \href{http://dx.doi.org/10.1594/pangaea.777567}{pangaea.777567}\\ F10-10 & 2008-07-21 & 2010-07-20 & -2.115 & 78.828 & \href{http://dx.doi.org/10.1594/pangaea.777566}{pangaea.777566}\\ F15-2 & 2003-09-28 & 2004-08-23 & 1.611 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.778860}{pangaea.778860}\\ F15-3 & 2004-08-23 & 2005-08-29 & 1.610 & 78.833 & 
\href{http://dx.doi.org/10.1594/pangaea.778861}{pangaea.778861}\\ F15-4 & 2005-08-30 & 2006-08-30 & 1.610 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.778862}{pangaea.778862}\\ F15-5 & 2006-08-30 & 2007-09-24 & 1.609 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777568}{pangaea.777568}\\ F15-6 & 2007-09-24 & 2008-07-19 & 1.605 & 78.833 & \href{http://dx.doi.org/10.1594/pangaea.777569}{pangaea.777569}\\ F15-7 & 2008-07-18 & 2010-07-17 & 1.599 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.777570}{pangaea.777570}\\ F16-2 & 2003-09-29 & 2004-08-22 & 0.401 & 78.835 & \href{http://dx.doi.org/10.1594/pangaea.778863}{pangaea.778863}\\ F16-3 & 2004-08-22 & 2005-08-29 & 0.397 & 78.834 & \href{http://dx.doi.org/10.1594/pangaea.778864}{pangaea.778864}\\ F16-4 & 2005-08-30 & 2006-08-30 & 0.401 & 78.835 & \href{http://dx.doi.org/10.1594/pangaea.778865}{pangaea.778865}\\ F16-5 & 2006-08-31 & 2007-09-13 & 0.401 & 78.835 & \href{http://dx.doi.org/10.1594/pangaea.777571}{pangaea.777571}\\ F16-6 & 2007-09-13 & 2008-07-19 & 0.540 & 78.832 & \href{http://dx.doi.org/10.1594/pangaea.777572}{pangaea.777572}\\ F16-7 & 2008-07-20 & 2010-07-17 & 0.390 & 78.827 & \href{http://dx.doi.org/10.1594/pangaea.777573}{pangaea.777573}\\ \hline \end{longtable}} The first column is the identifier of the snippet: the first number encodes the mooring position, and the number after the dash is the deployment count.
\section{Introduction} \label{sec:Intr} Theoretical calculations within the lattice quantum chromodynamics (LQCD) approach show that the quark-gluon plasma (QGP) phase, which is chirally restored and color deconfined, is formed at critical conditions of high energy density ($\epsilon \sim 1$ GeV/fm$^3$) and temperature ($T \sim 154$ MeV) \cite{Karsch:2000kv,Pal:2003rz}. These conditions are expected to be reached in ultra-relativistic heavy-ion collisions, when a dense medium of quarks and gluons is produced, which then undergoes rapid collective expansion before the partons hadronize and subsequently decouple \cite{Pal:2003rz}. Many experiments are committed to discovering the QGP signals under the assumption of quick thermalization, such as the Large Hadron Collider (LHC) at CERN, Geneva, and the Relativistic Heavy Ion Collider (RHIC) at BNL, USA \cite{Hanus:2019fnc, Busza:2018rrf}. Regrettably, measurements are limited to final-state particles, the majority of which are hadrons \cite{Pal:2003rz}. The ensuing transverse and longitudinal expansion of the produced QGP is then studied with relativistic viscous hydrodynamics models \cite{DerradideSouza:2015kpt}. In this context the net entropy, which is essentially conserved from initial thermalization until freeze-out \cite{Hanus:2019fnc, Busza:2018rrf,DerradideSouza:2015kpt,Pal:2003rz}, is an intriguing quantity which may provide significant information on the matter produced during the early stages of the nuclear collision. By accurately accounting for the entropy production at the various phases of the collision, the observable particle multiplicities in the final state can be linked to system parameters, such as the initial temperature, at earlier stages of the nuclear collision \cite{Hanus:2019fnc}. Two alternative methods are typically used to calculate the net entropy created during the collisions \cite{Hanus:2019fnc}. 
Pal and Pratt pioneered the first approach, which calculates the entropy using the transverse momentum spectra of various particle species together with their source sizes as determined from HBT correlations \cite{Hanus:2019fnc,Pal:2003rz}. The original research analyzed experimental data from Au-Au collisions at $\sqrt{s_{NN}} = 130$ GeV and is still used to determine the entropy at various energies \cite{Hanus:2019fnc, Busza:2018rrf}. The second approach \cite{Sollfrank:1992ru,Muller:2005en} converts the multiplicity per rapidity $dN/dy$ produced in the final state to an entropy per rapidity $dS/dy$ using the entropy per hadron derived in a hadron resonance gas (HRG) model. Although estimating the entropy per rapidity $ds/dy$ from the measured multiplicity $dN_{\mathrm{ch}}/d\eta$ is reasonably simple, the conversion factor between the measured charged-particle multiplicity $dN_{\mathrm{ch}}/d\eta$ and the entropy per rapidity $dS/dy$ varies considerably in the literature \cite{Muller:2005en,Gubser:2008pc,Nonaka:2005vr,Berges:2017eom}. Hanus and Reygers \cite{Hanus:2019fnc} estimated the entropy production for various particles using the transverse momentum distributions from data produced in p-p and Pb-Pb collisions at $\sqrt{s} = 7$ and $2.76$ TeV, respectively. The present work aims to calculate the entropy per rapidity $ds/dy$ based on the transverse momentum distributions measured in Pb-Pb collisions at $\sqrt{s} = 2.76$ and $5.02$ TeV for the particles $\pi$, $k$, $p$, $\Lambda$, $\Omega$, and $\bar{\Sigma}$, and $\pi$, $k$, $p$, $\Lambda$, and $K_s^0$, respectively. For a precise estimation of the entropy per rapidity $ds/dy$, we fitted the transverse momentum distributions of the considered particles using two thermal approaches, the Tsallis distribution \cite{Cleymans:2016opp,Bhattacharyya:2017hdc} and the HRG model \cite{Yassin:2019xxl}. 
This enables us to cover a large range of the measured transverse particle momentum $p_{T}$, up to $\sim 20$ GeV/c, unlike Ref. \cite{Hanus:2019fnc}, which used a small range of $p_{T}$, $\sim 1.5$ GeV/c, and considered the particle's mass as a free parameter. Indeed, we use the exact value of the particle's mass for all considered particles, as given by the Particle Data Group (PDG) \cite{ParticleDataGroup:2018ovx}. The Tsallis distribution succeeds in describing a large range of $p_{T}$ but cannot describe the whole range of $p_T$; therefore we use the HRG model to fit the remaining part of the $p_{T}$ spectrum. We also estimated the entropy per rapidity $ds/dy$ for the considered particles using a very promising simulation model, the artificial neural network (ANN). Recently, several modelling methods based on soft computing have included the application of artificial intelligence (AI) techniques, and these evolutionary algorithms have a strong presence in this field \cite{ar16,ar17,ar18,ar19,ar20}. The behavior of p-p and Pb-Pb interactions is complicated due to the non-linear relationship between the interaction parameters and the output. Understanding the interactions of fundamental particles requires multi-part data analysis, and artificial intelligence techniques are vital here; they are useful as alternative approaches to conventional ones \cite{ar21}. In this sense, AI techniques such as the Artificial Neural Network (ANN), Genetic Algorithm (GA), Genetic Programming (GP), and Gene Expression Programming (GEP) can be used as alternative tools to simulate these interactions \cite{ar16, ar20}. The motivation for using an ANN approach is its learning algorithm, which learns the relationships between variables in data sets and then creates models to describe these relationships mathematically \cite{ar29}. There is a need for fresh computer-science methods to analyze the experimental data for a better understanding of various physics phenomena. 
ANNs have gained popularity in recent years as a powerful tool for establishing data correlations and have been successfully employed in materials science owing to their generalisation, noise tolerance, and fault tolerance \cite{Annintro}. This enables us to use them to estimate the entropy per rapidity $ds/dy$. The results are then confronted with available experimental data and with results obtained from previous calculations. The present paper is organised as follows. In Sec. (\ref{sec:models}), the approaches used are presented. Results and discussion are given in Sec. (\ref{sec:Res}). The conclusion is drawn in Sec. (\ref{sec:Cncls}). A mathematical description of the entropy per rapidity $d s/d y$ and of the transverse momentum spectra based on both the Tsallis distribution and the HRG model is given in the Appendices. \section{The Used Approaches} \label{sec:models} In this section, we discuss the methods used for estimating the entropy per rapidity $ds/dy$ for various particles. The first method relies on the measured particle spectra of the considered particles \cite{Hanus:2019fnc}. In the second, we use the ANN model, which may be considered a promising simulation model \cite{Annintro}. \subsection{Entropy per rapidity $ds/dy$ from the transverse momentum distribution and HBT correlations} \label{sec:spectra} Here, we review the estimation of the entropy per rapidity $d s/d y$ from the phase space distribution function calculated from particle spectra and femtoscopy \cite{Hanus:2019fnc}. The fundamentals of this approach are given in Refs. \cite{Hanus:2019fnc,Bertsch:1994qc,Ferenc:1999ku}. For any particle species at the thermal freeze-out stage, the entropy $S$ is obtained from the phase space distribution function $f (\vec{p} ,\vec{r})$ \cite{Hanus:2019fnc} \begin{equation} S=(2 J+1) \int \frac{d^{3} r d^{3} p}{(2 \pi)^{3}}[-f \ln f \pm(1 \pm f) \ln (1 \pm f)], \label{eq:1} \end{equation} where $+$ and $-$ stand for bosons and fermions, respectively. 
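As a numerical check, the integrand of Eq. (\ref{eq:1}) can be evaluated directly; the sketch below (with an illustrative occupation value $f$) also verifies the leading terms of the standard small-$f$ expansion of the bosonic term:

```python
import math

def entropy_integrand(f, bosons=True):
    """Integrand of Eq. (1): -f ln f + (1 + f) ln(1 + f) for bosons,
    and -f ln f - (1 - f) ln(1 - f) for fermions (0 < f < 1)."""
    s = 1.0 if bosons else -1.0
    return -f * math.log(f) + s * (1.0 + s * f) * math.log(1.0 + s * f)

def bose_term_series(f):
    """Leading terms of (1 + f) ln(1 + f) = f + f^2/2 - f^3/6 + f^4/12 - ..."""
    return f + f**2 / 2 - f**3 / 6 + f**4 / 12
```

For fermions at $f = 1/2$ the integrand reduces to $\ln 2$, and for small $f$ the bosonic term agrees with its series to the order kept.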
The quantity $2 J + 1$ represents the spin degeneracy of the particles. The net entropy produced in the nuclear collisions is then obtained by summing the entropies of all the created hadron species. In Eq. (\ref{eq:1}), the second term of the integrand can be expanded as a series \begin{equation} \pm(1 \pm f) \ln (1 \pm f)=f \pm \frac{f^{2}}{2}-\frac{f^{3}}{6} \pm \frac{f^{4}}{12}+\ldots. \label{eq:2} \end{equation} The source radii, observed from HBT two-particle correlations \cite{Lisa:2005dd} in three dimensions, are determined in a longitudinally co-moving system (LCMS), in which the pair momentum component along the beam direction vanishes. In the LCMS, the source's density function is parametrized by a three-dimensional Gaussian, allowing the phase space distribution function to be represented as \cite{Hanus:2019fnc} \begin{equation} f(\vec{p}, \vec{r})=\mathcal{F}(\vec{p}) \exp \left(-\frac{x_{\mathrm{out}}^{2}}{2 R_{\mathrm{out}}^{2}}-\frac{x_{\text {side }}^{2}}{2 R_{\text {side }}^{2}}-\frac{x_{\text {long }}^{2}}{2 R_{\text {long }}^{2}}\right), \label{eq:3} \end{equation} where $\mathcal{F}(\vec{p})$, the maximum phase space density, is given by \cite{Hanus:2019fnc,Pal:2003rz} \begin{equation} \mathcal{F}(\vec{p})=\frac{(2 \pi)^{3 / 2}}{2 J+1} \frac{d^{3} N}{d^{3} p} \frac{1}{R_{\text {out }} R_{\text {side }} R_{\text {long }}}. \label{eq:4} \end{equation} In Eqs. (\ref{eq:3}) and (\ref{eq:4}), the source radii are functions of the momentum $\vec{p}$. Due to limited statistics, in many circumstances only the one-dimensional source radius $R_{\mathrm{inv}}$, which is computed in the pair rest frame (PRF), can be obtained experimentally. The relationship between the PRF's $R_{\mathrm{inv}}$ and the three-dimensional source radii in the LCMS is taken from Refs. 
\cite{Hanus:2019fnc,Pal:2003rz} \begin{equation} R_{\mathrm{inv}}^{3} \approx \gamma R_{\mathrm{out}} R_{\mathrm{side}} R_{\mathrm{long}}, \label{eq:5} \end{equation} where $\gamma=m_{\mathrm{T}} / m \equiv \sqrt{m^{2}+p_{\mathrm{T}}^{2}} / m$. In Refs. \cite{Hanus:2019fnc,ALICE:2015tra} the ALICE collaboration published values for both $R_{\mathrm{inv}}$ and $R_{\mathrm{out}}, R_{\text {side }}, R_{\text {long }}$ determined from two-pion correlations in Pb-Pb nuclear collisions at $\sqrt{s_{\mathrm{NN}}}=2.76 \mathrm{TeV}$. From these data Hanus et al. generalized Eq. (\ref{eq:5}) as \cite{Hanus:2019fnc} \begin{equation} R_{\text {inv }}^{3} \approx h(\gamma) R_{\text {out }} R_{\text {side }} R_{\text {long }}, \label{eq:6} \end{equation} with $h(\gamma)=\alpha \gamma^{\beta}$. From Eq. (\ref{eq:5}), the entropy per rapidity $d S/ d y$ can be written as \cite{Hanus:2019fnc} \begin{equation} \begin{aligned} \frac{d S}{d y}=& \int d p_{T} 2 \pi p_{T} E \frac{d^{3} N}{d^{3} p}\left(\frac{5}{2}-\ln \mathcal{F} \pm \frac{\mathcal{F}}{2^{5 / 2}}\right.\\ &\left.-\frac{\mathcal{F}^{2}}{2 \times 3^{5 / 2}} \pm \frac{\mathcal{F}^{3}}{3 \times 4^{5 / 2}}\right), \label{eq:7} \end{aligned} \end{equation} where $\mathcal{F}$, the phase space distribution function, is given by \cite{Hanus:2019fnc} \begin{equation} \mathcal{F}=\frac{1}{m} \frac{(2 \pi)^{3 / 2}}{2 J+1} \frac{1}{R_{\mathrm{inv}}^{3}\left(m_{\mathrm{T}}\right)} E \frac{d^{3} N}{d^{3} p}. \label{eq:8} \end{equation} For a better description of central $\mathrm{Pb}-\mathrm{Pb}$ collisions, Hanus et al. approximate the expression $(1+f) \ln (1+f)$ in Eq. (\ref{eq:1}) with numerical coefficients $a_{i}$, valid also for high-multiplicity values of $\mathcal{F}$, as \cite{Hanus:2019fnc} \begin{equation} \frac{d S}{d y}=\int d p_{T} 2 \pi p_{T} E \frac{d^{3} N}{d^{3} p}\left(\frac{5}{2}-\ln \mathcal{F}+\sum_{i=0}^{7} a_{i} \mathcal{F}^{i}\right). 
\label{eq:9} \end{equation} To calculate the entropy per rapidity $d S/ d y$ for the considered hadrons, the measured transverse momentum spectra $E \frac{d^{3} N}{d^{3} p}$ have to be extrapolated to $p_{T} = 0$. To do this, we confronted the $p_{T}$ momentum spectra with two different fitting functions derived from two well-known models, the Tsallis distribution and the HRG model. A mathematical description of the transverse momentum distribution $E \frac{d^{3} N}{d^{3} p}$ in the HRG model and in the Tsallis distribution is given in Appendices (\ref{sec:(append:hrg)}) and (\ref{sec:(append:tsalis)}), respectively. \subsection{Artificial Neural Network (ANN) Model} \label{sec:ann} The artificial neural network model \cite{ar1,ar2,ar3,ar4,ar5,ar40,ar41} is a machine learning technique that is very popular in the high-energy physics community; in the last decade important physics results have been obtained utilizing this model. The neuron is the essential processing element of the artificial neural network model (see Fig. \ref{fig:oneeai}): it forms a weighted sum of its inputs and passes the outcome to the output through a non-linear transfer function. The transfer function can also be linear, in which case the weighted sum is sent directly to the output. Eqs. (\ref{eqa:1}) and (\ref{eqa:2}) represent, respectively, the weighted summation of the inputs and the transfer function applied to obtain the output of the neuron: \begin{equation} \sigma=\sum_{n} x_{n} w_{n}, \label{eqa:1} \end{equation} \begin{equation} Y=f(\sigma). \label{eqa:2} \end{equation} \begin{figure}[htb] \includegraphics[width=8cm]{Figure1.png} \caption{Schematic diagram of a basic formal neuron.} \label{fig:oneeai} \end{figure} The most widely used type of ANN is the multilayer feed-forward neural network based on the backpropagation (BP) learning algorithm, which is the most powerful algorithm for multilayer networks, as shown by Alsmadi et al. \cite{ar6}. 
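A minimal sketch of the formal neuron of Eqs. (\ref{eqa:1})--(\ref{eqa:2}); the choice of transfer function here is illustrative only:

```python
import math

def neuron(inputs, weights, transfer=math.tanh):
    """Formal neuron: the weighted sum of the inputs (Eq. eqa:1) is
    passed through a transfer function to give the output (Eq. eqa:2)."""
    sigma = sum(x * w for x, w in zip(inputs, weights))
    return transfer(sigma)
```

With a linear (identity) transfer function the neuron simply returns the weighted sum, as described in the text.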
The multilayer feed-forward artificial neural network consists of several layers (see Fig. \ref{fig:twooai}). The first layer (input layer) receives the experimental data, which are then processed and propagated to the output layer through one or more hidden layers. \begin{figure}[htb] \includegraphics[width=8cm]{figure2.png} \caption{Representative architecture of a feed-forward artificial neural network.} \label{fig:twooai} \end{figure} The number of hidden layers and the number of neurons required in each hidden layer are the key choices in designing a network. The optimal numbers depend on many factors, such as the number of inputs and outputs of the network, the noise in the target data, the complexity of the error function, the network design, and the network training algorithm. In most cases, it is practically impossible to determine the optimal number of hidden layers and the number of neurons in each hidden layer without training the network. Training consists of continually adjusting the weights of the connection links so that input patterns are mapped to the required output components of the network. A block diagram of the back-propagation network is shown in Fig. \ref{fig:threeeai}. The aim of the training is to minimize the error, which represents the difference between the experimental data $(t)$ and the simulation results $(y)$, in order to achieve the best possible result. \begin{figure}[htb] \includegraphics[width=8cm]{figure3.png} \caption{Back-propagation network block diagram.} \label{fig:threeeai} \end{figure} Thus one tries to minimize the following mean square error (MSE) \cite{ar7}: \begin{equation} \mathrm{MSE}=\frac{1}{n} \sum_{i=1}^{n}\left(t_{i}-y_{i}\right)^{2}, \label{eqa:3} \end{equation} where $n$ is the number of data points used for training the model. 
\subsubsection{Resilient propagation} Resilient propagation (RPROP) \cite{ar8} is one of the fastest training algorithms available and is widely used for learning multilayer feed-forward neural networks in numerous applications, with the great advantage of simple implementation. The RPROP algorithm refers only to the sign of the gradient. It is a supervised learning method. Resilient propagation calculates an individual update value $\Delta_{i j}$ for each connection, which determines the size of the weight update. The following learning rule is applied to calculate it: \begin{equation} \label{eqa:4} \begin{array}{c} \Delta_{i j}^{(t)}=\left\{\begin{array}{ll} \eta^{+} \times \Delta_{i j}^{(t-1)}, & \text { if } \frac{\partial E}{\partial w_{i j}}^{(t-1)} \times \frac{\partial E}{\partial w_{i j}}^{(t)}>0 \\ \eta^{-} \times \Delta_{i j}^{(t-1)}, & \text { if } \frac{\partial E}{\partial w_{i j}}^{(t-1)} \times \frac{\partial E}{\partial w_{i j}}^{(t)}<0 \\ \Delta_{i j}^{(t-1)}, & \text { else } \end{array}\right. \\ \text { where } 0<\eta^{-}<1<\eta^{+}. \end{array} \end{equation} The update value $\Delta_{i j}$ evolves during the learning process depending on the sign of the error gradient of the previous iteration, $\frac{\partial E}{\partial w_{i j}}^{(t-1)}$, and the error gradient of the present iteration, $\frac{\partial E}{\partial w_{i j}}^{(t)}$. Each time the partial derivative (error gradient) of the corresponding weight $w_{i j}$ changes its sign, which indicates that the last update was too large and the algorithm has jumped over a local minimum, the update value $\Delta_{i j}$ is decreased by the factor $\eta^{-}$, a constant usually set to $0.5$. If the derivative retains its sign, the update value is slightly increased by the factor $\eta^{+}$ in order to accelerate convergence in shallow regions; $\eta^{+}$ is a constant usually set to $1.2$. If the derivative is $0$, the update value is not changed. 
Once the update value is determined for each weight, the weight update is computed as \begin{equation} \label{eqa:5} \Delta w_{i j}^{(t)}=\left\{\begin{array}{ll} -\Delta_{i j}^{(t)}, & \text { if } \frac{\partial E}{\partial w_{i j}}^{(t)}>0 \\ +\Delta_{i j}^{(t)}, & \text { if } \frac{\partial E}{\partial w_{i j}}^{(t)}<0 \\ 0, & \text { else, } \end{array}\right. \qquad w_{i j}^{(t+1)} = w_{i j}^{(t)}+\Delta w_{i j}^{(t)}. \end{equation} If the present derivative is positive (increasing error), the weight is decreased by the update value; if the present derivative is negative (decreasing error), the weight is increased by the update value. \section{Results and Discussion} \label{sec:Res} In this section, we discuss the results obtained for the entropy per rapidity $d s/d y$ in central Pb-Pb collisions at the LHC energies $\sqrt{s} = 2.76$ and $5.02$ TeV. The ANN simulation model is also used to estimate the entropy per rapidity $d s/d y$ at the considered energies, and a comparison between the results obtained from the experimental measurements and the simulated results is shown. \subsection{The estimated entropy per rapidity $ds/dy$ from Pb-Pb collisions at $\sqrt{s} = 2.76$ TeV} We calculated the entropy per rapidity $d s/d y$ for the particles $\pi$, $k$, $p$, $\Lambda$, $\Omega$, and $\bar{\Sigma}$ produced in central Pb-Pb collisions at $\sqrt{s} = 2.76$ TeV. The obtained results are compared to those estimated from the ANN simulation model and to those calculated in Ref. \cite{Hanus:2019fnc}. As experimental input, the computation uses the transverse momentum spectra of the particles $\pi$, $k$, $p$ \cite{ALICE:2013mez}, $\Lambda$ \cite{ALICE:2013cdo}, $\Omega$ and $\bar{\Sigma}$ \cite{ALICE:2013xmt}. 
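The two update rules above can be condensed into a single-weight sketch. This is a simplified illustration of Eqs. (\ref{eqa:4})--(\ref{eqa:5}); the upper and lower bounds on $\Delta$ used in practical RPROP implementations are omitted here:

```python
def rprop_step(grad, prev_grad, delta, eta_minus=0.5, eta_plus=1.2):
    """One RPROP update for a single weight: grow the update value when
    the gradient keeps its sign, shrink it on a sign flip (Eq. eqa:4),
    then move the weight against the sign of the gradient (Eq. eqa:5).
    Returns (weight_change, new_delta)."""
    if grad * prev_grad > 0:       # same sign: accelerate
        delta *= eta_plus
    elif grad * prev_grad < 0:     # sign flip: jumped over a minimum
        delta *= eta_minus
    if grad > 0:
        return -delta, delta       # increasing error: decrease weight
    if grad < 0:
        return +delta, delta       # decreasing error: increase weight
    return 0.0, delta              # zero gradient: no change
```

Iterating this per weight, with the stored gradient of the previous step, reproduces the training scheme described in the text.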
We also employ the HBT radii measured by ALICE \cite{ALICE:2015hvw}. In addition, an Rprop-based ANN is used to simulate the $p_T$ spectra for the same particles. This procedure involves a supervised learning algorithm implemented using a set of input-output experimental data. As the nature of the output differs completely between the various particles, we chose individual neural networks trained independently: six networks are used to simulate the different particles. Our networks have three inputs and one output. The inputs are $\sqrt{S}$, $P_{T}$, and the centrality; the output is $\frac{1}{N_{evt} }\frac{d^2 N}{d y d p_T}$. The number of hidden layers between input and output and the number of neurons in each hidden layer are selected by trial and error. We started with one hidden layer containing a single neuron and then increased the numbers of hidden layers and neurons gradually; changing these numbers changes the performance of the network. The learning performance of a network can be measured and evaluated by inspecting the MSE and the regression value (R). If the MSE is close to zero, the difference between the network output and the desired output is small; if it is zero, there is no difference at all. The value of R, on the other hand, determines the level of correlation between the network output and the experimental data: a value of $R$ equal to $1$ indicates very good agreement between them. In our work, the best MSE and R values are obtained using four hidden layers. The numbers of neurons in the hidden layers are ($40$, $40$, $40$, $40$), ($20$, $20$, $10$, $20$), ($40$, $30$, $30$, $30$), ($40$, $40$, $40$, $40$), ($100$, $100$, $110$, $100$), and ($80$, $70$, $60$, $50$) for the particles $\pi$, $k$, $p$, $\Lambda$, $\Omega$, and $\bar{\Sigma}$, respectively. 
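The two diagnostics used for this model selection can be sketched as follows (with toy target/output lists, not the actual spectra):

```python
import math

def mse(t, y):
    """Mean square error of Eq. (eqa:3) between targets t and outputs y."""
    return sum((a - b) ** 2 for a, b in zip(t, y)) / len(t)

def regression_r(t, y):
    """Regression value R (Pearson correlation) between targets and
    network outputs; R = 1 indicates perfect correlation."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    cov = sum((a - mt) * (b - my) for a, b in zip(t, y))
    st = math.sqrt(sum((a - mt) ** 2 for a in t))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (st * sy)
```

Architectures are ranked by MSE close to zero and R close to one, exactly the criteria applied in the trial-and-error search above.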
Simplified diagrams of the proposed ANN networks are shown in Fig. \ref{fig:oneeai1} for the particles $\pi$ (a), $k$ (b), $p$ (c), $\Lambda$ (d), $\Omega$ (e), and $\bar{\Sigma}$ (f), respectively. \begin{figure}[htbp] \includegraphics[width=5cm]{pion1.png} \includegraphics[width=5cm]{kaon1.png} \includegraphics[width=5cm]{proton1.png} \includegraphics[width=5cm]{lambda1.png} \includegraphics[width=5cm]{omega1.png} \includegraphics[width=5cm]{xayminus1.png} \caption{A schematic diagram of the basic formal neuron for particles (a) $\pi$, (b) $k$, (c) $p$, (d) $\Lambda$, (e) $\Omega$, and (f) $\bar{\Sigma}$.} \label{fig:oneeai1} \end{figure} The resulting MSE and R values for the training are shown in Figs. (\ref{fig:oneeai2}) and (\ref{fig:oneeai3}) for the particles $\pi$ (a), $k$ (b), $p$ (c), $\Lambda$ (d), $\Omega$ (e), and $\bar{\Sigma}$ (f), respectively. The MSE values are $5.5224 \times 10^{-4}$, $9.8843 \times 10^{-6}$, $2.2375 \times 10^{-3}$, $2.2517 \times 10^{-5}$, $9.1972\times10^{-6}$, and $5.5113\times10^{-5}$ after epoch (number of training iterations) $1000$, $113$, $1000$, $485$, $171$, and $911$ for the particles $\pi$ (a), $k$ (b), $p$ (c), $\Lambda$ (d), $\Omega$ (e), and $\bar{\Sigma}$ (f), respectively, as shown in Fig. (\ref{fig:oneeai2}). Also, as shown in Fig. (\ref{fig:oneeai3}), the regression values are close to one. These MSE and regression values indicate good agreement between the ANN results and the experimental data. 
\begin{figure}[htbp] \includegraphics[width=5cm]{pion2.png} \includegraphics[width=5cm]{kaon2.png} \includegraphics[width=5cm]{proton2.png} \includegraphics[width=5cm]{lambda2.png} \includegraphics[width=5cm]{omega2.png} \includegraphics[width=5cm]{xayminus2.png} \caption{The best training performance (MSE) for particles (a) $\pi$, (b) $k$, (c) $p$, (d) $\Lambda$, (e) $\Omega$, and (f) $\bar{\Sigma}$.} \label{fig:oneeai2} \end{figure} \begin{figure}[htbp] \includegraphics[width=5cm]{pion3.png} \includegraphics[width=5cm]{kaon3.png} \includegraphics[width=5cm]{proton3.png} \includegraphics[width=5cm]{lambda3.png} \includegraphics[width=5cm]{omega3.png} \includegraphics[width=5cm]{xayminus3.png} \caption{Regression value R for particles (a) $\pi$, (b) $k$, (c) $p$, (d) $\Lambda$, (e) $\Omega$, and (f) $\bar{\Sigma}$ at the used epoch.} \label{fig:oneeai3} \end{figure} The transfer function used in the hidden layers is \texttt{logsig} for the $k$ particle and \texttt{poslin} for all other particles, with \texttt{purelin} in the output layer. All parameters used for the ANN model are listed in Tab. (\ref{tabinputann276}). 
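The MATLAB-style transfer functions named above correspond to the following simple maps, sketched here in Python:

```python
import math

def logsig(x):
    """Logistic sigmoid, 1 / (1 + e^{-x}) (hidden layers for the k particle)."""
    return 1.0 / (1.0 + math.exp(-x))

def poslin(x):
    """Positive linear (ReLU), max(0, x) (hidden layers of the other particles)."""
    return max(0.0, x)

def purelin(x):
    """Identity transfer function (used in the output layer)."""
    return x
```

The bounded \texttt{logsig} suits the smoothly varying kaon spectra, while \texttt{poslin} keeps the other networks' hidden activations non-negative and \texttt{purelin} lets the output take any real value.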
\begin{table}[htbp] \caption{ANN parameters for particles $\pi$, $k$, $p$, $\Lambda$, $\Omega$, and $\bar{\Sigma}$ at $\sqrt{s} = 2.76$ TeV.} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{ANN parameters} & \multicolumn{6}{|c|}{Particles}\\ \cline{2-7} & $\pi$ & $K$ & $p$ & $\Lambda$ & $\Omega$ & $\bar{\Sigma}$ \\ \hline Inputs & \multicolumn{2}{|c|}{$\sqrt{S}$} & \multicolumn{2}{|c|}{$P_{T}(\mathrm{GeV})$} & \multicolumn{2}{|c|}{Centrality} \\ \hline $\sqrt{S}$ & \multicolumn{6}{|c|}{$2.76(\mathrm{TeV})$} \\ \hline Output & \multicolumn{6}{|c|}{$\frac{1}{N_{evt} }\frac{d^2 N}{ d y d p_T}$} \\ \hline Hidden layers & \multicolumn{6}{|c|}{4} \\ \hline Neurons & $40,40,40,40$ & $20,20,10,20$ & $40,30,30,30$ & $40,40,40,40$ & $100,100,110,100$ & $80,70,60,50$ \\ \hline Epochs & 1000 & 113 & 1000 & 485 & 171 & 911 \\ \hline Performance (MSE) & $5.5224 \times 10^{-4}$ & $9.8843 \times 10^{-6}$ & $2.2375 \times 10^{-3}$ & $2.2517 \times 10^{-5}$ & $9.1972 \times 10^{-6}$ & $5.5113 \times 10^{-5}$ \\ \hline Training algorithm & \multicolumn{6}{|c|}{Rprop} \\ \hline Training function & \multicolumn{6}{|c|}{trainrp}\\ \hline Transfer functions of hidden layers & Poslin & Logsig & Poslin & Poslin & Poslin & Poslin \\ \hline Output function & \multicolumn{6}{|c|}{Purelin}\\ \hline \end{tabular} \end{adjustbox} \label{tabinputann276} \end{table} To estimate the entropy $S$, extrapolation of the observed transverse momentum spectra to $p_T = 0$ is required. To achieve this, we fitted both the experimental and simulated $p_T$ spectra to two different functional forms derived from two well-known models, the Tsallis distribution \cite{Cleymans:2016opp,Bhattacharyya:2017hdc} and the HRG model \cite{Yassin:2019xxl}. The aim of using two different models is to fit the whole $p_T$ curve. Fig. 
(\ref{fig:oneer}) shows the particle spectrum measured by the ALICE collaboration \cite{Kisiel:2014upa}, represented by closed blue circle symbols, fitted to the Tsallis distribution \cite{Cleymans:2016opp,Bhattacharyya:2017hdc}, represented by a solid red line, to extrapolate the spectrum to $p_{T} = 0$. The HBT one-dimensional radii are scaled by $((2 + \gamma)/3)^{1/2}$ \cite{Hanus:2019fnc,Kisiel:2014upa} and plotted as a function of the transverse mass, $m_{T}$. The comparison of both the experimental and simulated particle spectra with the Tsallis distribution and the HRG model is shown in Fig. (\ref{fig:twon}) for particles $\pi$ (a), $k$ (b), $p$ (c), $\Lambda$ (d), $\Omega$ (e), and $\bar{\Sigma}$ (f). It is clear from Fig. (\ref{fig:twon}) that using different forms of the fitting function is necessary, as the Tsallis function can fit only the left side of the $p_{T}$ curve at $0.001 < y < 6 $, while the HRG model can fit the right side at $ 6 < y < 12 $. This conclusion encourages further investigation. The fitting parameters obtained from the Tsallis distribution and the HRG model are summarized in Tabs. (\ref{tab1exfitt276}) and (\ref{tab1annfitt276}), respectively. \begin{figure}[htbp] \includegraphics[width=8cm]{radius.eps} \caption{The particle spectrum measured by the ALICE collaboration \cite{Kisiel:2014upa}, represented by closed blue circle symbols, is fitted to the Tsallis distribution \cite{Cleymans:2016opp,Bhattacharyya:2017hdc}, represented by a solid red line, to extrapolate the spectrum to $p_{T} = 0$. 
The HBT one-dimensional radii are scaled by $((2 + \gamma)/3)^{1/2}$ \cite{Hanus:2019fnc,Kisiel:2014upa} and plotted as a function of the transverse mass, $m_{T}$.} \label{fig:oneer} \end{figure} \begin{figure}[htbp] \includegraphics[width=5cm]{entropy_pion_pp_276TeV.eps} \includegraphics[width=5cm]{entropy_kaon_pp_276TeV.eps} \includegraphics[width=5cm]{entropy_proton_pp_276TeV.eps} \includegraphics[width=5cm]{entropy_lambda_pp_276TeV.eps} \includegraphics[width=5cm]{entropy_omega_pp_276TeV.eps} \includegraphics[width=5cm]{entropy_xayminus_pp_276TeV.eps} \caption{The transverse momentum distribution, measured by the ALICE collaboration \cite{ALICE:2013mez,ALICE:2013cdo,ALICE:2013xmt} at a centre-of-mass energy of $2.76$ TeV and represented by blue open circle symbols, for particles $\pi$ (a), $k$ (b), $p$ (c), $\Lambda$ (d), $\Omega$ (e), and $\bar{\Sigma}$ (f), compared to the statistical fits from the Tsallis distribution, which fits well at $0.001 < y < 6 $ and is represented by the red solid line given by Eq. (\ref{eqb:5}), and the HRG model, which works in $ 6 < y < 12 $ and is represented by the solid green line given by Eq. (\ref{eqq(7)}). A border line between the Tsallis and HRG regions is drawn in solid purple. The experimental data and the results of both models are then confronted with those obtained from the ANN simulation model, represented by dark brown plus-sign symbols.} \label{fig:twon} \end{figure} \begin {table}[htbp] \caption {The transverse momentum distribution fitting parameters obtained when confronting the Tsallis distribution, Eq. 
(\ref{eqq(7)}), to the ALICE experimental data \cite{ALICE:2013mez,ALICE:2013cdo,ALICE:2013xmt} at $\sqrt{s}$ $= 2.76$ TeV for particles $\pi$, $k$, $p$, $\Lambda$, $\Omega$, and $\bar{\Sigma}$.} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{particle} & \multicolumn{3}{c|}{Tsallis distribution}&\multicolumn{3}{c|}{HRG model} & \multirow{2}{*}{$\chi^{2}$ /dof} \\ \cline{2-7} & dN/dy & $T_{Ts}$ GeV & q &V $fm^3$ &$T_{th}$ GeV & $\mu$ GeV & \\ \hline $\pi$& $739.886$ & $0.0658$ & $1.2305$ & $3.41332 \times 10^1 \pm 6.103$ &$3.3881\times 10^{-1} \pm 4.8231 \times 10^{-3}$ & $ 1.2549 \pm 4.219 \times 10^{-2}$& $13.6/12$ \\ \hline $K$ & $88.3303$ & $0.1711$ &$1.1132$ &$1.38260 \times 10^1 \pm 3.6055 $ &$3.13028\times 10^{-1} \pm 4.36637\times 10^{-3}$ & $1.26516 \pm 6.32063\times 10^{-2}$& $163.536/11$ \\ \hline $p$ & $39.8814$ &$0.31752$ &$1.13739$ & $5.57340\times 10^2 \pm 3.31936\times 10^{2}$& $3.89646\times 10^{-1}\pm 1.79383\times 10^{-3} $& $3.10152\times 10^{-2} \pm 2.39531\times 10^{-1}$ & $ 25.3567/12$ \\ \hline $\Lambda$ & $47.6767$ & $0.4388$ & $1.1148$ &$48.7847 \pm 48983.2$ & $0.503582 \pm 45.8599$& $1.12747 \pm 626.183 $& $286.085/15$ \\ \hline $\Omega $& $1.24793$ & $0.4277$ & $1.14736$ &$1.24168 \pm 1.0661$ &$ 0.539524 \pm 0.0513$ & $1.14468 \pm 0.3706$& $0.0199/7$ \\ \hline $\bar{\Sigma}$& $6.5279$ & $0.4640$ & $1.1245$ & $4.38896 \pm 1.5158$ &$0.545919 \pm 0.0209$ & $0.865 \pm 0.1993$& $ 3.0475/15 $ \\ \hline \end{tabular} \end{adjustbox} \label{tab1exfitt276} \end {table} \begin {table}[htbp] \caption {The same in Tab. 
(\ref{tab1exfitt276}) but with the statistical fit results from both models confronted with the ANN simulation model.} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{particle} & \multicolumn{3}{c|}{Tsallis distribution}&\multicolumn{3}{c|}{HRG model} & \multirow{2}{*}{$\chi^{2}$ /dof} \\ \cline{2-7} & $dN/dy$ & $T_{Ts}$ (GeV) & $q$ & $V$ ($\mathrm{fm}^3$) & $T_{th}$ (GeV) & $\mu$ (GeV) & \\ \hline $\pi$& $775.496$ & $0.08174$ & $1.1873$ & $23.5314 \pm 15.5048$ & $0.3286 \pm 0.0448$&$1.3797 \pm 0.09944$ & $32.0297/11 $ \\ \hline $K$ &$88.2811$ &$0.1719$ &$1.111$ &$13.8111 \pm 1.993$ & $ 0.3125 \pm 0.00478$ &$ 1.2664 \pm 0.0292$ &$ 159.234/11 $ \\ \hline $p$ &$40.2232$ & $0.31542$ & $1.14088$ &$26.0248 \pm 3.2175$ & $0.3758 \pm 0.0053$ &$ 1.2924 \pm 0.0486$& $ 23.8924/12 $ \\ \hline $\Lambda$ & $47.953$ & $0.4458$ &$1.11257$ &$ 50.852 \pm 54211.8$&$0.5057 \pm 30.6755 $&$1.1584 \pm 358.797 $ &$ 285.464/15 $ \\ \hline $\Omega $& $1.2580$ & $0.4125$ & $1.1492$ & $1.17248 \pm 0.294$&$0.6464 \pm 0.0184$&$0.4864 \pm 0.1411$ &$0.02487/7 $ \\ \hline $\bar{\Sigma}$& $6.5309$ & $0.4595$ & $1.1260$ &$4.3795 \pm 1.8017$& $0.5577 \pm 0.0248$ &$0.7849 \pm 0.2178$ &$ 3.1109/15$ \\ \hline \end{tabular} \end{adjustbox} \label{tab1annfitt276} \end {table} The estimated entropy per rapidity $ds/dy$ from Pb-Pb central collisions at $\sqrt{s}$ $=2.76$ TeV using the Tsallis distribution, the HRG model, and the ANN model for particles $\pi$, $k$, $p$, $\Lambda$, $\Omega$ and $\bar{\Sigma}$ is presented in Tab. (\ref{tab3276entropy}). The effect of the Tsallis distribution and HRG model fitting functions on the estimated entropy per rapidity $d s/d y$ is also shown in Tab. (\ref{tab3276entropy}). We compare the entropy per rapidity obtained from the statistical fits and the ANN model with that obtained in Ref. \cite{Hanus:2019fnc}. 
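The $\chi^{2}$/dof values quoted in the last column of the fit tables follow the usual definition; a minimal sketch, with toy numbers rather than the paper's spectra:

```python
import numpy as np

def chi2_per_dof(data, model, sigma, n_params):
    """chi^2 per degree of freedom between measured and fitted yields."""
    chi2 = np.sum(((data - model) / sigma) ** 2)
    return chi2 / (len(data) - n_params)

# Toy numbers (not the paper's data): 5 points, 3 fit parameters
data = np.array([10.0, 8.0, 5.0, 3.0, 1.5])
model = np.array([9.5, 8.2, 5.1, 2.8, 1.6])
sigma = np.array([0.5, 0.4, 0.3, 0.2, 0.1])
print(chi2_per_dof(data, model, sigma, n_params=3))
```

Values of order unity, as in the $\Omega$ and $\bar{\Sigma}$ rows, indicate a fit compatible with the quoted uncertainties.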
The function which describes the non-linear relationship between inputs and output in the ANN simulation model is given in Appendix \ref{sec:(append:neural)}. The results of the ANN simulation, the Tsallis distribution, and the HRG model, compared with the experimental data, are shown in Fig. (\ref{fig:twon}). \begin {table}[htbp] \caption {The estimated entropy per rapidity $ds/dy$ from Pb-Pb central collisions at $\sqrt{s}$ $=2.76$ TeV using the Tsallis distribution, the HRG model, and the ANN model. The obtained results are compared with those obtained in Ref. \cite{Hanus:2019fnc}.} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{|c|c|c|c|c|c|} \hline particle & $(ds/dy)_{y=0}$ & $(ds/dy)_{y=0}$ supplemented by Tsallis & $(ds/dy)_{y=0}$ supplemented by HRG model & $(ds/dy)_{y=0}$ estimated by ANN model & $(ds/dy)_{y=0}$ Ref. \cite{Hanus:2019fnc} \\ \hline $\pi$ & $1908.21$ & $2260.85$ & $ 2267.58$ &$2265.17$ & $ 2182$ \\ \hline $K$ & $478.351 $ & $512.399 $& $514.321$ &$514.347 $& $605 $ \\ \hline $p$ & $265.648$ & $ 278.125$& $278.486 $& $277.937$& $ 266$ \\ \hline $\Lambda$ & $304.334 $ &$325.742 $ & $321.939$ & $300 $& $ 320 $ \\ \hline $\Omega $ &$10.1561$ & $14.4025$ & $14.2159$ &$13.3129$ & $ 16$ \\ \hline $\bar{\Sigma}$& $54.3717 $ & $58.3102 $& $57.8227 $&$58.1449$ & $58 $ \\ \hline \end{tabular} \end{adjustbox} \label{tab3276entropy} \end {table} From Tab. (\ref{tab3276entropy}), the entropy per rapidity $d s/d y$ calculated from the statistical fits, the ANN model, and that obtained in Ref. \cite{Hanus:2019fnc} agree with each other. The excellent agreement between the $d s/d y$ estimated by the ANN simulation model and that obtained in Ref. \cite{Hanus:2019fnc} encourages us to use it at other energies. 
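The entropy extraction above rests on the phase-space integral of Appendix A, Eq. (\ref{eq:1}). As a toy numerical check of that formula (not a reproduction of the paper's fitted spectra), the sketch below integrates the standard kinetic-theory entropy integrand at $\mu = 0$ and verifies the massless-boson limit $s/T^{3} = 2\pi^{2}/45$; the temperature value is illustrative only.

```python
import numpy as np

def entropy_density(T, m, g=1.0, boson=True, n=20000):
    """Kinetic-theory entropy density at mu = 0 (toy check, not paper data):
    s = g/(2 pi^2) * Integral p^2 dp of
        (1+f) ln(1+f) - f ln f   for bosons,
       -(1-f) ln(1-f) - f ln f   for fermions."""
    p = np.linspace(1e-8, 30.0 * max(T, m), n)
    E = np.sqrt(p * p + m * m)
    if boson:
        f = 1.0 / np.expm1(E / T)                 # Bose-Einstein
        integrand = (1.0 + f) * np.log1p(f) - f * np.log(f)
    else:
        f = 1.0 / (np.exp(E / T) + 1.0)           # Fermi-Dirac
        integrand = -(1.0 - f) * np.log1p(-f) - f * np.log(f)
    y = p * p * integrand
    dp = p[1] - p[0]
    # trapezoidal rule on the uniform grid
    return g / (2.0 * np.pi ** 2) * (np.sum(y) - 0.5 * (y[0] + y[-1])) * dp

# Massless-boson check: s / T^3 should approach 2 pi^2 / 45 ~ 0.4386
T = 0.150  # GeV, illustrative
print(entropy_density(T, m=0.0) / T ** 3)
```

The same integrand, evaluated with the fitted distribution function, is what links the extrapolated spectra and HBT volumes to the $ds/dy$ values in the table.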
\subsection{The estimated entropy per rapidity from Pb-Pb collisions at $\sqrt{s}$ $= 5.02$ TeV} Here, in central Pb-Pb collisions at $\sqrt{s}$ $= 5.02$ TeV, we calculated the entropy per rapidity $d s/d y$ for the particles $\pi$, $k$, $p$, $\Lambda$, and $K_s^0$. Transverse momentum spectra of the particles $\pi$, $k$, $p$ \cite{ALICE:2019hno}, $\Lambda$, and $K_s^0$ \cite{Sefcik:2018acn} are used as experimental input for the computation. We also employ the ALICE-measured HBT source radii \cite{ALICE:2015hvw}, and we used the same deduced inputs for the ANN model. We applied the ANN model to acquire the $p_T$ spectra of the particles $\pi$, $k$, $p$, $\Lambda$, and $K_s^0$ according to the input parameters listed in Tab. (\ref{tabinputann502}). Five networks are chosen to simulate the experimental data, one per particle. The best performance value and regression are obtained using four hidden layers. The numbers of neurons in the hidden layers are ($100$, $100$, $120$, $120$), ($70$, $90$, $80$, $80$), ($100$, $80$, $80$, $70$), ($20$, $30$, $30$, $20$), and ($30$, $20$, $40$, $40$) for the particles $\pi$, $k$, $p$, $\Lambda$, and $K_s^0$, respectively. Simplified sketches of the proposed ANN networks are shown in Fig. (\ref{fig:oneeai111}) for particles $\pi$ (a), $k$ (b), $p$ (c), $\Lambda$ (d), and $K_s^0$ (e), respectively. \begin{figure}[htbp] \includegraphics[width=5cm]{pion11.png} \includegraphics[width=5cm]{kaon11.png} \includegraphics[width=5cm]{proton11.png} \includegraphics[width=5cm]{lambda11.png} \includegraphics[width=5cm]{kshort11.png} \caption{The same as in Fig. (\ref{fig:oneeai1}) but for particles (a) $\pi$, (b) $k$, (c) $p$, (d) $\Lambda$, and (e) $K_s^0$ at $\sqrt{s}$ $= 5.02$ TeV.} \label{fig:oneeai111} \end{figure} As a result, the best performance and regression obtained from training are shown in Figs. (\ref{fig:oneeai222}) and (\ref{fig:oneeai333}) for particles (a) $\pi$, (b) $k$, (c) $p$, (d) $\Lambda$, and (e) $K_s^0$, respectively. 
The performance is $9.897 \times 10^{-6}$, $8.6914 \times 10^{-6}$, $9.3767 \times 10^{-6}$, $9.8911 \times 10^{-6}$, and $9.8314 \times 10^{-6}$ after epochs $637$, $792$, $577$, $506$, and $259$ for particles (a) $\pi$, (b) $k$, (c) $p$, (d) $\Lambda$, and (e) $K_s^0$, respectively, as shown in Fig. (\ref{fig:oneeai222}). The transfer function used is \text{logsig} in the hidden layers and \text{purelin} in the output layer for all particles. All parameters used for the ANN are listed in Tab. (\ref{tabinputann502}). \begin{figure}[htbp] \includegraphics[width=5cm]{pion22.png} \includegraphics[width=5cm]{kaon22.png} \includegraphics[width=5cm]{proton22.png} \includegraphics[width=5cm]{lambda22.png} \includegraphics[width=5cm]{kshort22.png} \caption{The same as in Fig. (\ref{fig:oneeai2}) but for particles (a) $\pi$, (b) $k$, (c) $p$, (d) $\Lambda$, and (e) $K_s^0$ at $\sqrt{s}$ $= 5.02$ TeV.} \label{fig:oneeai222} \end{figure} \begin{figure}[htbp] \includegraphics[width=5cm]{pion33.png} \includegraphics[width=5cm]{kaon33.png} \includegraphics[width=5cm]{proton33.png} \includegraphics[width=5cm]{lambda33.png} \includegraphics[width=5cm]{kshort33.png} \caption{The same as in Fig. 
(\ref{fig:oneeai3}) but for particles (a) $\pi$, (b) $k$, (c) $p$, (d) $\Lambda$, and (e) $K_s^0$ at $\sqrt{s}$ $= 5.02$ TeV.} \label{fig:oneeai333} \end{figure} \begin {table}[htbp] \caption {ANN parameters for particles $\pi$, $k$, $p$, $\Lambda$, and $K_s^0$ at $\sqrt{s}$ $= 5.02$ TeV.} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{|c|c|c|c|c|c|} \hline ANN & \multicolumn{5}{|c|}{particles} \\ \cline { 2 - 6 } parameters & $\pi$ & $K$ & $p$ & $\Lambda$ &$K_s^0$ \\ \hline Inputs & \multicolumn{2}{|c|}{$\sqrt{S} $} &\multicolumn{2}{|c|}{$P_{T}(\mathrm{GeV})$} & Centrality \\ \hline $\sqrt{S}$ & \multicolumn{5}{|c|}{$5.02$ TeV} \\ \hline Output & \multicolumn{5}{|c|}{$\frac{1}{N_{evt} }\frac{d^2 N}{ d y d p_T}$} \\ \hline Hidden layers & \multicolumn{5}{|c|}{4} \\ \hline Neurons & $100,100,120,120$ & $70,90,80,80$ &$100,80,80,70$ & $20,30,30,20$ &$30,20,40,40$ \\ \hline Epochs & 637 &792 & 577 & 506 & 259\\ \hline Performance & $9.897 \times 10^{-6}$ & $8.6914 \times 10^{-6}$&$9.3767 \times 10^{-6}$ &$9.8911 \times 10^{-6}$ & $9.8314 \times 10^{-6}$ \\ \hline Training algorithms & \multicolumn{5}{|c|}{Rprop} \\ \hline Training functions & \multicolumn{5}{|c|}{trainrp} \\ \hline Transfer functions of hidden layers & Logsig & Logsig & Logsig & Logsig & Logsig \\ \hline Output functions & \multicolumn{5}{|c|}{Purelin} \\ \hline \end{tabular} \end{adjustbox} \label{tabinputann502} \end {table} Extrapolation of the observed transverse momentum spectra to $p_T = 0$ is necessary to determine the entropy $S$. To achieve this, we fitted both the experimental and simulated $p_T$ spectra to two different functional models, the Tsallis distribution \cite{Cleymans:2016opp,Bhattacharyya:2017hdc} and the HRG model \cite{Yassin:2019xxl}. The aim of combining the two models is to fit the entire $p_T$ curve. In Fig. 
(\ref{fig:twonnn}), the experimental and simulated particle spectra $p_T$ are compared to the Tsallis distribution and the HRG model for particles $\pi$ (a), $k$ (b), $p$ (c), $\Lambda$ (d), and $K_s^0$ (e). As seen in Fig. (\ref{fig:twonnn}), employing different forms of the fitting function is necessary, as the Tsallis function can only match the left side of the $p_T$ curve at $0.001 < y < 10 $, whereas the HRG model can fit the right side at $ 10 < y < 20 $. This result may motivate additional research. Tabs. (\ref{tab1exfitt502}) and (\ref{tab1annfitt502}) summarise the fitting parameters obtained from the Tsallis distribution and the HRG model, respectively. \begin{figure}[htbp] \includegraphics[width=5cm]{entropy_pion_pbpb_502TeV.eps} \includegraphics[width=5cm]{entropy_kaon_pbpb_502TeV.eps} \includegraphics[width=5cm]{entropy_proton_pbpb_502TeV.eps} \includegraphics[width=5cm]{entropy_lambda_pbpb_502TeV.eps} \includegraphics[width=5cm]{entropy_kshort_pbpb_502TeV.eps} \caption{The transverse momentum distribution, measured by the ALICE collaboration \cite{ALICE:2019hno,Sefcik:2018acn} at a centre-of-mass energy of $5.02$ TeV and represented by blue open circle symbols, for particles $\pi$ (a), $k$ (b), $p$ (c), $\Lambda$ (d), and $K_s^0$ (e), compared to the statistical fits from the Tsallis distribution, which fits well at $0.001 < y < 10 $ and is represented by the red solid line given by Eq. (\ref{eqb:5}), and the HRG model, which works in $ 10 < y < 20 $ and is represented by the solid green line given by Eq. (\ref{eqq(7)}). A border line between the Tsallis and HRG regions is drawn in solid purple. The experimental data and the results of both models are then confronted with those obtained from the ANN simulation model, represented by dark brown plus-sign symbols.} \label{fig:twonnn} \end{figure} \begin {table}[htbp] \caption {The transverse momentum distribution fitting parameters obtained when confronting the Tsallis distribution, Eq. 
(\ref{eqb:5}), and the HRG model, Eq. (\ref{eqq(7)}), to the ALICE experimental data \cite{ALICE:2019hno,Sefcik:2018acn} at $\sqrt{s}$ $= 5.02$ TeV for particles $\pi$, $k$, $p$, $\Lambda$, and $K_s^0$.} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{particle} & \multicolumn{3}{c|}{Tsallis distribution}&\multicolumn{3}{c|}{HRG model} & \multirow{2}{*}{$\chi^{2}$ /dof} \\ \cline{2-7} & $dN/dy$ & $T_{Ts}$ (GeV) & $q$ & $V$ ($\mathrm{fm}^3$) & $T_{th}$ (GeV) & $\mu$ (GeV) & \\ \hline $\pi$& $5804.25$ & $0.2101$ & $1.1218$ & $4709.14 \pm 1351.88$ &$ 2.6497 \pm 0.0475$& $26.2268 \pm 0.7773$& $316/48$ \\ \hline $K$ &$ 953.214$ &$ 0.2392$ &$1.1146$ & $3614.78 \pm 1.6854$ &$0.719156 \pm 630181$ & $1.28019 \pm 630181$& $905.7/44 $ \\ \hline $p$ & $437.394$ &$0.3393$ & $1.1167$ & $ 8.6667 \pm 3.7659$&$3.45715 \pm 0.1372$ &$22.842 \pm 1.8419$ & $219.6/36 $ \\ \hline $\Lambda$ & $466.969$ & $0.4920$ &$1.119$ & $395.622 \pm 44731.7$&$1.49141 \pm 15.7463$ &$ 10.0101 \pm 226.948$& $198/16$ \\ \hline $K_s^0$ &$750.684$ &$ 0.4003$ & $1.105$ & $72.7633 \pm 82.7046$&$2.5859 \pm 0.227$ &$17.809 \pm 3.2296$ & $547.7/18$ \\ \hline \end{tabular} \end{adjustbox} \label{tab1exfitt502} \end {table} \begin {table}[htbp] \caption {The same as in Tab. 
(\ref{tab1exfitt502}) but with the statistical fit results from both models confronted with the ANN simulation model.} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{particle} & \multicolumn{3}{c|}{Tsallis fitting parameters}&\multicolumn{3}{c|}{HRG model fitting parameters} & \multirow{2}{*}{$\chi^{2}$ /dof} \\ \cline{2-7} & $dN/dy$ & $T_{Ts}$ (GeV) & $q$ & $V$ ($\mathrm{fm}^3$) & $T_{th}$ (GeV) & $\mu$ (GeV) & \\ \hline $\pi$& $5807.58$ & $0.2099$ & $1.1221$ & $4681.65 \pm 1995.89$ &$ 2.5486 \pm 0.066$&$ 24.7643 \pm 1.0789$&$3173/48$ \\ \hline $K$ &$958.826$ & $0.2396$& $1.1159$& $ 5871.87 \pm 4273.75$ &$ 3.202 \pm 0.1495$ & $36.6246 \pm 2.627$ &$9116.2/44 $ \\ \hline $p$ & $440.362$ & $ 0.3341$& $1.1223$ &$1981.5 \pm 1398.83$ &$ 4.5297 \pm 0.2092$ &$59.7687 \pm 3.7575$ & $2246.4/36 $ \\ \hline $\Lambda$ &$472.765$ & $0.4976$ &$ 1.1183 $& $396.77 \pm 53486.2$ &$1.42152 \pm 15.2713$ &$ 9.10505 \pm 220.834$ & $ 1949.7/16$ \\ \hline $K_s^0$ &$749.794 $ & $ 0.3945$& $1.108$ &$ 26.688 \pm 55.3559$ &$2.0758 \pm 0.4493$ &$9.7132 \pm 5.5569$ & $5558.9/18 $\\ \hline \end{tabular} \end{adjustbox} \label{tab1annfitt502} \end {table} The estimated entropy per rapidity $ds/dy$ from Pb-Pb central collisions at $\sqrt{s}$ $= 5.02$ TeV using the Tsallis distribution, the HRG model, and the ANN model for particles $\pi$, $k$, $p$, $\Lambda$, and $K_s^0$ is presented in Tab. (\ref{tab3502entropy}). The effect of the Tsallis distribution and HRG model fitting functions on the estimated entropy per rapidity $d s/d y$ is also shown in Tab. (\ref{tab3502entropy}). \begin {table}[htbp] \caption {The same as in Tab. 
(\ref{tab3276entropy}) but at $\sqrt{s}$ $=5.02$ TeV.} \begin{adjustbox}{width=\columnwidth} \begin{tabular}{|c|c|c|c|c|} \hline particle & $(ds/dy)_{y=0}$ & $(ds/dy)_{y=0}$ supplemented by Tsallis& $(ds/dy)_{y=0}$ supplemented by HRG model & $(ds/dy)_{y=0}$ estimated by ANN model \\ \hline $\pi$ & $13188.2$ & $13406.9$ &$13333.6$ & $13330.9$ \\ \hline $K$ & $3909.58 $& $3938.4 $&$1011.8 $&$3896.25 $ \\ \hline $p$& $2259.36$ &$ 2262.64$ & $2250.08$ & $2253.3$ \\ \hline $\Lambda$ &$2104.91 $ &$2193.98 $ & $2179.42 $&$2178$ \\ \hline $K_s^0$ & $2463.9$ &$3396.13$ & $3372.08$ & $3369.06$ \\ \hline \end{tabular} \end{adjustbox} \label{tab3502entropy} \end {table} The values of the entropy per rapidity $d s/d y$ calculated by fitting the experimental and simulated particle spectra to the statistical models agree with each other. The function which describes the non-linear relationship between inputs and output is given in Appendix \ref{sec:(append:neural)}. This implies that the ANN model can be used further to predict the entropy per rapidity $d s/d y$ in the absence of experiment. \section{Summary and Conclusions} \label{sec:Cncls} In this work, we calculated the entropy per rapidity $d S/d y$ produced in central Pb-Pb ultra-relativistic nuclear collisions at LHC energies using experimentally observed identified-particle spectra and source radii estimated from HBT correlations. The considered particles are $\pi$, $k$, $p$, $\Lambda$, $\Omega$, and $\bar{\Sigma}$ at a centre-of-mass energy $\sqrt{s}$ $=2.76$ TeV, and $\pi$, $k$, $p$, $\Lambda$, and $K_s^0$ at $\sqrt{s}$ $=5.02$ TeV. An ANN simulation model is used to estimate the entropy per rapidity $d S/d y$ for the same particles at the considered energies. Extrapolating the transverse momentum spectra to $p_T$ $=0$ is required to calculate $d S/d y$; thus we use two different fitting functions, the Tsallis distribution and the Hadron Resonance Gas (HRG) model. 
The effect of the Tsallis distribution and HRG model fitting functions on the estimated entropy per rapidity $d s/d y$ is also discussed. The Tsallis function can only match the left side of the $p_T$ curve, whereas the HRG model can fit the right side. This result may motivate additional research. The success of the ANN model in describing the experimental measurements implies that the entropy per rapidity can be further predicted in the absence of experiment. \begin{appendices} \label{appendces} \section{A detailed derivation of the entropy production $d s/ d y$ shown in Eq. (\ref{eq:1})} \label{sec:(append:entropy)} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} According to the Gibbs-Duhem relation, the thermodynamic quantities are related by \cite{Letessier:2002gp} \begin{equation}\label{eq:s1} E(V, T, \mu)=F^{\prime}(V, T, \mu)+T S(V, T, \mu)+\mu b(V, T, \mu) \end{equation} Thus the entropy $S$ can be obtained as \cite{Letessier:2002gp} \begin{equation}\label{eq:s2} S=\frac{1}{T}\left(E-F^{\prime}-\mu b\right)=\ln Z-\beta \frac{\partial \ln Z}{\partial \beta}-(\ln \lambda) \lambda \frac{\partial \ln Z}{\partial \lambda} \end{equation} Our aim is to write $S$ in terms of the single-particle distribution function $\mathrm{f}_{F / \beta}$, which is given by \cite{Letessier:2002gp} \begin{equation}\label{eq:s3} \mathrm{f}_{F / \beta}(\xi, \beta, \lambda)=\frac{1}{e^{\beta(\xi-\mu)} \pm 1} \end{equation} where $(+)$ and $(-)$ represent fermions and bosons, respectively. The partition function $\ln Z$ is given by \cite{Letessier:2002gp} \begin{equation}\label{eq:s4} \ln Z_{F / \beta}(V, \beta, \lambda)=\pm \int \frac{d^{3} r d^{3} P}{(2 \pi)^{3}} \ln \left[1 \pm e^{\beta(\mu-\xi)}\right] \end{equation} Differentiating Eq. 
(\ref{eq:s4}) with respect to $\beta$, the inverse temperature, we get \cite{Letessier:2002gp} \begin{equation}\label{eq:s5} \begin{array}{l} \frac{\partial \ln Z_{F / \beta}}{\partial \beta}=\pm \int \frac{d^{3} r d^{3} P}{(2 \pi)^{3}} \frac{(\mu-\xi) e^{\beta(\mu-\xi)}}{1 \pm e^{\beta(\mu-\xi)}}\\ =\pm \int \frac{d^{3} r d^{3} P}{(2 \pi)^{3}} \frac{(\mu-\xi)}{e^{\beta(\xi-\mu)} \pm 1} \end{array} \end{equation} Also, differentiating Eq. (\ref{eq:s4}) with respect to $\lambda$, we get \cite{Letessier:2002gp} \begin{equation}\label{eq:s6} \frac{\partial \ln Z}{\partial \lambda}=\pm \int \frac{d^{3} r d^{3} P}{(2 \pi)^{3}} \frac{e^{\beta(\mu-\xi)}}{1 \pm e^{\beta(\mu-\xi)}} \end{equation} Eq. (\ref{eq:s6}) can be rearranged as \cite{Letessier:2002gp} \begin{equation}\label{eq:s7} \frac{\partial \ln Z}{\partial \lambda}=\pm \int \frac{d^{3} r d^{3} P}{(2 \pi)^{3}} \frac{1}{e^{\beta(\xi-\mu)} \pm 1} \end{equation} Substituting Eqs. (\ref{eq:s4}), (\ref{eq:s5}), and (\ref{eq:s7}) into Eq. (\ref{eq:s2}), we get \cite{Letessier:2002gp} \begin{equation}\label{eq:s8} \begin{array}{c} S=\pm \int \frac{d^{3} r d^{3} P}{(2 \pi)^{3}}\left[\ln \left(1 \pm e^{\beta(\mu-\xi)}\right)-\frac{\beta(\mu-\xi)}{e^{\beta(\xi-\mu)} \pm 1}\right. \\ \left.-\frac{\beta \mu e^{\beta \mu}}{e^{\beta(\xi-\mu)} \pm 1}\right] \end{array} \end{equation} At vanishing chemical potential, $\mu_B=0$, the last term in Eq. (\ref{eq:s8}) vanishes. Inverting the distribution function gives \begin{equation}\label{eq:s9} e^{\beta(\xi-\mu)} \pm 1=\frac{1}{\mathrm{f}_{F / \beta}} \end{equation} Eq. 
(\ref{eq:s3}) can thus be written in the following form \cite{Letessier:2002gp} \begin{equation}\label{eq:s10} e^{\beta(\xi-\mu)}=\frac{1}{\mathrm{f}_{F / \beta}} \mp 1=\frac{1 \mp \mathrm{f}_{F / \beta}}{\mathrm{f}_{F / \beta}} \end{equation} Recalling Eq. (\ref{eq:s10}), we obtain \begin{equation}\label{eq:s11} e^{\beta(\mu-\xi)}=\frac{\mathrm{f}_{F / \beta}}{1 \mp \mathrm{f}_{F / \beta}} \end{equation} Rearranging Eq. (\ref{eq:s11}) in the following form \begin{equation}\label{eq:s12} 1 \pm e^{\beta(\mu-\xi)}=1 \pm \frac{\mathrm{f}_{F / \beta}}{1 \mp \mathrm{f}_{F / \beta}}=\frac{1 \mp \mathrm{f}_{F / \beta} \pm \mathrm{f}_{F / \beta}}{1 \mp \mathrm{f}_{F / \beta}}=\frac{1}{1 \mp \mathrm{f}_{F / \beta}} \end{equation} Substituting Eqs. (\ref{eq:s11}) and (\ref{eq:s12}) into Eq. (\ref{eq:s8}), with the vanishing last term dropped, we get \begin{equation}\label{eq:s13} S=\pm \int \frac{d^{3} r d^{3} P}{(2 \pi)^{3}}\left[\ln \left(\frac{1}{1 \mp \mathrm{f}_{F / \beta}}\right)-\ln \left(\frac{\mathrm{f}_{F / \beta}}{1 \mp \mathrm{f}_{F / \beta}}\right) \mathrm{f}_{F / \beta}\right] \end{equation} Rearranging Eq. (\ref{eq:s13}) as \begin{equation}\label{eq:s14} S=\pm \int \frac{d^{3} r d^{3} P}{(2 \pi)^{3}}\left[-\ln \left(1 \mp \mathrm{f}_{F / \beta}\right)-\left[\ln \mathrm{f}_{F / \beta}-\ln \left(1 \mp \mathrm{f}_{F / \beta}\right)\right] \mathrm{f}_{F / \beta}\right] \end{equation} Simplifying Eq. (\ref{eq:s14}) as \begin{equation}\label{eq:s15} \begin{array}{c} S=\pm \int \frac{d^{3} r d^{3} P}{(2 \pi)^{3}}\left[-\ln \left(1 \mp \mathrm{f}_{F / \beta}\right)-\mathrm{f}_{F / \beta} \ln \mathrm{f}_{F / \beta}\right. 
\\ \left.+\mathrm{f}_{F / \beta} \ln \left(1 \mp \mathrm{f}_{F / \beta}\right)\right] \end{array} \end{equation} Finally, collecting the logarithms, the entropy $S$ can be given compactly by \cite{Letessier:2002gp} \begin{equation}\label{eq:s16} S=\pm \int \frac{d^{3} r d^{3} P}{(2 \pi)^{3}}\left[-\left(1 \mp \mathrm{f}_{F / \beta}\right) \ln \left(1 \mp \mathrm{f}_{F / \beta}\right)-\mathrm{f}_{F / \beta} \ln \mathrm{f}_{F / \beta}\right]. \end{equation} Eq. (\ref{eq:s16}) represents the entropy equation shown in Eq. (\ref{eq:1}). \section{The transverse momentum distribution based on the HRG model} \label{sec:(append:hrg)} \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} The partition function $Z(T,V,\mu)$ is given by \begin{equation} Z(T,V,\mu)=\mbox{Tr}\left[\exp\left(\frac{{\mu}N-H}{\mathtt{T}}\right)\right], \label{eqq(1)} \end{equation} where $H$ stands for the system's Hamiltonian, $\mu$ is the chemical potential, and $N$ is the net number of all constituents. In the HRG approach, Eq. (\ref{eqq(1)}) can be written as a summation over all hadron resonances \begin{equation}\ln Z(T,V,\mu)=\sum_i{{\ln Z}_i(T,V,\mu)} =\pm\frac{V g_i}{(2{\pi})^3}\int^{\infty}_0{d^{3}p {\ln} {\left[1\pm \exp\left(\frac{\mu_{i}-E}{\mathtt{T}} \right) \right]}}, \label{eqq(2)} \end{equation} where $\pm$ refer to fermions and bosons, respectively, and $E_{i}=\left(p^{2}+m_{i}^{2}\right)^{1/2}$ is the energy of the $i$-th hadron. 
The particle multiplicity can be determined from the partition function as \begin{equation} N_{i}=T \frac{\partial \ln Z_{i}(T, V)}{\partial \mu_{i}}=\frac{V g_i} {(2{\pi})^3}\int^{\infty}_0{d^{3}p \left[\exp\left(\frac{E-\mu_{i}} {\mathtt{T}}\right)\pm1\right]^{-1}}, \label{eqq(3)} \end{equation} For a thermally radiating source, the invariant momentum spectrum is obtained as \cite{Letessier:2002gp} \begin{equation} E\frac{d^{3}N_{i}}{d^{3}p}=E \frac{V g_i} {(2{\pi})^3}\left[\exp\left(\frac{E-\mu_{i}} {\mathtt{T}}\right)\pm1\right]^{-1}, \label{eqq(4)} \end{equation} The $i$-th particle's energy $E_{i}$ can be written as a function of the rapidity $y$ and $m_{T}$ as \begin{equation} E=m_{T} \cosh\left(y\right), \label{eqq(5)} \end{equation} where $m_{T}$ represents the transverse mass, written in terms of the transverse momentum $p_{T}$ as \begin{equation} m_{T}=\sqrt{m^{2}+p_{T}^{2}}, \label{eqq(6)} \end{equation} Substituting Eq. (\ref{eqq(5)}) into Eq. (\ref{eqq(4)}), one gets the particle momentum distribution at mid-rapidity ($y= 0$) and $\mu \neq 0$ \begin{equation} \frac{1}{2 \pi p_{T}} \frac{d^2N}{dydp_{T}}= \frac{V g_i m_{T}} {(2{\pi})^3}\left[\exp\left(\frac{m_{T}-\mu_{i}} {\mathtt{T}}\right)\pm1\right]^{-1}. \label{eqq(7)} \end{equation} We fitted the experimental particle momentum spectra with Eq. (\ref{eqq(7)}), where the fitting parameters are $V$, $\mu$, and $\mathtt{T}$. 
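Eq. (\ref{eqq(7)}) is straightforward to evaluate numerically; a minimal sketch follows, with illustrative pion parameter values rather than the fitted ones from the tables.

```python
import numpy as np

def hrg_spectrum(pT, m, T, mu, V, g, fermion=False):
    """Mid-rapidity HRG spectrum, Eq. (B.7):
    (1/(2 pi pT)) d^2N/(dy dpT) = V g m_T / (2 pi)^3
                                  * 1 / (exp((m_T - mu)/T) +- 1),
    with +1 for fermions and -1 for bosons."""
    mT = np.sqrt(m * m + pT * pT)
    sign = 1.0 if fermion else -1.0
    return V * g * mT / (2.0 * np.pi) ** 3 / (np.exp((mT - mu) / T) + sign)

# Illustrative pion values (not the fitted parameters from the tables)
pT = np.linspace(0.1, 3.0, 30)   # GeV
spec = hrg_spectrum(pT, m=0.140, T=0.160, mu=0.0, V=30.0, g=1.0)
```

Fitting this expression to the measured spectra, with $V$, $\mu$, and $\mathtt{T}$ free, yields the HRG columns of the fit tables.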
\section{The transverse momentum distribution based on the Tsallis model} \label{sec:(append:tsalis)} \renewcommand{\theequation}{C.\arabic{equation}} \setcounter{equation}{0} The transverse momentum distribution of the produced hadrons at LHC energies is given by \cite{Cleymans:2016opp,Bhattacharyya:2017hdc} \begin{equation} \left.\frac{1}{p_{T}} \frac{d^{2} N}{d p_{T} d y}\right|_{y=0}=g V \frac{m_{T}}{(2 \pi)^{2}}\left[1+(q-1) \frac{m_{T}}{T}\right]^{-q /(q-1)}, \label{eqb:1} \end{equation} where $m_{T}$ and $p_{T}$ represent the transverse mass and transverse momentum, respectively, $y$ is the rapidity, $g$ is the degeneracy factor, and $V$ is the volume of the system. The obtained values of $q$ and $T$ characterise the system at kinetic freeze-out. In the limit $q \rightarrow 1$, Eq. (\ref{eqb:1}) reduces to the conventional Boltzmann distribution \cite{Cleymans:2016opp,Bhattacharyya:2017hdc} \begin{equation} \left.\lim _{q \rightarrow 1} \frac{1}{p_{T}} \frac{d^{2} N}{d p_{T} d y}\right|_{y=0}=g V \frac{m_{T}}{(2 \pi)^{2}} \exp \left(-\frac{m_{T}}{T}\right), \label{eqb:2} \end{equation} As a result, several statistical-mechanics ideas may be applied to the distribution given in Eq. (\ref{eqb:1}). Integrating Eq. (\ref{eqb:1}) over the transverse momentum, one gets \cite{Cleymans:2016opp,Bhattacharyya:2017hdc} \begin{equation} \begin{aligned} \left.\frac{d N}{d y}\right|_{y=0} &=\frac{g V}{(2 \pi)^{2}} \int_{0}^{\infty} p_{T} d p_{T} m_{T}\left[1+(q-1) \frac{m_{T}}{T}\right]^{-\frac{q}{q-1}} \\ &=\frac{g V T}{(2 \pi)^{2}}\left[\frac{(2-q) m_{0}^{2}+2 m_{0} T+2 T^{2}}{(2-q)(3-2 q)}\right]\left[1+(q-1) \frac{m_{0}}{T}\right]^{-\frac{1}{q-1}}, \label{eqb:3} \end{aligned} \end{equation} where $m_{0}$ stands for the mass of the particle considered. From Eq. 
(\ref{eqb:3}), the volume of the system can be written in terms of the multiplicity per rapidity $d N / d y$ and the Tsallis parameters $q$ and $T$ as \begin{equation} V=\left.\frac{d N}{d y}\right|_{y=0} \frac{(2 \pi)^{2}}{g T}\left[\frac{(2-q)(3-2 q)}{(2-q) m_{0}^{2}+2 m_{0} T+2 T^{2}}\right]\left[1+(q-1) \frac{m_{0}}{T}\right]^{\frac{1}{q-1}}, \label{eqb:4} \end{equation} Substituting Eq. (\ref{eqb:4}) into Eq. (\ref{eqb:1}), one obtains the transverse momentum spectra \begin{equation} \begin{aligned} \left.\frac{1}{p_{T}} \frac{d^{2} N}{d p_{T} d y}\right|_{y=0}=&\left.\frac{d N}{d y}\right|_{y=0} \frac{m_{T}}{T} \frac{(2-q)(3-2 q)}{(2-q) m_{0}^{2}+2 m_{0} T+2 T^{2}}\left[1+(q-1) \frac{m_{0}}{T}\right]^{\frac{1}{q-1}} \\ &\left[1+(q-1) \frac{m_{T}}{T}\right]^{-\frac{q}{q-1}}, \label{eqb:5} \end{aligned} \end{equation} where $d N/d y$, $\mathtt{T}$, and $\mathtt{q}$ are the fitting parameters. \section{The transverse momentum distribution based on the ANN model} \label{sec:(append:neural)} \renewcommand{\theequation}{D.\arabic{equation}} \setcounter{equation}{0} The transverse momentum distribution $\frac{1}{N_{evt} }\frac{d^2 N}{ d y d p_T}$ can be estimated from the ANN model as \begin{equation} \begin{aligned} \frac{1}{N_{evt} }\frac{d^2 N}{ d y d p_T} &= \text{purelin}[net.LW\left\{5,4\right\}f(net.LW\left\{4,3\right\}f(net.LW\left\{3,2\right\}f(net.LW\left\{2,1\right\} \\ & f(net.IW\left\{1,1\right\}R+net.b\left\{1\right\})+ net.b\left\{2\right\})+ net.b\left\{3\right\})\\ & + net.b\left\{4\right\})+ net.b\left\{5\right\}]. 
\label{equ:seventeenn} \end{aligned} \end{equation} where $R$ is the input vector ($\sqrt{S}$, $p_{T}$, and centrality),\newline $f$ is the hidden-layer transfer function (logsig or poslin),\newline and $IW$ and $LW$ are the connection weights:\newline $net.IW \left\{1, 1\right\}$ are the weights linking the input layer and the first hidden layer,\newline $net.LW \left\{2, 1\right\}$ are the weights linking the first and second hidden layers,\newline $net.LW \left\{3, 2\right\}$ are the weights linking the second and third hidden layers,\newline $net.LW \left\{4, 3\right\}$ are the weights linking the third and fourth hidden layers,\newline $net.LW \left\{5, 4\right\}$ are the weights linking the fourth hidden layer and the output layer,\newline and $b$ denotes the biases:\newline $net.b\left\{1\right\}$ is the bias of the first hidden layer,\newline $net.b\left\{2\right\}$ is the bias of the second hidden layer,\newline $net.b\left\{3\right\}$ is the bias of the third hidden layer,\newline $net.b\left\{4\right\}$ is the bias of the fourth hidden layer, and\newline $net.b\left\{5\right\}$ is the bias of the output layer. \end{appendices} \section{References} \bibliographystyle{aip}
\section{Introduction} \label{secIntro} Percolation \cite{LanCon94,CarLec01} is one of the easiest of the statistical models to simulate numerically. As such, it provides an excellent testing ground for uncovering how conformal invariance arises at critical points. Upon varying the probability of a lattice site or bond to be open, one finds such a critical point delineating configurations in which one can or can not cross between opposite edges of the lattice via open sites or bonds. At this critical point, percolation is believed to be described by a conformal field theory{} with vanishing central charge, and this belief has been well tested through the determination of the quantities one can calculate within the theory and comparison with numerical simulations. Naturally, the most important of these are the crossing probabilities, which give the probability that a random configuration will contain a cluster of open sites or bonds connecting opposite edges of the lattice. Informed of the (then unpublished) numerical results of Langlands \emph{et al} \cite{LanUni92}, Aizenman suggested the conformal invariance of these crossing probabilities. Upon being questioned on this, Cardy derived an exact closed-form expression for the horizontal crossing probability of a rectangular lattice in the thermodynamic limit (taken with the aspect ratio of the rectangle kept fixed), as a function of this aspect ratio. The precise result \cite{CarCri92} is not relevant for the purposes of this paper, only that it is non-trivial (not constant). However, we emphasise that the agreement with numerical simulation is impressive. A rigorous proof of Cardy's result has since been presented \cite{SchSca00,SmiCri01}. Cardy's derivation relied heavily on the machinery of conformal field theory{}, hence may be viewed as a strong confirmation of the conformal invariance of critical percolation. 
Paradoxically however, it has not been formulated within a completely coherent conformal-field-theoretic framework. Cardy interpreted the continuum limit of the percolation theory described above as a boundary conformal field theory{} (on a rectangle) with vanishing central charge, and the horizontal crossing probability as (roughly speaking) a four-point correlation function on the upper half-plane $\corrfn{\func{\phi}{z_1} \func{\phi}{z_2} \func{\phi}{z_3} \func{\phi}{z_4}}$, involving a boundary field $\phi$ of conformal dimension $h=0$. The role of the field $\func{\phi}{z_i}$ in the theory is to implement the change in the boundary conditions at $z_i$. These two properties of $\phi$ (being boundary changing and having $h=0$) suggest its identification with the field $\phi_{1,2}$ in the minimal model $\MinMod{2}{3}$. Then the null state $\brac{L_{-2} - \frac{3}{2} L_{-1}^2} \ket{\phi_{1,2}}$ determines the differential equation for the crossing probability in the usual manner \cite{BelInf84,DiFCon97}, and appropriate boundary conditions then select the required solution. However, since $\MinMod{2}{3}$ is trivial, it is clear that it does not provide the proper framework in which to describe the above non-trivial four-point function. Indeed, the field $\phi_{1,2}$ generates another null vector, $L_{-1} \ket{\phi_{1,2}}$, in the corresponding Verma module, which induces the differential equations $\partial_{z_i} \corrfn{\func{\phi_{1,2}}{z_1} \func{\phi_{1,2}}{z_2} \func{\phi_{1,2}}{z_3} \func{\phi_{1,2}}{z_4}} = 0$ (for all $i = 1, 2, 3, 4$). It is not difficult to pinpoint the essential property that makes $\MinMod{2}{3}$ trivial and thereby implies the undesirable differential equations above. To this end, let us examine a simple proof of this triviality. 
As in any theory, the vacuum $\ket{0}$ must exist, and when $c=0$, the descendant $L_{-2} \ket{0}$ is null\footnote{We will freely use the terms \emph{null vector} (a state of zero-norm), \emph{singular vector} (a descendant state annihilated by $L_1$ and $L_2$) and \emph{principal singular vector} (a singular vector which is not itself a descendant of a singular vector). Such states may or may not identically vanish, and we shall refer to non-vanishing states as \emph{physical}.}. The corresponding null field is then the energy-momentum tensor $\func{T}{z}$ whose modes $L_n$ must therefore annihilate all physical states. This implies that the only physical state is the vacuum itself. Let us reformulate this result in a more mathematically precise manner: When $c=0$, the only physical state which can coexist with the \emph{irreducible} vacuum module is the vacuum itself\footnote{We recall that a module is \emph{reducible} if it contains a non-trivial submodule and \emph{decomposable} if it can be written as the direct sum of two non-trivial submodules. \emph{Irreducible} and \emph{indecomposable} describe the opposite situations, respectively.}. This irreducibility condition forces the modes of $\func{T}{z}$ to act as the zero operator on the physical state space, and it is therefore this very condition that Cardy's result forces us to relax. We will explore the consequences of breaking the hypothesis of an irreducible vacuum module in the following sections, and show that this simple act leads to a consistent conformal field theory{} in which the non-triviality of the $\phi_{1,2}$ four-point function is fact. This theory is constructed from the minimal set of conditions ensuring this non-triviality, and will turn out to be a \emph{logarithmic} conformal field theory{}. \section{Heuristic Considerations} \label{secConstructions1} It proves convenient to fix a few notations from the outset. 
We present a part of the extended Kac table for $c=0$ in \tabref{tabExtKacc=0}, in which the dimensions $h_{r,s}$ of the (possibly primary) fields $\phi_{r,s}$ are displayed for $r=1,2,3$ and $s=1,\ldots,10$. This extends the Kac table of the minimal model $\MinMod{2}{3}$. We will denote the Verma module generated from the state $\ket{\phi_{r,s}}$ by $\VerMod{r,s}$ and its irreducible quotient by $\IrrMod{r,s}$. Note that the $\VerMod{r,s}$ with $r$ even or $s$ a multiple of $3$ have their maximal submodules generated by a single singular vector at grade $rs$, whereas the maximal submodules of the other $\VerMod{r,s}$ associated to the extended Kac table are generated by two singular vectors at grades $rs$ and $\brac{r-2} \brac{s-3}$, respectively \cite{FeiVer84}. We will also be interested in the indecomposable (but reducible) modules given by quotienting each $\VerMod{r,s}$ by the Verma module generated by the singular vector at grade $rs$. These modules will be denoted by $\IndMod{r,s}$. \begin{table} \begin{center} \setlength{\extrarowheight}{4pt} \begin{tabular}{|C|C|C|C|C|C|C|C|C|C|C} \hline 0 & 0 & \tfrac{1}{3} & 1 & 2 & \tfrac{10}{3} & 5 & 7 & \tfrac{28}{3} & 12 & \cdots \\[1mm] \hline \tfrac{5}{8} & \tfrac{1}{8} & \tfrac{-1}{24} & \tfrac{1}{8} & \tfrac{5}{8} & \tfrac{35}{24} & \tfrac{21}{8} & \tfrac{33}{8} & \tfrac{143}{24} & \tfrac{65}{8} & \cdots \\[1mm] \hline 2 & 1 & \tfrac{1}{3} & 0 & 0 & \tfrac{1}{3} & 1 & 2 & \tfrac{10}{3} & 5 & \cdots \\[1mm] \hline \end{tabular} \vspace{3mm} \caption{The first three rows of the extended Kac table for $c=0$, displaying the conformal dimensions $h_{r,s} = \bigl( \brac{3r-2s}^2 - 1 \bigr) / 24$ of the fields $\phi_{r,s}$. $r$ increases downwards, and $s$ increases to the right, and the top-left-hand corner corresponds to the identity field $\phi_{1,1}$, which with $\phi_{1,2}$ exhausts the usual Kac table for $\MinMod{2}{3}$. 
In fact, all dimensions appearing in the extended Kac table may be found in the first two semi-infinite rows, as $h_{r,s} = h_{r+2,s+3} = h_{r,3r-s}$.} \label{tabExtKacc=0} \end{center} \end{table} We begin with the observation that translation invariance of the vacuum requires that $L_{-1} \ket{0} = 0$, and this of course is reinforced by the state-field correspondence: $\ket{0} \leftrightarrow I$ so that $L_{-1} \ket{0} \leftrightarrow \partial I = 0$. Since we have already argued that the vacuum module cannot be irreducible, the only remaining possibility is that the vacuum module is the indecomposable (but not irreducible) $\IndMod{1,1} = \VerMod{1,1} / \VerMod{1,4}$. In other words, we require that the singular vector $L_{-2} \ket{0}$ be \emph{non-vanishing}, and in this way recover a non-trivial (though null) energy-momentum tensor $\func{T}{z}$. Furthermore, Cardy's result relies upon the identification of his $h=0$ boundary field with $\phi_{1,2}$. Indeed, we want to be able to derive the differential equation induced by the descendant singular vector at grade $2$, but not be able to derive the differential equations induced by the singular vector at grade $1$. We propose to achieve this by forcing the singular vector at grade $1$ to be non-vanishing, and its grade $2$ counterpart to vanish identically. Accommodating Cardy's result then requires also taking the physical module corresponding to the primary field $\phi_{1,2}$ to be indecomposable (but not irreducible): $\IndMod{1,2} = \VerMod{1,2} / \VerMod{1,5}$. We therefore see that in order to put Cardy's derivation in a consistent conformal-field-theoretic framework, we must start with two reducible but indecomposable modules of highest weight $h=0$. These are illustrated schematically in \figref{figModulesh=0}. The corresponding primary fields are distinguished by their different descendant structures, and in this way the Kac symmetry of $\MinMod{2}{3}$ is broken: $\phi_{1,1} \neq \phi_{1,2}$. 
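The entries of \tabref{tabExtKacc=0} are easily checked by machine. The following sketch (ours, purely illustrative and not part of the original analysis) evaluates the Kac formula $h_{r,s} = \bigl( (3r-2s)^2 - 1 \bigr) / 24$ and confirms the symmetries quoted in the caption:

```python
from fractions import Fraction

def h(r, s):
    """Kac formula at c = 0: h_{r,s} = ((3r - 2s)^2 - 1) / 24."""
    return Fraction((3 * r - 2 * s) ** 2 - 1, 24)

# first row of the extended Kac table: 0, 0, 1/3, 1, 2, 10/3, 5, 7, 28/3, 12
print([str(h(1, s)) for s in range(1, 11)])
# ['0', '0', '1/3', '1', '2', '10/3', '5', '7', '28/3', '12']

# the symmetries quoted in the caption of the table
assert all(h(r, s) == h(r + 2, s + 3) == h(r, 3 * r - s)
           for r in range(1, 4) for s in range(1, 11))
```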
We emphasise that what we have described amounts to a \emph{minimal fit} in that all of our reasoning has been forced by one goal---validating Cardy's derivation, itself validated conclusively by numerical simulations. It remains to ``flesh out'' this theory and check its consistency, thus verifying that the formalism we construct achieves our goal. \psfrag{0}[][]{$0$} \psfrag{1}[][]{$1$} \psfrag{2}[][]{$2$} \psfrag{5}[][]{$5$} \psfrag{7}[][]{$7$} \psfrag{M11}[][]{$\IndMod{1,1}$} \psfrag{M12}[][]{$\IndMod{1,2}$} \begin{figure} \begin{center} \includegraphics[width=7cm]{VirMod} \caption{A schematic picture of the physical modules of conformal dimension $0$ in our $c=0$ theory. The black circles represent the highest weight states, grey denotes a singular vector that does \emph{not} identically vanish, and white denotes the identically vanishing singular vectors. These states are labelled by their conformal dimension.} \label{figModulesh=0} \end{center} \end{figure} In the remainder of this section, we will explore the theory we are constructing in a somewhat heuristic manner, so as to quickly deduce certain necessary features. In the following section, we will revisit our constructions using more precise analysis techniques, and thereby prove that these necessary features are indeed present. It is these precise methods which will uncover the logarithmic structure of the percolation conformal field theory{}. For now, we explore the field content of the theory generated by the modules $\IndMod{1,1}$ and $\IndMod{1,2}$. The vanishing singular vector $\brac{L_{-2} - \frac{3}{2} L_{-1}^2} \ket{\phi_{1,2}} = 0$ implies, via the usual consideration of three-point functions \cite{BelInf84,DiFCon97}, the fusion rules \begin{equation} \label{eqnFR12byrs} \phi_{1,2} \times \phi_{r,s} = \phi_{r,s-1} + \phi_{r,s+1}, \end{equation} where $\phi_{r,0}$ is formally set to zero. 
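Incidentally, the vanishing singular vector underlying (\ref{eqnFR12byrs}) can be checked directly from the commutation relations $[L_m, L_n] = (m-n) L_{m+n} + \frac{c}{12} m (m^2-1) \delta_{m+n,0}$. The sketch below (our own; the dictionary encoding of Verma-module states is hypothetical and adequate only at the low grades used here) confirms that $\brac{L_{-2} - \frac{3}{2} L_{-1}^2} \ket{\phi_{1,2}}$ is annihilated by $L_1$ and $L_2$ precisely when $h = c = 0$:

```python
from fractions import Fraction
from collections import defaultdict

# States of the Virasoro Verma module V_h at central charge c, encoded as
# {monomial: coefficient} with monomial = (k1, ..., kr) standing for the
# PBW-ordered product L_{-k1}...L_{-kr}|h>, k1 >= ... >= kr >= 1.

def act(m, mono, h, c):
    """Apply the mode L_m to a single monomial, returning a state."""
    if not mono:                       # acting directly on |h>
        if m == 0:
            return {(): Fraction(h)}
        return {} if m > 0 else {(-m,): Fraction(1)}
    k, rest = mono[0], mono[1:]
    out = defaultdict(Fraction)
    # [L_m, L_{-k}] = (m + k) L_{m-k} + (c/12) m (m^2 - 1) delta_{m,k}
    for mo, co in act(m - k, rest, h, c).items():
        out[mo] += (m + k) * co
    if m == k:
        out[rest] += Fraction(c, 12) * m * (m * m - 1)
    # ... plus L_{-k} (L_m rest)
    for mo, co in act(m, rest, h, c).items():
        assert not mo or k >= mo[0]    # stays PBW-ordered at these grades
        out[(k,) + mo] += co
    return out

def apply_mode(m, state, h, c):
    out = defaultdict(Fraction)
    for mono, co in state.items():
        for mo, c2 in act(m, mono, h, c).items():
            out[mo] += co * c2
    return {mo: co for mo, co in out.items() if co}

# v = (L_{-2} - 3/2 L_{-1}^2)|phi_{1,2}>  with  h = h_{1,2} = 0  and  c = 0
v = {(2,): Fraction(1), (1, 1): Fraction(-3, 2)}
print(apply_mode(1, v, 0, 0), apply_mode(2, v, 0, 0))   # {} {} : singular
```

At generic $h$ and $c$ the same computation gives $L_1 v = -6h \, L_{-1} \ket{h}$ and $L_2 v = (c/2 - 5h) \ket{h}$, which vanish simultaneously only at $(h,c) = (0,0)$ on the first row of the Kac table.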
When the module generated by $\phi_{1,2}$ is irreducible, the other vanishing singular vector further constrains the fields appearing in the above fusion rule. We will proceed however, by \emph{assuming} that in the indecomposable case, this other singular vector $L_{-1} \ket{\phi_{1,2}}$ (which is non-vanishing) does \emph{not} lead to constraints on the above fusion rules. This assumption will keep our conclusions in this section on a heuristic level, but it will be validated in the more precise treatment of the following section. So, accepting the fusion rules (\ref{eqnFR12byrs}) for the moment, we obviously have $\phi_{1,2} \times \phi_{1,2} = \phi_{1,1} + \phi_{1,3}$. The field $\phi_{1,3}$ must appear on the right hand side if the $\phi_{1,2}$ four-point function is to be non-trivial. This follows from \begin{equation} \label{eqnCF12.12.12.d12} \corrfn{\func{\phi_{1,2}}{z_1} \func{\phi_{1,2}}{z_2} \func{\phi_{1,2}}{z_3} \func{\brac{L_{-1} \phi_{1,2}}}{z_4}} = \partial_{z_4} \corrfn{\func{\phi_{1,2}}{z_1} \func{\phi_{1,2}}{z_2} \func{\phi_{1,2}}{z_3} \func{\phi_{1,2}}{z_4}} \neq 0. \end{equation} If $\phi_{1,3}$ did not appear on the right hand side of this fusion rule, then inserting the corresponding operator product expansion{} as $z_1 \rightarrow z_2$ and then again as $z_2 \rightarrow z_3$ would reduce the correlation functions in \eqnref{eqnCF12.12.12.d12} to a linear combination of two-point functions, each of which involves a descendant of $\func{\phi_{1,2}}{z_3}$ and $\func{\brac{L_{-1} \phi_{1,2}}}{z_4} = \func{\partial \phi_{1,2}}{z_4}$. But these two-point functions all vanish, as $h_{1,2} = 0$ implies that $\corrfn{\func{\phi_{1,2}}{z_3} \func{\phi_{1,2}}{z_4}} = 1$. We conclude then that the presence of $\phi_{1,3}$ in the theory is necessary. Consider now the fusion rule $\phi_{1,2} \times \phi_{1,3} = \phi_{1,2} + \phi_{1,4}$. 
Inserting the operator product expansions{} corresponding to (\ref{eqnFR12byrs}) with $r=1,s=2$ (as $z_1 \rightarrow z_2$) and then again with $r=1,s=3$ (as $z_2 \rightarrow z_3$) into \eqnref{eqnCF12.12.12.d12}, we obtain a linear combination of two-point functions involving the null field $\func{\partial \phi_{1,2}}{z_4}$ and descendants of $\func{\phi_{1,2}}{z_3}$ or $\func{\phi_{1,4}}{z_3}$. As we know, those involving $\phi_{1,2}$-descendants vanish, so global conformal invariance and $h_{1,4} = 1$ (see \tabref{tabExtKacc=0}) imply that the non-vanishing contributions are obtained from the action of differential operators (with respect to $z_3$) acting on \begin{equation} \label{eqnCF14.d12} \corrfn{\func{\phi_{1,4}}{z_3} \func{\partial \phi_{1,2}}{z_4}} = \frac{C}{\brac{z_3 - z_4}^2} \qquad \text{(for some constant $C$)}. \end{equation} We remark that $\func{\partial \phi_{1,2}}{z_4}$ is a primary (though null) field of dimension $1$, and non-triviality requires $C \neq 0$. This in turn obviously requires that $\phi_{1,4}$ belong to the theory. Summarising, the difference between the trivial theory constructed from irreducible modules and the theory we are constructing here from indecomposable (but reducible) modules is (at this level of rigour) that in the latter case, the presence of $\phi_{1,3}$ opens up a new channel in the operator product expansions{}, which allows the possibility of a non-trivial four-point function $\corrfn{\phi_{1,2} \phi_{1,2} \phi_{1,2} \phi_{1,2}}$. We point out that $\corrfn{\func{\partial \phi_{1,2}}{z_3} \func{\partial \phi_{1,2}}{z_4}} = \partial_{z_3} \partial_{z_4} \corrfn{\func{\phi_{1,2}}{z_3} \func{\phi_{1,2}}{z_4}} = 0$, hence that $\phi_{1,4} \neq \partial \phi_{1,2}$, by \eqnref{eqnCF14.d12}. 
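As a consistency check (ours, with generic symbols), one may verify symbolically that the form \eqnref{eqnCF14.d12} is compatible with the global conformal Ward identities for two fields of dimension $1$:

```python
import sympy as sp

z, w, C = sp.symbols('z w C')
h1 = h2 = 1                  # dimensions of phi_{1,4} and of dphi_{1,2}
F = C / (z - w) ** 2         # the proposed two-point function

# Ward identities for translations, dilations and special conformal maps
assert sp.simplify(sp.diff(F, z) + sp.diff(F, w)) == 0
assert sp.simplify(z * sp.diff(F, z) + w * sp.diff(F, w) + (h1 + h2) * F) == 0
assert sp.simplify(z**2 * sp.diff(F, z) + w**2 * sp.diff(F, w)
                   + 2 * h1 * z * F + 2 * h2 * w * F) == 0
print("Ward identities satisfied")
```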
Note that \eqnref{eqnCF14.d12} does imply that $\ket{\phi_{1,4}}$ and $L_{-1} \ket{\phi_{1,2}}$ have a non-zero inner-product, indicating that these states both belong to some common indecomposable module (this refines an observation of \cite{SimPer07}). We will see shortly that this is the case, and it is the logarithmic structure of this module which makes it possible. We could continue this process, generating $\phi_{1,5}$ and beyond, but as we have already mentioned, this all relies on the assumption that the fusion rules (\ref{eqnFR12byrs}) are correct. Justifying this assumption is somewhat delicate because we are working with modules more general than the familiar irreducible ones, so the usual methods of inferring fusion rules (examining the action of null vectors on three-point functions in particular) might not be valid. The key point here is that we want a method in which we can distinguish between vanishing and non-vanishing null-vectors, and we expect that this will not be easy if inner-products and correlation functions are used. We therefore turn to a direct \emph{algebraic} computation of these fusion rules which makes no reference to correlation functions or inner-products. \section{Fusion Rules and the Rise of Logarithms} \label{secConstructions2} To investigate the fusion ring generated by the indecomposable modules $\IndMod{1,1}$ and $\IndMod{1,2}$ from which the percolation conformal field theory{} is constructed, we turn to the algorithm of Nahm. This was originally introduced in \cite{NahQua94} for so-called quasi-rational modules, and was extended (and made more transparent) by Gaberdiel and Kausch \cite{GabInd96} using earlier results of Gaberdiel \cite{GabFus94}. We shall not discuss the details of this algorithm here. 
We only mention that it provides information on the decomposition of the fusion of two modules, by utilising a natural representation of the chiral symmetry algebra on the set of operator product expansions{} (for primary \emph{and} descendant fields) corresponding to the states in these modules. Importantly, the vanishing singular vectors of the modules to be fused are inputs to this algorithm, and at no point do we use the inner-products on the modules. Of course, there are infinitely many such operator product expansions{}, as there are an infinite number of descendant states in each module, graded by their conformal dimensions. It is possible however to consistently truncate this set of operator product expansions{} to a finite number, by imposing an upper-bound on the grade, relative to the highest weight state{} (mathematically, one considers the elements of an appropriate filtration of the module). Of course, this means that one only obtains a finite amount of information concerning the structure of the decomposition of the fused modules. Fortuitously, one can deduce the entire decomposition structure from such a (sufficiently large) finite amount of information, essentially by ``looking deeply enough'' to see the principal singular vectors (or not, as the case may be). It is this feature that makes the Nahm-Gaberdiel-Kausch algorithm (whose practical implementation is nicely detailed in \cite{GabInd96}) so powerful (and general). We illustrate the application of this algorithm to the fusion of the indecomposable module $\IndMod{1,2}$ with itself (as expected, the indecomposable vacuum module $\IndMod{1,1}$ still acts as the identity of the fusion ring). 
A theorem of Nahm \cite{NahQua94} guarantees that the zero-grade states in the decomposition of the fused modules can be associated with the states in a two-dimensional Cartesian product space\footnote{Generally, they would be associated with a subspace of these states and one would have to search for the \emph{spurious subspace} \cite{NahQua94}. However, there are no spurious states in this case.}. Computing the natural representative for $L_0$ (see \cite{GabFus94} for explicit formulae) on this space gives a matrix form for this representative: \begin{equation} L_0 = \begin{pmatrix} 0 & 0 \\ 1 & \tfrac{1}{3} \end{pmatrix} \quad \text{with respect to the ordered basis} \quad \begin{Bmatrix} \ket{\phi_{1,2}} \times \ket{\phi_{1,2}} \\ L_{-1} \ket{\phi_{1,2}} \times \ket{\phi_{1,2}} \end{Bmatrix} . \end{equation} Thus $L_0$ is diagonalisable with eigenvalues $0$ and $1/3$ on the zero-grade states of the fusion of $\IndMod{1,2}$ with itself. This is perfectly consistent with the fusion rule (\ref{eqnFR12byrs}) with $r=1,s=2$. To completely identify the character of the modules appearing in the decomposition of this fusion process, we must repeat this computation whilst considering all states up to grade $3$. This time, we compute\footnote{There is in addition a one-dimensional spurious subspace to be determined in this case. We used the method suggested in \cite{GabInd96} to find it.} a $9 \times 9$ representing matrix for $L_0$, which turns out to be diagonalisable with eigenvalues $0$, $2$, $3$, $\tfrac{1}{3}$, $\tfrac{4}{3}$, $\tfrac{7}{3}$, $\tfrac{7}{3}$, $\tfrac{10}{3}$, and $\tfrac{10}{3}$. This result is only consistent with the fusion decomposition \begin{equation} \IndMod{1,2} \times_{\! f} \IndMod{1,2} = \IndMod{1,1} \oplus \IndMod{1,3}, \end{equation} where we denote the fusion operation by $\times_{\! f}$, to distinguish it from the Cartesian product ($\oplus$ denotes the direct sum of modules, as always). 
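This eigenvalue list is exactly what the right hand side predicts: $\IndMod{1,1}$ and $\IndMod{1,3}$ have graded dimensions $\func{p}{n} - \func{p}{n-1}$ and $\func{p}{n} - \func{p}{n-3}$ respectively, where $\func{p}{n}$ counts the partitions of $n$. A small counting sketch (ours, not part of the fusion computation):

```python
from fractions import Fraction

def partitions(nmax):
    """p[n] = number of partitions of n, by the standard coin-style DP."""
    p = [1] + [0] * nmax
    for k in range(1, nmax + 1):
        for n in range(k, nmax + 1):
            p[n] += p[n - k]
    return p

p = partitions(3)
h = {1: Fraction(0), 3: Fraction(1, 3)}   # h_{1,1} and h_{1,3}

# L_0 eigenvalues of IndMod(1,1) (+) IndMod(1,3) up to grade 3:
# each module contributes p(n) - p(n - s) states of eigenvalue h_{1,s} + n
eigs = sorted(h[s] + n
              for s in (1, 3)
              for n in range(4)
              for _ in range(p[n] - (p[n - s] if n >= s else 0)))
print([str(e) for e in eigs])
# ['0', '1/3', '4/3', '2', '7/3', '7/3', '3', '10/3', '10/3']
```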
This is the precise version of the fusion rule (\ref{eqnFR12byrs}) (with $r=1,s=2$) which we proposed heuristically in \secref{secConstructions1}. We mention that $\IndMod{1,3} = \IrrMod{1,3}$ is in fact irreducible. A more interesting computation is to determine the fusion of $\IndMod{1,2}$ and $\IndMod{1,3}$ to grade $1$. By Gaberdiel and Kausch's generalisation of Nahm's theorem to all grades \cite{GabInd96}, we compute within a four dimensional space, finding\footnote{Again, there are no spurious states in this case.} \begin{equation} L_0 = \begin{pmatrix} \tfrac{1}{3} & 0 & \tfrac{2}{9} & \tfrac{8}{27} \\ 0 & \tfrac{4}{3} & \tfrac{2}{3} & \tfrac{4}{9} \\ 1 & 0 & \tfrac{4}{3} & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix} \quad \text{with respect to the ordered basis} \quad \begin{Bmatrix} \ket{\phi_{1,2}} \times \ket{\phi_{1,3}} \\ L_{-1} \ket{\phi_{1,2}} \times \ket{\phi_{1,3}} \\ \ket{\phi_{1,2}} \times L_{-1} \ket{\phi_{1,3}} \\ L_{-1} \ket{\phi_{1,2}} \times L_{-1} \ket{\phi_{1,3}} \end{Bmatrix}, \end{equation} which turns out \emph{not} to be diagonalisable. Indeed, it has simple eigenvalues $0$ and $2$ and a Jordan cell of rank $2$ corresponding to the eigenvalue $1$. Computing the action of $L_{-1}$ in the same way, we find that the eigenstate of eigenvalue $0$ is mapped to the true eigenstate of eigenvalue $1$ by $L_{-1}$ whereas its Jordan partner is mapped to the eigenstate of eigenvalue $2$. This suggests the identification of the eigenstates of eigenvalues $0$, $1$ and $2$ with $\ket{\phi_{1,2}}$, $L_{-1} \ket{\phi_{1,2}}$ and $L_{-1} \ket{\phi_{1,4}}$, respectively, where $\ket{\phi_{1,4}}$ denotes the Jordan partner to $L_{-1} \ket{\phi_{1,2}}$. We normalise this partner state so that \begin{equation} \label{eqnNormJord12} L_0 \ket{\phi_{1,4}} = \ket{\phi_{1,4}} + L_{-1} \ket{\phi_{1,2}}, \end{equation} fixing it up to multiples of $L_{-1} \ket{\phi_{1,2}}$. 
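The Jordan structure claimed here is easily reproduced with a computer algebra system; the following sketch (ours) checks it for the matrix quoted above:

```python
import sympy as sp

# L_0 on the grade <= 1 states of IndMod(1,2) x_f IndMod(1,3), as quoted above
L0 = sp.Matrix([
    [sp.Rational(1, 3), 0, sp.Rational(2, 9), sp.Rational(8, 27)],
    [0, sp.Rational(4, 3), sp.Rational(2, 3), sp.Rational(4, 9)],
    [1, 0, sp.Rational(4, 3), 0],
    [0, 1, 0, 1],
])

print(L0.eigenvals())           # eigenvalues 0 and 2 simple, 1 with multiplicity two
print(L0.is_diagonalizable())   # False: a rank-2 Jordan cell at eigenvalue 1
```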
Let $\LogMod{1,4}$ denote the module obtained from fusing $\IndMod{1,2}$ and $\IndMod{1,3}$: \begin{equation} \IndMod{1,2} \times_{\! f} \IndMod{1,3} = \LogMod{1,4}. \end{equation} A full picture of the structure of this module requires computing to grade $6$, so as to ``see'' all principal singular vectors. This is computationally intensive, but straightforward to program (we used \textsc{Maple}). The result is that $\LogMod{1,4}$ is the \emph{vector space} direct sum of the modules $\IndMod{1,2}$ and $\IndMod{1,4}$, but is \emph{indecomposable} itself as a Virasoro module\footnote{The notation $\LogMod{}$ emphasises the indecomposable aspect of these modules and is used instead of the more familiar $\mathcal{R}$ which stresses their reducibility.}. This is an example of a \emph{staggered module}, in the terminology of Rohsiepe \cite{RohRed96}: $\LogMod{1,4}$ has a submodule isomorphic to the highest weight module{} $\IndMod{1,2}$ and its quotient by this submodule is isomorphic to the highest weight module{} $\IndMod{1,4}$. Mathematically, this is summarised by the exact sequence \begin{equation} 0 \longrightarrow \IndMod{1,2} \longrightarrow \LogMod{1,4} \longrightarrow \IndMod{1,4} \longrightarrow 0. \end{equation} We illustrate $\LogMod{1,4}$ schematically in \figref{figStagMods}. Note, however, that $\LogMod{1,4}$ is not itself a highest weight module{}. 
\psfrag{vac}[][]{$\scriptstyle \left| 0 \right\rangle$} \psfrag{L1vac}[][]{$\scriptstyle L_{-1} \left| 0 \right\rangle$} \psfrag{L2vac}[][]{$\scriptstyle L_{-2} \left| 0 \right\rangle$} \psfrag{L0}[][]{$\scriptstyle L_0$} \psfrag{L1}[][]{$\scriptstyle L_1$} \psfrag{L2}[][]{$\scriptstyle L_2$} \psfrag{L3}[][]{$\scriptstyle A_3$} \psfrag{L6}[][]{$\scriptstyle L_6 + \ldots$} \psfrag{phi12}[][]{$\scriptstyle \left| \phi_{1,2} \right\rangle$} \psfrag{phi14}[][]{$\scriptstyle \left| \phi_{1,4} \right\rangle$} \psfrag{phi15}[][]{$\scriptstyle \left| \phi_{1,5} \right\rangle$} \psfrag{L1phi12}[][]{$\scriptstyle L_{-1} \left| \phi_{1,2} \right\rangle$} \psfrag{L2phi12}[][]{$\scriptstyle \left| \chi \right\rangle$} \psfrag{R11}[][]{$\LogMod{1,5}$} \psfrag{R12}[][]{$\LogMod{1,4}$} \begin{figure} \begin{center} \includegraphics[width=14cm]{StagMod} \caption{A schematic picture of the staggered modules $\LogMod{1,4}$ and $\LogMod{1,5}$ showing the singular vector structure of the two highest weight modules{} from which they are constructed. White circles correspond to identically vanishing singular vectors, whereas grey indicate that the singular vector is non-vanishing. Here, $\ket{\chi}$ is the vanishing singular vector $\brac{L_{-2} - \frac{3}{2} L_{-1}^2} \ket{\phi_{1,2}}$, and $A_3$ is defined after \eqnref{eqnA3.17}. The curved arrows depict (roughly) how the Virasoro mode action ``glues'' these modules together to form the staggered module (the precise actions are given in the text).} \label{figStagMods} \end{center} \end{figure} In fact, $\LogMod{1,4}$ is generated by the state $\ket{\phi_{1,4}}$, as computing $L_1$ with the Nahm-Gaberdiel-Kausch algorithm gives \begin{equation} \label{eqnL1.14} L_1 \ket{\phi_{1,4}} = \frac{-1}{2} \ket{\phi_{1,2}}. \end{equation} This non-trivial relation does not follow from \eqnref{eqnNormJord12} and the Virasoro commutation relations, and in fact serves to fix the structure of the staggered module $\LogMod{1,4}$ completely. 
Note that upon quotienting by $\IndMod{1,2}$, we recover\footnote{In particular, note that the vanishing grade $4$ singular vector $\ket{\zeta}$ descended from $\ket{\phi_{1,4}}$ is not the $\IndMod{1,4}$ singular vector \begin{equation*} \brac{L_{-4} - L_{-3} L_{-1} - L_{-2}^2 + \frac{5}{3} L_{-2} L_{-1}^2 - \frac{1}{4} L_{-1}^4} \ket{\phi_{1,4}}, \end{equation*} as one might have expected. Solving $L_1 \ket{\zeta} = L_2 \ket{\zeta} = 0$ in $\LogMod{1,4}$ (subject to the vanishing of the $\IndMod{1,2} \subset \LogMod{1,4}$ singular vector $\brac{L_{-2} - \frac{3}{2} L_{-1}^2} \ket{\phi_{1,2}}$), yields the true (identically vanishing) $\LogMod{1,4}$ singular vector \begin{equation*} \ket{\zeta} = \brac{L_{-4} - L_{-3} L_{-1} - L_{-2}^2 + \frac{5}{3} L_{-2} L_{-1}^2 - \frac{1}{4} L_{-1}^4} \ket{\phi_{1,4}} + \brac{\frac{1}{2} L_{-5} + \frac{4}{3} L_{-4} L_{-1} - \frac{8}{9} L_{-3} L_{-2}} \ket{\phi_{1,2}} = 0. \end{equation*} Of course this reduces to the $\IndMod{1,4}$ singular vector upon quotienting by $\IndMod{1,2}$.} the highest weight condition for $\ket{\phi_{1,4}} \in \IndMod{1,4}$. We are now in a position to verify \eqnref{eqnCF14.d12}, which we showed in \secref{secConstructions1} was necessary for the non-triviality of the $\phi_{1,2}$ four-point function. It is now clear that the constant $C$ appearing there is just \begin{equation} \label{eqnLogCoup14} C = \bracket{\phi_{1,4}}{L_{-1}}{\phi_{1,2}} = \frac{-1}{2} \braket{\phi_{1,2}}{\phi_{1,2}} = \frac{-1}{2}, \end{equation} using \eqnref{eqnL1.14}. We remark that even though $L_{-1} \ket{\phi_{1,2}}$ is null, it can still have a non-vanishing inner-product with another state (in particular its Jordan partner state $\ket{\phi_{1,4}}$). 
This would not be possible in a highest weight module{}, so we see in hindsight that Cardy's derivation can only be valid in a conformal field theory{} based on modules more general than highest weight modules{} (such as the staggered module $\LogMod{1,4}$ we have discovered here). In other words, this exposes clearly the necessity of having a non-diagonalisable $L_0$ in the percolation conformal field theory{}. As is well known, non-diagonalisability of $L_0$ is often taken as a defining property of logarithmic conformal field theories{} \cite{GurLog93}. This logarithmic structure is easy to elucidate in the present case. First, \eqnDref{eqnNormJord12}{eqnL1.14} allow us to derive the operator product expansion{} \begin{equation} \func{T}{z} \func{\phi_{1,4}}{w} = \frac{-1}{2} \frac{\func{\phi_{1,2}}{w}}{\brac{z-w}^3} + \frac{\func{\phi_{1,4}}{w} + \func{\partial \phi_{1,2}}{w}}{\brac{z-w}^2} + \frac{\func{\partial \phi_{1,4}}{w}}{z-w} + \ldots \end{equation} This and global conformal invariance of the vacuum now imply that $\corrfn{\func{\phi_{1,4}}{z} \func{\phi_{1,4}}{w}}$ satisfies the differential equation \begin{equation} \label{eqnCF14.14} \brac{z \partial_z + w \partial_w + 2} \corrfn{\func{\phi_{1,4}}{z} \func{\phi_{1,4}}{w}} = \frac{1}{\brac{z-w}^2} \qquad \Rightarrow \qquad \corrfn{\func{\phi_{1,4}}{z} \func{\phi_{1,4}}{w}} = \frac{A + \log \brac{z-w}}{\brac{z-w}^2} \end{equation} where $A$ is some constant\footnote{Note that a direct consequence of the logarithm appearing in \eqnref{eqnCF14.14} is that the (standard) inner-product $\braket{\phi_{1,4}}{\phi_{1,4}}$ diverges. Indeed, considering $L_{-1} \ket{\phi_{1,2}}$ and its Jordan partner $\ket{\phi_{1,4}}$, the norm of the former vanishes and that of the latter diverges, but their inner-product is finite and non-zero (\eqnref{eqnLogCoup14}). This reflects the simple fact that there is no single invariant inner-product defined on these non-highest weight modules{}. 
Note that if the norm of $\ket{\phi_{1,4}}$ were not divergent, then letting $L_0$ act on the bra and ket respectively in $\bracket{\phi_{1,4}}{L_0}{\phi_{1,4}}$ would lead to \begin{equation*} \braket{\phi_{1,4}}{\phi_{1,4}} = \braket{\phi_{1,4}}{\phi_{1,4}} + \bracket{\phi_{1,4}}{L_{-1}}{\phi_{1,2}} = \braket{\phi_{1,4}}{\phi_{1,4}} - \frac{1}{2}, \end{equation*} a contradiction. Here, it is important to note that $L_0^{\dag} \ket{\phi_{1,4}} = \ket{\phi_{1,4}}$ (and $L_0^{\dag} L_{-1} \ket{\phi_{1,2}} = L_{-1} \ket{\phi_{1,2}} + \ket{\phi_{1,4}}$). Thus, $L_0 \neq L_0^{\dag}$ as required by non-diagonalisability.}. In fact, we can set $A = 0$ because we still have the freedom to redefine $\phi_{1,4}$ as $\phi_{1,4} + a \partial \phi_{1,2}$ for arbitrary $a$, without affecting the defining \eqnDref{eqnNormJord12}{eqnL1.14}. This discussion firmly establishes the theory we are constructing as the conformal field theory associated to critical percolation by Cardy. However, we have not yet exhausted the richness of this theory. In particular, we can apply the algorithm of Nahm, Gaberdiel and Kausch to the fusion of $\IndMod{1,3} = \IrrMod{1,3}$ with itself. Despite this module being irreducible, we still compute non-diagonalisable representatives for $L_0$ on the fusion product, and by computing to grade $5$, we conclude that \begin{equation} \IndMod{1,3} \times_{\! f} \IndMod{1,3} = \IndMod{1,3} \oplus \LogMod{1,5}. \end{equation} Here, $\LogMod{1,5}$ is another staggered module, structurally described by the exact sequence $0 \rightarrow \IndMod{1,1} \rightarrow \LogMod{1,5} \rightarrow \IndMod{1,5} \rightarrow 0$. The field $\phi_{1,5}$ is the Jordan partner of the energy-momentum tensor $T$: \begin{equation} \label{eqnL0.15} L_0 \ket{\phi_{1,5}} = 2 \ket{\phi_{1,5}} + L_{-2} \ket{0}, \end{equation} and computing the action of $L_2$ on $\ket{\phi_{1,5}}$ gives \begin{equation} \label{eqnL2.15} L_2 \ket{\phi_{1,5}} = \frac{-5}{8} \ket{0}. 
\end{equation} We illustrate this module schematically in \figref{figStagMods}. Again, this staggered module structure leads to the appearance of logarithms in correlation functions. \eqnDref{eqnL0.15}{eqnL2.15} imply the operator product expansion{} \begin{equation} \func{T}{z} \func{\phi_{1,5}}{w} = \frac{-5}{8} \frac{1}{\brac{z-w}^4} + \frac{2 \func{\phi_{1,5}}{w} + \func{T}{w}}{\brac{z-w}^2} + \frac{\func{\partial \phi_{1,5}}{w}}{z-w} + \ldots, \end{equation} and the differential equations derived from the conformal invariance of the vacuum yield \begin{equation} \label{eqnCF15.15} \corrfn{\func{\phi_{1,5}}{z} \func{\phi_{1,5}}{w}} = \frac{5}{4} \frac{\log \brac{z-w}}{\brac{z-w}^4}. \end{equation} Here, we have redefined $\func{\phi_{1,5}}{z}$ so as to set the arbitrary constant coming from the differential equation to zero (as discussed after \eqnref{eqnCF14.14}). We see immediately that the norm of $\ket{\phi_{1,5}}$ also diverges. Thus far, we have constructed a part of the spectrum of a conformal field theory{} consistent with Cardy's percolation result. Of course, it is possible to continue the analysis, uncovering more of this percolation conformal field theory{} structure. We have computed several more fusion rules in order to elucidate the general pattern, including\footnote{These computations require the explicit forms of the vanishing singular vectors of the staggered modules $\LogMod{1,s}$.} \begin{align} \IndMod{1,2} \times_{\! f} \LogMod{1,4} &= 2 \: \IndMod{1,3} \oplus \LogMod{1,5}, & \IndMod{1,2} \times_{\! f} \LogMod{1,5} &= \LogMod{1,4} \oplus \IndMod{1,6}, \notag \\ \IndMod{1,3} \times_{\! f} \LogMod{1,4} &= 2 \: \LogMod{1,4} \oplus \IndMod{1,6}, & \IndMod{1,3} \times_{\! f} \LogMod{1,5} &= 2 \: \IndMod{1,3} \oplus \LogMod{1,7}, \\ \LogMod{1,4} \times_{\! f} \LogMod{1,4} &= 4 \: \IndMod{1,3} \oplus 2 \: \LogMod{1,5} \oplus \LogMod{1,7}, & \LogMod{1,5} \times_{\! 
f} \LogMod{1,5} &= \IndMod{1,3} \oplus 2 \: \LogMod{1,5} \oplus \LogMod{1,7} \oplus \IndMod{1,9}. \notag \end{align} The module $\LogMod{1,7}$ appearing here is defined by the exact sequence $0 \rightarrow \IndMod{1,5} \rightarrow \LogMod{1,7} \rightarrow \IndMod{1,7} \rightarrow 0$, and the conditions \begin{equation} L_0 \ket{\phi_{1,7}} = 5 \ket{\phi_{1,7}} + \ket{\xi} \qquad \text{and} \qquad A_3 \ket{\phi_{1,7}} = \frac{-35}{3} \ket{\phi_{1,5}}, \label{eqnA3.17} \end{equation} where $\ket{\phi_{1,7}}$ is the logarithmic partner of $\ket{\xi} = \brac{L_{-3} - L_{-2} L_{-1} + \tfrac{1}{6} L_{-1}^3} \ket{\phi_{1,5}}$ (the non-vanishing singular vector of $\IndMod{1,5}$), and $A_3 = L_3 - L_1 L_2 + \tfrac{1}{6} L_1^3$. The general pattern observed is best expressed as follows: \begin{enumerate} \item Replace each staggered module $\LogMod{1,3m+n}$ ($n=1,2$) by the direct sum $\IndMod{1,3m-n} \oplus \IndMod{1,3m+n}$ to which it is isomorphic as a vector space (but not as a Virasoro module). \item Compute the fusion using distributivity and the na\"{\i}ve fusion rules of \secref{secConstructions1}: \begin{equation} \IndMod{1,s} \times_{\! f} \IndMod{1,t} = \IndMod{1,\abs{s-t}+1} \oplus \IndMod{1,\abs{s-t}+3} \oplus \ldots \oplus \IndMod{1,s+t-3} \oplus \IndMod{1,s+t-1}. \end{equation} \item Replace direct sums of the form $\IndMod{1,3m-n} \oplus \IndMod{1,3m+n}$ ($n=1,2$) by the corresponding staggered module $\LogMod{1,3m+n}$. It is not hard to check that there will always be a unique way of doing this. \end{enumerate} It should be clear that closure under fusion requires that the spectrum of the logarithmic conformal field theory{} we have constructed contains the modules \begin{equation} \label{spectrum} \IndMod{1,1}, \quad \IndMod{1,2}, \quad \IndMod{1,3k} = \IrrMod{1,3k}, \quad \LogMod{1,3k+1} \quad \text{and} \quad \LogMod{1,3k+2} \qquad \text{($k \in \mathbb{Z}_+$)}. 
\end{equation} Here, $\LogMod{1,3k+1} \cong \IndMod{1,3k-1} \oplus \IndMod{1,3k+1}$ and $\LogMod{1,3k+2} \cong \IndMod{1,3k-2} \oplus \IndMod{1,3k+2}$ as vector spaces. We illustrate this procedure with an example. To compute $\LogMod{1,4} \times_{\! f} \LogMod{1,5}$, first note that \begin{equation} \brac{\IndMod{1,2} \oplus \IndMod{1,4}} \times_{\! f} \brac{\IndMod{1,1} \oplus \IndMod{1,5}} = 2 \: \IndMod{1,2} \oplus 3 \: \IndMod{1,4} \oplus 2 \: \IndMod{1,6} \oplus \IndMod{1,8}. \end{equation} We infer from this the fusion rule \begin{equation} \LogMod{1,4} \times_{\! f} \LogMod{1,5} = 2 \: \LogMod{1,4} \oplus 2 \: \IndMod{1,6} \oplus \LogMod{1,8}, \end{equation} where $\LogMod{1,8}$ is a staggered module with exact sequence $0 \rightarrow \IndMod{1,4} \rightarrow \LogMod{1,8} \rightarrow \IndMod{1,8} \rightarrow 0$. We have of course checked this result through direct computation. \section{Discussion} \label{secDiscussion} The identification of critical percolation with a logarithmic conformal field theory{} has received much attention recently. Indeed, this was even argued by Cardy himself \cite{CarLog99} for a general class of disordered quenched systems with trivial partition function (that includes percolation), but without a detailed supporting conformal field theory{} construction. We will now compare our theory with the other logarithmic theories that have been proposed in the literature. We first compare with the proposed theory of Read and Saleur \cite{ReaAss07} who studied a $c=0$ theory defined by the continuous limit of a $\func{\mathcal{U}_q}{\alg{sl}_2}$ XXZ spin-$\frac{1}{2}$ chain of even length at $q=e^{\mathfrak{i} \pi /3}$. By analysing the associated Temperley-Lieb algebra, they deduced the existence of modules which may be identified with our $\IndMod{1,1}$, $\IndMod{1,6k-3} = \IrrMod{1,6k-3}$, $\LogMod{1,6k-1}$ and $\LogMod{1,6k+1}$, for $k \in \mathbb{Z}_+$. These are the modules in (\ref{spectrum}) with \emph{odd} second subscript label. 
These modules form a fusion subring of the one we have computed, in the sense that the fusion rules given in \cite{ReaAss07} close among themselves and agree with the appropriate restriction of ours. It is worth mentioning, however, that their proposed theory does not contain a field that may be identified with $\phi_{1,2}$, and so cannot explain the crossing probability computation of Cardy. This contrasts with the fusion ring proposed by Pearce and Rasmussen \cite{RasFus07}. This was deduced from numerical studies of an integrable lattice model of critical percolation, defined in their prior work with Zuber using Temperley-Lieb algebras to obtain lattice constructions of logarithmic extensions of minimal models \cite{PeaLog06}. Tellingly, they propose fusion rules for modules corresponding to \emph{all} fields in the extended Kac table ($\phi_{r,s}$ with $r,s \in \mathbb{Z}_+$). This is necessitated by their assumption that both $\IndMod{1,2}$ and $\IndMod{2,1} = \IrrMod{2,1}$ are present in the theory. Cardy's crossing probability result only requires the former to be present, so our fusion ring may be identified with a subring of theirs, in fact, the subring which they refer to as the ``vertical fusion algebra''. This ``extended'' fusion ring, as reported by Pearce and Rasmussen, is in turn identical to a subring of the ring previously proposed by Eberle and Flohr \cite{EbeVir06}, based on extensive computations using the algorithm of Nahm-Gaberdiel-Kausch (as ours are). It is clear, however, that their starting assumption is that of irreducibility (which we rejected in \secref{secIntro}). They assume that every irreducible module in the extended Kac table is present, though the trivial irreducible module $\IrrMod{1,1} = \IrrMod{1,2}$ is noted to decouple from the fusion ring obtained and is removed.
The indecomposable modules $\IndMod{1,1}$ and $\IndMod{1,2}$ are then added to the theory, as they are found to occur as submodules of indecomposable modules generated by fusing the above irreducibles (although they do not seem to add $\IndMod{1,4}, \IndMod{1,5}, \ldots$, which also appear as such submodules). The spectrum of their theory is therefore even richer than that of Pearce and Rasmussen, and we obviously again find agreement between our fusion ring and theirs (when restricted to our spectrum). Eberle and Flohr were also able to further characterise the modules appearing in their theory by determining certain parameters $\beta$ (originally discussed in \cite{GabInd96}) associated to the staggered modules (which we denote by $\LogMod{r,s}$). In particular, they give $\beta = \tfrac{-1}{2}$ and $\beta = \tfrac{-5}{8}$ for $\LogMod{1,4}$ and $\LogMod{1,5}$ respectively, agreeing with our \eqnDref{eqnL1.14}{eqnL2.15}, respectively. However, the parameters (sometimes they give two) that they determine for more general $\LogMod{r,s}$ are \emph{not} invariants of the module itself, so they are difficult to independently verify. We have computed the invariant parameter $\beta = -\tfrac{35}{3}$ (and there is only ever one \cite[Thm.\ 5.12]{RohRed96}) for $\LogMod{1,7}$ in \eqnref{eqnA3.17}, and verified that this value is always found, regardless of which fusion rule is used to generate this module (this we checked with $\LogMod{1,4} \times_{\! f} \LogMod{1,4}$, $\LogMod{1,5} \times_{\! f} \LogMod{1,5}$ and $\IndMod{1,2} \times_{\! f} \IndMod{1,6}$). To elaborate, the first part of \eqnref{eqnA3.17} only defines $\ket{\phi_{1,7}}$ modulo the kernel $\mathcal{K} \subset \LogMod{1,7}$ of $L_0 - 5 \id$. This kernel is the three-dimensional subspace of grade-$3$ descendants of the highest weight state{} $\ket{\phi_{1,5}}$ of $\LogMod{1,7}$ (not to be confused with the non-highest weight state{} $\ket{\phi_{1,5}}$ in $\LogMod{1,5}$). 
Eberle and Flohr define their parameter $\beta$ to be the multiple of $\ket{\phi_{1,5}}$ obtained by letting $L_1^3$ act on $\ket{\phi_{1,7}}$. However, $L_1^3$ does not act trivially on $\mathcal{K}$, so their $\beta$ depends upon which particular $\ket{\phi_{1,7}}$ they have chosen (and so can take a continuous range of values). To get an invariant $\beta$, one must act on $\ket{\phi_{1,7}}$ with a (raising, grade $3$) operator which annihilates $\mathcal{K}$. There is of course only one such operator (up to scalar multiples) and it is found by taking the singular vector $\ket{\xi} \in \mathcal{K}$ (whose logarithmic partner is $\ket{\phi_{1,7}}$) and removing $\bra{\phi_{1,5}}$ from $\bra{\xi}$. This is the operator we have denoted by $A_3$ in \eqnref{eqnA3.17}, obtaining $\beta = \tfrac{-35}{3}$. In fact, the above analysis makes it clear that this invariant is (for a general staggered module $\LogMod{r,s}$) nothing but \begin{equation} \beta_{r,s} = \braket{\chi}{\phi_{r,s}}, \end{equation} where $\ket{\chi}$ is the non-vanishing singular vector in the highest weight (indecomposable) submodule of $\LogMod{r,s}$ and $\ket{\phi_{r,s}}$ is its logarithmic partner state (which is not in this submodule). (We can now identify the constant $C$ of \eqnref{eqnLogCoup14} with $\beta_{1,4}$---see also \eqnref{eqnL1.14}). $\beta_{r,s}$ therefore quantifies the degree to which the highest weight submodule is coupled to its logarithmic partner module. We therefore call this staggered module invariant the \emph{logarithmic coupling}. It is now evident that this invariant in fact scales with the (square of the) normalisation of the singular vector $\ket{\chi}$. We always normalise $\ket{\chi}$ (and hence $\beta_{r,s}$) so that the term with the single (most negative) Virasoro mode has coefficient $1$ (and we order the modes in the other terms by non-decreasing index). 
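As an independent consistency check, the three-step fusion procedure given earlier (resolve each staggered module into its two Kac-label summands, fuse na\"{\i}vely, then recombine pairs) can be mechanised in a few lines. The sketch below is illustrative only; in particular, the pairing rule it uses---that $\IndMod{1,s}$ recombines with $\IndMod{1,s-2}$ when $s \equiv 1 \pmod{3}$ and with $\IndMod{1,s-4}$ when $s \equiv 2 \pmod{3}$---is our reading of the exact sequences above, and the labelling conventions are our own.

```python
from collections import Counter

# Illustrative sketch (not the computations reported in the text): the
# three-step fusion procedure, for modules written as ('I', s) or ('L', s).
def naive_fusion(s, t):
    # I(1,s) x I(1,t) = I(1,|s-t|+1) + I(1,|s-t|+3) + ... + I(1,s+t-1)
    return Counter(range(abs(s - t) + 1, s + t, 2))

def partner(s):
    # L(1,3m+1) ~ I(1,3m-1) + I(1,3m+1); L(1,3m+2) ~ I(1,3m-2) + I(1,3m+2)
    return s - 2 if s % 3 == 1 else s - 4   # only called when 3 does not divide s

def fuse(a, b):
    # Step 1: resolve staggered modules into their Kac-label summands.
    expand = lambda kind, s: [s] if kind == 'I' else [partner(s), s]
    # Step 2: distribute the naive fusion rules over the summands.
    total = Counter()
    for x in expand(*a):
        for y in expand(*b):
            total += naive_fusion(x, y)
    # Step 3: recombine pairs into staggered modules, largest label first.
    out = Counter()
    for s in sorted(total, reverse=True):
        if s % 3 and s > 2:
            k = min(total[s], total[partner(s)])
            if k:
                out[('L', s)] = k
                total[s] -= k
                total[partner(s)] -= k
    for s, m in total.items():
        if m:
            out[('I', s)] += m
    return out
```

Running this on the fusion rules quoted above reproduces them; for instance, it returns $\LogMod{1,4} \times_{\! f} \LogMod{1,5} = 2 \: \LogMod{1,4} \oplus 2 \: \IndMod{1,6} \oplus \LogMod{1,8}$.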
To summarise the comparisons made thus far, we have identified the fusion ring of Read and Saleur as a subring of ours which does not contain $\phi_{1,2}$, and so their theory does not provide a formalism in which to understand Cardy's crossing probability result. On the other hand, the fusion rings proposed by Pearce--Rasmussen and Eberle--Flohr contain our fusion ring as a subring, and so are sufficiently rich to explain Cardy's result. Unfortunately, the spectral excess (over what is strictly necessary for the non-triviality of the $\phi_{1,2}$ four-point function) in these enlarged fusion rings clashes with conformal invariance. This is due to a subtlety involving logarithmic couplings, and follows from an argument originally due to Gurarie and Ludwig \cite[App.\ A]{GurCon04} which we briefly outline. As we have shown, if the theory contains the module $\IndMod{1,2}$ (or $\IndMod{1,3} = \IrrMod{1,3}$), then fusion generates the module $\LogMod{1,5}$. If $\IndMod{2,1} = \IrrMod{2,1}$ is also present, then we additionally generate a module $\LogMod{3,1}$ with exact sequence $0 \rightarrow \IndMod{1,1} \rightarrow \LogMod{3,1} \rightarrow \IndMod{3,1} \rightarrow 0$. $\ket{\phi_{3,1}}$ has conformal dimension $2$ (\tabref{tabExtKacc=0}), and satisfies (compare with \eqnref{eqnL2.15}) \begin{equation} L_0 \ket{\phi_{3,1}} = 2 \ket{\phi_{3,1}} + L_{-2} \ket{0} \qquad \text{and} \qquad L_2 \ket{\phi_{3,1}} = \frac{5}{6} \ket{0}. 
\end{equation} If, however, the conformal invariance of the vacuum is used to compute the correlation function $\corrfn{\func{\phi_{1,5}}{z} \func{\phi_{3,1}}{w}}$, one finds that global invariance under $L_{-1}$ and $L_0$ fixes the form of this function completely (as with \eqnref{eqnCF15.15}), but this form does \emph{not} satisfy the $L_1$-invariance constraint, essentially because the logarithmic couplings $\beta_{1,5} = \tfrac{-5}{8}$ and $\beta_{3,1} = \tfrac{5}{6}$ are not equal (whilst the respective dimensions of the generating states of these modules are). The conclusion is then that one cannot have both $\IndMod{1,2}$ and $\IndMod{2,1}$ in the theory simultaneously\footnote{We stress that this argument proves that one cannot augment the theory introduced above by the \emph{module} $\IndMod{2,1}$, whose highest weight state{} has dimension $\tfrac{5}{8}$ (\tabref{tabExtKacc=0}). It does not rule out the possibility of consistently adding a primary field of this dimension. However, the highest weight state{} corresponding to such a field cannot have a vanishing singular vector at grade $2$. We expect that an extended algebra approach will be able to determine whether such augmentations are also forbidden.}. The work of Gurarie and Ludwig (detailed in \cite{GurCon04}) is of considerable relevance to our construction. Their view is to construct $c=0$ logarithmic theories by assuming that the theory satisfies a particular set of carefully chosen operator product expansions{}. Many of these involve a logarithmic partner field to $T$ which we can identify in our theory with $\phi_{1,5}$. The focus of their work was not to construct the theory from its fusion ring (fusion processes are not treated there), but to investigate the consequences of extending the Virasoro algebra by the modes of this partner field.
It is very interesting to see that their partial extension already allows them to determine ``anomaly numbers'' $b$ \cite{GurCTh99}, which coincide with the logarithmic couplings $\beta$ we have discussed above in the two cases they treat, $\tfrac{-5}{8}$ and $\tfrac{5}{6}$. As noted above, the staggered modules $\LogMod{1,5}$ and $\LogMod{3,1}$ cannot both be simultaneously present in a consistent conformal field theory{}. Gurarie and Ludwig realised that this means that there are (at least) two distinct logarithmic theories that one can construct at $c=0$ (and we venture that Read and Saleur's XXZ spin chain theory \cite{ReaAss07} perhaps leads to a third). Moreover, they identified in \cite{GurCon02} the one containing $\LogMod{1,5}$ ($\beta_{1,5} = \tfrac{-5}{8}$) as realising polymers and that containing $\LogMod{3,1}$ ($\beta_{3,1} = \tfrac{5}{6}$) as realising percolation (however, this identification was not reaffirmed or refuted in the sequel \cite{GurCon04}). Contrarily, we maintain that percolation must involve $\LogMod{1,5}$ (and more fundamentally, $\IndMod{1,2}$), hence percolation has $\beta_{1,5} = \tfrac{-5}{8}$. In finishing, let us reemphasise the essential aspects of our construction. To explain Cardy's result, we deform the $\MinMod{2}{3}$ model by breaking the Kac symmetry $\ket{\phi_{1,1}} = \ket{\phi_{1,2}}$. The simplest way of doing this is by rendering the two modules reducible (but indecomposable), each differently, by allowing one of the primitive singular vectors in each module to be physical (non-vanishing). The proper choices, $L_{-2} \ket{\phi_{1,1}} \neq 0$ and $L_{-1} \ket{\phi_{1,2}} \neq 0$, which transform $\IrrMod{1,s}$ into $\IndMod{1,s}$ ($s=1,2$), are fixed by the physics. (Note that this starting point fits naturally with the point of view that percolation is to be regarded as a limiting theory with $c \rightarrow 0$: In this picture, the natural modules to consider are precisely our $\IndMod{1,s}$.) 
In a second step, we have shown how the logarithms arise naturally---without further input---from these assumptions. Our formalism is also well-suited to interpreting and consolidating the results of Gurarie and Ludwig. In particular, we hope to use the framework we have developed to investigate the (partial) extended algebra approach that they have pioneered. Furthermore, it is clear that our constructions may be easily adapted to defining logarithmic theories corresponding to every minimal model. We expect that these theories will prove to be the correct framework in which to explain the occurrence of non-local observables corresponding to fields outside the (standard) Kac table in other critical models (the Ising model \cite{ArgNon02} for example). \section*{Acknowledgements} We thank Yvan Saint-Aubin for discussions and critical comments on the manuscript, and Matthias Gaberdiel for clarifying certain aspects of the Nahm-Gaberdiel-Kausch algorithm for fusion.
\subsection{FastReact - Sensor Packet Parsing and Processing} \label{ssec:history} When a sensor packet enters the \emph{FastReact} switch, we parse the sensor \emph{ID}, extract the sensor value and store it in a time series datastore implemented by custom register arrays. We keep a rolling history of the latest $n$ values per sensor \emph{ID}, where $n$ is configurable. When inserting a new sensor value, we also compute a moving average in a separate register array using approximate arithmetic \cite{201474}. Both the historical sensor values and the moving averages can be fetched by an \ac{sdn} or industrial controller either through the data plane using a \code{get}-like request, or through the control plane via the P4-generated API. In that sense, \emph{FastReact} acts similarly to a key-value cache for sensor readings, with the additional ability to perform lightweight compute operations. In contrast to NetCache \cite{jin17}, which keeps only the hot items in the cache and has a cache replacement strategy, we keep a fixed number of sensor \emph{IDs} in the switch memory in order to be deterministic. Also, we argue that the Industrial Controller knows how many sensor \emph{IDs} are deployed, so it can configure the table size at compile time of the switch. Based on the configured processing logic (see Section~\ref{ssec:logic}), the switch decides whether to discard the packet, send it to the Industrial Controller, or notify an actuator. The connections between sensors and actuators are configured through the match-action table, which is populated by the SDN controller as instructed by the Industrial Controller. If the conditions encoded in the logic tables are true, the actuator specified in the match-action table will be notified of the sensor value. The action will modify the sensor packet, transforming it into an actuator notification message as specified by the \ac{sdn} controller.
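To illustrate the time series datastore, the following minimal sketch (in ordinary Python rather than P4) mimics the per-sensor ring buffer and a shift-based moving average; the history depth, the concrete update rule, and all identifiers are our own illustrative assumptions and are not taken from the P4 sources.

```python
# Sketch of the per-sensor time series store: a ring buffer of the last
# N_HISTORY values plus an exponential moving average maintained with
# shifts only (no division), as a data-plane register array would.
N_HISTORY = 8          # assumed history depth n
SHIFT = 3              # assumed EMA weight 1 / 2**SHIFT

history = {}           # sensor id -> list of the last N_HISTORY values
head = {}              # sensor id -> round-robin write index
avg = {}               # sensor id -> current moving average

def record(sensor_id, value):
    """Insert a new sensor value and return the updated moving average."""
    h = history.setdefault(sensor_id, [0] * N_HISTORY)
    i = head.get(sensor_id, 0)
    h[i] = value                          # overwrite the oldest slot
    head[sensor_id] = (i + 1) % N_HISTORY
    # EMA via shifts only: avg += (value - avg) >> SHIFT
    a = avg.get(sensor_id, value)         # first value seeds the average
    avg[sensor_id] = a + ((value - a) >> SHIFT)
    return avg[sensor_id]
```

Either structure can then be served to a controller through a \code{get}-like request or through the control-plane API, as described above.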
An example of a match-action table can be seen in Table~\ref{tab:forward}. It comprises two logical tables, named \code{Route} and \code{Failover}, respectively. The \code{Route} table is used to connect sensors and actuators. In this example, Sensor 1 notifies Actuator 2, while Sensor 2 notifies Actuator 5. Multiple actuators could be notified through separate actions, as P4 does not support a variable number of arguments. The \code{Failover} table specifies actions the switch should take if a failure is detected. Here, for Sensor 1 the data is forwarded to Actuator 3, while for Sensor 2 the data is sent up to another switch, which, in turn, can pass the packet to a backup actuator. \begin{table} \begin{small} \begin{tabularx}{\linewidth}{| l | l | X |} \hline {\bf Table} & {\bf Match Key} & {\bf Action} \\ \hline Route & sensor = 1 & forward\_mod(2 \emph{(actuator, ..)}) \\ \hline Route & sensor = 2 & forward\_mod(5 \emph{(actuator, ..)}) \\ \hline Failover & sensor = 1 & {forward\_mod(3 \emph{(bkup. actuator, ..)})} \\ \hline Failover & sensor = 2 & send\_up(10 \emph{(switch)}) \\ \hline \end{tabularx} \end{small} \caption{Example of a match-action table} \label{tab:forward} \end{table} \subsection{FastReact - Decision Logic} \label{ssec:logic} One of the main challenges in \emph{FastReact} is how to make the decision logic in the switch as generic as possible while being able to dynamically update it through the south-bound API. Most P4 targets expose the ability to read from and write to switch registers, which are array-like data structures stored in the switch memory. Our implementation utilizes these registers to communicate the desired control logic for each switch. In order to encode a complex logical expression in a generic way, we first transform it into its \ac{cnf}. The \ac{cnf} is given by: \[(A \lor B \lor C...) \land (D \lor B...) 
\land ...\] where $A$, $B$, $C...$ are logical expressions of the form $(s\;{\_}\; v)$. Here, $s$ represents the recorded value of a sensor, $v$ the value it should be compared to, and the placeholder $\_$ stands for a comparison operator. A simple example of a logical expression in \ac{cnf} is the following: \[(s_1 < 50 \lor s_2 > 25) \land (s_3 = 10)\] Based on the processing logic provided by the Industrial Controller, the \ac{sdn} controller derives the \ac{cnf} of the logical expression and installs it on the switch, in table format, using switch registers. This is stored on a per-sensor basis. \begin{table} \begin{small} \begin{tabularx}{\columnwidth}{| X | X | X | X | X |} \hline {\bf \footnotesize Sensor ID} & Cond 1 & Cond 2 & Cond 3 & Cond 4 \\ \hline 1 & 1 & 2 & & \\ \hline \end{tabularx} \end{small} \caption{Example of the sensor conjunctive table for the expression $(S1 < 50 \vee S2 > 25) \wedge (S3 = 10)$} \label{tab:and} \vspace{2ex} \begin{small} \begin{tabularx}{\columnwidth}{| X | X | X | X | X |} \hline {\bf Index} & Cond 1 & Cond 2 & Cond 3 & Cond 4 \\ \hline &\multicolumn{4}{c|}{\bf Sensor ID} \\ \hline 1 & 1 & 2 & & \\ \hline 2 & 3 & & & \\ \hline &\multicolumn{4}{c|}{\bf Operator} \\ \hline 1 & $<$ & $>$ & & \\ \hline 2 & $=$ & & & \\ \hline &\multicolumn{4}{c|}{\bf Value} \\ \hline 1 & 50 & 25 & & \\ \hline 2 & 10 & & & \\ \hline \end{tabularx} \end{small} \caption{Example of the sensor disjunctive tables for the expression $(S1 < 50 \vee S2 > 25) \wedge (S3 = 10)$} \label{tab:or} \end{table} Tables~\ref{tab:and} and~\ref{tab:or} show how this data is stored in the switch. Table~\ref{tab:and} is the conjunctive table, where each row contains a sensor~ID signifying which sensor the expression should be applied to. It also contains a set of indices which point to entries in the disjunctive tables (Table~\ref{tab:or}). 
There are three disjunctive tables, one for sensor~IDs, one for operators, and one for values, which encode the components of the comparative expression (e.g. $S1 < 50$). Each column represents a separate conditional (e.g. $A \lor B$). These tables are loaded into the switch, which transforms them into a logical expression. \begin{figure} \lstinputlisting[label=code:p4,language=p4,basicstyle=\ttfamily\scriptsize,caption=Shortened sensor logic code.]{code/logic.p4.txt} \end{figure} Listing~\ref{code:p4} shows a shortened version of the P4 code for the decision logic. This code is automatically generated given parameters such as the maximum number of disjunctive and conjunctive expressions, the supported operators, etc. When a packet arrives at the switch, the program loads the indices from the conjunctive table corresponding to the sensor ID in the packet header. If there is logic configured (\code{cidxN != 0}), it loads the sensor value, operator, and value for the first conditional. The comparative expression is then evaluated, and if it is true, the program flags this (\code{disj = true}). After processing all disjunctive conditionals, if the flag is not set, the packet is dropped because one of the conjunctive terms is false. This is done for each condition in the conjunctive table. If the packet is not dropped, the switch forwards it according to its forwarding table. \subsection{FastReact - Failure Recovery} \label{ssec:failover} In order to provide failure recovery, \emph{FastReact} monitors sensor and actuator liveness and reacts locally from the data plane should a sensor or actuator fail. In \emph{FastReact}, the switch records the timestamp of the last received packet for each switch port in a register array. Assuming that both sensors and actuators send periodic liveness messages, a port on which no messages have been received for a certain time is considered down. 
This port timeout interval is configured through a register array, where each entry is the timeout for a particular output port. For each actuator, a backup actuator can be specified in a failover match-action table. When a packet should be forwarded to an actuator that is considered down, the packet is instead forwarded to the backup actuator. This forwarding may be immediate (if the backup is reachable directly from the switch), or go through another switch (as in our experiment, depicted in Figure~\ref{fig:overview}). If there are multiple backup actuators, and the \emph{FastReact} switch is directly connected to them, it will pick the first live one. If it is not directly connected, it is the responsibility of the \emph{FastReact} switch connected to the backup actuator to decide where to send the packet next. Again, it is up to the SDN controller to push the proper table entries to specify that behavior in more detail, which is outside the scope of this paper. \subsection{FastReact - Filtering} \label{ssec:filtering} To reduce the amount of traffic, \emph{FastReact} switches can filter the packets sent to the controller. This filtering can be done either by using the switch decision logic programmed through the register tables, or by forwarding only every \emph{n}-th packet. The filter logic is configurable on a per-sensor basis through the match-action table. For each sensor \emph{ID}, the switch keeps track of how many messages have been received and compares this count to the filtering rate, determining whether the incoming packet should be forwarded or discarded. Filtering based on the decision tables works as for any other packet: packets which do not match the logic configured through the register tables are discarded. 
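To make the register-table semantics of Section~\ref{ssec:logic}, which the logic-based filtering above reuses, more concrete, the following sketch mimics the \ac{cnf} evaluation in ordinary Python. The layout mirrors Tables~\ref{tab:and} and~\ref{tab:or}, but the identifiers and the operator encoding are illustrative choices, not taken from the P4 sources.

```python
import operator

# Sketch of the CNF evaluation over the register tables of the Decision
# Logic subsection. Names and operator encoding are illustrative only.
OPS = {'<': operator.lt, '>': operator.gt, '=': operator.eq}

# Conjunctive table: sensor id -> row indices into the disjunctive tables.
conj = {1: [1, 2]}
# Disjunctive tables, merged for readability: row -> (sensor id, op, value)
# triples, one per column.
disj = {1: [(1, '<', 50), (2, '>', 25)],
        2: [(3, '=', 10)]}

def decide(trigger_sensor, latest):
    """Return True if the packet should be forwarded to the actuator,
    i.e. every conjunctive row has at least one true disjunct.
    `latest` maps sensor id -> last recorded value."""
    for row in conj.get(trigger_sensor, []):
        if not any(OPS[op](latest[s], v) for s, op, v in disj[row]):
            return False      # one conjunctive term is false: drop
    return True               # no logic configured, or all terms true
```

With the example tables above, the configuration encodes $(S1 < 50 \vee S2 > 25) \wedge (S3 = 10)$: a packet from sensor 1 is forwarded when the stored values satisfy this expression and dropped otherwise.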
\subsection{FastReact - In-Network Caching} \label{ssec:caching} \emph{FastReact} also supports requesting stored sensor values, running averages and simple computations on the time series database through the data path using a \code{get} request. If the industrial system contains a large number of sensors, it may be beneficial to have devices request values as they are needed in order to reduce traffic. The switch keeps historical records of the latest received sensor values, along with reception timestamps. These timestamps can be used to determine the age of a certain sensor value. When the switch parses a packet in the ingress pipeline and detects a \code{get} request, the timestamp of the latest historical value for the requested sensor is compared to the current time. If the difference is larger than a configurable tolerance, i.e., the cached value is too old, the switch forwards the request to the sensor. The sensor replies with a sensor value update, which allows the switch to update its time series database, while also passing the message on to the original requester. To instruct the switch what kind of data it should return (e.g. most recent value, moving average), an OpCode is provided in the \code{get} request along with the sensor \emph{ID}. \subsection{FastReact - Feasibility in P4} \label{ssec:feasibility} Our P4 implementation requires support for basic P4 primitives defined by the P4\_16 standard~\cite{budiu17}. In addition, it requires support for register reading, register writing, IP and UDP checksum calculation, and stateless header parsing. Support for ingress timestamps is also required. \textbf{Sensor Dependency Table:} The switch memory requirements depend on the logic table sizes, the number of sensors supported and the history size. The size of the conjunctive table is $S_{count} C_{cols} \lceil\log_2{D_{rows}}\rceil$ bits. 
Here, $S_{count}$ is the number of sensors, $C_{cols}$ is the maximum number of conjunctive conditionals, and $D_{rows}$ is the maximum number of rows in the disjunctive table. The size of the disjunctive table is $(D_{rows} D_{cols}) (3 + Sz_{sen} + \lceil\log_2{S_{count}}\rceil)$ bits. $D_{rows}$ is the maximum number of rows in the disjunctive table, $D_{cols}$ is the maximum number of disjunctive conditionals, $Sz_{sen}$ is the size of the sensor value data type, and $S_{count}$ is the number of sensors. Calculating an appropriate disjunctive table size requires some consideration, because it represents the maximum number of conditional expressions in the logic table. For example, using 16-bit sensor values, 5000 sensors, a disjunctive table size of 25,000 rows and a maximum of 5 disjunctive and 5 conjunctive conditionals, the dependency table memory requirement amounts to around~\SI{4.4}{Mbit}. \textbf{Sensor Time Series Database:} In order to store historical sensor values and their respective timestamps, the memory requirements are $(H_{count} + 1) (Sz_{ts} + Sz_{sen}) S_{count}$ bits of storage. $H_{count}$ is the number of values to store per sensor \emph{ID}; one additional entry is used for storing and updating the moving average. $Sz_{ts}$ and $Sz_{sen}$ are the sizes of the timestamp and sensor value data types. Finally, $S_{count}$ is the number of sensor \emph{ID}s. In addition, to store the round-robin indexes that determine which sensor value was received last, we require $\lceil\log_2{H_{count}}\rceil S_{count}$ bits of storage. For 5000 sensor \emph{ID}s and 100 historical values per sensor, we need approximately~\SI{32}{Mbit} assuming a 16-bit sensor value data type and 48-bit timestamps. 
For 5000 sensors and 24 switch ports, this will amount to around~\SI{80}{kbit} assuming 16-bit sensor counters and 48-bit timestamps. Furthermore, some settings are stored in registers, but require minimal space (less than \SI{1}{kbit}). \subsection{FastReact - Implementation} \label{ssec:implementation} \nfig{gfx/p4code}{p4code}{Overview of our implementation of \emph{FastReact} in P4. } In order to evaluate the feasibility of our design, we implemented a prototype in P4 using the \code{v1model} architecture\footnote{The P4 code will be available at \url{https://github.com/jonavest/P4-FastReact}}. As can be seen from Fig.~\ref{fig:p4code}, packets are parsed, determining whether they are sensor messages, \emph{get} requests (not shown for simplicity) or regular network traffic. If a sensor packet is detected, the sensor value is extracted and recorded in the time series registers. Then, the decision logic is applied, which determines whether the packet should be dropped, sent to another switch, or whether an actuator should be notified. If an actuator is notified, the \code{route} match-action table instructs the switch, based on the sensor \emph{ID}, ingress port and packet type, which action should be performed. The \code{forward(port)} action simply forwards the packet normally and the \code{forward\_mod(port, ip, mac)} action sends a notification to an actuator. Then, the ingress timestamp of the packet is compared to the latest timestamp of the output port. If the difference is greater than the configured tolerance, the \code{failover} table is applied. The \code{failover} table can only result in the \code{send\_up(port)} action. The \code{send\_up(port)} action changes the packet type to indicate that it is using a backup route and sends the packet to the specified port. This indication is later used by other switches to help determine the appropriate backup destination. Finally, checksums are recalculated and the packet is moved to the egress pipeline. 
Filtering is performed after the decision logic, where a filtered packet counter is incremented and compared to the filter rate, determining whether the packet should be filtered (dropped) or forwarded. Cache timestamps are updated when the sensor value is recorded, while \code{get} requests are processed at the beginning of the pipeline. These \code{get} requests are handled separately from the sensor logic. When a \code{get} request enters the switch, the timestamp of the latest historical sensor entry is compared to the ingress packet timestamp. If the difference is larger than a configurable tolerance, a new sensor value is requested from the sensor. Otherwise, the operation code is extracted and the requested operation on the time series registers is performed, with the result returned from the cache. \section{Introduction} \label{sec:intro} \input{intro} \section{Background} \label{sec:background} \input{background} \section{FastReact Design} \label{sec:design} \input{design} \section{Evaluation} \label{sec:evaluation} In this section, we evaluate the effectiveness of FastReact by implementing it in P4 and performing several experiments using the CORE network emulator~\cite{ahrenholz08}. CORE uses Linux Network Namespaces to create virtual hosts and links. These are run using the normal Linux networking stack, which allows integration of the P4 behavioral model\footnote{Available at \url{https://github.com/p4lang/behavioral-model}}. We use the modified CORE version\footnote{Available at \url{https://github.com/tohojo/core}} which uses \code{netem} instead of \code{tbf} for rate management, making it possible to tune the queue length. \subsection{Experimental Setup} \label{sec:experiment} \input{setup} \subsection{Results} \label{sec:results} \input{results} \section{Conclusion} \label{sec:conclusion} \input{conclusion} \section*{Acknowledgement} Parts of this work have been funded by the Knowledge Foundation of Sweden through the project READY. 
\def\footnotesize{\footnotesize} \printbibliography \end{document} \subsubsection{Baseline} \nfig{graphs/p4sen-ref}{ref}{Reference experiment using a single sensor and actuator. The ``Reference'' test uses normal controller processing, while ``Fast Reaction'' uses \emph{FastReact}. } First, we performed a reference experiment using traditional switching, which is not aware of the sensor format and cannot react to changes in the sensor readings. All sensor messages are sent to the industrial controller, which then sends a reply to the appropriate actuator. In this experiment, \emph{Sensor 1} sends continuous sensor messages to the industrial controller over \SI{40}{\s} with an interval of \SI{10}{\ms}. The industrial controller receives these messages and passes an appropriate action to \emph{Actuator 1}. Fig. \ref{fig:ref} shows the results of the experiment. In the figure, the x-axis represents the time when \emph{Sensor 1} sent the packet, and the y-axis represents the latency between \emph{Sensor 1} and \emph{Actuator 1}. We compare the delay with our approach \emph{FastReact}, where the P4 enabled switch parses sensor packets in the data plane, checks the control actions and directly sends commands from the switch to the actuator. As we can see from our experimental data, the sensor-actuator delay averages \SI{8.201}{\ms} for the case where the industrial controller decides the action, compared to an average delay of \SI{2.249}{\ms} for \emph{FastReact}. The significant reduction in latency is because the sensor messages only pass through the switch, and we avoid the costly communication with the industrial controller. \subsubsection{Failover} \nfig{graphs/p4sen-multifail}{multifail}{Sensor-Actuator delay when the links between the actuators and their corresponding switches go up and down. } In order to evaluate the failure recovery behavior of \emph{FastReact}, we performed a second set of experiments.
First, we evaluated how resilient the network is to a single link failure. In this experiment, \emph{Sensor 1} sends messages at a \SI{10}{\ms} interval and the port timeout is set to \SI{30}{\ms}. If the switch does not receive any packets for \SI{30}{\ms} on a port, that port is considered to be down, and the message will be forwarded up the network, to switch $B1$. $B1$ sends the message down to switch $A2$, which sends it down to \emph{Actuator 2}. Fig. \ref{fig:multifail} shows sensor-actuator delays over time, and the lines show the link states of \emph{Actuator 1} and \emph{Actuator 2} respectively. As we can see, when the link between \emph{Actuator 1} and $A1$ or the node \emph{Actuator 1} fails, \emph{FastReact} immediately reacts by sending the sensor values through $B1$ and $A2$ to \emph{Actuator 2}, which causes an increase in the latency from an average of \SI{2.244}{\ms} to \SI{4.624}{\ms}. This increase is due to the longer path between the sensor and the actuator. It is worth pointing out that if both the link between \emph{Actuator 1} and $A1$ and the link between \emph{Actuator 2} and $A2$ fail, no data is received, because no actuator is reachable. The number of packets lost during the local repair in this scenario is between 2 and 3. How fast the switch can recover from a link failure is largely determined by the packet sending interval configured in the actuator and the port timeout configured in the switch. The shorter the sending interval and the port timeout, the faster faults can be detected. However, using a short sending interval increases the amount of traffic on the link, and using a short port timeout increases the probability of false positives (due to unexpected packet loss or jitter). Fig. \ref{fig:failover} shows the sensor-actuator delay with varying actuator sending intervals and port timeouts.
In this experiment, packets are sent between \emph{Sensor 1} and \emph{Actuator 1}, as the link between \emph{Actuator 1} and $A1$ is taken down. The measurement points in the figure are the difference between the last received packet for \emph{Actuator 1} and the first received packet for \emph{Actuator 2}. From the figure, we can see that the recovery time is highly impacted by both sending interval and port timeout. For short port timeouts of \SI{10}{\ms} in combination with a short sending interval of \SI{5}{\ms}, the mean sensor-actuator delay (during failure) is \SI{14.45}{\ms}. The missing measurement points correspond to the cases where the sending interval is longer than the port timeout, causing both ports to be in a down state, and no packets are being transmitted. \nfig{graphs/p4sen-failover}{failover}{Sensor-Actuator delays in case of a failure with varying actuator sending intervals and port timeouts. } \subsubsection{FastReact Logic Processing} \emph{FastReact} is capable of correlating multiple sensor values in the data plane in order to determine proper actions as given by the control logic. This section shows results from experiments focusing on the impact of the switch processing logic on delay. \nfig{graphs/p4sen-bindep}{bindep}{Sensor-Actuator delays with binary sensor values. } Fig. \ref{fig:bindep} shows an experiment where we have used two sensors which take binary values. A message should be sent to the actuator only when both sensors are sending the value $1$. The two lines represent the sensor value at the current time, while the dots represent the arrival time of messages at the actuator. In this test, two sensors are configured which alternate between sending the values $0$ and $1$ every \SI{0.5}{\s}, with one of the sensors having its starting time shifted slightly. The test was run for \SI{20}{\s}; however, the figure shows only the results between \SI{5}{\s} and \SI{6}{\s}.
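The gating condition used in these correlation tests — forward a notification only when the stored values of both sensors satisfy the configured predicate — can be sketched as follows. Python is used for illustration only; the actual logic runs in P4, and names such as \code{last\_value} are ours.

```python
# Sketch of two-sensor correlation in the data plane: remember the last
# value seen from each sensor and notify the actuator only when every
# sensor has reported and all stored values satisfy the condition.

def on_sensor_packet(last_value, sensor_id, value, condition, sensors=("A", "B")):
    """Record the new reading; return True if the actuator should be notified."""
    last_value[sensor_id] = value
    return all(s in last_value and condition(last_value[s]) for s in sensors)
```

The same sketch covers both experiments: `condition=lambda v: v == 1` for the binary case and `condition=lambda v: v >= 50` for the integer thresholds.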
From the figure, we can see that the actuator only receives messages when both sensors observe a value of $1$. This behaviour is of course fully configurable depending on the application. \nfig{graphs/p4sen-double}{double}{Experiment using two sensors with stateful switching. } In the next experiment, we have two sensors, \emph{Sensor 1A} and \emph{Sensor 1B}, which report integer values. The \emph{FastReact} logic is configured so that the actuator is notified when the reported value is at least $50$ for both sensors, i.e. $(A \geq 50) \wedge (B \geq 50)$. Because \emph{FastReact} records the historical values for each sensor in the data plane, the switch correlates the received sensor value with the previously received value of the other sensor. Fig. \ref{fig:double} shows two dashed lines representing the values of the two sensors, and a solid line representing the delay between sensor and actuator. The sensor values increase by $1$ unit every second. The sensors are configured to start at $20$ and $30$ respectively. As we can see from the figure, the switch is able to keep the state of both sensors in memory, and wait for both sensors to reach the configured value before it triggers the fast reaction and notifies the actuator. \nfig{graphs/p4sen-average-spikes}{spikes}{Experiment using a single sensor and actuator. Sensor values suffering from spikes. Using moving average values instead of the latest. Points are drawn on the figure lines during the spikes for readability. } In order to demonstrate the moving average functionality, we constructed an additional scenario where the sensor values occasionally spike. If we were just looking at the last sensor value, we would notify the actuator intermittently, possibly causing an unnecessary activation of the actuator and leading to state oscillation.
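A minimal sketch of such a moving-average check, again in Python for illustration only: the switch keeps the last few readings in its time series registers and triggers only when their average exceeds the threshold. The window size and threshold below are illustrative, not the configured values of the prototype.

```python
from collections import deque

# Moving average over a fixed window of the most recent sensor readings.
# A short spike barely moves the average, so it does not trigger a
# notification; a sustained rise does.

class MovingAverage:
    def __init__(self, window):
        self.values = deque(maxlen=window)  # oldest entries fall out

    def update(self, value):
        self.values.append(value)
        return sum(self.values) / len(self.values)

def should_notify(avg, threshold=50):
    return avg > threshold
```

With a window of four readings around $40$, a single reading of $55$ yields an average of $43.75$ and is suppressed, while four consecutive readings of $55$ raise the average to $55$ and trigger the notification.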
The spikes occur with an interval of \SI{7}{\s} and change the sensor value to $55$ for \SI{50}{\ms}. As we can see from Fig. \ref{fig:spikes}, two spikes occur at $t=7$ and $t=14$. The first does not trigger an alert to the actuator, because the moving average evens out the spike. However, the later spike (at $t=14$) triggers an immediate fast reaction and sends a message directly to the actuator, because by then the moving average rises above $50$. \nfig{graphs/p4sen-filter}{filter}{Experiment filtering packets at various sampling rates.} Fig. \ref{fig:filter} shows the effect of filtering sensor messages at the switch. In this experiment, messages are always sent from the sensor to the industrial controller, and then forwarded to the actuator. No fast reaction is employed; however, packet filtering is enabled with various sampling rates. The x-axis of the figure shows the sampling rate, while the y-axis shows the network load (in packets) and the time between messages received at the controller. The sensor-actuator delay shows the increase in delay due to filtering messages. Decreasing the sampling rate reduces the load on the network, while increasing the potential sensor-actuator delay. \nfig{graphs/p4sen-reqlong}{reqlong}{Time between request and response for \code{get} requests sent from the industrial controller to a sensor node. } Finally, Fig. \ref{fig:reqlong} shows the industrial controller continuously sending \code{get} requests to the sensor node. As the cache timeout is set to \SI{5}{\s}, the first request is transmitted through $A1$ to the sensor, while for the 4 subsequent requests the value has been cached by \emph{FastReact}, so the cached sensor value is immediately returned by the switch. The y-axis shows the time from when the controller request was sent until the response was received.
The ``Not Cached'' dots show the request time when there is no cache entry, while the ``Cached'' dots show the request time when the sensor value is in the switch cache. As we can see from the figure, the average delay between request and response is \SI{8.73}{\ms} if the value has not been cached, and \SI{6.38}{\ms} if the value has been cached by the switch.
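The caching decision underlying these results can be sketched as follows. This Python fragment illustrates the logic only (the prototype implements it with P4 registers); the function name and parameter names are ours, and the \SI{5}{\s} default mirrors the cache timeout used in the experiment.

```python
# Sketch of the get-request cache check: if the newest cached sensor entry
# is fresh enough, answer from the switch cache; otherwise forward the
# request to the sensor for a new value.

def handle_get(ingress_ts, latest_entry_ts, cache_timeout=5.0):
    """Return 'cache' when the cached value is recent enough,
    otherwise 'forward_to_sensor'."""
    if ingress_ts - latest_entry_ts < cache_timeout:
        return "cache"
    return "forward_to_sensor"
```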
\section{Introduction} \subsection{Background} A central theme of complex dynamics is that of linearization, that is, conjugating a mapping near a fixed point to a simpler mapping. The idea is that it is then easier to see how the mapping behaves near the fixed point. For example, if $p$ is a polynomial in $\mathbb{C}$ with a repelling fixed point $z_0$, i.e. $p(z_0) = z_0$ and $|p'(z_0)| >1$, then there exists an entire function $L$ which satisfies $L(0) = z_0$ and \[ p(L(z)) = L(p'(z_0)\cdot z),\] for all $z\in \mathbb{C}$. Hence up to conjugation $p$ behaves like a $\mathbb{C}$-linear mapping near $z_0$, see for example \cite{Milnor}. The function $L$ is called a Poincar\'{e} function or a linearizer of $p$ at $z_0$. The functional equation may be iterated to obtain \[ p^k(L(z)) = L( p'(z_0)^k \cdot z),\] for any $k\in \mathbb{N}$. This indicates that the linearizer $L$ depends on the dynamical properties of $p$ as well as $z_0$ and $p'(z_0)$. \begin{figure}[h] \begin{center} \includegraphics[width=5in]{L.png} \caption{The Julia set for the linearizer of $z^2 -0.8+0.157i$ about $z_0 = 1.528 -0.076i$ (to $3$ decimal places). The fast escaping set is a spider's web.}\label{pic1} \end{center} \end{figure} The dynamics of such linearizers, and in particular the fast escaping set, were studied in \cite{MBP} by Mihaljevic-Brandt and Peter. They showed that if $c$ is not in the Mandelbrot set, then a linearizer about a fixed point of $z^2+c$ has a spider's web structure for its fast escaping set, see for example Figure \ref{pic1} which was produced by Doug Macclure. We briefly recall the notion of the fast escaping set here. 
Recall that the escaping set of a holomorphic function is defined by \[ I(f) = \{ z\in \mathbb{C} : f^n(z) \to \infty \},\] and was first studied by Eremenko \cite{E} for transcendental entire functions. The fast escaping set is defined by \[ A(f) = \{ z \in \mathbb{C} : \exists P \in \mathbb{N} \text{ such that } |f^{n+P}(z)| \geq M^n(R,f) \text{ for all } n \in \mathbb{N}\}, \] where $ M(R,f) = \max \{ |f(x) | : |x|=R \}$ is the maximum modulus and $M^n(R,f)$ denotes the iterated maximum modulus, e.g. $M^2(R,f) = M(M(R,f),f)$. The fast escaping set was introduced by Bergweiler and Hinkkanen \cite{BH} and has been extensively studied, see for example \cite{MBP,RS,RS2012}. Quasiregular mappings are a natural higher dimensional analogue of holomorphic functions in the plane, and so the iteration of quasiregular mappings is a natural higher dimensional counterpart to complex dynamics. An important point here is that quasiregular mappings satisfy analogues of Picard's Theorem and Montel's Theorem, both key results for complex dynamics. Rickman's monograph \cite{Rickman} is a foundational reference for quasiregular mappings. Briefly, quasiregular mappings are Sobolev mappings in $W^1_{m,loc}(\mathbb{R}^m)$ with a uniform bound on the distortion. See \cite{B2} for an overview of the iteration theory of quasiregular mappings, \cite{BFLM,FN} for the escaping set of quasiregular mappings, \cite{B3,BN} for the definition of the Julia set for quasiregular mappings and \cite{BDF} for the fast escaping set of quasiregular mappings of transcendental type. Here, a quasiregular mapping of transcendental type is one for which the limit of $f(x)$ as $|x|\to \infty$ does not exist, in direct analogy with transcendental entire functions. The aim of this article is to investigate the properties of analogues of Poincar\'{e} linearizers for quasiregular mappings.
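Before turning to the quasiregular setting, it may help to record the classical model example behind the functional equation recalled above; this is a standard fact, stated here for orientation rather than taken from the works cited.

```latex
% Model example: p(z) = z^2 has the repelling fixed point z_0 = 1 with
% multiplier p'(1) = 2. The entire function L(z) = e^z satisfies L(0) = 1 and
\[
  p(L(z)) = \left( e^{z} \right)^{2} = e^{2z} = L(2z) = L\!\left( p'(z_0)\, z \right).
\]
% Iterating gives p^k(L(z)) = L(2^k z). The order of L(z) = e^z is
% 1 = \log d / \log |p'(z_0)| with d = 2, in agreement with Valiron's
% formula for the order of a Poincar\'e function of a polynomial.
```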
There is not a surfeit of examples of quasiregular mappings with interesting dynamics, and so an aim of this paper is to bring attention to this class of quasiregular mappings and its dynamics. \subsection{Construction of linearizers and statement of results} We will first construct the analogue of Poincar\'{e} functions in the quasiregular setting. Recalling how Poincar\'e linearizers arise from repelling fixed points of polynomials, the natural analogues of polynomials in this situation are uniformly quasiregular mappings (abbreviated to uqr). These are mappings $f:\mathbb{R}^m \to \mathbb{R}^m$ for which there is a uniform bound on the distortion of all the iterates. All currently known examples extend to quasiregular mappings $S^m \to S^m$, where $S^m = \mathbb{R}^m \cup \{ \infty \}$. These were the first quasiregular mappings whose dynamics were studied \cite{IM}. For such mappings which are not injective, there are direct analogues of the Fatou and Julia sets and $\mathbb{R}^m = F(f) \cup J(f)$. The escaping set $I(f)$ is a connected neighbourhood of infinity \cite{FN} and $J(f)$ is the closure of the periodic points \cite{Siebert}. It is an open question whether $J(f)$ is the closure of the repelling periodic points, but see Theorem \ref{dense}. Suppose $f:\mathbb{R}^m \to \mathbb{R}^m$ is a uqr mapping of polynomial type and $x_0 \in J(f)$ is a repelling periodic point. In this context, a repelling periodic point $x_0$ is one for which there exist $k\in \mathbb{N}$ and a neighbourhood $U \ni x_0$ such that $f^k(x_0) = x_0$, $f^k$ is injective on $U$ and $ \overline{U} \subset f^k(U)$. An immediate obstacle is that quasiregular mappings need not be differentiable everywhere. However, Hinkkanen, Martin and Mayer \cite{HMM} consider the notion of the generalized derivative (see also \cite{GMRV}). Without loss of generality, assume that the fixed point is $x_0=0$. For $\lambda >0$ define $f_\lambda(x) = \lambda f(x/\lambda)$.
Then the set of limit mappings \[ \mathcal{D} f(0) =\{ \varphi : \varphi = \lim _{j\to \infty} f_{\lambda_j}, \text{ where } \lambda_j \to \infty \}\] is called the infinitesimal space of the uqr map $f$ at $0$, and elements of the infinitesimal space are called generalized derivatives. \begin{remark} If $f$ is differentiable at $x_0$, then $\mathcal{D}f(x_0)$ contains only the linear mapping $x\mapsto f'(x_0)x$. \end{remark} \begin{theorem} \label{linearizer} Let $f:\mathbb{R}^m \to \mathbb{R}^m$ be a uqr mapping of polynomial type with repelling fixed point $x_0$. Then there exists a quasiregular mapping $L:\mathbb{R}^m \to \mathbb{R}^m$ with $L(0) = x_0$ such that $f\circ L = L \circ T$, where $T(x) = 2x$. \end{theorem} \begin{proof} This theorem essentially combines two well-known results. By \cite[Theorem 6.3 (ii)]{HMM} there is a quasiregular mapping $\Psi : \mathbb{R}^m \to \mathbb{R}^m$ and a generalized derivative $\varphi$ such that $f\circ \Psi = \Psi \circ \varphi$. Here, $\varphi$ is a loxodromic uniformly quasiconformal map which fixes $0$ and infinity, and $0$ is the repelling fixed point. Then by \cite[Theorem 1.2]{HM}, $\varphi$ is quasiconformally equivalent to $T(x) = 2x$, i.e. there exists a quasiconformal mapping $g:\mathbb{R}^m \to \mathbb{R}^m$ such that $\varphi \circ g = g\circ T$. Hence \[ f\circ \Psi = \Psi \circ g \circ T \circ g^{-1}\] and we may take $L = \Psi \circ g$. \end{proof} This mapping $L$ is called a Poincar\'{e} function or linearizer of $f$ at $x_0$, and is the mapping we will study. \begin{remark} Unlike in the holomorphic case, a Poincar\'{e} function of a uqr mapping is not uniquely determined by the mapping and the repelling fixed point, since the infinitesimal space $\mathcal{D}f(x_0)$ may contain more than one mapping. We observe that by \cite[Lemma 4.4]{HMM} if one generalized derivative in an infinitesimal space is loxodromic repelling, then they all are.
\end{remark} \begin{remark} We may have chosen $T(x) = \lambda x$ for any $\lambda >1$. This will change the mapping $L$, but not its dynamical properties. Recall that $p'(z_0)$ is a fixed complex number, whereas in this quasiregular situation we are free to choose the multiplicative factor in the map we conjugate $f$ to about $x_0$. \end{remark} The first main result concerns the order of growth of such a linearizer. The order of growth of a quasiregular mapping $f:\mathbb{R}^m \to \mathbb{R}^m$ is defined by \[ \rho_f = \limsup_{r \to \infty} (m-1) \frac{ \log \log M(r,f) }{\log r},\] and the lower order is \[ \lambda_f = \liminf_{r \to \infty} (m-1) \frac{ \log \log M(r,f) }{\log r}.\] Recall that the order of a linearizer of a polynomial $p$ of degree $d$ about $x_0$ is given by $\log d / \log |p'(x_0)|$, see \cite{Valiron}. \begin{theorem} \label{growth} Let $f:\mathbb{R}^m \to \mathbb{R}^m$ be a $K$-uqr mapping of degree $d>K$ with repelling fixed point $x_0$ and let $L$ be a linearizer of $f$ about $x_0$ conjugating $f$ to $T$. Then the order $\rho_L$ of $L$ satisfies \[ \frac{ \log d - \log K }{ \log 2} \leq \rho_L \leq \frac{ \log d + \log K}{ \log 2}.\] The same holds for the lower order. \end{theorem} \begin{remark} In view of the fact that we may have chosen $T(x) = \lambda x$ for any $\lambda >1$, Theorem \ref{growth} implies that we can construct linearizers of arbitrarily large or arbitrarily small positive order at a repelling fixed point of a uqr mapping. \end{remark} We say that $y\in\mathbb{R}^m$ is an omitted value of $L$ if there is no $x\in \mathbb{R}^m$ such that $L(x)=y$. By Rickman's Theorem \cite{Rickman}, $L$ can only omit finitely many values. We write $\mathcal{O}(L)$ for the set of omitted values of $L$. A point $x\in \mathbb{R}^m$ is called an exceptional value for a uqr mapping $f:\mathbb{R}^m \to \mathbb{R}^m$ if the backwards orbit $O^-(x) = \cup_{n\geq 1} f^{-n}(x)$ under $f$ is finite.
We write $\mathcal{E}(f)$ for the set of exceptional values of $f$. When $f\circ L = L\circ T$, we can relate the omitted values of $L$ to the exceptional values of $f$, compare \cite[Proposition 4.1]{MBP}. \begin{theorem} \label{omitted} With the hypotheses of Theorem \ref{growth}, if $L$ is a linearizer of $f$ at $x_0$, we have $\mathcal{O}(L) = \mathcal{E}(f) \setminus \{ x_0 \}$. \end{theorem} For a quasiregular mapping $f:\mathbb{R}^m \to \mathbb{R}^m$ of transcendental type, the Julia set of $f$ is defined in \cite{BN} to be \[ J(f) = \{ x\in \mathbb{R}^m : \operatorname{cap} \left ( \mathbb{R}^m \setminus O_f^+(U) \right ) = 0, \text{ for all open sets } U \ni x\},\] where $O_f^+(U)$ denotes the forward orbit of $U$ under $f$. We refer to \cite{BN} for the technical definition of capacity. The Julia set of a Poincar\'e linearizer has many properties in analogy with the Julia set of a transcendental entire function. \begin{theorem} \label{julialin} With the hypotheses of Theorem \ref{growth}, we have \begin{enumerate}[(i)] \item $J(L) \subset \overline{O_L^-(x)}$ for all $x\in \mathbb{R}^m \setminus \mathcal{E}(L)$, \item $J(L) = \overline{O_L^-(x)}$ for all $x\in J(L) \setminus \mathcal{E}(L)$, \item $\mathbb{R}^m \setminus O_L^+(U) \subset \mathcal{E}(L)$ for every open set $U$ intersecting $J(L)$, \item $J(L)$ is perfect, \item $J(L^k) = J(L)$ for all $k\in \mathbb{N}$, \item $J(L) = \partial A(L)$, recalling that $A(L)$ is the fast escaping set. \end{enumerate} \end{theorem} We next consider specific uqr mappings and their linearizers. Suppose that \begin{equation} \label{tame} J(f) \text{ is a tame Cantor set.} \end{equation} Here, a tame Cantor set $E$ is one such that there is a homeomorphism $\psi : \mathbb{R}^m \to \mathbb{R}^m$ with $\psi(E)$ equal to a standard one-thirds Cantor set contained in a line. Cantor sets which are not tame are called wild, for example Antoine's necklace. Every Cantor set in the plane is tame.
See \cite{HR} for wild Cantor sets in the context of quasiregular mappings. \begin{remark} Condition \eqref{tame} may appear restrictive, but there are plenty of quasiregular mappings satisfying it. In \cite{MP}, Martin and Peltonen show that every quasiregular mapping $h:S^m \to S^m$ can be decomposed as $h=f\circ \varphi$, where $\varphi$ is a quasiconformal map and $f$ is a uqr mapping for which $J(f)$ is a quasiconformally tame Cantor set. \end{remark} Before we discuss dynamical properties of $L$, we first state the following result, which extends the known cases for when the Julia set of a uniformly quasiregular mapping is the closure of the repelling periodic points, compare with \cite{Siebert}. \begin{theorem} \label{dense} Let $f:\mathbb{R}^m \to \mathbb{R}^m$ be a uniformly quasiregular mapping of polynomial type and suppose that $J(f)$ is a tame Cantor set. Then the repelling periodic points are dense in $J(f)$. \end{theorem} \begin{definition} A set $E\subset \mathbb{R}^m$ is called a \emph{spider's web} if $E$ is connected and there exists a sequence of bounded topologically convex domains $G_n$ with $G_n \subset G_{n+1}$, for $n \in \mathbb{N}$, $\partial G_n \subset E$ and $\bigcup_{n \in \mathbb{N}} G_n = \mathbb{R}^m$. \end{definition} We recall that a topologically convex domain is one for which the only components of the complement are unbounded. With this definition, $\mathbb{R}^m$ is itself a spider's web, but since every quasiregular mapping of transcendental type has infinitely many periodic points by \cite[Theorem 4.5]{Siebert2}, the fast escaping set can never be $\mathbb{R}^m$. We now state our final theorem. \begin{theorem} \label{mainthm} Let $f:\mathbb{R}^m \to \mathbb{R}^m$ be a uqr mapping of polynomial type whose Julia set $J(f)$ is a tame Cantor set and let $x_0 \in J(f)$ be a repelling fixed point. If $L$ is a linearizer of $f$ at $x_0$, then $A(L)$ is a spider's web.
\end{theorem} \begin{remark} In \cite[Theorem 1.6]{BDF}, it is shown that if the minimum modulus of a quasiregular mapping $f$ of transcendental type is comparable to the maximum modulus on every annulus of the form $A(r,Cr)$ for some $C>1$, then $A(f)$ is a spider's web. This theorem cannot be used here since we do not have such estimates for linearizers. However, we prove Theorem \ref{mainthm} by showing that the minimum modulus is comparable to the maximum modulus on annuli of the form $A(r,r^{\mu})$ for some $\mu >1$. \end{remark} The rest of the paper is organized as follows. In section 2, we cover material from the iteration theory of quasiregular mappings, in section 3 we prove Theorem \ref{growth}, in section 4 we prove Theorem \ref{omitted}, in section 5 we prove Theorem \ref{julialin}, in section 6 we prove Theorem \ref{dense}, and finally in section 7 we prove Theorem \ref{mainthm}. The author would like to thank Dan Nicks for helpful comments that improved the paper. \section{Preliminaries} Throughout we will write \[ B(x,R) = \{ y\in \mathbb{R}^m : |y-x| <R \} \] for the ball of radius $R$ centred at $x$ and \[ A(R_1,R_2) = \{ y\in \mathbb{R}^m : R_1 < |y| < R_2 \} \] for the annulus centred at $0$ with radii $R_1,R_2$. \subsection{Quasiregular maps} A mapping $f:E \rightarrow \mathbb{R}^{m}$ defined on a domain $E \subseteq \mathbb{R}^{m}$ is called quasiregular if $f$ belongs to the Sobolev space $W^{1}_{m, loc}(E)$ and there exists $K \in [1, \infty)$ such that \begin{equation} \label{eq2.1} \arrowvert f'(x) \arrowvert ^{m} \leq K J_{f}(x) \end{equation} almost everywhere in $E$. Here $J_{f}(x)$ denotes the Jacobian determinant of $f$ at $x \in E$. The smallest constant $K \geq 1$ for which (\ref{eq2.1}) holds is called the outer dilatation $K_{O}(f)$.
If $f$ is quasiregular, then we also have \begin{equation} \label{eq2.2} J_{f}(x) \leq K' \inf _{\arrowvert h \arrowvert =1} \arrowvert f'(x) h \arrowvert ^{m} \end{equation} almost everywhere in $E$ for some $K' \in[1, \infty)$. The smallest constant $K' \geq 1$ for which (\ref{eq2.2}) holds is called the inner dilatation $K_{I}(f)$. The dilatation $K=K(f)$ of $f$ is the larger of $K_{O}(f)$ and $K_{I}(f)$, and we then say that $f$ is $K$-quasiregular. Informally, a quasiregular mapping sends infinitesimal spheres to infinitesimal ellipsoids with bounded eccentricity. A foundational result in the theory of quasiregular mappings is Rickman's Theorem, which states that a non-constant quasiregular mapping $f:\mathbb{R}^m\to \mathbb{R}^m$ can only omit $q(m,K)$ many values, where $q$ depends only on $m$ and $K$. Quasiregular mappings are a generalization of analytic and meromorphic functions in the plane; see Rickman's monograph \cite{Rickman} for many more details. In particular, quasiregular mappings are open and discrete. A quasiregular mapping $f:\mathbb{R}^m \to \mathbb{R}^m$ is said to be of polynomial type if $|f(x)| \to \infty$ as $|x| \to \infty$, whereas it is said to be of transcendental type if this limit does not exist and hence $f$ has an essential singularity at infinity. This is in direct analogy with polynomials and transcendental entire functions in the plane. Denote by $i(x,f)$ the local index of $f$ at $x$. The following result shows that quasiregular mappings are locally H\"older continuous. \begin{theorem}[{\cite[Theorem III.4.7]{Rickman}}] \label{rick2} Let $f:E\to \mathbb{R}^m$ be quasiregular and non-constant, and let $x\in E$. Then there exist positive numbers $\rho, A,B$ such that for $y\in B(x,\rho)$, \[ A|y-x|^{\nu} \leq |f(x)-f(y)| \leq B|y-x|^{\mu},\] where $\nu = (K_O(f)i(x,f))^{1/(m-1)}$ and $\mu = (i(x,f)/K_I(f))^{1/(m-1)}$.
\end{theorem} \subsection{Iteration of quasiregular mappings} The composition of two quasiregular mappings is again a quasiregular mapping, but the dilatation typically increases. A quasiregular mapping $f$ is called uniformly $K$-quasiregular, or $K$-uqr, if the dilatation of each iterate $f^k$ of $f$ is bounded above by $K$. For uniformly quasiregular mappings, there are direct analogues of the Fatou set $F(f)$ and Julia set $J(f)$ for holomorphic mappings in the plane. In this case, the boundary of the escaping set coincides with the Julia set. The following was proved in \cite{FN2}. \begin{theorem}[\cite{FN2}] \label{unifperf} Let $f:S^m \to S^m$ be uniformly quasiregular. Then $J(f)$ is $\alpha$-uniformly perfect, that is, if $R$ is any ring domain separating $J(f)$, then the conformal modulus of $R$ is at most $\alpha$, for some $\alpha >0$. \end{theorem} This result essentially says that any ring domain separating $J(f)$ cannot be too thick. We next prove a presumably well-known result on the growth of quasiregular mappings of polynomial type near infinity. \begin{lemma} \label{holderinf} Let $h:\mathbb{R}^m \to \mathbb{R}^m$ be a $K$-quasiregular mapping of polynomial type of degree $d>K$. Then there exist $R_0>0$ and positive constants $C_1,C_2$ such that \[ C_1 ^{ q_j ( (d/K)^{1/(m-1)} ) } |x|^{ (d/K)^{j/(m-1)} } \leq |h^j(x)| \leq C_2 ^{ q_j ( (dK)^{1/(m-1)} ) } |x|^{ (dK)^{j/(m-1)} }, \] for $|x| >R_0$, where $q_j$ is the polynomial $q_j(y) = y^{j-1} + y^{j-2} + \ldots + y +1$. \end{lemma} \begin{proof} By the hypotheses, a neighbourhood of infinity is contained in $I(h)$, see \cite{FN}, and so infinity is an attracting fixed point of $h$. 
By the H\"{o}lder continuity for quasiregular mappings, Theorem \ref{rick2}, and conjugating by the M\"{o}bius map $x \mapsto x/|x|^2$, it is not hard to see that there exist $R_0>0$ such that $\{x\in \mathbb{R}^m : |x|>R_0\} \subset I(h)$ and positive constants $C_1,C_2$ such that \[ C_1 |x|^{(d/K)^{1/(m-1)} }\leq |h(x)| \leq C_2 |x|^{(dK)^{1/(m-1)} },\] for $|x|>R_0$. The result then follows by induction. \end{proof} \subsection{The fast escaping set} We summarize some of the results from \cite{BDF} on the fast escaping set. Let $f:\mathbb{R}^m \to \mathbb{R}^m$ be a quasiregular mapping of transcendental type. \begin{defna} The fast escaping set is \[ A(f) = \{ x\in \mathbb{R}^m: \text{ there exists }P\in \mathbb{N} : |f^{n+P}(x)| \geq M^n(R,f), \text{ for all } n\in \mathbb{N} \},\] where $R>0$ is any value such that $M^n(R,f) \to \infty$ as $n\to \infty$. \end{defna} The fast escaping set does not depend on the particular value of $R$. For such values of $R$, the fast escaping set with respect to $R$ is \[ A_R(f) = \{ x \in \mathbb{R}^m : |f^n(x)| \geq M^{n}(R,f), n \in \mathbb{N}\}. \] \begin{comment} and its $P$'th level is \[ A_R^P(f) = \{ x \in \mathbb{R}^m : |f^n(x)| \geq M^{n+P}(R,f), n \in \mathbb{N}, n \geq -P \}. \] We can then express the fast escaping set as \begin{equation*} A(f) = \bigcup _{P\in \mathbb{N}} A_R^{-P}(F). \end{equation*} \end{comment} By \cite[Theorem 1.2]{BDF}, $A(f)$ is non-empty and every component of $A(f)$ is unbounded. We will use the following characterization of spider's webs for $A_R(f)$. \begin{lemma}[Proposition 6.5, \cite{BDF}] \label{char} Let $f$ be a quasiregular mapping of transcendental type. Then $A_R(f)$ is a spider's web if and only if there exists a sequence $G_n$ of bounded topologically convex domains such that for all $n \geq 0$, \[ B(0,M^n(R,f)) \subset G_n,\] and $G_{n+1}$ is contained in a bounded component of $\mathbb{R}^m \setminus f(\partial G_n)$.
\end{lemma} If $A_R(f)$ is a spider's web, then $A(f)$ is a spider's web, since $A_R(f) \subset A(f)$ and every component of $A(f)$ is unbounded. \section{Order of growth} \begin{proof}[Proof of Theorem \ref{growth}] Let $R_0$ be the constant from Lemma \ref{holderinf} and suppose that $M(r,L) \geq R_0$ for $r \geq r_1$. Fix $r_0 \geq r_1$ and let $r\in [r_0, 2r_0]$. Then by Lemma \ref{holderinf} and the fact that $M(r,L)$ is increasing in $r$, \[ C_1 ^{ q_j ( (d/K)^{1/(m-1)} ) } M(r_0,L)^{ (d/K)^{j/(m-1)} } \leq M(r,f^j \circ L) \leq C_2 ^{ q_j ( (dK)^{1/(m-1)} ) } M(2r_0,L)^{ (dK)^{j/(m-1)} }, \] where $q_j(y) = (y^j-1)/(y-1)$. Since $\log q_j(y) = j \log y +O(1)$ as $j \to \infty$, $f^j \circ L = L\circ T^j$ and $M(r,L \circ T^j) = M(2^jr, L)$ we have \[ j \log \left ( \frac{d}{K} \right )^{1/(m-1)} -O(1) \leq \log \log M(2^jr, L) \leq j \log \left ( dK \right) ^{1/(m-1)} +O(1),\] uniformly for $r\in[r_0,2r_0]$ as $j\to \infty$, and hence \[ \frac{ \log \left ( d/K \right )^{1/(m-1)} \log 2^j r }{\log 2} -O(1) \leq \log \log M(2^jr, L) \leq \frac{ \log \left ( dK \right) ^{1/(m-1)} \log 2^jr}{\log 2} +O(1),\] which gives \[ \frac{ \log \left ( d/K \right )^{1/(m-1)} }{\log 2} -o(1) \leq \frac{ \log \log M(2^jr, L)}{\log 2^j r} \leq \frac{ \log \left ( dK \right) ^{1/(m-1)}}{\log 2} +o(1)\] uniformly for $r\in[r_0,2r_0]$ as $j\to \infty$. These two inequalities imply the result for the order and the lower order. \end{proof} \section{Omitted values} \begin{proof}[Proof of Theorem \ref{omitted}] Since $L(0) = x_0$, the point $x_0$ is never omitted. If $y\in \mathbb{R}^m \setminus \mathcal{E}(f)$, then the backward orbit $O^-(y)$ has infinitely many elements. Since $L$ omits at most $q(m,K)$ many values by Rickman's Theorem, $O^-(y)$ must intersect $L(\mathbb{R}^m)$. That is, there exists $k\in \mathbb{N}$ and $w\in \mathbb{R}^m$ with $L(w) \in f^{-k}(y)$. Therefore $y=f^k(L(w)) = L(2^kw)$ and so $y\notin \mathcal{O}(L)$. 
Hence $\mathcal{O}(L) \subset \mathcal{E}(f) \setminus \{x_0\}$. Next, let $y\in \mathbb{R}^m \setminus \mathcal{O}(L)$. If $y=x_0$, there is nothing to prove, so suppose $y\neq x_0$. Then there exists $x\neq 0$ with $L(x) = y$. Hence $L(x/2^k) \in f^{-k}(y)$ by the iterated functional equation. Since $x\neq 0$ and $L$ is injective in a neighbourhood of $0$, the backwards orbit of $y$ has infinitely many elements. \end{proof} \section{The Julia set of $L$} In this section we prove Theorem \ref{julialin}. As defined in \cite{BN}, a quasiregular mapping $f:\mathbb{R}^m\to \mathbb{R}^m$ has the {\it pits effect} if there exists $N\in \mathbb{N}$ such that for all $c>1$ and all $\epsilon >0$, there exists $R_0$ such that if $R>R_0$, then the set \[ \{ x\in \mathbb{R}^m : R\leq |x| \leq cR, |f(x)|\leq 1 \}\] can be covered by $N$ balls of radius $\epsilon R$, that is, the set where $f$ is small is not too large. By \cite[Theorem 1.8]{BN}, if a quasiregular mapping of transcendental type does not have the pits effect, then the Julia set has the properties given in the statement of Theorem \ref{julialin} $(i)-(v)$. Hence parts $(i)-(v)$ of the theorem are proved by the following lemma. \begin{lemma} \label{julialemma} Let $L:\mathbb{R}^m \to \mathbb{R}^m$ be a Poincar\'e linearizer. Then $L$ does not have the pits effect. \end{lemma} \begin{proof} As noted after Definition 1.2 in \cite{BN}, it suffices to show that there is a sequence $(x_k)_{k=1}^{\infty}$ tending to $\infty$ such that $\limsup_{k\to \infty} |x_{k+1}|/|x_k| < \infty$ and \begin{equation} \label{juliaeq1} |L(x_k)| \leq C \end{equation} for all $k\in \mathbb{N}$ and some positive constant $C$. Recall from Theorem \ref{unifperf} that the Julia set of a uqr mapping $f$ is $\alpha$-uniformly perfect, that is, if $E$ is any ring domain in $F(f)$ separating points of $J(f)$ then the conformal modulus of $E$ is at most $\alpha$ for some $\alpha$ depending on $f$. 
Suppose that $L$ is $K$-quasiregular and that $D>0$ is chosen so that $\frac{1}{2K} \log D > \alpha$. Next, since $0$ is not a branch point of $L$, there exists a neighbourhood $U$ of $0$ such that $L \arrowvert_{U}$ is injective and we may assume that $L(U) \cap J(f) \neq J(f)$. Consider the annulus $A=A(r,Dr)$, and choose $j \in \mathbb{N}$ large enough such that $T^{-j}(A) = A(2^{-j}r, 2^{-j}Dr) \subset U$. Then $L(T^{-j}(A))$ is a ring domain separating points of $J(f)$ with modulus greater than $\alpha$ and hence must intersect $J(f)$. Therefore, with $C = \max _{x \in J(f)} |x|$, there exists $y \in L(U) \cap J(f)$ which satisfies \[ |f^j(y)| \leq C,\] for all $j \in \mathbb{N}$. There exists $y' \in U$ such that $L(y') = y$ and $T^j(y') \in A$. Hence $T^j(y')$ is a point in $A(r,Dr)$ with $|L(T^j(y'))| = |f^j(L(y'))| \leq C$. This argument shows that we can find a sequence of points $x_k \in A(D^{k-1}r,D^kr)$ for $k\in \mathbb{N}$ which satisfy \eqref{juliaeq1} with this $C$ and such that $|x_{k+1}|/|x_k| \leq D^2$. This proves the lemma. \end{proof} For part $(vi)$, it is shown in \cite{BFN} that for a quasiregular mapping of transcendental type of positive lower order, the Julia set and the boundary of the fast escaping set agree. Hence part $(vi)$ follows from Theorem \ref{growth}. \section{Density of repelling periodic points} \begin{proof}[Proof of Theorem \ref{dense}] By a result contained in Siebert's thesis \cite{Siebert}, and see also \cite[Theorem 4.1]{B2} and the discussion preceding it, the periodic points are dense in the Julia set of a uniformly quasiregular mapping. Suppose that $f:\mathbb{R}^m \to \mathbb{R}^m$ is a uqr mapping of polynomial type and that $J(f)$ is a tame Cantor set. Let $x_0$ be a periodic point of period $p$, and write $F=f^p$ so that $x_0$ is a fixed point of $F$. Let $U_{\delta}$ be a neighbourhood of $x_0$ such that the diameter of $U_{\delta}$ is at most $\delta$ and $\partial U_{\delta} \subset I(f)$.
That such neighbourhoods exist follows from the tameness of $J(f)$ and the fact that every bounded component of $F(f)$ is simply connected since $J(f) = \partial I(f)$. Let $\delta_n \to 0$ be such that $\overline{U_{\delta_{n+1}}} \subset U_{\delta_n}$. We must have that $F$ is injective on a neighbourhood of $x_0$ since otherwise $x_0$ is an attracting fixed point by Theorem \ref{rick2} (see also \cite[equation (7)]{HMM}). This is impossible since there are escaping points arbitrarily close to $x_0$. Find $\epsilon >0$ so that $F$ is injective on $B(x_0,\epsilon)$. We may assume $\delta_n <\epsilon$ for all $n$. Choose $n$ large enough so that $U_{\delta_n}$ is a very small neighbourhood of $x_0$ and so that there exists $k\in \mathbb{N}$ with $\overline{U_{\delta_n}} \subset F^k(U_{\delta_n}) \subset B(x_0,\epsilon)$. That we can find such a $k$ follows since $\partial U_{\delta_n} \subset I(F)$ (recall $I(F)=I(f)$) and as long as $F^j(U_{\delta_n}) \subset B(x_0,\epsilon)$, the mapping $F^j |_{U_{\delta_n}}$ is $K$-quasiconformal, and then $\max _{x\in U_{\delta_n}} |F^j(x)| / \min_{x\in U_{\delta_n}} |F^j(x)| <C$ for some $C$ depending on $K$. Hence by the topological definition of fixed points, see \cite[p.90]{HMM}, $x_0$ is a repelling fixed point of $F^k$. Finally, by \cite[Proposition 4.6]{HMM}, $x_0$ is a repelling periodic point of $f$. \end{proof} \section{The fast escaping set of $L$} Suppose that $f:\mathbb{R}^m \to \mathbb{R}^m$ is a $K$-uqr mapping of polynomial type, that $J(f)$ is a tame Cantor set and that $x_0$ is a repelling periodic point guaranteed by Theorem \ref{dense}. We may assume that the degree $d$ of $f$ is greater than $K$: otherwise, consider an iterate $f^n$ of $f$, which, since $f$ is uqr, is still $K$-quasiregular while its degree is $d^n$, and choose $n$ so that $d^n >K$. Note that $J(f^n) = J(f)$ by \cite[Corollary 3.3]{HMM}. Let $L$ be a linearizer of $f$ at $x_0$, conjugating $f$ to $T(x) =2x$.
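Before proceeding, we record the functional equation satisfied by $L$ and its iterated form, which are used repeatedly in the estimates above and below:

```latex
% The linearizer L conjugates f to the map T(x) = 2x:
\[ L(2x) \;=\; f(L(x)), \qquad x \in \mathbb{R}^m, \]
% and hence, by induction on j,
\[ f^j \circ L \;=\; L \circ T^j, \qquad \text{that is,} \qquad
   f^j(L(x)) = L(2^j x), \qquad j \in \mathbb{N}. \]
```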
Throughout this section, we will write \[ \beta = \left ( \frac{d}{K} \right ) ^{1/(m-1)}.\] \subsection{H\"{o}lder continuity and growth estimates} We next use the H\"{o}lder estimate for the iterates of $f$ from Lemma \ref{holderinf} to obtain growth estimates for the linearizer $L$. \begin{lemma} \label{reggrowth} There exists $R_1>0$ such that \[ \prod _{i=0}^{n-1} \left ( \left ( \frac{d}{K} \right ) ^{1/(m-1)} + \frac{ \log C_1}{\log M(2^ir,L) } \right ) \leq \frac { \log M(2^nr,L)}{\log M(r,L) } \leq \prod _{i=0}^{n-1} \left ( (dK ) ^{1/(m-1)} + \frac{ \log C_2}{\log M(2^ir,L) } \right ),\] for $r>R_1$, where the constants $C_1,C_2$ are from Lemma \ref{holderinf}. \end{lemma} \begin{proof} Denote by $S_r = \{ x\in \mathbb{R}^m : |x| = r \}$ the sphere of radius $r$ centred at $0$. Let $r$ be large and let $y \in S_r$ be such that $|L(y)| \geq |L(x)|$ for all $x \in S_r$. Let $w = L(y)$ so that $|w| = M(r,L)$. Then by the functional equation for $L$ and Lemma \ref{holderinf}, \begin{align*} \log M(2r,L) & = \log M(r,f \circ L) \\ &= \log M( L(S_r),f )\\ & \geq \log |f(w)| \\ & \geq \log C_1 +(d/K)^{1/(m-1)} \log |w| \\ &= \log C_1 + (d/K)^{1/(m-1)} \log M(r,L). \end{align*} Also, \begin{align*} \log M(2r,L) & \leq \log M(M(r,L),f) \\ & \leq \log C_2 + (dK)^{1/(m-1)} \log M(r,L), \end{align*} and so we have \[ \left( \frac{d}{K} \right ) ^{1/(m-1)} + \frac{\log C_1}{\log M(r,L)} \leq \frac{ \log M(2r,L)}{\log M(r,L) } \leq (dK)^{1/(m-1)}+ \frac{\log C_2}{\log M(r,L)}.\] By induction, we have the result. \end{proof} \begin{lemma} \label{reggrowth2} Let $\mu >1$. There exist $R_2>0$ and $N\in \mathbb{N}$ such that for all $R>R_2$, the sequence defined by \begin{equation}\label{eq:reg1} r_n = 2^n M^n(R,L) \end{equation} satisfies \[ M(r_n,L) > r_{n+1}^{\mu},\] for $n\geq N$. \end{lemma} \begin{proof} Assume $R$ is large.
With the sequence $r_n$ defined by \eqref{eq:reg1}, applying Lemma \ref{reggrowth} with $r = M^n(R,L)$ yields \begin{align*} \log M(r_n,L) & = \log M(2^nM^n(R,L),L) \\ & \geq \prod _{i=0}^{n-1} \left ( \left ( \frac{d}{K} \right )^{1/(m-1)} + \frac {\log C_1 }{\log M(2^i M^n (R,L) , L) } \right ) \cdot \log M( M^n(R,L) , L) \\ &\geq \left ( \beta - \frac{ \log C_1}{\log R} \right ) ^n \log M^{n+1}(R,L). \end{align*} Now, \begin{align*} \log r_{n+1}^{\mu} & = \mu \log ( 2^{n+1} M^{n+1}(R,L) ) \\ &= \mu (n+1) \log 2 + \mu \log M^{n+1}(R,L). \end{align*} Hence the result is true if \[ \log M^{n+1}(R,L) \left ( \left ( \beta - \frac{\log C_1}{\log R} \right )^n - \mu \right ) > \mu (n+1) \log 2.\] This is so if we choose $R$ large enough and $n$ large enough so that $\beta^n > \mu$. \end{proof} We next prove a lemma on the growth of the minimum modulus of $L$. \begin{lemma} \label{minmodgrowth} Suppose that $f:\mathbb{R}^m \to \mathbb{R}^m$ is $K$-uqr of degree $d>K$, $J(f)$ is a tame Cantor set, $x_0$ is a repelling fixed point and $L$ is the corresponding linearizer. Let \[ \mu > \frac{ \log d + \log K}{\log d- \log K}.\] There exists $R_3>0$ such that for $r>R_3$ there is a continuum $\Gamma^r$ separating $S_r$ and $S_{r^{\mu}}$ such that \[ m(\Gamma^r,L) > M(r,L).\] \end{lemma} \begin{proof} There exists a neighbourhood $U$ of $0$ such that $L |_{U}$ is injective. Let $\delta >0 $ be small enough that $B(x_0,\delta) \subset L(U)$. Since $J(f)$ is a tame Cantor set, there exists a topologically convex neighbourhood $V$ of $x_0$ such that $\gamma_\delta := \partial V \subset B(x_0,\delta) \cap I(f)$. Let $\Gamma _{\delta} = L^{-1}(\gamma_{\delta}) \cap U$. Then $\Gamma_{\delta}$ is a continuum which separates $0$ from infinity. Suppose that $\Gamma_{\delta} \subset A(s,t)$ and without loss of generality, we may assume that $t<1$. 
Let $r$ be large and find $\ell_1,\ell_2 \in \mathbb{N}$ such that \[ 2^{\ell_1 -1} \leq r < 2^{\ell_1},\] and \[ t \cdot 2^{\ell_2} \leq r^{\mu} < t \cdot 2^{\ell_2+1}.\] This pair of inequalities implies that \begin{equation} \label{minmodeq0} \mu \ell_1 - D_1 < \ell_2 < \mu \ell_1 -D_2, \end{equation} where $D_1 = \log t / \log 2 + \mu +1$ and $D_2 = \log t / \log 2$. Next, since $\gamma_{\delta} \subset I(f)$ is compact, find $j \in \mathbb{N}$ minimal such that $f^{j}(\gamma_{\delta}) \subset \{ |x| > R_0\}$ and so we can apply Lemma \ref{holderinf}. Define $\Gamma^r = \{ x \in \mathbb{R}^m : 2^{-\ell_2}\cdot x \in \Gamma_{\delta} \}$. Then $\Gamma^r$ separates $S_r$ and $S_{r^{\mu}}$. We first estimate the minimum modulus on $\Gamma^r$: \begin{align*} \log m(\Gamma^r,L) & = \log m( \Gamma^r, f^{\ell_2} \circ L \circ T^{-\ell_2} ) \\ & = \log m( \Gamma_{\delta}, f^{\ell_2} \circ L ) \\ & = \log m( \gamma_{\delta}, f^{\ell_2} ) \\ & \geq \log m(R_0, f^{\ell_2 - j} ) \\ & \geq q_{\ell_2 - j} ( (d/K)^{1/(m-1)} ) \log C_1 + (d/K)^{(\ell_2 - j)/(m-1)} \log R_0. \end{align*} Next, \begin{align*} \log M(r,L) & = \log M(L( S_{2^{-\ell_1}r} ), f^{\ell_1} ) \\ & \leq \log M( R_0, f^{\ell_1} ) \\ & \leq q_{\ell_1}( (dK)^{1/(m-1)} ) \log C_2 + (dK)^{\ell_1 / (m-1)} \log R_0 . \end{align*} Since $y^{j-1} \leq q_j(y) \leq y^{j}$, we obtain \begin{equation} \label{minmodeq1} \log m(\Gamma^r, L) \geq \left ( \frac{d}{K} \right ) ^{(\ell_2 - j -1)/(m-1) } \log C_1 + \left ( \frac{d}{K} \right ) ^{(\ell_2 - j)/(m-1) } \log R_0 \end{equation} and \begin{equation} \label{minmodeq2} \log M(r,L) \leq (dK)^{\ell_1 / (m-1) } \log (C_2R_0). \end{equation} Using \eqref{minmodeq0} and \eqref{minmodeq1}, we obtain \begin{equation} \label{minmodeq3} \log m(\Gamma^r, L) \geq \left ( \frac{d}{K} \right ) ^{ ( \mu\ell_1 - D_1 - j -1)/(m-1)} \log C_1 + \left ( \frac{d}{K} \right ) ^{ (\mu \ell_1 - D_1 - j )/(m-1) } \log R_0. \end{equation} Recall that $\beta = (d/K)^{1/(m-1)} >1$. 
Therefore, to obtain $\log m(\Gamma^r, L) \geq \log M(r,L)$, by \eqref{minmodeq2} and \eqref{minmodeq3} and rearranging, it suffices to show that \[ \beta^{ \mu \ell_1 - D_1 - j - \ell_1 - 1}( \log C_1 + \beta \log R_0) > K^{2\ell_1/(m-1)} \log (C_2R_0).\] Recalling that $\ell_1$ depends on $r$ and writing $C$ for a constant which does not depend on $r$, by taking logarithms this can be written as \[ \ell_1 \left ( (\mu -1 )\log \beta - \frac{2 \log K}{m-1} \right ) \geq C.\] This is satisfied for large enough $r$, that is for large enough $\ell_1$, if \[ (\mu-1) \log \beta > \frac{2 \log K}{m-1} ,\] that is, if \[ \mu > 1 + \frac{ 2\log K}{(m-1)\log \beta } = \frac{\log d + \log K}{\log d - \log K}.\] \end{proof} \subsection{$A(L)$ is a spider's web} We will use the characterization of spider's webs given in Lemma \ref{char}. Recalling Lemmas \ref{holderinf}, \ref{reggrowth}, \ref{reggrowth2} and \ref{minmodgrowth}, let $R>\max \{ R_0,R_1,R_2,R_3 \}$ and for $n \in \mathbb{N}$, let $r_n = 2^n M^{n}(R,L)$ and let $\mu > \frac{ \log d + \log K}{\log d- \log K}$. By Lemma \ref{minmodgrowth}, there is a continuum $\Gamma^{r_n}$ separating $S_{r_n}$ and $S_{r_{n}^{\mu}}$ such that \[ m(\Gamma^{r_n}, L) >M(r_n,L).\] We define $G_n$ to be the interior of $\Gamma^{r_n}$. Then by construction, every $G_n$ is a bounded topologically convex domain with \[ G_n \supset \{ x \in \mathbb{R}^m : |x| <r_n \} \supset \{ x \in \mathbb{R}^m : |x| <M^n(R,L) \}.\] Further, it follows from Lemmas \ref{minmodgrowth} and \ref{reggrowth2} that \[ m(\partial G_n, L) = m(\Gamma^{r_n},L) > M(r_n,L) > r_{n+1}^{\mu} > \max _{x \in \partial G_{n+1} } |x|,\] and hence $G_{n+1}$ is contained in a bounded component of $\mathbb{R}^m \setminus L(\partial G_n )$. We have thus fulfilled the conditions of Lemma \ref{char}, so $A_R(L)$ is a spider's web and hence so is $A(L)$.
\section{Introduction} Due to numerous applications in autonomous vehicles and robotics perception, immersive media processing, 3D graphics, etc., 3D point clouds have emerged as a popular form of representation for 3D vision tasks. Research and development on point cloud data processing has attracted a lot of attention. Recent trends show a heavy inclination towards the development of learning-based methods for point clouds. One of the primary tasks in point cloud understanding is object classification. The task is to assign a category label to a 3D point cloud object scan. The unordered nature of 3D point clouds demands methods to be invariant to the $N!$ point permutations of a scan of $N$ points. It was demonstrated in the pioneering work called PointNet \cite{qi2017pointnet} that permutation invariance can be achieved using a symmetric function such as the maximum value of point feature responses. Besides permutations, invariance with respect to rotations is desirable in many applications such as 3D registration. In particular, point cloud features should be invariant under any 3D transformation in the SO(3) group; namely, the group of $3\times3$ orthogonal matrices with determinant one representing rotations in 3D. Achieving rotation invariance guarantees that point clouds expressed in different orientations are regarded as the same and, thereby, that the classification result is unaffected by the pose. State-of-the-art methods do not account for rotations, and they perform poorly in classifying different rotated instances of the same object. In most cases, objects are aligned in a canonical pose before being fed into a learner. Several approaches have been proposed to deal with this problem. The first is data augmentation, where different rotated instances of the same object are presented to a learner. The learner then implicitly learns to reduce the error metric in classifying similar objects with different poses. This approach leads to an increase in the computation cost and system complexity.
Yet there is no guarantee of rotation invariance. A more elegant way is to design point cloud representations that are invariant to rotations. Thus, point cloud objects expressed in different orientations are indistinguishable to classifiers. Another class of methods is based on SO(3) equivariant networks, where invariance is obtained as a byproduct of the equivariant point cloud features. Point cloud classification based on the green learning principle was first introduced in PointHop \cite{zhang2020pointhop}. The work is characterized by its mathematical transparency and lightweight nature. The methodology has been successfully applied to point cloud segmentation \cite{zhang2020unsupervised, zhang2022gsip} and registration \cite{kadam2022r}. For point cloud classification, both PointHop and its follow-up work PointHop++ \cite{zhang2020pointhop++} assume that the objects are pre-aligned. Due to this assumption, these methods fail when classifying objects with different poses. In this work, we propose an SO(3) invariant member of the PointHop family, and name it S3I-PointHop. This is achieved through the derivation of invariant representations by leveraging principal components, rotation invariant local/global features, and point-based eigen features. Our work has two main contributions. First, the pose-dependent octant partitioning operation in PointHop is replaced by an ensemble of three rotation invariant representations to guarantee SO(3) invariance. Second, by exploiting the rich spatial information, we simplify multi-hop learning in PointHop to one-hop learning in S3I-PointHop. Specifically, two novel aggregation schemes (i.e., conical and spherical aggregations in local regions) are proposed, which makes one-hop learning possible. The rest of this paper is organized as follows. Related work is reviewed in Sec. \ref{sec:review}. The S3I-PointHop method is proposed in Sec. \ref{sec:method}. Experimental results are presented in Sec.
\ref{sec:experiments}. Finally, concluding remarks are given in Sec. \ref{sec:conclusion}. \begin{figure*}[htb] \centerline{\includegraphics[width=7in]{Architecture.png}} \caption{An overview of the proposed S3I-PointHop method: 1) an input point cloud scan is approximately aligned with the principal axes, 2) local and global point features are extracted and concatenated followed by the Saab transform, 3) point features are aggregated from different conical and spherical volumes, 4) discriminant features are selected using DFT and a linear classifier is used to predict the object class.} \label{fig:architecture} \end{figure*} \section{Related Work} \label{sec:review} \subsection{Green Point Cloud Learning} Green learning (GL) \cite{kuo2022green} is a data-driven learning methodology. It uses training data statistics to derive representations without labels. The learning process utilizes the Saab transform \cite{kuo2019interpretable} or the channel-wise Saab transform \cite{chen2020pixelhop++}. GL is a radical departure from neural networks. It has achieved impressive results for point cloud data processing. For example, PointHop and PointHop++ offer competitive performance in classification of aligned point clouds. They both have three main modules: 1) hierarchical attribute construction based on the distribution of neighboring points in the 3D space and attribute dimensionality reduction using the Saab/channel-wise Saab transform, 2) feature aggregation, and 3) classification. The capability of GL has been demonstrated by follow-ups, including point cloud segmentation \cite{zhang2020unsupervised, zhang2022gsip}, registration \cite{kadam2020unsupervised, kadam2022r}, odometry \cite{kadam2022greenpco}, and pose estimation \cite{kadam2022pcrp}. GL-based point cloud processing techniques are summarized in \cite{liu20213d}. Similar to early point cloud classification methods, PointHop and PointHop++ fail to classify objects of arbitrary poses. 
\subsection{Rotation Invariant Networks} Early pioneering deep networks for point cloud processing tasks such as PointNet \cite{qi2017pointnet}, PointNet++ \cite{qi2017pointnet++}, DGCNN \cite{wang2019dynamic} and PointCNN \cite{li2018pointcnn} are susceptible to point cloud rotations. Designing rotation invariant networks has been popular for 3D registration when global alignment is needed. Methods such as PPFNet \cite{deng2018ppfnet} and PPF-FoldNet \cite{deng2018ppf} achieve partial and full invariance to 3D transformations, respectively. The idea behind any rotation invariant method is to design a representation that is free of pose information. This is done by exploiting properties of 3D transformations such as the preservation of distances, relative angles, and principal components. Global and local rotation invariant features for classification were proposed in \cite{li2021rotation}, which form the basis of our method. Ambiguities associated with global PCA alignment were analyzed and a disambiguation network was proposed in \cite{li2021closer}. Another approach is the design of equivariant neural networks that achieve invariance via certain pooling operations. SO(3)- and SE(3)-equivariant convolutions make networks equivariant to the 3D rotation and 3D roto-translation groups, respectively. Exemplary work includes the Vector Neurons \cite{deng2021vector} for classification and segmentation, and the results in \cite{chen2021equivariant, li2021leveraging} for category-level pose estimation. \section{Proposed S3I-PointHop Method} \label{sec:method} The S3I-PointHop method assigns a class label to a point cloud scan, $X$, whose points are expressed in an arbitrary coordinate system. Its block diagram is shown in Fig. \ref{fig:architecture}. It comprises coarse object alignment, feature extraction, dimensionality reduction, feature aggregation, feature selection and classification steps, as detailed below.
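Before detailing each step, we give a minimal illustrative sketch of the coarse PCA alignment stage. This is not the paper's exact implementation: it assumes an $N \times 3$ coordinate array, the function name is ours, and the sign/order ambiguities of PCA (discussed in the next subsection) are left unresolved.

```python
import numpy as np

def pca_align(points):
    """Coarsely align an N x 3 point cloud with its principal axes.

    Illustrative sketch only: PCA gives a coarse alignment whose axis
    signs and (for near-equal variances) axis order are ambiguous.
    """
    centered = points - points.mean(axis=0)
    # Eigenvectors of the 3x3 covariance matrix give the principal axes.
    cov = centered.T @ centered / len(points)
    _, eigvecs = np.linalg.eigh(cov)  # columns sorted by ascending eigenvalue
    # Express the points in the principal-axis frame (largest variance first).
    return centered @ eigvecs[:, ::-1]
```

After this step, the first coordinate axis carries the largest variance of the object, which is the frame assumed by the region-based aggregation later on.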
\subsection{Pose Dependency in PointHop} The first step in PointHop is to construct a 24-dimensional local descriptor for every point based on the distribution of 3D coordinates of the nearest neighbors of that point. 3D rotations are distance preserving transforms and, hence, the distance between any two points remains the same before and after rotation. As a consequence, the nearest neighbors of points are unaffected by the object pose. However, the use of 3D coordinates makes PointHop sensitive to rotations since the 3D Cartesian coordinates of every point change with rotation. Furthermore, the 3D space surrounding the current point is partitioned into 8 octants using the standard coordinate axes. The coordinate axes change under different orientations of the point cloud scan. We align an object with its three principal axes. The PCA alignment only offers a coarse alignment, and it comes with several ambiguities as pointed out in \cite{li2021closer}. Furthermore, object asymmetries may disturb the alignment since PCA does not contain semantic information. Yet, fine alignment is not required. Here, we develop rotation invariant features based on PCA aligned objects. \subsection{Feature Extraction} Local and global information fusion is effective in feature learning for point cloud classification \cite{wang2019dynamic}. To boost the performance of S3I-PointHop, three complementary feature sets are ensembled. The first feature set contains the omni-directional octant features of points in the 3D space as introduced in PointHop. That is, the 3D space is partitioned into eight octants with each point as the origin. The means of the 3D coordinates of the points in each octant then constitute the 24D octant feature. The second feature set is composed of eigen features \cite{hackel2016fast} obtained from the covariance analysis of the neighborhood of a point.
They are functions of the three eigenvalues derived from the Singular Value Decomposition (SVD) of the local covariance matrix. The 8 eigen features comprise linearity, planarity, anisotropy, sphericity, omnivariance, verticality, surface variation and eigen entropy. They represent the surface information in the local neighborhood. The third feature set is formed by geometric features derived from distances and angles in local neighborhoods as proposed in \cite{li2021rotation}. For simplicity, we replace the geometric median in \cite{li2021rotation} with the mean of the neighboring coordinates. The 12D feature representation is found using the $K$ nearest neighbors, leading to a pointwise $12 \times K$ matrix. While a small network is trained in \cite{li2021rotation} to aggregate these features into a single vector, we perform channel-wise max, mean and $l_2$-norm pooling to yield a 36D vector of local geometric features. The octant, covariance and geometric features are concatenated to build a 68D ($24+8+36=68$) feature vector. After that, the Saab transform is performed for dimension reduction. \begin{figure}[htb] \centerline{\includegraphics[width=2.5in]{Aggregation.png}} \caption{Illustration of conical and spherical aggregation. The conventional ``global pooling'' is shown in (a), where features of all points are aggregated at once. The proposed ``regional pooling'' schemes are depicted in (b)-(d), where points are aggregated only in distinct spatial regions. Only the solid red points are aggregated. For better visual representation, cones/spheres along only one axis are shown. (b) and (c) use conical pooling while (d) adopts spherical pooling in local regions.} \label{fig:aggregation} \end{figure} \subsection{Feature Aggregation} The point features need to be aggregated into a global point cloud feature for classification. A symmetric aggregation function such as max or average pooling is a popular choice for feature aggregation.
Four aggregations (the max, mean, $l_1$ norm, and $l_2$ norm) have been used in PointHop and PointHop++. Instead of aggregating all points globally at once as shown in Fig. \ref{fig:aggregation} (a), we propose to aggregate subsets of points from different spatial regions. We consider regions of the 3D volume defined by cones and spheres. For conical aggregation, we consider two types of cones, one with its tip at the origin and the other with its tip at a unit distance along a principal axis. They are illustrated in Figs. \ref{fig:aggregation} (b) and (c), respectively. The latter cone cuts the plane formed by the other two principal axes in a unit circle, and vice versa for the former. For each principal axis, we get four such cones, two along the positive axis and two along the negative axis. Thus, 12 cones are formed for all three axes in total. For each cone, only the features of points lying inside the cone are pooled together. The pooling methods are the max, mean, variance, $l_1$ norm, and $l_2$ norm. This means that, for a single point feature dimension, we get a 5D feature vector from each cone. For spherical aggregation, we consider four spheres of radius one quarter along each principal axis, centered on the axis at distances of $\pm 1$ and $\pm 3/4$ from the origin. One example is illustrated in Fig. \ref{fig:aggregation} (d). This gives 12 spheres in total. Points lying in each sphere are pooled together in the same manner as for cones. For instance, points lying in different cones for four point cloud objects are shaded in Fig. \ref{fig:cones}. Unlike max/average pooling, aggregating local feature descriptors into a global shape descriptor such as Bag of Words (BoW) or Vector of Locally Aggregated Descriptors (VLAD) \cite{jegou2010aggregating} is common in the traditional literature. On the other hand, region-based local spatial aggregation has never been explored before.
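As an illustrative sketch of this regional pooling, the following hypothetical helpers test region membership and pool the five statistics; the $45^\circ$ cone opening (implied by the unit-circle intersection) and the sphere placement at three quarters along an axis are our reading of the geometry, not the paper's exact parameters.

```python
import numpy as np

def cone_mask(points, axis, half_angle_deg=45.0):
    """Boolean mask of points inside a cone with its tip at the origin,
    opening along the unit vector `axis`. Illustrative only: the exact
    tip position and opening angle may differ from the paper."""
    proj = points @ axis                       # signed distance along the axis
    radial = np.linalg.norm(points - np.outer(proj, axis), axis=1)
    return (proj > 0) & (radial <= proj * np.tan(np.radians(half_angle_deg)))

def sphere_mask(points, axis, center_dist=0.75, radius=0.25):
    """Points inside a sphere of radius 1/4 centred at `center_dist`
    along `axis` (one plausible placement)."""
    return np.linalg.norm(points - center_dist * axis, axis=1) <= radius

def regional_pool(features, mask):
    """Aggregate the features of the selected points with the five
    statistics named in the text: max, mean, variance, l1 and l2 norm."""
    sel = features[mask]
    if len(sel) == 0:                          # empty region -> zero vector
        return np.zeros(5 * features.shape[1])
    return np.concatenate([sel.max(0), sel.mean(0), sel.var(0),
                           np.abs(sel).sum(0), np.linalg.norm(sel, axis=0)])
```

Concatenating `regional_pool` outputs over all 12 cones and 12 spheres yields the global descriptor fed to feature selection.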
These resulting features are powerful in capturing local geometrical characteristics of objects. \begin{figure}[htb] \centerline{\includegraphics[width=3.4in]{Cones.PNG}} \caption{An example of conical aggregation. For every point cloud object, points lying in each cone are colored uniquely.}\label{fig:cones} \end{figure} \subsection{Discriminant Feature Selection and Classification} In order to select a subset of discriminant features for classification, we adopt the Discriminant Feature Test (DFT) as proposed in \cite{yang2022supervised}. DFT is a supervised learning method that can rank features in the feature space based on their discriminant power. Since the features are evaluated independently of each other, the DFT computation can be parallelized. For each 1D feature $f^i$, the values over all point clouds are collected and the interval $[f^i_{min},f^i_{max}]$ is partitioned into two subspaces $S^i_L$ and $S^i_R$ about an optimal threshold $f^i_{op}$. Then, the purity of each subspace is measured by a weighted entropy loss function. A smaller loss indicates stronger discriminant power. DFT helps control the number of features fed to the classifier. As shown in Sec. \ref{sec:experiments}, it improves the classification accuracy significantly and prevents the classifier from overfitting. In our experiments, we select the top 2700 features. Finally, we train a linear least squares classifier to predict the object class. \section{Experiments} \label{sec:experiments} We evaluate the proposed S3I-PointHop method for the point cloud classification task on the ModelNet40 dataset \cite{wu20153d}, which consists of 40 object classes. Objects in ModelNet40 are pre-aligned. We rotate them in the train and test sets in the following experiments. The rotation angles are uniformly sampled in $[0,2\pi]$. We use $z$ to denote random rotations about the azimuthal axis and $SO(3)$ to indicate rotations about all three orthogonal axes.
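This rotation protocol can be sketched as follows. It is our reading of the stated convention: angles drawn uniformly from $[0,2\pi]$ and, for the $SO(3)$ case, one rotation per coordinate axis composed. As a caveat, such a composition is not the Haar-uniform distribution on SO(3).

```python
import numpy as np

def _axis_rot(angle, axis):
    """3x3 rotation matrix by `angle` radians about coordinate axis 0, 1 or 2."""
    c, s = np.cos(angle), np.sin(angle)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R = np.eye(3)
    R[i, i] = c; R[j, j] = c
    R[i, j] = -s; R[j, i] = s
    return R

def random_rotation(mode, rng):
    """Sample a rotation for the 'z' protocol (azimuthal axis only) or the
    'SO3' protocol (rotations about all three orthogonal axes composed),
    with angles uniform in [0, 2*pi]. Illustrative sketch only."""
    if mode == "z":
        return _axis_rot(rng.uniform(0, 2 * np.pi), axis=2)
    return (_axis_rot(rng.uniform(0, 2 * np.pi), 0)
            @ _axis_rot(rng.uniform(0, 2 * np.pi), 1)
            @ _axis_rot(rng.uniform(0, 2 * np.pi), 2))
```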
In Tables \ref{tab:pointhop}, \ref{tab:others} and \ref{tab:ablation_study}, $z/SO(3)$ means that the training set follows the $z$ rotations while the test set adopts $SO(3)$ rotations, and so on. For all experiments, we set the numbers of nearest neighbors in calculating the geometric, covariance, and octant features to 128, 32, and 64, respectively. \subsection{Comparison with PointHop-family Methods} Table \ref{tab:pointhop} compares the performance of S3I-PointHop, PointHop \cite{zhang2020pointhop}, PointHop++ \cite{zhang2020pointhop++} and R-PointHop \cite{kadam2022r}. Clearly, S3I-PointHop outperforms the three benchmarking methods by a huge margin. Although R-PointHop was proposed for point cloud registration and not classification, we include it here due to its rotation invariant feature characteristics. Similar to the global aggregation in PointHop and PointHop++, we aggregate the point features of R-PointHop and train a least squares classifier. We also report the classification accuracy with only one hop for these methods. Both PointHop and PointHop++ perform poorly since their features are not invariant to rotations. In particular, for the $z/SO(3)$ case, where there is a mismatch between the train and test rotations, the accuracy is even worse. R-PointHop only considers local octant features with respect to a local reference frame. Although they are invariant to rotations, they are not optimal for classification.
\begin{table}[htbp] \centering \caption{Classification accuracy comparison of PointHop-family methods.} \label{tab:pointhop} \renewcommand\arraystretch{1.3} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \resizebox{\columnwidth}{!}{ \begin{tabular}{c | c | c | c | c } \hline Method & \# hops & z/z & z/SO(3) & SO(3)/SO(3) \\ \hline \multirow{ 2}{*}{PointHop \cite{zhang2020pointhop}} & 1 & 70.50 & 21.35 & 45.70 \\ & 4 & 75.12 & 22.85 & 50.48 \\ \hline \multirow{ 2}{*}{PointHop++ \cite{zhang2020pointhop++}} & 1 & 9.11 & 7.90 & 9.09 \\ & 4 & 82.49 & 20.62 & 57.61 \\ \hline \multirow{ 2}{*}{R-PointHop \cite{kadam2022r}} & 1 & 53.44 & 53.42 & 53.44 \\ & 4 & 64.87 & 64.86 & 64.86 \\ \hline S3I-PointHop & 1 & \bf{83.10} & \bf{83.10} & \bf{83.10} \\ \hline \end{tabular} } \end{table} \subsection{Comparison with Deep Learning Networks} We compare the performance of S3I-PointHop with 4 deep-learning-based point cloud classification networks in Table \ref{tab:others}. They are PointNet \cite{qi2017pointnet}, PointNet++ \cite{qi2017pointnet++}, PointCNN \cite{li2018pointcnn} and Dynamic Graph CNN (DGCNN) \cite{wang2019dynamic}. Since these methods were originally developed for aligned point clouds, we retrain them with rotated point clouds and report the corresponding results. We see from the table that S3I-PointHop outperforms these benchmarking methods significantly. These methods offer reasonable accuracy when rotations are restricted about the azimuthal (z) axis. However, they are worse when rotations are applied about all three axes. 
\begin{table}[htbp] \centering \caption{Comparison with Deep Learning Networks.} \label{tab:others} \renewcommand\arraystretch{1.3} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{tabular}{c | c | c | c } \hline Method & z/z & z/SO(3) & SO(3)/SO(3) \\ \hline PointNet \cite{qi2017pointnet} & 70.50 & 21.35 & 45.70 \\ \hline PointNet++ \cite{qi2017pointnet++} & 75.12 & 22.85 & 50.48 \\ \hline PointCNN \cite{li2018pointcnn} & 82.11 & 24.89 & 51.66 \\ \hline DGCNN \cite{wang2019dynamic} & 82.49 & 20.62 & 57.61 \\ \hline S3I-PointHop & \bf{83.10} & \bf{83.10} & \bf{83.10} \\ \hline \end{tabular} \end{table} \subsection{Ablation Study} It is worthwhile to consider the contributions of different elements in S3I-PointHop. To do so, we conduct an ablation study and report the results in Table \ref{tab:ablation_study}. From the first three rows, it is evident that the global octant features are the most important, and their removal results in the highest drop in accuracy. The results also reinforce the fact that locally oriented features, such as those in R-PointHop, are not optimal for classification. In rows 4 and 5, we compare the proposed spatial aggregation scheme (termed local aggregation) with global pooling as done in PointHop. The accuracy sharply drops by 12\% when only global aggregation is used. Clearly, global aggregation is not appropriate in S3I-PointHop. Finally, we show in the last row that the accuracy drops to 78.56\% without DFT. This is because, when the feature dimension is too high, the classifier can overfit easily without DFT.
\begin{table}[htbp] \centering \caption{Ablation Study} \label{tab:ablation_study} \renewcommand\arraystretch{1.3} \providecommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \resizebox{\columnwidth}{!}{ \begin{tabular}{c c c | c c |c | c} \hline \multicolumn{3}{c|}{Feature} & \multicolumn{2}{c|}{Aggregation} & \multirow{ 2}{*}{DFT} & \multirow{ 2}{*}{SO(3)/SO(3)} \\ \tabincell{c}{Geometric} & \tabincell{c}{Covariance} & \tabincell{c}{Octant} & \tabincell{c}{Local} & \tabincell{c}{Global} & & \\ \hline & \checkmark & \checkmark & \checkmark & & \checkmark & 82.49 \\ \hline \checkmark & & \checkmark & \checkmark & & \checkmark & 82.45 \\ \hline \checkmark & \checkmark & & \checkmark & & \checkmark & 80.75 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \bf{83.10} \\ \hline \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & 71.02 \\ \hline \checkmark & \checkmark & \checkmark & \checkmark & & & 78.56 \\ \hline \end{tabular} } \end{table} \subsection{Discussion} One advantage of S3I-PointHop is that its rotation invariant features allow it to handle point cloud data captured from different orientations. To further support this claim, we retrain PointHop with PCA coarse alignment as a pre-processing step during the training and the testing. The test accuracy is 78.16\% and 74.10\% with four-hop and one-hop, respectively. This reinforces that PCA alignment alone is not the reason for the performance gain of S3I-PointHop. While efforts to learn rotation invariant features were already made in R-PointHop, we see that its lack of global features degrades its performance. On the other hand, appending the same global feature to R-PointHop does not help in the registration problem. An interesting aspect of S3I-PointHop is its use of a single hop (rather than four hops as in PointHop). It is generally perceived that deeper networks perform better than shallower counterparts. 
However, by using multiple spatial aggregations on top of a single hop, S3I-PointHop achieves good performance. This leads to the benefit of reducing the training time and the model size as explained below. In any point-cloud processing method, one of the most costly operations is the nearest neighbor search. To search the $k$ nearest neighbors of each of $N$ points, the complexity of an efficient algorithm is $O(k \log N)$. PointHop uses the nearest neighbor search in four hops and three intermediate farthest point downsampling operations. In contrast, the nearest neighbor search is only conducted once for each point in S3I-PointHop. Another costly operation is the PCA in the Saab transform. It is performed only once in S3I-PointHop. Its model size is 900 kB, where only one-hop Saab filters are stored. \section{Conclusion and Future Work} \label{sec:conclusion} A point cloud classification method called S3I-PointHop was proposed. It extends PointHop-like methods to the 3D classification of objects which have arbitrary orientations. S3I-PointHop extracts local and global point neighborhood information using an ensemble of geometric, covariance and octant features. Only a single hop is adopted in S3I-PointHop, followed by conical and spherical aggregations of point features from multiple spatial regions. There are several possible extensions of this work. We aim to further improve the performance of S3I-PointHop and compare it with that of state-of-the-art rotation invariant and equivariant networks. Furthermore, it would be interesting to examine the application of single-hop rotation invariant methods to the registration problem and the pose estimation problem. \bibliographystyle{IEEEtran}
\section{Introduction} Planetary nebulae (PNe) are valuable tracers of stellar populations in all types of galaxies. They are widely used as distance indicators, via the invariant bright cutoff of the PN luminosity function \citep[PNLF; e.g.][]{c02}, and as tracers of the luminosity \citep{bu06}, dynamics \citep[e.g.][]{m06} and chemistry (e.g. \citealt{k12,b13}, hereafter papers I and II, respectively) of their stellar progenitors. Recently, PNe have been used to determine the metallicity gradient in the disks of the nearby spiral galaxies M33 \citep{m09} and M31 (papers I and II). These gradients provide direct constraints on models of disk formation and evolution. In particular, measurements in the outer regions of the disks have the power to test novel ideas such as the importance of external disturbances in producing extended young stellar disks \citep{w11}. In the case of M31, it has been proposed that a tidal encounter with M33 that occurred about 3 Gyr ago would have produced a vast extended disk with homogeneous metallicity \citep{b12,b15}. Such an extended disk has been reported before \citep{ib05}, and PNe provide the opportunity to test its predicted chemical content. In earlier papers, we have studied PNe out to deprojected galactocentric distances of 60~kpc, finding nearly solar O/H abundances and a flat O/H gradient in these regions. In this work, we extend our study by obtaining high-quality spectra of another nine PNe at larger radii in the outskirts of M31. 
\begin{deluxetable}{lcccrrrrrcc} \tabletypesize{\footnotesize} \tablecolumns{10} \tablewidth{0pc} \tablecaption{Basic properties of the target PNe\label{T-target}} \tablehead{ Name & \multicolumn{2}{c}{R.A.\tablenotemark{a}\,\phantom{pipppp}Dec\tablenotemark{a}} & m($5007$)\tablenotemark{a} & $\xi$\phantom{p} & $\eta$\phantom{p} & $d_{app}$ & $d_{app}$ &$d_{disk}$ & V$_{\odot,sys}$\tablenotemark{a} &V$_{diff}$ \\ & [J2000] & [J2000] & [mag] & [deg] & [deg] & [deg] & [kpc] & [kpc] & [km~s$^{-1}$] & [km~s$^{-1}$]} \startdata M2507 & 00 48 27.2 & +39 55 34.3 & 21.23 & 1.10 & -1.33 & 1.73 & 23.2 & 106.3 & -147 & 180 \\ M2538 & 00 36 28.8 & +39 35 26.4 & 20.25 & -1.21 & -1.67 & 2.06 & 27.7 & 28.0 & -426 & 110 \\ M2539 & 00 36 12.6 & +39 35 41.9 & 21.16 & -1.26 & -1.66 & 2.08 & 28.0 & 28.0 & -426 & 110 \\ M2541 & 00 35 09.1 & +39 28 25.2 & 21.78 & -1.46 & -1.78 & 2.30 & 31.0 & 31.3 & -456 & \phantom{0}80\\ M2543 & 00 35 50.7 & +42 21 04.5 & 21.61 & -1.27 & 1.09 & 1.68 & 22.6 & 105.8 & -272 & \phantom{0}40\\ M2549 & 00 36 27.2 & +42 06 21.9 & 21.35 & -1.17 & 0.85 & 1.44 & 19.4 & 90.9 & \phantom{0}-17 & 290 \\ M2566 & 00 49 28.3 & +40 59 53.9 & 21.26 & 1.27 & -0.26 & 1.30 & 17.4 & 73.8 & -247 & \phantom{0}10\\ M2988 & 00 52 00.0 & +43 03 23.5 & 21.98 & 1.69 & 1.81 & 2.48 & 33.3 & 36.2 & \phantom{0}-98 & \phantom{0}20\\ M31-372\tablenotemark{b} & 00 46 41.5 & +43 59 03.7 & 22.6\phantom{0}& 0.71 & 2.72 & 2.81 & 37.8 & 77.6 & \phantom{0}-60 & 100 \\ \enddata \tablenotetext{a}{From \cite{m06}, except for M31-372.} \tablenotetext{b}{Coordinates are from \cite{jf86}. 
V$_{sys}$ (uncertainty 40~km~s$^{-1}$) and m($5007$)\ (uncertainty 0.3~mag) are estimated from our spectrum.} \tablecomments{The following parameters for M31 have been assumed throughout the paper: distance 770~kpc \citep{fm90}; center at RA(J2000)= 0 42 44.3 and Dec(J2000)=+41 16 09.0 \citep{m06}; disk inclination $i$=77\degr.7 \citep{dv58} and position angle PA=37\degr.7 \citep{m06}; heliocentric systemic velocity V$_{sys}$=$-309$~km~s$^{-1}$\ \citep{m06}.} \end{deluxetable} \begin{figure*}[!ht] \epsscale{2.0} \plotone{plotsp.eps} \caption{The spectra of the target PNe. For the lower S/N spectrum of M2538, regions with high noise have been masked, and the whole spectrum was smoothed with a boxcar of three pixels.} \label{F-spectra} \end{figure*} \section{Observations} The target PNe are listed in Table~\ref{T-target}. All but one were selected from the list of \citet{m06} to be at large galactocentric distances and sufficiently bright to obtain high-quality spectra with a 10m telescope. The putative halo PN M31-372 observed by \citet{jf86} was additionally included. Spectroscopy of eight of these PNe was obtained on different nights in October 2013 in service-queue mode at the 10.4m~GTC telescope on the island of La Palma, Spain. The OSIRIS instrument was used in its longslit mode. The combination of grism R1000B and a slit width of $0''.8$ provides a spectral dispersion of 0.21~nm per (binned $\times$2) pixel, a resolution of 0.63~nm, and a spectral coverage from 370 to 785~nm. Seeing varied between $0''.6$ and $1''.1$ (full width at half maximum), and the spatial scale along the slit was $0''.254$ per binned pixel. Data were generally obtained under photometric weather conditions and grey moon. The slit was oriented along the parallactic angle for all targets and standard stars. Total exposure times per target varied from 120 to 135 min, depending on the brightness of the source, split into three or four sub-exposures. 
The GTC calibration plan provides, in each observing night, at least one spectrophotometric standard to be used for flux calibration. Additionally, the PN M2538 was observed with the Dual-Imaging Spectrograph at Apache Point Observatory (New Mexico, USA) in 2011 October. We observed through a 2$''$ slit oriented along the parallactic angle. The B1000 grating provided coverage from 370 to 505~nm at 0.25~nm resolution; the R300 grating covered 520--960~nm at 0.70~nm resolution. Standard bias, flatfield, and emission-lamp exposures were taken and applied to the data. Flux calibration was obtained via observations of the standard star BD+28 4211. We obtained six 20~min integrations of M2538, which were co-added after calibration. As PNe at the distance of M31 (770~kpc, see Table~\ref{T-target}) are spatially unresolved, 1--D spectra were extracted and reduced using {\it twodspec} in IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation.}. \section{Physics and chemistry of the nebulae}\label{S-abund} The spectra of the target PNe are displayed in Figure~\ref{F-spectra}. Emission-line fluxes were measured by multi-Gaussian fit using {\it splot}. These fluxes are the input for the abundance determinations, carried out with ELSA, our five-level atom code \citep{j06}. We used the same method as in Papers I and II, in order to produce a homogeneous set of data and abundance estimations. The reader is referred to those articles and to \citet{mi10} for the details of the analysis. The procedure includes corrections of the observed line fluxes for interstellar reddening, using the law of \citet{sm90}, and for the contamination of the hydrogen Balmer lines by coincident recombination lines of He$^{++}$. 
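The multi-Gaussian line fitting used to measure the emission-line fluxes can be sketched as follows. This is a minimal illustration on synthetic data (the wavelengths mimic the [S~II] 6717,6731 doublet, but all amplitudes, widths, and noise levels are invented); it is not the actual {\it splot} procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    """Single Gaussian emission-line profile."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def blend(x, a1, mu1, a2, mu2, sigma, cont):
    """Two blended lines sharing one instrumental width, on a flat continuum."""
    return gaussian(x, a1, mu1, sigma) + gaussian(x, a2, mu2, sigma) + cont

# Synthetic spectrum around the [S II] 6717,6731 doublet (all values invented)
rng = np.random.default_rng(0)
x = np.linspace(670.0, 676.0, 300)              # wavelength [nm]
y = blend(x, 5.0, 671.7, 3.5, 673.1, 0.3, 1.0)  # "true" parameters
y += rng.normal(0.0, 0.05, x.size)              # noise

p0 = (4.0, 671.5, 4.0, 673.0, 0.4, 0.8)         # initial guesses
popt, pcov = curve_fit(blend, x, y, p0=p0)

# Integrated fluxes: amp * sigma * sqrt(2*pi); the doublet flux ratio
# is the density-sensitive quantity used in the diagnostics
flux1 = popt[0] * popt[4] * np.sqrt(2.0 * np.pi)
flux2 = popt[2] * popt[4] * np.sqrt(2.0 * np.pi)
print(popt[1], popt[3], flux1 / flux2)
```

With lines this well separated the fit recovers the centres to a few hundredths of a nm; parameter uncertainties follow from the covariance matrix `pcov`.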
Dereddened fluxes were used to determine the electron density and temperature, $N_e$\ and $T_e$, with the appropriate line diagnostics. Note that $T_e$([\ion{O}{3}]) was always properly determined, as the auroral line \oiii4363 was accurately detected in all PNe (Figure~\ref{F-spectra}). The same applies to $N_e$([\ion{S}{2}]), except for M31-372 and M2538 where the \sii6717,6731 doublet could not be measured and a default $N_e$\ of 10000~cm$^{-3}$ was assumed. In PNe where no direct low-ionization temperature diagnostic has been observed, the determination of the temperature to be used for calculating low-ionization abundances ($T_e$([\ion{N}{2}])) depends on whether \heii4686 is observed. If it is, we adopt the carefully derived result from \citet{kaler86} that applies under this condition, i.e. $T_e$([\ion{N}{2}])$=$10\,300~K. If not, then we derive $T_e$([\ion{N}{2}]) from $T_e$([\ion{O}{3}]) according to the prescription from \citet{p92}. The detected lines allowed the calculation of the abundances of several O, Ar, N, S, and Ne ions. Total abundances were then determined using ionization correction factors ({\it icf}) calculated as described in \citet{kh01}. The observed emission-line measurements, logarithmic reddening parameter c(H$\beta$), $T_e$\ and $N_e$, ionic and total abundances are listed in the tables in Appendix~A. They indicate that none of the PNe presented in this paper fulfill the original definition by \citet{p83} for Galactic Type~I PNe of having either $\log(N/O)>-0.3$ or He/H$>0.125$. This also applies to most of the PNe studied in papers I and II, indicating at most a very moderate N and He enrichment for these bright PNe of M31. The PNe studied in this article also follow the relationships between O/H and other elemental abundances (such as N/O, Ne/H, and Ar/H) discussed in paper I. \begin{figure}[!ht] \epsscale{1.0} \plotone{newm31fig1.eps} \caption{Location of the target PNe, overlaid on the DSS optical image. 
Circles and triangles indicate PNe with deviations from the model disk kinematics smaller and larger than 100~km~s$^{-1}$, respectively (see text). Red symbols are the PNe presented in this work, and black symbols those discussed in papers I and II. Squares indicate the HST/ACS fields studied by \citet{b15}. The ellipse indicates the R$_{25}$ radius of M31.} \label{F-location} \end{figure} \section{Location of targets within M31}\label{S-location} Figure~\ref{F-location} shows the position in the sky of the target PNe, including those discussed in papers I and II. The geometric transformations of \citet{h91}, with the parameters indicated in the Note of Table~\ref{T-target}, were adopted to determine the R.A. and Dec offsets $\xi$ and $\eta$ from the center of M31. These offsets, and the total apparent distance $d_{app}$ in the plane of the sky, are also listed in Table~\ref{T-target}. The ellipse in the figure marks the R$_{25}$ radius of M31 ($\sim$30~kpc, see paper II). It corresponds to $\sim$5 scale-lengths of the exponential bright disk of M31 \citep{wk88}, showing that the newly observed PNe are located well outside the familiar bright disk of M31. These outer regions of M31 are surprisingly complex. A huge halo with a radius $\ge$300~kpc and a smooth star density distribution has been identified \citep{ib07,ib14}. A number of structures, such as streams, loops, and overdensity regions are seen in projection throughout the halo \citep{l13,ib14}. They are mostly associated with accretion of satellite dwarf galaxies, as expected in the standard hierarchical halo formation scenario. The most prominent structure is the so-called Giant Stellar Stream (GSS), which is likely the latest, most metal enhanced accretion event associated with the streams. In addition, outside the usual bright disk of M31, a smooth ``exodisk'' was found to extend out to a galactocentric distance of $\sim$40~kpc, with detections as far as $\sim$70~kpc \citep{ib05}. 
A set of globular clusters in this zone forms a coherent kinematic system that is similar to an extrapolation of M31's inner disk \citep{v14}. As far as metallicity is concerned, there is a significant overlap between the stellar [Fe/H] content of the different structures in the outer regions of M31. Overall, the mean halo metallicity decreases with radius from [Fe/H]$\sim$$-0.7$ at $R\le$30~kpc to [Fe/H]$\sim$$-1.5$ at $R\sim$150~kpc, but the smooth halo component seems 0.2 dex more metal-poor at all radii, and substructures span a wide range in metallicity covering roughly two orders of magnitude \citep{ta10,ib14}. The strongest evidence of metal-rich stars in the halo (beyond the bound satellite galaxies) at $\avg{[\mbox{Fe/H}]}$$\sim$$-0.5$ is found in the GSS \citep{ib14}. As for the extended exodisk, \citet{b15} find an average value of [Fe/H]$\sim$$-0.3$ for the several disk-like fields considered. \citet{ch06} find several fields in M31's exodisk in which AGB stars have roughly solar metal abundances. Figure 8 in \citet{ib14} illustrates the complexity of the metallicity distribution in the outer regions of M31 caused by the overlapping halo (with its ancient and more recent components) and the extended disk. Adopting the inclination and position angle of M31's inner disk, the deprojected distances in the plane of the disk for the target PNe, $d_{disk}$, are indicated in Table~\ref{T-target}. The radial velocities of the PNe provide useful insights into their possible relation to M31's extended disk. They are adopted from \citet{m06}, except for M31--372 for which a rough estimate could be obtained from our spectra (Table~\ref{T-target}). We compared them with the average velocity expected at each position according to the kinematic model of the extended disk presented by \citet{ib05}. 
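The deprojection onto the disk plane described above can be sketched numerically. The following is a minimal illustration assuming a thin inclined disk with the geometric parameters listed in the note of Table 1 (distance 770 kpc, $i=77\fdg7$, PA$=37\fdg7$); the sign convention of the position-angle rotation is one common choice and should be checked against the adopted coordinate system:

```python
import numpy as np

def deprojected_distance(xi, eta, pa_deg=37.7, incl_deg=77.7, d_kpc=770.0):
    """Deproject sky offsets (xi, eta, in degrees) into the disk plane.

    Assumes a thin inclined disk; the rotation by the position angle uses
    one common sign convention, which is an assumption here.
    """
    pa, incl = np.radians(pa_deg), np.radians(incl_deg)
    x_major = xi * np.sin(pa) + eta * np.cos(pa)       # along the major axis
    y_minor = -xi * np.cos(pa) + eta * np.sin(pa)      # along the minor axis
    r_deg = np.hypot(x_major, y_minor / np.cos(incl))  # stretch the minor axis
    return d_kpc * np.radians(r_deg)                   # small-angle conversion

# Reproduce two entries of Table 1 from their (xi, eta) offsets
print(round(deprojected_distance(-1.21, -1.67), 1))  # M2538 -> ~28.0 kpc
print(round(deprojected_distance(1.10, -1.33), 1))   # M2507 -> ~106.3 kpc
```

Objects near the minor axis (such as M2507) pick up a large $1/\cos i$ stretch, which is why their $d_{disk}$ greatly exceeds their apparent distance.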
The most significant differences (V$_{diff}\ge100$~km~s$^{-1}$, see Table~\ref{T-target}) are found in PNe M2507, M2538, M2539, M2549, and M31-372, suggesting that they may instead be associated with the halo or some of its substructures. \citet{b15} present colour-magnitude diagrams in fourteen HST/ACS fields sampling galactocentric distances similar to the PNe presented in this work. They cover regions of the extended disk of M31, or some of the substructures in the halo, none representing the metal--poor halo component. The positions in the sky of the HST fields are indicated by squares in Figure~\ref{F-location}. While some of these fields are not far from our PNe, no noteworthy association can be identified, except for the case of M2538, M2539, and M2541 which seem related to the so-called Warp and G1 Clump, which are both disk components (but note the abovementioned deviations from the disk kinematics of two of these PNe). \begin{figure}[!t] \epsscale{1.0} \plotone{gradient.eps} \caption{The O/H abundance radial gradient. Symbols are like in Figure~\ref{F-location}. Dashed lines are described in the text.} \label{F-gradient} \end{figure} \section{M31 O/H abundance gradient}\label{S-grad} The O/H abundance radial gradient, compiled with all PNe studied in papers I and II (black symbols) and this work (red symbols), is presented in Figure~\ref{F-gradient}. It extends the metallicity gradient from this type of object to galactocentric distances larger than 100~kpc, assuming that all PNe are located in the plane of the disk of M31. Sources with radial velocities consistent with the disk kinematics (V$_{diff}$$<$100~km~s$^{-1}$, circles) are separated from those with large deviations from the disk kinematical model of \citet[][triangles]{ib05}. The graph shows that even at large distances PNe with solar metallicity and disk-like kinematics exist. 
A least-squares fit to sources with V$_{diff}$$<$100~km~s$^{-1}$\ provides an almost flat metallicity slope ([O/H]$=$$-0.0909$$-0.0004$$\times d_{disk}$, with $\avg{[\mbox{O/H}]}$$=$$-0.11$). It is indicated by the upper dashed line in the figure. As a note of caution, the two PNe with solar oxygen abundance at $d_{disk}>60$~kpc are M2543 and M2566. Both are located along the minor axis of M31's disk. Therefore their deprojection factors to the disk plane are large, and their radial velocities are close to the systemic velocity of the galaxy. They do not appear to be associated with any major substructure in the halo \citep[see also][]{m06}, although it should also be noted that the GSS pervades a large portion of the outer regions of M31 out to 100 kpc, and its debris extend beyond the SE quadrant where it is more prominent \citep{ib14}. Figure~\ref{F-gradient} also shows that the three PNe at $d_{disk}>60$~kpc which deviate from the disk kinematics have a lower metallicity ($\avg{[O/H]}$$=$$-0.37$, lower dashed line). This indicates that they might be related to an older population, but their only slightly subsolar O/H seemingly precludes their association with the stars in the most metal-poor and ancient component of the halo identified by e.g. \citet{ib14}. \begin{figure}[!ht] \epsscale{1.0} \plotone{magabun31_all.eps} \plotone{magabun31_range.eps} \plotone{magabun33_range.eps} \caption{The O/H abundances as a function of the absolute [\ion{O}{3}]\ magnitudes of PNe in M31 (top two panels, adopted distance 770~kpc) and in M33 (bottom panel, adopted distance 840~kpc). The dashed lines indicate the adopted extinction values (see text). For M31, symbols are as in Figure~\ref{F-location}.} \label{F-magabun} \end{figure} Before moving to a more general interpretation, two things should be noted. In this paper, we assume that the O/H abundance ratio in the PNe reflects the original oxygen content of the stellar progenitors, i.e. 
of the ISM from which these stars formed. There are some indications that oxygen may be enhanced in some AGB stars during the third dredge-up episodes, but this possible effect is neglected here as no clear conclusion has been drawn yet \citep{k14,h12,di15}. Also, it should be recalled that our target PNe were selected to be bright enough to allow an accurate chemical analysis. Indeed, all lie within 2.4~mag from the bright cutoff of the PNLF. \citet{r93} in the LMC, and \citet{jc99} in M31, suggested that there might be a slight dependence of metallicity on the [\ion{O}{3}]\ luminosity, but concluded that the luminosities of the brightest PNe are nearly independent of O/H. On the other hand, \citet{m04} did not find any trend of chemical abundances as a function of the [\ion{O}{3}]\ luminosity for PNe in the LMC and our Galaxy, concluding that the chemical abundances derived from the brightest PNe are representative of the total PN population. Figure~\ref{F-magabun}, compiled with our data for M31, shows some tendency of decreasing metallicity with [\ion{O}{3}]\ luminosity. The upper panel displays our measured O/H abundances as a function of the absolute [\ion{O}{3}]\ magnitude corrected for foreground and internal extinction using the c(H$\beta$) values determined in our spectra. PNe in the brightest magnitude bin (M($5007$)$\le$$-3.7$) have some spread in metallicity, but on average have higher O/H abundances than PNe in the next magnitude bin. To minimize the effects of the -- albeit shallow -- abundance gradient, the middle panel confirms that the trend persists if sources are selected in a more limited range of galactocentric distances. A similar relation is also seen in M33 (lower panel), where we have used the data from \citet{m09}, which were dereddened adopting the M33 foreground and internal extinction from Cepheids \citep{f01}. 
The decrease of mean metallicity with the [\ion{O}{3}]\ luminosity may explain why the dispersion in the radial gradient of M31 at $d_{disk}<40$~kpc is increased, compared to the corresponding graphs in papers I and II, with the addition of the slightly fainter PNe presented in this work. In a standard scenario where the metallicity of a galaxy increases with time and the PN progenitors do not form or destroy oxygen, the observed correlation between luminosity and O/H content is qualitatively consistent with the hypothesis that the most luminous PNe are produced by younger, i.e. more massive progenitors than fainter PNe. The hydrodynamical simulations by \citet[][see also \citealt{men08} and \citealt{c10}]{sch07} support such a view and provide further constraints on the stellar masses of our target PNe. \citet{sch07} found that only PNe with central star masses $>$0.6~M$_\odot$\ can attain the [\ion{O}{3}]\ luminosity of the bright cutoff of the PNLF if accompanied by a sufficiently delayed optically thin/thick transition of the nebular gas. Adopting the empirical initial-to-final mass relationships of \citet{cat08}, these relatively high core masses imply progenitors with an initial mass larger than $\sim$2~M$_\odot$. Our new targets are between 1 and 2 magnitudes below the PNLF cutoff. These slightly fainter magnitudes can correspond either to the most luminous PNe that have started to fade, or to PNe from slightly less massive progenitors at their maximum luminosity. Figure~7 in \citet{sch07} shows for instance that PNe with core masses of 0.565~M$_\odot$\ (initial masses $\sim$1.5~M$_\odot$) are not able to reach the [\ion{O}{3}]\ luminosities of our targets. We adopt this value as the lower limit for the PN progenitors studied in this work. 
On the other hand, it should be considered that with increasing mass the number of progenitors decreases (according to the standard IMF), and the duration of the high [\ion{O}{3}]\ luminosity phase of their nebulae becomes much shorter \citep{sch07}. Therefore, statistically, we are unlikely to find very massive progenitors among our targets. A rough upper limit of $\sim$2.5~M$_\odot$\ may be assumed, considering that none of them is a type~I PN \citep{p01}. We conclude that the PNe presented in this work are expected to be produced by stars with masses roughly between 1.5~M$_\odot$\ and 2.5~M$_\odot$. The lifetime of these stars is $\le$3.5~Gyr. This mass range is also consistent with the modest nitrogen enhancement of the target nebulae, because the solar-metallicity models of \citet{k10} predict significant N overabundances only for significantly more massive progenitors ($>$4~M$_\odot$) such as those observed in some Galactic bipolar nebulae \citep{cs95}. It is important to note that our arguments are purely based on single-star evolution considerations. They do not tackle the problem that the PNLF cutoff magnitude is the same in all galaxies, even in older stellar populations where $\ge$2~M$_\odot$\ stars are not found \citep{c02,c10}. This may require alternative channels of PN production to populate the bright end of the PNLF, such as mergers or mass accretion in interacting binaries \citep{c05,s06}. In such a case, our mass estimates would not be valid. However, low-mass progenitors would not be expected to have the high metallicity that we measure. In addition, as discussed in the next section, there is independent evidence that 2~M$_\odot$\ stars with solar metallicity do exist in the outer regions of M31. \section{Conclusions} We have identified two PNe at very large galactocentric distances with solar oxygen abundances and radial velocities consistent with that of the extended disk of M31. 
Their stellar progenitors, according to single-star evolutionary theories, are estimated to be in the range 1.5--2.5~M$_\odot$. These two additional objects support our previous conclusion (paper II) that the luminous PNe found outside the disk of M31 trace a burst of star formation that occurred $<$3~Gyr ago. This interpretation is also fully consistent with the results of \citet{b15}, who detect traces of such a recent starburst in all the HST fields studied, which span a galactocentric distance range similar to that of our PNe. A similar starburst of solar metallicity stars is also found throughout the disk of M31 by the Panchromatic Hubble Andromeda Treasury (Ben Williams, private communication), as well as in the outer disk of M33 \citep{b12}. The most likely explanation for the luminous oxygen-rich PNe in M31's exodisk is that the observed, relatively massive stars found in the outer regions of M31 are part of the thin disk that has been kicked out during a recent encounter with M33, and/or following the impact with the unidentified progenitor of the Giant Stellar Stream \citep[e.g.][]{mc09,b12,b15}. Three other PNe in the outer regions of M31 have O/H abundances $\sim$0.4 dex lower, and show significant deviations from the kinematics of M31's exodisk. It is possible that they belong to one or another of the halo's substructures discussed in the literature. No bona fide PN belonging to the smooth, mostly metal-poor, ancient halo described by \citet{ib14} has been found yet. Nor would this be expected since the ancient PNe will be far fainter than the younger ones. Assuming a total luminosity of the smooth halo component of $\sim$10$^{9}$~L$_\odot$\ \citep{ib14} and a luminosity-specific PN number of $\sim$10$^{-7}$ \citep{bu06}, some 10$^2$ halo PNe are expected, but only a tenth of them within two magnitudes from the bright cutoff of the PNLF, which is the luminosity range that we have explored so far. 
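The expected halo PN count above follows from a simple scaling, and the fraction within two magnitudes of the cutoff can be checked against the standard analytic PNLF shape. This is a back-of-the-envelope sketch: the halo luminosity ($\sim$10$^9$~L$_\odot$), the luminosity-specific PN number ($\sim$10$^{-7}$), the analytic PNLF form and the $\sim$8 mag sampled range are assumed values, the PNLF expression itself not being given in the text:

```python
import numpy as np
from scipy.integrate import quad

# Expected number of halo PNe: luminosity times luminosity-specific PN number
# (assumed values of ~1e9 L_sun and ~1e-7 PN per L_sun)
n_total = 1e9 * 1e-7  # ~100 PNe

# Fraction within 2 mag of the bright cutoff, assuming the standard analytic
# PNLF shape N(dm) ~ exp(0.307*dm) * (1 - exp(-3*dm)), with dm = M - M*,
# integrated over an ~8 mag range (both are assumptions for illustration)
pnlf = lambda dm: np.exp(0.307 * dm) * (1.0 - np.exp(-3.0 * dm))
bright = quad(pnlf, 0.0, 2.0)[0]
total = quad(pnlf, 0.0, 8.0)[0]
frac = bright / total  # ~0.07, broadly consistent with "a tenth"

print(round(n_total), round(frac, 2))
```

Under these assumptions one expects only of order ten bright halo PNe, consistent with none having been found so far among our small sample.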
The number could be even smaller considering that, in old stellar populations, AGB stars with core masses $\le$0.55~M$_\odot$\ may not produce PNe, either because the post-AGB evolution of the core is too slow to ``light up'' the nebulae before they disperse, or because stars escape the AGB phase \citep{bu06}. Given the complexity of the outer regions of M31 and the small number of PNe available, our interpretation should be considered speculative. However, it fits well into the modern view of a rich interaction and merger history for M31, providing independent support to the results obtained using other classes of stars, and adding complementary information about chemical elements that are best studied in ionized nebulae. \acknowledgments The results of this paper are based on observations made with (1) the Gran Telescopio Canarias (GTC), installed at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'\i sica de Canarias, on the island of La Palma and (2) the 3.5~m telescope at Apache Point Observatory in Sunspot, New Mexico. The Apache Point Observatory 3.5-meter telescope is owned and operated by the Astrophysical Research Consortium. R.L.M.C. acknowledges funding from the Spanish AYA2012-35330 grant. B.B., K.B.K., and R.B.C.H. are grateful to our institutions and to the NSF for support under grants AST-0806490, AST-0808201, AST-0806577, respectively. This research has made use of the USNOFS Image and Catalogue Archive operated by the United States Naval Observatory, Flagstaff Station (\anchor{http://www.nofs.navy.mil/data/fchpix/}{http://www.nofs.navy.mil/data/fchpix/}). {\it Facilities:} \facility{GTC (OSIRIS)}; \facility{APO (DIS)}; \facility{SDSS SkyServer}
\section{Theoretical calculations} Events with an isolated photon (prompt photon) are an important tool to study hard interaction processes since such photons emerge without undergoing hadronisation. In particular, final states with a prompt photon together with a jet are directly sensitive to the quark content of the proton through the elastic scattering of a photon by a quark, $\gamma q\to\gamma q$ (see Fig.~1). However, QCD contributions to this lowest-order process lead to significant sensitivity to the gluon structure function. In particular, a contribution to prompt-photon events from the $gq \to q\gamma$ process, in which the photon displays a hadronic structure (resolved process), is important \cite{pr:d52:58,pr:d64:14017,ejp:c21:303}. Thus, prompt-photon events can constrain both proton and photon parton densities (PDFs). A number of QCD predictions \cite{pr:d52:58, pr:d64:14017, ejp:c21:303, Lipatov:2005tz} can be confronted with the data. \begin{wrapfigure}{r}{0.35\columnwidth} \centerline{\includegraphics[width=0.25\columnwidth]{chekanov_sergei_prph.fig1.eps}} Figure~1. Lowest-order diagram (Compton scattering) for $\gamma$+jet events in $ep$ collisions. \end{wrapfigure} The next-to-leading order (NLO) calculations based on collinear factorisation and the DGLAP formalism were performed by Krawczyk and Zembrzuski (KZ) \cite{pr:d64:14017} and by Fontanaz, Guillet and Heinrich (FGH)~\cite{ejp:c21:303}. No intrinsic transverse momentum of the initial-state partons in the proton was assumed. The renormalisation scale for such calculations was taken to be $\mu_{R}=E_{T}^{\gamma}$, where $E_{T}^{\gamma}$ is the transverse energy of the photon. In the case of the KZ prediction, the GRV parameterisations for the proton and photon, as well as for the fragmentation function, were used~\cite{zfp:c67:433,*pr:d45:3986,*pr:d46:1973,epj:c14:133}. For the FGH calculation, the MRST01 proton PDF and the AFG02 photon PDF were used~\cite{epj:c14:133,*zfp:c64:621}. 
The latter calculation takes into account higher-order terms in the QCD expansion which have not been considered in the KZ approach. The QCD calculations based on the $k_T$-factorisation~\cite{sovjnp:53:657,*np:b366:135,*np:b360:3} approach were performed by A.~Lipatov and N.~Zotov (LZ)~\cite{Lipatov:2005tz}. The unintegrated quark and gluon densities of the proton and photon were obtained using the Kimber-Martin-Ryskin (KMR) prescription \cite{Kimber:2001sc,*Watt:2003mx}. As for the NLO QCD, both the direct and the resolved contributions were taken into account. For all the calculations discussed above, an isolation requirement $E_{T}^{\gamma}>0.9\, E_{T}^{tot}$ was used, where $E_{T}^{tot}$ is the total energy of the jet which contains the prompt photon. Jets were reconstructed with the longitudinally-invariant $k_{T}$ algorithm in inclusive mode \cite{pr:d48:3160,*np:406:187}. The $\gamma$+jet cross sections were corrected for hadronisation effects using a Monte Carlo (MC) simulation. \section{Event reconstruction} Each jet, reconstructed from energy-flow objects (EFOs), was classified as either a photon candidate or a hadronic jet. The photon-candidate jet was required to consist of EFOs without associated tracks and to be within the central tracking detector, $-0.74<\eta^{\gamma}<1.1$. For this jet, $E_{\rm EMC}/E_{\rm tot}>0.9$ is required, where $E_{\rm EMC}$ is the energy reconstructed in the electromagnetic part of the CAL and $E_{\rm tot}$ is the total energy of this jet. After correction for energy losses, the cut $E_{T}^{\gamma}>5{\,\text{Ge}\eVdist\text{V\/}}$ was applied. \begin{center} \vspace{-0.5cm} \begin{minipage}[c]{0.48\textwidth} \includegraphics[width=6.8cm,angle=0]{chekanov_sergei_prph.fig2.eps} \label{theta1} \end{minipage} \hfill \begin{minipage}[c]{.48\textwidth} \includegraphics[width=6.8cm,angle=0]{chekanov_sergei_prph.fig3.eps} \label{theta2} \end{minipage} \end{center} \vspace{-0.5cm} Figure~1. 
The differential $\gamma$+jet cross sections as functions of $E_{T}$ and $\eta$ of the prompt photon and the jet, as described in the figure. The data are compared to QCD calculations and MC models. The shaded bands correspond to a typical renormalisation-scale uncertainty, which was obtained by varying $\mu_R$ by factors of 0.5 and 2. \vspace{0.5cm} \begin{center} \vspace{-1.0cm} \begin{minipage}[c]{0.48\textwidth} \includegraphics[width=6.1cm,angle=0]{chekanov_sergei_prph.fig4.eps} \label{theta1a} \end{minipage} \hfill \begin{minipage}[c]{.48\textwidth} \includegraphics[width=6.1cm,angle=0]{chekanov_sergei_prph.fig5.eps} \label{theta2a} \end{minipage} \end{center} \vspace{-0.5cm} Figure~2. The $x_{\gamma}^{\mathrm{obs}}$ cross section for $\gamma$+jet events compared to the NLO QCD calculations and MC models for $E_{T}^{\gamma}>5{\,\mathrm{GeV}}$ (left) and $E_{T}^{\gamma}>7{\,\mathrm{GeV}}$ (right). \vspace{0.5cm} Hadronic jets, after correction for energy losses, were selected in the kinematic range $E_{T}^{\rm jet}>6{\,\mathrm{GeV}}$, $-1.6<\eta^{\rm jet}<2.4$. If more than one jet was found within the above kinematic cuts, the jet with the highest $E_{T}^{\rm jet}$ was selected. For the prompt-photon identification, the conversion-probability method was used~\cite{Chekanov:2006un}. In contrast to the shower-profile approach adopted in previous HERA measurements, the present approach uses the probability of conversion of photons to $e^{+}e^{-}$ pairs in detector elements and inactive material (mainly the ZEUS superconducting coil) in front of the barrel calorimeter (BCAL). Since the conversion probability for a single photon is smaller than for multiphoton events arising from neutral meson decays ($\pi^0$, $\eta$, etc.), one can extract the $\gamma$ signal by performing a statistical background subtraction.
To determine the number of charged particles in the photon shower, the ZEUS barrel preshower detector (BPRE) \cite{magill:bpre} located in front of the BCAL was used. The measured output, calibrated in units of minimum ionising particles (mips), is proportional to the energy loss of the incident particle after interaction with inactive material. The response of the BPRE to single isolated photons was verified using deeply virtual Compton scattering events. For the $\gamma$+jet sample, the BPRE signal for the $\gamma$ candidates was fitted using an MC model with and without prompt photons, and the number of events associated with the photon signal was extracted. \section{Results and conclusions} The total cross section for the process $ep\to e+\gamma_{\rm prompt}+\mathrm{jet}+X$ for $0.2<y<0.8$, $Q^{2}<1\,\mathrm{GeV^{2}}$, $5<E_{T}^{\gamma}<16$ GeV, $6<E_{T}^{\rm jet}<17$ GeV, $-0.74<\eta^{\gamma}<1.1$, $-1.6<\eta^{\rm jet}<2.4$ and $E_{T}^{\gamma, \rm (true)}>0.9\, E_{T}^{\gamma}$ was measured to be $ \sigma(ep\to e+\gamma_{\rm prompt}+\mathrm{jet}+X)=33.1\pm 3.0\,(\mathrm{stat.})\,_{-4.2}^{+4.6}(\mathrm{syst.}) \:\mathrm{pb.} $ This value agrees well with the LZ calculation ($30.7^{+3.2}_{-2.7}\,\text{pb}$), but is higher than the NLO QCD predictions ($23.3^{+1.9}_{-1.7}\,\text{pb}$ (KZ) and $23.5^{+1.7}_{-1.6}\,\text{pb}$ (FGH)) and the MC models. The differential cross sections as functions of $E_{T}$ and $\eta$ for the prompt-photon candidates and for the accompanying jets are shown in Figure~1. The MC differential cross sections do not rise as steeply at low $E_{T}^{\gamma}$ as do the data. The KZ NLO prediction describes the data better; however, it underestimates the observed cross section at low $E_{T}^{\gamma}$ and in the forward jet region. The FGH prediction is similar to the KZ NLO one. The LZ prediction based on the $k_T$-factorisation approach gives the best description of the $E_{T}$ and $\eta$ cross sections.
Figure~2(left) shows the distribution of $x_{\gamma}^{\rm obs}$, defined as $\sum_{\gamma, \rm jet} (E_i-P_Z^i)/(2E_e y)$ (the sum runs over the photon candidate and the hadronic jet). The difference between the NLO QCD and the data is mainly concentrated in the resolved-photon region. It is important to verify the level of agreement with NLO when the minimum transverse energy of the detected prompt photons is increased from $5{\,\mathrm{GeV}}$ to $7{\,\mathrm{GeV}}$. In comparison with previous measurements, such a choice may emphasise different aspects of the contributions of higher-order QCD radiation, since the transverse energy of the prompt photon is larger than that of the jet. Figure~2(right) shows the corresponding $x_{\gamma}^{\rm obs}$ distribution. For the $E_{T}^{\gamma}>7{\,\mathrm{GeV}}$ cut, both the NLO QCD and the LZ predictions agree well with the data. There is also good agreement for the $E_T$ and $\eta$ kinematic variables~\cite{Chekanov:2006un}. {\it Acknowledgements.} I thank M.~Fontannaz, G.~Heinrich, M.~Krawczyk, A.~Lipatov, N.~Zotov and A.~Zembrzuski for discussions and for providing the QCD calculations. \bibliographystyle{./l4z_default} \def\Large\bf References{\Large\bf References} \pagestyle{plain}
\section*{Introduction} Data science in general, and signal and image processing in particular, rely on mathematical methods, with the fast Fourier transform as the most prominent example. Besides its favourable computational complexity, its success relies on the good approximation of smooth functions by trigonometric polynomials. Mainly driven by specific applications, functions with additional properties together with their computational schemes have gained some attention: signals might for instance be sparse, as in single-molecule fluorescence microscopy \cite{moerner03}, or live on some other lower-dimensional structure like microfilaments, again in bio-imaging. Such properties are well modelled by measures, which can express the underlying structure through the geometry of their support, \textit{e.g.\ } being discrete or singular continuous. This representation has in particular led to a better understanding of the sparse super-resolution problem \cite{candes13,denoyelle17,ehler19}, but has also proven useful in many more applications, such as phase retrieval in X-ray crystallography, or contour reconstruction in natural images. In this work, we consider measures supported on the torus. The available data then consists of trigonometric moments of low to moderate order, and one asks for the reconstruction or approximation of the measures. \paragraph{Related work} For discrete measures, there is a large variety of methods that compute or approximate the parameters of the measure, e.g., parametric methods like Prony's method \cite{prony1795,PlTa14,kunis16,josz19,sauer17}, matrix pencil \cite{HuSa90,Moitra_15,ehler19}, ESPRIT \cite{roy89,andersson18,sahnoun17,li20} or MUSIC \cite{schmidt86,liao14}, or variational methods, such as TV-minimization via the Beurling LASSO \cite{deCastro12,candes13}, which can be challenging for higher spatial dimensions \cite{castro17,poon18} or larger polynomial degrees.
The positive-dimensional case on the other hand is more involved. Specific curves in a two-dimensional domain are identified by the kernel of moment matrices in \cite{pan14,vetterli2016,ongie16}; more general discussions can be found in \cite{laurentrostalski2012} and \cite{wageringel2022:truncated}. In another line of work, Christoffel functions offer interesting guarantees both in terms of support identification \cite{lasserre19} and approximation \emph{on the support} \cite{kroo12,marx21,pauwels20}, but, to the best of our knowledge, require strong regularity assumptions, and only come with separate guarantees on and outside the support of the measure. \paragraph{Contributions} Following the seminal paper \cite{Mh19}, we introduce easily computable trigonometric polynomials to approximate an arbitrary measure on the $d$-dimensional torus. In contrast to \cite{Mh19}, we provide tight bounds on the pointwise approximation error as well as on the error with respect to the Wasserstein-1 distance, the latter scaling inversely with the polynomial degree (up to a logarithmic factor). After setting up the notation, \cref{sec:approx} considers the approximation of measures by trigonometric polynomials. \cref{Thm_existence} proves the existence of a best approximation and provides a lower bound which is attained in the univariate case and is sharp up to a factor $6d$ for spatial dimensions $d>1$. The convolution of the measure with the Fejér kernel has a representation via the moment matrix of the measure and is shown to be a sum of squares for non-negative measures in \cref{thm:Tn}. \cref{thm:W1p} proves a sharp upper bound for its approximation error, which is a $\log$-factor worse than the best approximation. \cref{Thm_lower_bound} and \cref{Rem_Jackson} discuss the saturation of this approximation and the removal of the $\log$-factor by using the Jackson kernel.
In the univariate case, the Wasserstein-1 distance of measures is realized as an $L^1$-norm after convolution with the Bernoulli spline of degree~$1$, and this also allows for the uniqueness of the best approximation for absolutely continuous real measures. Section \ref{sec:interp} studies another sum-of-squares trigonometric polynomial defined via the moment matrix of the measure, similarly suggested in \cite[Thm.~3.5]{kunis16} and \cite[Prop.~5.3]{ongie16} (and indeed closely related to the \emph{rational} function \cite[Eq.~(6)]{schmidt86}). This polynomial interpolates the constant-one function on the Zariski closure of the support of the measure and converges pointwise to zero outside. \cref{thm:p1characterization} proves a variational characterisation as well as the interpolation property. The pointwise convergence is proved in \cref{Thm_pointwise_conv} and \cref{thm:pointw_pos} for the discrete and the singular continuous case, respectively. The discrete case also allows for a weak convergence result in \cref{thm:p1weak}. We end by illustrating the theoretical results by numerical examples in \cref{sec:num}. \section{Preliminaries} Let $d\in\ensuremath{\mathbb{N}}$, $1\le p\le \infty$ and let $|x-y|_p = \min_{k\in\ensuremath{\mathbb{Z}}^d} \norm{x-y+k}_p$ denote the wrap-around $p$-norm on $\ensuremath{\mathbb{T}}^d=[0,1)^d$. For $d=1$ these wrap-around distances coincide and we denote them by $|x-y|_1$ to distinguish them from the absolute value. Throughout this paper, let $\mu,\nu$ denote some complex Borel measures on $\ensuremath{\mathbb{T}}^d$ with finite total variation and normalization $\mu(\ensuremath{\mathbb{T}}^d)=\nu(\ensuremath{\mathbb{T}}^d)=1$. We denote the set of all such measures by $\mathcal{M}$ and restrict to the real signed and non-negative case by $\mathcal{M}_{\ensuremath{\mathbb{R}}}$ and $\mathcal{M}_+$, respectively.
A function has Lipschitz-constant at most $1$ if $|f(x)-f(y)|\le |x-y|_1$ for all $x,y\in\ensuremath{\mathbb{T}}^d$ and we denote this by the shorthand $\Lip(f)\le 1$. Using the dual characterisation by Kantorovich-Rubinstein, the Wasserstein-1-distance of $\mu$ and $\nu$ is defined by \begin{align*} W_1(\nu,\mu) =\sup_{\Lip(f)\leq 1} \left|\int_{\ensuremath{\mathbb{T}}^d} f(x) \;\mathrm{d}(\nu-\mu)\left(x\right)\right|, \end{align*} for any $\mu,\nu\in\mathcal{M}$; measures $\mu,\nu\in\mathcal{M}_+$ also admit the primal formulation \begin{align*} W_1(\nu,\mu)=\inf_{\pi} \int_{\ensuremath{\mathbb{T}}^{2d}} |x-y|_1 \mathrm{d} \pi(x,y), \end{align*} where the infimum is taken over all couplings $\pi$ with marginals $\mu$ and $\nu$, respectively. We note in passing that the Wasserstein-1-distances for other $p$-norms on $\ensuremath{\mathbb{T}}^d$ are equivalent with lower and upper constants $1$ and $d^{1-1/p}$, respectively. Moreover, the Wasserstein distance defines a metric induced by the norm \begin{align*} \|\mu\|_{\Lip^*}=\sup_{f: \Lip(f)\leq 1, \|f\|_{\infty}\leq \frac{d}{2}} \left|\int_{\ensuremath{\mathbb{T}}^d} f(x) \mathrm{d}\mu(x)\right|, \end{align*} which makes the space of Borel measures with finite total variation a Banach space. By slight abuse of notation, we also write $W_1(p,\mu)$ in case the measure $\nu$ has density $p$, i.e., $\mathrm{d}\nu(x)=p(x)\mathrm{d} x$. The Fourier coefficients or trigonometric moments of $\mu$ are given by \begin{align*} \hat\mu(k)=\int_{\ensuremath{\mathbb{T}}^d} \eim{kx}\mathrm{d}\mu(x),\qquad k\in\ensuremath{\mathbb{Z}}^d, \end{align*} and these are finite with $|\hat\mu(k)|\le \TV{\mu}$ and $\hat\mu(0)=1$. We are interested in the reconstruction of the measure given these moments for indices $k\in\{-n,\hdots,n\}^d$.
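As a small illustration of these definitions, the trigonometric moments of a discrete measure can be computed directly from its nodes and weights; the following is a minimal pure-Python sketch with hypothetical nodes and weights of total mass one.

```python
import cmath

def moments(nodes, weights, n):
    """Trigonometric moments mu_hat(k) = sum_j w_j exp(-2 pi i k x_j)
    of the discrete measure mu = sum_j w_j delta_{x_j} on T, for |k| <= n."""
    return {k: sum(w * cmath.exp(-2j * cmath.pi * k * x)
                   for x, w in zip(nodes, weights))
            for k in range(-n, n + 1)}

# hypothetical discrete measure with mu(T) = 1
nodes, weights = [0.1, 0.35, 0.8], [0.5, 0.3, 0.2]
mu_hat = moments(nodes, weights, 8)

assert abs(mu_hat[0] - 1) < 1e-12                          # mu_hat(0) = mu(T) = 1
assert all(abs(c) <= 1 + 1e-12 for c in mu_hat.values())   # |mu_hat(k)| <= |mu|_TV
```

Since the weights are real, the moments also satisfy the conjugate symmetry $\hat\mu(-k)=\overline{\hat\mu(k)}$.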
Besides collecting them in a vector, we also set up the finite moment matrix \begin{align}\label{eq:moment-matrix} T_n=\left(\hat\mu(k-\ell)\right)_{k,\ell\in [n]},\qquad [n]=\{0,\hdots,n\}^d, \end{align} and denote its singular value decomposition by $T_n=U_n \Sigma_n V_n^*$, $\Sigma_n=\operatorname{diag}\big(\sigma_j^{(n)}\big)_j$. From the data stored in $T_n$, we compute trigonometric approximations $q_n$ to the underlying measure and distinguish between pointwise convergence to the characteristic function of $\operatorname{supp} \mu$, i.e. \begin{align*} \lim_{n\to\infty} q_n(x) = \begin{cases} 1, \quad & x\in\operatorname{supp}\mu, \\ 0, \quad & \text{else,} \end{cases} \end{align*} and weak convergence, i.e. \begin{align*} \lim_{n\to\infty} \int_{\ensuremath{\mathbb{T}}^d} f(x) q_n(x) \mathrm{d} x = \int_{\ensuremath{\mathbb{T}}^d} f(x) \mathrm{d}\mu(x) \end{align*} for all continuous test functions $f\colon \ensuremath{\mathbb{T}}^d\to \ensuremath{\mathbb{C}} $. The latter is denoted by $q_n\rightharpoonup \mu$ as $n\to\infty$ and the space of test functions can be restricted to Lipschitz continuous test functions by Portmanteau's theorem. Moreover, $q_n\rightharpoonup \mu$ is equivalent to $\lim_{n\to\infty} W_1(q_n,\mu)=0$ on the bounded set $\ensuremath{\mathbb{T}}^d$ and we can quantify rates of weak convergence in terms of the Wasserstein distance (e.g.\,cf.\,\cite[Thm.\,6.9]{Villani_09}). \section{Approximation}\label{sec:approx} We give two introductory examples, prove a lower bound on a best approximation, and an upper bound on the easily computable approximation by convolution with the Fejér kernel. 
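For a non-negative discrete measure, the moment matrix \cref{eq:moment-matrix} is Hermitian and positive semi-definite (this is proved in \cref{thm:Tn} below); a quick numerical sanity check for $d=1$, again with hypothetical nodes and weights:

```python
import cmath
import random

def moment_matrix(nodes, weights, n):
    """T_n = (mu_hat(k - l))_{k,l = 0..n} for mu = sum_j w_j delta_{x_j} (d = 1)."""
    mh = lambda k: sum(w * cmath.exp(-2j * cmath.pi * k * x)
                       for x, w in zip(nodes, weights))
    return [[mh(k - l) for l in range(n + 1)] for k in range(n + 1)]

T = moment_matrix([0.2, 0.55], [0.6, 0.4], 6)
m = len(T)
# Hermitian: T[k][l] = conj(T[l][k])
assert all(abs(T[k][l] - T[l][k].conjugate()) < 1e-12
           for k in range(m) for l in range(m))
# positive semi-definite: q^* T_n q = int |sum_k q_k e(k y)|^2 dmu >= 0
random.seed(0)
for _ in range(50):
    q = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(m)]
    val = sum(q[k].conjugate() * T[k][l] * q[l] for k in range(m) for l in range(m))
    assert val.real > -1e-10 and abs(val.imag) < 1e-10
```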
\begin{example}\label{ex:1} Our first example for $d=1$ is the measure \begin{align}\label{eq:ex1} \mu=\frac13\delta_{\frac18}+\nu\in\mathcal{M}_+,\quad \frac{\mathrm{d}\nu}{\mathrm{d}\lambda}(x)=\frac89\chi_{\left[\frac14,\frac58\right]}(x)+\frac{\sqrt{2}}{3}\left( \frac{1}{\sqrt{\abs{x-\frac78}}} - \sqrt{8}\right)\chi_{\left[\frac34,1\right]}(x), \end{align} where $\lambda$ denotes the Lebesgue measure, which obviously has singular and absolutely continuous parts including an integrable pole at $x=\frac78$. Given the first trigonometric moments, the Fourier partial sums \begin{align*} S_n\mu(x)=\sum_{k\in\ensuremath{\mathbb{Z}}^d, |k|\le n} \hat \mu(k) \eip{kx} \end{align*} might serve as a sequence of approximations. Another classical sequence of approximations is given by convolution with the Fejér kernels, see \cref{eq:pn} below. Both approximations for $n=19$ are shown in the left and right panel of \cref{fig:ExampleMeasure}, respectively. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{ExampleMeasure1.eps} \includegraphics[width=0.4\textwidth]{ExampleMeasure2.eps} \caption{The example measure \cref{eq:ex1} and its approximations by the Fourier partial sum (left) and the convolution with the Fejér kernel (right). The weight $\frac13$ of the Dirac measure in $\frac18$ is displayed by an arrow of height $n/3$ for visibility.} \label{fig:ExampleMeasure} \end{figure} Our second example is a singular continuous measure for $d=2$. We take $\mu=(2\pi r_0)^{-1}\delta_{C}\in\mathcal{M}_+$ as the uniform measure on the circle \begin{align*} C=\{x\in\ensuremath{\mathbb{T}}^2: |x|_2=r_0\} \end{align*} for some radius $0<r_0<\frac{1}{2}$. The total variation of this measure is \begin{align*} \TV{\mu} = \hat\mu(0)=\int_{\ensuremath{\mathbb{T}}^2} \mathrm{d} \mu(x) = \frac{1}{2\pi r_0} \int_{C} \mathrm{d} x = 1. 
\end{align*} Using the Poisson summation formula and a well-known representation of the Fourier transform of a radial function, we find \begin{align}\label{eq:circlemoments} \hat{\mu}(k)&=\int_{\ensuremath{\mathbb{T}}^2} \eim{kx} \mathrm{d}\mu(x)= \frac{1}{r_0} \int_{0}^{\infty} r J_0(2\pi r \|k\|_2) \mathrm{d} \delta_{r_0}(r) = J_0(2\pi r_0 \|k\|_2) \end{align} for the trigonometric moments of $\mu$, where $J_0$ denotes the 0-th order Bessel function of the first kind. These decay asymptotically with rate $\|k\|_2^{-1/2}$. The Fourier partial sum as well as the convolution with the Fejér kernel for $n=29$ are shown with maximal contrast in the left and right panel of \cref{fig:ExampleMeasure2d}, respectively. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{ExampleMeasure2d1.eps} \includegraphics[width=0.4\textwidth]{ExampleMeasure2d2.eps} \caption{Uniform measure on a circle of radius $r_0=\frac13$ and its approximations by the Fourier partial sum (left) and the convolution with the Fejér kernel (right).} \label{fig:ExampleMeasure2d} \end{figure} \end{example} \begin{thm} \label{Thm_existence} For any $d,n\in\ensuremath{\mathbb{N}}$ and for every $\mu\in\mathcal{M}$ there exists a polynomial of best approximation in the Wasserstein-1 distance. Moreover, we have \begin{align*} \sup_{\mu\in\mathcal{M}} \min_{\deg(p)\leq n} W_1(p,\mu) \geq \frac{1}{4(n+1)}. \end{align*} \end{thm} \begin{proof} We directly have existence of a best approximation by polynomials in the Banach space of Borel measures with finite total variation (e.g.\,cf.\,\cite[Thm.\,3.1.1]{DeVore93}). 
For the lower bound, we compute \begin{align*} \sup_{\mu\in\mathcal{M}} \inf_{\deg(p)\leq n} W_1(p,\mu) &\geq \inf_{\deg(p)\leq n} W_1(p,\delta_0) \\ &= \inf_{\deg(p)\leq n} \sup_{f: \Lip(f)\leq 1} \left|f(0)-\int_{\ensuremath{\mathbb{T}}^d} f(x)p(x) \mathrm{d} x\right| \\ &=\inf_{\deg(p)\leq n} \sup_{\genfrac{}{}{0pt}{}{f: \|f\|_{\infty}\leq \frac{d}{2}}{\Lip(f)\leq 1}} \left\|f-\check{p}*f\right\|_{\infty} \\ &\geq \sup_{\genfrac{}{}{0pt}{}{f: \|f\|_{\infty}\leq \frac{d}{2}}{\Lip(f)\leq 1}} \inf_{\deg(p)\leq n} \|f-p\|_{\infty}, \end{align*} where $\check{p}$ denotes the reflection of $p$. It remains to find the worst-case error for the best approximation of a Lipschitz function by a trigonometric polynomial. This is well understood for $d=1$ (cf.\,\cite{AK1937,Favard1937}), while we did not find a reference showing that, and how, the case $d>1$ can be treated as well. Therefore, we show that the idea by \cite{Fisher77} for the case $d=1$ works also for $d>1$ in our situation. A main ingredient of Fisher's proof is the duality relation \begin{align*} \inf_{x\in Y\subset X} \|x_0-x\| = \sup_{\genfrac{}{}{0pt}{}{\ell\in X^*}{\ell|_Y=0, \|\ell\|_{X^*}\leq 1} } |\ell(x_0)| \end{align*} for a Banach space $X$, $x_0\in X$, with subset $Y$ and dual space $X^*$. The second ingredient is given by the 1-periodic Bernoulli spline of degree 1 \begin{align} \label{eq_Bernoulli_spline} \mathcal{B}_1(x)=\sum_{k\in\ensuremath{\mathbb{Z}}\setminus\{0\}} \frac{\eip{kx}}{2\pi\mathit{i} k} = \sum_{k=1}^\infty \frac{\sin(2\pi k x)}{\pi k} = \frac12-x \end{align} for $0< x \leq 1$. A Lipschitz continuous and 1-periodic function $f\colon \ensuremath{\mathbb{T}}\to \ensuremath{\mathbb{R}}$ with $\Lip(f)\leq 1$ has a derivative $f'$ almost everywhere and this derivative satisfies $\int_{\ensuremath{\mathbb{T}}} f'(s)\,\mathrm{d} s=0$ by the periodicity of $f$.
Therefore, it follows that \begin{align*} \left( f'*\mathcal{B}_1\right)(t)&=\int_{\ensuremath{\mathbb{T}}} f'(s) \mathcal{B}_1(t-s) \mathrm{d} s \\ &= -\int_0^{t} (t-s) f'(s) \mathrm{d} s -\int_{t}^1 (t-s+1) f'(s) \mathrm{d} s \\ &=f(t)-\int_0^1 f(s) \mathrm{d} s \end{align*} for $0<t,s\leq 1$. The dual space of the space of continuous periodic functions is the space of periodic finite regular Borel measures equipped with the total variation norm and the duality formulation gives \begin{align*} \sup_{\genfrac{}{}{0pt}{}{f:\ensuremath{\mathbb{T}}^d\to \ensuremath{\mathbb{R}}}{\|f\|_{\infty}, \Lip(f)\leq 1}}\inf_{\deg(p)\leq n} \|f-p\|_{\infty} &= \sup_{\genfrac{}{}{0pt}{}{f:\ensuremath{\mathbb{T}}^d\to \ensuremath{\mathbb{R}}}{\|f\|_{\infty}, \Lip(f)\leq 1}}\sup_{\genfrac{}{}{0pt}{}{\hat{\mu}(k) =0, \|k\|_{\infty}\leq n}{\TV{\mu}\leq 1} } \left|\int_{\ensuremath{\mathbb{T}}^d} f(x) \mathrm{d} \mu(x)\right|. \end{align*} Our main contribution to this result is the observation how to transfer the multivariate setting back to the univariate one. It is easy to verify that $f(x)=\frac{1}{d}\sum_{\ell=1}^d f_0(x_\ell)$ for a univariate Lipschitz function $f_0$, $\Lip(f_0)\leq d$, $\|f_0\|_{\infty}\leq \frac{d}{2}$ fulfils the conditions for the Lipschitz function $f$. Additionally, $\mu^*=\frac{1}{d}\sum _{s=1}^d \mu_s$ with $\mu_s=\left(\bigotimes_{\ell\neq s} \lambda(x_\ell)\right) \otimes \mu_0^*(x_s)$, \begin{align*} \mu_0^*(x_s)=\frac{1}{2(n+1)} \sum_{j=0}^{2n+1} (-1)^j \delta_{j/(2n+2)}(x_s) \end{align*} and $\lambda$ being the Lebesgue measure on $\ensuremath{\mathbb{T}}$ is admissible. 
Since this choice of $\mu_s$ integrates $\int g \mathrm{d}\mu_s=0$ if $g$ is constant with respect to $x_s$ (and the same holds for constant univariate functions integrated against $\mu_0^*$), we obtain \begin{align*} \sup_{\genfrac{}{}{0pt}{}{f:\ensuremath{\mathbb{T}}^d\to \ensuremath{\mathbb{R}}}{\|f\|_{\infty}, \Lip(f)\leq 1}}\inf_{\deg(p)\leq n} \|f-p\|_{\infty} &\geq \sup_{\genfrac{}{}{0pt}{}{f_0:\ensuremath{\mathbb{T}}\to \ensuremath{\mathbb{R}}}{\|f_0\|_{\infty}\leq 1, \Lip(f_0)\leq d}} \left|\frac{1}{d^2} \sum_{s,\ell=1}^d \int_{\ensuremath{\mathbb{T}}^d} f_0(x_\ell) \mathrm{d} \mu_s(x)\right| \\ &=\sup_{\genfrac{}{}{0pt}{}{f_0:\ensuremath{\mathbb{T}}\to \ensuremath{\mathbb{R}}}{\|f_0\|_{\infty}\leq 1, \Lip(f_0)\leq d}} \left|\frac{1}{d^2} \sum_{\ell=1}^d \int_{\ensuremath{\mathbb{T}}} f_0(x_\ell) \mathrm{d} \mu_0^*(x_\ell)\right| \\ &=\sup_{\genfrac{}{}{0pt}{}{f_0:\ensuremath{\mathbb{T}}\to \ensuremath{\mathbb{R}}}{\|f_0\|_{\infty}\leq 1, \Lip(f_0)\leq d}} \frac{1}{d} \left| \int_{\ensuremath{\mathbb{T}}} f_0'(s) \left(\int_{\ensuremath{\mathbb{T}}}\mathcal{B}_1(t-s) \mathrm{d} \mu_0^*(t)\right) \mathrm{d} s\right|. \end{align*} We denote $\mathcal{B}_{\mu*}(s)= \int_{\ensuremath{\mathbb{T}}}\mathcal{B}_1(t-s) \mathrm{d} \mu_0^*(t)$ and observe $\int_{\ensuremath{\mathbb{T}}} \mathcal{B}_{\mu*}(s) \mathrm{d} s=0$. Moreover, $\mu_0^*$ has moments $\hat{\mu}_0^*(k) = 1$ for $k\in (n+1)\left(2\ensuremath{\mathbb{Z}}+1\right)$ and $\hat{\mu}_0^*(k)=0$ otherwise. Together with the Fourier representation \eqref{eq_Bernoulli_spline} of $\mathcal{B}_1$ this gives \begin{align*} \mathcal{B}_{\mu*}(s) &= \sum_{k\in\ensuremath{\mathbb{Z}}\setminus\{0\}} \frac{\eip{k(n+1)x}}{2\pi\mathit{i} k(n+1)} - \sum_{k\in\ensuremath{\mathbb{Z}}\setminus\{0\}} \frac{\eip{2k(n+1)x}}{2\pi\mathit{i} 2k(n+1)} \\ &= \frac{1}{n+1} \mathcal{B}_1((n+1)s)- \frac{1}{2n+2} \mathcal{B}_1((2n+2)s). 
\end{align*} Hence, taking $f'_0(s)=\pm d$ depending on the sign of $\mathcal{B}_{\mu*}(s)$ is possible as $\|f_0\|_{\infty}\leq \frac{d}{2(n+1)}< \frac{d}{2}$ with this choice. Finally, we end up with \begin{align*} \sup_{\genfrac{}{}{0pt}{}{f:\ensuremath{\mathbb{T}}^d\to \ensuremath{\mathbb{R}}}{\|f\|_{\infty}, \Lip(f)\leq 1}}\inf_{\deg(p)\leq n} \|f-p\|_{\infty} & \geq \int_{\ensuremath{\mathbb{T}}} \left| \mathcal{B}_{\mu*}(s)\right|\mathrm{d} s \\ &= \frac{1}{n+1} \int_{\ensuremath{\mathbb{T}}} \left|\mathcal{B}_1((n+1)s)-\frac12\mathcal{B}_1((2n+2)s)\right| \mathrm{d} s \\ &= \frac{1}{n+1} \int_{\ensuremath{\mathbb{T}}} \left|\mathcal{B}_1(s)-\frac{1}{2}\mathcal{B}_1(2s)\right| \mathrm{d} s \\ &= \frac{1}{4(n+1)} \end{align*} and this was the claim. \end{proof} Let the Fejér kernel $F_n\colon\ensuremath{\mathbb{T}}\to\ensuremath{\mathbb{R}}$ and by slight abuse of notation also its multivariate version be given by \begin{align*} F_n(x)=\sum_{k=-n}^n \left(1-\frac{|k|}{n+1}\right)\eip{kx} =\frac{1}{n+1}\left(\frac{\sin(n+1)\pi x}{\sin\pi x}\right)^2 \end{align*} and $F_n(x_1,\hdots,x_d)=F_n(x_1)\cdot\hdots\cdot F_n(x_d)$, respectively. The main object of study now is the approximation \begin{align}\label{eq:pn} p_n(x) &=\left(F_n*\mu\right)(x) =\int_{\ensuremath{\mathbb{T}}^d} F_n(x-y) \mathrm{d}\mu(y), \end{align} two examples are given in \cref{ex:1}. We start by noting that the $p_n$ can be expressed in terms of the moment matrix and preserves non-negativity and normalization. 
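The two expressions for the Fejér kernel above, the truncated Fourier series and the closed $\sin^2$ form, can be cross-checked numerically; a small pure-Python sketch (the evaluation points are arbitrary):

```python
import math
import cmath

def fejer_sum(n, x):
    """Fejer kernel via its Fourier series sum_{|k|<=n} (1 - |k|/(n+1)) e(kx)."""
    return sum((1 - abs(k) / (n + 1)) * cmath.exp(2j * math.pi * k * x)
               for k in range(-n, n + 1)).real

def fejer_closed(n, x):
    """Fejer kernel via the closed form sin^2((n+1) pi x) / ((n+1) sin^2(pi x))."""
    s = math.sin(math.pi * x)
    if abs(s) < 1e-15:                 # removable singularity at integer x
        return n + 1.0
    return (math.sin((n + 1) * math.pi * x) / s) ** 2 / (n + 1)

for n in (1, 4, 9):
    for x in (0.0, 0.1, 0.37, 0.5, 0.93):
        assert abs(fejer_sum(n, x) - fejer_closed(n, x)) < 1e-9
```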
\begin{lemma}\label{thm:Tn} Let $d,n\in\ensuremath{\mathbb{N}}$, $\mu\in\mathcal{M}$, and the moment matrix $T_n$ in \cref{eq:moment-matrix} be given, then the approximation \cref{eq:pn} fulfils \begin{align} p_n(x) &=\frac{e_n(x)^* T_n e_n(x)}{(n+1)^d},\qquad &e_n(x)&=\left(\eim{kx}\right)_{k\in[n]},\nonumber \\ &=\frac{1}{(n+1)^d} \sum_{j=1}^{(n+1)^d} \sigma^{(n)}_j u^{(n)}_j(x)\overline{v^{(n)}_j(x)},\qquad &u^{(n)}_j(x)&=\left(e_n(x)^* U_n\right)_j,\;v^{(n)}_j(x)=\left(e_n(x)^* V_n\right)_j. \nonumber \intertext{If $\mu\in\mathcal{M}_{\ensuremath{\mathbb{R}}}$, then the moment matrix is Hermitian. If $\mu\in\mathcal{M}_+$, then the moment matrix is positive semi-definite, $\|p_n\|_{L^1}=\TV{\mu}=1$, and} p_n(x)&=\frac{1}{(n+1)^d} \sum_{j=1}^{(n+1)^d} \sigma^{(n)}_j \left|u^{(n)}_j(x)\right|^2. \label{eq:pnuj} \end{align} \end{lemma} \begin{proof} Let $q\in\ensuremath{\mathbb{C}}^{(n+1)^d}$, then direct computation shows \begin{align*} q^* T_n q = \sum_{k,\ell\in [n]} \overline{q_k} \left(\int_{\ensuremath{\mathbb{T}}^d} \eim{(k-\ell)y}\mathrm{d}\mu(y)\right) q_\ell =\int_{\ensuremath{\mathbb{T}}^d} \left|\sum_{k\in [n]} q_k \eip{ky}\right|^2 \mathrm{d}\mu(y), \end{align*} which is non-negative for $\mu\in\mathcal{M}_+$. Choosing $q=e_n(x)$ yields the representation of $p_n$, and by interchanging the order of integration and noting that the value of the inner integral is independent of $y$ also \begin{align*} \|p_n\|_{L^1} =\frac{1}{(n+1)^{d}} \int_{\ensuremath{\mathbb{T}}^d} \int_{\ensuremath{\mathbb{T}}^d} \left|\sum_{k\in [n]} \eip{k(y-x)}\right|^2 \mathrm{d} x \mathrm{d}\mu(y) =\mu(\ensuremath{\mathbb{T}}^d)=\TV{\mu}. \end{align*} \end{proof} Our next goal is a quantitative approximation result, for which we need the following preparatory lemma. This result can be found in qualitative form e.g.~in \cite[Lemma 1.6.4]{BuNe71}.
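The quadratic-form representation in \cref{thm:Tn} can also be checked numerically against the defining convolution \cref{eq:pn}; the following pure-Python sketch does this for $d=1$ and a hypothetical two-point measure.

```python
import math
import cmath

def fejer(n, t):
    """Fejer kernel F_n(t) = sin^2((n+1) pi t) / ((n+1) sin^2(pi t))."""
    s = math.sin(math.pi * t)
    return n + 1.0 if abs(s) < 1e-15 else (math.sin((n + 1) * math.pi * t) / s) ** 2 / (n + 1)

nodes, weights, n = [0.15, 0.6], [0.7, 0.3], 7
mh = lambda k: sum(w * cmath.exp(-2j * math.pi * k * x) for x, w in zip(nodes, weights))
T = [[mh(k - l) for l in range(n + 1)] for k in range(n + 1)]  # moment matrix T_n

def p_quadform(x):
    """p_n(x) = e_n(x)^* T_n e_n(x) / (n+1) with e_n(x) = (e(-kx))_k."""
    e = [cmath.exp(-2j * math.pi * k * x) for k in range(n + 1)]
    return sum(e[k].conjugate() * T[k][l] * e[l]
               for k in range(n + 1) for l in range(n + 1)).real / (n + 1)

def p_convolution(x):
    """p_n(x) = (F_n * mu)(x) = sum_j w_j F_n(x - x_j)."""
    return sum(w * fejer(n, x - xj) for xj, w in zip(nodes, weights))

for x in (0.0, 0.15, 0.3, 0.77):
    assert abs(p_quadform(x) - p_convolution(x)) < 1e-9
```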
\begin{lemma}\label{le:Fn} Let $n,d\in\ensuremath{\mathbb{N}}$, then we have \begin{align*} \frac{d}{\pi^2} \left(\frac{\log(n+2)}{n+1}+\frac{1}{n+3}\right) \leq \int_{\ensuremath{\mathbb{T}}^d} F_n(x) |x|_1 \mathrm{d} x \leq \frac{d}{\pi^2}\frac{\log(n+1) + 3}{n}. \end{align*} \end{lemma} \begin{proof} First note that \begin{align*} \int_{\ensuremath{\mathbb{T}}^d} \prod_{s=1}^d F_n(x_s) \sum_{\ell=1}^d |x_\ell|_1 \mathrm{d} x = \sum_{\ell=1}^d \int_{\ensuremath{\mathbb{T}}^d} \prod_{s=1}^d F_n(x_s) |x_\ell|_1 \mathrm{d} x = d \int_{\ensuremath{\mathbb{T}}} F_n(x) |x|_1 \mathrm{d} x, \end{align*} such that it is sufficient to consider the univariate case. With the representation $F_n(x)=1+2\sum_{k=1}^n \left(1-\frac{k}{n+1}\right) \cos(2\pi k x)$ we find after elementary integration \begin{align*} \int_{\ensuremath{\mathbb{T}}} F_n(x) |x|_1 \mathrm{d} x &= 2 \int_0^{1/2} \left(x+2\sum_{k=1}^n \left(1-\frac{k}{n+1}\right) \cos(2\pi k x)\,x\right) \mathrm{d} x \\ &=2 \left[\frac{1}{8}-\frac{1}{(n+1)\pi^2}\sum_{j=0}^{\lfloor \frac{n}{2} \rfloor} \frac{n-2j}{(2j+1)^2} \right] \\ &=2\left[\frac{1}{\pi^2} \sum_{j=\lfloor \frac{n}{2} \rfloor+1}^{\infty} \frac{1}{(2j+1)^2} + \frac{1}{(n+1)\pi^2} \sum_{j=0}^{\lfloor \frac{n}{2} \rfloor} \frac{1}{2j+1} \right] \\ &\leq \frac{2}{\pi^2} \int_{\lfloor \frac{n}{2} \rfloor}^\infty \frac{1}{(2y+1)^2} \mathrm{d} y + \frac{2}{(n+1)\pi^2} \left(1+ \int_{0}^{\lfloor \frac{n}{2} \rfloor} \frac{1}{2y+1} \mathrm{d} y\right) \\ &\leq\frac{1}{\pi^2n}+ \frac{1}{(n+1)\pi^2} \left(2+ \log(n+1)\right) \\ &\leq \frac{3+\log(n+1)}{\pi^2 n}, \end{align*} since we have that $\sum_{j=0}^{\infty} \frac{1}{(2j+1)^2}=\frac{\pi^2}{8}$.
The lower bound follows similarly by bounding the series from the previous calculation by integrals from below. \end{proof} \begin{thm}\label{thm:W1p} Let $d,n\in\ensuremath{\mathbb{N}}$ and $\mu\in\mathcal{M}$, then the measure with density $p_n$ converges weakly to $\mu$ with \begin{align*} W_1(p_n,\mu)\le \frac{d}{\pi^2}\frac{\log(n+1) + 3}{n} \cdot \TV{\mu} , \intertext{which is sharp since $\mu\in\mathcal{M}_+$ implies $\TV{\mu}=1$ and} \sup_{\mu\in\mathcal{M}} W_1(p_n,\mu) \geq \frac{d}{\pi^2} \left(\frac{\log(n+2)}{n+1}+\frac{1}{n+3}\right). \end{align*} \end{thm} \begin{proof} We compute \begin{align*} W_1(p_n,\mu) &=\sup_{\Lip(f)\leq 1} \left|\langle F_n*\mu,f\rangle-\langle \mu,f\rangle\right|\\ &=\sup_{\Lip(f)\leq 1} \left|\langle \mu,F_n*f-f\rangle\right|\\ &\leq \sup_{\Lip(f)\leq 1} \int_{\ensuremath{\mathbb{T}}^d} \int_{\ensuremath{\mathbb{T}}^d} F_n(x) \left|f(y-x)-f(y)\right| \mathrm{d} x \mathrm{d} |\mu|(y) \\ &\leq \TV{\mu} \int_{\ensuremath{\mathbb{T}}^d} F_n(x) |x|_1 \mathrm{d} x, \end{align*} note that both inequalities become equalities when choosing $\mu=\delta_0$ and $f(x)=|x|_1$, and then apply \cref{le:Fn}. We note in passing that $W_1(F_n,\delta_0)= \int_{\ensuremath{\mathbb{T}}^d} F_n(x) |x|_1 \mathrm{d} x$. \end{proof} \begin{example} \cref{thm:W1p} gives a worst case lower bound and, on the other hand, the Lebesgue measure is approximated by $F_n*\lambda=\lambda$ without any error. We may thus ask how well a measure $\mathrm{d}\mu=w(x) \mathrm{d} x$ with smooth (non-negative) density might be approximated. If we choose the analytic density $w(x)=1+\cos(2\pi x)$, then $F_n*w(x)-w(x)=-\cos(2\pi x)/(n+1)$ and, by testing with the Lipschitz function $f(x)=\cos(2\pi x)/(2\pi)$, we see that \begin{align*} W_1(F_n*w,w) &\ge \frac{1}{2\pi(n+1)}\int_{\ensuremath{\mathbb{T}}}\cos^2(2\pi x)\mathrm{d} x=\frac{1}{4\pi(n+1)}. \end{align*} This effect is called saturation (e.g.\,cf.\,\cite{BuNe71}).
\end{example} In greater generality, such a lower bound holds for each measure individually and can be inferred by a nice relationship between the Wasserstein distance and a discrepancy, cf.~\cite{EhGrNeSt21}. \begin{thm} \label{Thm_lower_bound} For each individual measure $\mu\in\mathcal{M}$ different from the Lebesgue measure, there is a constant $c>0$ such that \begin{align*} W_1(p_n,\mu)\geq \frac{c}{n+1} \end{align*} holds for all $n\in\ensuremath{\mathbb{N}}$. \end{thm} \begin{proof} Let $\hat{h}\in\ell^2(\ensuremath{\mathbb{Z}}^d)$, $\hat{h}(k)\in\ensuremath{\mathbb{R}}\setminus\{0\}$, $\hat{h}(k)=\hat{h}(-k)$, and consider the reproducing kernel Hilbert space \begin{align*} H=\{f\in L^2(\ensuremath{\mathbb{T}}^d): \sum_{k\in\ensuremath{\mathbb{Z}}^d} |\hat{h}(k)|^{-2} |\hat{f}(k)|^2 <\infty\},\qquad \|f\|_{H}^2=\sum_{k\in\ensuremath{\mathbb{Z}}^d} |\hat{h}(k)|^{-2} |\hat{f}(k)|^2. \end{align*} Given two measures $\mu,\nu$, their discrepancy (which also depends on the space $H$) is defined by \begin{align*} \mathcal{D}(\mu,\nu)=\sup_{\|f\|_{H}\leq 1} \left|\int_{\ensuremath{\mathbb{T}}^d} f~\mathrm{d} (\mu- \nu)\right| \end{align*} and fulfils by the geometric-arithmetic inequality \begin{align*} \mathcal{D}(p_n,\mu)^2 &=\sum_{\|k\|_{\infty}\leq n} |\hat{h}(k)|^{2} \left|1-\prod_{\ell=1}^d \left(1-\frac{|k_\ell|}{n+1}\right)\right|^2 |\hat{\mu}(k)|^2 + \sum_{\|k\|_{\infty}> n} |\hat{h}(k)|^{2}|\hat{\mu}(k)|^2 \\ &\geq \sum_{\|k\|_{\infty}\leq n} |\hat{h}(k)|^{2} \left|\frac{\|k\|_1}{d(n+1)}\right|^2 |\hat{\mu}(k)|^2 + \sum_{\|k\|_{\infty}> n} |\hat{h}(k)|^{2}|\hat{\mu}(k)|^2 \\ &=\sum_{\|k\|_{\infty}\leq n} |\hat{h}(k)|^{2} \left|\frac{\|k\|_1}{d(n+1)}\right|^2 |\hat{\mu}(k)-\hat{\lambda}(k)|^2 + \sum_{\|k\|_{\infty}> n} |\hat{h}(k)|^{2}|\hat{\mu}_k-\hat{\lambda}(k)|^2 \\ &\geq \frac{1}{d^2(n+1)^2} \|h*(\mu-\lambda)\|_{L^2(\ensuremath{\mathbb{T}}^d)}^2 \end{align*} where $h(x)=\sum_{k\in\ensuremath{\mathbb{Z}}^d} \hat{h}(k) \eip{kx}$ and $\lambda$ denotes 
the Lebesgue measure with $\hat \lambda(0)=1$ and $\hat \lambda(k)=0$ for $k\in\ensuremath{\mathbb{Z}}^d\setminus\{0\}$. Our second ingredient is a Lipschitz estimate: If $f\in H$ with $\|f\|_{H}\leq 1$, then \begin{align*} |f(y)-f(y+x)|^2 &=\left|\sum_{k\in\ensuremath{\mathbb{Z}}^d} \hat{f}(k) \left(\eip{ky}-\eip{k(y+x)}\right)\right|^2\\ &\leq \|f\|_{H}^2 \sum_{k\in\ensuremath{\mathbb{Z}}^d} \left|\eip{ky}-\eip{k(y+x)}\right|^2 |\hat{h}(k)|^2\\ &\leq 2\left(K(0)-K(x)\right), \end{align*} where $K(x)=\sum_{k\in\ensuremath{\mathbb{Z}}^d} |\hat{h}(k)|^2 \eip{kx}=(h*h)(x)$ denotes the so-called reproducing kernel of the space $H$. If this kernel is $K(x_1,\hdots,x_d)=h^{[4]}(x_1)\cdot\hdots\cdot h^{[4]}(x_d)$ for some univariate function $h^{[4]}\in C^2(\ensuremath{\mathbb{T}})$, $\left(h^{[4]}\right)'(0)=0$, we find by a telescoping sum and direct calculation \begin{align*} K(0)-K(x)&= \prod_{\ell=1}^dh^{[4]}(0)-\prod_{\ell=1}^d h^{[4]}(x_\ell) \\ &\leq \sum_{\ell=1}^d \left(h^{[4]}(0)^\ell\prod_{k=1}^{d-\ell} h^{[4]}(x_k)-h^{[4]}(0)^{\ell-1}\prod_{k=1}^{d-\ell+1} h^{[4]}(x_k)\right)\\ &\leq \sum_{\ell=1}^d \|h^{[4]}\|^{d-1}_{\infty} \left[h^{[4]}(0)-h^{[4]}(x_\ell)\right] \\ &\leq \frac12 \|h^{[4]}\|^{d-1}_{\infty} \left\|\left(h^{[4]}\right)''\right\|_{\infty} |x|^2. \end{align*} To make a specific choice, let $a\in(0,\frac{1}{8})$ be some irrational number and set $h^{[2]}= \chi_{[-a,a]}*\chi_{[-a,a]}$ as the convolution of the indicator function on $[-a,a]$ with itself, $h^{[4]}=h^{[2]}*h^{[2]}$, and $h(x_1,\hdots,x_d)=h^{[2]}(x_1)\cdot\hdots\cdot h^{[2]}(x_d)$. Since the space of Lipschitz test functions is at least as large as the reproducing kernel Hilbert space, we derive \begin{align*} W_1(p_n,\mu) \geq \frac{1}{2}\cdot \left(\frac{\sqrt{3}}{4}\right)^{d-1} a^{1-\frac{3}{2}d} \mathcal{D}(p_n,\mu) \geq \frac{\left(\frac{\sqrt{3}}{4}\right)^{d-1} a^{1-\frac{3}{2}d}}{2d(n+1)} \|h*(\mu-\lambda)\|_{L^2(\ensuremath{\mathbb{T}}^d)}. 
\end{align*} Since $a$ is irrational, we can directly see by Parseval's theorem that $\|h*(\mu-\lambda)\|_{L^2(\ensuremath{\mathbb{T}}^d)}=0$ if and only if $\mu=\lambda$. For $\mu\neq\lambda$, we obtain the statement with a positive constant $c$ depending on the measure $\mu$, the constant $a$, and the spatial dimension $d$. \end{proof} \begin{remark} \label{Rem_Jackson} The gap between upper and lower bounds can be narrowed by choosing another convolution kernel, which, however, no longer allows for the representation in \cref{thm:Tn}. The Jackson kernel \begin{align*} J_{2m-2}(x)=\frac{3}{m(2m^2+1)} \frac{\sin^4(m\pi x)}{\sin^4(\pi x)}, \quad m\in\ensuremath{\mathbb{N}}, \end{align*} has degree $n=2m-2$ and satisfies \begin{align*} \int_{\ensuremath{\mathbb{T}}} J_n(x) |x|_1 \mathrm{d} x \leq \frac{6}{m(2m^2+1)} \left[\int_0^{1/2m} m^4 x \mathrm{d} x+ \int_{1/2m}^{\infty} \frac{1}{16x^3} \mathrm{d} x\right] \leq \frac{3m}{2(2m^2+1)} \le \frac{3}{2(n+2)}. \end{align*} Analogously to \cref{thm:W1p}, we get \begin{align} \label{eq:W1-jackson} W_1(J_n*\mu,\mu)\leq \frac{3}{2} \frac{d \cdot \TV{\mu}}{n+2}, \end{align} which is still roughly a factor of $6$ worse than the lower bound in the univariate case. A factor of $3$ is due to the above estimate, and a factor of $2$ seems to indicate that the Jackson kernel is not optimal. Moreover, the upper and lower bounds differ by a factor of $d$ in the multivariate case, which might be due to the norms used or to our proof techniques. We mention at this point that \begin{align*} W_1(K_n*\mu,\mu)\leq \TV{\mu} W_1(\delta_0,K_n),&&\text{and}&& W_1(\delta_0,K_n)\le \int_{\ensuremath{\mathbb{T}}^d} |K_n(x)| |x|_1 \mathrm{d} x \end{align*} for any kernel $K_n$ and measure $\mu\in\mathcal{M}$, with equality in the second inequality if $K_n$ is nonnegative. 
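The Jackson-kernel estimate above can be verified numerically in the univariate case. The following Python sketch uses our own quadrature-based check with illustrative parameters:

```python
import numpy as np

def jackson(x, m):
    # Jackson kernel J_{2m-2}; the value at x = 0 is 3m^3/(2m^2+1) by continuity
    x = np.asarray(x, dtype=float)
    s = np.sin(np.pi * x)
    out = np.full_like(x, 3 * m ** 3 / (2 * m ** 2 + 1))
    nz = np.abs(s) > 1e-12
    out[nz] = 3 / (m * (2 * m ** 2 + 1)) * (np.sin(m * np.pi * x[nz]) / s[nz]) ** 4
    return out

m = 10
n = 2 * m - 2                                   # degree of J_n
x = (np.arange(400001) + 0.5) / 400001 - 0.5    # midpoint grid on the torus
J = jackson(x, m)
assert abs(np.mean(J) - 1.0) < 1e-3             # unit mass of the kernel
assert np.mean(J * np.abs(x)) <= 1.5 / (n + 2)  # int J_n(x)|x| dx <= 3/(2(n+2))
```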
\end{remark} \subsection{Univariate case} In one variable, the question of uniqueness of the best approximation can be equivalently characterised by the uniqueness of the best approximation in $L^1(\ensuremath{\mathbb{T}})$ and thus allows for the following lemma. \begin{lemma}[Best approximation in the univariate case] For $d=1$, any absolutely continuous real measure admits a unique best approximation by a polynomial of degree $n\in\ensuremath{\mathbb{N}}$ with respect to the Wasserstein-1 distance.\label{Lem_W11D} \end{lemma} \begin{proof} Let $\mu,\nu\in\mathcal{M}_\ensuremath{\mathbb{R}}$ and $\mathcal{B}_1$ denote the Bernoulli spline of degree 1 from the proof of \cref{Thm_existence}, then we have \begin{align*} W_1(\nu,\mu)&=\sup_{f: \Lip(f)\leq 1} \left|\int_{\ensuremath{\mathbb{T}}} f(x)\left[ \mathrm{d} \nu(x) -\mathrm{d}\mu(x)\right]\right| \\ &=\sup_{f: \Lip(f)\leq 1} \left|\int_{\ensuremath{\mathbb{T}}}\int_{\ensuremath{\mathbb{T}}} f'(t)\mathcal{B}_1(x-t)\left[\mathrm{d} \nu(x) -\mathrm{d}\mu(x)\right]\mathrm{d} t\right| \\ &=\sup_{f: \Lip(f)\leq 1} \left|\int_{\ensuremath{\mathbb{T}}} f'(t)\left[(\mathcal{B}_1*\nu)(t)-(\mathcal{B}_1*\mu)(t)\right] \mathrm{d} t\right|. \end{align*} Since the integral over $f'$ is zero by the periodicity of $f$, any $c\in\ensuremath{\mathbb{R}}$ yields \begin{align*} \left|\int_{\ensuremath{\mathbb{T}}} f'(t)\left[(\mathcal{B}_1*\nu)(t)-(\mathcal{B}_1*\mu)(t)\right] \mathrm{d} t\right| &= \left|\int_{\ensuremath{\mathbb{T}}} f'(t)\left[(\mathcal{B}_1*\nu)(t)-(\mathcal{B}_1*\mu)(t)-c\right] \mathrm{d} t\right| \\ &\leq \inf_{c\in\ensuremath{\mathbb{R}}} \int_{\ensuremath{\mathbb{T}}} \left|(\mathcal{B}_1*\nu)(t)-(\mathcal{B}_1*\mu)(t)-c\right| \mathrm{d} t. 
\end{align*} On the other hand, choosing $c^*$ such that $\{t: (\mathcal{B}_1*\nu)(t)-(\mathcal{B}_1*\mu)(t)>c^*\}$ and $\{t: (\mathcal{B}_1*\nu)(t)-(\mathcal{B}_1*\mu)(t)<c^*\}$ have the same mass yields \begin{align*} \sup_{f: \Lip(f)\leq 1} \left|\int_{\ensuremath{\mathbb{T}}} f'(t)\left[(\mathcal{B}_1*\nu)(t)-(\mathcal{B}_1*\mu)(t)-c^*\right] \mathrm{d} t\right| \geq \int_{\ensuremath{\mathbb{T}}} \left|(\mathcal{B}_1*\nu)(t)-(\mathcal{B}_1*\mu)(t)-c^*\right| \mathrm{d} t \end{align*} by taking $f$ with $f'(t)=\pm 1$ depending on the sign of the term in brackets. Because of $\int_\ensuremath{\mathbb{T}} (\mathcal{B}_1*\nu-\mathcal{B}_1*\mu)(t)\mathrm{d} t =0$, this gives $c^*=0$ and thus \begin{align}\label{eq:W1L1} W_1(\nu,\mu)=\int_{\ensuremath{\mathbb{T}}} \left|(\mathcal{B}_1*\nu)(t)-(\mathcal{B}_1*\mu)(t)\right| \mathrm{d} t. \end{align} We proceed by computing explicitly \begin{align}\label{eq:B1mu} (\mathcal{B}_1*\mu)(t) = \frac{\mu([0,t))+\mu([0,t])}{2} - \mu([0,1))\left(t +\frac12\right)+\int_{[0,1)} x \mathrm{d}\mu(x). \end{align} If $\mu$ does not give mass to single points, we have that $\mathcal{B}_1*\mu$ is continuous and hence there exists a unique best $L^1$-approximation $\tilde{p}$ (e.g.\,cf.\,\cite[Thm.\,3.10.9]{DeVore93}) which defines $p^*$ uniquely by $\tilde{p}=\mathcal{B}_1*p^*$. \end{proof} \begin{example} \label{Ex:Unique_best} Uniqueness and non-uniqueness of $L^1$ approximation is discussed in some detail in \cite{Moskona95,Dryanov12} and we note the following: \begin{enumerate}[(i)] \item For $\mu=\frac{1}{2}\delta_0-\frac{1}{2}\delta_{1/2}+\lambda\in\mathcal{M}_{\ensuremath{\mathbb{R}}}$ where $\lambda$ is again the Lebesgue measure, one finds \begin{align*} (\mathcal{B}_1*\mu)(t)=\frac12\left(\mathcal{B}_1(t)-\mathcal{B}_1\Big(t-\frac12\Big)\right)= \begin{cases} 0,&\quad t=0,\\ \frac14, & \quad t\in\big(0,\frac12\big), \\ 0,& \quad t=\frac12, \\ -\frac14,&\quad t\in\big(\frac12,1\big). 
\end{cases} \end{align*} As proved in \cite[Thm.~5.1]{Moskona95}, this function does not have a unique best $L^1$-approximation for even $n$, and thus, by \cref{Lem_W11D}, $\mu$ does not admit a unique best polynomial approximation in this case. \item For $\mu=\delta_0$ one has $\mathcal{B}_1*\mu=\mathcal{B}_1$ and according to \cite[Lem.~2.2]{Moskona95} this function with only one jump has a unique best $L^1$-approximation given by the interpolation polynomial \begin{align*} \tilde{p}(x) =\sum_{j=1}^n \frac{1}{2n+2} \cot\left(\frac{j\pi }{2n+2}\right) \sin(2\pi jx). \end{align*} Deconvolving $\tilde{p}=\mathcal{B}_1*p^*$ gives \begin{align*} p^*(x) = 1 + \sum_{j=1}^n \frac{j\pi }{n+1} \cot\left(\frac{j\pi}{2n+2}\right) \cos(2\pi jx) \end{align*} as the unique best approximation to $\delta_0$. Since the error of the best $L^1$ approximation of $\mathcal{B}_1$ is known from a theorem by Favard \cite{Favard1937} (e.g.\,this is mentioned in \cite[p.\,213]{DeVore93}), we can compute \begin{align*} W_1(\delta_0,p^*)=\left\|\mathcal{B}_1-\mathcal{B}_1*p^*\right\|_{L^1(\ensuremath{\mathbb{T}})}=\frac{1}{4(n+1)} \end{align*} and this reveals that the estimates in the proof of \cref{Thm_existence} are sharp. \end{enumerate} \end{example} \cref{fig:BestApprox} and \cref{tab:W1p} summarize our findings on the approximation of $\delta_0$. The best approximation $p^*$ as well as the Dirichlet kernel $D_n(x)=\sin((2n+1)\pi x) /\sin(\pi x)$ are signed, with a small full width at half maximum but oscillations of both signs at the sides. The latter might be seen as an unwanted artifact in applications. The approximations given by the Fejér and the Jackson kernel are non-negative. 
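The explicit best approximation of $\delta_0$ from \cref{Ex:Unique_best} can be verified numerically via \eqref{eq:W1L1}: since $\mathcal{B}_1*p^*=\tilde{p}$, the distance $W_1(\delta_0,p^*)$ equals $\|\mathcal{B}_1-\tilde{p}\|_{L^1(\ensuremath{\mathbb{T}})}$. A short Python sketch (grid quadrature and all parameters are our own illustrative choices):

```python
import numpy as np

def B1(x):
    # 1-periodic Bernoulli spline of degree 1: B1(x) = 1/2 - x on (0,1)
    return 0.5 - np.mod(x, 1.0)

def p_tilde(x, n):
    # interpolation polynomial from the example, best L1-approximation of B1
    j = np.arange(1, n + 1)
    c = 1.0 / ((2 * n + 2) * np.tan(j * np.pi / (2 * n + 2)))
    return np.sin(2 * np.pi * np.outer(x, j)) @ c

def w1_delta0(n, m=400001):
    # W_1(delta_0, p*) = ||B1 - B1 * p*||_{L1} with B1 * p* = p_tilde
    x = (np.arange(m) + 0.5) / m      # midpoints avoid the jump of B1 at 0
    return np.mean(np.abs(B1(x) - p_tilde(x, n)))

for n in (1, 5, 20):
    assert abs(w1_delta0(n) - 1 / (4 * (n + 1))) < 1e-3
```

The computed $L^1$-errors match the Favard value $1/(4(n+1))$ up to quadrature accuracy.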
\begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{Best_approximation.eps} \caption{Interpolation of $\mathcal{B}_1$ (left) and comparison of different polynomial approximations of degree $n=10$ to $\delta_0$ (right).} \label{fig:BestApprox} \end{figure} For completeness, we note that the Dirichlet kernel is the Fourier partial sum of $\delta_0$ and allows for the estimate \begin{align*} W_1(\delta_0,D_n)\leq W_1(\delta_0,p^*) + W_1(p^*,D_n) \leq \left(1 + \|D_n\|_1\right) W_1(\delta_0,p^*) \leq \frac{\frac{4}{\pi^2} \log(n)+5}{4(n+1)} \end{align*} which relies on $W_1(p^*,D_n)= W_1(D_n*p^*,D_n*\delta_0)\leq \|D_n\|_1 W_1(\delta_0,p^*)$, the well-known bound on the Lebesgue constant \cite[Prop.\,1.2.3]{BuNe71}, and \cref{Ex:Unique_best} (ii). \begin{table}[h] \centering \begin{tabular}{|c|c|l|} \hline Trigonometric polynomial & Sign of polynomial & $W_1(\delta_0,K_n)$ \\\hline \rule{0pt}{15pt} Dirichlet $D_n$ & signed & $\leq \frac{\frac{1}{\pi^2}\log(n)+\frac{5}{4}}{n+1}$ (\cref{Ex:Unique_best} (ii)) \\[8pt]\hline \rule{0pt}{15pt} Fejér $F_n$ & nonnegative & $\leq \frac{1}{\pi^2}\frac{\log(n+1) + 3}{n}$ (\cref{thm:W1p}) \\[8pt]\hline \rule{0pt}{15pt} Jackson $J_n$, $n$ even & nonnegative & $\leq \frac{3}{2} \frac{1}{n+2}$ (\cref{Rem_Jackson})\\[8pt]\hline \rule{0pt}{15pt} Best approximation $p^*$ & signed & $= \frac{1}{4(n+1)}$ (\cref{Ex:Unique_best} (ii)) \\[8pt]\hline \end{tabular} \caption{Convergence rates of different trigonometric polynomials approximating $\delta_0$. } \label{tab:W1p} \end{table} \begin{remark} We close with some remarks specific to the univariate setting: \begin{enumerate}[(i)] \item We stress that \eqref{eq:W1L1} in the proof of \cref{Lem_W11D} allows one to compute the Wasserstein distance as an $L^1$-distance for real signed univariate measures. Similarly, this allows one to compute the so-called star discrepancy $\|\nu([0,\cdot))\|_{\infty}$ as suggested in \cite[eq. (2.1) and (2.2)]{Mh19}. 
However, note that \eqref{eq:B1mu} has an additional term such that $\nu=\frac{1}{2}\delta_0-\frac{1}{2}\delta_{1/2}$ with $\nu(\ensuremath{\mathbb{T}})=0$ gives \begin{align*} \|\nu([0,\cdot))\|_{\infty} = \frac12 \neq \frac14 = \|\mathcal{B}_1*\nu\|_{\infty} \end{align*} and thus \cite[eq. (2.1) and (2.2)]{Mh19} needs some adjustment. On the real line, we indeed have $\mu((-\infty,x])=(H*\mu)(x)$ for the Heaviside function \begin{align*} H(x)=\begin{cases} 1, & \quad x\geq 0, \\ 0, & \quad \text{else,} \end{cases} \end{align*} such that the Wasserstein distance can again be computed via \begin{align*} W_1(\mu,\nu)=\int_{\ensuremath{\mathbb{R}}} \left|\mu((-\infty,x])-\nu((-\infty,x])\right| \mathrm{d} x = \|H*(\mu-\nu)\|_{L^1(\ensuremath{\mathbb{R}})}, \end{align*} see e.g.~\cite[Prop.\,2.17]{Santambrogio_15}. \cref{Lem_W11D} might be considered as the periodic analogue of this result. \item One can relate our work to a main result in \cite{Mh19}. As \cref{Lem_W11D} reformulates the Wasserstein distance of two univariate measures in terms of the $L^1$-distance of their convolution with the Bernoulli spline, one can view this Bernoulli spline as a kernel of type $\beta=1$ following the notation of \cite{Mh19}. Thus, one can take $p=1,p'=\infty$ in \cite[Thm.\,4.1]{Mh19}, yielding that the Wasserstein distance between a measure $\mu$ and its trigonometric approximation is bounded from above by $c/n$. The latter agrees with our \cref{Rem_Jackson}, which additionally gives an explicit and small constant. \item The observation that the construction of $p^*$ for $\delta_0$ is possible via FFTs might suggest constructing near-best approximations to any measure $\mu$ by interpolating $\mathcal{B}_1*\mu$ with some $\tilde{p}$ and then obtaining the polynomial $p$ of near-best approximation, which satisfies $\tilde{p}=\mathcal{B}_1*p$, by dividing by the Fourier coefficients of the Bernoulli spline $\mathcal{B}_1$. 
A first problem would be that the limited knowledge of moments only allows interpolation of the partial Fourier sum $S_n(\mathcal{B}_1*\mu)$, which does not converge to $\mathcal{B}_1*\mu$ uniformly as $n\to\infty$ for discrete $\mu$. Secondly, the near-best approximation $p$ cannot be expected to be nonnegative for a nonnegative measure $\mu$, which is another drawback compared to convolution with nonnegative kernels like the Fejér or Jackson kernel. \item Finally, note that kernels $K_n$ with stronger localization and `smoother' Fourier coefficients, e.g.~higher powers of the Fejér kernel, make it possible to improve the rate beyond $n^{-1}$ if the measure has a smooth density $w$. This can be seen from integration by parts \begin{align*} W_1(K_n * w, w) &=\sup_{\Lip(f)\le 1} \left|\int_{\ensuremath{\mathbb{T}}} \left(K_n*f(y)-f(y)\right) w(y) \mathrm{d} y\right|\\ &=\sup_{\psi, \Lip(\psi')\le 1} \left|\int_{\ensuremath{\mathbb{T}}} \left(K_n*\psi(y)-\psi(y)\right) w'(y) \mathrm{d} y\right| \end{align*} and the above arguments. However, note that from a practical perspective this requires a-priori smoothness assumptions on the measure in order to choose a suitable kernel. \end{enumerate} \end{remark} \section{Interpolation}\label{sec:interp} Using the singular functions of the moment matrix as in \cref{eq:pnuj} and with $r=r(n)=\operatorname{rank} T_n$, we define noise- and signal-functions $p_{0,n},p_{1,n}:\ensuremath{\mathbb{T}}^d\to [0,1]$, \begin{align}\label{eq:p1} p_{1,n}(x)=\frac{1}{N} \sum_{j=1}^r \left|u^{(n)}_j(x)\right|^2,\qquad p_{0,n}(x)=\frac{1}{N} \sum_{j=r+1}^N \left|u^{(n)}_j(x)\right|^2, \end{align} respectively. In what follows, we suppose that $V\subseteq\ensuremath{\mathbb{T}}^d$ denotes the smallest set containing $\operatorname{supp}\mu$ that is the zero locus of some unknown trigonometric polynomial, i.\,e.\ $V \supseteq \operatorname{supp} \mu$ is the Zariski closure of the support. 
We show pointwise convergence \begin{align}\label{eq:chiV} p_{1,n}(x) \xrightarrow{n\to\infty} \chi_V(x)=\begin{cases} 1, & x\in V,\\ 0, & \text{otherwise},\end{cases} \end{align} to the characteristic function of this set. Clearly this cannot be uniform and our first result shows interpolation of the value $1$ for finite $n$ as well as a variational characterization. \begin{thm}\label{thm:p1characterization} Let $d,n\in\ensuremath{\mathbb{N}}$, $\mu\in\mathcal{M}$, and suppose $V(\ker T_n)=V\subseteq\ensuremath{\mathbb{T}}^d$, where $V(\ker T_n)$ is the set consisting of the common roots of all the polynomials in $\ker T_n$. Then $p_{0,n}(x)+p_{1,n}(x)=1$ for all $x\in\ensuremath{\mathbb{T}}^d$. In particular, we have \begin{equation}\label{eq:p1interpolating1} \begin{aligned} p_{1,n}(x)\begin{cases} =1, & \text{\ if $ x\in V$},\\ <1, & \text{otherwise}. \end{cases} \end{aligned} \end{equation} If $V\subsetneq\ensuremath{\mathbb{T}}^d$, the variational characterization \begin{align}\label{eq:var:p0} p_{0,n}(x)=\max_{p} \frac{|p(x)|^2}{N \|p\|_{L^2}^2} \end{align} holds, where the maximum is taken over all trigonometric polynomials $p\in\langle \eip{kx}: k\in [n]\rangle$, $p\ne 0$, such that $p(y)=0$ for all $y\in V$. \end{thm} \begin{proof} We have \begin{equation}\label{eq:p1p0sos} p_{1,n}(x) + p_{0,n}(x) = \frac{1}{N} \sum_{j=1}^N \abs{u_j^{(n)}(x)}^2 = \frac{1}{N} e_n(x)^* U_n U_n^* e_n(x) = \frac{1}{N} e_n(x)^* e_n(x) = 1, \end{equation} so in particular $p_{1,n}(x) \in [0,1]$. Since $V(\ker T_n) = V$ and $\ker T_n = \langle u_{r+1}^{(n)},\dots,u_N^{(n)}\rangle$, it follows that the polynomials $u_{r+1}^{(n)}(x),\dots,u_N^{(n)}(x)$ vanish on $V$, so $p_{1,n}(x) = 1$ for all $x\in V$. Conversely, if $x\in\ensuremath{\mathbb{T}}^d$ such that $p_{1,n}(x) = 1$, we claim that $x\in V$. 
Indeed, we have $1 - p_{1,n}(x) = \frac{1}{N}\sum_{j=r+1}^N \abs{u_j^{(n)}(x)}^2 = 0$, so it follows that $x$ lies in the vanishing set of $u_{r+1}^{(n)}(x),\dots,u_N^{(n)}(x)$, so $x\in V(\ker T_n) = V$. For the variational characterization, first note that the admissible polynomials belong to $\ker T_n \setminus\{0\}$, which is non-empty since $V(\ker T_n) = V \subsetneq \ensuremath{\mathbb{T}}^d$. Let $q \coloneqq U_0 U_0^{*} e_n(x) \in \ker T_n$, where $U_0$ denotes a matrix whose columns form an orthonormal basis of $\ker T_n$. For all $p \in \ker T_n$, we have \[ q^{*} p = e_n(x)^{*} U_0 U_0^{*} p = e_n(x)^{*} p = p(x). \] In particular, note that \begin{equation}\label{eq:p1variational:qq} \norm[2]{q}^2 = q^{*} q = q(x) = e_n(x)^{*} U_0 U_0^{*} e_n(x) = N p_{0,n}(x). \end{equation} Therefore, by the Cauchy--Schwarz inequality, it follows that \[ \abs{p(x)}^2 = \abs{q^{*} p}^2 \le \norm[2]{q}^2 \cdot \norm[2]{p}^2 = N p_{0,n}(x) \norm[2]{p}^2. \] Hence, we have \[ p_{0,n}(x) \ge \max_{p\in\ker T_n\setminus\{0\}} \frac{\abs{p(x)}^2}{N \norm[2]{p}^2} \ge \frac{\abs{q(x)}^2}{N \norm[2]{q}^2} = p_{0,n}(x), \] if $q\ne 0$. The first inequality also holds when $q=0$, in which case the result follows due to \eqref{eq:p1variational:qq}. \end{proof} Note that \eqref{eq:p1interpolating1} and \eqref{eq:p1p0sos} generalise and strengthen \cite[Propositions~5.2,\ 5.3]{ongie16} from algebraic curves on the two-dimensional torus to algebraic varieties of arbitrary dimension. \begin{remark} The hypothesis $V(\ker T_n) = V$ in \cref{thm:p1characterization} is satisfied for all sufficiently large $n$ if $\mu\in\mathcal{M}$ is finitely-supported, see e.g.~\cite{KuNaSt22}. Similarly, it holds for sufficiently large $n$ if $\mu\in\mathcal{M}_+$; see for example \cite[Theorem~2.10]{laurentrostalski2012} or \cite[Proposition~4.10]{wageringel2022:truncated}. 
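For a finitely-supported measure, the interpolation property \eqref{eq:p1interpolating1} is easy to check numerically. The following univariate Python sketch (nodes and weights are hypothetical) builds the moment matrix $T_n$, extracts the signal space by an SVD, and evaluates $p_{1,n}$:

```python
import numpy as np

n = 30
N = n + 1
k = np.arange(N)                            # frequencies [n] = {0,...,n}, d = 1
nodes = np.array([0.1, 0.35, 0.8])          # hypothetical support points on T
lam = np.array([0.5, 0.3, 0.2])             # hypothetical positive weights

A = np.exp(2j * np.pi * np.outer(k, nodes))       # Vandermonde matrix
T = A @ np.diag(lam) @ A.conj().T                 # moment matrix T_n
U, s, _ = np.linalg.svd(T)
r = int(np.sum(s > 1e-8 * s[0]))                  # numerical rank

def p1(x):
    e = np.exp(2j * np.pi * k * x)                # e_n(x)
    return float(np.sum(np.abs(U[:, :r].conj().T @ e) ** 2) / N)

assert r == len(nodes)
for t in nodes:
    assert abs(p1(t) - 1.0) < 1e-8                # p_{1,n} = 1 on supp(mu)
assert p1(0.55) < 0.999                           # and < 1 away from it
```

Here the hypothesis $V(\ker T_n)=V$ holds since the measure is finitely-supported and $n$ is large enough.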
If $\mu\in\mathcal{M}$ is not finitely-supported, then $V(\ker T_n) = V$ can fail to hold for any $n\in\ensuremath{\mathbb{N}}$ (cf.~\cite[Example~4.9]{wageringel2022:truncated}). In this case, it is possible to rephrase the hypothesis in terms of a non-square moment matrix of suitable size (cf.~\cite[Theorem~4.3]{wageringel2022:truncated}) to obtain a statement similar to \cref{thm:p1characterization}. \end{remark} \begin{example} For $\mu=\delta_0$, we have $p_{1,n}(x)=F_n(x)/(n+1)^d$ and the proofs of Theorems \ref{Thm_pointwise_conv} and \ref{thm:p1weak} also show that $p_{1,n}$ is close to a sum of normalized Fejér kernels for arbitrary discrete measures. Moreover, note that the above construction only yields the Zariski closure for general support sets, i.\,e., $p_{1,n}(x)=1$ for all $x\in\ensuremath{\mathbb{T}}^d$ and $n\in\ensuremath{\mathbb{N}}$ holds for every support set that is not contained in the zero locus of any non-trivial trigonometric polynomial. A singular continuous measure with support on the zero locus of a specific trigonometric polynomial is discussed as a numerical example in \cref{sec:num}. \end{example} \subsection{Zero-dimensional situation} If the measure is given by \begin{align*} \mu=\sum_{j=1}^r \lambda_j \delta_{x_j} \end{align*} with support $V=\operatorname{supp}\mu=\{x_1,\hdots,x_r\}\subset\ensuremath{\mathbb{T}}^d$ and complex weights $\Lambda=\operatorname{diag}(\lambda_1,\hdots,\lambda_r)$, then the support is zero-dimensional and the moment matrix allows for the Vandermonde factorisation \begin{align*} T_n=A_n^*\Lambda A_n,\qquad A_n=(\eip{kx_j})_{j=1,\hdots,r; k\in[n]}\in\ensuremath{\mathbb{C}}^{r\times N}, \end{align*} which will be instrumental. \begin{thm}[Pointwise convergence] \label{Thm_pointwise_conv} Let $\mu = \sum_{j=1}^r \lambda_j \delta_{x_j}$ be a discrete measure and let $x\in\ensuremath{\mathbb{T}}^d$ such that $x\ne x_j$ for all $1\le j\le r$. 
If $n+1>4d/\min_{j\ne\ell}|x_j-x_\ell|_{\infty}$, then \begin{align*} p_{1,n}(x) \le \frac{1}{(n+1)^2} \cdot \frac{|\lambda_{\max}|}{|\lambda_{\min}|} \cdot \frac{1}{3}\sum_{j=1}^r \frac{1}{ |x - x_j|_{\infty}^2}, \end{align*} in particular, this implies the pointwise convergence \eqref{eq:chiV}. Moreover, if the support of $\mu$ fulfils $n+1>2\sqrt{d}/\min_{j\ne\ell}|x_j-x_\ell|_{\infty}$ and $\min_j |x-x_j|_{\infty}\le \sqrt{d}/(n+1)$, then \begin{align*} p_{1,n}(x) \le 1- \frac{3^{d-1}(2d-1)}{2^d d^{2+d/2}} \cdot (n+1)^2 \cdot \min_j |x-x_j|_{\infty}^2. \end{align*} \end{thm} \begin{proof} Comparing \eqref{eq:p1} with \cref{thm:Tn} and using \eqref{eq:pn} yields \begin{align*} p_{1,n}(x) &\le \frac{1}{N} \sum_{j=1}^r \frac{\sigma_j}{\sigma_{\min}} \abs{u^{(n)}_j(x)}^2 = \frac{1}{\smin} \sum_{j=1}^r |\lambda_j| F_n(x-x_j). \end{align*} The final estimate follows from \begin{align*} F_n(x-x_j) \le \frac{(n+1)^{d-1}}{(n+1) \sin^2\lr{\pi |x - x_j|_{\infty}}} \le \frac{(n+1)^{d-2}}{4 |x - x_j|_{\infty}^2} \end{align*} and $\sigma_{\min}\ge 0.8 (n+1)^d |\lambda_{\min}|$, see \cite[Thm.~4.2]{KuNaSt22}. Regarding the second estimate, consider the Vandermonde matrix \begin{align*} \tilde{A}_{n,x}=\begin{bmatrix} e_n(x_1) & \cdots & e_n(x_r) & e_n(x)\end{bmatrix} \in\ensuremath{\mathbb{C}}^{N\times (r+1)} \end{align*} and note that its pseudo-inverse gives rise to the Lagrange polynomial \begin{align*} \ell_{r+1}(y)=e_{r+1}^*\tilde{A}_{n,x}^\dagger e_n(y) \end{align*} satisfying $\ell_{r+1}(x_j)=0$ for $j=1,\hdots,r$ and $\ell_{r+1}(x)=1$. 
We compute \begin{align*} \|\ell_{r+1}\|_{L^2}^2 &=\int_{\ensuremath{\mathbb{T}}^d} |e_{r+1}^*\tilde{A}_{n,x}^\dagger e_n(y)|^2 \d y \\ &=\int_{\ensuremath{\mathbb{T}}^d} |\langle \tilde{A}_{n,x}^{\dagger *} e_{r+1}, e_n(y)\rangle|^2 \d y =\|\tilde{A}_{n,x}^{\dagger *} e_{r+1}\|_2^2\le \sigma_{\min}(\tilde{A}_{n,x})^{-2} \end{align*} and use \cref{thm:p1characterization} to bound \begin{align*} 1-p_{1,n}(x)=p_{0,n}(x)=\max_{p} \frac{|p(x)|^2}{N \|p\|_{L^2}^2} \ge \frac{|\ell_{r+1}(x)|^2}{N \|\ell_{r+1}\|_{L^2}^2} \geq \frac{\sigma_{\min}(\tilde{A}_{n,x})^{2}}{N}. \end{align*} The assertion follows from known estimates on the smallest singular value for the Vandermonde matrix with pairwise clustering nodes, see \cite[Corollary\,3.20]{HoKu21}. \end{proof} \begin{remark} \label{remark_p1_upper_bound} In fact, the upper bound on $p_{1,n}(x)$ in \cref{Thm_pointwise_conv} exhibits the correct orders in $n$ and $\min_j |x-x_j|_{\infty}^2$. First note that $1-p_{1,n}$ and all its partial derivatives of order 1 vanish on $x_1,\hdots,x_r$. For fixed $x\in\ensuremath{\mathbb{T}}^d$, the Taylor expansion in $x_0=\argmin_j |x-x_j|_{\infty}$ thus gives \begin{align*} 1-p_{1,n}(x)&= \frac{1}{2} (x-x_0)^\top H_x(\xi) (x-x_0),\qquad H_x(\xi)=\left(\frac{-\partial^2 p_{1,n}}{\partial x_r\partial x_s}\left(\xi\right)\right)_{1\le r,s\le d}\\ &\leq \|H_x(\xi)\|_{\mathrm{F}} \cdot \frac{d}{2} \cdot \min_j |x-x_j|_{\infty}^2 \\ &\leq 2\pi^2d^2n^2\min_j |x-x_j|_{\infty}^2, \end{align*} where the last inequality uses an entrywise Bernstein inequality and $\|p_{1,n}\|_{\infty}=1$. \end{remark} \begin{figure}[htb] \includegraphics[width=0.47\columnwidth]{p1_cross_section.eps} \includegraphics[width=0.52\columnwidth]{Bounds_on_p1.eps} \caption{Summary of the bounds on $p_{1,n}$ from \cref{Thm_pointwise_conv} and \cref{remark_p1_upper_bound} for $d=2$, $n=20$, and a discrete measure $\mu$ supported on four points. 
The polynomial $p_{1,20}$ was evaluated on a grid in $\ensuremath{\mathbb{T}}^2$ and interpolated on the magenta cross section (left), while the bounds on $p_{1,20}$ on this cross section are displayed (right). We see that specifically the bound $1-\sigma_{\min}(\tilde{A}_{n,x})^2/N$ from the proof of \cref{Thm_pointwise_conv} reproduces the behaviour of $p_{1,n}$. The constant upper bound on $p_{1,n}$ away from the support of $\mu$ can be derived by using estimates for $\sigma_{\min}(\tilde{A}_{n,x})$ in the case of separated nodes.} \label{fig:Bounds_on_p1} \end{figure} \begin{lemma}[Convergence of singular values]\label{svalsconvergence} Let $\mu=\sum_{j=1}^r \lambda_j \delta_{x_j}$ be a discrete complex measure whose weights are ordered non-increasingly with respect to their absolute value. Assume that $(n+1)\min_{j\ne\ell}|x_j-x_{\ell}|_{\infty} > d$, then the singular values $\sigma_j$ of the moment matrix $T_n$ fulfil \begin{align*} \left||\lambda_j|-\frac{\sigma_j}{N}\right| \leq \frac{1}{n+1}\cdot \frac{\abs{\lambda_{1}} \lr{1+\sqrt{\textnormal{e}}} r}{2\min_{j\ne\ell}|x_j-x_{\ell}|_{\infty}},\qquad j=1,\hdots,r. \end{align*} \end{lemma} \begin{proof} With the polar decomposition $\frac{1}{\sqrt{N}} A_n^{*} = P H$, where $P\in\ensuremath{\mathbb{C}}^{N\times r}$ has orthonormal columns and $H\in\ensuremath{\mathbb{C}}^{r\times r}$ is positive-definite, we have that $\abs{\lambda_1}\ge\cdots\ge\abs{\lambda_r}$ are the singular values of the matrix $P \Lambda P^*$. 
Therefore, for the singular values of $T_n = A_n^{*} \Lambda A_n$, we obtain \begin{align*} \max_{1\le j\le r} \abs{\frac{\sigma_j}{N} - \abs{\lambda_j}} &\le \norm[2]{\frac{1}{N} T_n - P \Lambda P^*} = \norm[2]{H \Lambda H^* - \Lambda}\\ &\le \norm[2]{H \Lambda \lr{H - \idmat{r}}} + \norm[2]{\lr{H - \idmat{r}}\Lambda}\\ &\le \abs{\lambda_1} \lr{\norm[2]{H} + 1} \norm[2]{H - \idmat{r}}\\ &\le \abs{\lambda_1} \lr{\norm[2]{H} + 1} \norm[2]{(H+\idmat{r})^{-1}} \norm[2]{H^2 - \idmat{r}}\\ &\le \abs{\lambda_1} \frac{\frac{1}{\sqrt{N}} \smax(A_n) + 1}{\frac{1}{\sqrt{N}} \smin(A_n) + 1} \normf{\frac{1}{N} A_n A_n^* - \idmat{r}}, \end{align*} where the first inequality is due to \cite[Theorem~2.2.8]{bjorck2015}. Each entry of the matrix $\frac{1}{N} A_n A_n^* - \idmat{r}$ is a modified Dirichlet kernel and can be bounded uniformly by \begin{align*} \normf{\frac{1}{N}A_nA_n^* - \idmat{r}} &= \frac{1}{N} \left(\sum_{j=1}^r\sum_{l\neq j} \left|\sum_{k\in [n]}\eip{k(x_l-x_j)}\right|^2\right)^{1/2} \leq \frac{r}{N} \cdot \frac{(n+1)^{d-1}}{2 \min_{j\ne\ell}|x_j-x_{\ell}|_{\infty}}. \end{align*} Moreover, since $(n+1)\min_{j\ne\ell}|x_j-x_{\ell}|_{\infty}> d$, it follows from \cite[Lemma~2.1]{KuNaSt22} that \[ \frac{1}{\sqrt{N}} \smax(A_n) \le \sqrt{\lr{1 + \frac{1}{d}}^d} \le \sqrt{\textnormal{e}}. \qedhere \] \end{proof} \begin{thm}\label{thm:p1weak} For the discrete measure $\mu$ from above and normalizing differently, we have \begin{equation*} \frac{p_{1,n}}{\|p_{1,n}\|_{L^1}} \rightharpoonup \tilde\mu=\frac{1}{r}\sum_{j=1}^r \delta_{x_j} \end{equation*} as $n\to\infty$. \end{thm} \begin{proof} First note that $\|p_{1,n}\|_{L^1}=r/N$. 
We define $\tilde p_n=F_n*\tilde\mu$ and observe that for any continuous function $f$ on $\ensuremath{\mathbb{T}}^d$ we have \begin{align*} &\mathrel{\hphantom{\le}}\abs{\int_{\ensuremath{\mathbb{T}}^d} \frac{p_{1,n}(x)}{\|p_{1,n}\|_{L^1}} f(x) \d x - \frac{1}{r}\sum_{j=1}^r f(x_j)}\\ &\le \abs{\int_{\ensuremath{\mathbb{T}}^d} \lr{\frac{p_{1,n}(x)}{\|p_{1,n}\|_{L^1}} - \tilde p_n(x)} f(x) \d x} + \abs{\int_{\ensuremath{\mathbb{T}}^d} \tilde p_n(x)f(x)\d x - \frac{1}{r}\sum_{j=1}^r f(x_j)} \\ &\le \normlonetm{\frac{N}{r}p_{1,n} - \tilde p_n} \normlinftm{f} + \abs{\int_{\ensuremath{\mathbb{T}}^d} f\d(F_n*\tilde\mu) - \int_{\ensuremath{\mathbb{T}}^d} f\d\tilde\mu}, \end{align*} so, by \cref{thm:W1p}, it is enough to show that $\normlonetm{\frac{N}{r} p_{1,n} - \tilde p_n}$ converges to zero for $n\to\infty$. If $n$ is sufficiently large, then by \eqref{eq:pnuj} we can write $\tilde p_n(x) = \frac{1}{N} e_n(x)^{*} \tilde U \tilde\Sigma \tilde U^{*} e_n(x)$ where $\tilde\Sigma\in\ensuremath{\mathbb{C}}^{r\times r}$ denotes the diagonal matrix consisting of non-zero singular values and $\tilde U \in \ensuremath{\mathbb{C}}^{N\times r}$ denotes the corresponding singular vector matrix of the moment matrix of $\tilde\mu$. As $p_{1,n}$ only depends on the signal space of the moment matrix $T_n$ of $\mu$, which agrees with the signal space of the moment matrix of $\tilde\mu$, it follows by \eqref{eq:p1} that $p_{1,n}(x) = \frac{1}{N} e_n(x)^{*} \tilde U \tilde U^{*} e_n(x)$ and thus \[ \abs{\frac{N}{r} p_{1,n}(x) - \tilde p_n(x)} = \abs{e_n(x)^{*} \tilde U \lr{\frac{\idmat{r}}{r} - \frac{\tilde\Sigma}{N}} \tilde U^{*} e_n(x)} \le \norm[2]{\tilde U^{*} e_n(x)}^2 \norm[2]{\frac{1}{r}\idmat{r} - \frac{1}{N}\tilde\Sigma}. \] Since $\int_{\ensuremath{\mathbb{T}}^d} \norm[2]{\tilde U^{*} e_n(x)}^2 \d x = N \|p_{1,n}\|_{L^1} = r$ is constant, the result follows from \cref{svalsconvergence}. 
\end{proof} \subsection{Positive-dimensional situation} For a measure $\mu$ whose support is an algebraic variety, we derive a pointwise convergence rate $p_{1,n}(x)=\mathcal{O}\left(n^{-1}\right)$ outside of the variety in \cref{thm:pointw_pos} and this proves \eqref{eq:chiV}. It is not clear whether this is already optimal, as we found $\mathcal{O}\left(n^{-2}\right)$ as an approximation rate in the case of a discrete measure. \begin{thm} \label{thm:pointw_pos} Let $y\in \ensuremath{\mathbb{T}}^d$ and let $g \in\langle \eip{\scalarprod{k}{x}} \mid k\in [m]\rangle$ be a trigonometric polynomial of max-degree $m$ such that $g(y)\ne 0$ and $g$ vanishes on $\operatorname{supp}\mu$. Then \[ p_{1,n+m}(y) \le 1 - \frac{(n+1)^d}{(n+m +1)^d} \cdot \frac{\abs{g\lr{y}}^2}{\lr{F_n * \abs{g}^2}(y)} \le \frac{\|g\|_{L^2}^2}{|g(y)|^2} \frac{m(4m+2)^d}{n+1} + \frac{d m}{\lr{n+m + 1}}, \] for $n\in\ensuremath{\mathbb{N}}$, $n\geq m$. \end{thm} \begin{proof} Set $N_n = (n+1)^d$ for $n\in\ensuremath{\mathbb{N}}$ and define the trigonometric polynomial $p(x) = e_{n,y}(x) g(x)$ of max-degree $n+m$, where $e_{n,y}(x) \coloneqq e_n(x)^{*} e_n(y)$. Furthermore, we define $f(x)\coloneqq \abs{g(x)}^2$. Then \[ \abs{p(x)}^2 = N_n F_n(x-y) f(x), \] for all $x\in\ensuremath{\mathbb{T}}^d$. On the other hand, \[ \normltm{p}^2 = N_n \lr{F_n * f}(y). \] Thus, by \eqref{eq:var:p0}, we obtain \begin{equation}\label{eq:p1bound} 1 - p_{1,n+m}(y) \ge \frac{\abs{p(y)}^2}{N_{n+m} \normltm{p}^2} = \frac{N_n}{N_{n+m}} \cdot \frac{f(y)}{\lr{F_n * f}(y)} \ge \lr{1 - \frac{d m}{n+m + 1}} \frac{1}{1 + h_n}, \end{equation} where we define $h_n \coloneqq {\normlinftm{F_n * f - f}}/{f(y)}$, which proves the first statement. 
For the upper bound, we compute \begin{align*} \abs{(F_n * f - f)(x)} &= \abs{\sum_{k\in\{-m,\dots,m\}^d} \sum_{\genfrac{}{}{0pt}{}{s\in\{0,1\}^d}{1\leq |s|\leq d}} \frac{(-1)^{|s|} |k^s|}{(n+1)^{|s|}} \hat{f}_k \eip{kx}} \\ &= \abs{\sum_{k\in\{-m,\dots,m\}^d} \sum_{\genfrac{}{}{0pt}{}{s\in\{0,1\}^d}{1\leq |s|\leq d}} \frac{(-1)^{|s|}|k^s|}{(n+1)^{|s|}} \int_{\ensuremath{\mathbb{T}}^d} \abs{g(z)}^2 \eip{k(x-z)} \mathrm{d} z} \\ &\le \sum_{\genfrac{}{}{0pt}{}{s\in\{0,1\}^d}{1\leq |s|\leq d}} \left(\frac{m}{n+1}\right)^{|s|} (2m+1)^d \|g\|_{L^2}^2 \\ &\leq \|g\|_{L^2}^2 \frac{m(4m+2)^d}{n+1} \end{align*} by using that $f=|g|^2$ is a trigonometric polynomial of degree $m$. Then it follows from \eqref{eq:p1bound} that \[ p_{1,n+m}(y) \le h_n + \frac{d m}{\lr{n+m + 1}} \le \frac{\|g\|_{L^2}^2}{|g(y)|^2} \frac{m(4m+2)^d}{n+1} + \frac{d m}{\lr{n+m + 1}}, \] since we can apply $(1+h_n)^{-1}\geq 1-h_n$. \end{proof} \section{Numerical examples}\label{sec:num} We illustrate in this section the asymptotic behaviour of $p_n$ and $p_{1,n}$ for several types of singular measures, with respect to the Wasserstein-1 distance. We compute the distance using a semidiscrete optimal transport algorithm, described below. Our experiments focus on three examples on $\ensuremath{\mathbb{T}}^2$: a discrete measure $\mu_{\mathrm{d}}$ supported on $15$ points, with (nonnegative) random amplitudes, a uniform measure $\mu_{\mathrm{cu}}$ supported on the trigonometric algebraic curve \begin{equation}\label{eq:implicit-curve} \cos(2\pi x)\cos(2\pi y) + \cos(2\pi x) + \cos(2\pi y) = \frac{1}{4}, \end{equation} and a uniform measure $\mu_{\mathrm{ci}}$ supported on the circle centered in $c_0=(\frac{1}{2},\frac{1}{2})$ with radius $r_0 = 0.3$. The moments of $\mu_{\mathrm{cu}}$ are computed numerically up to machine precision using Arb \cite{Johansson2017arb} with a parametrization of the implicit curve \eqref{eq:implicit-curve}. 
It follows from \eqref{eq:circlemoments} that the trigonometric moments of the measure $\mu_{\mathrm{ci}}$ are given by \begin{equation*} \widehat{\mu_{\mathrm{ci}}}(k) = \eim{k c_0} J_0(2\pi r_0 \norm{k}_2). \end{equation*} The polynomials $p_n$, $J_n*\mu$, and $p_{1,n}$ can be evaluated efficiently via the fast Fourier transform over a regular grid in $\ensuremath{\mathbb{T}}^2$. For the polynomial $p_{1,n}$, the singular value decomposition of the moment matrix $T_n$ can be computed at reduced cost by exploiting that $T_n$ has Toeplitz structure and resorting only to matrix-vector multiplications. To compute transport distances to the measure $\mu\in\{\mu_{\mathrm{cu}},\mu_{\mathrm{ci}}\}$, let the curve $C=\operatorname{supp}\mu\subset \ensuremath{\mathbb{T}}^d$ denote its support with arc-length $L$. Now let $s\in\ensuremath{\mathbb{N}}$, take a partition $C=\bigcup_{\ell=1}^s C_\ell$ into path-connected curves with measure $\mu(C_\ell)=s^{-1}$ and arc-length $L_\ell$, and any $x_\ell\in C_\ell$, then \begin{align*} W_1\left(\frac{1}{s}\sum_{\ell=1}^s \delta_{x_\ell}, \mu \right) &=\sup_{f: \Lip(f)\leq 1} \left|\sum_\ell \int_{C_\ell} \left[f(x)-f(x_\ell)\right] \mathrm{d} \mu(x) \right|\\ &\leq \sum_{\ell=1}^s \int_{C_\ell} |x-x_\ell|_1 \mathrm{d} \mu(x) \leq \sum_{\ell=1}^s \sqrt{d} L_\ell \mu(C_\ell) =\frac{\sqrt{d}\cdot L}{s}. \end{align*} We denote the resulting discrete measures by $\mu_{\mathrm{cu}}^s$ and $\mu_{\mathrm{ci}}^s$, respectively (see Figure~\ref{fig:singular}). In our tests, we use $s = 3000$ samples, which offers a satisfactory tradeoff between computational time and accuracy for our range of degrees $n$. Indeed, the computational cost of evaluating the objective \eqref{eq:semidiscrete-objective} or its gradient grows linearly in $s$, while for degrees up to $n=250$, sampling beyond 3000 points has no effect on the output of our algorithm for computing $W_1(p_n,\mu^s)$, see Figure~\ref{fig:singular}. 
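The closed-form moments of $\mu_{\mathrm{ci}}$ and the FFT-based evaluation of $F_n*\mu_{\mathrm{ci}}$ on a regular grid can be sketched in a few lines. The following is our own Python illustration, not the implementation used in the paper; the degree $n=32$ and grid size $128$ are arbitrary small choices, and `scipy.special.j0` supplies the Bessel function:

```python
import numpy as np
from scipy.special import j0  # Bessel function of the first kind, order zero

def circle_moments(n, c0=(0.5, 0.5), r0=0.3):
    """Closed-form moments mu_hat(k) = e^{-2 pi i <k,c0>} J_0(2 pi r0 |k|_2)
    for k in {-n,...,n}^2, returned as an array indexed by [k1+n, k2+n]."""
    k1, k2 = np.meshgrid(np.arange(-n, n + 1), np.arange(-n, n + 1), indexing="ij")
    phase = np.exp(-2j * np.pi * (k1 * c0[0] + k2 * c0[1]))
    return phase * j0(2 * np.pi * r0 * np.sqrt(k1**2 + k2**2))

def fejer_smoothed(moments, n, G):
    """Evaluate (F_n * mu)(x) on the regular grid x in {0, 1/G, ...}^2 by one inverse FFT."""
    k = np.arange(-n, n + 1)
    damp = np.outer(1 - np.abs(k) / (n + 1), 1 - np.abs(k) / (n + 1))  # Fejér weights
    coeff = np.zeros((G, G), dtype=complex)
    k1, k2 = np.meshgrid(k, k, indexing="ij")
    coeff[k1 % G, k2 % G] = damp * moments    # wrap negative frequencies mod G
    return (G**2 * np.fft.ifft2(coeff)).real  # imaginary part is rounding noise

n, G = 32, 128  # arbitrary degree and grid size; G > 2n avoids aliasing
values = fejer_smoothed(circle_moments(n), n, G)
```

Since the Fejér kernel is nonnegative and $\widehat{\mu}(0)=1$, the resulting grid values form a nonnegative density with mean one, which serves as a quick sanity check.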
\begin{figure} \captionsetup[subfloat]{labelformat=empty} \centering \subfloat[Algebraic curve]{\includegraphics[width=0.3\textwidth]{example-curve.eps}} \hspace{1mm} \subfloat[Circle]{\includegraphics[width=0.3\textwidth]{example-circle.eps}} \subfloat[Effect of sampling]{ \includegraphics[width=0.3\textwidth,height=0.306\textwidth]{sampling-effect.eps} } \caption{The two example measures $\mu_{\mathrm{cu}}^s$ (left) and $\mu_{\mathrm{ci}}^s$ (middle) used in our numerical tests. In this display the two continuous measures are discretized using $s=60$ samples. The amplitudes of the spikes in both measures are taken equal, and normalized. The last plot shows the Wasserstein distance $W_1(F_n * \mu_{\mathrm{cu}},\mu_{\mathrm{cu}}^s)$ for degrees $n=1,\ldots,250$ and several values of $s$.\label{fig:singular}} \end{figure} Now let $\mu = \sum_{j=1}^s \lambda_j\delta_{x_j}$ refer to either $\mu_{\mathrm{d}}$, $\mu_{\mathrm{cu}}^s$ or $\mu_{\mathrm{ci}}^s$. The semidiscrete optimal transport between a measure with density $p$ and the discrete measure $\mu$ may be computed by solving the finite-dimensional optimization problem \begin{equation} \label{eq:semidiscrete-objective} \max_{w\in\ensuremath{\mathbb{R}}_+^s} f(w),\qquad f(w)=\sum_{j=1}^s \lambda_j w_j + \sum_{j=1}^s \int_{\Omega_j(w)} (\abs{x_j-y}-w_j)p(y)\mathrm{d} y \end{equation} where the Laguerre cells associated with the weight vector $w$ are given by \begin{equation*} \Omega_j(w) = \setcond{y\in\ensuremath{\mathbb{T}}^d}{\abs{x_j-y} - w_j \leq \abs{x_i-y}-w_i,\;i=1,\hdots,s}, \end{equation*} see e.g.~\cite{Peyre_19}. In our implementation, the density $p$ and the Laguerre cells are discretized over a $502\times 502$ grid. We use a BFGS algorithm to perform the maximization, using the Matlab implementation \cite{Schmidt05}; we stop the iterations when the change in the value of the objective drops below $10^{-9}$, or when the infinity norm $\norm{\nabla f}_\infty$ drops below $10^{-5}$.
Note that this last condition has a geometrical interpretation since the $j$-th component of $\nabla f$ corresponds to the difference between the measure of the Laguerre cell $\Omega_j(w)$ and the amplitude $\lambda_j$. We set the maximum number of iterations to 100. \begin{figure}[htb] \centering \captionsetup[subfloat]{labelformat=empty} \subfloat[Discrete]{\includegraphics[width=0.33\textwidth]{W1-rates-discrete.eps}} \subfloat[Algebraic curve]{\includegraphics[width=0.33\textwidth]{W1-rates-curve.eps}} \subfloat[Circle]{\includegraphics[width=0.33\textwidth]{W1-rates-circle.eps}} \caption{Asymptotics of $p_n$ and $p_{1,n}$. For $p_{1,n}$, the distance is computed with respect to the unweighted measure $\tilde{\mu}^s$, that is $\tilde{\mu}^s = \frac{1}{s}\sum_{i=1}^s\delta_{x_i}$ where $\{x_1,\dots,x_s\}$ is the support of $\mu$.} \label{fig:my_label} \end{figure} In the discrete case, our numerical results show that the Wasserstein distance $W_1(p_n,\mu^s)$ decreases at a rate close to the worst-case bound derived in Theorem~\ref{thm:W1p}. This is also the case for $W_1(p_{1,n},\tilde{\mu}^s)$, which is consistent with the bound given in the proof of Theorem~\ref{thm:p1weak}. In the positive-dimensional cases, one would need to compute the Wasserstein distances for degrees larger than $n=250$ to be able to reliably estimate a rate, but this would require better optimized algorithms, in the spirit of, for instance, \cite{lakshmanan22}; this goes beyond the scope of this paper. Still, our preliminary results seem to indicate that the rates for $F_n * \mu$ and $J_n * \mu$ in the positive-dimensional situation are similar to the ones for discrete measures, but with better constants. For $p_{1,n}$, on the other hand, although the theory does not predict weak convergence in that case, our results indicate that, if it were to occur, the rate would be worse than in the discrete case.
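For concreteness, the semidiscrete objective \eqref{eq:semidiscrete-objective} and its gradient can be evaluated on a grid as in the following sketch. This is our own Python illustration, not the Matlab code used in the experiments; the torus-periodic Euclidean cost and the Riemann-sum discretization are our assumptions:

```python
import numpy as np

def semidiscrete_f_grad(w, xs, lam, density, G):
    """Objective f(w) of the semidiscrete dual and its gradient, with the
    integral over T^2 replaced by a Riemann sum on a G x G grid."""
    a = (np.arange(G) + 0.5) / G
    Y1, Y2 = np.meshgrid(a, a, indexing="ij")            # grid points y in T^2
    p = density(Y1, Y2)
    p = p / p.mean()                                     # normalise: Riemann sum of p is 1
    # torus-periodic Euclidean cost |x_j - y| (our assumption for the metric)
    d1 = np.abs(xs[:, 0, None, None] - Y1); d1 = np.minimum(d1, 1 - d1)
    d2 = np.abs(xs[:, 1, None, None] - Y2); d2 = np.minimum(d2, 1 - d2)
    cost = np.sqrt(d1**2 + d2**2) - w[:, None, None]     # c(x_j, y) - w_j
    j_star = np.argmin(cost, axis=0)                     # Laguerre cell of each grid point
    cell_mass = np.bincount(j_star.ravel(), weights=p.ravel(), minlength=len(w)) / G**2
    f = lam @ w + (cost.min(axis=0) * p).mean()
    grad = lam - cell_mass                               # lambda_j - mass of Omega_j(w)
    return f, grad
```

The gradient component $\lambda_j$ minus the mass of $\Omega_j(w)$ matches the geometric interpretation above; a BFGS routine can be applied directly to $-f$.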
\section{Summary and outlook} We provided tight bounds on the pointwise approximation error as well as with respect to the Wasserstein-1 distance when approximating arbitrary measures by trigonometric polynomials. The approximation by the convolution with the Fejér kernel is both simple and, up to a logarithmic factor, best possible in the worst case. In contrast and beyond the scope of this paper, a computation similar to \cref{thm:W1p} and \cref{Rem_Jackson} shows that stronger localised trigonometric kernels seem necessary when considering the Wasserstein-2 distance. Future work might also consider the truncation of the singular value decomposition in \cref{sec:interp} if the support of the measure is only approximated by the zero set of an unknown trigonometric polynomial or the available trigonometric moments are disturbed by noise. \setlength{\emergencystretch}{1em} \setcounter{biburlnumpenalty}{5000} \printbibliography{} \end{document}
\section{introduction} There are many possible platforms to realize quantum computation~\citep{steane1998quantum,preskill2018quantum,bennett2000quantum}, e.g., cold atoms~\citep{weiss2017quantum,negretti2011quantum,briegel2000quantum,garcia2005quantum}, trapped ions~\citep{garcia2005quantum,haffner2008quantum,benhelm2008towards,bruzewicz2019trapped,cirac2000scalable,lanyon2013measurement,garcia2003speed,pachos2002quantum,feng2002quantum}, and superconducting quantum circuits~\citep{jeffrey2014fast,gambetta2017building,brecht2016multilayer,devoret2004superconducting,you2006superconducting,yan2019strongly,song201710,neill2018blueprint,huang2020superconducting,krantz2019quantum,you2003quantum,osborn2007frequency,pechal2018superconducting}. On these platforms, quantum state transfer (QST) in a controllable way~\citep{bennett2000quantum,bouwmeester2000physics} is one of the crucial requirements. Though long-distance quantum communication has been widely achieved in optical fibers and free space~\citep{zhu2017experimental,liu2019energy,huo2018deterministic,chen2021integrated,liao2018satellite}, it is still very important to find a promising way for transferring quantum states through solid-state devices or condensed matter~\citep{bose2003quantum,christandl2004perfect,balachandran2008adiabatic,song2005quantum,yung2006quantum,yung2005perfect,wu2009perfect,wu2009universal}. A number of QST protocols have been proposed for different solid-state media~\citep{bienfait2019phonon,sete2015high,vermersch2017quantum,he2017quantum,maring2017photonic,northup2014quantum,cirac1997quantum,shi2005quantum,zwick2011robustness,kandel2021adiabatic}.
In recent years, QST via a spin chain has attracted extensive attention~\citep{bose2003quantum,christandl2004perfect,balachandran2008adiabatic} and many technologies have already been developed~\citep{shi2005quantum,zwick2011robustness,kandel2021adiabatic,PhysRevLett.106.040505,bose2007quantum,mei2018robust,longhi2019topological,d2020fast,wang2018dynamical}. Perfect QST can be realized through a well-designed spin chain with invariable couplings~\citep{Zhang2005Simulation,Petrosyan2010State,Gualdi2008Perfect}. Meanwhile, it can also be realized by precisely modulating the spin-spin couplings~\citep{Wang2020Almost,lyakhov2006use,zwick2014optimized,benjamin2001quantum}. For example, quantum states can be transferred by simply applying a sequence of SWAP operations implemented by $\pi$ pulses between the pairs of nearest neighboring sites~\citep{Petrosyan2010State}, which needs only the spin-spin couplings to be switched on and off periodically~\citep{benjamin2003quantum}. Nevertheless, these known methods require the accurate design of the system Hamiltonian, and thus are usually less robust against disorders and imperfections in large-scale implementations. To overcome this issue, adiabatic QST protocols have been widely studied~\citep{balachandran2008adiabatic,tan2020high,demirplak2003adiabatic,gong2004complete,gong2004adiabatic,eckert2007efficient,ho2012quantized,tian2012adiabatic,agundez2017superadiabatic,sandberg2016efficient}, as the QST exploiting the adiabatic theorem~\citep{messiah1962quantum,berry1984quantal} is independent of the protocol operation details so long as the evolution of the system is slow enough. Our QST method here is based on the spectral and dynamical features of a topological qubit chain. For topological insulators, robust conducting edge states are guaranteed by the nontrivial topology of bulk bands~\citep{hasan2010colloquium,qi2011topological,asboth2016short,luo2019advanced}.
These edge states are insensitive to smooth changes in the system parameters unless a topological phase transition occurs~\citep{hasan2010colloquium,qi2011topological}. Such robustness based on topological protection provides topological quantum systems with great potential for quantum information and quantum computing~\citep{nayak2008non,freedman2003topological,stern2013topological,bomantara2018simulation,bomantara2018quantum,bomantara2020measurement}. The Su-Schrieffer-Heeger (SSH) chain~\citep{su1979solitons} is the simplest model of the topological insulators and can be realized by a qubit chain with staggered couplings when the qubit chain is restricted to the subspace of single excitation. Building on the concepts of topological edge states and adiabatic QST via a qubit chain, protocols for efficient QST~\citep{yao2013topologically,mei2018robust,palaiodimopoulos2021fast,longhi2019topological,d2020fast,wang2018dynamical} have also been proposed, with more robustness to disorder due to the underlying topological protection. With specific design of topological qubit chains, quantum states can be encoded in edge states of the systems, and transferred from one end of the chain to the other by adiabatically altering couplings between qubits~\citep{mei2018robust,palaiodimopoulos2021fast,longhi2019topological,d2020fast}. However, most of these available proposals focus on single-qubit state transfer, and multi-qubit state transfer with arbitrary entanglement is still a challenging task. Recently, a more advanced protocol for QST via the so-called Floquet topological edge modes was proposed~\citep{tan2020high}. In this protocol, some entangled states are encoded in the edge states of quasienergy zero and $\pi$ modes, and the high-fidelity transfer of entangled states can be achieved over a long distance. However, this method requires additional dynamical modulation on the time-periodic couplings between the qubits and may pose new experimental challenges.
In our method, an arbitrary entangled state can be encoded in topological edge states, which are supported by a generalized SSH chain. As the parameters of the system are slowly altered, the edge states can be adiabatically transferred from one end of the chain to the other. Thus the entangled state can be transferred along this adiabatic passage. It is well known that the adiabatic evolution leads to two phases, i.e., the geometric phase and the dynamical phase. The geometric phase can be easily wiped out with a chosen gauge in non-closed adiabatic evolution~\citep{berry1984quantal}, and the dynamical phase can be eliminated when the evolution time is carefully chosen. Thus, the quantum phases between different components encoding an entangled state can be well-controlled or recovered, and the entangled state can be truly transferred. For the concreteness of discussions, we assume that the qubit chain is formed by the superconducting quantum circuits, which have been developing rapidly~\citep{jeffrey2014fast,gambetta2017building,brecht2016multilayer}. In particular, superconducting qubit chains have been widely studied for simulating many-body quantum physics~\citep{houck2012chip,paraoanu2014recent,roushan2017spectroscopic}. There are several advantages to using superconducting qubits as quantum simulators. First, superconducting qubits are highly coherent with long coherence times ($10\sim100\,\mu s$)~\citep{PhysRevLett.107.240501,novikov2016raman}. Second, most of the parameters of superconducting qubits are highly controllable~\citep{chen2014qubit,geller2015tunable,barends2015digital,reuther2010two,baust2015tunable}, thus we can perform rather arbitrary operations on the systems. Third, the superconducting quantum circuits have high scalability and designability. One quantum chip can support a large number of controllable qubits with different connecting manners~\citep{arute2019quantum}.
These advantages make superconducting quantum circuits one of the best platforms to perform quantum simulation and quantum computing. Moreover, the topological chain constructed by superconducting qubits has already been experimentally demonstrated~\citep{cai2019observation}. Thus, given the state of the art, superconducting qubit circuits are very suitable to construct such a chain for QST. The system parameters considered in this work are all based on the existing experiments of Xmon qubits~\citep{chen2014qubit,geller2015tunable,barends2015digital}. The paper is organized as follows. In Section~\ref{sec:generalized-ssh-model}, we introduce a generalized SSH model, in which each unit cell contains three qubits, and the topology of the model is characterized by the winding number. We also use Xmon qubits as an example to show how to form such a generalized model. In Section~\ref{sec:2-qubit-entanglment-state}, we derive the edge states of the Hamiltonian given in Section~\ref{sec:generalized-ssh-model} for the generalized SSH model, and then show that an arbitrary two-qubit entangled state can be encoded in these edge states and transferred from one end of the chain to the other through an adiabatic process. The exact dynamical solution based on the adiabatic theorem is also given. Furthermore, we illustrate the robustness of our proposal against disorder in two respects, i.e., in the coupling strengths and in the evolution time. In Section~\ref{sec:extending-our-formular}, we generalize QST from the entangled state of two qubits to those of $\mathcal{N}$ qubits with $\mathcal{N}\geq 3$. In particular, the QST of a three-qubit state is carefully analyzed. In Section~\ref{sec:discussions-and-conclution}, we further discuss our proposal and analyze the experimental feasibility, and finally summarize our results.
\section{generalized ssh model with three qubits in each unit cell\label{sec:generalized-ssh-model}} \begin{figure}[H] \begin{centering} \includegraphics[width=9cm]{Figure/Figure1} \par\end{centering} \caption{\label{Fig:Model scheme} (a) Schematic diagram of a one-dimensional qubit chain. Each unit cell hosts three qubits, labeled $A_{1,m}$, $A_{2,m}$ and $B_{m}$, respectively (the first subscript of $A$ denotes the intracell index; the subscript of $B$ and the second subscript of $A$ denote the intercell index). The coupling strength $g$ between $A_{1,m}$ and $A_{2,m}$ is uniform along the chain, whereas the hopping between $A_{1,m}$ and $B_{m}$ is staggered, denoted as $v$ and $w$. Each qubit can also be labeled by one unique numerical index $x$ in order of $A_{1,1}A_{2,1}B_{1}A_{1,2}\cdots B_{M-1}A_{1,M}A_{2,M}$. (b) Realization of the qubit chain with Xmon qubits. Each qubit contains three basic superconducting circuit elements, i.e., the Josephson junction, the capacitance and the inductance. Two adjacent qubits are coupled through a Josephson coupler. The qubit-qubit coupling can be tuned via an external magnetic flux $\Phi_{\mathrm{ext}}$ with a dc control. The Josephson junctions labeled $\mathrm{L_{J}}$ are each double junctions threaded by additional fluxes (not shown) that tune the qubit frequencies. Therefore, both the qubit frequencies and the couplings in the qubit chain are tunable.} \end{figure} The SSH model~\citep{su1979solitons} is one of the simplest examples hosting one-dimensional topological phases. This model and its various extensions have been widely used to study different physical phenomena~\citep{perez2018ssh,li2014topological,li2019extended,xu2020general,atala2013direct,li2015winding,meier2016observation,sirker2014boundary}. In the standard SSH model, each unit cell has two sublattices.
By contrast, for one class of extended SSH models, each unit cell contains 3 or more sublattices, thus called the SSH3 or SSHN models~\citep{xie2019topological,he2020non}. As schematically shown in Fig.~\ref{Fig:Model scheme}(a), we first consider an extended SSH3 model consisting of a qubit chain with $M$ unit cells. Hereafter, we use the unit cell number $M$ to denote the length of the chain. Each unit cell contains three sublattices labeled $A_{1,m}$, $A_{2,m}$, and $B_{m}$ with $m=1,2,\cdots, M$. The sublattice $B_{M}$ of the $M$th unit cell is removed from the right end of the chain. Sublattices $A_{1,m}$ and $B_{m}$ are analogous to those in the standard two-band SSH model with staggered coupling strengths $v$ and $w$. An extra sublattice $A_{2,m}$ is coupled to $A_{1,m}$ with the coupling strength $g$ ($g>0$). Thus, the Hamiltonian of such a qubit chain is ($\hbar=1$) \begin{eqnarray} H & = & \sum_{m=1}^{M-1}\left(v\sigma_{A_{1,m}}^{+}\sigma_{B_m}^{-}+w\sigma_{A_{1,m+1}}^{+}\sigma_{B_m}^{-}+\textrm{H.c.}\right)\nonumber \\ & + & \sum_{m=1}^{M}\left(g\sigma_{A_{1,m}}^{+}\sigma_{A_{2,m}}^{-}+\textrm{H.c.}\right),\label{eq:H} \end{eqnarray} with $m$ denoting the index of each unit cell. The ladder operators in the $m$th unit cell are given by $\sigma_{I}^{+}=\vert e_{I}\rangle\langle g_{I}\vert$ ($I=A_{1,m},A_{2,m},B_{m}$), with $\vert e_{I}\rangle$ and $\vert g_{I}\rangle$ denoting the excited and ground states, and the operator $\sigma_{I}^{-}$ is the Hermitian conjugate of the operator $\sigma_{I}^{+}$. We note that our proposal can be applied to any kind of controllable qubit system. However, for the concreteness of the studies, we use superconducting qubit circuits, e.g., Xmon qubits, to construct our theoretical model. This $X$-shaped qubit has several advantages, e.g., high coherence, fast tunable coupling, and easy connection~\citep{chen2014qubit,geller2015tunable,barends2015digital}, thus is more suitable for our proposal.
Our proposal is not limited to the Xmon qubit, and can also be applied to other types of superconducting qubits, e.g., transmon or flux qubits. For our setup, as shown in Fig.~\ref{Fig:Model scheme}(b), two Xmon qubits are connected with each other by a Josephson junction coupler~\citep{chen2014qubit,geller2015tunable}. An extra magnetic flux bias $\Phi_{\rm ext}$ is applied to tune the effective linear inductance of the coupler junction. Thus, the coupling constant between these two qubits is tunable via the controllable external magnetic flux $\Phi_{\rm ext}$ (see details in Appendix~\ref{sec:Xmon-qubut-chain}). In the following studies, we only consider single excitations of the chain, thus the Hamiltonian can be rewritten as \begin{eqnarray} H& = & \sum_{m=1}^{M-1}\left(v\vert\mathcal{A}_{1,m}\rangle\langle\mathcal{B}_{m}\vert+w\vert\mathcal{A}_{1,m+1}\rangle\langle\mathcal{B}_{m}\vert+\textrm{H.c.}\right)\nonumber \\ & + & \sum_{m=1}^{M}\left(g\vert\mathcal{A}_{1,m}\rangle\langle\mathcal{A}_{2,m}\vert+\textrm{H.c.}\right) \label{eq:Hamiltonian-2} \end{eqnarray} in the single-excitation subspace \{$\vert\mathcal{A}_{1,m}\rangle,\vert\mathcal{A}_{2,m}\rangle,\vert\mathcal{B}_{m}\rangle$\}, with $\vert\mathcal{A}_{1,m}\rangle=\sigma_{A_{1,m}}^{+}\vert G\rangle$, $\vert\mathcal{A}_{2,m}\rangle=\sigma_{A_{2,m}}^{+}\vert G\rangle$ and $\vert\mathcal{B}_{m}\rangle=\sigma_{B_m}^{+}\vert G\rangle$. Here $\vert G\rangle$ denotes that all qubits in the chain are in the ground state, i.e., $\vert G\rangle=\vert g_{A_{1,1}}g_{A_{2,1}}g_{B_{1}}\cdots g_{A_{2,M}}\rangle$, which is written as $\vert G\rangle=\vert gg\cdots g\rangle$ for simplicity. For the standard two-band SSH model, the topologically nontrivial phase is characterized by a nonzero winding number. Two topological in-gap edge modes are degenerate at zero energy in the thermodynamic limit~\citep{asboth2016short}.
However, for a finite lattice size, the two edge states hybridize due to the finite-size effect, so that their energy eigenvalues are shifted by an exponentially small amount. Dynamics-wise, this edge-state hybridization would then induce Rabi oscillations between the two topological edge modes if the initial state is prepared by exciting the leftmost or rightmost qubit only~\citep{nie2020bandgap}. To obtain a steady edge state in a finite-size system, one may remove one edge qubit from the standard SSH chain~\citep{mei2018robust,cai2019observation}. In this case, this imperfect SSH chain only supports one edge state. This is useful for our purposes here since we consider the same platform. Indeed, applying this idea of eliminating one edge qubit from an extended SSH model, we can likewise engineer steady edge states in the extended SSH setting, as shown below. For our extended SSH3 model shown in Fig.~\ref{Fig:Model scheme}(a), the Hilbert space of the Hamiltonian is enlarged by the extra qubits $A_{2,m}$ compared with the standard SSH model. The edge states are expected to form from two renormalized branches of the qubit chain, thus resulting in the upper and the lower edge states with positive and negative eigenenergies, as discussed further in Appendix~\ref{sec:A-straightforward-diagram}. To see the topological aspect of these edge states, we first consider the case with $g=0$, where sublattices $A_{1,m}$ and $B_{m}$ are decoupled from $A_{2,m}$. In this case, the imperfect SSH model with an odd number of qubits is formed, where $v$ and $w$ are the intra- and inter-cell couplings. In the standard SSH model, edge states appear when the coupling strength at the edge is weaker than the coupling next to it. Similarly, for the imperfect SSH model, a weaker coupling strength appears at the left (right) edge and generates an edge state there when $v<w$ ($v>w$).
Therefore, in the presence of an odd number of sites, the system always has one edge state in the topologically nontrivial regime~\citep{mei2018robust}. That is, in our extended SSH3 model, either the left or the right edge state corresponds to a winding number of $1$ for the Fourier transformed Hamiltonian (in momentum space), with unit cells defined either as sites $(A_{1,m},A_{2,m},B_{m})$ or as sites $(B_{m},A_{1,m+1},A_{2,m+1})$ in Fig.~\ref{Fig:Model scheme}(a) respectively. When $g=0$, the edge state is only located at the sublattice $A_{1,m}$, which returns to the case of the standard SSH chain. When $g>0$, the edge states are expected to occupy both sublattices $A_{1,m}$ and $A_{2,m}$, because they take the role of the $A$-type sublattice in the standard SSH chain (see Appendix~\ref{sec:Topological-invariant}). With this understanding, we can see that the edge states in our extended SSH model do originate from the topological edge states in the standard SSH model, and hence should be robust to local disorder. Below we show how such edge states in the extended SSH chain can be used to robustly transfer arbitrary entangled states. We here emphasize that the parameters used for the following numerical simulations are taken from superconducting qubit circuits, e.g., Xmon qubit circuits~\citep{chen2014qubit,geller2015tunable,barends2015digital}. \section{2-qubit entanglement state transfer\label{sec:2-qubit-entanglment-state}} \subsection{Edge states of the qubit chain} In our extended SSH model, there are $\mathcal{L}=3M-1$ qubits in the chain since one $B$-type qubit at the right end of the chain is removed.
Let us first make the ansatz that the resultant edge state in our system exclusively occupies sublattices $A_{1,m}$ and $A_{2,m}$ (see Appendix~\ref{sec:A-straightforward-diagram}), i.e., the associated wavefunction can be written as \begin{equation} \vert\varPsi_{\rm edge}\rangle=\sum_{m=1}^{M}\lambda^{m}\left(a\sigma_{A_{1,m}}^{+}+b\sigma_{A_{2,m}}^{+}\right)\vert G\rangle. \label{eq:psi} \end{equation} As an energy eigenstate of the system's Hamiltonian, the wavefunction in Eq.~(\ref{eq:psi}) must satisfy the stationary Schr\"odinger equation \begin{equation} H\vert\varPsi_{\rm edge}\rangle=E\vert\varPsi_{\rm edge}\rangle. \end{equation} Combining Eq.~\eqref{eq:H} with Eq.~\eqref{eq:psi}, we have \begin{align} E\left(a\sigma_{A_{1,m}}^{+}+b\sigma_{A_{2,m}}^{+}\right)\vert G\rangle & = a\left(v+w\lambda\right)\sigma_{B_{m}}^{+}\vert G\rangle \\ & + g\left(b\sigma_{A_{1,m}}^{+}+a\sigma_{A_{2,m}}^{+}\right)\vert G\rangle. \nonumber \end{align} The coefficient of $\sigma_{B_{m}}^{+}$ must vanish, which gives $v+w\lambda=0$, i.e., $\lambda=-v/w$, and the coefficients $a$ and $b$ satisfy \begin{equation} \left(\begin{array}{cc} 0 & g\\ g & 0 \end{array}\right)\left(\begin{array}{c} a\\ b \end{array}\right)=E\left(\begin{array}{c} a\\ b \end{array}\right). \end{equation} Thus, the energy eigenvalues of the edge states are given by $E_{\pm}=\pm g$, with coefficients ($a,b$) being ($1/\sqrt{2},1/\sqrt{2}$) or ($1/\sqrt{2},-1/\sqrt{2}$) respectively. This explicit edge state solution indicates that there are two edge states with the form of the Bell state $\vert\chi_{m,\pm}\rangle=\left(\vert\mathcal{A}_{1,m}\rangle\pm\vert\mathcal{A}_{2,m}\rangle\right)/\sqrt{2}$ in each unit cell.
The wavefunctions corresponding to the edge states hence take the following form (unnormalized) \begin{equation} \vert\varPsi_{\pm}\rangle=\sum_{m=1}^{M}\lambda^{m}\left(\frac{\sigma_{A_{1,m}}^{+}\pm\sigma_{A_{2,m}}^{+}}{\sqrt{2}}\right)\vert G\rangle,\label{eq:psi-1} \end{equation} where both the upper (labeled as $+$) and lower (labeled as $-$) edge states only occupy the $A$-type qubits. When $\left|\lambda\right|=v/w\ll1$, i.e., $v\ll w$, the edge states are mostly localized in the left end of the chain as \begin{equation} \vert\varPsi_{\pm}\rangle\approx\vert L_{\pm}\rangle=\frac{1}{\sqrt{2}}\left(\sigma_{A_{1,1}}^{+}\pm\sigma_{A_{2,1}}^{+}\right)\vert G\rangle. \end{equation} In the limit of $v=0$, the qubit chain degenerates into $M-1$ trimers and an additional dimer, and an exact edge state $\vert L_{\pm}\rangle$ is obtained at the left end of the chain. When $\left|\lambda\right|=v/w\gg1$, i.e., $v\gg w$, the edge states are mostly localized in the right end of the chain as \begin{equation} \vert\varPsi_{\pm}\rangle\approx\vert R_{\pm}\rangle=\frac{1}{\sqrt{2}}\left(\sigma_{A_{1,M}}^{+}\pm\sigma_{A_{2,M}}^{+}\right)\vert G\rangle. \end{equation} In the limit of $w=0$, an exact edge state $\vert R_{\pm}\rangle$ can be obtained at the right end of the chain. In Fig.~\ref{Fig:two-qubit state}(a), as an example, the energy spectrum of a $14$-qubit chain with $5$ unit cells is plotted for $\left|\lambda\right|=v/w\in\left[0,\infty\right]$. It clearly shows that the two topological edge modes (color-dashed lines) exist in the gaps of three bulk bands. The bulk band in the middle of the two edge modes is 4-fold degenerate as shown in Fig.~\ref{Fig:two-qubit state}(b). As the parameter $\lambda$ changes from $0$ to $\infty$, the topological edge states also change their behavior qualitatively, namely, from being localized at the left end (red) to being localized at the right end (green).
Meanwhile, the eigenvalues $E_{\pm}$ corresponding to these two edge states remain constant throughout. \subsection{QST process for arbitrary two-qubit entangled states} \begin{figure}[htp] \begin{centering} \includegraphics[width=8.5cm]{Figure/Figure2} \par\end{centering} \caption{\label{Fig:two-qubit state}Two-qubit state transfer via a 14-qubit chain with $5$ unit cells, i.e., $M=5$. The system parameters are $J=5g$ and $g/2\pi=10\,$MHz. (a) The energy spectrum for an extended SSH3 model. Black-solid lines represent the bulk states and color-dashed lines represent the edge states. Three bulk bands are separated by the two edge states; the middle bulk band is $4$-fold degenerate. As $\omega t$ changes from $0$ to $\pi$, edge states are transferred from the left end (red) to the right end (green). (b) Schematics for the eigenstate distribution at $\omega t=\pi/6$. Each point represents one eigenstate. Blue and red dots represent bulk and edge states, respectively. (c) The variation of $|\langle \Phi_{r}(t)|\dot{{\Phi}}_{l}(t)\rangle|$ with the time evolution. Here $\Phi_{l}(t)$ takes the lower edge state, i.e., $l=5$. (d) Time evolutions of the target state occupations $\left|\langle\varPsi_{F}\vert\varPsi\left(t\right)\rangle\right|$ for different initial states. Red-solid and blue-dashed curves denote theoretical solutions. Red dots and blue stars denote the numerical simulations. The red-solid curve and red dots correspond to the initial state $\left(\vert \mathcal{A}_{1,1}\rangle+\vert \mathcal{A}_{2,1}\rangle\right)/\sqrt{2}$. Meanwhile, the blue-dashed curve and blue stars correspond to the initial state $\vert \mathcal{A}_{1,1}\rangle$. (e) Time-dependent population distribution on each qubit in the chain when the initial state is prepared in the single-qubit state $\vert \mathcal{A}_{1,1}\rangle$.
(f) Time-dependent population distribution on each qubit in the chain when the initial state is prepared in the two-qubit Bell state $\left(\vert \mathcal{A}_{1,1}\rangle+\vert \mathcal{A}_{2,1}\rangle\right)/\sqrt{2}$.} \end{figure} According to the quantum adiabatic theorem, if the parameters of the qubit chain in Eq.~\eqref{eq:H} are changed slowly enough, the system will remain in its instantaneous eigenstate. At $v=0$, the leftmost two qubits $A_{1,1}$ and $A_{2,1}$ are isolated and the edge states are $\vert L_{\pm}\rangle=\vert\chi_{1,\pm}\rangle\vert gg\cdots g\rangle$. At $w=0$, the rightmost two qubits $A_{1,M}$ and $A_{2,M}$ are isolated and the edge states are $\vert R_{\pm}\rangle=\vert gg\cdots g\rangle\vert\chi_{M,\pm}\rangle$. Hence, if the coupling strengths \begin{equation} v=J\left[1-\cos\left(\omega t\right)\right],w=J\left[1+\cos\left(\omega t\right)\right],\label{eq:vw} \end{equation} are changed slowly, the state at the left end can be transferred to the right end of the chain. Here $\omega$ and $J$ are the frequency and strength of the control field, respectively. For example, as shown in Fig.~\ref{Fig:two-qubit state}(a), if the system is initially prepared in the state $\vert L_{\pm}\rangle$ at the time $t=0$, then this state adiabatically evolves from $\vert L_{\pm}\rangle$ to $\vert R_{\pm}\rangle$ when the time $t$ slowly varies from $0$ to $\pi/\omega$. The adiabatic following is feasible here because the considered state is the system's edge state, which is gapped from the bulk states. So far, we have analyzed QST when the initial state is an edge state.
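The edge-state solution above can be checked numerically. The following sketch (our own Python illustration; the chain length and coupling values are arbitrary choices in units of $g$) builds the single-excitation Hamiltonian of Eq.~\eqref{eq:Hamiltonian-2} with the site ordering $A_{1,1}A_{2,1}B_{1}\cdots A_{1,M}A_{2,M}$ and verifies that the states of Eq.~\eqref{eq:psi-1} are exact eigenstates with energies $\pm g$:

```python
import numpy as np

def ssh3_hamiltonian(M, v, w, g):
    """Single-excitation Hamiltonian of the extended SSH3 chain with 3M-1 sites,
    ordered A_{1,1}, A_{2,1}, B_1, A_{1,2}, ..., B_{M-1}, A_{1,M}, A_{2,M}."""
    H = np.zeros((3 * M - 1, 3 * M - 1))
    for m in range(M):
        H[3 * m, 3 * m + 1] = H[3 * m + 1, 3 * m] = g          # A1_m - A2_m coupling g
        if m < M - 1:
            H[3 * m, 3 * m + 2] = H[3 * m + 2, 3 * m] = v      # A1_m - B_m coupling v
            H[3 * m + 2, 3 * m + 3] = H[3 * m + 3, 3 * m + 2] = w  # B_m - A1_{m+1} coupling w
    return H

def edge_state(M, v, w, sign):
    """Analytic edge state: sum_m lambda^m (|A1_m> + sign*|A2_m>)/sqrt(2), lambda = -v/w."""
    lam = -v / w
    psi = np.zeros(3 * M - 1)
    for m in range(M):
        psi[3 * m], psi[3 * m + 1] = lam ** (m + 1), sign * lam ** (m + 1)
    return psi / np.linalg.norm(psi)

M, g = 5, 1.0
for v, w in [(2.0, 8.0), (8.0, 2.0)]:   # left- and right-localized regimes
    H = ssh3_hamiltonian(M, v, w, g)
    for sign in (+1, -1):
        psi = edge_state(M, v, w, sign)
        assert np.allclose(H @ psi, sign * g * psi)  # exact eigenstate with E = +/- g
```

For $v\ll w$ the state concentrates on the first unit cell and for $v\gg w$ on the last, in line with the $\vert L_{\pm}\rangle$ and $\vert R_{\pm}\rangle$ limits above.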
Let us now consider the general situation in which the system is initially prepared in an arbitrary entangled state \begin{equation} \vert\varPsi_{\rm in}\rangle=\left(\alpha\sigma_{A_{1,1}}^{+}+\beta\sigma_{A_{2,1}}^{+}\right)\vert G\rangle,\label{eq:psi-1-1} \end{equation} at the left end of the chain, which can be decomposed as \begin{equation} \vert\varPsi_{\rm in}\rangle=\frac{\alpha+\beta}{\sqrt{2}}\vert L_{+}\rangle+\frac{\alpha-\beta}{\sqrt{2}}\vert L_{-}\rangle.\label{eq:10} \end{equation} For this more general situation involving a superposition of the two edge states, one must carefully analyze the quantum phases. In an adiabatic process, if the system is initially prepared in an eigenstate $\vert\varPhi\left(0\right)\rangle$, then the state at time $t$ is given by \begin{equation} \vert\psi_{\rm ad}\left(t\right)\rangle=e^{ir\left(t\right)}e^{i\theta\left(t\right)}\vert\varPhi\left(t\right)\rangle, \end{equation} where $\theta\left(t\right)=-\int_{0}^{t}E(t^{\prime})dt^{\prime}$ is the dynamical phase and $r\left(t\right)=i\int_{0}^{t}\langle\varPhi\left(t^{\prime}\right)\vert\dot{\varPhi}\left(t^{\prime}\right)\rangle dt^{\prime}$ is the geometric phase, which can be gauged away unless the evolution path is closed. The normalization of $\vert\varPhi\left(t^{\prime}\right)\rangle$ implies that $\langle\varPhi\left(t^{\prime}\right)\vert\dot{\varPhi}\left(t^{\prime}\right)\rangle$ is purely imaginary, which guarantees that $r\left(t\right)$ is real~\citep{berry1984quantal}. In our protocol, with the gauge chosen as in Eq.~\eqref{eq:psi-1}, $\vert\varPsi_{\pm}\rangle$ contains only real parameters along $\lambda\left(t\right)=\left[1-\cos\left(\omega t\right)\right]/\left[1+\cos\left(\omega t\right)\right]$, so $\langle\varPhi\left(t^{\prime}\right)\vert\dot{\varPhi}\left(t^{\prime}\right)\rangle$ is real and $r\left(t\right)$ is purely imaginary. Since $r\left(t\right)$ must be both real and purely imaginary, the geometric phase vanishes identically and is naturally gauged away.
We thus only need to consider the dynamical phase associated with the adiabatic process. If we adiabatically change the parameters $v$ and $w$ up to the final time $t$, then following the quantum adiabatic theorem, the initial state in Eq.~(\ref{eq:10}) evolves to the state \begin{eqnarray} \vert\varPsi\left(t\right)\rangle&=&\frac{\alpha+\beta}{\sqrt{2}}\vert\varPsi_{+}(t)\rangle e^{-i\int_{0}^{t}E_{+}dt'}\nonumber\\ & +& \frac{\alpha-\beta}{\sqrt{2}}\vert\varPsi_{-}(t)\rangle e^{-i\int_{0}^{t}E_{-}dt'}. \label{eq:Analytical solution} \end{eqnarray} As discussed above, $E_{\pm}$ in our model are both constants due to the nature of the topological edge states. As such, the final state at time $t$ is \begin{equation} \vert\varPsi\left(t\right)\rangle=\frac{\alpha+\beta}{\sqrt{2}}\vert\varPsi_{+}(t)\rangle e^{-igt}+\frac{\alpha-\beta}{\sqrt{2}}\vert\varPsi_{-}(t)\rangle e^{igt}.\label{eq:analy} \end{equation} Indeed, directly from the values $E_{\pm}=\pm g$, we find that the phase factors $e^{-igt}$ and $e^{igt}$ of the two involved edge states (from the upper and lower branch, respectively) have the same period $T=2\pi/g$. Therefore, if the evolution time is an integer multiple of the dynamical period $T$, the dynamical phase difference between the two involved edge states in Eq.~(\ref{eq:analy}) will vanish. We call this dynamical period one evolution cycle. If the total adiabatic protocol time is always chosen to be an integer multiple of evolution cycles, then the concern with dynamical phases is lifted. As shown in Fig.~\ref{Fig:two-qubit state}(a), if the control field applied to the coupling strengths evolves from $t_{0}=0$ to $t_{f}=\pi/\omega$, then the edge state is transferred from the left end to the right end.
In this case, Eq.~(\ref{eq:analy}) becomes \begin{equation} \vert\varPsi_{f}\rangle=\frac{\alpha+\beta}{\sqrt{2}}\vert R_{+}\rangle e^{-i\pi \frac{g}{\omega}}+\frac{\alpha-\beta}{\sqrt{2}}\vert R_{-}\rangle e^{i\pi \frac{g}{\omega}}.\label{eq:Finalstate-1} \end{equation} Here we choose the evolution time $t_{f}$ to be an integer multiple of evolution cycles, i.e., $t_{f}/T=g/(2\omega)=n$, where $n$ is a large integer so as to satisfy the adiabatic condition. The dynamical phases then vanish and the state in Eq.~(\ref{eq:Finalstate-1}) reduces to \begin{equation} \vert\varPsi_{F}\rangle=\left(\alpha\sigma_{A_{1,M}}^{+}+\beta\sigma_{A_{2,M}}^{+}\right)\vert G\rangle. \label{eq:14} \end{equation} Here, the subscript $F$ denotes the perfectly transferred final state. Therefore, an arbitrary two-qubit entangled state can be encoded in the two edge states and perfectly transferred from the left end to the right end via an adiabatic passage. Notice that throughout the protocol, only the $A_{1,m}$- and $A_{2,m}$-type qubits are occupied by the edge states, while the $B_{m}$-type qubits serve as an invariable medium. Thus the qubits $A_{1,m}$ and $A_{2,m}$ can be considered as the transport qubits, and the qubits $B_{m}$ can be considered as the mediated qubits. We here have two remarks. First, for the special case of two-qubit state transfer, the evolution time can be half of the evolution cycle defined above. In this case, the final state acquires a global phase factor $-1$, which does not affect the task of entangled state transfer. Second, one may worry about the robustness of such an adiabatic protocol, as it requires precise timing. This worry is unfounded because, along the way, the eigenvalues of the edge states are pinned at special constant values due to the topological features, and hence we still expect robust protocols. \subsection{Analysis of the adiabatic condition and two examples of QST} We now analyze the adiabatic condition of the evolution.
The adiabatic approximation requires a small changing rate of the Hamiltonian $\dot{H}\left(t\right)$ and a large energy gap $\left|E_{r}-E_{l}\right|$ between the $r$th and $l$th eigenstates. For our protocol, if we assume that the $l$th eigenstate $\vert\varPhi_{l}\left(t\right)\rangle$ is the edge state, then the adiabatic condition is given by \begin{equation} \left|\langle\varPhi_{r}\left(t\right)\vert\dot{\varPhi}_{l}\left(t\right)\rangle\right|=\left|\frac{\langle\varPhi_{r}\left(t\right)\vert\dot{H}\left(t\right)\vert\varPhi_{l}\left(t\right)\rangle}{E_{r}\left(t\right)-E_{l}\left(t\right)}\right|\ll1, \label{eq:adiabatic condition} \end{equation} where $E_{r}\left(t\right)$ is the instantaneous eigenenergy corresponding to the instantaneous state $\vert\varPhi_{r}\left(t\right)\rangle$ at time $t$. The eigenstates are sorted by their eigenenergies (lowest to highest), and the $5$th eigenstate in Fig.~\ref{Fig:two-qubit state}(a) is the lower edge state. Here we let the 14-qubit chain evolve for $5$ evolution cycles, i.e., $\omega=0.1g$. As shown in Fig.~\ref{Fig:two-qubit state}(c), for the lower edge state ($l=5$), the adiabatic conditions are verified to be satisfied for $r=3,4,6,7,8,9$. For the bulk states between the two edge states ($r=6,7,8,9$), the curves are not continuous due to the numerical instability of the degenerate eigensolutions of the system. To characterize our protocol, we choose the target-state occupation $\left|\langle\varPsi_{F}\vert\varPsi\left(t\right)\rangle\right|$ as the dynamical indicator. In Fig.~\ref{Fig:two-qubit state}(d), we compare the analytical and numerical results for this indicator. The analytical solution for $|\Psi(t)\rangle$ is given in Eq.~(\ref{eq:analy}), while the numerical solution is calculated with an ordinary differential equation (ODE) solver using a small, fixed step size.
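Since $\dot{H}(t)$ is analytic for the schedule in Eq.~\eqref{eq:vw} (only $v$ and $w$ vary, with $\dot v=-\dot w=J\omega\sin\omega t$), the quantity in Eq.~(\ref{eq:adiabatic condition}) can be evaluated directly. The sketch below (site ordering, helper names, and units $g=1$ are our own choices) rebuilds the 14-qubit Hamiltonian and checks, at the instant of Fig.~\ref{Fig:two-qubit state}(b), that the matrix element between the two edge states vanishes and that the couplings to the bulk remain well below one:

```python
import numpy as np

def chain(M, v, w, g):
    """Single-excitation Hamiltonian of the 3M-1 qubit chain
    (sites ordered A_{1,m}, A_{2,m}, B_m; our own convention)."""
    L = 3 * M - 1
    H = np.zeros((L, L))
    for m in range(M):
        H[3*m, 3*m+1] = H[3*m+1, 3*m] = g          # A1-A2
    for m in range(M - 1):
        H[3*m, 3*m+2] = H[3*m+2, 3*m] = v          # A1_m - B_m
        H[3*m+3, 3*m+2] = H[3*m+2, 3*m+3] = w      # B_m - A1_{m+1}
    return H

g, J, omega, M = 1.0, 5.0, 0.1, 5
t = (np.pi / 6) / omega                             # the instant of Fig. 2(b)
v, w = J * (1 - np.cos(omega*t)), J * (1 + np.cos(omega*t))
dv, dw = J * omega * np.sin(omega*t), -J * omega * np.sin(omega*t)

E, V = np.linalg.eigh(chain(M, v, w, g))
dH = chain(M, dv, dw, 0.0)                          # dH/dt: only v, w vary

l = int(np.argmin(abs(E - g)))                      # upper edge state, E_l = +g
ratios = np.array([abs(V[:, r] @ dH @ V[:, l] / (E[r] - E[l]))
                   for r in range(len(E)) if r != l])
r_edge = int(np.argmin(abs(E + g)))                 # the other edge state
assert abs(V[:, r_edge] @ dH @ V[:, l]) < 1e-9      # edge-edge element vanishes
assert ratios.max() < 1.0                           # adiabatic condition holds
```

The edge-edge element is zero because both edge states live on the $A$-type qubits while $\dot{H}$ only connects $A_{1,m}$ to $B_{m}$ sites.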
Here, $\omega=0.1g$, and the evolution time is $t_{f}=0.5~\mu$s, i.e., exactly $5$ times the dynamical period ($T=2\pi/g=0.1~\mu$s). We choose two typical initial states to be transferred. The first is the single-qubit state \begin{equation} \vert\varPsi_{\rm in}^{\left(1\right)}\rangle=\sigma_{A_{1,1}}^{+}\vert G\rangle \equiv \frac{\sqrt{2}}{2}\left(\vert L_{+}\rangle+\vert L_{-}\rangle\right), \end{equation} which is obtained from Eq.~(\ref{eq:10}) by setting $\alpha=1$ and $\beta=0$. The other is the two-qubit Bell state \begin{equation} \vert\varPsi_{\rm in}^{\left(2\right)}\rangle=\frac{\sqrt{2}}{2}\left(\sigma_{A_{1,1}}^{+}+\sigma_{A_{2,1}}^{+}\right)\vert G\rangle \equiv\vert L_{+}\rangle, \end{equation} with $\alpha=\beta=1/\sqrt{2}$. For the first case of single-qubit state transfer, the analytical evolution of the state is given by \begin{equation} \vert\Psi^{\left(1\right)}(t)\rangle=\frac{\sqrt{2}}{2}\left(\vert \varPsi_{+}\rangle e^{-igt}+\vert\varPsi_{-}\rangle e^{igt}\right).\label{eq:first-case} \end{equation} The analytical solution (blue-dashed line) and numerical simulation (blue stars) for the dynamical indicator $\left|\langle\varPsi_{F}\vert\varPsi\left(t\right)\rangle\right|$ are plotted as a function of the evolution time $t$ in Fig.~\ref{Fig:two-qubit state}(d). In this case, the indicator $\left|\langle\varPsi_{F}\vert\varPsi\left(t\right)\rangle\right|$ varies periodically due to the dynamical phase difference between the two edge states. The analytical result agrees well with the numerical one, which confirms that the chosen parameters fulfill the adiabatic conditions.
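A fixed-step numerical integration of this kind can be sketched as follows (a minimal illustration with our own chain builder, step size, and units $g=1$): the state is propagated by the piecewise-constant propagator $e^{-iH(t)\Delta t}$, and for the initial state $\sigma^{+}_{A_{1,1}}\vert G\rangle$ the target-state occupation is close to one at $t_{f}=5T$.

```python
import numpy as np

def chain(M, v, w, g):
    """Single-excitation Hamiltonian; sites A_{1,m}, A_{2,m}, B_m (our ordering)."""
    L = 3 * M - 1
    H = np.zeros((L, L))
    for m in range(M):
        H[3*m, 3*m+1] = H[3*m+1, 3*m] = g
    for m in range(M - 1):
        H[3*m, 3*m+2] = H[3*m+2, 3*m] = v
        H[3*m+3, 3*m+2] = H[3*m+2, 3*m+3] = w
    return H

g, J, omega, M = 1.0, 5.0, 0.1, 5      # n = g/(2 omega) = 5 evolution cycles
t_f, steps = np.pi / omega, 4000
dt = t_f / steps

psi = np.zeros(3 * M - 1, dtype=complex)
psi[0] = 1.0                            # initial state |A_{1,1}>
for k in range(steps):
    t = (k + 0.5) * dt                  # midpoint of the k-th step
    v, w = J * (1 - np.cos(omega*t)), J * (1 + np.cos(omega*t))
    E, V = np.linalg.eigh(chain(M, v, w, g))
    psi = V @ (np.exp(-1j * E * dt) * (V.conj().T @ psi))

target = np.zeros(3 * M - 1)
target[3 * (M - 1)] = 1.0               # target state |A_{1,M}>
fidelity = abs(target @ psi)
assert fidelity > 0.9                   # near-perfect transfer at t_f = 5T
```

Each step exponentiates the instantaneous Hamiltonian exactly via its eigendecomposition, so the only error sources are the discretization of the schedule and the residual diabatic transitions.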
For two-qubit Bell state transfer, the adiabatic evolution of the state is \begin{equation} \vert\varPsi^{\left(2\right)}\left(t\right)\rangle=\vert \varPsi_{+}\rangle e^{-igt}.\label{eq:second-case} \end{equation} In this case, the dynamical phase can be gauged away as a global phase, so the dynamical indicator does not oscillate and increases smoothly with time. Again, the analytical result (red line) agrees well with the numerical one (red dots). The first case of single-qubit state transfer in Eq.~(\ref{eq:first-case}) is further analyzed in Fig.~\ref{Fig:two-qubit state}(e), where we show the population distribution $|\langle P_{m}|\Psi^{\left(1\right)}(t)\rangle|$ ($P_{m} \in \{\vert\mathcal{A}_{1,m}\rangle,\vert\mathcal{A}_{2,m}\rangle,\vert\mathcal{B}_{m}\rangle\}$) on each site of the state $|\Psi^{\left(1\right)}(t)\rangle$ during the adiabatic protocol. The state distribution shifts from left to right with rapid oscillations, due to the coherent interference caused by the dynamical phase difference between the two involved edge states, as mentioned above. In contrast, for the second case in Eq.~(\ref{eq:second-case}), Fig.~\ref{Fig:two-qubit state}(f) demonstrates the smooth transfer of the state $|\Psi^{\left(2\right)}(t)\rangle$ without any oscillation. \subsection{Robustness analysis for two-qubit state transfer} Any realistic implementation of the theoretical protocol unavoidably involves disorder from many sources, e.g., environmental effects, timing inaccuracy of the control field applied for the adiabatic evolution, nonuniformity of the fabricated qubits, and imprecise couplings between the qubits in the chain. Here, we consider two main imperfections: one is the disorder of the qubit couplings, and the other is the inaccuracy of the evolution time needed to achieve perfect state transfer.
The first kind of disorder can be analyzed by modelling it as external perturbation terms in the system Hamiltonian, i.e., \begin{align} \delta H & = \stackrel[m=1]{M-1}{\sum}\left(\delta\mu_{A_{1,m}}\sigma_{A_{1,m}}^{+}\sigma_{B_m}^{-}+\delta\mu_{B_m}\sigma_{A_{1,m+1}}^{+}\sigma_{B_m}^{-}+\textrm{H.c.}\right)\nonumber \\ & + \stackrel[m=1]{M}{\sum}\left(\delta\mu_{A_{2,m}}\sigma_{A_{1,m}}^{+}\sigma_{A_{2,m}}^{-}+\textrm{H.c.}\right),\label{eq:H-1} \end{align} where $\delta\mu_{A_{1,m}}$, $\delta\mu_{A_{2,m}}$, and $\delta\mu_{B_{m}}$ are the disorder coefficients, assumed to follow a Gaussian distribution $\sim \exp[-(\delta\mu)^2/(2\xi^2)]$, with $\xi$ being the standard deviation of the disorder in the coupling strength, i.e., the coupling disorder strength. Note that all these disorders are related to the control field. Thus, as the adiabatic parameters vary, it is more realistic to assume these disorders to be time-dependent, i.e., temporal noises. If the initial state is prepared in $\vert\varPsi_{\rm in}\rangle=\left(\alpha\sigma_{A_{1,1}}^{+}+\beta\sigma_{A_{2,1}}^{+}\right)\vert G\rangle$, then the adiabatic evolution is given by \begin{equation} \vert\psi\left(t\right)\rangle=U\left(t\right)\vert\varPsi_{\rm in}\rangle=\mathcal{T}e^{-i\int_{0}^{t}\left[H\left(t'\right)+\delta H\left(t'\right)\right]dt'}\vert\varPsi_{\rm in}\rangle, \label{eq:16} \end{equation} where $\mathcal{T}$ is the time-ordering operator. Numerically, this process can be simulated as $U\left(t\right)=\mathcal{T}\prod e^{-i\left[H\left(t'\right)+\delta H\left(t'\right)\right]\Delta t}$ ($\Delta t\ll T$). At $t=t_{f}$, the state described in Eq.~(\ref{eq:16}) evolves to $\vert\psi_{f}\rangle$. Thus, the fidelity is given by \begin{equation} F=\left|\langle\varPsi_{F}\vert\psi_{f}\rangle\right|,\label{eq:17} \end{equation} where $\vert\varPsi_{F}\rangle$ is the perfectly transferred state given in Eq.~(\ref{eq:14}).
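A minimal sketch of one disorder realization of this simulation (our own discretization and parameter choices; for simplicity the $\delta\mu$'s are resampled at every time step to mimic temporal noise, and the random seed is fixed for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(7)

def chain(M, v, w, g):
    """Clean single-excitation Hamiltonian (sites A_{1,m}, A_{2,m}, B_m)."""
    L = 3 * M - 1
    H = np.zeros((L, L))
    for m in range(M):
        H[3*m, 3*m+1] = H[3*m+1, 3*m] = g
    for m in range(M - 1):
        H[3*m, 3*m+2] = H[3*m+2, 3*m] = v
        H[3*m+3, 3*m+2] = H[3*m+2, 3*m+3] = w
    return H

def delta_h(M, xi):
    """Disorder term of Eq. (H-1): Gaussian shifts on every coupling."""
    d = np.zeros((3*M - 1, 3*M - 1))
    for m in range(M):
        d[3*m, 3*m+1] = d[3*m+1, 3*m] = rng.normal(0.0, xi)
    for m in range(M - 1):
        d[3*m, 3*m+2] = d[3*m+2, 3*m] = rng.normal(0.0, xi)
        d[3*m+3, 3*m+2] = d[3*m+2, 3*m+3] = rng.normal(0.0, xi)
    return d

g, J, M, xi = 1.0, 5.0, 4, 0.05          # weak disorder, 11-qubit chain
omega = g / 20.0                          # n = g/(2 omega) = 10 cycles
t_f, steps = np.pi / omega, 4000
dt = t_f / steps

alpha, beta = 1/np.sqrt(2), 1j/np.sqrt(2)  # an arbitrary entangled input
psi = np.zeros(3*M - 1, dtype=complex)
psi[0], psi[1] = alpha, beta
for k in range(steps):
    t = (k + 0.5) * dt
    v, w = J*(1 - np.cos(omega*t)), J*(1 + np.cos(omega*t))
    E, V = np.linalg.eigh(chain(M, v, w, g) + delta_h(M, xi))
    psi = V @ (np.exp(-1j*E*dt) * (V.conj().T @ psi))

target = np.zeros(3*M - 1, dtype=complex)
target[3*(M-1)], target[3*(M-1)+1] = alpha, beta   # transferred state, Eq. (14)
F = abs(target.conj() @ psi)
assert F > 0.9                            # transfer survives weak disorder
```

Averaging $F$ over many seeded realizations gives the average fidelity $\mathcal{F}$ discussed below.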
\begin{figure}[htb] \begin{centering} \includegraphics[width=9cm]{Figure/Figure3} \par\end{centering} \caption{\label{Fig:two-qubit disorder} Robustness of our protocol for two-qubit state transfer. (a) Target-state occupation at the right edge (fidelity $F$) for different numbers of evolution cycles. As the qubit number of the chain becomes larger, the number of evolution cycles demanded for the adiabatic evolution increases from 5 to 10. A qubit chain containing 23 qubits, i.e., 8 unit cells, needs at least 10 evolution cycles to guarantee adiabaticity. (b) The distribution of fidelities for one fixed state-transfer process with 100 repetitions, where $\xi=0.5g$ and $M=4$. (c) The average fidelities of two-qubit entangled state transfer with the coupling disorder. The numbers of unit cells are 2, 4, 6, and 8 for the respective curves. (d) The average fidelities of two-qubit entangled state transfer with imperfections of the evolution time.} \end{figure} To verify the robustness of the protocol against these disorders, we first determine the proper number of evolution cycles $n$ under the ideal condition without disorder. As shown in Fig.~\ref{Fig:two-qubit disorder}(a), we have numerically calculated the fidelity $F$ given in Eq.~(\ref{eq:17}) for the right edge state transferred with different numbers of evolution cycles and different lengths of the chain. These results help us determine to what extent we are working in the adiabatic regime. Indeed, the fidelity rises rapidly with the increase in the number of evolution cycles. As shown in Fig.~\ref{Fig:two-qubit disorder}(a), when the number of unit cells of the chain varies from $2$ to $8$, the number of evolution cycles necessary to achieve a good fidelity increases from $5$ to $10$. Therefore, to guarantee the adiabatic condition and obtain a good fidelity for the transferred state, we choose $n=10$ evolution cycles for the following discussions, i.e., $t_{f}=10\,T=1~\mu$s.
In our numerical simulations, we perform $100$ repetitions of the adiabatic evolution for a given time $t_{f}$, where each repetition uses an independent random realization of the disorder. We then consider the average fidelity $\mathcal{F}=\overline{F}$ over these $100$ calculations. In Fig.~\ref{Fig:two-qubit disorder}(b), we plot the distribution of fidelities for $100$ repetitions of one chosen state transfer protocol. Here, the number of unit cells is taken as $M=4$, and the coupling disorder strength is taken as $\xi=0.5g$. The fidelities of the individual runs are highly concentrated around the average, so the average fidelity is a proper representation of the protocol fidelity. In Fig.~\ref{Fig:two-qubit disorder}(c), the effect of the coupling disorder on the protocol fidelity is shown for different lengths of the chain, i.e., $M=2,4,6,8$, respectively. For each length of the chain, we find that the average fidelity $\mathcal{F}$ over $100$ repetitions decreases from $1$ to $0.92$ as the coupling disorder strength $\xi$ changes from $10^{-3}g$ to $g$. We also find that there is a plateau near $\mathcal{F}=1$ for each case when $\xi$ lies in the range $\xi\in\left[10^{-3}g,10^{-1}g\right]$. Therefore, for our protocol, the fidelity $\mathcal{F}$ remains above $99\%$ as long as the coupling disorder strength $\xi$ is less than $0.1g$, which should be experimentally accessible. The second kind of disorder is the inaccuracy of the protocol execution time. Unlike a usual adiabatic process, our protocol demands that the total evolution time be an exact multiple of the dynamical period. However, the control inaccuracy of the evolution time may lead to an imperfect target state. This kind of imperfection can be modeled by an external perturbation time $\delta t$, which follows a Gaussian distribution $\sim \exp[-(\delta t)^2/(2\eta^2)]$, where $\eta$ is the standard deviation of the evolution time.
In this case, the modified transferred state can be derived as \begin{equation} \vert\psi_{f}\rangle=U\left(t_{f}+\delta t\right)\vert\varPsi_{\rm in}\rangle=\mathcal{T}e^{-i\int_{0}^{t_{f}+\delta t}H\left(t'\right)dt'}\vert\varPsi_{\rm in}\rangle, \end{equation} with the time inaccuracy $\delta t$. Figure~\ref{Fig:two-qubit disorder}(d) depicts the impact of the evolution-time imperfection. We find that the average fidelity $\mathcal{F}$ for each chain decreases from $1$ to $0.94$ as the time disorder strength $\eta$ changes from $10^{-3}T$ to $10^{-1}T$. For our protocol, the fidelity remains above $99\%$ when the time disorder strength $\eta$ is less than $0.01T$. Therefore, we conclude that our protocol is robust against both the qubit coupling disorder and the inaccuracy of the protocol execution time. \section{Extended protocols for $\mathcal{N}$-qubit state transfer\label{sec:extending-our-formular}} \begin{figure}[H] \begin{centering} \includegraphics[width=9cm]{Figure/Figure4} \par\end{centering} \caption{\label{Fig:N-qubit state transfer}Schematic of the extended SSH chain for $3$-qubit and $\mathcal{N}$-qubit state transfer. Each unit cell hosts $\mathcal{N}+1$ qubits ($4$ qubits in the $3$-qubit case). The edge states are localized on the transport qubits from $A_{1,m}$ to $A_{\mathcal{N},m}$ with $m=1,\cdots, M$. The remaining qubits are the mediated qubits, labeled $B_{m}$ with $m=1,\cdots, M-1$. $v$ denotes the coupling between qubits $A_{1,m}$ and $B_{m}$, and $w$ denotes the coupling between qubits $A_{1,m+1}$ and $B_{m}$.} \end{figure} Our protocol for arbitrary two-qubit entangled state transfer through the extended SSH chain can be easily generalized to $\mathcal{N}$-qubit state transfer.
For the $\mathcal{N}$-qubit transfer process, as schematically shown in Fig.~\ref{Fig:N-qubit state transfer}, the extended SSH model has $\mathcal{N}+1$ sites ($\mathcal{N}$ transport qubits and $1$ mediated qubit) in each unit cell, and the whole qubit chain with $M$ unit cells has $\mathcal{L}=\left(\mathcal{N}+1\right)M-1$ qubits. As the number of sites in each unit cell increases, more edge states emerge, so it is no easy task to find a proper evolution time that cancels all dynamical phase differences between these edge states in our protocol. As such, the couplings between the transport qubits should not simply be the constant $g$. To this end, we consider the following modified Hamiltonian \begin{eqnarray} H & = & \stackrel[m=1]{M-1}{\sum}\left(v\sigma_{A_{1,m}}^{+}\sigma_{B_m}^{-}+w\sigma_{A_{1,m+1}}^{+}\sigma_{B_m}^{-}+\textrm{H.c.}\right)\label{eq:H-N}\nonumber\\ & + & \stackrel[m=1]{M}{\sum}\left(g_{1}\sigma_{A_{1,m}}^{+}\sigma_{A_{2,m}}^{-}+g_{2}\sigma_{A_{2,m}}^{+}\sigma_{A_{3,m}}^{-}+\cdots\right.\nonumber\\ & + & \left. g_{\mathcal{N}-1}\sigma_{A_{\mathcal{N}-1,m}}^{+}\sigma_{A_{\mathcal{N},m}}^{-}+\textrm{H.c.}\right). \end{eqnarray} In each unit cell, the $\mathcal{N}$ transport qubits form an array with coupling constants $g_{1}, \cdots, g_{\mathcal{N}-1}$, and the mediated qubits $B_{m}$ couple neighboring arrays of transport qubits with coupling constants $v$ and $w$, respectively. \subsection{$3$-qubit state transfer} Let us now further illustrate our proposal for $3$-qubit entangled state transfer, taking $\mathcal{N}=3$ as an example.
In this case, the qubit chain contains $\mathcal{L}=4M-1$ qubits and the corresponding Hamiltonian is given by Eq.~(\ref{eq:H-N}) with $\mathcal{N}=3$ as \begin{eqnarray} H & = & \stackrel[m=1]{M-1}{\sum}\left(v\sigma_{A_{1,m}}^{+}\sigma_{B_m}^{-}+w\sigma_{A_{1,m+1}}^{+}\sigma_{B_m}^{-}+\textrm{H.c.}\right)\nonumber \\ & + & \stackrel[m=1]{M}{\sum}\left(g\sigma_{A_{1,m}}^{+}\sigma_{A_{2,m}}^{-}+g\sigma_{A_{2,m}}^{+}\sigma_{A_{3,m}}^{-}+\textrm{H.c.}\right).\label{eq:3-H} \end{eqnarray} The edge states are localized only on the $A$-type qubits (i.e., the $A_{1,m}$-, $A_{2,m}$-, and $A_{3,m}$-type qubits) and can be expanded as \begin{equation} \vert\varPsi_{\rm edge}\rangle=\stackrel[m=1]{M}{\sum}\lambda^{m}\left(a\sigma_{A_{1,m}}^{+}+b\sigma_{A_{2,m}}^{+}+c\sigma_{A_{3,m}}^{+}\right)\vert G\rangle.\label{eq:3-edgestates} \end{equation} Substituting this ansatz into the eigenvalue equation $H\vert\varPsi_{\rm edge}\rangle=E\vert\varPsi_{\rm edge}\rangle$, we have \begin{align} E & \left(a\sigma_{A_{1,m}}^{+}+b\sigma_{A_{2,m}}^{+}+c\sigma_{A_{3,m}}^{+}\right)\vert G\rangle\nonumber \\ & = g\left(b\sigma_{A_{1,m}}^{+}+a\sigma_{A_{2,m}}^{+}+c\sigma_{A_{2,m}}^{+}+b\sigma_{A_{3,m}}^{+}\right)\vert G\rangle\nonumber \\ & + a\left(v\sigma_{B_m}^{+}+w\lambda\sigma_{B_m}^{+}\right)\vert G\rangle. \end{align} Here, for this special case of 3-qubit entangled state transfer, we have assumed that the coupling constant $g_{1}$ between the $A_{1,m}$- and $A_{2,m}$-qubits equals the coupling constant $g_{2}$ between the $A_{2,m}$- and $A_{3,m}$-qubits, i.e., $g_{1}=g_{2}=g$. It is straightforward to obtain $\lambda=-v/w$, and the coefficients $a$, $b$, and $c$ satisfy the following eigenvalue equation \begin{equation}\label{eq:25} \left(\begin{array}{ccc} 0 & g & 0\\ g & 0 & g\\ 0 & g & 0 \end{array}\right)\left(\begin{array}{c} a\\ b\\ c \end{array}\right)=E\left(\begin{array}{c} a\\ b\\ c \end{array}\right).
\end{equation} Solving Eq.~(\ref{eq:25}), we obtain three eigenvalues $E_{\pm}=\pm\sqrt{2}g$ and $E_{0}=0$, corresponding to the eigenvectors $\left(1/2,\pm\sqrt{2}/2,1/2\right)$ and $\left(1/\sqrt{2},0,-1/\sqrt{2}\right)$, respectively. These eigenvalues are also the eigenenergies of the three edge states, which are constructed from the eigenvectors as $\vert\chi_{m,\pm}\rangle=\left(\vert\mathcal{A}_{1,m}\rangle\pm\sqrt{2}\vert\mathcal{A}_{2,m}\rangle+\vert\mathcal{A}_{3,m}\rangle\right)/2$ and $\vert\chi_{m,0}\rangle=\left(\vert\mathcal{A}_{1,m}\rangle-\vert\mathcal{A}_{3,m}\rangle\right)/\sqrt{2}$, associated with the $m$th unit cell. Following Eq.~(\ref{eq:3-edgestates}), these edge states can be written as \begin{align} \vert\varPsi_{\pm}\rangle & =\stackrel[m=1]{M}{\sum}\lambda^{m}\left(\frac{\sigma_{A_{1,m}}^{+}\pm\sqrt{2}\sigma_{A_{2,m}}^{+}+\sigma_{A_{3,m}}^{+}}{2}\right)\vert G\rangle, \nonumber \\ \vert\varPsi_{0}\rangle & =\stackrel[m=1]{M}{\sum}\lambda^{m}\left(\frac{\sigma_{A_{1,m}}^{+}-\sigma_{A_{3,m}}^{+}}{\sqrt{2}}\right)\vert G\rangle. \end{align} When $\left|\lambda\right|\ll1$, i.e., $v\ll w$, the edge states are mainly localized at the left end of the chain, and when $\left|\lambda\right|\gg1$, i.e., $v\gg w$, the edge states are mainly localized at the right end of the chain. In particular, when $v=0$, the edge states are $\vert L_{\pm}\rangle=\vert\chi_{1,\pm}\rangle\vert gg\cdots g\rangle$ and $\vert L_{0}\rangle=\vert\chi_{1,0}\rangle\vert gg\cdots g\rangle$, supported by the transport qubits at the left end of the chain. Conversely, when $w=0$, the edge states are $\vert R_{\pm}\rangle=\vert gg\cdots g\rangle\vert\chi_{M,\pm}\rangle$ and $\vert R_{0}\rangle=\vert gg\cdots g\rangle\vert\chi_{M,0}\rangle$, supported by the transport qubits at the right end of the chain.
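The $3\times3$ eigenproblem in Eq.~(\ref{eq:25}) is small enough to verify directly; a quick numerical check (in units $g=1$, with variable names of our own choosing):

```python
import numpy as np

g = 1.0
K = g * np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
E, V = np.linalg.eigh(K)                 # eigenvalues in ascending order
assert np.allclose(E, [-np.sqrt(2)*g, 0.0, np.sqrt(2)*g])

# Eigenvectors match (1/2, -/+sqrt(2)/2, 1/2) and (1/sqrt(2), 0, -1/sqrt(2))
# up to an overall sign.
chi_minus = np.array([0.5, -np.sqrt(2)/2, 0.5])
chi_zero = np.array([1/np.sqrt(2), 0.0, -1/np.sqrt(2)])
chi_plus = np.array([0.5, np.sqrt(2)/2, 0.5])
for col, chi in zip(V.T, [chi_minus, chi_zero, chi_plus]):
    assert abs(abs(col @ chi) - 1.0) < 1e-9
```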
\begin{figure}[htb] \begin{centering} \includegraphics[width=9cm]{Figure/Figure5} \par\end{centering} \caption{\label{Fig:3-qubit state transfer} $3$-qubit state transfer with the extended SSH$4$ chain. The parameters are taken as $J=5g$ and $g/2\pi=10\,$MHz. (a) The energy spectrum of a qubit chain with 6 unit cells when $\omega t$ changes from $0$ to $\pi$. Edge states can be transferred from the left edge (red) to the right edge (green) along the dashed energy levels. (b) Schematic of the eigenstate distribution of the SSH$4$ chain in a topologically nontrivial regime ($v\ll w$) at $\omega t=\pi/6$. Four bulk bands (blue dots) are separated by three edge states (red dots). (c) Target-state occupation probabilities $F(nT)=\left|\langle\varPsi_{F}\vert\varPsi\left(nT\right)\rangle\right|$ as a function of the number of evolution cycles $n$. Each point represents a complete adiabatic evolution from $t=0$ to $t=\pi /\omega$. The colors from red to green represent different lengths of the chain, with $M=2,4,6,8$. (d) Time evolution of the whole qubit-chain state with the initial state prepared in the three-qubit W state $\left(\vert \mathcal{A}_{1,1}\rangle+\vert \mathcal{A}_{2,1}\rangle+\vert \mathcal{A}_{3,1}\rangle\right)/\sqrt{3}$ when the total evolution time is $n=20$ evolution cycles. The colors from dark red to bright yellow represent the population distribution of the state on each qubit site. (e) The average fidelities of 3-qubit W-state transfer for different coupling disorder strengths, with $M=2,4,6,8$ for the respective curves. (f) The average fidelities of $3$-qubit W-state transfer for different execution-time disorders.} \end{figure} As discussed above, if we slowly change the coupling constants as $v=J\left[1-\cos\left(\omega t\right)\right]$ and $w=J\left[1+\cos\left(\omega t\right)\right]$, the edge states of the system will adiabatically evolve from the left end to the right end of the chain.
As shown in Fig.~\ref{Fig:3-qubit state transfer}(a), using a qubit chain with $M=6$ unit cells (i.e., $23$ qubits) as an example, we plot the variations of the instantaneous eigenenergies from $t=0$ to $\pi/\omega$. We find that there are four bands of bulk states, separated by three topological edge states. The bulk bands outside the edge states are not degenerate; however, each bulk band between the edge states is five-fold degenerate. This is further illustrated in Fig.~\ref{Fig:3-qubit state transfer}(b) by arranging the $23$ eigenstates by their eigenenergies (from lowest to highest) at the time $t=\pi/6\omega$. Figure~\ref{Fig:3-qubit state transfer}(a) also shows that if the system is prepared in one of the left edge states ($\vert L_{\pm}\rangle$ and $\vert L_{0}\rangle$) at $t=0$, then the state will evolve to the corresponding right edge state ($\vert R_{\pm}\rangle$ or $\vert R_{0}\rangle$) at $t_{f}=\pi/\omega$. More generally, if the initial state is prepared in an arbitrary 3-qubit entangled state at the left end of the chain as \begin{equation} \vert\varPsi_{\rm in}\rangle=\left(\alpha\sigma_{A_{1,1}}^{+}+\beta\sigma_{A_{2,1}}^{+}+\gamma\sigma_{A_{3,1}}^{+}\right)\vert G\rangle, \end{equation} which can be rewritten in terms of the left edge states as \begin{equation} \vert\varPsi_{\rm in}\rangle=\frac{\alpha+\sqrt{2}\beta+\gamma}{2}\vert L_{+}\rangle+\frac{\alpha-\sqrt{2}\beta+\gamma}{2}\vert L_{-}\rangle+\frac{\alpha-\gamma}{\sqrt{2}}\vert L_{0}\rangle, \end{equation} then the state will adiabatically evolve to \begin{eqnarray} \vert\varPsi\left(t\right)\rangle & = & \frac{\alpha+\sqrt{2}\beta+\gamma}{2}\vert\varPsi_{+}\left(t\right)\rangle e^{-i\int_{0}^{t}E_{+}dt'}\nonumber \\ & + & \frac{\alpha-\sqrt{2}\beta+\gamma}{2}\vert\varPsi_{-}\left(t\right)\rangle e^{-i\int_{0}^{t}E_{-}dt'}\nonumber \\ & + & \frac{\alpha-\gamma}{\sqrt{2}}\vert\varPsi_{0}\left(t\right)\rangle e^{-i\int_{0}^{t}E_{0}dt'},
\end{eqnarray} at the moment $t$. As we learned from the case of two-qubit state transfer, $E_{\pm}=\pm\sqrt{2}g$ and $E_{0}=0$ are still constant during the adiabatic protocol, thus the final state at $t_{f}=\pi/\omega$ is \begin{eqnarray} \left\vert\varPsi_{f}\right\rangle & = & \frac{\alpha+\sqrt{2}\beta+\gamma}{2}\vert R_{+}\rangle e^{-i2\pi\frac{g}{\sqrt{2}\omega}}\nonumber \\ & + & \frac{\alpha-\sqrt{2}\beta+\gamma}{2}\vert R_{-}\rangle e^{i2\pi\frac{g}{\sqrt{2}\omega}}\nonumber \\ & + & \frac{\alpha-\gamma}{\sqrt{2}}\vert R_{0}\rangle. \end{eqnarray} Again, let us choose the evolution time to be an exact integer multiple of the dynamical period $T=2\pi/(\sqrt{2}g)$, i.e., $t_{f}/T=g/(\sqrt{2}\omega)=n$ ($n\gg1$); the final state then becomes \begin{equation} \vert\varPsi_{F}\rangle=\left(\alpha\sigma_{A_{1,M}}^{+}+\beta\sigma_{A_{2,M}}^{+}+\gamma\sigma_{A_{3,M}}^{+}\right)\vert G\rangle. \end{equation} Clearly then, as the time $t$ changes from $0$ to $\pi/\omega$, an arbitrary $3$-qubit entangled state can be transported from the left end to the right end of the chain. As in the case of $2$-qubit state transfer, we choose different numbers of evolution cycles to examine the adiabaticity. In Fig.~\ref{Fig:3-qubit state transfer}(c), the target-state occupation probabilities $F(t)=\left|\langle\varPsi_{F}\vert\varPsi\left(t\right)\rangle\right|$ are plotted as a function of the number of evolution cycles when a W state is transferred from the left edge to the right one for different lengths of the chain. We find that, for a given number of qubits per unit cell, the number of evolution cycles needed to achieve a high fidelity increases with the length $M$ of the chain. For example, Fig.~\ref{Fig:3-qubit state transfer}(c) shows that $20$ evolution cycles are required to achieve unit fidelity when $M=8$, whereas $5$ evolution cycles are enough when $M=2$.
For the specific case shown in Fig.~\ref{Fig:3-qubit state transfer}(d), the W state is transferred from left to right with rapid oscillations within the execution time ($20$ evolution cycles). In Fig.~\ref{Fig:3-qubit state transfer}(e), we have also evaluated the average fidelities of the 3-qubit W-state transfer for different coupling disorder strengths $\xi$. There is also a plateau at $\xi\in\left[10^{-3}g,10^{-1}g\right]$ for each qubit chain. However, unlike the two-qubit transfer, these plateaus are pinned at different values of fidelity as the qubit number increases. We find that the average fidelity for all the qubit chains considered here is well above $96\%$ so long as the disorder strength satisfies $\xi<0.1g$. Meanwhile, the effect of the evolution-time disorder is presented in Fig.~\ref{Fig:3-qubit state transfer}(f). The average fidelity $\mathcal{F}$ is also good enough for state transfer as long as the disorder strength $\eta$ is less than $0.01T$. Figures~\ref{Fig:3-qubit state transfer}(e) and (f) clearly show that our proposal is also promising for transferring a 3-qubit state along a long qubit chain. \subsection{$\mathcal{N}$-qubit state transfer} \begin{figure}[h] \begin{centering} \includegraphics[width=9cm]{Figure/Figure6} \par\end{centering} \caption{\label{Fig:4 and 5 qubit} (a) Spectrum of the qubit chain for 4-qubit state transfer. Here, $M=5$ and $\mathcal{L}=24$. The red-dashed lines represent the edge states and the black-solid lines represent the bulk states. (b) Schematic of the eigenstate distribution corresponding to (a) in a topologically nontrivial regime ($v\ll w$) at $\omega t=\pi/6$. Four red dots denote the four edge states; the blue dots denote the bulk bands. Each line of four blue dots between two red dots denotes a $4$-fold degenerate bulk band. (c) Spectrum of the qubit chain for 5-qubit state transfer. Here, $M=5$ and $\mathcal{L}=29$.
(d) Schematic of the eigenstate distribution corresponding to (c) in a topologically nontrivial regime ($v\ll w$) at $\omega t=\pi/6$. Five red dots denote the five edge states; the blue dots denote the bulk bands. Each line of four blue dots between two red dots denotes a $4$-fold degenerate bulk band.} \end{figure} We have extended our protocol from 2-qubit state transfer to 3-qubit state transfer. Analogously, we can extend our proposal to the $\mathcal{N}$-qubit case. Similar to Eq.~(\ref{eq:psi}) for the two-qubit entangled state transfer, the edge states corresponding to the Hamiltonian in Eq.~(\ref{eq:H-N}) do not occupy the mediated qubits $B$ and can be expanded as \begin{equation} \vert\varPsi_{\rm edge}\rangle=\stackrel[m=1]{M}{\sum}\lambda^{m}\left(\chi_{1}\sigma_{A_{1,m}}^{+}+\cdots+\chi_{\mathcal{N}}\sigma_{A_{\mathcal{N},m}}^{+}\right)\vert G\rangle.\label{eq:20} \end{equation} Substituting Eq.~(\ref{eq:H-N}) and Eq.~(\ref{eq:20}) into the Schr\"odinger equation $H\vert\varPsi_{\rm edge}\rangle=E\vert\varPsi_{\rm edge}\rangle$, we obtain \begin{align} E & \left(\chi_{1}\sigma_{A_{1,m}}^{+}+\chi_{2}\sigma_{A_{2,m}}^{+}+\cdots+\chi_{\mathcal{N}}\sigma_{A_{\mathcal{N},m}}^{+}\right)\vert G\rangle\nonumber \\ & = \left(g_{1}\chi_{2}\sigma_{A_{1,m}}^{+}+\cdots+g_{\mathcal{N}-1}\chi_{\mathcal{N}}\sigma_{A_{\mathcal{N}-1,m}}^{+}\right)\vert G\rangle\nonumber \\ & + \left(g_{1}\chi_{1}\sigma_{A_{2,m}}^{+}+\cdots+g_{\mathcal{N}-1}\chi_{\mathcal{N}-1}\sigma_{A_{\mathcal{N},m}}^{+}\right)\vert G\rangle\nonumber \\ & + \chi_{1}\left(v\sigma_{B_m}^{+}+w\lambda\sigma_{B_m}^{+}\right)\vert G\rangle.\label{eq:21} \end{align} From Eq.~(\ref{eq:21}), it is straightforward to obtain $\lambda=-v/w$, and the coefficients $\chi_{1}$, $\chi_{2},\cdots,\chi_{\mathcal{N}}$ satisfy the following eigenvalue equation \begin{equation} \left(\begin{array}{ccccc} 0 & g_{1} & 0 & 0 & 0\\ g_{1} & 0 & g_{2} & 0 & 0\\ 0 & g_{2} & 0 & \ddots & 0\\ 0 & 0 & \ddots & \ddots & g_{\mathcal{N}-1}\\ 0
& 0 & 0 & g_{\mathcal{N}-1} & 0 \end{array}\right)\left(\begin{array}{c} \chi_{1}\\ \chi_{2}\\ \chi_{3}\\ \vdots\\ \chi_{\mathcal{N}} \end{array}\right)=E\left(\begin{array}{c} \chi_{1}\\ \chi_{2}\\ \chi_{3}\\ \vdots\\ \chi_{\mathcal{N}} \end{array}\right).\label{eq:N-eigen} \end{equation} Solving this eigen-equation, we can obtain the eigenenergies of the edge states as $E_{1},\cdots, E_{\mathcal{N}}$. The explicit eigenvectors can also be derived, from which the edge states are obtained. Similar to Eq.~(\ref{eq:vw}), when the time $t$ is adiabatically changed from $0$ to $\pi/\omega$, the coupling constant $v$ ($w$) is changed from $2J$ ($0$) to $0$ ($2J$), and the edge states initially at the left end of the chain can be adiabatically transferred to the right end. We know that an arbitrary $\mathcal{N}$-qubit entangled state can be represented by these edge states, thus an $\mathcal{N}$-qubit entangled state can also be perfectly transferred from the left end to the right end of the chain if the total evolution time $t_{f}=\pi/\omega$ is an exact multiple of all the dynamical periods corresponding to all eigenenergies $E_{1},\cdots, E_{\mathcal{N}}$. That is, there must be a least common period of all dynamical periods $T_{1}=2\pi/E_{1}, \cdots, T_{\mathcal{N}}=2\pi/E_{\mathcal{N}}$ corresponding to the $\mathcal{N}$ edge states. This common period can only exist for specific sets of parameters $\left\{ g_{1},\cdots,g_{\mathcal{N}-1}\right\}$. The generalization thus seems straightforward; however, the main problem is how to engineer the proper series of parameters $\left\{ g_{1},\cdots,g_{\mathcal{N}-1}\right\}$ to get a common period. It is highly challenging to find a general solution for any $\mathcal{N}$, because the solution varies case by case. However, for given qubit states, we can always engineer the coupling constants between the transport qubits such that these states can be transferred through the extended SSH chain.
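To see how candidate coupling sets can be screened for a common period, one can diagonalize the matrix in Eq.~(\ref{eq:N-eigen}) numerically. The following sketch (assuming only \texttt{numpy}) checks two example coupling sets whose spectra are commensurate, integer-valued in units of $g$:

```python
import numpy as np

def edge_energies(gs):
    """Eigenenergies E_1, ..., E_N of the tridiagonal matrix in Eq. (N-eigen)
    with off-diagonal couplings gs = [g_1, ..., g_{N-1}] (in units of g)."""
    H = np.diag(gs, 1) + np.diag(gs, -1)
    return np.linalg.eigvalsh(H)   # eigenvalues in ascending order

# Two coupling sets with commensurate spectra:
E4 = edge_energies([np.sqrt(2.0), 1.0, np.sqrt(2.0)])   # eigenenergies {-2, -1, 1, 2} g
E5 = edge_energies([1.0, 2.0, 2.0, 1.0])                # eigenenergies {-3, -1, 0, 1, 3} g

# Since all nonzero E_j are integers in units of g, the dynamical periods
# 2*pi/|E_j| share the common value T = 2*pi/g, which can serve as the
# evolution cycle of the adiabatic protocol.
```

Scanning over candidate sets $\{g_1,\dots,g_{\mathcal{N}-1}\}$ in this way reduces the engineering problem to a search for commensurate spectra.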
Below, two examples for $\mathcal{N}=4$ and $\mathcal{N}=5$ are shown in Fig.~\ref{Fig:4 and 5 qubit}. At $\mathcal{N}=4$, the parameters $\left\{ g_{1},g_{2},g_{3}\right\} $ can be set as $\left\{ \sqrt{2}g,g,\sqrt{2}g\right\} $ and the corresponding eigenenergies are $\left\{ 2g,g,-g,-2g\right\} $. In Figs.~\ref{Fig:4 and 5 qubit}(a) and (b), we show the spectrum of the qubit chain for $4$-qubit state transfer with $24$ qubits in total. Five bulk bands are separated by four edge states, and the three bulk bands between edge states are $4$-fold degenerate. The least common oscillation period of these edge states is $2\pi/g$, which can be taken as the ideal time evolution cycle. At $\mathcal{N}=5$, the parameters $\left\{ g_{1},g_{2},g_{3},g_{4}\right\} $ can be set as $\left\{ g,2g,2g,g\right\} $ and the corresponding eigenenergies are $\left\{ 3g,g,0,-g,-3g\right\} $. The spectrum of the qubit chain in this case is plotted in Figs.~\ref{Fig:4 and 5 qubit}(c) and (d) for a chain of $29$ qubits. Six bulk bands are separated by five edge states, and the four bulk bands between edge states are $4$-fold degenerate. The least common period is also $2\pi/g$. Similarly, for the case of $\mathcal{N}$-qubit state transfer, we can find a proper set of parameters to get a common period as the time evolution cycle. By use of such time evolution cycles, the involved edge states will not suffer from mutual dynamical phase differences at the end of our adiabatic protocol, and hence an arbitrary $\mathcal{N}$-qubit entangled state can be transferred from the left end to the right end. \section{Discussions} \subsection{General discussions} We have proposed a QST approach along an extended SSH qubit chain. However, several issues should be further discussed. First, as shown in Eq.~\eqref{eq:H}, we consider only nearest-neighbor couplings, and on-site potentials are not included.
This is because the on-site potentials only result in a global dynamical phase if all the qubits are tuned to resonate with each other, and thus this global phase can be dropped. Second, we note that the gap between the edge states and the bulk states decreases with increasing length of the qubit chain. This indicates that the adiabatic condition becomes demanding for very large systems. However, as shown in Fig.~\ref{Fig:3-qubit state transfer}(c), for an extended SSH4 chain consisting of $8$ unit cells, the adiabatic condition is still well met for an execution time of only 20 evolution cycles. When the length of the chain becomes much larger, one possible solution is hinted at in Ref.~\citep{tan2020high}. That is, our QST protocol, realized by a one-step adiabatic evolution, can be decomposed into a multi-step process to achieve better performance. To verify the topological protection of our QST protocol, one may compare our approach with QST protocols independent of topology. Using mirror symmetry, one can think of an obvious non-topological QST protocol in a qubit chain as follows. At the beginning, the leftmost qubit is decoupled from the rest of the chain, and at the end the couplings of the qubit chain are slowly changed to the mirror-reversed form, i.e., the rightmost qubit is decoupled from the other qubits. With this adiabatic process, the quantum state initially prepared in the leftmost qubit can be transferred from the left end to the right end. Ref.~\citep{palaiodimopoulos2021fast} has shown that such a topology-unrelated QST protocol is not robust to disorder. Hence, our topology-based protocol does have an advantage, accounting for the fairly good fidelity presented above in the presence of disorder.
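The contrast between the two regimes can already be illustrated on the standard two-band SSH chain in the single-excitation sector (a minimal sketch assuming only \texttt{numpy}; this is the textbook SSH model, not the extended chain of Eq.~\eqref{eq:H}): for $v\ll w$ an open chain hosts mid-gap edge modes whose amplitude decays by a factor $v/w$ per unit cell, consistent with the ansatz $\lambda=-v/w$ above, while for $v\gg w$ the spectrum is gapped with no mid-gap states.

```python
import numpy as np

def ssh_chain(v, w, n_sites=40):
    """Open SSH chain in the single-excitation sector with alternating hoppings v, w."""
    hop = [v if i % 2 == 0 else w for i in range(n_sites - 1)]
    H = np.diag(hop, 1) + np.diag(hop, -1)
    return np.linalg.eigh(H)

# Topologically nontrivial regime (v << w): two modes near E = 0 inside the gap.
E_top, V_top = ssh_chain(0.2, 1.0)
pair = V_top[:, np.argsort(np.abs(E_top))[:2]]     # the two states closest to E = 0
edge = pair[:, np.argmax(np.abs(pair[0, :]))]      # the one weighted on the left end
decay = abs(edge[2] / edge[0])                     # amplitude ratio per unit cell ~ v/w

# Topologically trivial regime (v >> w): gapped spectrum, min |E| ~ |v - w|.
E_triv, _ = ssh_chain(1.0, 0.2)
```

The exponentially localized mid-gap modes are the states that survive moderate coupling disorder, which is the mechanism behind the robustness observed above.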
\subsection{Discussions for implementations using superconducting qubit circuits} In principle, our proposal can be implemented in various platforms, e.g., cold atoms~\citep{du2016experimental}, trapped ions~\citep{harty2014high}, or coupled waveguides~\citep{shen2020acoustic,chen2021landau}. However, with the significant development in recent years, superconducting qubit circuits, e.g., transmon or Xmon qubits~\citep{chen2014qubit}, seem to be a more promising platform~\citep{PhysRevA.76.042319}. Also, a fast and high-fidelity transfer of an arbitrary single-qubit state in a chain of superconducting qubits has been achieved in experiment recently~\citep{PhysRevApplied.10.054009}. The good scalability and flexible tunability of the couplings make our extended SSH chain easy to realize with superconducting qubit circuits. In particular, the coupling strengths can be tuned from the topologically trivial to the nontrivial regime. Thus, topological and non-topological phenomena can be studied in one quantum circuit. Moreover, the coherence time is very important for realizing our proposal. As shown in Appendix~\ref{sec:Xmon-qubut-chain}, the coupling coefficient $g$ can be chosen as $g/2\pi=10$~MHz by using the parameters of current superconducting qubit circuits. Therefore, for two-qubit state transfer, one evolution cycle can be $T=2\pi/g=0.1\mu s$. The total evolution time is 10 evolution cycles, i.e., $t_{f}=1\mu s$. For three-qubit state transfer, one evolution cycle can be $T=2\pi/(\sqrt{2}g)=0.0707\mu s$. The total evolution time is 20 evolution cycles, i.e., $t_{f}=1.414\mu s$. In superconducting qubit circuits, the coherence time of a single qubit is about $10\sim100\mu s$~\citep{PhysRevLett.107.240501,novikov2016raman}, which is much longer than the adiabatic time in our proposal.
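The timescales quoted above follow from simple arithmetic (a quick check; only the value $g/2\pi=10$~MHz is taken from the estimates referred to in Appendix~\ref{sec:Xmon-qubut-chain}):

```python
import math

g = 2.0 * math.pi * 10e6        # coupling g for g/(2 pi) = 10 MHz, in rad/s

T2 = 2.0 * math.pi / g                      # two-qubit cycle:   0.1 us
T3 = 2.0 * math.pi / (math.sqrt(2.0) * g)   # three-qubit cycle: ~0.0707 us

t_f2 = 10 * T2    # total two-qubit transfer time:   1 us
t_f3 = 20 * T3    # total three-qubit transfer time: ~1.414 us
```

Both totals sit two orders of magnitude below the quoted single-qubit coherence times.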
Meanwhile, steady topological edge states have already been observed in superconducting qubit circuits, which can last more than $1\mu s$~\citep{cai2019observation}. Therefore, the decoherence effect is not expected to be troublesome in our protocol. To make sure that the adiabatic evolution time for our proposal is indeed an integer multiple of the evolution cycle, one needs to measure the exact value of $g$, which is equivalent to determining the eigenenergies of the edge states. This can be achieved by measuring the reflection spectrum of a weak probe field through a waveguide coupled to the extended SSH chain (see Appendix~\ref{sec:Energy-spectrum-with}). The reflection peaks of the input weak signal can reveal the energy spectrum of the qubit chain with appropriate parameters, and the value of the coupling $g$ can be obtained from the energy shift between the edge states. \section{Conclusion \label{sec:discussions-and-conclution}} In summary, we have proposed an experimentally feasible approach for transferring arbitrary entangled states through an extended SSH chain. The entangled states are encoded in the edge states of a class of extended SSH chains, and then they are transported via an adiabatic protocol. Due to the topological protection of the edge states, our protocol is robust against the temporal noise caused by imperfections in the control field. We have numerically confirmed this robustness against two kinds of disorder, i.e., the coupling strength disorder and the execution time disorder. Compared with most contemporary studies realizing QST in qubit chains, this work advances a general scenario for achieving QST of an arbitrary $\mathcal{N}$-qubit entangled state. Our proposal can be readily realized using superconducting qubit circuits, and the parameters required in our protocol are estimated according to recent experiments.
Given the feasibility and tunability of the proposed protocol, our idea can also be extended for realizing QST in two-dimensional quantum networks. \section{ACKNOWLEDGMENTS} We thank Wei Nie for helpful discussions. J.G. acknowledges funding support by the Singapore NRF Grant No. NRF-NRFI2017-04 (WBS No. R-144-000-378-281). Y.X.L. is supported by the Key-Area Research and Development Program of GuangDong Province under Grant No. 2018B030326001, the National Basic Research Program (973) of China under Grant No. 2017YFA0304304, and the NSFC under Grant No. 11874037.
\section{Introduction} In this paper, we investigate the numerical approximation to the following time-dependent problem: given a bounded Lipschitz polygonal domain $\Omega$, a final time ${\mathsf T}>0$, an initial value $v\in L^2(\Omega)$ (the space of complex-valued square-integrable functions) and a forcing function $f\in L^\infty(0,{\mathsf T};L^2(\Omega))$, we seek $u:[0,{\mathsf T}]\times \Omega\rightarrow{\mathbb{R}}$ satisfying \beq\label{e:p} \left\{ \bal \partial_t^{\gamma} u+L^{\beta} u &= f,\qquad \text{in } (0,{\mathsf T}]\times\Omega,\\ u&=0,\qquad \text{on } (0,{\mathsf T}]\times \partial\Omega,\\ u&=v,\qquad \text{on } \{0\}\times\Omega. \\ \eal\right . \eeq Here the fractional derivative in time $\partial^{\gamma}_t$ with ${\gamma}\in (0,1)$ is defined by the left-sided Caputo fractional derivative of order ${\gamma}$, \beq\label{e:cfd} \partial^{\gamma}_t u(t):=\frac{1}{\Gamma(1-{\gamma})}\int_0^t \frac{1}{(t-r)^{\gamma}}\frac{\partial u(r)}{\partial r}\, dr. \eeq Note that \eqref{e:cfd} holds for smooth $u$ and extends by continuity to a bounded operator on $H^{\gamma}(0,{\mathsf T})\cap C[0,{\mathsf T}]$ satisfying $$\partial^{\gamma}_t u={}^R\partial^{\gamma}_t (u-u(0)),$$ where ${}^R\partial^{\gamma}_t$ denotes the Riemann-Liouville fractional derivative. The differential operator $L$ appearing in \eqref{e:p} is an unbounded operator associated with a Hermitian, coercive sesquilinear form $d(\cdot,\cdot)$ on $H^1_0(\Omega)\times H^1_0(\Omega)$. For ${\beta}\in(0,1)$, the fractional differential operator $L^{\beta}$ is defined by the following eigenfunction expansion \beq L^{\beta} v :=\sum_{j=1}^\infty \lambda_j^{\beta} (v,\psi_j)\psi_j, \label{ldef} \eeq where $(\cdot,\cdot)$ denotes the $L^2(\Omega)$ inner product and $\{\psi_j\}$ is an $L^2(\Omega)$-orthonormal basis of eigenfunctions of $L$ with eigenvalues $\{\lambda_j\}$.
The above definition is valid for $v\in D(L^{\beta})$, where $D(L^{\beta})$ denotes the set of functions $v\in L^2(\Omega)$ such that $L^{\beta} v\in L^2(\Omega)$. A weak formulation of \eqref{e:p} reads: find $u\in L^2(0,{\mathsf T};D(L^{{\beta}/2}))\cap C([0,{\mathsf T}];L^2(\Omega))$ and $\partial_t^{\gamma} u \in L^2(0,{\mathsf T};D(L^{-{\beta}/2}))$ satisfying \beq\label{e:wp} \left\{ \bal \langle\partial_t^{\gamma} u, \phi\rangle+A(u,\phi)&=(f,\phi),\quad \text{for all }\phi\in D(L^{{\beta}/2})\text{ and for a.e. }t\in(0,{\mathsf T}],\\ u(0)&=v . \eal\right . \eeq Here $A(u,\phi):=(L^{{\beta}/2}u,L^{{\beta}/2}\phi)$, and $\langle\cdot,\cdot\rangle$ denotes the duality pairing between $D(L^{-\beta/2})$ and $D(L^{\beta/2})$. As a consequence of \cite[Theorem 6]{NOS16}, the above problem has a unique solution, which can be explicitly written as \beq\label{e:sol} u(t):=u(t,\cdot)=E(t)v+\int_0^t W({r})f(t-{r})\, d{r} . \eeq Here, for $w\in L^2(\Omega)$, \beq\label{e:sol1} E(t)w:=e_{{\gamma},1}(-t^{\gamma} L^{\beta})w=\sum_{j=1}^\infty e_{{\gamma},1}(-t^{\gamma} \lambda_j^{\beta})(w,\psi_j)\psi_j \eeq and \beq\label{e:sol2} W(t)w:=t^{{\gamma}-1}e_{{\gamma},{\gamma}}(-t^{\gamma} L^{\beta})w=\sum_{j=1}^\infty t^{{\gamma}-1}e_{{\gamma},{\gamma}}(-t^{\gamma} \lambda_j^{\beta})(w,\psi_j)\psi_j , \eeq with $e_{\gamma,\mu}(z)$ denoting the Mittag-Leffler function (see the definition \eqref{e:mlf}). We also refer to Theorems 2.1 and 2.2 of \cite{SY11} for a detailed proof of the above formula when ${\beta}=1$, noting that the argument is similar for any ${\beta}\in(0,1)$. A major difficulty in approximating solutions of \eqref{e:wp} is the time stepping in the presence of the fractional time derivative. The L1 time stepping method was developed in \cite{LX07} and applied to the case ${\beta}=1$.
Letting ${\tau}$ be the time step, it was shown in \cite{LX07} that the L1 scheme gives a convergence rate of $O({\tau}^{2-\gamma})$ provided that the solution is twice continuously differentiable in time. For the homogeneous problem ($f=0$), the L1 scheme is guaranteed to yield first order convergence assuming that the initial data $v$ is in $L^2(\Omega)$ (see \cite{JLZ16a}). See also \cite{JLZ16b} and the references therein for other time discretization methods and error analyses. We also refer to \cite{thesis} for the backward time stepping scheme for the case ${\gamma}=1$. The numerical approximation to the solution \eqref{e:sol} has been studied recently in \cite{NOS16}. The main difficulty is to discretize the fractional differential operators $\partial_t^{\gamma}$ and $L^{\beta}$ simultaneously. In \cite{NOS15}, the fractional-in-space operator $L^{\beta}$ was approximated as a Dirichlet-to-Neumann mapping via a Caffarelli-Silvestre extension problem \cite{CS07} on $\Omega\times(0,\infty)$. In \cite{NOS16}, Nochetto {\it et al.} analyze an L1 time stepping scheme for \eqref{e:wp} in the context of the Caffarelli-Silvestre extension problem and obtain a rate of convergence in time of $O({\tau}^\theta)$ with $\theta\in(0,1/2)$ (see Theorem 3.11 in \cite{NOS16}). The goal of this paper is to approximate the solution of \eqref{e:wp} directly based on the solution formula \eqref{e:sol}. Our approximation technique and its numerical analysis rely on the Dunford-Taylor integral representation of the solution formula \eqref{e:sol}. Such a numerical method has been developed for the classical parabolic problem \cite{BLP17,thesis} (i.e., the case ${\gamma}=1$) and the stationary problem \cite{BP15}; see also \cite{BP16} when the differential operator $L$ is regularly accretive \cite{kato1961}. The outline of the remainder of the paper is as follows. Section~\ref{s:preli} provides some notation and preliminaries related to \eqref{e:p}.
In Section~\ref{preli:FEM}, we review some classical results from the finite element discretization and provide a key result (Theorem~\ref{l:semi-sp}) instrumental in deriving error estimates for semi-discrete schemes. In Section~\ref{s:h}, we study the semi-discrete approximation $E_h(t) v:=e_{{\gamma},1}(-t^{\gamma} L_h^{\beta}){\pi_h} v$ to $E(t)v$. Here $L_h$ is the Galerkin finite element approximation of $L$ in the continuous piecewise linear finite element space $\mathbb V_h$ and ${\pi_h}$ denotes the $L^2$ projection onto $\mathbb V_h$. We subsequently apply a sinc quadrature scheme to the Dunford-Taylor integral representation of the semi-discrete solution. For the sinc approximation, we choose the hyperbolic contour $z(y)=b(\cosh(y)+i\sinh(y))$ for $y\in {\mathbb{R}}$, with $b\in (0,\lambda_1/\sqrt{2})$. Here $\lambda_1$ denotes the smallest eigenvalue of $L$. Theorem~\ref{l:semi-sp} directly gives an error estimate for the semi-discrete approximation in fractional Sobolev spaces of order $s$, with $s\in [0,1]$. As expected, the rate of convergence depends on the smoothness of the solution which, in turn, depends on the smoothness of the initial data and the regularity pickup associated with the spatial exponent $\beta$. Theorem~\ref{l:hsincquad} proves that for a quadrature of $2N+1$ points with quadrature spacing $k=cN^{-1/2}$ and $c$ depending on ${\beta}$, the sinc quadrature error is bounded by $Ct^{-{\gamma}}\exp(-c\sqrt{N})$, where the constant $C$ is independent of $t$ and $N$. In Section~\ref{s:nh}, we focus on the approximation scheme for the non-homogeneous forcing problem.
The approximation in time is based on a pseudo-midpoint quadrature applied to the convolution in \eqref{e:sol}, i.e., given a partition $\{t_j\}$ on $[0,t]$, \beq \int_{t_{j-1}}^{t_j} W_h({r}){\pi_h} f(t-{r})\, d{r}\approx \bigg (\int_{t_{j-1}}^{t_j} W_h({r})\, d{r}\bigg)\ {\pi_h} f(t-{t_{j-\frac12}}) , \label{w-semi} \eeq where $W_h(t)$ is the semi-discrete approximation to $W(t)$. Assuming that the forcing function $f$ is in $H^2(0,t;L^2)$, we show in Theorem~\ref{t:geo} that the error in the approximation \eqref{w-semi} in time is $O({\mathcal N}^{-2})$ under a geometric partition refined towards $t=0$ (with $C({\gamma}){\mathcal N}\log_2 {\mathcal N}$ subintervals). We then apply an exponentially convergent sinc quadrature scheme to approximate the Dunford-Taylor integral representation of the discrete operator $\int_{t_{j-1}}^{t_j} W_h({r})\, d{r}$. Theorem~\ref{t:sincquad2} shows that the sinc quadrature leads to an additional error which is $O(\log({\mathcal N})\exp(-\sqrt{cN}))$. Some technical proofs are given in Appendices~\ref{a:lemma1} and \ref{a:lemma2}. Throughout this paper, $c$ and $C$ denote generic constants. We shall sometimes explicitly indicate their dependence when appropriate. \section{Notation and Preliminaries}\label{s:preli} \subsection{Notation} Let $\Omega \subset \mathbb R^d$ be a bounded polygonal domain with Lipschitz boundary. Denote by $L^2(\Omega)$ and $H^1(\Omega)$ (or in short $L^2$ and $H^1$) the standard Sobolev spaces of complex-valued functions equipped with the norms $$\|u\|:=\|u\|_{L^2}:=\bigg(\int_\Omega |u|^2\, dx\bigg)^{1/2} \quad\text{and}\quad\|u\|_{H^1}:=(\|u\|^2_{L^2}+ \| | \nabla u | \|_{L^2}^2)^{1/2} .$$ The $L^2$ scalar product is denoted $(\cdot,\cdot)$: $$ (v,w) := \int_{\Omega} v(x) \overline w(x)\, dx. $$ We also denote by $H^1_0:=H^1_0(\Omega) \subset H^1(\Omega)$ the closed subspace of $H^1$ consisting of functions with vanishing traces.
Thanks to the Poincar\'e inequality, we will use the semi-norm $|\cdot|_{H^1}:=\| | \nabla (\cdot) |\|$ as the norm on $H^1_0$. The dual space of $H^1_0$ is denoted $H^{-1}:=H^{-1}(\Omega)$ and is equipped with the dual norm: $$ \|F\|_{H^{-1}}:=\sup_{\theta\in H^1_0}\frac{\langle F,\theta\rangle}{|\theta|_{H^1}}, $$ where $\langle\cdot,\cdot\rangle$ stands for the duality pairing between $H^{-1}$ and $H^1_0$. The norm of an operator $A: B_1 \rightarrow B_2$ between two Banach spaces $(B_1, \|\cdot\|_{B_1})$ and $(B_2, \|\cdot\|_{B_2})$ is given by $$ \| A \|_{B_1 \to B_2} = \sup_{v \in B_1, \ v \not = 0} \frac{\|A v\|_{B_2}}{\| v \|_{B_1}} $$ and in short $\| A\|$ when $B_1 = B_2 = L^2$. \subsection{The Unbounded Operator $L$}\label{ss:unbounded} Let us assume that $d(\cdot,\cdot)$ is a Hermitian, coercive sesquilinear form on ${H^1_0}\times{H^1_0}$. We denote by $c_0$ and $c_1$ the two positive constants such that $$ c_0 | v |^2_{H^1}\leq d(v,v); \qquad |d(v,w)| \leq c_1 | v|_{H^1} | w |_{H^1},{\quad\hbox{for all }} v,w\in H^1_0. $$ Furthermore, we let $T:H^{-1} \rightarrow H^1_0$ be the solution operator, i.e., for $f \in H^{-1}$, $Tf:=w \in H^1_0$, where $w$ is the unique solution (thanks to the Lax-Milgram lemma) of \beq d(w,\theta)= \langle f,\theta \rangle ,{\quad\hbox{for all }} \theta\in H^1_0 . \label{laxm} \eeq Following Section 2 of \cite{kato1961}, see also Section~2.3 in \cite{BP16}, we define $L$ to be the inverse of $T|_{L^2}$ and set $D(L):=\text{Range}(T|_{L^2})$. \subsection{The Dotted Spaces}\label{ss:dotted} The operator $T$ is compact and symmetric on $L^2$. The spectral theory of compact self-adjoint operators guarantees the existence of an $L^2$-orthonormal basis of eigenfunctions $\{\psi_j\}_{j=1}^\infty$ with non-increasing real eigenvalues $\mu_1\geq\mu_2 \geq \mu_3 \geq ... >0$. For every positive integer $j$, $\psi_j$ is also an eigenfunction of $L$ with corresponding eigenvalue $\lambda_j=1/\mu_j$.
The decay of the coefficients $(v,\psi_j)$ in the representation $$ v = \sum_{j=1}^\infty (v,\psi_j) \psi_j $$ characterizes the dotted spaces ${\dot{H}}^s$. Indeed, for $s\geq 0$, we set $$ {\dot{H}}^s:= \left\lbrace v\in L^2\ \text{s.t.} \ \sum_{j=1}^\infty \lambda_j^s |(v,\psi_j)|^2<\infty \right\rbrace. $$ On ${\dot{H}}^s$, we consider the natural norm $$ \|v\|_{{\dot{H}}^s} := \bigg(\sum_{j=1}^\infty \lambda_j^s |(v,\psi_j)|^2\bigg)^{1/2} . $$ We also denote by ${\dot{H}}^{-s}$ the dual space of ${\dot{H}}^s$ for $s\in [0,1]$. It is known that (see for instance \cite{BP16}) $$ {\dot{H}}^{-s}= \left\{F\in H^{-1} \ \text{s.t.} \ \|F\|_{{\dot{H}}^{-s}}:= \bigg(\sum_{j=1}^\infty \lambda_j^{-s}|\langle F,\psi_j\rangle|^{2}\bigg)^{1/2}<\infty\right \}. $$ Note that we identify each $f \in L^2$ with the functional $\langle f,\cdot\rangle :=(f,\cdot)\in H^{-1}$. \subsection{Fractional Powers of Elliptic Operators} Let $L$ be defined from a Hermitian, coercive sesquilinear form on ${H^1_0}\times{H^1_0}$ as described in Section~\ref{ss:unbounded}. For ${\beta}\in (0,1)$, the fractional power of $L$ is given by \beq\label{spe} L^{\beta} v := \sum_{j=1}^\infty \lambda_j^{\beta} (v,\psi_j)\psi_j, \qquad \forall v \in D(L^{\beta}):={\dot{H}}^{2{\beta}}. \eeq In addition, we define the associated sesquilinear form $A: {\dot{H}}^{\beta} \times {\dot{H}}^{\beta} \rightarrow \mathbb C$ by \beq\label{Asf} A(v,w):= (L^{{\beta}/2} v,L^{{\beta}/2} w) =\sum_{j=1}^\infty \lambda_j^{{\beta}} (v,\psi_j) \overline{(w,\psi_j)}, \eeq which satisfies $A(v,v)=\|v\|_{{\dot{H}}^{{\beta}}}^2$. \subsection{Intermediate Spaces and the Regularity Assumption} As we saw above, the dotted spaces rely on the eigenfunction decomposition of a compact operator. They are natural spaces in which to consider fractional powers of operators but are less adequate for describing standard smoothness properties.
The latter are better characterized by the intermediate spaces ${\mathbb{H}}^s$ defined for $s\in[-1,2]$ by real interpolation \begin{equation}\label{e:interpolation_spaces} {\mathbb{H}}^s:=\left\{\bal &[{H^1_0}, H^1_0\cap H^2]_{s-1,2},& 1\leq s\leq 2,\\ &[L^2,H^1_0]_{s,2},& 0 \leq s \leq 1,\\ &[H^{-1},L^2]_{s+1,2},& -1\leq s \leq 0.\eal \right. \end{equation} In order to link the two sets of functional spaces introduced above, we assume the following elliptic regularity condition: \begin{assumption}[Elliptic Regularity]\label{regularity} There exists $\alpha\in(0,1]$ so that \begin{enumerate}[(a)] \item $T$ is a bounded map of ${\mathbb{H}}^{-1+\alpha}$ into ${\mathbb{H}}^{1+\alpha}$; \item $L$ is a bounded operator from ${\mathbb{H}}^{1+\alpha}$ to ${\mathbb{H}}^{-1+\alpha}$. \end{enumerate} \end{assumption} Under the above assumption we have the following equivalence property: \begin{proposition}[Equivalence, Proposition 4.1 in \cite{BP15}]\label{p:equiv} Suppose that Assumption~\ref{regularity} holds for $\alpha \in (0,1]$. Then the spaces ${\mathbb{H}}^s$ and $\dot{H}^s$ coincide for $s\in[-1,1+\alpha]$ with equivalent norms. \end{proposition} Notice that Assumption~\ref{regularity} is quite standard and holds for a large class of sesquilinear forms $d(\cdot,\cdot)$. An important example is the diffusion process given by $$ d(u,v) = \int_\Omega a(x) \nabla u \cdot \nabla v\, dx $$ defined on $H^1_0 \times H^1_0$, where $a\in L^\infty(\Omega)$ satisfies $$ 0<c_0 \leq a(x) \leq c_1 \qquad \textrm{for a.e.}\quad x\in \Omega. $$ The $\alpha$ in Assumption~\ref{regularity} is related to the domain $\Omega$ and the smoothness of the coefficients. For example, if $\Omega$ is convex and $a$ is smooth, Assumption~\ref{regularity} holds for any $\alpha$ in $(0,1]$. In contrast, for the two-dimensional L-shaped domain and smooth $a$, Assumption~\ref{regularity} only holds for $\alpha\in(0,2/3)$.
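To make the spectral notions above concrete, consider the model case $a\equiv 1$ in one dimension, i.e., $\Omega=(0,1)$ with $d(u,v)=\int_0^1 u'\,\overline{v}'\,dx$, where $\psi_j(x)=\sqrt{2}\sin(j\pi x)$ and $\lambda_j=(j\pi)^2$. The following sketch (assuming only \texttt{numpy}; the test function $v(x)=x(1-x)$ is our choice for illustration) checks Parseval's identity $\|v\|^2=\sum_j|(v,\psi_j)|^2=1/30$ and, consistently with the norm equivalence of Proposition~\ref{p:equiv} at $s=1$, that $\|v\|_{\dot{H}^1}^2$ reproduces $|v|_{H^1}^2=1/3$:

```python
import numpy as np

# Model case: Omega = (0,1), a(x) = 1, so L = -d^2/dx^2 with Dirichlet conditions,
# psi_j(x) = sqrt(2) sin(j pi x) and lam_j = (j pi)^2.
# For v(x) = x(1-x): (v, psi_j) = 4*sqrt(2)/(j pi)^3 for odd j and 0 for even j.
j = np.arange(1, 200001, 2)                   # odd modes only
vj = 4.0 * np.sqrt(2.0) / (j * np.pi) ** 3    # coefficients (v, psi_j)

l2_norm_sq = np.sum(vj ** 2)                        # -> ||v||^2 = int x^2(1-x)^2 dx = 1/30
dotH1_norm_sq = np.sum((j * np.pi) ** 2 * vj ** 2)  # -> |v|_{H^1}^2 = int (1-2x)^2 dx = 1/3
```

The rapid decay of the coefficients ($\propto j^{-3}$) reflects the smoothness of $v$, in line with the characterization of the dotted spaces above.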
\subsection{The Mittag-Leffler Function} The Mittag-Leffler functions are instrumental in representing the solutions of fractional time evolution problems, see \eqref{e:sol1} and \eqref{e:sol2}. We briefly introduce them together with the properties used in our arguments. We refer to Section 1.8 in \cite{KST06} for more details. For ${\gamma}>0$ and $\mu\in{\mathbb{R}}$, the two-parameter Mittag-Leffler function ${e_{\tpow,\mu}}(z)$ is defined by \beq\label{e:mlf} {e_{\tpow,\mu}}(z):=\sum_{k=0}^\infty\frac{z^k}{\Gamma(k{\gamma}+\mu)},\qquad z\in{\mathbb{C}} . \eeq These are entire functions (analytic in ${\mathbb{C}}$). We note that \cite[equation (3.1.42)]{KST06} (see also \cite{MR2053894}) implies that $u(t)={e_{\tpow,1}}(-\lambda t^{\gamma})$ for $t,\lambda>0$ satisfies $$ \partial^{\gamma}_t u +\lambda u=0, $$ i.e., is a solution of the scalar homogeneous version of the first equation of \eqref{e:p}. For this reason, the function ${e_{\tpow,1}}(-\lambda t^{\gamma})$ will play a major role in our analysis. We also note that \begin{equation}\label{e:fd2} \partial_t e_{{\gamma},1}(-t^{\gamma}\lambda^{\beta})=-\lambda^{\beta} t^{{\gamma}-1}e_{{\gamma},{\gamma}}(-t^{\gamma}\lambda^{\beta}) \end{equation} and \begin{equation}\label{e:fd2_2} \partial_t e_{{\gamma},{\gamma}}(-t^{\gamma}\lambda^{\beta})=\lambda^{\beta} t^{{\gamma}-1}\left((\gamma-1)e_{{\gamma},2{\gamma}}(-t^{\gamma}\lambda^{\beta}) -e_{{\gamma},2{\gamma}-1}(-t^{\gamma}\lambda^{\beta})\right). \end{equation} Recall that $\partial^\gamma_t$ always denotes the left-sided Caputo fractional derivative \eqref{e:cfd}. Another critical property for our study is their decay when $|z|\to \infty$ in a positive sector: For $0<{\gamma}<1$, $\mu\in{\mathbb{R}}$ and $\frac{{\gamma}\pi}{2}<\zeta< {\gamma}\pi$, there is a constant $C$ only depending on ${\gamma},\mu,\zeta$ so that \beq\label{ml-bound-scalar} |{e_{\tpow,\mu}}(z)|\leq\frac{C}{1+|z|},\quad\hbox{for }\zeta\leq |\arg(z)|\leq\pi .
\eeq \subsection{Solution via superposition}\label{s:weak_hom} The solution $u$ of \eqref{e:wp} is the superposition of two solutions: the homogeneous solution $f=0$ and the non-homogeneous solution $v=0$, \begin{equation}\label{e:full_sol} u(t) = E(t)v+\int_{0}^t W(s) f(t-s)~ds, \end{equation} where $E(t)$ is defined by \eqref{e:sol1} and $W(t)$ by \eqref{e:sol2}. Following \cite{SY11}, we have that $u \in C^0([0,T];L^2)$ and in particular $ u(0)=v$. We discuss the approximation of each term in the decomposition separately. For the homogeneous problem ($f=0$), we use the Dunford-Taylor integral representation of $u(t)= E(t)v$, \beq\label{DFint} u(t) = \frac 1 {2\pi i} \int_{\mathcal{C}} e_{{\gamma},1}({-t^{\gamma} z^{\beta}}) R_z(L)v\, dz. \eeq Here $R_z(L):=(zI-L)^{-1}$ and $z^{{\beta}}:= e^{{{\beta}} \ln z}$ with the logarithm defined with branch cut along the negative real axis. Given $r_0\in (0,\lambda_1)$, the contour ${\mathcal{C}}$ consists of three segments (see Figure~\ref{f:anal}): \begin{equation}\label{e:contour} \begin{aligned} {\mathcal{C}}_1&:=\left\lbrace z(r):=re^{-i\pi/4} \text{ with }r \text{ real going from }+\infty \text{ to } r_0\right\rbrace \text{ followed by } \\ {\mathcal{C}}_2&:=\left\lbrace z(\theta):=r_0 e^{i\theta} \text{ with }\theta \text{ going from }-\pi/4 \text{ to } \pi/4\right\rbrace \text{ followed by } \\ {\mathcal{C}}_3&:=\left\lbrace z(r):=re^{i\pi/4} \text{ with } r \text{ real going from } r_0 \text{ to }+\infty \right\rbrace. \end{aligned} \end{equation} \begin{figure}[H] \centering \includegraphics[scale=.4]{analc-eps-converted-to.pdf} \caption{The contour ${\mathcal{C}}$ given by \eqref{e:contour}.} \label{f:anal} \end{figure} We use an analogous representation for $W(s)$, namely, \beq \label{Wsint} W(s)v = \frac {s^{\gamma-1}} {2\pi i} \int_{\mathcal{C}} e_{{\gamma},{\gamma}}({-s^{\gamma} z^{\beta}}) R_z(L)v\, dz. 
\eeq The justification of \eqref{DFint} and \eqref{Wsint} is a consequence of \eqref{ml-bound-scalar} and standard Dunford-Taylor integral techniques, see \cite{yoshida,MR1192782} for additional details. \section{Finite Element Approximations}\label{preli:FEM} \subsection{Subdivisions and Finite Element Spaces} Let $\{\mathcal{T}_h\}_{h>0}$ be a sequence of globally shape regular and quasi-uniform conforming subdivisions of $\Omega$ made of simplexes, i.e., there are positive constants $\rho$ and $c$ independent of $h$ such that if for $\uptau \in \mathcal T_h$, $h_\uptau$ denotes the diameter of $\uptau$ and $r_\uptau$ denotes the radius of the largest ball which can be inscribed in $\uptau$, then \blal\label{shape-regular} &\hbox{(shape regular)}\qquad \max_{\uptau \in \mathcal T_h} \frac{h_\uptau}{r_\uptau} \leq c,\quad \hbox{and}\\ \label{ineq:quasi} &\hbox{(quasi-uniform)}\qquad \max_{\uptau\in\mathcal{T}_h}h_\uptau \leq \rho\min_{\uptau\in\mathcal{T}_h} h_\uptau. \elal Fix $h>0$ and denote by $\mathbb V_h\subset {H^1_0}$ the space of continuous piecewise linear finite element functions with respect to $\mathcal{T}_h$ and by ${M_h}$ the dimension of $\mathbb V_h$. The $L^2$ projection onto $\mathbb V_h$ is denoted by ${\pi_h}: L^2 \rightarrow \mathbb V_h$ and satisfies $$ ({\pi_h} f,\phi_h)=(f,\phi_h),\qquad{\quad\hbox{for all }} \phi_h\in\mathbb V_h . $$ For $s\in [0,1]$ and $\sigma>0$ satisfying $s+\sigma\leq 2$, Lemma~5.1 in \cite{BP16} guarantees the existence of a constant $c(s,\sigma)$ independent of $h$ such that \begin{equation}\label{pih-approx} \|(I-\pi_h)f\|_{{\mathbb{H}}^s} \leq c(s,\sigma) h^{\sigma}\|f\|_{{\mathbb{H}}^{s+\sigma}} . \end{equation} In addition, for any $s\in [0,1]$, there exists a constant $c$ such that \begin{equation}\label{pih-bound} \|\pi_h f\|_{{\mathbb{H}}^{s}}\leq c \|f\|_{{\mathbb{H}}^{s}}.
\end{equation} The case $s=0$ follows from the definition of the $L^2$-projection, the case $s=1$ is treated in \cite{BY14,BX91}, and the general case follows by interpolation. \subsection{Discrete Operators}\label{ss:discrete_op} The finite element analogues of the operators $T$ and $L$ given in Section~\ref{ss:unbounded} are defined as follows: for $F\in H^{-1}$, $T_h:H^{-1}\rightarrow\mathbb V_h$ is defined by $$d(T_h F,\phi_h)=\langle F,\phi_h \rangle, {\quad\hbox{for all }} \phi_h\in \mathbb V_h,$$ and $L_h:\mathbb V_h \rightarrow \mathbb V_h$ is given by $$ (L_h v_h,\phi_h)=d(v_h,\phi_h), {\quad\hbox{for all }} \phi_h\in \mathbb V_h, $$ so that $T_h|_{\mathbb V_h}=L_h^{-1}$. We now recall the following finite element error estimates. \begin{proposition}[Lemma 6.1 in \cite{BP16}]\label{prop:T_error} Let Assumption~\ref{regularity}~(a) hold for some $\alpha \in (0,1]$. Let $s\in [0,\frac 1 2]$ and set $\alpha^*:=\frac 1 2 (\alpha+\min(\alpha,1-2s))$. There is a constant $C$ independent of $h$ such that \beq\label{duality} \|T-T_h\|_{\dot{H}^{\alpha-1}\to \dot{H}^{2s}}\leq C h^{2\alpha^*}. \eeq \end{proposition} Similar to the operator $T$, $T_h|_{\mathbb V_h}$ has positive eigenvalues $\{\mu_{j,h}\}_{j=1}^{M_h}$ with corresponding $L^2$-orthonormal eigenfunctions $\{\psi_{j,h}\}_{j=1}^{M_h}$. The eigenvalues of $L_h$ are denoted by $\lambda_{j,h}:=\mu_{j,h}^{-1}$ for $j=1,2,\ldots,{M_h}$. The discrete fractional operator $L_h^{\beta}\ :\ \mathbb V_h \rightarrow \mathbb V_h$ is then given by $$ L_h^{\beta} v_h:=\sum_{j=1}^{M_h} \lambda_{j,h}^{\beta} (v_h,\psi_{j,h})\psi_{j,h}.$$ Its associated sesquilinear form reads \begin{equation}\label{e:Ah} A_h(v_h,w_h):=(L_h^{\beta/2} v_h, L_h^{\beta/2} w_h) = \sum_{j=1}^{M_h} \lambda_{j,h}^{\beta} (v_h,\psi_{j,h})\overline{(w_h,\psi_{j,h})}, \end{equation} for all $v_h, w_h\in \mathbb V_h$.
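As a sanity check of these definitions, the following sketch (our construction for illustration, assuming \texttt{numpy}/\texttt{scipy}) assembles $L_h$ for the 1D model form $d(u,v)=\int_0^1 u'\,\overline{v}'\,dx$ on a uniform mesh, computes the discrete eigenpairs $(\lambda_{j,h},\psi_{j,h})$ from the generalized eigenproblem with the mass matrix, and applies $L_h^\beta$ spectrally; $\lambda_{1,h}$ should be close to $\lambda_1=\pi^2$:

```python
import numpy as np
from scipy.linalg import eigh

# P1 finite elements on (0,1), uniform mesh of size h = 1/(M+1), Dirichlet conditions.
M = 99
h = 1.0 / (M + 1)
ones = np.ones(M - 1)
K = (2.0 * np.eye(M) - np.diag(ones, 1) - np.diag(ones, -1)) / h          # stiffness
Mm = h * (4.0 * np.eye(M) + np.diag(ones, 1) + np.diag(ones, -1)) / 6.0   # consistent mass

# Discrete eigenpairs: K psi = lam Mm psi; eigh returns psi with psi^T Mm psi = I,
# i.e. the discrete L^2-orthonormal eigenfunctions psi_{j,h}.
lam, psi = eigh(K, Mm)

beta = 0.5
def L_h_beta(v):
    """Spectral action of L_h^beta on a coefficient vector v, following (spe)."""
    coeff = psi.T @ (Mm @ v)            # discrete inner products (v, psi_{j,h})
    return psi @ (lam ** beta * coeff)
```

In practice one avoids full diagonalization; the Dunford-Taylor representation below is precisely what replaces it for large $M_h$.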
For any $s\in [0,1]$, the dotted spaces described in Section~\ref{ss:dotted} also have discrete counterparts $\dot{H}_h^s$, which are characterized by their norms \begin{equation}\label{e:dotted_discrete_norm} \|v_h\|_{\dot{H}_h^s}:=\Bigg(\sum_{j=1}^{M_h} \lambda_{j,h}^s|(v_h,\psi_{j,h})|^2\Bigg)^{1/2},\qquad \text{for } v_h\in \mathbb V_h . \end{equation} On $\mathbb V_h$, the two dotted norms are equivalent: For $s\in[0,1]$, there exists a constant $c$ independent of $h$ such that for all $v_h\in \mathbb V_h$, \begin{equation}\label{ineq:H_h_H} \frac{1}{c}\|v_h\|_{\dot{H}_h^s}\leq\|v_h\|_{\dot{H}^s}\leq c\|v_h\|_{\dot{H}_h^s} \end{equation} (see Appendix A.2 in \cite{BZ00}). From the property $\max_j {\lambda_{j,h}} \leq ch^{-2}$ (cf. \cite[equation (2.8)]{BZ00}), we also deduce an inverse inequality in discrete dotted spaces: For $s,\sigma\ge 0$, we have \beq\label{inverse} \|v_h\|_{{\dot{H}}_h^{s+\sigma}}\leq c h^{-\sigma}\|v_h\|_{{\dot{H}}_h^s},\qquad{\hbox{for }} v_h\in \mathbb V_h . \eeq \subsection{The Semi-discrete Scheme in Space} We propose a Galerkin finite element method for the space discretization of \eqref{e:sol}. It consists of finding $u_h(t)\in \mathbb V_h$ satisfying \begin{equation}\label{e:weakfe} \left\lbrace \begin{aligned} (\partial^{\gamma}_t u_{h}(t),\phi_h)+A_h(u_h(t),\phi_h)&=(f,\phi_h), \qquad \text{for } t\in (0,{\mathsf T}] \hbox{ and } \phi_h\in \mathbb V_h\text{, and}\\ u_h(0)&=\pi_h v, \end{aligned} \right. \end{equation} where the bilinear form $A_h(\cdot,\cdot)$ is defined by \eqref{e:Ah} and $\pi_h$ is the $L^2$-projection onto $\mathbb V_h$.
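The bound $\max_j\lambda_{j,h}\leq ch^{-2}$, and with it the inverse inequality \eqref{inverse}, is easy to observe numerically. The sketch below assumes the standard closed-form eigenvalues of $L_h$ for piecewise linear elements on a uniform one-dimensional mesh (an assumption of the sketch, not part of the text here), for which $c=12$ works.

```python
import numpy as np

def p1_eigenvalues(h):
    """Closed-form eigenvalues of L_h for continuous piecewise linear elements
    on a uniform mesh of (0,1) with spacing h (an assumption of this sketch)."""
    n = round(1.0 / h) - 1
    theta = np.arange(1, n + 1) * np.pi * h
    return 6.0 * (1.0 - np.cos(theta)) / (h**2 * (2.0 + np.cos(theta)))

h = 1.0 / 64
lam = p1_eigenvalues(h)
# max_j lambda_{j,h} <= c h^{-2}: in this model c = 12 works for every h
assert h**2 * lam.max() <= 12.0

# inverse inequality with random spectral coefficients d_j = (v_h, psi_{j,h})
rng = np.random.default_rng(0)
d = rng.standard_normal(lam.size)
s, sigma = 0.5, 0.5
norm_s = np.sqrt(np.sum(lam**s * d**2))
norm_s_sigma = np.sqrt(np.sum(lam**(s + sigma) * d**2))
assert norm_s_sigma <= np.sqrt(12.0) * h**(-sigma) * norm_s
```

The inequality follows from $\|v_h\|_{\dot H_h^{s+\sigma}}^2\leq(\max_j\lambda_{j,h})^{\sigma}\|v_h\|_{\dot H_h^{s}}^2$, which is exactly what the last assertion checks.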
Similarly to the continuous case (see discussion in Section~\ref{s:weak_hom}), the solution of the above discrete problem is given by \beq\label{e:dsol1} u_h(t)=\underbrace{e_{{\gamma},1}(-t^{\gamma} L_h^{\beta})}_{=:E_h(t)}{\pi_h} v +\int_0^t \underbrace{s^{\gamma-1} e_{\gamma,\gamma}(-s^\gamma L_h^{\beta})}_{=:W_h(s)} {\pi_h} f(t-s)\, ds \eeq where \beq\label{e:fesol-dt} e_{\gamma,\mu}(-t^\gamma L_h^\beta) = \frac{1}{2\pi i}\int_{{\mathcal{C}}} e_{{\gamma},\mu}(-t^{\gamma} z^{\beta}) R_z(L_h)\, dz \eeq and ${\mathcal{C}}$ is as in \eqref{e:contour}. \subsection{A semi-discrete estimate} The purpose of this section (Theorem~\ref{l:semi-sp}) is to obtain estimates for \begin{equation}\label{e:to_prove_semi} \| e_{\gamma,\mu}(-t^\gamma L^\beta)v - e_{\gamma,\mu}(-t^\gamma L_h^\beta){\pi_h} v\|_{{\dot{H}}^{2s}}, \end{equation} which, in view of representations \eqref{e:full_sol} and \eqref{e:dsol1}, will be instrumental in deriving error estimates for the space discretization. The following lemma assesses the discrepancy between the resolvent $R_z(L)=(z-L)^{-1}$ and its finite element approximation. Its somewhat technical proof is postponed to Appendix~\ref{a:lemma1}. \begin{lemma}[Space Discretization of the Resolvent]\label{l:residue} Assume that Assumption~\ref{regularity} holds for some $\alpha\in(0,1]$. Let $s \in [0,\frac 12]$ and $\delta \in [0, (1+\alpha)/2]$. Then, there exists a positive constant $C$ independent of $h$ such that for all $\tilde \alpha$ with $2\tilde \alpha \in (0,\alpha+\min(\alpha,1-2s)]$, $z\in{\mathcal{C}}$ and $v\in{\dot{H}}^{2\delta}$ \beq\label{rz-bound} \|(\pi_h R_z(L)-R_z(L_h)\pi_h)v\|_{{\dot{H}}^{2s}}\leq C |z|^{-1+\tilde \alpha+s-\delta} h^{2\tilde \alpha} \|v\|_{{\dot{H}}^{2\delta}}. \eeq \end{lemma} We are now in a position to prove the error estimate for the semi-discrete approximation in space.
Before doing so, for $s \in [0,1/2]$ and $0<\epsilon \ll 1$, we set \begin{equation}\label{e:astar} \alpha^* := \alpha/ 2+ \min\{\alpha/2,1/2-s,{\beta}+\delta-s-\alpha/2-\epsilon/2\}. \end{equation} We assume that \beq\label{delta} \delta \geq \max\{0,s-\beta+\epsilon/2\}. \eeq The assumption \eqref{delta} is sufficient to guarantee that the solution $ e_{\gamma,\mu} (-t^\gamma L^\beta) v$ is in ${\dot{H}}^{2s+\epsilon}$ and we have the following theorem. \begin{theorem}[Space Discretization of $e_{\gamma,\mu}(-t^{\gamma} L^{\beta})$]\label{l:semi-sp} Let $0<\gamma<1$, $s\in [0,1/2]$, $\mu \in \mathbb R$ and $\alpha^*$ be as in \eqref{e:astar}. Assume that Assumption~\ref{regularity} holds for $\alpha\in (0,1]$, and that $\delta $ satisfies \eqref{delta}. Then there exists a constant $C$ such that $$ \|e_{\gamma,\mu}(-t^{\gamma} L^\beta)- e_{\gamma,\mu}(-t^{\gamma} L_h^\beta)\pi_h \|_{{\dot{H}}^{2\delta} \to {\dot{H}}^{2s}}\leq D(t)h^{2{\alpha^*}}, $$ where \begin{equation}\label{e:cd} D(t):=\left \{\bal C: & \qquad\hbox{if } \delta>{\alpha^*}+s, \\ C\max(1,\ln(t^{-\gamma})):&\qquad\hbox{if } \delta={\alpha^*}+s,\\ Ct^{-{{\gamma}({\alpha^*}+s-\delta)}/{{\beta}}}:& \qquad\hbox{if } \delta<{\alpha^*}+s.\\ \eal \right. \end{equation} \end{theorem} \begin{proof} Without loss of generality we assume that $2\delta \leq 1+{\alpha^*}$ as the case $2\delta > 1+{\alpha^*}$ follows from the continuous embedding $$ {\dot{H}}^{2\delta}\subset {\dot{H}}^{1+{\alpha^*}}. $$ Also, we use the notation $E^{\gamma,\mu}(t) := e_{\gamma,\mu}(-t^{\gamma} L^\beta)$, $E^{\gamma,\mu}_h(t):=e_{\gamma,\mu}(-t^{\gamma} L_h^\beta)$ and decompose the error into two terms: \begin{equation}\label{e:decomp} \begin{split} \|(E^{\gamma,\mu}(t)-E^{\gamma,\mu}_h(t)\pi_h)v\|_{{\dot{H}}^{2s}} & \leq \|(I-\pi_h)E^{\gamma,\mu}(t)v\|_{{\dot{H}}^{2s}}\\ & + \|\pi_h (E^{\gamma,\mu}(t)-E^{\gamma,\mu}_h(t)\pi_h)v\|_{{\dot{H}}^{2s}}.
\end{split} \end{equation} $\boxed{1}$ For the first term on the right hand side above, we note that the assumptions on the parameters imply that ${\alpha^*}+s\leq (\alpha+1)/2\leq 1$ and so the approximation property \eqref{pih-approx} of $\pi_h$ yields \begin{equation}\label{e:thm_proj} \|(I-\pi_h)E^{\gamma,\mu}(t)v\|_{{\dot{H}}^{2s}}\leq Ch^{2{\alpha^*}}\|E^{\gamma,\mu}(t) v\|_{\dot{H}^{2({\alpha^*}+s)}} . \end{equation} We estimate $\|E^{\gamma,\mu}(t)v\|_{\dot{H}^{2({\alpha^*}+s)}}$ by expanding $v$ in Fourier series with respect to the eigenfunctions of $L$ (see Section~\ref{ss:dotted}) and denote by $c_j:=(v,{\psi}_j)$ the Fourier coefficients of $v$ so that $$ E^{\gamma,\mu}(t)v = \sum_{j=1}^{\infty}e_{\gamma,\mu}(-t^{\gamma}\lambda_j^{\beta}) c_j \psi_j. $$ Two cases need to be considered: Case 1: $\delta\ge{\alpha^*}+s$. Here, the regularity of the initial condition is large enough to directly use the bound $|e_{\gamma,\mu}(-t^\gamma \lambda_j^\beta)|\leq C$ deduced from \eqref{ml-bound-scalar} to get $$\bal \|E^{\gamma,\mu}(t)v\|^2_{{\dot{H}}^{2{\alpha^*}+2s}} &=\sum_{j=1}^{\infty}\lambda_j^{2{\alpha^*}+2s}|e_{\gamma,\mu}(-t^{\gamma}\lambda_j^{\beta})|^2 |c_j|^2\\ &\leq C\lambda_1^{2({\alpha^*}+s-\delta)}\sum_{j=1}^{\infty}\lambda_j^{2\delta}|c_j|^2=C\lambda_1^{2({\alpha^*}+s-\delta)} \|v\|_{\dot{H}^{2\delta}}^2. \eal$$ Case 2: $\delta < {\alpha^*}+s$. In this case, we need to rely on the parabolic regularity for $t>0$. We apply \eqref{ml-bound-scalar} again and obtain $$\bal \|E^{\gamma,\mu}(t)v\|_{{\dot{H}}^{2{\alpha^*}+2s}}^2 &=t^{-2{\gamma}({\alpha^*}+s-\delta)/{{\beta}}}\sum_{j=1}^\infty \lambda_j^{2\delta} \left| (t^{\gamma}\lambda_j^{\beta})^{({\alpha^*}+s-\delta)/{\beta}}e_{\gamma,\mu}(-t^{\gamma} \lambda_j^{\beta})\right|^2 |c_j|^2\\ &\leq C t^{-2{\gamma}({\alpha^*}+s-\delta)/{{\beta}}}\sum_{j=1}^\infty \lambda_j^{2\delta} \left|\frac{(t^{\gamma}\lambda_j^{\beta})^{({\alpha^*}+s-\delta)/{\beta}}}{1+t^{\gamma}\lambda_j^{\beta}}\right|^2 |c_j|^2.
\eal $$ Noting that $0<{\alpha^*}+s-\delta<{\beta}$, Young's inequality implies $$ \left|\frac{(t^{\gamma}\lambda_j^{\beta})^{({\alpha^*}+s-\delta)/{\beta}}}{1+t^{\gamma}\lambda_j^{\beta}}\right|\leq 1. $$ Hence, $$ \|E^{\gamma,\mu}(t)v\|_{{\dot{H}}^{2{\alpha^*}+2s}}^2 \leq Ct^{-2{\gamma}({\alpha^*}+s-\delta)/{{\beta}}} \|v\|^2_{\dot{H}^{2\delta}}. $$ Returning to \eqref{e:thm_proj} after gathering the estimates obtained for the two different cases, we obtain \begin{equation}\label{e:thm_proj2} \|(I-\pi_h)E^{\gamma,\mu}(t)v\|_{{\dot{H}}^{2s}}\leq D(t) h^{2{\alpha^*}}\|v\|_{\dot{H}^{2\delta}}. \end{equation} $\boxed{2}$ We return to \eqref{e:decomp} and now estimate $\|\pi_h (E^{\gamma,\mu}(t)-E^{\gamma,\mu}_h(t)\pi_h)v\|_{{\dot{H}}^{2s}}$. This time we use the integral representations and the resolvent approximation (Lemma~\ref{l:residue} applied with $\tilde \alpha = {\alpha^*}$) to get $$ \bal \|\pi_h(E^{\gamma,\mu}(t)-E^{\gamma,\mu}_h(t)\pi_h)v\|_{{\dot{H}}^{2s}}&\leq C\int_{{\mathcal{C}}}|e_{\gamma,\mu}(-t^{\gamma} z^{\beta})| \|(\pi_h R_z(L)-R_z(L_h)\pi_h)v\|_{{\dot{H}}^{2s}} \, d|z|\\ &\leq Ch^{2{\alpha^*}} \| v \|_{{\dot{H}}^{2\delta}}\int_{{\mathcal{C}}}|e_{\gamma,\mu}(-t^{\gamma} z^{\beta})| |z|^{-1+{\alpha^*}+s-\delta} \, d|z|. \eal $$ Furthermore, the decay estimate \eqref{ml-bound-scalar} of the Mittag-Leffler function evaluated at $-t^{\gamma} z^{\beta}$ for $z \in {\mathcal{C}}$ yields \begin{equation}\label{e:thm2} \|\pi_h(E^{\gamma,\mu}(t)-E^{\gamma,\mu}_h(t)\pi_h)v\|_{{\dot{H}}^{2s}} \leq Ch^{2{\alpha^*}} \| v \|_{{\dot{H}}^{2\delta}} \int_{{\mathcal{C}}}\frac{ |z|^{-1+{\alpha^*}+s-\delta}}{1+t^{\gamma} |z|^{\beta}} \, d|z|.
\end{equation} $\boxed{3}$ To prove \begin{equation}\label{e:thm3} \|\pi_h(E^{\gamma,\mu}(t)-E^{\gamma,\mu}_h(t)\pi_h)v\|_{{\dot{H}}^{2s}} \leq D(t) h^{2{\alpha^*}} \| v \|_{{\dot{H}}^{2\delta}}, \end{equation} it remains to show that \begin{equation}\label{e:to_show_D} \int_{{\mathcal{C}}}\frac{ |z|^{-1+{\alpha^*}+s-\delta}}{1+t^{\gamma} |z|^{\beta}} \, d|z| \leq D(t). \end{equation} This is done separately on each part of the contour ${\mathcal{C}}$, see \eqref{e:contour}. On ${\mathcal{C}}_2$, $|z|= r_0$ so that we directly have $$ \int_{{\mathcal{C}}_2}\frac{|z|^{-1+{\alpha^*}+s-\delta}}{1+ t^{\gamma} |z|^{\beta}} \, d|z| \leq \int_{{\mathcal{C}}_2}|z|^{-1+{\alpha^*}+s-\delta} \, d|z|\leq C. $$ On ${\mathcal{C}}_1\cup{\mathcal{C}}_3$, we use the parametrization $z(r) = re^{{\pm}i\pi/4}$ to write $$ \int_{{\mathcal{C}}_1\cup{\mathcal{C}}_3}\frac{|z|^{-1+{\alpha^*}+s-\delta}}{1+ t^{\gamma} |z|^{\beta}}\, d|z| =2\int_{r_0}^\infty \frac{r^{-1+{\alpha^*}+s-\delta}}{1+ t^{\gamma} r^{\beta}}\, dr .$$ When $\delta>{\alpha^*}+s$, we have enough decay to directly obtain $$ \int_{{\mathcal{C}}_1\cup{\mathcal{C}}_3}\frac{|z|^{-1+{\alpha^*}+s-\delta}}{1+ t^{\gamma} |z|^{\beta}}\, d|z| \leq 2\int_{r_0}^\infty r^{-1+{\alpha^*}+s-\delta}\, dr\leq C . $$ When $\delta \leq {\alpha^*} + s$, we perform the change of variables $y:=t^{\gamma} |z|^{\beta}$ and obtain \beq\label{I-bound} \int_{{\mathcal{C}}_1\cup{\mathcal{C}}_3}\frac{|z|^{-1+{\alpha^*}+s-\delta}}{1+ t^{\gamma} |z|^{\beta}}\, d|z| = \frac 2 \beta t^{-\frac{{\gamma}({\alpha^*}+s-\delta)}{{\beta}}}\int_{t^{\gamma} r_0^{\beta}}^\infty \frac{y^{{({\alpha^*}+s-\delta)}/{{\beta}}-1}}{1+y}\, dy.
\eeq Thus, \begin{equation*} \begin{split} &\int_{{\mathcal{C}}_1\cup{\mathcal{C}}_3}\frac{|z|^{-1+{\alpha^*}+s-\delta}}{1+ t^{\gamma} |z|^{\beta}}\, d|z| \\ & \qquad \leq \frac 2 \beta t^{-{{\gamma}({\alpha^*}+s-\delta)}/{{\beta}}}\left(\int_ {t^{\gamma} r_0^{\beta}}^1 y^{\frac{{\alpha^*}+s-\delta}{{\beta}}-1}\, dy+\int_1^\infty y^{\frac{{\alpha^*}+s-\delta}{{\beta}}-2}\, dy\right) \\ & \qquad \leq C \left\lbrace \begin{array}{ll} t^{-{{\gamma}({\alpha^*}+s-\delta)}/{{\beta}}}, &\quad \textrm{when }\delta < {\alpha^*} +s, \\ \max(1,\ln(t^{-\gamma})), & \quad \textrm{when } \delta = {\alpha^*}+s. \end{array} \right. \end{split} \end{equation*} $\boxed{4}$ Gathering the estimates for each part of the contour yields \eqref{e:to_show_D} and thus \eqref{e:thm3}, which, combined with \eqref{e:thm_proj2}, yields the desired result. \end{proof} \section{Approximation of the Homogeneous Problem}\label{s:h} This section presents and analyzes the proposed approximation algorithm in the case $f=0$. We note that the bound on the space discretization error for the finite element approximation is contained in Theorem~\ref{l:semi-sp}. In this section, we define a sinc quadrature approximation to $E_h(t)$ and analyze the resulting quadrature error. \subsection{The Sinc Quadrature Approximation}\label{sinc} We discuss the approximation of the contour integral in $$ u_h(t) = e_{\gamma,1}(-t^\gamma L_h^\beta) \pi_h v = \frac 1 {2\pi i} \int_{{\mathcal{C}}} e_{\gamma,1}(-t^\gamma z^\beta) R_z(L_h) \pi_h v \,dz. $$ The first step involves replacing the contour ${\mathcal{C}}$ by one more suitable for application of the sinc quadrature technique. For $y\in {\mathbb{C}}$, we set \beq z(y) =b(\cosh{ y}+i\sinh{ y}) \label{hc} \eeq and, for $0<b<\lambda_1/\sqrt{2}$, consider the hyperbolic contour ${{\cC'}}:=\{z(y)\ : \ y\in {\mathbb{R}}\}$.
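The restriction $0<b<\lambda_1/\sqrt{2}$ keeps ${\cC'}$ away from the spectrum: an elementary computation shows that the distance from a real point $\lambda\geq\lambda_1$ to the hyperbola $\Re(z)^2-\Im(z)^2=b^2$ equals $\sqrt{\lambda^2/2-b^2}$, which is positive precisely when $b<\lambda/\sqrt{2}$. The short sketch below (with illustrative values of $b$ and $\lambda$, not taken from the text) confirms both facts numerically.

```python
import numpy as np

b, lam = 1.0, 10.0          # illustrative values satisfying b < lam / sqrt(2)
y = np.linspace(-4.0, 4.0, 400001)
z = b * (np.cosh(y) + 1j * np.sinh(y))

# the contour is the right branch of the hyperbola Re(z)^2 - Im(z)^2 = b^2
assert np.allclose(z.real**2 - z.imag**2, b**2)

# distance from lam (on the real axis) to the contour: sqrt(lam^2/2 - b^2)
dist = np.abs(z - lam).min()
assert abs(dist - np.sqrt(lam**2 / 2 - b**2)) < 1e-6   # here sqrt(49) = 7
```

The minimum is attained at $\cosh y=\lambda/(2b)$, so a moderate range of $y$ suffices for the check.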
Using this contour, we have $$ e_{\gamma,1}(-t^\gamma L_h^\beta) g_h = \frac{1}{2\pi i} \int_{-\infty}^\infty e_{{\gamma},1}({-t^{\gamma} z(y)^{\beta}}) z'(y)[(z(y)I-L_h)^{-1}g_h] \, dy, \quad\text{for } g_h\in \mathbb V_h . $$ Given a positive integer $N$ and a quadrature spacing $k>0$, we set $y_j := j k$ for $j = -N,...,N$ and define the sinc quadrature approximation of $e_{\gamma,1}(-t^\gamma L_h^\beta) g_h$ by \beq\label{e:sincapp} Q_{h,k}^N(t)g_h:=\frac{k}{2\pi i}\sum_{j=-N}^{N} e_{{\gamma},1}(-t^{\gamma} z(y_j)^{\beta})z'(y_j) [(z(y_j)I-L_h)^{-1}g_h] . \eeq \subsection{Quadrature Error} We now discuss the quadrature error. Expanding $(E_h(t)-Q_{h,k}^N(t))g_h$ in terms of the discrete eigenfunctions $\{\psi_{j,h}\}_{j=1}^{M_h}$ (see Section~\ref{ss:discrete_op}), for $s\geq 0$ we have \begin{equation}\label{errorl} \bal \| (E_h(t)-Q_{h,k}^N(t))g_h\|^2_{{\dot{H}}_h^{2s}} &= (2\pi )^{-2}\sum_{j=1}^{M_h} \lambda_{j,h}^{2s} |{\mathcal E}(\lambda_{j,h},t)|^2 |(g_h,\psi_{j,h})|^2\\ & \leq (2\pi)^{-2} \|g_h\|^2_{{\dot{H}}^{2s}_h} \max_{j=1,\ldots,{M_h}} |{\mathcal E}(\lambda_{j,h},t)|^2, \eal \end{equation} where \beq\label{e:quaderr} {\mathcal E}({\lambda},t):= \int_{-\infty}^{\infty} g_{\lambda}(y,t)\, dy - k\sum_{j=-N}^{N}g_{\lambda}(j k,t) \eeq and \begin{equation}\label{eq:lamdafun} g_\lambda(y,t):= e_{{\gamma},1}(-t^{\gamma} z(y)^{\beta})z'(y) (z(y)-\lambda)^{-1}. \end{equation} The function $g_\lambda(y,t)$ is well defined for $t>0$, $\lambda \geq \lambda_1$, $y\in {\mathbb{C}}$ with $z(y)\neq \lambda$ and $z(y)$ not on the branch cut for the logarithm. Following \cite{LB92}, we show that when $k=c/\sqrt{N}$ for some constant $c$, the quantity ${\mathcal E}(\lambda,t)$ tends to $0$ as $N\to\infty$, uniformly with respect to $\lambda \geq\lambda_1$. Moreover, the convergence rate is $O(\exp{(-c\sqrt{N})})$. We then use this estimate in \eqref{errorl} to deduce an exponential rate of convergence for the sinc quadrature scheme \eqref{e:sincapp}.
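To see \eqref{e:sincapp} at work in the simplest possible setting, the sketch below applies the quadrature to a single eigenvalue $\lambda$ in the case $\gamma=1$, for which $e_{1,1}(w)=e^{w}$ and the exact value $e^{-t\lambda^{\beta}}$ is available in closed form. All parameter values are illustrative and not taken from the text; we compare magnitudes since the orientation of the contour only affects the overall sign of the scalar integral.

```python
import numpy as np

def sinc_quadrature_scalar(t, lam, beta, N, k, b=1.0):
    """Scalar analogue of (e:sincapp) for a single eigenvalue lam, with
    gamma = 1 so that e_{1,1}(w) = exp(w) and no Mittag-Leffler code is needed."""
    y = np.arange(-N, N + 1) * k
    z = b * (np.cosh(y) + 1j * np.sinh(y))          # hyperbolic contour z(y)
    dz = b * (np.sinh(y) + 1j * np.cosh(y))         # z'(y)
    return k / (2j * np.pi) * np.sum(np.exp(-t * z**beta) * dz / (z - lam))

t, lam, beta = 0.5, 10.0, 0.5
ref = np.exp(-t * lam**beta)                        # exact value exp(-t lam^beta)
q = sinc_quadrature_scalar(t, lam, beta, N=120, k=0.08)
# by symmetry of the contour, q is real up to rounding, and |q| matches ref
err = abs(abs(q) - ref)
```

Because $z(-y)=\overline{z(y)}$, the terms for $\pm j$ are complex conjugates up to sign, so the computed sum is real to machine precision.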
This program requires additional notation and we start with the class of functions $S(B_d)$. \begin{definition}\label{class_SB} Given $d>0$, we define the space $S(B_d)$ to be the set of functions $f$ defined on ${\mathbb{R}}$ having the following properties: \begin{enumerate}[(i)] \item $f$ extends to an analytic function in the infinite strip $$ B_d:=\left\{z \in \mathbb C : \ |\Im(z)|< d\right\} $$ and is continuous on $\overline{B_d}$. \item There exists a constant $C$ independent of $y\in {\mathbb{R}}$ such that $$ \int_{-d}^{d} |f(y+iw)|\, dw\leq C . $$ \item We have $$N(B_d):=\int_{-\infty}^\infty \left(|f(y+id)|+|f(y-id)| \right) dy < \infty .$$ \end{enumerate} \end{definition} Note that condition $(ii)$ is more restrictive than actually needed (see Definition 2.12 in \cite{LB92}) but sufficient for our considerations. In addition, for $f\in S(B_d)$, Theorem 2.20 in \cite{LB92} provides the error estimate for the quadrature approximation to $\int_{\mathbb R} f(x)\,dx$ using an infinite number of equally spaced quadrature points with spacing $k>0$: \beq\label{infbound} \left|\int_{-\infty}^\infty f(x)\, dx- k\sum_{j=-\infty}^{\infty}f(jk)\right|\leq \frac{N(B_d)}{2\sinh(\pi d/k)}e^{-\pi d/k}. \eeq The lemma below is proved in Appendix~\ref{a:lemma2} and is the first step in estimating the sinc quadrature error. \begin{lemma}\label{l:contour-esti-i} Let $\lambda \geq \lambda_1$ and $t>0$. The function $w \mapsto g_\lambda(w,t)$ belongs to $S(B_{d})$ for $0<d<\pi/4$. Moreover, there exists a constant $C$ only depending on ${\beta}$, $d$ and $b$ such that \beq\label{nd-bound} \bal N(B_d)\leq C({\beta},d,b)t^{-{\gamma}}. \eal \eeq \end{lemma} The above lemma together with the quadrature estimate \eqref{infbound} leads to exponential decay for ${\mathcal E}(\lambda,t)$ as provided in the following lemma. \begin{lemma}\label{l:sincquad} Let $0<d<\pi/4$.
There exists a constant $C$ only depending on $d$, $b$, $\beta$ and $\lambda_1$ such that for $k<1$, $N>0$, $t>0$ and $\lambda \geq \lambda_1$, \beq\label{quad-epsilon-bound} |{\mathcal E}(\lambda,t)|\leq Ct^{-{\gamma}}\left(e^{-\pi d/k}+e^{-{\beta} Nk}\right) . \eeq \end{lemma} \begin{proof} In order to derive the desired estimate, we write $$ {\mathcal E}(\lambda,t) = \left(\int_{-\infty}^\infty g_\lambda (x,t)\, dx -k\sum_{j=-\infty}^\infty g_\lambda(jk,t)\right)+ k\sum_{|j| \geq N+1} g_\lambda(jk,t) . $$ Lemma~\ref{l:contour-esti-i} guarantees that $g_\lambda(.,t) \in S(B_d)$ and so in view of \eqref{infbound}, we obtain $$ \left|\int_{-\infty}^\infty g_\lambda (x,t)\, dx- k\sum_{j=-\infty}^{\infty}g_\lambda (jk,t)\right|\leq \frac{N(B_d)}{2\sinh(\pi d/k)}e^{-\pi d/k} \leq C t^{-\gamma} e^{-\pi d/k}, $$ where $C$ is the constant in \eqref{nd-bound}. For the truncation term, we use \eqref{e:estim_app} (in the appendix) to write $$ k\sum_{|j|\ge N+1} |g_\lambda(jk,t) | \leq C k\sum_{|j|\ge N+1} t^{-\gamma} e^{-\beta jk}, $$ where $C$ is a constant only depending on $d$, $b$ and $\lambda_1$. Next we bound the infinite sum by an integral and arrive at $$ k\sum_{|j|\ge N+1} | g_\lambda(jk,t)| \leq C t^{-{\gamma}}e^{-{\beta} Nk}, $$ where now the constant depends on $\beta$ as well. Gathering the above estimates completes the proof. \end{proof} \begin{remark}[Choice of $k$ and $N$]\label{r:kN} The optimal combination of $k$ and $N$ is obtained by balancing the two exponentials on the right hand side of \eqref{quad-epsilon-bound}. Hence, we select $k$ and $N$ such that $\pi d/k={\beta} Nk$, i.e., $k=\sqrt{\frac{\pi d}{{\beta} N}}$, and the estimate on ${\mathcal E}(\lambda,t)$ becomes \begin{equation}\label{e:sinc_quad_opt} |{\mathcal E}(\lambda,t)|\leq Ct^{-{\gamma}}e^{-\sqrt{\pi d{\beta} N}}.
\end{equation} \end{remark} Estimates on the difference between $E_h(t)$ defined by \eqref{e:dsol1} and $Q_{h,k}^N(t)$ defined by \eqref{e:sincapp} follow from \eqref{e:sinc_quad_opt} and \eqref{errorl} as stated in the following theorem. \begin{theorem}\label{l:hsincquad} Let $s\in [0,1/2]$, $d\in (0,\pi/4)$, and let $N$ be a positive integer. Set $k=\sqrt{\frac{\pi d}{{\beta} N}}$. Then there exists a constant $C$ independent of $k$, $N$, $t$ and $h$ such that for every $g_h \in {\dot{H}}_h^{2s}$ \begin{equation}\label{e:hsincquad} \|(E_h(t)-Q_{h,k}^N(t))g_h\|_{{\dot{H}}_h^{2s}}\leq Ct^{-{\gamma}}e^{-\sqrt{\pi d{\beta} N}}\|g_h\|_{{\dot{H}}_h^{2s}}. \end{equation} \end{theorem} \subsection{The Total Error} The discrete approximation after space and quadrature discretization is \begin{equation}\label{e:fully} u_{h}^N(t) := Q_{h,k}^N(t) \pi_h v, \quad\text{with } k =\sqrt{\frac{\pi d}{{\beta} N}}. \end{equation} Gathering the space and quadrature error estimates, we obtain the final estimate for the approximation of the homogeneous problem. \begin{theorem}[Total error]\label{t:hterr} Assume that the conditions of Theorem~\ref{l:semi-sp} and Theorem~\ref{l:hsincquad} hold. Then there exists a constant $C$ independent of $h$, $t$ and $N$ such that $$ \|u(t)-u_{h}^N(t)\|_{{\mathbb{H}}^{2s}}\leq D(t)h^{2{\alpha^*}}\|v\|_{{\mathbb{H}}^{2\delta}}+Ct^{-{\gamma}}e^{-\sqrt{\pi d{\beta} N}}\|v\|_{{\mathbb{H}}^{2s}}, $$ provided the initial condition $v$ is in $ {\mathbb{H}}^{2s}\cap {\mathbb{H}}^{2\delta}$. Here $D(t)$ is given by \eqref{e:cd}. \end{theorem} \begin{proof} We use the decomposition $$ u(t) - u_h^N(t) = u(t) - u_h(t) + u_h(t) - u_h^N(t) $$ and invoke Theorem~\ref{l:semi-sp} with $\mu=1$ and Theorem~\ref{l:hsincquad} with $g_h = \pi_h v$ to arrive at $$ \|u(t)-u_{h}^N(t)\|_{{\dot{H}}^{2s}}\leq D(t)h^{2{\alpha^*}}\|v\|_{{\dot{H}}^{2\delta}}+Ct^{-{\gamma}}e^{-\sqrt{\pi d{\beta} N}}\|\pi_h v\|_{{\dot{H}}_h^{2s}}.
$$ The equivalence of norms \eqref{ineq:H_h_H} together with stability of the $L^2$ projection \eqref{pih-bound} and the equivalence property between the dotted spaces and interpolation spaces \eqref{e:interpolation_spaces} (see Proposition~\ref{p:equiv}) yield the desired result. \end{proof} \begin{remark}[Implementation]\label{r:implementation} Denote by $\widetilde U(t)$ the vector of coefficients of $u_h^N(t)$ with respect to the finite element local basis functions and by $\widetilde V$ the vector of inner products of $v$ with the local basis functions. Let $\widetilde A$ and $\widetilde M$ be the stiffness and mass matrices. Then $$ \widetilde U(t) = \frac{k}{2\pi i}\sum_{j=-N}^N e_{\gamma,1}(-t^\gamma z(y_j)^\beta)z'(y_j)(z(y_j)\widetilde M-\widetilde A)^{-1}\widetilde V. $$ \end{remark} \begin{remark}[Complexity of the Implementation]\label{r:mat-aspect} We take advantage of the exponential decay of the sinc quadrature by setting $N=c({\alpha^*}\ln(1/h))^2$ so that $$ \|u(t)-u_{h}^N(t)\|_{{\dot{H}}^{2s}}\leq C\max(D(t),t^{-{\gamma}})h^{2{\alpha^*}}. $$ Hence, computing $u_h^N(t)$ for a fixed $t$ requires $O((\log(1/h))^2)$ complex finite element system solves. \end{remark} \subsection{Numerical Illustration}\label{s:num} In this section, we provide numerical illustrations of the rate of convergence predicted by Theorem~\ref{l:semi-sp} and Theorem~\ref{l:hsincquad}. \subsubsection*{Space Discretization Error} In order to illustrate the space discretization error, we start with a one dimensional problem and use a spectral decomposition to compute the exact solution without resorting to quadrature. Set $\Omega=(0,1)$ and $Lu:=-u^{\prime\prime}$. We choose the initial condition to be $v \equiv 1$ or, using the eigenvalues $\lambda_{\ell} = \pi^2 \ell^2$ and associated eigenfunctions $\psi_\ell(x) = \sqrt{2}\sin(\pi \ell x)$, $$ v = 2\sum_{\ell=1}^\infty \frac{1-(-1)^\ell}{\pi \ell} \sin(\pi\ell x) \approx 2\sum_{\ell=1}^{50000} \frac{1-(-1)^\ell}{\pi \ell} \sin(\pi\ell x).
$$ The number of terms retained in the truncated series ($50000$) is chosen large enough not to influence the space discretization error. With this notation, the exact solution for $\gamma=1/2$ and $0<\beta<1$ is approximated by \begin{equation}\label{e:u_exact_1d} u(t) \approx 2\sum_{\ell=1}^{50000} e_{1/2,1}(-t^{1/2} (\pi\ell)^\beta) \frac{1-(-1)^\ell}{\pi \ell} \sin(\pi \ell x). \end{equation} For the space discretization, we consider a sequence of uniform meshes with mesh sizes $h_j=2^{-j}$, where $j=1,2,\dots$, and denote by $\{\varphi_{k,h_j}\}_{k=1,\ldots,M_{h_j}}$ the continuous piecewise linear finite element basis of $\mathbb V_{h_j}$. The eigenvalues of $L_{h_j}$ correspond to the eigenvalues of $M_{h_j}^{-1}S_{h_j}$, where $M_{h_j}$ and $S_{h_j}$ are the mass and stiffness matrices, and are given by $$ \lambda_{\ell,h_j}= \frac{6(1-\cos(\ell\pi h_j))}{h_j^2 (2+\cos(\ell \pi h_j))}. $$ The associated eigenfunctions of $L_{h_j}$ are $$ \psi_{\ell,h_j} := \sum_{k=1}^{M_{h_j}}\sqrt{2 h_j} \sin( h_j \ell k \pi) \varphi_{k,h_j}. $$ Similarly to \eqref{e:u_exact_1d}, we use the following discrete spectral representation of $u_{h_j}(t)$ in our computations: $$ u_{h_j}(t) = \sum_{\ell=1}^{M_{h_j}} e_{1/2,1}(-t^{1/2} \lambda_{\ell,h_j}^\beta) v_{\ell,h_j} \psi_{\ell,h_j}, $$ with $$v_{\ell,h_j} = \int_0^1 \psi_{\ell,h_j}(x)\, dx = h_j \sqrt{2h_j}\sum_{k=1}^{M_{h_j}} \sin(h_j \ell k \pi) .$$ Note that $\alpha$ in Assumption~\ref{regularity} is 1 and $v\in {\dot{H}}^{1/2-\epsilon}$ for any $\epsilon>0$, so that we take $\delta=1/4-\epsilon$. The error will be computed in $L^2$ and $H^1$, i.e., $s=0$ and $s=1/2$. For the latter we need $\beta > 1/4$. The predicted convergence rates (Theorem~\ref{l:semi-sp}) are $$ 2{\alpha^*}=1+\min(1,1-2s,2(\beta+\delta-s)-1-\epsilon) $$ for every $\epsilon>0$, i.e., $$ \|u(t)-u_h(t)\|+h\|u(t)-u_h(t)\|_{H^1}\leq D(t)h^{\min(2,2\beta+1/2)-\epsilon} \| v \|_{{\dot{H}}^{1/2-\epsilon}}.
$$ We use the \textit{MATLAB} code \cite{mlcode} to evaluate $e_{\gamma,1}(z)$ for any $z\in\mathbb C$ and fix $t=0.5$. In Figure~\ref{f:hsp}, we report the errors $$e_j:=\|u(t)-u_{h_j}(t)\|\quad\text{and}\quad e^1_j:=\|u'(t)-u'_{h_j}(t)\|$$ for $j=3,4,5,6,7$ and different values of ${\beta}$. The observed rates of convergence $$OROC:=\frac{\ln(e_6/e_7)}{\ln{2}}\quad\text{and}\quad OROC^1:=\frac{\ln(e^1_6/e^1_7)}{\ln{2}}$$ are also reported in this figure and match the rates predicted by Theorem~\ref{l:semi-sp}. \begin{figure}[hbt!] \begin{center} \begin{tabular}{cc} \includegraphics[scale=.47]{L2SpHomogeneous-eps-converted-to.pdf} & \includegraphics[scale=.47]{H1SpHomogeneous-eps-converted-to.pdf}\\ \end{tabular} \end{center} \caption{Errors $e_j$ (left) and $e^1_j$ (right) versus the mesh size $h$ for different values of $\beta$. The observed rates of convergence $OROC$ and $OROC^1$ are reported on the left of each graph and match the rates predicted by Theorem~\ref{l:semi-sp} shown in parentheses.} \label{f:hsp} \end{figure} \subsubsection*{Effect of the Sinc Quadrature} We examine the error between the semi-discrete approximation and its sinc quadrature approximation. To this end and in order to factor out the space discretization, it suffices to observe ${\mathcal E}({\lambda},t)$ defined by \eqref{e:quaderr} for all $\lambda \geq \lambda_1$. Here we fix $t=0.5$ and approximate $\|{\mathcal E}(.,t)\|_{L^\infty(\lambda_1,\infty)}$ with $\lambda_1=10$ using the method discussed in Section 5.2 in \cite{BLP17}. For the hyperbolic contour $z(y)$ in \eqref{hc}, we choose $b=1$ so that $b\in (0,\lambda_1/\sqrt{2})$. Following Remark~\ref{r:kN}, we fix the number of quadrature points to $2N+1$ and balance the two sources of error by setting $k=\sqrt{{\pi d}/{({\beta} N)}}$ with $d={\pi}/{8}$. According to \eqref{e:sinc_quad_opt}, we have $$ \|{\mathcal E}(\lambda,t)\|_{L^\infty(10,\infty)} \leq C t^{-\gamma} e^{-\sqrt{\pi d \beta N}}.
$$ The left graph of Figure~\ref{f:hsinc} illustrates the exponential decay of $\|{\mathcal E}(\lambda,t)\|_{L^\infty(10,\infty)}$ as $N$ increases for ${\gamma}=0.5$ and ${\beta}=0.3,0.5,0.7$. We also report (right) the singular behavior of $\|{\mathcal E}(.,t)\|_{L^\infty(10,\infty)}$ in time for $N=100$, ${\beta}=0.5$ and ${\gamma}=0.3,0.5,0.7$. \begin{figure}[hbt!] \begin{center} \begin{tabular}{cc} \includegraphics[scale=.31]{HomogeneousQuad-eps-converted-to.pdf} & \includegraphics[scale=.31]{HomogeneousQuadTime-eps-converted-to.pdf}\\ \end{tabular} \end{center} \caption{(Left) Exponential decay of $\| {\mathcal E}({.},0.5)\|_{L^\infty(10,\infty)}$ versus the number of quadrature points used for different values of $\beta$. (Right) Singular behavior of $\| {\mathcal E}(.,t)\|_{L^\infty(10,\infty)}$ as $t \to 0$ for $\beta=0.5$ and different values of $\gamma$. The rate $-{\gamma}$ predicted by \eqref{e:sinc_quad_opt} is observed.} \label{f:hsinc} \end{figure} \subsubsection*{A Two Dimensional Problem} We now focus our attention on the total error in a two dimensional problem. Let $\Omega=(0,1)^2$, $L=-\Delta$ and the initial condition be the eigenfunction of $L$ given by $$ v(x_1,x_2)=\sin{(\pi x_1)}\sin(\pi x_2). $$ The exact solution is then given by $$u(t,x_1,x_2)=e_{{\gamma},1}(-t^{\gamma} (2\pi^2)^{\beta})\sin{(\pi x_1)}\sin{(\pi x_2)} .$$ The space discretizations are subordinate to a sequence of uniform subdivisions made of triangles with mesh size $h_j=2^{-j}\sqrt{2}$. For the quadrature, we choose $N=400$ and set $k= \sqrt{{\pi^2}/{(8{\beta} N)}}$ so that the quadrature error does not affect the space discretization error. Since $\lambda_1=2\pi^2$, we again set $b=1$ in \eqref{hc}. We fix $t=0.5$, $\gamma=0.5$ and report in Figure~\ref{f:2Dh} the quantities $\|u(t)-u_{h_j}^N(t)\|$ for $j=3,4,5,6,7,8$ and different ${\beta}$. As predicted by Theorem~\ref{t:hterr}, a second order rate of convergence is observed. \begin{figure}[hbt!]
\begin{center} \includegraphics[scale=.47]{L22DSpH-eps-converted-to.pdf} \end{center} \caption{$L^2$ error between $u(0.5)$ and $u_{h_j}^N(0.5)$ with ${\gamma}=0.5$ and different values of ${\beta}$. A second order convergence rate is observed.} \label{f:2Dh} \end{figure} \section{Approximation of the Non-homogeneous Problem}\label{s:nh} We now turn our attention to the non-homogeneous problem, i.e., $f\neq 0 $ and $v=0$ in \eqref{e:p}, for which the solution reads \begin{equation}\label{e:exact_nh} u(t) = \int_{0}^t \underbrace{r^{{\gamma}-1}e_{{\gamma},{\gamma}}(-r^{\gamma} L^{\beta})}_{=:W(r)} f(t-r)\, dr. \end{equation} \subsection{The Semi-discrete Scheme} According to \eqref{e:dsol1}, the finite element approximation of \eqref{e:exact_nh} is given by \beq\label{e:discrete_nh} u_h(t)=\int_0^t \underbrace{r^{\gamma-1} e_{\gamma,\gamma}(-r^\gamma L_h^{\beta})}_{=:W_h(r)} {\pi_h} f(t-r)\, dr. \eeq As in the homogeneous case, the finite element approximation error is derived from Theorem~\ref{l:semi-sp} and we have the following lemma. \begin{lemma}[Space Discretization for the non-homogeneous problem]\label{l:semirhs} Assume that Assumption~\ref{regularity} holds for $\alpha\in (0,1]$. Let $\gamma \in (0,1)$, $s \in [0,\frac 1 2]$ and let $\alpha^*$ and $\delta$ be as in \eqref{e:astar} and \eqref{delta}, respectively. There exists a constant $C$ such that $$ \|u(t)-u_h (t)\|_{{\dot{H}}^{2s}}\leq \widetilde D(t) h^{2{\alpha^*}} \|f\|_{L^{\infty}(0,t;\dot{H}^{2\delta})}, $$ where \begin{equation}\label{e:tildeD} \widetilde D(t) = C \left\lbrace \begin{array}{ll} t^{\gamma} & \qquad \text{when }\delta > \alpha^*+s, \\ t^{\gamma} \max(1,\ln(1/t)) & \qquad \text{when }\delta = \alpha^*+s, \\ t^{{\gamma}-{\gamma}({\alpha^*} +s- \delta)/\beta} & \qquad \text{when }\delta < \alpha^*+s. \end{array} \right.
\end{equation} \end{lemma} \begin{proof} Applying Theorem~\ref{l:semi-sp} gives \begin{align*} \|u(t)-u_h(t)\|_{{\dot{H}}^{2s}}&\leq \int_0^t {r}^{{\gamma}-1} \|e_{{\gamma},{\gamma}}(-{r}^{\gamma} L^{\beta})-e_{{\gamma},{\gamma}}(-{r}^{\gamma} L_h^{\beta}){\pi_h}\|_{{\dot{H}}^{2\delta}\rightarrow {\dot{H}}^{2s}} \|f(t-{r})\|_{{\dot{H}}^{2\delta}}\, d{r} \\ &\leq Ch^{2{\alpha^*}}\|f\|_{L^\infty(0,t;{\dot{H}}^{2\delta})} \int_0^t {r}^{{\gamma}-1} D({r})\, d{r}, \end{align*} where $D(t)$ is given by \eqref{e:cd}. The conclusion follows since $\int_{0}^t r^{\gamma-1} D(r)\, dr \leq \widetilde D(t)$. \end{proof} \subsection{Time Discretization via Numerical Integration} Given a final time ${\mathsf T}$, we first discuss a numerical approximation of the integral $$ \int_0^{\mathsf T} W_h(s) \pi_h f({\mathsf T}-s)\, ds. $$ For simplicity, we set $$g(s)= f({\mathsf T}-s)$$ so that the above integral becomes $$ \int_0^{\mathsf T} W_h(s) \pi_h g(s)\, ds. $$ For a positive integer ${\mathcal M}$, let $0=t_0<t_1<...<t_{{\mathcal M}}={\mathsf T}$ be a partition of the time interval $[0,{\mathsf T}]$. On each subinterval we set $t_{j-\frac12} = \frac 1 2 (t_j+t_{j-1})$ and propose the pseudo-midpoint approximation \begin{equation} \label{e:quad_int} \begin{split} &\int_{t_{j-1}}^{t_{j}} W_h(r) \pi_h g(r) \, dr \\ &\qquad \approx \int_{t_{j-1}}^{t_{j}} W_h(r)\, dr\, \pi_h g(t_{j-\frac12}) \\ & \qquad \ = L_h^{-\beta} \left( e_{\gamma,1}(-t_{j-1}^\gamma L_h^\beta) - e_{\gamma,1}(-t_{j}^\gamma L_h^\beta) \right)\pi_h g(t_{j-\frac12}), \end{split} \end{equation} where, to obtain the last equality, we used relation \eqref{e:fd2}. Before going further, we note that numerical methods based on \eqref{e:quad_int} cannot perform optimally when using a uniform decomposition of the time interval because $W_h(t)$ is singular at $t=0$. Hence, the performance of algorithms based on uniform partitions is limited by the error on the first interval $(0,t_1)$.
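For illustration purposes only, the identity behind the last equality of \eqref{e:quad_int} (relation \eqref{e:fd2}) can be checked in scalar form: for $x>0$, $\int_a^b r^{\gamma-1}e_{\gamma,\gamma}(-r^{\gamma}x)\,dr=x^{-1}\big(e_{\gamma,1}(-a^{\gamma}x)-e_{\gamma,1}(-b^{\gamma}x)\big)$. The sketch below verifies this numerically with a naive truncated Mittag-Leffler series (our own helper, adequate only for moderate arguments, and not the evaluator used in Section~\ref{s:num}) and also constructs the geometrically refined partition \eqref{e:tjl} introduced below.

```python
import math

def ml(w, gamma, mu, n_terms=60):
    """Naive truncated Mittag-Leffler series e_{gamma,mu}(w);
    adequate only for moderate |w|, which suffices here."""
    return sum(w**n / math.gamma(gamma * n + mu) for n in range(n_terms))

gamma, x = 0.5, 2.0        # x plays the role of lambda^beta in the scalar model
a, b = 0.2, 1.0

# left-hand side by a composite midpoint rule on [a, b]
m = 20000
dr = (b - a) / m
lhs = 0.0
for i in range(m):
    r = a + (i + 0.5) * dr
    lhs += r**(gamma - 1.0) * ml(-(r**gamma) * x, gamma, gamma) * dr

# right-hand side: the exact cell weight, the scalar analogue of (e:fd2)
rhs = (ml(-(a**gamma) * x, gamma, 1.0) - ml(-(b**gamma) * x, gamma, 1.0)) / x

def graded_times(T, M, N):
    """Geometrically refined partition of [0, T] following (e:tjl)."""
    tj = [2.0**(-(M - j)) * T for j in range(1, M + 1)]   # t_1, ..., t_M = T
    pts = [0.0, tj[0]]
    for j in range(M - 1):                                # split I_j into N cells
        tau = (tj[j + 1] - tj[j]) / N
        pts += [tj[j] + l * tau for l in range(1, N + 1)]
    return pts
```

The exact cell weight is precisely what makes the pseudo-midpoint rule computable without any quadrature in $r$, leaving only the resolvent solves of the sinc approximation.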
Measuring in the ${\dot{H}}^{2s}$-norm for $s\in [0,1/2]$, we have \begin{equation}\label{e:rate_first_uniform} \begin{split} & \bigg\| \int_0^{t_1} W_h(r) \pi_h (g(r)-g(t_{1/2}))\, dr\bigg\|_{{\dot{H}}^{2s}} \\ &\qquad \leq C \int_0^{t_1} \|W_h(r)\|_{{\dot{H}}^{2s}\rightarrow {\dot{H}}^{2s}} \|g(r)-g(t_{1/2})\|_{{\dot{H}}^{2s}}\, dr \\ &\qquad \leq C t_1 \|f_t\|_{L^\infty(0,T;{\dot{H}}^{2s})} \int_0^{t_1} r^{\gamma-1}\, dr \leq C t_1^{1+\gamma} \|f_t\|_{L^\infty(0,T;{\dot{H}}^{2s})}. \end{split} \end{equation} To overcome this deterioration, we propose a geometric refinement of the partition near $t_0=0$ which depends on two positive integers ${\mathcal M}$ and ${\mathcal N}$ (see also Section 3.1 of \cite{BP15}). We first set $$ t_j := 2^{-({\mathcal M}-j)} {\mathsf T}, \qquad j=1,...,{\mathcal M}. $$ We further decompose all but the first interval $$ I_j:= [t_{j},t_{j+1}] = [ 2^{-({\mathcal M}-j)}{\mathsf T}, 2^{-({\mathcal M}-j-1)}{\mathsf T}], \qquad j=1,\ldots,{\mathcal M}-1 $$ into ${\mathcal N}$ subintervals $$ t_{j} = t_{j,0} < ... < t_{j,l} < ... < t_{j,{\mathcal N}} = t_{j+1} $$ where, for $l=0,...,{\mathcal N}$, \begin{equation}\label{e:tjl} t_{j,l} := t_{j} + l \tau_j , \qquad \textrm{with }\tau_j := |I_j|/{\mathcal N} = 2^{-({\mathcal M}-j)} {\mathsf T}/{\mathcal N}. \end{equation} As in \eqref{e:quad_int}, we approximate $$ \int_{t_{j,l-1}}^{t_{j,l}} W_h(r) \pi_h g(r)\, dr $$ on each subinterval $I_{j,l}:= [ t_{j,l-1},t_{j,l}]$ by \begin{equation}\label{e:rel_num_nh} L_h^{-{\beta}}\left( e_{{\gamma},1}(-{t}_{j,l-1}^{\gamma} L_h^{\beta})-e_{{\gamma},1}(-{t}_{j,l}^{\gamma} L_h^{\beta}) \right){\pi_h} g({t}_{j,l-1/2}). \end{equation} Here $t_{j,l-1/2}:=\frac 1 2 (t_{j,l-1}+t_{j,l})$. We use the bar symbol to denote averaged quantities over the interval $[t_{j,l-1},t_{j,l}]$, e.g., $$ {\overline W}_{j,l} : \mathbb V_h \rightarrow \mathbb V_h, \qquad {\overline W}_{j,l}:=\frac{1}{\tau_j}\int^{{t}_{j,l}}_{{t}_{j,l-1}} W_h(r)\, dr.
$$ and $$ {\overline g}_{j,l}:=\frac{1}{\tau_j}\int_{t_{j,l-1}}^{t_{j,l}} g(r)\, dr. $$ The approximate solution after time integration is thus given by \begin{equation}\label{e:space_time_solution} u_h^{{\mathcal N},{\mathcal M}}({\mathsf T}) :=\sum_{j=1}^{{\mathcal M}-1}\tau_j \sum_{l=1}^{\mathcal N} {\overline W}_{j,l} (\pi_h f({\mathsf T}-t_{j,l-\frac12})). \end{equation} We start by assessing the local integration error $$ \int_{t_{j,l-1}}^{t_{j,l}} W_h(r) \pi_h (f({\mathsf T}-r)-f({\mathsf T}-t_{j,l-1/2}))\, dr. $$ \begin{lemma}[Local Approximation]\label{l:time_stepping_local} Let $\gamma \in (0,1)$ and $s\in [0,1/2]$. Let $j\geq 2$ and assume that $g(t)=f({\mathsf T}-t)$ belongs to $H^2(t_{j-1},t_j;{\dot{H}}^{2s})$. There exists a constant $C$ independent of $h$ and $\tau_j$ such that on every interval $I_{j}=[t_{j-1},t_j]$, we have \begin{equation*} \begin{split} & \bigg\| \sum_{l=1}^{{\mathcal N}} \int_{t_{j,l-1}}^{t_{j,l}} W_h(r) \pi_h (g(r)-g(t_{j,l-1/2}))\, dr \bigg\|_{{\dot{H}}^{2s}} \\ & \qquad \leq C \tau_j^{5/2} \left( \sum_{l=0}^{\mathcal N} t_{j,l}^{2\gamma-2} \right)^{1/2} \| g_{tt} \|_{L^2(t_{j-1},t_j;{\dot{H}}^{2s})} + C \tau_j^3 \left( \sum_{l=0}^{\mathcal N} t_{j,l}^{\gamma-2}\right) \| g_t \|_{L^\infty(t_{j-1},t_j;{\dot{H}}^{2s})}. \end{split} \end{equation*} \end{lemma} \begin{proof} We use the following decomposition on each sub-interval: $$ \bal & \int_{t_{j,l-1}}^{t_{j,l}} W_h(r) \pi_h (g(r)-g(t_{j,l-1/2}))\, dr\\ & \qquad =\underbrace{{\tau}_j {\overline W}_{j,l}\pi_h ({\overline g}_{j,l}-g({t}_{j,l-\frac12}))}_{=:E_1} + \underbrace{\int_{{t}_{j,l-1}}^{{t}_{j,l}} (W_h({r})-{\overline W}_{j,l})\pi_h(g({r})-g({t}_{j,l-\frac12}))\, d{r}}_{=:E_2}. \eal $$ $\boxed{1}$ We estimate $E_1$ \beq\label{e2-bound} \|E_1\|_{{\dot{H}}^{2s}} \leq {\tau}_j\|{\overline W}_{j,l}\pi_h\|_{{\dot{H}}^{2s} \to {\dot{H}}^{2s}}\|{\overline g}_{j,l}-g({t}_{j,l-\frac12})\|_{{\dot{H}}^{2s}} .
\eeq We now bound $\|{\overline W}_{j,l}\pi_h\|_{{\dot{H}}^{2s} \to {\dot{H}}^{2s}}$ and $\|{\overline g}_{j,l}-g({t}_{j,l-\frac12})\|_{{\dot{H}}^{2s}} $ separately. For the latter, we expand $g(\eta)$ at $\eta=t_{j,l-\frac12}$ to get $$ g(\eta)-g({t}_{j,l-\frac12})=(\eta-{t}_{j,l-\frac12})g_t({t}_{j,l-\frac12})+\int_{t_{j,l-\frac12}}^{\eta}(\eta-r) g_{tt}(r)\, dr , $$ where $g_t$ and $g_{tt}$ denote the first and second partial derivative in time of $g$. As a consequence, taking advantage of $t_{j,l-\frac12}$ being the midpoint of the interval $I_{j,l}$, we obtain \begin{align*} {\overline g}_{j,l}-g({t}_{j,l-\frac12}) &= \frac{1}{{\tau}_j} \int_{{t}_{j,l-1}}^{{t}_{j,l}}\left(g(\eta)-g({t}_{j,l-\frac12}) \right)\, d\eta \\ &=\frac{1}{{\tau}_j} \int_{{t}_{j,l-1}}^{{t}_{j,l}} \int_{t_{j,l-\frac12}}^{\eta}(\eta-r) g_{tt}(r)\, dr\, d\eta \end{align*} and so using a Cauchy-Schwarz inequality \begin{equation}\label{gt-bound} \| {\overline g}_{j,l}-g({t}_{j,l-\frac12})\|_{{\dot{H}}^{2s}} \leq \tau_j^{3/2} \| g_{tt}\|_{L^2(t_{j,l-1},t_{j,l};{\dot{H}}^{2s})}. \end{equation} In order to bound $\|{\overline W}_{j,l}\pi_h\|_{{\dot{H}}^{2s} \to {\dot{H}}^{2s}}$, we note that from the definition of the discrete dotted spaces ${\dot{H}}^{2s}_h$ (see \eqref{e:dotted_discrete_norm}), we have $$ \|e_{{\gamma},{\gamma}}(-t^{\gamma} L_h^{\beta})\|_{{\dot{H}}^{2s}_h \to {\dot{H}}^{2s}_h}\leq C. $$ Therefore, from the expression of $W_h(t)$ in \eqref{e:discrete_nh}, the equivalence of norms \eqref{ineq:H_h_H} and the stability estimate \eqref{pih-bound} for $\pi_h$, we derive that \beq\label{bw-bound} \bal \|{\overline W}_{j,l}\pi_h\|_{{\dot{H}}^{2s}\to {\dot{H}}^{2s}}&\leq \frac{1}{{\tau}_j}\int_{{t}_{j,l-1}}^{{t}_{j,l}}\eta^{{\gamma}-1}\|e_{{\gamma},{\gamma}}(-\eta^{\gamma} L_h^{\beta}) \pi_h\|_{{\dot{H}}^{2s}_h \to {\dot{H}}^{2s}_h}\, d\eta\\ &\leq \frac{C}{{\tau}_j}\int_{{t}_{j,l-1}}^{{t}_{j,l}}\eta^{{\gamma}-1}\, d\eta\leq C {t}_{j,l-1}^{{\gamma}-1} .
\eal \eeq Inserting estimates \eqref{gt-bound} and \eqref{bw-bound} into \eqref{e2-bound} gives the final bound for $E_1$ \beq\label{e2-last-bound} \|E_1\|_{{\dot{H}}^{2s}}\leq C{\tau}_j^\frac52 {t}_{j,l-1}^{{\gamma}-1} \|g_{tt}\|_{L^2({t}_{j,l-1},{t}_{j,l};{\dot{H}}^{2s})}. \eeq $\boxed{2}$ We estimate $E_2$ \beq \label{e:E2} \|E_2\|_{{\dot{H}}^{2s}} \leq \int_{{t}_{j,l-1}}^{{t}_{j,l}}\|(W_h({r})-{\overline W}_{j,l})\pi_h\|_{{\dot{H}}^{2s}\to {\dot{H}}^{2s}}\|g({r})-g({t}_{j,l-\frac12})\|_{{\dot{H}}^{2s}}\, d{r} .\eeq In this case as well, we need to estimate two terms separately, namely $\|(W_h({r})-{\overline W}_{j,l})\pi_h\|_{{\dot{H}}^{2s}\to {\dot{H}}^{2s}}$ and $\|g({r})-g({t}_{j,l-\frac12})\|_{{\dot{H}}^{2s}}$. For the latter, we write \beq\label{gj-bound} \|g({r})-g({t}_{j,l-\frac12})\|_{{\dot{H}}^{2s}} = \| \int_{t_{j,l-\frac12}}^r g_t(\eta)\, d\eta \|_{{\dot{H}}^{2s}} \leq \tau_j \|g_t\|_{L^\infty(t_{j,l-1},t_{j,l}; {\dot{H}}^{2s})} . \eeq Next, we bound $\|(W_h({r})-{\overline W}_{j,l})\pi_h\|_{{\dot{H}}^{2s} \to {\dot{H}}^{2s}}$. As before, it suffices to estimate $\|W_h({r})-{\overline W}_{j,l}\|_{{\dot{H}}^{2s}_h \to {\dot{H}}^{2s}_h}$. To achieve this, we use the eigenfunctions $\{\psi_{i,h}\}_{i=1}^{M_h}$ of $L_h$. By \eqref{e:fd2_2}, \begin{equation*} \begin{split} W_h'({r})\psi_{i,h} = & r^{{\gamma}-2} \lbrace ({\gamma}-1)e_{{\gamma},{\gamma}}(-r^{\gamma} \lambda_{i,h}^{\beta})\\ & +r^{\gamma} \lambda_{i,h}^\beta (({\gamma}-1)e_{{\gamma},2{\gamma}}(-r^{\gamma} \lambda_{i,h}^\beta)-e_{{\gamma},2{\gamma}-1}(-r^{\gamma} \lambda_{i,h}^{\beta}))\rbrace \psi_{i,h}. \end{split} \end{equation*} This and \eqref{ml-bound-scalar} with $z=-r^\gamma\lambda_{i,h}^\beta$ imply that for $r \in I_{j,l}$, $$ \| W_h'({r})\psi_{i,h} \|\leq C r^{\gamma-2} \leq C t_{j,l-1}^{\gamma-2}, $$ where the constant in the above inequality is independent of $j$, $l$ and $h$.
Whence, $$\|W'_h(r)\|_{\dot H^{2s}_h\to \dot H^{2s}_h}\leq C t_{j,l-1}^{\gamma-2}$$ and $$ \|W_h({r})-{\overline W}_{j,l}\|_{{\dot{H}}^{2s}_h \to {\dot{H}}^{2s}_h} \leq C \tau_j \sup_{r \in I_{j,l}}\|W_h'(r) \|_{{\dot{H}}^{2s}_h \to {\dot{H}}^{2s}_h} \leq C \tau_j t_{j,l-1}^{\gamma-2}. $$ Inserting the above estimate and \eqref{gj-bound} into \eqref{e:E2} yields the final bound on $E_2$ \beq\label{e3-last-bound} \|E_2\|_{{\dot{H}}^{2s}} \leq C{\tau}_j^3 t_{j,l-1}^{\gamma-2} \|g_t\|_{L^\infty(t_{j,l-1},t_{j,l};{\dot{H}}^{2s})}. \eeq $\boxed{3}$ Summing the contributions from each subinterval and using a Cauchy-Schwarz inequality yields the desired result. \end{proof} \begin{remark}[Uniform time-stepping]\label{r:uniform_step} In the case of uniform time-stepping, i.e. ${\mathcal N}=1$ and $t_j = j \tau$, $\tau = {\mathsf T}/{\mathcal M} $, we derive from the estimate provided in Lemma~\ref{l:time_stepping_local} and the first interval estimate \eqref{e:rate_first_uniform} that the quadrature error behaves asymptotically like $\tau^{1+\gamma}$. We do not pursue this further but rather investigate errors coming from the geometric partition. \end{remark} \begin{theorem}[Time Discretization of Non-Homogeneous Problem]\label{t:geo} Let $\gamma \in (0,1)$, $s\in [0,1/2]$, ${\mathsf T} \geq{\mathsf T}_0 >0$. Let ${\mathcal N}$ be a positive integer and \beq {\mathcal M}=\left\lceil\frac{2\log_2{{\mathcal N}}}{{\gamma}}\right\rceil. \label{cM} \eeq Assume that $f$ is in $H^2(0,{\mathsf T};{\dot{H}}^{2s})$, let $u_h^{{\mathcal N}}({\mathsf T}):= u_h^{{\mathcal N},{\mathcal M}}({\mathsf T})$ be defined by \eqref{e:space_time_solution} and let $u_h({\mathsf T})$ be the semi-discrete-in-space solution \eqref{e:discrete_nh}.
Then there exists a constant $C$ independent of ${\mathcal N}$, $h$ and ${\mathsf T}$ satisfying $$ \|u_h({\mathsf T})-u_h^{{\mathcal N}}({\mathsf T})\|_{{\dot{H}}^{2s}} \leq C\max({\mathsf T}^\gamma,{\mathsf T}^{\frac32+\gamma}){\mathcal N}^{-2} \|f\|_{H^2(0,{\mathsf T};{\dot{H}}^{2s})}. $$ \end{theorem} \begin{proof} Using the definitions of $u_h({\mathsf T})$ and $u_h^{{\mathcal N}}({\mathsf T})$, we write \begin{equation*} \begin{split} u_h({\mathsf T})-u_h^{{\mathcal N}}({\mathsf T}) &= \int_0^{t_1} W_h(r) \pi_h f({\mathsf T}-r)\, dr \\ & \quad + \sum_{j=1}^{{\mathcal M}-1} \sum_{l=1}^{\mathcal N} \int_{t_{j,l-1}}^{t_{j,l}} W_h(r) \pi_h (f({\mathsf T}-r)-f({\mathsf T}-t_{j,l-1/2}))\, dr. \end{split} \end{equation*} For the first term, we note that \eqref{ml-bound-scalar} immediately implies that $\| e_{\gamma,\gamma}(-r^\gamma L_h^\beta) \|_{{\dot{H}}^{2s}\to {\dot{H}}^{2s}} \leq C$. The stability of the $L^2$ projection \eqref{pih-bound} and \eqref{cM} give \begin{align*} \bigg\| \int_0^{t_1} W_h(r) \pi_h f({\mathsf T}-r)\, dr \bigg \|_{{\dot{H}}^{2s}}& \leq C 2^{-\gamma({\mathcal M}-1)}{\mathsf T}^{\gamma} \| f \|_{L^\infty(0,{\mathsf T};{\dot{H}}^{2s})}\\ &\leq C {\mathsf T}^\gamma {\mathcal N}^{-2} \| f\|_{L^\infty(0,{\mathsf T};{\dot{H}}^{2s})}. 
\end{align*} For the second term, we apply Lemma~\ref{l:time_stepping_local} on each interval $I_j$, $j=1,\ldots,{\mathcal M}-1$ to get \begin{align*} &\bigg \|\sum_{j=1}^{{\mathcal M}-1} \sum_{l=1}^{\mathcal N} \int_{t_{j,l-1}}^{t_{j,l}} W_h(r) \pi_h (f({\mathsf T}-r)-f({\mathsf T}-t_{j,l-1/2}))\, dr\bigg\|_{{\dot{H}}^{2s}}\\ & \qquad \leq C \sum_{j=1}^{{\mathcal M}-1} \tau_j^{5/2} {\mathcal N}^{1/2} t_{j}^{\gamma-1} \| g_{tt} \|_{L^2(t_{j-1},t_j;{\dot{H}}^{2s})} + C \sum_{j=1}^{{\mathcal M}-1} \tau_j^3 {\mathcal N} t_{j}^{\gamma-2} \| g_t \|_{L^\infty(t_{j-1},t_j;{\dot{H}}^{2s})}, \end{align*} where we used the fact that $C^{-1} t_j \leq t_{j,l} \leq C t_j$ for some constant $C$ independent of ${\mathcal N}$ and ${\mathcal M}$. Hence, a Cauchy-Schwarz inequality and the definitions of $t_j$ and $\tau_j$ yield \begin{align*} &\bigg\|\sum_{j=1}^{{\mathcal M}-1} \sum_{l=1}^{\mathcal N} \int_{t_{j,l-1}}^{t_{j,l}} W_h(r) \pi_h (f({\mathsf T}-r)-f({\mathsf T}-t_{j,l-1/2}))dr\bigg\|_{{\dot{H}}^{2s}}\\ & \qquad \leq C{\mathsf T}^{\frac32+\gamma} {\mathcal N}^{-2} \| g_{tt} \|_{L^2(0,{\mathsf T};{\dot{H}}^{2s})} \left( \sum_{j=1}^{{\mathcal M}} 2^{-(3+2\gamma)({\mathcal M}-j)}\right)^{1/2} \\ & \qquad + C {\mathsf T}^{1+\gamma} {\mathcal N}^{-2} \| g_t \|_{L^\infty(0,{\mathsf T};{\dot{H}}^{2s})} \sum_{j=1}^{{\mathcal M}} 2^{-(1+\gamma)({\mathcal M}-j)}\\ & \qquad \leq C {\mathcal N}^{-2} ({\mathsf T}^{\frac32+\gamma} \| g_{tt} \|_{L^2(0,{\mathsf T};{\dot{H}}^{2s})} + {\mathsf T}^{1+\gamma} \| g_t \|_{L^\infty(0,{\mathsf T};{\dot{H}}^{2s})}). \end{align*} This, together with the estimate for the first interval, implies \begin{equation*} \begin{split} \| u_h({\mathsf T})-u_h^{{\mathcal N}}({\mathsf T}) \|_{{\dot{H}}^{2s}} &\leq C{\mathcal N}^{-2} \big( {\mathsf T}^\gamma\| f\|_{L^\infty(0,{\mathsf T};{\dot{H}}^{2s})}\\ & \qquad+ {\mathsf T}^{3/2+\gamma} \| g_{tt} \|_{L^2(0,{\mathsf T};{\dot{H}}^{2s})} + {\mathsf T}^{1+\gamma} \| g_t \|_{L^\infty(0,{\mathsf T};{\dot{H}}^{2s})}\big).
\end{split} \end{equation*} To conclude, we observe that $$\| g_{t} \|_{L^\infty(0,{\mathsf T};{\dot{H}}^{2s})} = \|f_{t} \|_{L^\infty(0,{\mathsf T};{\dot{H}}^{2s})},\qquad \| g_{tt} \|_{L^2(0,{\mathsf T};{\dot{H}}^{2s})}= \| f_{tt} \|_{L^2(0,{\mathsf T};{\dot{H}}^{2s})}$$ and that the embedding $H^1(0,{\mathsf T}) \subset L^\infty(0,{\mathsf T})$ is continuous with norm independent of ${\mathsf T} \geq {\mathsf T}_0$. \end{proof} \subsection{A Sinc Approximation of the Contour Integral} In view of \eqref{e:rel_num_nh}, one remaining problem is to compute $$ H_h(t,\tau)g_h, \qquad H_h(t,\tau):=L_h^{-\beta} \left( e_{\gamma,1}(-t^\gamma L_h^\beta) - e_{\gamma,1}(-(t+\tau)^\gamma L_h^\beta) \right), $$ for $t>0$, $\tau>0$ and $g_h \in \mathbb V_h$. We proceed as in the homogeneous case discussed in Section~\ref{sinc}. Let $N$ be a positive integer and let $k>0$ be a quadrature spacing. For $t,{\tau}>0$ and $g_h\in \mathbb V_h$, we propose the following sinc approximation of $H_h(t,\tau)g_h$: \beq\label{e:sincapp2} \bal Q_{h,k}^{N}(t,\tau)g_h := \frac{k}{2\pi i} \sum_{j=-N}^N & [e_{{\gamma},1}({-t^{\gamma} z(y_j)^{\beta}})-e_{{\gamma},1}({-(t+{\tau})^{\gamma} z(y_j)^{\beta}})]\\ &z(y_j)^{-{\beta}} z'(y_j)[(z(y_j)I-L_h)^{-1}g_h], \eal \eeq where $z(y)$ for $y\in {\mathbb{R}}$ is the hyperbolic contour \eqref{hc}. With this, the computable approximation of the solution to the non-homogeneous problem becomes \beq\label{e:fully} u_{h,k}^{{\mathcal N},N}({\mathsf T}):= \sum_{j=1}^{{\mathcal M}-1} \sum_{l=1}^{{\mathcal N}} Q_{h,k}^N({t}_{j,l-1},{\tau}_j){\pi_h} f({\mathsf T}-{t}_{j,l-\frac12}) . \eeq We start with the approximation of $H_h(t,\tau)$ by $Q_{h,k}^N(t,\tau)$. \begin{lemma}\label{t:sincquad} Let $t,{\tau}>0$, $s\in [0,1/2]$ and $d\in (0,\pi/4)$.
There exists a constant $C$ depending only on $d$, $b$, $\beta$, $\lambda_1$ such that for any $g_h \in \mathbb V_h$, $$ \|(H_h(t,{\tau})- Q_{h,k}^N(t,{\tau}))g_h\|_{{\dot{H}}^{2s}}\leq C t^{-1}{\tau}\left(e^{-\pi d/k}+e^{-{\beta} Nk}\right)\|g_h\|_{{\dot{H}}_h^{2s}} . $$ \end{lemma} \begin{proof} For $y\in B_d$, define $$h_\lambda(y,t,{\tau})=z(y)^{-{\beta}}[e_{{\gamma},1}(-t^{\gamma} z(y)^{\beta})-e_{{\gamma},1}(-(t+{\tau})^{\gamma} z(y)^{\beta})]z'(y)(z(y)-\lambda)^{-1}$$ and note that $$ \bal &|e_{{\gamma},1}(-t^{\gamma} z(y)^{\beta})-e_{{\gamma},1}(-(t+{\tau})^{\gamma} z(y)^{\beta})|\\ & \qquad\leq \int_t^{t+{\tau}} |z(y)^\beta s^{{\gamma}-1}e_{{\gamma},{\gamma}}(-s^{\gamma} z(y)^\beta)|\, ds\leq Ct^{-1}{\tau}. \eal $$ Here we applied \eqref{ml-bound-scalar} replacing $z$ with $-s^{\gamma} z(y)^{\beta}$ so that $$|z(y)^{\beta} s^{\gamma} e_{{\gamma},{\gamma}}(-s^{\gamma} z(y)^{\beta})|\leq C .$$ Hence, the desired estimate follows upon proceeding as in the proofs of Lemmas~\ref{l:contour-esti-i} and~\ref{l:sincquad}. \end{proof} We are now in a position to prove the error estimate for the sinc quadrature on the non-homogeneous problem. \begin{lemma}\label{t:sincquad2} Let ${\mathsf T}>0$, $s\in [0,1/2]$ and assume that $f \in L^\infty(0,{\mathsf T};{\dot{H}}^{2s})$. Let $N$ be a positive integer, $d\in (0,\pi/4)$ and set $k=\sqrt{\frac{\pi d}{{\beta} N}}$. Let $u_h^{{\mathcal N},{\mathcal M}}$ be as in \eqref{e:space_time_solution} and let $u_{h,k}^{{\mathcal N},N}$ be as in \eqref{e:fully}. There exists a constant $C$ independent of $h$, ${\mathsf T}$, $k$, $N$, ${\mathcal N}$, ${\mathcal M}$ satisfying $$ \|u_{h}^{{\mathcal N},{\mathcal M}}({\mathsf T}) - u_{h,k}^{{\mathcal N},N}({\mathsf T})\|_{{\dot{H}}^{2s}} \leq C{\mathcal M} e^{-\sqrt{\pi \beta d N}} \|f\|_{L^\infty(0,{\mathsf T};{\dot{H}}^{2s})}.
$$ \end{lemma} \begin{proof} Note that both $u^{{\mathcal N},{\mathcal M}}_h$ and $u^{{\mathcal N},N}_{h,k}$ are approximations starting at $t_1$ (the first interval $I_0=[0,t_1]$ is skipped). Hence, applying Lemma~\ref{t:sincquad} on each interval $I_{j,l}$ (i.e. with $\tau=\tau_j$, $t=t_{j,l-1}$ and $g_h = \pi_h f({\mathsf T}-t_{j,l-\frac12})$) for $j=1,...,{\mathcal M}-1$ and $l=1,...,{\mathcal N}$, yields $$ \bal & \left\|\sum_{j=1}^{{\mathcal M}-1} \sum_{l=1}^{{\mathcal N}} (H_h({t}_{j,l-1},{\tau}_j)- Q_{h,k}^N({t}_{j,l-1},{\tau}_j))g_h({t}_{j,l-\frac12}) \right\|_{{\dot{H}}^{2s}}\\ &\qquad \leq C {\mathcal N} \left|\sum_{j=1}^{{\mathcal M}-1} \tau_j t_{j}^{-1}\right|e^{-\sqrt{\pi \beta d N}}\|f\|_{L^\infty(0,{\mathsf T};{\dot{H}}^{2s})}\\ &\qquad \leq C{\mathcal M} e^{-\sqrt{\pi \beta d N}}\|f\|_{L^\infty(0,{\mathsf T};{\dot{H}}^{2s})}, \eal $$ where we have used the definition \eqref{e:tjl} of $t_{j,l}$ to guarantee that $C^{-1} t_j \leq t_{j,l}\leq C t_j$ as well as the definition of $\tau_j = 2^{-({\mathcal M}-j)}{\mathsf T}/{\mathcal N}$. This is the desired result. \end{proof} \subsection{Total Error} We summarize this section by the following total error estimate for the fully discrete approximation \eqref{e:fully} to the solution of the non-homogeneous problem. Since $k=k(N)$ and ${\mathcal M}={\mathcal M}({\mathcal N})$, we denote by $u_{h}^{{\mathcal N},N}({\mathsf T})$ the fully discrete solution \eqref{e:fully}. \begin{theorem}[Total Error]\label{t:nfull} Assume that Assumption~\ref{regularity} holds for $\alpha\in (0,1]$. Furthermore, let $\gamma \in (0,1)$, $\delta\geq 0$, $s \in [0,\min(1/2,\delta)]$ and let $\alpha^*$ be as in \eqref{e:astar}. Let ${\mathsf T}>0$, ${\mathcal N}$ a positive integer and ${\mathcal M}=\left\lceil\frac{2\log_2{{\mathcal N}}}{{\gamma}}\right\rceil$. Let $N$ be a positive integer, $d\in (0,\pi/4)$ and set $k=\sqrt{\frac{\pi d}{{\beta} N}}$.
There exists a constant $C$ independent of $h$, ${\mathcal N}$, ${\mathsf T}$ and $N$ such that for every $f\in H^2(0,{\mathsf T};{\mathbb{H}}^{2\delta})$ we have $$ \bal \|u({\mathsf T})- u_h^{{\mathcal N},N}({\mathsf T})\|_{{\mathbb{H}}^{2s}}&\leq \widetilde D({\mathsf T})h^{2{\alpha^*}}\|f\|_{L^\infty(0,{\mathsf T};{\mathbb{H}}^{2\delta})}+C\max({\mathsf T}^\gamma,{\mathsf T}^{\frac32+\gamma}) {\mathcal N}^{-2} \|f\|_{H^2(0,{\mathsf T};{\mathbb{H}}^{2s})}\\ &\qquad +C\log_2({\mathcal N}) e^{-\sqrt{\pi d{\beta} N}}\|f\|_{L^\infty(0,{\mathsf T};{\mathbb{H}}^{2s})} , \eal $$ where $\widetilde D({\mathsf T})$ is given by \eqref{e:tildeD}. \end{theorem} \begin{proof} This follows by combining Lemma~\ref{l:semirhs}, Theorem~\ref{t:geo} and Lemma~\ref{t:sincquad2} with the equivalence property between the dotted spaces and interpolation spaces \eqref{e:interpolation_spaces} (see Proposition~\ref{p:equiv}). \end{proof} \begin{remark}[Choice of ${\mathcal N}$ and $N$] In practice, we balance the three error terms in Theorem~\ref{t:nfull} by setting $$ N = c_1 (2{\alpha^*}\ln(1/h))^2\quad\text{and}\quad {\mathcal N} = c_2\lceil h^{-{\alpha^*}}\rceil, $$ for some positive constants $c_1$ and $c_2$ so that the total error behaves like $h^{2{\alpha^*}}$. We note that the number of finite element systems that need to be solved for the non-homogeneous problem is the same as for the homogeneous problem, i.e. $O(\ln(1/h)^2)$ complex systems (see the numerical illustration below).
\end{remark} \subsection{Numerical illustration} To minimize the number of system solves in the computation of \eqref{e:fully}, we rewrite \begin{align*} u_{h,k}^{{\mathcal N},N}({\mathsf T}) &= \sum_{j=1}^{{\mathcal M}-1} \sum_{l=1}^{{\mathcal N}} \frac{k}{2\pi i}\sum_{n=-N}^N \left( e_{{\gamma},1}(-{t}_{j,l-1}^{\gamma} z(y_n)^{\beta})-e_{{\gamma},1}(-{t}_{j,l}^{\gamma} z(y_n)^{\beta}) \right)\\ &\qquad \qquad \qquad \qquad \qquad z(y_n)^{-\beta} z'(y_n) (z(y_n)I-L_h)^{-1} {\pi_h} f({\mathsf T}-{t}_{j,l-1/2})\\ &=\frac{k}{2\pi i}\sum_{n=-N}^N z(y_n)^{-\beta}z'(y_n)(z(y_n)I-L_h)^{-1} {\mathcal{H}}_{n} , \end{align*} where \beq {\mathcal{H}}_{n}:=\sum_{j=1}^{{\mathcal M}-1} \sum_{l=1}^{\mathcal N} \left(e_{\gamma,1}(-{t}_{j,l-1}^{\gamma} z(y_n)^{\beta})-e_{{\gamma},1}(-{t}_{j,l}^{\gamma} z(y_n)^{\beta}) \right){\pi_h} f({\mathsf T}-{t}_{j,l- 1/2}). \label{chsum} \eeq To implement the above we proceed as follows: \begin{enumerate}[1)] \item Compute the inner product vectors, i.e., the integrals of $f({\mathsf T}-{t}_{j,l-1/2})$ against the finite element basis functions, for all $(j,l)$. \item For each $n$: \begin{enumerate}[a)] \item compute the sums in \eqref{chsum} but replacing $\pi_h f({\mathsf T}-t_{j,l-1/2})$ by the corresponding inner product vector, and \item compute $z(y_n)^{-\beta}z'(y_n)(z(y_n)I-L_h)^{-1} {\mathcal{H}}_{n}$ by solving the corresponding shifted linear system with the vector from Part~a) as right-hand side. \end{enumerate} \item Sum up all contributions and multiply the result by $\frac{k}{2\pi i}$. \end{enumerate} We illustrate the error behavior in time on a two-dimensional problem with domain $\Omega=(0,1)^2$ and $L=-\Delta$ with homogeneous Dirichlet boundary conditions. We set ${\beta}=0.5$ and consider the exact solution $u(t,x_1,x_2)=t^3\sin{(\pi x_1)}\sin{(\pi x_2)}$ which vanishes at $t=0$. This corresponds to $$ f(t,x_1,x_2) = \left( \frac{\Gamma(4)}{\Gamma(4-\gamma)} t^{3-\gamma} + t^3 (2\pi^2)^\beta \right)\sin{(\pi x_1)}\sin{(\pi x_2)}.
$$ We partition $\Omega$ using uniform triangles with mesh size $h=2^{-5}\sqrt{2}$ and use $N= 400$ for the sinc quadrature parameter. We also set $b=1$ in the hyperbolic contour \eqref{hc}. In Figure~\ref{f:2Dnh} (left), we report $\|u(0.5)- u_{h}^{{\mathcal N},N}(0.5)\|$ for ${\mathcal N} = 2,4,8,16,32$ and different values of ${\gamma}$. In each case, as predicted by Theorem~\ref{t:geo}, the rate of convergence ${\mathcal N}^{-2}$ is observed. For comparison, the approximation based on a uniform partition is also provided. In this case, the error decays like $\tau^{1+\gamma}$ (see Remark~\ref{r:uniform_step}). \begin{figure}[hbt!] \begin{center} \begin{tabular}{cc} \includegraphics[scale=.47]{L22DTNHGEO.pdf} & \includegraphics[scale=.47]{L22DTNH.pdf} \\ \end{tabular} \end{center} \caption{The left graph depicts, for different values of $\gamma$, the $L^2$ error between $u(0.5)$ and the fully discrete approximation $u_{h}^{{\mathcal N},N}(0.5)$ as a function of ${\mathcal N}$. The optimal rate of convergence ${\mathcal N}^{-2}$ predicted by Theorem~\ref{t:geo} is observed. In contrast, when using uniform time stepping (right), the observed rate is $\tau^{1+\gamma}$ as announced in Remark~\ref{r:uniform_step}. } \label{f:2Dnh} \end{figure}
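For reference, the graded time grid \eqref{e:tjl} with ${\mathcal M}=\lceil 2\log_2{\mathcal N}/\gamma\rceil$ used in the experiments above is easy to generate; a minimal Python sketch (the function name is ours):

```python
import math

def graded_grid(T, N, gamma):
    # Geometrically refined partition toward t=0: t_j = 2^{-(M-j)} T for
    # j = 1,...,M with M = ceil(2 log2(N) / gamma); each interval
    # [t_j, t_{j+1}] is split into N uniform subintervals, and the first
    # interval (0, t_1) is skipped by the quadrature.
    M = math.ceil(2 * math.log2(N) / gamma)
    t = [2.0 ** (-(M - j)) * T for j in range(1, M + 1)]
    pts = []
    for j in range(M - 1):
        tau = (t[j + 1] - t[j]) / N  # tau_j = 2^{-(M-j)} T / N
        pts += [t[j] + l * tau for l in range(N)]
    pts.append(T)
    return M, pts

M, pts = graded_grid(T=1.0, N=8, gamma=0.5)
# M = 12 here, and the skipped first interval has length t_1 = 2^{-11}.
```

The choice of ${\mathcal M}$ makes the skipped interval $(0,t_1)$ of length $2^{-({\mathcal M}-1)}{\mathsf T}\lesssim {\mathcal N}^{-2/\gamma}{\mathsf T}$, which is what balances its contribution against the ${\mathcal N}^{-2}$ quadrature error.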
\section*{Introduction} In a certain generation many people, all around the world, received their first lesson in Western musical scales from Julie Andrews when she and the von Trapp children sang `do-re-mi..' in `{\em The Sound of Music}'. In India, what surprised the uninitiated was the equivalence of this scale with the `saptak' (a scale containing seven basic notes) that forms the basis of Indian traditional music. \keywords{\em swara, saptak, murchhana, raga} Indian classical music is a genre that is prevalent in the Indian sub-continent and parts of the far-eastern reaches of South Asia. There exist two major traditions - the North Indian tradition called the {\em Hindustani} classical, and the South Indian variant known as the {\em Carnatic} classical. They began as one but later diverged into two separate forms for various historical reasons. However, much of the basic structure remains the same to this day. The guiding principle of Indian classical music is to exploit the freedom accorded by the special nature of human sensitivity (discussed in article-I) to acoustic frequencies. \rightHighlight{A {\em Raga} is built from a basic scale called a {\em thaat}.} The primary characteristic of this genre is that it is based on a standard set of melodic forms ({\em raga}s), which are themselves built from a basic set of scales ({\em thaat}). The {\em raga}s define the overall mood of the music by specifying scales (ascending and descending, which may or may not be the same) and provide the general prescription according to which a piece of music should be composed or performed. As there is no rigidity about a set piece of music, a musician is entirely free to bring her/his individual flavour to the composition as long as the prescription specific to a particular {\em raga} is adhered to. \section{Basic Structure} \begin{table} \caption{Correspondence between the Indian {\em shruti}s and the Western notes.
Note that the {\em shuddha swara}s coincide with the pure notes of C-major. This is because the Indian base note $sa$ has been matched to the Western $C$ and the Indian {\em saptak} is intrinsically a major scale. The absolute frequencies have been obtained by setting $A$ to 440~Hz.} \label{t-shruti} \begin{tabular}{|l|l|l|l|l|} \hline $Shruti$ & ratio & $\nu$ (Hz) & Note & $\nu$ (Hz) \\ \hline {\em Chandovati} (\textcolor{magenta}{$sa$}) & 1 & 261.6256 & \textcolor{magenta}{C} & 261.6256 \\ {\em Dayavati} & 256/243 & 275.6220 & C\# & 277.1826 \\ {\em Ranjani} & 16/15 & 279.0673 & & \\ {\em Ratika} & 10/9 & 290.6951 & & \\ {\em Raudri} (\textcolor{magenta}{$re$}) & 9/8 & 294.3288 & \textcolor{magenta}{D} & 293.6648 \\ {\em Krodha} & 32/27 & 310.0747 & D\# & 311.1270 \\ {\em Vajrika} & 6/5 & 313.9507 & & \\ {\em Prasarini} (\textcolor{magenta}{$ga$}) & 5/4 & 327.0319 & \textcolor{magenta}{E} & 329.6275 \\ {\em Marjani} (\textcolor{magenta}{$ma$}) & 4/3 & 348.8341 & \textcolor{magenta}{F} & 349.2282 \\ {\em Rakta} & 45/32 & 367.9109 & F\# & 369.9944 \\ {\em Sandipani} & 729/512 & 372.5098 & & \\ {\em Alapini} (\textcolor{magenta}{$pa$}) & 3/2 & 392.4383 & \textcolor{magenta}{G} & 391.9954 \\ {\em Madantī} & 128/81 & 413.4330 & G\# & 415.3047 \\ {\em Rohini} & 8/5 & 418.6009 & & \\ {\em Ramya} (\textcolor{magenta}{$dha$}) & 5/3 & 436.0426 & \textcolor{magenta}{A} & 440.0000 \\ {\em Ugra} & 27/16 & 441.4931 & & \\ {\em Ksobhini} & 16/9 & 465.1121 & A\# & 466.1638 \\ {\em Tivra} & 9/5 & 470.9260 & & \\ {\em Kumudvati} (\textcolor{magenta}{$ni$}) & 15/8 & 490.5479 & \textcolor{magenta}{B} & 493.8833 \\ {\em Manda} & 243/128 & 496.6798 & & \\ {\em Chandovati} (\textcolor{magenta}{$sa'$}) & 2 & 523.2511 & \textcolor{magenta}{C} & 523.2511 \\ \hline \end{tabular} \end{table} In Indian music there are 7 pure notes ({\em shuddha swara}) - $sa$ ($shadaj$), $re$ ($rishabh$), $ga$ ($gandhar$), $ma$ ($madhyam$), $pa$ ($pancham$), $dha$ ($dhaiwat$) and $ni$ ($nishad$).
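The frequencies of the pure notes in {\em Table}~[\ref{t-shruti}] can be reproduced directly from the listed ratios; a short Python sketch (assuming, as in the table, that $sa$ is matched to the Western $C$ at 261.6256~Hz):

```python
from fractions import Fraction

SA = 261.6256  # sa matched to the Western C (Hz)

# Just-intonation ratios of the seven shuddha swaras, from Table [t-shruti]
SHUDDHA = {
    "sa": Fraction(1), "re": Fraction(9, 8), "ga": Fraction(5, 4),
    "ma": Fraction(4, 3), "pa": Fraction(3, 2), "dha": Fraction(5, 3),
    "ni": Fraction(15, 8),
}

freqs = {name: SA * float(ratio) for name, ratio in SHUDDHA.items()}
# freqs["pa"] is 392.4384 Hz -- less than half a hertz above the ETS G (391.9954 Hz)
```

Every entry agrees with the table to within rounding, which is exactly the closeness between Just and equal-tempered tuning discussed below.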
The first and the fifth notes $sa$ and $pa$ have fixed frequencies and are commonly known as {\em atal swara}s (invariant notes). The other 5 notes are variables and the variants are known as the {\em vikrita swara}s or the impure notes. These impure notes are $\mathcal{R}, \mathcal{G}, \mathcal{D}, \mathcal{N}, \mathcal{M}$, corresponding to the $komal$ (flat or lower frequency) variants of $re, ga, dha, ni$ and the $teevra$ (sharp or higher frequency) variant of $ma$, respectively. \leftHighlight{The Indian {\em saptak} roughly corresponds to the {\bf Major} scale of Western tradition, and consists of 12 {\em swara} - 7 {\em shuddha} and 5 {\em vikrita}.} An octave consists of the seven pure notes and is known as a $saptak$, the eighth note having twice the frequency of the first note. In reality though, a $saptak$ contains 12 notes - 7 pure and 5 impure. The seven pure notes are obtained according to the frequency ratios 1, 9/8, 5/4, 4/3, 3/2, 5/3 and 15/8 relative to the base note. From our earlier discussion (article-II) it is easy to see that this corresponds very closely to the {\bf Major} scale of the Western tradition, as can be seen from {\em Table}~[\ref{t-shruti}], though the temperament used is neither Pythagorean nor the ETS, but the {\em Just}. \begin{table} \caption{Indian {\em saptak} - {\em mandra-saptak} notes are denoted with $hasanta$ symbols ($_{_\smallsetminus}$) and {\em tar-saptak} notes are denoted with $ref$ ($'$) symbols.
(There exist other styles of notation to distinguish the notes in different $saptak$s.)} \label{t-saptak} \begin{tabular}{|l|c|c|c|c|} \hline Key & Note & $\nu$ (Hz) & \multicolumn{2}{|c|}{\bf \em Saptak} \\ \hline \textcolor{light}{W} & C$_3$ & \textcolor{blue}{130.81} & $sa_{_\smallsetminus}$ & \\ {\bf B} & & 138.59 & $\mathcal{R}_{_\smallsetminus}$ & \\ \textcolor{light}{W} & D$_3$ & \textcolor{blue}{146.83} & $re_{_\smallsetminus}$ & \\ {\bf B} & & 155.56 & $\mathcal{G}_{_\smallsetminus}$ & \\ \textcolor{light}{W} & E$_3$ & \textcolor{blue}{164.81} & $ga_{_\smallsetminus}$ & \\ \textcolor{light}{W} & F$_3$ & \textcolor{blue}{174.61} & $ma_{_\smallsetminus}$ & {\bf \em Mandra} \\ {\bf B} & & 185.00 & $\mathcal{M}_{_\smallsetminus}$ & \\ \textcolor{light}{W} & G$_3$ & \textcolor{blue}{196.00} & $pa_{_\smallsetminus}$ & \\ {\bf B} & & 207.65 & $\mathcal{D}_{_\smallsetminus}$ & \\ \textcolor{light}{W} & A$_3$ & \textcolor{blue}{220.00} & $dha_{_\smallsetminus}$ & \\ {\bf B} & & 233.08 & $\mathcal{N}_{_\smallsetminus}$ & \\ \textcolor{light}{W} & B$_3$ & \textcolor{blue}{246.94} & $ni_{_\smallsetminus}$ & \\ \cline{4-5} \textcolor{light}{W} & C$_4$ & \textcolor{blue}{261.63} & $sa$ & \\ {\bf B} & & 277.18 & $\mathcal{R}$ & \\ \textcolor{light}{W} & D$_4$ & \textcolor{blue}{293.67} & $re$ & \\ {\bf B} & & 311.13 & $\mathcal{G}$ & \\ \textcolor{light}{W} & E$_4$ & \textcolor{blue}{329.63} & $ga$ & \\ \textcolor{light}{W} & F$_4$ & \textcolor{blue}{349.23} & $ma$ & {\bf \em Madhya} \\ {\bf B} & & 369.99 & $\mathcal{M}$ & \\ \textcolor{light}{W} & G$_4$ & \textcolor{blue}{392.00} & $pa$ & \\ {\bf B} & & 415.30 & $\mathcal{D}$ & \\ \textcolor{light}{W} & \textcolor{magenta}{A$_4$} & \textcolor{magenta}{440.00} & $dha$ & \\ {\bf B} & & 466.16 & $\mathcal{N}$ & \\ \textcolor{light}{W} & B$_4$ & \textcolor{blue}{493.88} & $ni$ & \\ \cline{4-5} \textcolor{light}{W} & C$_5$ & \textcolor{blue}{523.25} & $sa'$ & \\ {\bf B} & & 554.37 & $\mathcal{R}'$ & \\ \textcolor{light}{W} & D$_5$ &
\textcolor{blue}{587.33} & $re'$ & \\ {\bf B} & & 622.25 & $\mathcal{G}'$ & \\ \textcolor{light}{W} & E$_5$ & \textcolor{blue}{659.26} & $ga'$ & \\ \textcolor{light}{W} & F$_5$ & \textcolor{blue}{698.46} & $ma'$ & {\bf \em Tar} \\ {\bf B} & & 739.99 & $\mathcal{M}'$ & \\ \textcolor{light}{W} & G$_5$ & \textcolor{blue}{783.99} & $pa'$ & \\ {\bf B} & & 830.61 & $\mathcal{D}'$ & \\ \textcolor{light}{W} & A$_5$ & \textcolor{blue}{880.00} & $dha'$ & \\ {\bf B} & & 932.33 & $\mathcal{N}'$ & \\ \textcolor{light}{W} & B$_5$ & \textcolor{blue}{987.77} & $ni'$ & \\ \hline \end{tabular} \end{table} In traditional Indian music a total of 22 micro-tones or $shruti$s were in use instead of the 12 tones discussed above. The practice continues to be the same in the South-Indian (Carnatic) music though the North-Indian (Hindustani) system is now more or less 12-tone based. The division of the $saptak$ into 22 $shruti$s exploits the fact that there exists a minimum interval (in pitch or frequency) that can be distinguished by human ear. (Theoretically, an infinite number of $shruti$s are possible but any practical division would depend on the actual size of the frequency interval that a listener can discern or a musician can produce.) The list of $shruti$s has been shown in {\em Table}~[\ref{t-shruti}], along with the pure notes and their correspondence with the Western scale. It can also be seen from {\em Table}~[\ref{t-shruti}] that the difference between the frequencies of notes in the Indian system and the Equal-Tempered-Scale (ETS) is rather small. In fact, with the introduction of fixed-pitch instruments (the piano, the harmonium etc.) in the Indian music scene (in particular, the huge popularity of harmonium across musical genres) the difference has all but disappeared. Therefore, for the sake of convenience we shall use the ETS even while talking about the Indian scales and notes in this article. As has been mentioned before, a $saptak$ corresponds to an octave.
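Since {\em Table}~[\ref{t-saptak}] is anchored to the ETS at $A_4 = 440$~Hz, every entry follows from the equal-tempered formula $\nu = 440\cdot 2^{n/12}$~Hz, with $n$ the signed semitone distance from $A_4$; a quick sketch (the function name is ours):

```python
def ets(n):
    # Equal-tempered frequency n semitones away (signed) from A_4 = 440 Hz
    return 440.0 * 2.0 ** (n / 12.0)

# madhya-saptak sa at C_4 sits 9 semitones below A_4; sa' is 12 semitones higher
sa, sa_tar = ets(-9), ets(3)
# sa is 261.6256 Hz and sa' = 2 * sa: each saptak doubles the one below it.
```

The exact doubling between consecutive saptaks is what the {\em mandra}/{\em madhya}/{\em tar} columns of the table display.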
Three main $saptak$s are used in Indian music. Unlike Western music, which has an absolute frame of reference, the Indian system changes from instrument to instrument. \rightHighlight{The three main {\em saptak}s of the Indian tradition are the {\em mandra, madhya} and {\em tar saptak}.} The middle register, referred to as the {\em madhya saptak}, uses a base note that is most comfortable for a particular musician (vocal or instrument); everything else is reckoned from here. The octave above this base is referred to as the {\em tar saptak}; and the lower one is known as the {\em mandra saptak}. Additionally, two octaves above the middle is called {\em ati-tar saptak}; three octaves above is called {\em ati-ati-tar saptak} and so on. The reed instruments also allow us to connect the Indian $saptak$s with the corresponding octaves of an ETS in an easy manner, as shown in {\em Table}~[\ref{t-saptak}]. It is also clearly seen how the Indian scale corresponds to the `major' scale, since the pure tones of a $saptak$ follow the `T T S T T T S' pattern. \section{Shifting the Scale} One of the main characteristic differences between Western and Indian classical music lies in their approach to fixing the {\em tonic} or the {\em base note}. In Western tradition, as we have seen earlier, a particular piece of music is set for a particular scale (the home octave, inclusive of all the notes) and the instruments are tuned to play those specific frequencies. On the other hand, Indian music is, more or less, independent of the chosen home octave. A performer can choose the base note ($sa$) of the {\em madhya saptak} (or more precisely, the home octave) according to her/his convenience and therefore effectively has infinite freedom in doing so. Indeed, traditional Indian music makes use of the infinite possibilities accorded by the frequency continuum. This freedom is enjoyed by the vocalists and also, to some extent, by the musicians using string instruments.
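On the 12 discrete ETS keys, relocating $sa$ amounts to sliding the `T T S T T T S' pattern along the keyboard; a Python sketch (the note names and helper are ours):

```python
KEYS = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
OFFSETS = [0, 2, 4, 5, 7, 9, 11]  # cumulative semitones of the T T S T T T S pattern

def shuddha_swaras(base):
    # Pure notes of a saptak whose base note sa is placed on the key `base`
    i = KEYS.index(base)
    return [KEYS[(i + o) % 12] for o in OFFSETS]

# With sa on F, ma (the perfect fourth) lands on A# (i.e. B-flat):
print(shuddha_swaras("F"))  # ['F', 'G', 'A', 'A#', 'C', 'D', 'E']
```

The same call with $sa$ on C or A reproduces the C-major and A-major columns of the scale-change table that follows.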
However, for reed instruments the change of the home octave would necessarily be discrete. In the following we shall discuss two different cases of this {\em shift} (both based on the discrete ETS) commonly made use of in Indian music. \begin{table} \vspace{-1.15cm} \begin{tabular}{|l|c|c|c|c|c|} \hline Note & $\nu$ & C & F & A & B$_{\rm flat}$ \\ & Hz & Major & Major & Major & Major \\ \hline C$_2$ & 65.406 & $sa_{_\smallsetminus}$ & & & \\ & 69.296 & $\mathcal{R}_{_\smallsetminus}$ & & & \\ D$_2$ & 73.416 & $re_{_\smallsetminus}$ & & & \\ & 77.782 & $\mathcal{G}_{_\smallsetminus}$ & & & \\ E$_2$ & 82.407 & $ga_{_\smallsetminus}$ & & & \\ F$_2$ & \textcolor{blue}{\bf 87.307} & $ma_{_\smallsetminus}$ & \textcolor{blue}{\bf \em sa} & & \\ & 92.499 & $\mathcal{M}_{_\smallsetminus}$ & $\mathcal{R}$ & & \\ G$_2$ & 97.999 & $pa_{_\smallsetminus}$ & $re$ & & \\ & 103.83 & $\mathcal{D}_{_\smallsetminus}$ & $\mathcal{G}$ & & \\ A$_2$ & \textcolor{blue}{\bf 110.00} & $dha_{_\smallsetminus}$ & $ga$ & \textcolor{blue}{\bf \em sa} & \\ B$_{\rm flat}$ & \textcolor{blue}{\bf 116.54} & $\mathcal{N}_{_\smallsetminus}$ & $\mathcal{M}$ & $\mathcal{R}$ & \textcolor{blue}{\bf \em sa} \\ B$_2$ & 123.47 & $ni_{_\smallsetminus}$ & $ma$ & $re$ & $\mathcal{R}$ \\ C$_3$ & 130.81 & $sa$ & $pa$ & $\mathcal{G}$ & $re$ \\ & 138.59 & $\mathcal{R}$ & $\mathcal{D}$ & $ga$ & $\mathcal{G}$ \\ D$_3$ & 146.83 & $re$ & $dha$ & $ma$ & $ga$ \\ & 155.56 & $\mathcal{G}$ & $\mathcal{N}$ & $\mathcal{M}$ & $ma$ \\ E$_3$ & 164.81 & $ga$ & $ni$ & $pa$ & $\mathcal{M}$ \\ F$_3$ & 174.61 & $ma$ & $sa'$ & $\mathcal{D}$ & $pa$ \\ & 185.00 & $\mathcal{M}$ & & $dha$ & $\mathcal{D}$ \\ G$_3$ & 196.00 & $pa$ & & $\mathcal{N}$ & $dha$ \\ & 207.65 & $\mathcal{D}$ & & $ni$ & $\mathcal{N}$ \\ A$_3$ & 220.00 & $dha$ & & $sa'$ & $ni$ \\ & 233.08 & $\mathcal{N}$ & & & $sa'$ \\ B$_3$ & 246.94 & $ni$ & & & \\ C$_4$ & 261.63 & $sa'$ & & & \\ \hline \end{tabular} \caption{Illustration of {\em scale change}: when the base note is moved (by
a particular multiplicative factor) keeping the octave structure intact, it shifts all the notes in the entire scale by exactly the same multiplicative factor.} \label{t-shift} \end{table} \subsection{Scale Change} The simple {\em shift} of the home octave (popularly known as the {\em \bf scale change} in India) is just that: the change of the base note from one frequency to another, keeping the structure of the {\em saptak} intact. \rightHighlight{A scale-change is a simple shift in the base frequency, without any change in the structure of the music.} Remember, we have twelve {\em swara}s to the {\em saptak} corresponding to the twelve steps of an ETS, the frequency of a particular {\em swara} being a factor of $2^{\frac{1}{12}}$ higher than that of the one immediately preceding it. {\em Table}~[\ref{t-shift}] illustrates the {\em scale change} for a reed instrument. For vocalists accompanied by the harmonium, the scales spanning $G_2-B_2$ are quite popular in modern Indian music. Typically, male voices prefer scales with a lower frequency base note than those preferred by female voices, reflecting the natural pitch (frequency) difference between male and female voices. Of course, there exists a wide range of natural frequencies at which a particular vocalist is comfortable. For example, there are people who feel most comfortable singing at $F_2$, implying that the natural frequency of their voice is a factor of $2^{\frac{6}{12}}$ ($\simeq$1.414) lower than the natural frequency of someone singing at $B_2$.
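In terms of arithmetic, a scale change is a single multiplication. The sketch below (ours, purely illustrative; the two base frequencies are taken from {\em Table}~[\ref{t-shift}]) checks that moving $sa$ from $F_2$ to $B_2$ rescales the frequency by $2^{6/12}\simeq 1.414$:

```python
# Illustrative sketch (ours): a "scale change" is one multiplication.
# Moving the base note up by k ETS steps multiplies every frequency in the
# scale by 2**(k/12).  Base frequencies below are taken from Table t-shift.
F2 = 87.307   # Hz, sa of the F-major scale
B2 = 123.47   # Hz, sa of the B-major scale, six semitones higher

def shift_scale(freqs, semitones):
    """Shift a whole scale by `semitones` equal-tempered steps."""
    factor = 2 ** (semitones / 12)
    return [f * factor for f in freqs]

[shifted] = shift_scale([F2], 6)   # sa moves from F2 up to B2
ratio = B2 / F2                    # ~ 2**(6/12) ~ 1.414
```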
\begin{table}[h] \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $Bilawal$ & $sa$ & $re$ & $ga$ & $ma$ & $pa$ & $dha$ & $ni$ & $sa'$ & * & * & * & * & * & * \\ \hline $Kafi$ & * & $sa$ & $re$ & $ga$ & $ma$ & $pa$ & $dha$ & $ni$ & $sa'$ & * & * & * & * & * \\ \hline $Bhairavi$ & * & * & $sa$ & $re$ & $ga$ & $ma$ & $pa$ & $dha$ & $ni$ & $sa'$ & * & * & * & * \\ \hline $Kalyan$ & * & * & * & $sa$ & $re$ & $ga$ & $ma$ & $pa$ & $dha$ & $ni$ & $sa'$ & * & * & * \\ \hline $Khammaj$ & * & * & * & * & $sa$ & $re$ & $ga$ & $ma$ & $pa$ & $dha$ & $ni$ & $sa'$ & * & * \\ \hline $Asavari$ & * & * & * & * & * & $sa$ & $re$ & $ga$ & $ma$ & $pa$ & $dha$ & $ni$ & $sa'$ & * \\ \hline ---- & * & * & * & * & * & * & $sa$ & $re$ & $ga$ & $ma$ & $pa$ & $dha$ & $ni$ & $sa'$ \\ \hline \end{tabular} \vspace{0.5cm} \caption{Illustration of $Murchhana$. Only $Hindustani$ names of the $raga$s are indicated. See text for an explanation.} \label{t-murchana01} \end{table} Interestingly, likely due to its huge popularity across musical genres in India, the harmonium has now been modified to incorporate a scale-changing mechanism. Using this, one can move from one scale (say $F$) to another (say $B$) without any change in the keys being played. The harmonium player is then able to use the same fingering (that s/he is used to) for all shifts of the home octave, with the scale-changing mechanism of the instrument taking care of the shift in frequency rather than a change in keystrokes. \subsection{\boldmath $Murchhana$} On the other hand, there is a more complex, and far more profound, change of scale known to Indian classical music. This is known as $murchhana$ in the Hindustani (North Indian) classical tradition and as $grahabhedam$ in the Carnatic (South Indian) classical tradition. Evidently, the Carnatic name gives away the underlying logic: $graha$ means `position' and $bhedam$ means `change'.
\rightHighlight{{\em Murchhana} or {\em grahabhedam} is a way of changing the scale which alters the basic scale structure or {\em thaat}.} The process literally means a change of position. Indeed, at first glance it appears to be no more than a shift of the base note and all the subsequent notes, as shown in {\em Table}~[\ref{t-murchana01}]~\footnote{The corresponding Carnatic names of the {\em raga}s are - Dhirashankarabaranam (Bilawal), Kharaharapriya (Kafi), Hanumantodi (Bhairavi), Mechakalyani (Kalyan), Harikamboji (Khammaj), Natabhairavi (Asavari).}. However, this looks completely counter-intuitive. We have just seen that Indian music allows for any shift of the home octave; by that argument, this shift is not likely to produce anything new. On the other hand, we also know that each $raga$ has its own specific set of $swara$s which gives it its particular flavour. Yet, we are moving from one particular $raga$ to another simply by shifting the $saptak$ by a number of $swara$s according to this prescription. Neither of these observations is explained if we read this table naively. Nor is it clear why the last shift indicated in the table does not correspond to an extant $raga$. \begin{figure} \hspace{-1.5cm} \includegraphics[width=13.5cm]{murchana.eps} \caption{Application of the rules of {\em Murchhana} to obtain {\em Raga Kafi} from {\em Raga Bilawal}.} \label{f-murchana} \vspace{-1.0cm} \end{figure} Here, we need to remember that the {\em shuddha swara}s or the pure tones are not equidistant (in a logarithmic sense) on an ETS. But if all the 12 tones ({\em shuddha + vikrita}) are taken together, then they give us 12 equidistant notes. Whenever we map a set of 7 {\em shuddha swara}s to another set of 7 {\em shuddha swara}s, we are not performing a fixed frequency shift (as was the case earlier, for a simple `scale-change') but something far more complicated.
Consider {\em raga Bilawal}, which is characterised by a pure $saptak$, i.e., by all of the seven {\em shuddha swara}s. Since the Indian $saptak$ corresponds to a major scale of the Western tradition, the notes are separated by a `T T S T T T S' pattern, where $T$ stands for a tone (a factor of $2^{\frac{2}{12}}$) and $S$ stands for a semi-tone (a factor of $2^{\frac{1}{12}}$). This is illustrated in Fig.[\ref{f-murchana}]. Now, let us follow the prescription given in {\em Table}~[\ref{t-murchana01}] and move the $swara$s by one position to obtain {\em raga Kafi}. Note that the $swara$s of the shifted $saptak$ are separated by a `T S T T T S T' pattern (second row of the figure). According to the Western definition, this is no longer a major scale. However, the Indian $saptak$ adheres strictly to the major scale. Therefore, if we define a major scale or a true $saptak$ with the shifted base note ($sa$), we obtain the pattern given by the third row in Fig.[\ref{f-murchana}]. Comparing the second and the third rows of the figure, it is easy to see that instead of all the {\em shuddha swara}s of {\em raga Bilawal} we now have two {\em vikrita swara}s (namely, $\mathcal{G}, \mathcal{N}$) for {\em raga Kafi}. This is how a new $raga$ is created by shifting the base note in the Indian tradition, and the process is known as $murchhana$. The process is illustrated for the entire $saptak$ in Figure~[\ref{t-murchana02}]. It needs to be noted that for all the shifts, barring the last one, $pa$ remains a {\em shuddha swara}. \rightHighlight{A {\em murchhana} that shifts the {\em atal swar `pa'} is not allowed.} This is as it should be: in the Indian tradition, $sa$ and $pa$ are the two notes that do not have any variant (and $pa$ is actually the `fifth' of the Pythagorean scale). Since this condition is not satisfied in the last shift ($pa$ is shifted to $\mathcal{M}$ here), it is not considered to yield a valid $raga$.
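The bookkeeping behind a {\em murchhana} is easy to mechanise. The sketch below (ours, not from the article) rotates the `T T S T T T S' step pattern by one position and compares the resulting scale degrees with a major scale rebuilt on the new base note; the mismatch reappears exactly at $ga$ and $ni$, the two {\em vikrita swara}s of {\em raga Kafi}:

```python
# Illustrative sketch (ours): the arithmetic of murchhana.  Rotate the
# major-scale step pattern 'T T S T T T S' by one position and compare the
# resulting semitone positions with a true major scale rebuilt on the new
# base note; the two mismatches are the vikrita swaras of raga Kafi.
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]   # T = 2 semitones, S = 1 semitone

def degrees(steps):
    """Cumulative semitone positions of the seven notes above sa (sa = 0)."""
    pos, out = 0, [0]
    for s in steps[:-1]:              # the last step just returns to sa'
        pos += s
        out.append(pos)
    return out

def murchhana(steps, shift):
    """Step pattern heard when the base note moves up by `shift` degrees."""
    return steps[shift:] + steps[:shift]

kafi = murchhana(MAJOR_STEPS, 1)      # 'T S T T T S T'
altered = [i for i, (a, b) in enumerate(zip(degrees(kafi), degrees(MAJOR_STEPS)))
           if a != b]
# altered == [2, 6]: the 3rd and 7th degrees (ga and ni) are lowered by a
# semitone, i.e. Kafi has komal ga and komal ni, as found in the text.
```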
Of course, the Indian classical genre is not confined to just these six {\em murchhana}s or six {\em raga}s. A multitude of new {\em raga}s can be created, remembering that the traditional Indian scale consists of not seven notes but at least 22 {\em shruti}s. Also, it is not mandatory to have 7 base notes; a {\em raga} can also be constructed with a smaller number of notes. Rather complex theories of music exist (differing significantly from one side of the Vindhyas to the other) that deal with the family of this large number of extant {\em raga}s. However, for the uninitiated, understanding this simple yet elegant logic underlying the {\bf \em sounds of music} of the {\em raga}s appears to be a beautiful exercise in itself. \begin{figure} \vspace{-5.0cm} \hspace{-5.0cm} \includegraphics[width=22.5cm,angle=-90]{table05.ps} \vspace{-3.5cm} \caption{Application of {\em Murchhana} through the entire {\em saptak}. Legends: {\em sp - saptak, es - extended saptak, sw - swara, ss - shifted saptak}} \label{t-murchana02} \end{figure} \section*{Acknowledgments} Thanks are due to the music exponents of Surajhankar, Pune (in particular, Ratnamanjari Munshi, Sripurna Mitra, Uma Roy \& Madhumita Ghosh) for introducing me to the world of music. I am also indebted to Sushruti Santhanam and Achintya Prahlad for illuminating discussions; and to Ushasi Roy Choudhury for help in fixing some of the scale related issues. \correspond{Sushan Konar \\ NCRA-TIFR \\ Pune 411007 \\ India \\ Email: sushan@ncra.tifr.res.in}
\section{Introduction} Cluster algebras were introduced by S. Fomin and A. Zelevinsky \cite{FZ} in order to develop a combinatorial approach to the study of problems of total positivity in algebraic groups and canonical bases in quantum groups. The link between cluster algebras and the representation theory of quivers was first revealed in \cite{MRZ}. In~\cite{BMRRT}, the authors introduced the cluster category of an acyclic quiver $Q$ (a quiver without oriented cycles) as the categorification of the corresponding cluster algebra. In order to show that a cluster category categorifies the corresponding cluster algebra, the Caldero-Chapoton map was defined by P. Caldero and F. Chapoton in \cite{CC}. Let $\mathcal{C}(Q)$ be the cluster category associated to an acyclic quiver $Q$. The Caldero-Chapoton map of an acyclic quiver $Q$ is a map $$X_?^Q: \mathrm{obj}({\mathcal{C}}(Q))\rightarrow\bbq(x_1,\cdots,x_n).$$ The map was extended by Y. Palu to Hom-finite 2-Calabi-Yau triangulated categories with a cluster tilting object (\cite{Palu}). As in \cite{Keller}, the cluster category can be defined for any small hereditary abelian category with finite dimensional Hom- and Ext-spaces. It is interesting to study cluster categories without cluster tilting objects and the corresponding cluster algebras. For example, the cluster category of a 1-Calabi-Yau abelian category contains no cluster tilting objects (in fact, no rigid objects at all). In this paper, we will focus on the simplest example of a cluster category without cluster tilting objects: the cluster category of a cyclic quiver. We first define an analogue of the Caldero-Chapoton map for a cyclic quiver. We prove a multiplication formula analogous to the cluster multiplication theorem for acyclic cluster algebras (\cite{CK2005}, \cite{XiaoXu}). As a corollary, the map is a cluster character in the sense of \cite{Palu}.
Let $\widetilde{A}_r $ be the cyclic quiver with $r$ vertices and ${\mathcal{C}}(\widetilde{A}_r)$ be its cluster category. Let $\mathcal{AH}$ be the subalgebra of $\bbq(x_1,\cdots, x_r)$ generated by $\{X_M \mid M\in\mathrm{ mod}\bbc \widetilde{A}_r\}$ and $\mathcal{EH}$ be the subalgebra of $\mathcal{AH}$ generated by $\{X_{M}\mid M\in \mathrm{mod}\bbc \widetilde{A}_r, \mathrm{Ext}^1_{{\mathcal{C}}(\widetilde{A}_r)}(M,M)=0\}.$ We call the algebra $\mathcal{EH}$ the cluster algebra of $\widetilde{A}_r$. We show that $\mathcal{AH}$ coincides with $\mathcal{EH}$ and construct a $\mathbb{Z}$-basis of $\mathcal{EH}$. \section{The cluster character for cyclic quivers} Let $k=\bbc$ be the field of complex numbers. Throughout the rest of this paper, we fix a cyclic quiver $Q=\widetilde{A}_r,$ i.e., a quiver with oriented cycles, where $Q_0=\{1,2,\cdots, r\}$: $$ \xymatrix@R=0.8pc{& & r\ar[dr]& &\\ &1\ar[ur]&\cdots\ar[l]& r-1\ar[l]&} $$ We denote by mod$kQ$ the category of finite-dimensional nilpotent representations of $kQ$. Let $\tau$ be the Auslander-Reiten translation functor. Let $E_1,\cdots,E_r$ be the simple modules at the vertices $1, \cdots, r$, respectively. Set $\mbox{\underline {dim}} E_i=s_i$ for $i=1, \cdots, r.$ We have $\tau E_{2}=E_{1}, \cdots, \tau E_1=E_r.$ The Auslander-Reiten quiver of $kQ$ is a tube of rank $r$ with $E_1, \cdots, E_r$ lying on the mouth of the tube. For $1\leq i\leq r$, we set $E_{i+r}=E_i$, and denote by $E_i[n]$ the unique nilpotent indecomposable representation with socle $E_i$ and length $n$. Set $E_i[0]=0$ for $i=1, \cdots, r.$ We note that any indecomposable $kQ$-module is of the form $E_i[j]$ for $i=1, \dots, r$ and $j\in {\mathbb N}$. Let $\textbf x=\{x_i|i\in Q_0\}$ be a family of indeterminates over $\mathbb{Z}$ and set $x_i=x_{i+mr}$ for $1\leq i\leq r, m\in \mathbb{Z}_{\geq 0}$.
Here we denote $\mathbb{Z}_{\geq 0}={\mathbb N}\sqcup \{0\}.$ By definition, the cluster category ${\mathcal{C}}={\mathcal{C}}(Q)$ is the orbit category ${\mathcal{D}}^b(\mathrm{mod} kQ)/\tau\circ [-1]$. It is a triangulated category by \cite[Theorem 9.9]{Keller}. In contrast to the cluster category of an acyclic quiver, the set of objects in ${\mathcal{C}}$ coincides with the set of objects in $\mathrm{mod}kQ$. Also, for any two indecomposable objects $M, N\in {\mathcal{C}}$, we have $$ \mathrm{Ext}^1_{{\mathcal{C}}}(M, N)= \mathrm{Ext}^1_{kQ}(M, N)\oplus \mathrm{Ext}^1_{kQ}(N, M). $$ It is possible that neither of the two terms on the right-hand side vanishes. We denote by $\langle -, -\rangle$ the Euler form on $\mathrm{mod}kQ$, i.e., for any $M, N$ in $\mathrm{mod}kQ$, $$\langle \mbox{\underline {dim}} M, \mbox{\underline {dim}} N\rangle:=\mathrm{dim}_{k}\mathrm{Hom}_{kQ}(M, N)-\mathrm{dim}_k\mathrm{Ext}^1_{kQ}(M, N). $$ It is well defined. Thus, in analogy with the Caldero-Chapoton map, we can define a map on $\mathrm{mod}kQ$ $$X_?: \mathrm{obj}(\mathrm{mod}kQ)\longrightarrow \mathbb{Z}[\mathbf{x}^{\pm 1}]$$ by mapping $M$ to $$ X_M = \sum_{\underline{e}} \chi(\mathrm{Gr}_{{\underline{e}}}(M)) \prod_{i \in Q_0} x_i^{-\left<{\underline{e}}, s_i\right>-\left <s_i, \underline{\mathrm{dim}}M - {\underline{e}}\right >} $$ where $\mathrm{Gr}_{{\underline{e}}}(M)$ is the ${\underline{e}}$-Grassmannian of $M,$ i.e., the variety of nilpotent submodules of $M$ with dimension vector ${\underline{e}},$ and we set $X_0=1.$ By the following Proposition \ref{DM}, we need not assume that $M$ is indecomposable. The fraction $X_M$ is called a generalized cluster variable.
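As a quick illustration of the definition (this worked example is ours; it uses the standard fact that, for nilpotent representations of the cyclic quiver, $\mathrm{Ext}^1_{kQ}(E_i, E_j)\neq 0$ exactly when $j=i-1$, coming from the extension $0\rightarrow E_{i-1}\rightarrow E_{i-1}[2]\rightarrow E_i\rightarrow 0$): for a simple module $E_l$ the only submodules are $0$ and $E_l$, so $$ X_{E_l} = \prod_{i \in Q_0} x_i^{-\left<s_i, s_l\right>} + \prod_{i \in Q_0} x_i^{-\left<s_l, s_i\right>} = \frac{x_{l+1}}{x_l}+\frac{x_{l-1}}{x_l} = \frac{x_{l-1}+x_{l+1}}{x_l}, $$ in agreement with the case $n=1$ of Proposition \ref{E[n]} below.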
\begin{Prop}\label{E[n]} With the above notation, we have $$ X_{E_l[n]}=\frac{x_{l+n}}{x_l}+\sum_{k=1}^{n-1}\frac{x_{l+n}x_{l+r-1}}{x_{l+k-1}x_{l+k}}+\frac{x_{l+r-1}}{x_{l+n-1}}$$ for $n\in {\mathbb N}$ and $l=1, \cdots, r.$ \end{Prop} \begin{proof} It is known that all submodules of $E_l[n]$ are $E_l[0], E_l[1], \cdots, E_l[n]$. Set $\underline{d}_{i,j}=\mbox{\underline {dim}} E_i[j].$ By definition, $$ X_{E_l[n]}=\sum_{k=0}^n\prod_{i\in Q_0}x_i^{-\left<\underline{d}_{l,k}, \underline{d}_{i,1}\right>-\left <\underline{d}_{i,1}, \underline{d}_{l+k, n-k}\right >}. $$ By the definition of the Euler form, we have $$ -\left<\underline{d}_{l,k}, \underline{d}_{i,1}\right>-\left <\underline{d}_{i,1}, \underline{d}_{l+k, n-k}\right>=-\mathrm{dim}_{k}\mathrm{Hom}(E_l[k], E_i)+\mathrm{dim}_{k}\mathrm{Ext}^1(E_l[k], E_i) $$ $$ -\mathrm{dim}_{k}\mathrm{Hom}(E_i, E_{l+k}[n-k])+\mathrm{dim}_{k}\mathrm{Ext}^1(E_i, E_{l+k}[n-k]). $$ If $k=0$, then $\prod_{i\in Q_0}x_i^{-\left<\underline{d}_{l,k}, \underline{d}_{i,1}\right>-\left <\underline{d}_{i,1}, \underline{d}_{l+k, n-k}\right >}=\frac{x_{l+n}}{x_{l}}.$ {\noindent} If $k=n,$ then $\prod_{i\in Q_0}x_i^{-\left<\underline{d}_{l,k}, \underline{d}_{i,1}\right>-\left <\underline{d}_{i,1}, \underline{d}_{l+k, n-k}\right >}=\frac{x_{l+r-1}}{x_{l+n-1}}.$ {\noindent} If $0<k<n,$ then $\prod_{i\in Q_0}x_i^{-\left<\underline{d}_{l,k}, \underline{d}_{i,1}\right>-\left <\underline{d}_{i,1}, \underline{d}_{l+k, n-k}\right >}=\frac{x_{l+n}x_{l+r-1}}{x_{l+k-1}x_{l+k}}.$ \end{proof} \begin{Prop}\label{DM} (1) For $M,N$ in $\mathrm{mod}kQ$, we have $$ X_{M}X_{N}=X_{M\oplus N}. $$ (2) Let $0\longrightarrow \tau M\longrightarrow B\longrightarrow M\longrightarrow 0$ be an almost split sequence in $\mathrm{mod}kQ$; then $$ X_{M}X_{\tau M}=X_{B}+1. $$ \end{Prop} \begin{proof} The proof is similar to \cite[Proposition 3.6]{CC}.
For (1), by definition, it is enough to prove that for any dimension vector $\underline{e}$, we have $$\chi(Gr_{\underline{e}}(M\oplus N))=\sum_{\underline{f}+\underline{g}=\underline{e}}\chi(Gr_{\underline{f}}(M))\chi(Gr_{\underline{g}}(N)).$$ Consider the natural morphism of varieties $$ f: \bigsqcup_{\underline{f}+\underline{g}=\underline{e}}Gr_{\underline{f}}(M)\times Gr_{\underline{g}}(N)\rightarrow Gr_{\underline{e}}(M\oplus N) $$ defined by sending $(M_1, N_1)$ to $M_1\oplus N_1$. Since $f$ is a monomorphism, we have $$ \sum_{\underline{f}+\underline{g}=\underline{e}}\chi(Gr_{\underline{f}}(M))\chi(Gr_{\underline{g}}(N))=\chi(\mathrm{Im}f). $$ On the other hand, we define an action of $\bbc^*$ on $Gr_{\underline{e}}(M\oplus N)$ by $$t. (m, n)=(tm, t^2n)$$ for $t\in \bbc^*$ and $m\in M, n\in N$. The set of fixed points of this action is exactly $\mathrm{Im}f.$ Hence, since the Euler characteristic of a variety with a $\bbc^*$-action equals that of its fixed-point set, $\chi(\mathrm{Im}f)=\chi(Gr_{\underline{e}}(M\oplus N)).$ This proves (1).\\ (2)\ Assume that $M=E_i[n]$. Up to relabelling the vertices, it is enough to prove $$X_{E_{1}[n]}X_{E_{2}[n]}=X_{E_{1}[n+1]}X_{E_{2}[n-1]}+1,$$ where $0\longrightarrow E_{1}[n]\longrightarrow E_{2}[n-1]\oplus E_{1}[n+1]\longrightarrow E_{2}[n]\longrightarrow 0$ is an almost split sequence in $\mathrm{mod}kQ$. This equation follows by a direct computation using Proposition \ref{E[n]}. \end{proof} Let $M, N$ be indecomposable $kQ$-modules satisfying $\mathrm{dim}_{k}\mathrm{Ext}^1_{kQ}(M, N)=\mathrm{dim}_{k}\mathrm{Hom}_{kQ}(N, \tau M)=1$. Assume that $M=E_i[j]$, $N=E_k[l]$. Then in ${\mathcal{C}}(Q)$, there are exactly two relevant triangles $$ E_{k}[l]\rightarrow E\rightarrow E_{i}[j]\rightarrow \tau E_{k}[l] $$ and $$ E_{i}[j]\rightarrow E'\rightarrow E_{k}[l]\xrightarrow{g} \tau E_{i}[j] $$ where $E\cong E_{k}[i+j-k]\oplus E_{i}[k+l-i]$ and $E'\cong \mathrm{ker}g\oplus \tau^{-1}\mathrm{coker}g.$ \begin{Thm}\label{exp} With the above notation, we have $$ X_MX_N=X_{E}+X_{E'}.
$$ \end{Thm} \begin{proof} Write $$X_M = \sum_{\underline{e}} \chi(\mathrm{Gr}_{{\underline{e}}}(M)) \prod_{i \in Q_0} x_i^{-\left<{\underline{e}}, s_i\right>-\left <s_i, \underline{\mathrm{dim}}M - {\underline{e}}\right >}$$ and $$ X_N = \sum_{\underline{e}'} \chi(\mathrm{Gr}_{{\underline{e}}'}(N)) \prod_{i \in Q_0} x_i^{-\left<{\underline{e}}', s_i\right>-\left <s_i, \underline{\mathrm{dim}}N - {\underline{e}}'\right >} $$ Then $$ X_MX_N=\sum_{{\underline{e}}, {\underline{e}}'}\prod_{i\in Q_0}x_i^{-\left<{\underline{e}}+{\underline{e}}', s_i\right>-\left <s_i, \underline{\mathrm{dim}}M +\underline{\mathrm{dim}N}- ({\underline{e}}+{\underline{e}}')\right>}. $$ Note that $\chi(\mathrm{Gr}_{{\underline{e}}}(L))$ is $1$ or $0$ for any indecomposable $kQ$-module $L$, since the submodules of $E_i[n]$ form a chain. Since $\mathrm{Ext}^1_{kQ}(M, N)\neq 0$, we have a short exact sequence $$ 0\rightarrow E_{k}[l]\xrightarrow{f_1} E\xrightarrow{f_2} E_{i}[j]\rightarrow 0. $$ Define a morphism of varieties $$ \phi: Gr_{\ud}(E)\rightarrow \bigsqcup_{{\underline{e}}+{\underline{e}}'=\ud}Gr_{{\underline{e}}}(M)\times Gr_{{\underline{e}}'}(N) $$ by sending a submodule $V\subseteq E$ to $(f_2(V), f_1^{-1}(V)).$ For $(M_1, N_1)\in Gr_{{\underline{e}}}(M)\times Gr_{{\underline{e}}'}(N)$, we consider the natural map $$ \beta': \mathrm{Ext}^{1}_{kQ}(M,N)\oplus \mathrm{Ext}^{1}_{kQ}(M_1,N_1)\rightarrow \mathrm{Ext}^{1}_{kQ}(M_1,N) $$ sending $(\varepsilon,\varepsilon')$ to $\varepsilon_{M_1}-\varepsilon'_{N}$, where $\varepsilon_{M_1}$ and $\varepsilon'_{N}$ are induced by the inclusions $M_1\subseteq M$ and $N_1\subseteq N,$ respectively, and the projection $$ p_0: \mathrm{Ext}^{1}_{kQ}(M,N)\oplus \mathrm{Ext}^{1}_{kQ}(M_1,N_1)\rightarrow \mathrm{Ext}^{1}_{kQ}(M,N).
$$ It is easy to check that $(M_1, N_1)\in \mathrm{Im}\phi$ if and only if $p_0(\mathrm{ker}\beta')\neq 0.$ Hence, we have $$ X_E=\sum_{{\underline{e}}, {\underline{e}}'; p_0(\mathrm{ker}\beta')\neq 0}\prod_{i\in Q_0}x_i^{-\left<{\underline{e}}+{\underline{e}}', s_i\right>-\left <s_i, \underline{\mathrm{dim}}M +\underline{\mathrm{dim}N} - ({\underline{e}}+{\underline{e}}')\right>}. $$ Assume that $$X_{E'} = \sum_{\underline{d}'_1, \ud'_2} \chi(\mathrm{Gr}_{\ud'_1}(K)) \chi(\mathrm{Gr}_{\ud'_2}(\tau^{-1}C))\prod_{i \in Q_0} x_i^{-\left<\ud'_1+\ud'_2, s_i\right>-\left <s_i, \underline{\mathrm{dim}}K+ \underline{\mathrm{dim}}\tau^{-1}C- \ud'_1-\ud'_2\right >} $$ Set $\ud^*=\underline{\mathrm{dim}}M-\underline{\mathrm{dim}}\tau^{-1}C.$ We have \begin{eqnarray} && \left <s_i, \underline{\mathrm{dim}}K+ \underline{\mathrm{dim}}\tau^{-1}C\right > \nonumber\\ &=& \left <s_i, \underline{\mathrm{dim}}N-\tau(\ud^*)+ \underline{\mathrm{dim}}M-\ud^*\right >\nonumber\\ &=& \left <s_i, \underline{\mathrm{dim}}M+\underline{\mathrm{dim}}N-\ud^*\right >+\left <\ud^*, s_i \right >. \nonumber \end{eqnarray} Hence, $X_{E'}$ can be reformulated as $$ \sum_{\underline{d}'_1, \ud'_2} \prod_{i \in Q_0} x_i^{-\left<\ud'_1+\ud'_2+\ud^*, s_i\right>-\left <s_i, \underline{\mathrm{dim}}M+ \underline{\mathrm{dim}}N- (\ud'_1+\ud'_2+\ud^*)\right >} $$ Since $\mathrm{dim}_{k}\mathrm{Hom}_{kQ}(N, \tau M)=1$, there is only one element in $\mathbb{P}\mathrm{Hom}_{kQ}(N, \tau M)$ with the representative $g$. We have a long exact sequence $$ \xymatrix{0\ar[r]&K\ar[r]&N\ar[r]^g&\tau M\ar[r]&C\ar[r]&0} $$ Given submodules $K_1, C_1$ of $K, C$, respectively, we have the commutative diagram $$ \xymatrix{ 0\ar[r]& K\ar[r]\ar[d]& N\ar[r]^{g}\ar[d]& \tau M\ar[r]& C\ar[r]& 0\\ 0\ar[r]& K/K_1\ar[r]& N/K_1\ar[r]^-{g'}& \tau M_1\ar[r]\ar[u] & C_1\ar[r]\ar[u]& 0} $$ where $\tau M_1$ is the corresponding pullback. 
Define a morphism of varieties $$\phi': \bigsqcup_{ \ud_1'+\ud_2'+\ud^*=\ud'}Gr_{\ud'_1}(K)\times Gr_{\ud'_2}(\tau^{-1}C)\rightarrow \bigsqcup_{{\underline{e}}+{\underline{e}}'=\ud'}Gr_{{\underline{e}}}(M)\times Gr_{{\underline{e}}'}(N) $$ by sending $(K_1, \tau^{-1}(C_1))$ to $(M_1, K_1)$. Checking the above diagram, we know that $(M_1, N_1)\in \mathrm{Im}\phi'$ if and only if $\mathrm{Hom}_{kQ}(N/N_1, \tau M_1)\neq 0.$ Therefore, we obtain $$ X_{E'}=\sum_{{\underline{e}}, {\underline{e}}'; \mathrm{Hom}_{kQ}(N/N_1, \tau M_1)\neq 0}\prod_{i\in Q_0}x_i^{-\left<{\underline{e}}+{\underline{e}}', s_i\right>-\left <s_i, \underline{\mathrm{dim}}M +\underline{\mathrm{dim}N} - ({\underline{e}}+{\underline{e}}')\right>}. $$ Consider the dual of $\beta'$: $$ \beta: \mathrm{Hom}_{kQ}(N, \tau M_1)\rightarrow \mathrm{Hom}_{kQ}(N, \tau M)\oplus \mathrm{Hom}_{kQ}(N_1, \tau M_1). $$ Then $$ (p_0(\mathrm{ker}\beta'))^{\perp}= \mathrm{Im}\beta\bigcap \mbox{Hom}(N,\tau M)\simeq \mbox{Hom}(N/N_1,\tau M_1). $$ We obtain that $$ \mathrm{dim}_{k}(p_0(\mathrm{ker}\beta'))+\mathrm{dim}_{k}\mathrm{Hom}(N/N_1, \tau M_1)=\mathrm{dim}_{k}\mathrm{Ext}^1_{kQ}(M, N)=1. $$ Hence, any $(M_1, N_1)$ belongs to exactly one of $\mathrm{Im}\phi$ and $\mathrm{Im}\phi'$, for a suitable $\ud$ or $\ud'$. This completes the proof. \end{proof} Following the definition of a cluster character in \cite{Palu}, we can easily check the following corollary. \begin{Cor} The Caldero-Chapoton map for a cyclic quiver is a cluster character. \end{Cor} We will construct some inductive formulas in the next section. For convenience, we write down the following corollary. \begin{Cor}\label{DM1} With the above notation, we have $$ (1)\ X_{E_{i+n}}X_{E_{i}[n]}=X_{E_{i}[n+1]}+X_{E_{i}[n-1]} $$ $$ (2)\ X_{E_i}X_{E_{i+1}[n]}=X_{E_{i}[n+1]}+X_{E_{i+2}[n-1]}. $$ \end{Cor} \section{Inductive multiplication formulas} In this section, we will give inductive multiplication formulas for any two generalized cluster variables on $\mathrm{mod}kQ$.
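Before stating them, we note that both identities of Corollary \ref{DM1}, expanded via the closed formula of Proposition \ref{E[n]}, are easy to spot-check by machine. The following sketch (ours, not part of the paper; it evaluates the Laurent expressions at exact rational values rather than symbolically) checks them for $r=3$:

```python
# Sketch (ours, not from the paper): check Corollary DM1 for the cyclic
# quiver with r = 3 by evaluating the closed formula of Proposition E[n]
#   X_{E_l[n]} = x_{l+n}/x_l
#              + sum_{k=1}^{n-1} x_{l+n} x_{l+r-1} / (x_{l+k-1} x_{l+k})
#              + x_{l+r-1}/x_{l+n-1},      indices taken modulo r,
# at exact rational values of x_1, ..., x_r.
from fractions import Fraction

r = 3
vals = [Fraction(2), Fraction(3), Fraction(5)]    # arbitrary nonzero choices

def x(i):                                         # 1-based, x_i = x_{i+r}
    return vals[(i - 1) % r]

def X(l, n):                                      # X_{E_l[n]},  X_{E_l[0]} = 1
    if n == 0:
        return Fraction(1)
    total = x(l + n) / x(l) + x(l + r - 1) / x(l + n - 1)
    for k in range(1, n):
        total += x(l + n) * x(l + r - 1) / (x(l + k - 1) * x(l + k))
    return total

for i in range(1, r + 1):
    for n in range(1, 6):
        # DM1(1): X_{E_{i+n}} X_{E_i[n]} = X_{E_i[n+1]} + X_{E_i[n-1]}
        assert X(i + n, 1) * X(i, n) == X(i, n + 1) + X(i, n - 1)
        # DM1(2): X_{E_i} X_{E_{i+1}[n]} = X_{E_i[n+1]} + X_{E_{i+2}[n-1]}
        assert X(i, 1) * X(i + 1, n) == X(i, n + 1) + X(i + 2, n - 1)
```

Evaluating at a single rational point is, of course, only a consistency check rather than a proof; a computer-algebra system can verify the same identities symbolically in the same way.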
Note that these inductive multiplication formulas are an analogue of those for tubes in \cite{DXX} for acyclic cluster algebras. \begin{Thm}\label{16} Let $i,j, k,l,m$ and $r$ be in $\bbz$ such that $1\leq k\leq mr+l$, $0\leq l\leq r-1$, $1\leq i,j\leq r$, $m\geq 0$. {\noindent} (1) When $j\leq i$: 1) for $k+i\geq r+j$, we have $X_{E_i[k]}X_{E_j[mr+l]}=X_{E_i[(m+1)r+l+j-i]}X_{E_j[k+i-r-j]}+X_{E_i[r+j-i-1]}X_{E_{k+i+1}[(m+1)r+l+j-k-i-1]},$ 2) for $k+i< r+j$ and $i\leq l+j\leq k+i-1$, we have $X_{E_i[k]}X_{E_j[mr+l]}=X_{E_j[mr+k+i-j]}X_{E_i[l+j-i]}+X_{E_j[mr+i-j-1]}X_{E_{l+j+1}[k+i-l-j-1]},$ 3) otherwise, i.e., when there is no extension between $E_i[k]$ and $E_j[mr+l]$, we have $X_{E_i[k]}X_{E_j[mr+l]}=X_{E_i[k]\oplus E_j[mr+l]}$. {\noindent} (2) When $j> i$: 1) for $k\geq j-i,$ we have $X_{E_i[k]}X_{E_j[mr+l]}=X_{E_i[j-i-1]}X_{E_{k+i+1}[mr+l+j-k-i-1]}+X_{E_i[mr+l+j-i]}X_{E_j[k+i-j]},$ 2) for $k< j-i$ and $i\leq l+j-r\leq k+i-1$, we have $X_{E_i[k]}X_{E_j[mr+l]}=X_{E_j[(m+1)r+k+i-j]}X_{E_i[l+j-r-i]}+X_{E_j[(m+1)r+i-j-1]}X_{E_{l+j+1}[k+r+i-l-j-1]},$ 3) otherwise, i.e., when there is no extension between $E_i[k]$ and $E_j[mr+l]$, we have $X_{E_i[k]}X_{E_j[mr+l]}=X_{E_i[k]\oplus E_j[mr+l]}.$ \end{Thm} \begin{proof} We only prove (1); the proof of (2) is entirely similar.
1) When $k=1$, the conditions $k+i\geq r+j$ and $1\leq j\leq i\leq r$ force $i=r$ and $j=1$.\\ Then by Proposition \ref{DM} and Corollary \ref{DM1}, we have $$X_{E_r}X_{E_1[mr+l]}=X_{E_r[mr+l+1]}+X_{E_2[mr+l-1]}.$$ When $k=2$, the conditions $k+i\geq r+j$ and $1\leq j\leq i\leq r$ force $i=r$ or $i=r-1$.\\ For $i=r$, we have $j=1$ or $j=2$.\\ In the case $i=r$ and $j=1$, we have \begin{eqnarray} && X_{E_r[2]}X_{E_1[mr+l]} \nonumber\\ &=& (X_{E_r}X_{E_1}-1)X_{E_1[mr+l]} \nonumber\\ &=& X_{E_1}(X_{E_r[mr+l+1]}+X_{E_2[mr+l-1]})-X_{E_1[mr+l]} \nonumber\\ &=& X_{E_1}X_{E_r[mr+l+1]}+(X_{E_1[mr+l]}+X_{E_3[mr+l-2]})-X_{E_1[mr+l]} \nonumber\\ &=& X_{E_1}X_{E_r[mr+l+1]}+X_{E_3[mr+l-2]}.\nonumber \end{eqnarray} In the case $i=r$ and $j=2$, we have \begin{eqnarray} && X_{E_r[2]}X_{E_2[mr+l]} \nonumber\\ &=& (X_{E_r}X_{E_1}-1)X_{E_2[mr+l]} \nonumber\\ &=& X_{E_r}(X_{E_1[mr+l+1]}+X_{E_3[mr+l-1]})-X_{E_2[mr+l]} \nonumber\\ &=& X_{E_r[mr+l+2]}+(X_{E_2[mr+l]}+X_{E_r}X_{E_3[mr+l-1]})-X_{E_2[mr+l]} \nonumber\\ &=& X_{E_r[mr+l+2]}+X_{E_r}X_{E_3[mr+l-1]}.\nonumber \end{eqnarray} For $i=r-1$, necessarily $j=1$: \begin{eqnarray} && X_{E_{r-1}[2]}X_{E_1[mr+l]} \nonumber\\ &=& (X_{E_{r-1}}X_{E_r}-1)X_{E_1[mr+l]} \nonumber\\ &=& X_{E_{r-1}}(X_{E_r[mr+l+1]}+X_{E_2[mr+l-1]})-X_{E_1[mr+l]} \nonumber\\ &=& (X_{E_{r-1}[mr+l+2]}+X_{E_1[mr+l]})+X_{E_{r-1}}X_{E_2[mr+l-1]}-X_{E_1[mr+l]} \nonumber\\ &=& X_{E_{r-1}[mr+l+2]}+X_{E_{r-1}}X_{E_2[mr+l-1]}.\nonumber \end{eqnarray}\\ Now suppose the formula holds for $k\leq n$; then by induction we have \begin{eqnarray*} && X_{E_i[n+1]}X_{E_j[mr+l]}\nonumber \\ &=& (X_{E_i[n]}X_{E_{i+n}}-X_{E_i[n-1]})X_{E_j[mr+l]}\nonumber \\ &=& X_{E_{i+n}}(X_{E_i[n]}X_{E_j[mr+l]})-X_{E_i[n-1]}X_{E_j[mr+l]} \nonumber\\ &=& X_{E_{i+n}}(X_{E_i[(m+1)r+l+j-i]}X_{E_j[n+i-r-j]}+X_{E_i[r+j-i-1]}X_{E_{n+i+1}[(m+1)r+l+j-n-i-1]})\nonumber \\ && -(X_{E_i[(m+1)r+l+j-i]}X_{E_j[n+i-r-j-1]}+X_{E_i[r+j-i-1]}X_{E_{n+i}[(m+1)r+l+j-n-i]}) \nonumber\\ &=&
X_{E_i[(m+1)r+l+j-i]}(X_{E_j[n+i+1-r-j]}+X_{E_j[n+i-r-j-1]}) \nonumber\\ && +X_{E_i[r+j-i-1]}(X_{E_{n+i}[(m+1)r+l+j-n-i]} +X_{E_{n+i+2}[(m+1)r+l+j-n-i-2]}) \nonumber\\ && -(X_{E_i[(m+1)r+l+j-i]}X_{E_j[n+i-r-j-1]}+X_{E_i[r+j-i-1]}X_{E_{n+i}[(m+1)r+l+j-n-i]})\nonumber \\ &=& X_{E_i[(m+1)r+l+j-i]}X_{E_j[n+i+1-r-j]}+X_{E_i[r+j-i-1]}X_{E_{n+i+2}[(m+1)r+l+j-n-i-2]}. \end{eqnarray*} 2) When $k=1$, the condition $i\leq l+j\leq k+i-1$ forces $l+j=i$.\\ Then by Proposition \ref{DM} and Corollary \ref{DM1}, we have $$X_{E_{i}}X_{E_j[mr+l]}=X_{E_{l+j}}X_{E_j[mr+l]}=X_{E_j[mr+l+1]}+X_{E_j[mr+l-1]}.$$ When $k=2$, the condition $i\leq l+j\leq k+i-1$ gives $i\leq l+j\leq i+1$, so $l+j=i$ or $l+j=i+1$.\\ For $i=l+j$, we have \begin{eqnarray*} && X_{E_{i}[2]}X_{E_j[mr+l]} \nonumber\\ &=& X_{E_{l+j}[2]}X_{E_j[mr+l]}\nonumber \\ &=& (X_{E_{l+j}}X_{E_{l+j+1}}-1)X_{E_j[mr+l]}\nonumber \\ &=& (X_{E_j[mr+l+1]}+X_{E_j[mr+l-1]})X_{E_{l+j+1}}-X_{E_j[mr+l]}\nonumber \\ &=& X_{E_j[mr+l+2]}+X_{E_j[mr+l]}+X_{E_{l+j+1}}X_{E_j[mr+l-1]}-X_{E_j[mr+l]}\nonumber \\ &=& X_{E_j[mr+l+2]}+X_{E_{l+j+1}}X_{E_j[mr+l-1]}. \end{eqnarray*} For $i+1=l+j$, we have \begin{eqnarray*} && X_{E_{i}[2]}X_{E_j[mr+l]}\nonumber \\ &=& X_{E_{l+j-1}[2]}X_{E_j[mr+l]}\nonumber \\ &=& (X_{E_{l+j-1}}X_{E_{l+j}}-1)X_{E_j[mr+l]}\nonumber \\ &=& (X_{E_j[mr+l+1]}+X_{E_j[mr+l-1]})X_{E_{l+j-1}}-X_{E_j[mr+l]} \nonumber \\ &=& X_{E_j[mr+l+1]}X_{E_{l+j-1}}+(X_{E_j[mr+l]}+X_{E_j[mr+l-2]})-X_{E_j[mr+l]}\nonumber \\ &=& X_{E_j[mr+l+1]}X_{E_{l+j-1}}+X_{E_j[mr+l-2]}.
\end{eqnarray*} Suppose the formula holds for $k\leq n$; then by induction we have \begin{eqnarray*} && X_{E_i[n+1]}X_{E_j[mr+l]}\nonumber \\ &=& (X_{E_i[n]}X_{E_{i+n}}-X_{E_i[n-1]})X_{E_j[mr+l]}\nonumber \\ &=& (X_{E_i[n]}X_{E_j[mr+l]})X_{E_{i+n}}-X_{E_i[n-1]}X_{E_j[mr+l]}\nonumber\\ &=& (X_{E_j[mr+n+i-j]}X_{E_i[l+j-i]}+X_{E_j[mr+i-j-1]}X_{E_{l+j+1}[n+i-l-j-1]})X_{E_{i+n}} \nonumber\\ && -(X_{E_j[mr+n+i-j-1]}X_{E_i[l+j-i]}+X_{E_j[mr+i-j-1]}X_{E_{l+j+1}[n+i-l-j-2]})\nonumber \\ &=& (X_{E_j[mr+n+i+1-j]}+X_{E_j[mr+n+i-j-1]})X_{E_i[l+j-i]} \nonumber\\ && +(X_{E_{l+j+1}[n+i-l-j]}+X_{E_{l+j+1}[n+i-l-j-2]})X_{E_j[mr+i-j-1]} \nonumber\\ && -(X_{E_j[mr+n+i-j-1]}X_{E_i[l+j-i]}+X_{E_j[mr+i-j-1]}X_{E_{l+j+1}[n+i-l-j-2]})\nonumber \\ &=& X_{E_j[mr+n+i+1-j]}X_{E_i[l+j-i]}+X_{E_j[mr+i-j-1]}X_{E_{l+j+1}[n+i-l-j]}. \end{eqnarray*} 3) It is trivial by the definition of the Caldero-Chapoton map. \end{proof} \section{A $\mathbb{Z}$-basis for cyclic quivers} In this section, we will focus on studying the following set $$\mathcal{B}(Q)=\{X_{R}|\mathrm{Ext}_{k Q}^{1}(R,R)=0\}.$$ We prove that $\mathcal{B}(Q)$ is a $\mathbb{Z}$-basis of the algebra $\mathcal{AH}(Q)$ generated by all these generalized cluster variables. We first give the following definition. \begin{Definition}\label{p} For $M, N\in \mathrm{mod}kQ$ with $\mathrm{\underline{dim}}M=(m_{1},\cdots,m_{r})$ and $\mathrm{\underline{dim}}N=(n_{1},\cdots,n_{r})$, we write $\mathrm{\underline{dim}}M\preceq \mathrm{\underline{dim}}N$ if $m_{i}\leq n_{i}$ for $1\leq i\leq r$. Moreover, if in addition there exists some $i$ such that $m_{i}< n_{i}$, then we write $\mathrm{\underline{dim}}M\prec \mathrm{\underline{dim}}N.$ \end{Definition} \begin{Remark}\label{7} It is easy to see that $\mathrm{\underline{dim}}E_{i+2}[n-1]\prec \mathrm{\underline{dim}}E_{i}[n+1]$ and $\mathrm{\underline{dim}}E_{i}[n-1]\prec \mathrm{\underline{dim}}E_{i}[n+1]$ in Corollary \ref{DM1}.
\end{Remark} \begin{Lemma}\label{6} Let $T_1, T_2$ be $kQ$-modules such that $\mathrm{\underline{dim}}T_1=\mathrm{\underline{dim}}T_2$. Then we have $$X_{T_1}=X_{T_2}+\sum_{\mathrm{\underline{dim}}R\prec \mathrm{\underline{dim}} T_2}a_{R}X_{R}$$ where $R\in \mathrm{mod}\, kQ$ and $a_{R}\in \mathbb{Z}$. \end{Lemma} \begin{proof} Suppose $T_1=T_{11}\oplus T_{12}\oplus \cdots \oplus T_{1m}$ and $\mathrm{\underline{dim}}T_1=(d_1,d_2,\cdots,d_r)$, where each $T_{1i}$ $(1\leq i\leq m)$ is an indecomposable regular module with quasi-socle $E_{i_{1}}$ and $\mathrm{\underline{dim}}T_{1i}=(d_{1i},d_{2i},\cdots, d_{ri})$ for $1\leq i\leq m.$ Thus, $(d_1,d_2,\cdots,d_r)=\sum_{i=1}^{m}(d_{1i},d_{2i},\cdots, d_{ri}).$ By Corollary \ref{DM1} and Theorem \ref{16}, we have \begin{eqnarray*} && X^{d_1}_{E_1}X^{d_2}_{E_2}\cdots X^{d_r}_{E_r}\nonumber \\ &=& \prod_{i=1}^{m}(X_{E_{i_{1}}}X_{E_{i_{1}+1}}X_{E_{i_{1}+2}}\cdots X_{E_{i_{1}+d_{1i}+\cdots+d_{ri}-1}}) \nonumber \\ &=& \prod_{i=1}^{m}(X_{T_{1i}}+\sum_{\mathrm{\underline{dim}}L'\prec \mathrm{\underline{dim}}T_{1i}}a_{L'}X_{L'})\nonumber \\ &=& X_{T_{1}}+\sum_{\mathrm{\underline{dim}}L\prec \mathrm{\underline{dim}}T_{1}}a_{L}X_{L}, \end{eqnarray*} where $a_{L'},a_{L}$ are integers. Similarly, we have $$X^{d_1}_{E_1}X^{d_2}_{E_2}\cdots X^{d_r}_{E_r}= X_{T_2}+\sum_{\mathrm{\underline{dim}}M\prec \mathrm{\underline{dim}}T_2}b_{M}X_{M}$$ where the $b_{M}$ are integers. Thus $$X_{T_{1}}+\sum_{\mathrm{\underline{dim}}L\prec \mathrm{\underline{dim}}T_{1}}a_{L}X_{L}=X_{T_2}+\sum_{\mathrm{\underline{dim}}M\prec \mathrm{\underline{dim}}T_2}b_{M}X_{M}.$$ Therefore, we have $$X_{T_1}=X_{T_2}+\sum_{\mathrm{\underline{dim}}R\prec \mathrm{\underline{dim}}T_2}a_{R}X_{R}$$ where the $a_{R}$ are integers. \end{proof} We illustrate the method used in Lemma \ref{6} with the following example. \begin{Example} Consider $r=4$, $X_{E_2[5]}$ and $X_{E_1[4]\oplus E_2}$.
We can see that $\mathrm{\underline{dim}}(E_1[4]\oplus E_2)=\mathrm{\underline{dim}}E_2[5]=\mathrm{\underline{dim}}(E_1\oplus 2E_2\oplus E_3\oplus E_4)$, so the conditions of Lemma \ref{6} are satisfied. Thus, for $X_{E_1[4]\oplus E_2}$, we have \begin{eqnarray*} X_{E_1}X^{2}_{E_2}X_{E_3}X_{E_4}&=& X_{E_1}X_{E_2}X_{E_3}X_{E_4}X_{E_2}\nonumber \\ &=& (X_{E_1[2]}+1)X_{E_3}X_{E_4}X_{E_2}\nonumber \\ &=& (X_{E_1[3]}+X_{E_1})X_{E_4}X_{E_2}+X_{E_3}X_{E_4}X_{E_2}\nonumber \\ &=& (X_{E_1[4]}+X_{E_1[2]})X_{E_2}+X_{E_1}X_{E_4}X_{E_2}+X_{E_3}X_{E_4}X_{E_2}\nonumber \\ &=& X_{E_1[4]\oplus E_2}+X_{E_1[2]\oplus E_2}+(X_{E_1[2]}+1)X_{E_4}+(X_{E_2[2]}+1)X_{E_4}\nonumber \\ &=& X_{E_1[4]\oplus E_2}+X_{E_1[2]\oplus E_2}+X_{E_1[2]\oplus E_4}+X_{E_2[3]}+X_{E_2}+2X_{E_4}. \end{eqnarray*} Similarly, for $X_{E_2[5]}$, we have \begin{eqnarray*} X_{E_1}X^{2}_{E_2}X_{E_3}X_{E_4}&=& X_{E_2}X_{E_3}X_{E_4}X_{E_1}X_{E_2}\nonumber \\ &=& (X_{E_2[2]}+1)X_{E_4}X_{E_1}X_{E_2}\nonumber \\ &=& (X_{E_2[3]}+X_{E_2})X_{E_1}X_{E_2}+X_{E_4}X_{E_1}X_{E_2}\nonumber \\ &=& (X_{E_2[4]}+X_{E_2[2]})X_{E_2}+X_{E_2}X_{E_1}X_{E_2}+X_{E_4}X_{E_1}X_{E_2}\nonumber \\ &=& X_{E_2[5]}+X_{E_2[3]}+X_{E_2[2]\oplus E_2}+(X_{E_1[2]}+1)X_{E_2}+(X_{E_1[2]}+1)X_{E_4}\nonumber \\ &=& X_{E_2[5]}+X_{E_2[3]}+X_{E_2[2]\oplus E_2}+X_{E_1[2]\oplus E_2}+X_{E_2}+X_{E_1[2]\oplus E_4}+X_{E_4}. \end{eqnarray*} Hence, $X_{E_1[4]\oplus E_2}=X_{E_2[5]}+X_{E_2[2]\oplus E_2}-X_{E_4}$, where $\mathrm{\underline{dim}}(E_2[2]\oplus E_2)\prec \mathrm{\underline{dim}}E_2[5]$ and $\mathrm{\underline{dim}}E_4\prec \mathrm{\underline{dim}}E_2[5].$ \end{Example} \begin{Lemma}\label{lem} $$X_{E_i[r]}=X_{E_{i+1}[r-2]}+2.$$ \end{Lemma} \begin{proof} This follows directly from Proposition \ref{E[n]}.
\end{proof} \begin{Lemma}\label{7} For any $M, N\in \mathrm{mod}\, kQ$, $X_MX_N$ is a $\mathbb{Z}$-linear combination of the elements in $\mathcal{B}(Q).$ \end{Lemma} \begin{proof} By Theorem \ref{16}, we know that $X_{M}X_{N}$ must be a $\mathbb{Z}$-linear combination of elements in the set $$\{X_{T\oplus R}\mid\mathrm{Ext}_{k Q}^{1}(T,R)=\mathrm{Ext}_{k Q}^{1}(R,T)=0\}$$ where $R$ is $0$ or any regular exceptional module and $T$ is $0$ or any indecomposable regular module with self-extension. By Lemma \ref{6} and Lemma \ref{lem}, we then find that $X_{M}X_{N}$ is in fact a $\mathbb{Z}$-linear combination of elements of the set $\mathcal{B}(Q)$. \end{proof} \begin{Prop}\label{theorem1} Let $\Omega=\{A=(a_{ij})\in M_{r\times r}(\mathbb{Z}_{\geq 0})\mid a_{i,r}\cdots a_{i,r+i-2}\neq 0\}$, where $a_{i, r+s}=a_{i,s}$ for $i\geq 2$ and $s\in \mathbb{N}$. Let $E(A, i)=E_{i}^{a_{i,1}}\oplus \cdots \oplus E_{i+r-2}^{a_{i, r-1}}$ for $A\in \Omega$ and $i=1, \cdots, r.$ Then the set $\mathcal{B'}(Q)=\{X_{E(A, i)}\mid i=1, \cdots, r, A\in \Omega\}$ is linearly independent over $\mathbb{Z}.$ \end{Prop} \begin{proof} Suppose that there is an identity $S:=\sum_{A\in \Omega_0, i=1,\cdots, r}n(A,i)X_{E(A, i)}=0$, where $\Omega_0$ is a finite subset of $\Omega$ and $n(A,i)\in \mathbb{Z}$ is nonzero for $i=1, \cdots, r.$ Note that $$ X_{E(A, i)}=\prod_{j=i}^{i+r-2}\Big(\frac{x_{j+1}+x_{j-1}}{x_j}\Big)^{a_{i, j-i+1}} $$ for $i=1, \cdots, r.$ Define a lexicographic order by setting $x_r<x_1<x_2<\cdots <x_{r-1}$ and $x_i^{a}<x_i^b$ if $a<b$. Set $l_r(A)=\max \{a_{2, r-1}, \cdots, a_{r,1}\}$ and $l_r=\max\{l_r(A)\}_{A\in \Omega_0}$.
Then $l_r\neq 0.$ Note that $a_{i, r-i+1}$ is exactly the exponent of $X_{E_{r}}$ in the expression of $X_{E(A, i)}$ for $i=2, \cdots, r.$ Then the expression of $\sum_{A\in \Omega_0, i=1,\cdots, r}n(A,i)X_{E(A, i)}$ contains a unique part of the form $\frac{L(x_1,\cdots,x_{r-1})}{x^{l_r}_r}$, which has the minimal exponent at $x_r$, where $L(x_1,\cdots,x_{r-1})$ is a Laurent polynomial in $x_1,\cdots,x_{r-1}$. In fact, $\frac{L(x_1,\cdots,x_{r-1})}{x^{l_r}_r}$ is a part of the sum $\sum_{i=2,\cdots, r, A\in\Omega_0; a_{i,r-i+1}=l_r}n(A, i)X_{E(A, i)}$. Note that the terms in this sum have a common factor $X^{l_r}_{E_{r}}$; thus we have the identity $$\sum_{i=2,\cdots, r, A\in\Omega_0; a_{i,r-i+1}=l_r}n(A, i)X_{E(A, i)}=(\ast)X^{l_r}_{E_{r}},$$ where we denote $\frac{1}{X^{l_r}_{E_{r}}}\sum_{i=2,\cdots, r, A\in\Omega_0; a_{i,r-i+1}=l_r}n(A, i)X_{E(A, i)}$ by $(\ast)$. Now we set $l_{r+1}(A)=\max \{a_{3, r-1}, \cdots, a_{r,2}\}$ and $l_{r+1}=\max\{l_{r+1}(A)\}_{A\in \Omega_0}$. Then $l_{r+1}\neq 0.$ In the same way as above, we know that the expression of the term $(\ast)$ contains a unique part of the form $\frac{L(x_2,\cdots,x_{r-1})}{x^{l_{r+1}}_1}$, which has the minimal exponent at $x_1=x_{r+1}$, where $L(x_2,\cdots,x_{r-1})$ is a Laurent polynomial in $x_2,\cdots,x_{r-1}$. Note that $\frac{L(x_2,\cdots,x_{r-1})}{x^{l_{r+1}}_1}X^{l_r}_{E_{r}}$ is actually a part of the following term $$\sum_{i=3,\cdots, r, A\in\Omega_0; a_{i,r-i+1}=l_r, a_{i, r-i+2}=l_{r+1}}n(A, i)X_{E(A, i)}=(\ast\ast)X^{l_{r+1}}_{E_{1}}X^{l_r}_{E_{r}},$$ where we denote $\frac{1}{X^{l_{r+1}}_{E_{1}}X^{l_r}_{E_{r}}}\sum_{i=3,\cdots, r, A\in\Omega_0; a_{i,r-i+1}=l_r, a_{i, r-i+2}=l_{r+1}}n(A, i)X_{E(A, i)}$ by $(\ast\ast)$. Continuing in this way, we deduce that some $n(A, i)=0$, a contradiction.
\end{proof} \begin{Thm}\label{theorem2} The set $\mathcal{B}(Q)$ is a $\mathbb{Z}$-basis of the algebra $\mathcal{AH}(Q).$ \end{Thm} \begin{proof} It is easy to check that the elements of $\mathcal{B'}(Q)$ and the elements of $\mathcal{B}(Q)$ are related by a unipotent transition matrix. Then by Proposition \ref{theorem1} and Lemma \ref{7}, we conclude that $\mathcal{B}(Q)$ is a $\mathbb{Z}$-basis of the algebra $\mathcal{AH}(Q).$ \end{proof} \begin{Example}\label{exam} (1) Consider $r=1$; then we can calculate $$X_{E_1}=2,\ X_{E_1[2]}=3,\ X_{E_1[3]}=4,\ \cdots,\ X_{E_1[n]}=n+1,\cdots$$ It is obvious that $\mathcal{B}(Q)=\{1\}$. (2) Consider $r=2$; then we can calculate $$X_{E_1}=\frac{2x_2}{x_1},\ X_{E_1[2]}=3,\ \cdots,\ X_{E_1[2n-1]}=\frac{2nx_2}{x_1},\ X_{E_1[2n]}=2n+1,\cdots$$ $$X_{E_2}=\frac{2x_1}{x_2},\ X_{E_2[2]}=3,\ \cdots,\ X_{E_2[2n-1]}=\frac{2nx_1}{x_2},\ X_{E_2[2n]}=2n+1,\cdots$$ It is obvious that $\mathcal{B}(Q)=\{X^{m}_{E_1},X^{n}_{E_2}\mid m,n\in \mathbb{Z}_{\geq 0}\}$. \end{Example} \section*{Acknowledgements} The authors are grateful to Professor Jie Xiao for helpful discussions.
\section{Introduction} Discrete-time quantum walks (DQWs) correspond to the one-particle sector of quantum cellular automata \cite{Arrighi2019, Farrelly20}. They can simulate numerous physical systems, ranging from particles in arbitrary Yang-Mills gauge fields \cite{AMBD16} and massless Dirac fermions near black holes \cite{DMBD13b}, to charged quantum fluids \cite{ZDMD22}, see also Refs.\ \cite{BW11, AAMS12, Shikano14, BMPT15, MP16, BMP16, RAA17, MAMP18, AMMP19, JDW19, AP22} for other physics-oriented applications. Moreover, DQWs can be seen as quantum analogs of classical random walks (CRWs) \cite{Kempe03}, and can be used to build spatial-search algorithms that outperform \cite{AKR05} those built with CRWs. Continuous-time quantum walks can also be used for such a purpose \cite{CG04}. In $3$ spatial dimensions, DQW-based algorithms \cite{AKR05, CG04} find the location of a marked node with a constant localization probability\footnote{We call ``localization probability'' the probability to be at the marked node, or nodes if there are several of them.} after $O(\sqrt{N})$ time steps, with $N$ the number of nodes of the three-dimensional grid, and this is exactly the bound reached by Grover's algorithm \cite{Grover96,LMP04, ADMP10, IKK04, KOS13, BLP21}. However, no two-dimensional (2D) QW proposed so far reaches Grover's lower bound. The state-of-the-art result using a 2D DQW was obtained by Tulsi in Ref.\ \cite{Tulsi08}: Tulsi's algorithm finds a marked node with a localization probability scaling as $O(1/\ln{N})$ in $O(\sqrt{N})$ time steps, where $N$ is the total number of nodes. To reach a probability independent of $N$, several amplitude amplification time steps have to be performed after the quantum-walk part. These extra time steps are Grover's algorithm time steps, see Ref.\ \cite{Brassard2002}. Taking the amplitude amplification into account, Tulsi's algorithm reaches an $O(1)$ localization probability after $O(\sqrt{N \ln N})$ time steps. 
Other schemes of 2D DQW for spatial search have followed, such as the one by Roget et al.\ in Ref.\ \cite{RGAM20}, where the 2D DQW simulates a massless Dirac fermion on a grid with defects. This scheme is inspired by physics, and it reaches Tulsi's bound using a coin of dimension 2 instead of 4. Recently, Zylberman and Debbasch introduced in Ref.\ \cite{ZD21} a new DQW scheme for 2D quantum spatial search. This scheme implements quantum search by simulating the dynamics of a massless Dirac fermion in a Coulomb electric field centered on the nodes to be found. We call this DQW ``electric Dirac DQW''\footnote{We call ``Dirac DQW'' a DQW that has the Dirac equation as a continuum limit. Throughout this paper, the terminology ``electric Dirac DQW'' will always refer to a Dirac DQW coupled to a \emph{Coulomb} electric potential, unless otherwise stated. In the literature other types of electric potentials have been considered. The reason why we do not specify ``Coulomb'' in the denomination ``electric Dirac DQW'' is that the idea we want to convey is that the marked node is encoded in the shape of the electric potential; the precise form of the electric potential, e.g., here, the fact that it is a Coulomb potential, may not be that relevant.}. In this walk, the oracle is a position-dependent \emph{phase}. This oracle is diagonal in the position basis and can be efficiently implemented on $n$ qubits up to an error $\epsilon$ using $O(\frac{1}{\epsilon})$ primitive quantum gates \cite{WGM14}. This total number of quantum gates is independent of $n$ and makes possible the implementation of the oracle on current Noisy Intermediate Scale Quantum (NISQ) devices and on future universal quantum computers.
Note also that the algorithm proposed in Ref.\ \cite{ZD21} actually constitutes a paradigm shift in the construction of search algorithms, because it is based on the physically motivated idea that the position of the marked node can be encoded in the shape of an artificial force field which acts on the quantum walker. One of the main results of Zylberman and Debbasch's paper \cite{ZD21} is a localization probability which displays a maximum in $O_{N\rightarrow \infty}(1)$ time steps, the localization probability scaling as $O(1/N)$ (a detailed analysis is presented in Sec.\ \ref{sec:Noiseless}). Since it focuses on this result, Ref.\ \cite{ZD21} does not offer an analysis of the walk at times much larger than $O(1)$. Moreover, practical implementation not only on current NISQ devices, but also on future, circuit-based quantum computers, can only be envisaged if the algorithm is robust to noise (see for example Refs.\ \cite{NC02,SVMA17,Portugal2018,Preskill2018,ZRY21,DW09,LW17, ZDMD22}); this question is also not addressed in Ref.\ \cite{ZD21}. The aim of this article is to explore both aspects: long-time dynamics and robustness to noise. The main results are the following. First, the electric Dirac DQW exhibits a second localization peak at a time scaling as $O(\sqrt{N})$ with localization probability scaling as $O(1/\ln{N})$. This makes this walk state-of-the-art for 2D DQW spatial search before amplitude amplification. Moreover, this second localization peak is highly robust to spatial noise. Finally, the peak is also robust to spatiotemporal noise, but not as much as it is to time-independent spatial noise. The article is organised as follows. In Sec.\ \ref{sec:Reminder}, we offer a review of the electric Dirac DQW presented in Ref.\ \cite{ZD21}. In Sec.\ \ref{sec:Noiseless}, we study in detail the first two maxima of the localization probability.
We show that the first maximum, already analyzed in Ref.\ \cite{ZD21}, is actually present up to $N = 9 \times 10^6 > 10^{6}\simeq 2^{20}$ (keep in mind that $20$ is the current average number of working qubits on most IBM-Q platforms\footnote{\url{https://quantum-computing.ibm.com/services/resources}}). We also present evidence for the scaling laws characterizing both the first peak and the second, long-time peak, which reaches Tulsi's state-of-the-art bound. In Sec.\ \ref{sec: Comparison}, we analyze the resources one needs to implement the quantum spatial search in terms of qubits and primitive quantum operations. In Sec.\ \ref{sec:Oracle Noise}, we show that the walk, and in particular the second peak, have a good robustness to spatial oracle noise. We also show that the first peak is robust even to spatiotemporal noise. In Sec.\ \ref{sec: Presence probability}, we propose an analysis of the walker's probability distributions. These probability distributions show that the spatial noise does not affect the shape of the peaks significantly. The peaks remain extremely high relative to the background, which demonstrates the high robustness of the peaks to spatial oracle noise. The probability distributions also show that the second peak is sharper than the first one. \section{Basics} \label{sec:Reminder} \subsection{Definition of the 2D electric Dirac DQW} We consider a 2D square spatial grid with nodes indexed by two integers $(p,q) \in[\![0,M-1]\!]^2$, where $M \in \mathbb{N}$ is the number of nodes along one dimension and $N=M^2$ is the total number of nodes. The time is also discrete and indexed by a label $j \in \mathbb{N}$.
The walker is defined by its quantum \emph{state} $\ket{\Psi_j}$ in the Hilbert space $\mathscr{H}_C\otimes\mathscr{H}_P$, where $\mathscr{H}_C$, called \emph{coin space}, is the two-dimensional Hilbert space which corresponds to the internal, coin degree of freedom, and $\mathscr{H}_P$, called \emph{position space}, corresponds to the spatial degrees of freedom. The \emph{wavefunction} of the state will be denoted as $\Psi_{j,p,q}\equiv\begin{pmatrix}\psi_{j,p,q}^L &\psi_{j,p,q}^R \end{pmatrix}^{\top}$, where $\top$ denotes the transposition. The discrete-time evolution of the walker is defined by the following one-step evolution equation, \begin{equation} \label{eq:walk} \Psi_{j+1,p,q}=(\mathcal{U} \Psi_{j})_{p,q} \, . \end{equation} The one-step evolution operator, also called \emph{walk operator}, $\mathcal{U}$, is defined by \begin{equation} \mathcal{U} \vcentcolon= e^{-ie\phi} \, R(\theta^-) \, \mathcal{S}_2 \, R(\theta^+) \, \mathcal{S}_1 \, , \label{Eq:U} \end{equation} where $\mathcal{S}_{1,2}$ are standard \emph{shift operators}, \begin{subequations} \label{eq:Shifts} \begin{align} (\mathcal{S}_1\Psi)^L_{p,q} &\vcentcolon= \psi^L_{p+1,q} \\ (\mathcal{S}_1\Psi)^R_{p,q} &\vcentcolon= \psi^R_{p-1,q} \\ (\mathcal{S}_2\Psi)^L_{p,q} &\vcentcolon= \psi^L_{p,q+1} \\ (\mathcal{S}_2\Psi)^R_{p,q} &\vcentcolon= \psi^R_{p,q-1} \, , \end{align} \end{subequations} $R(\theta)$ is a coin-space rotation, also called \emph{coin operator}, defined by \begin{equation} R(\theta)\vcentcolon=\begin{bmatrix}\cos \theta & i\sin\theta\\i\sin\theta &\cos\theta\end{bmatrix}, \label{eq:Coinop} \end{equation} and \begin{equation} \theta^\pm \vcentcolon= \pm \frac{\pi}{4}-\frac{\mu}{2} \, , \label{eq:Theta} \end{equation} with $\mu$ some real parameter. 
The operator $e^{-ie\phi}$ is diagonal in position space, i.e., it acts on $\Psi_j$ as \begin{equation} (e^{-ie\phi}\Psi_j)_{p,q}=e^{-ie\phi_{p,q}}\Psi_{j,p,q} \, , \end{equation} with $\phi:(p,q)\mapsto \phi_{p,q} \in \mathbb{R}$ some real-valued function of the lattice position, and $e$ a parameter that we call the charge of the walker (see below). The function $\phi$ can be called a lattice electric potential for at least two reasons: (i) in the continuum limit (see below, Sec.\ \ref{subsec:cont_limit}), this function indeed becomes, mathematically, an electric potential coupled to the walker, which then obeys the Dirac equation, and (ii) beyond the continuum limit, it has been shown that similar 2D DQWs exhibit an exact lattice U(1) gauge invariance \cite{AD16b} which, in the continuum limit, becomes the standard U(1) gauge invariance of the Dirac equation coupled to an electromagnetic potential. \subsection{Continuum limit} \label{subsec:cont_limit} We introduce a spacetime-lattice spacing $\epsilon$, and coordinates $t_j \vcentcolon= \epsilon j$, $x_p \vcentcolon= \epsilon p$ and $y_q \vcentcolon= \epsilon q$ \cite{Shikano13,ANF14}. We assume that $\Psi_{j,p,q}$ coincides with the value taken at the point $(t_j, x_p, y_q)$ by a function $\Psi$ of the continuous coordinates $t$, $x$, and $y$. We are interested in the dynamics followed by $\Psi$ when $\epsilon \rightarrow 0$. Let us introduce the following continuum quantities, \begin{subequations} \begin{align} m&\vcentcolon= \frac{\mu}{\epsilon}\\ V(x_p,y_q)&\vcentcolon= \frac{\phi_{p,q}}{\epsilon} \, , \end{align} \end{subequations} which will turn out to be the mass and the electric potential, respectively. We now expand Eq.\ \eqref{eq:walk} in $\epsilon$ around $\epsilon = 0$.
The walk operator, Eq.\ \eqref{Eq:U}, has been chosen so that (i) the zeroth-order terms reduce to the trivial identity $\Psi(t,x,y)=\Psi(t,x,y)$, i.e., they cancel each other, and (ii) the first-order terms deliver the well-known Dirac equation coupled to an electric potential $V$. This equation (in natural units where $c=1$ and $\hbar = 1$) reads: \begin{equation} \label{eq:Dirac_cont} i\partial_t \Psi = \mathcal{H} \Psi \, , \end{equation} where the Dirac Hamiltonian is \begin{equation} \mathcal{H} \vcentcolon= \alpha^{k} (-i\partial_{k}) + m \alpha^0 + e V \, , \end{equation} where summation over $k=1,2$ is implicitly assumed. The alpha matrices are \begin{subequations} \begin{align} \alpha^0 &\vcentcolon= \sigma_x \\ \alpha^1 &\vcentcolon= \sigma_z \\ \alpha^2 &\vcentcolon= - \sigma_y, \end{align} \end{subequations} where the $\sigma$s are the Pauli matrices. Thus, this DQW, Eq.\ \eqref{eq:walk}, simulates the (1+2)D Dirac equation coupled to an electric potential, which explains why this Dirac DQW is called an electric DQW. \subsection{Coulomb potential} \label{subsec : Coulomb Potential} As shown in Eq.\ \eqref{eq:Dirac_cont}, the function $\phi= \epsilon V$ represents in the continuum limit the electric potential to which the walker is coupled. We choose $V$ to be the Coulomb potential created by a point particle of charge $Q$ at location $(\Omega_x,\Omega_y)$ on the 2D plane: \begin{equation} eV(x,y) \vcentcolon= \frac{eQ}{\sqrt{(x-\Omega_x)^2+(y-\Omega_y)^2}}. \label{coulomb function} \end{equation} For the sake of simplicity, $e$ will be set to $-1$. As discussed in Ref.\ \cite{ZD21}, one can take without loss of generality (i) $(\Omega_x,\Omega_y)=(\frac{M}{2}-\frac{1}{2},\frac{M}{2}-\frac{1}{2})$, which is called the \emph{center}, and (ii) $\epsilon=1$. The charge $Q$ is set to $0.9$, and $m=\mu=0$.
Notice that the center is not located on a node of the 2D lattice; it is at equal distance from four nodes, namely, $(\frac{M}{2}, \frac{M}{2})$, $(\frac{M}{2} -1, \frac{M}{2})$, $(\frac{M}{2}, \frac{M}{2}-1)$, and $(\frac{M}{2} -1, \frac{M}{2}-1)$. With this choice of potential, the walk can be referred to as a ``Coulomb walk''. \subsection{Definition of the spatial-search problem} The spatial-search problem is defined as follows. Consider at time $j=0$ a fully delocalized walker on the grid, i.e., $\forall(p,q,a)\in[\![0,M-1]\!]^2\times\{L,R\}, \psi_{0,p,q}^a \vcentcolon= \frac{1}{M\sqrt{2}}$. The problem addressed by the Coulomb walk with this initial condition is: can the walker localize on the nodes where $\phi_{p,q}$ is at its extremum, that is, the four nodes around the center $(\Omega_x,\Omega_y)=(\frac{M}{2}-\frac{1}{2},\frac{M}{2}-\frac{1}{2})$? The first observable will be the probability of being on these nodes as a function of time and of the number of grid nodes: \begin{equation} P_j(N) \vcentcolon= \sum \limits _{(p',q')\in \{\pm \frac{1}{2}\}^2} \left\Vert \Psi_{j,\Omega_x+p',\Omega_y+q'}(N)\right\Vert ^2 \, , \end{equation} which we call the \emph{localization probability}. It has been shown in Ref.\ \cite{ZD21} that the localization probability admits a first maximum at time $j_{1}(N)=82$, independent of $N$. We now define long times as times $t_j$ with $j$ much larger than $82$. The long-time behaviour is studied below in Sec.\ \ref{sec:Noiseless}. We consider as second observable the \emph{probability distribution} over space, \begin{equation} d_{j,p,q}(N) \vcentcolon= \left\Vert \Psi_{j,p,q}(N) \right\Vert^2 \, , \end{equation} which is studied in Sec.\ \ref{sec: Presence probability}. The fully delocalized initial condition is common in spatial-search problems since Grover's algorithm \cite{Grover96}. Moreover, this initial condition can easily be implemented on a quantum circuit as a tower of Hadamard gates.
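To make the above definitions concrete, here is a minimal NumPy sketch of one step of the Coulomb walk of Eqs.\ \eqref{Eq:U}--\eqref{eq:Theta}, with the Coulomb oracle phase and the fully delocalized initial condition. This is our own illustration, not code from Ref.\ \cite{ZD21}: the function names are ours, and we assume periodic boundary conditions, $\epsilon=1$ and $e=-1$, as in the text.

```python
import numpy as np

def coulomb_walk_step(psi, mu, phase):
    """One step of the 2D electric Dirac DQW:
    U = exp(-i e phi) R(theta-) S2 R(theta+) S1,
    with psi of shape (2, M, M): component 0 = L, component 1 = R."""
    def rotate(psi, theta):
        # Coin operator R(theta) = [[cos, i sin], [i sin, cos]] on the coin index.
        c, s = np.cos(theta), 1j * np.sin(theta)
        return np.stack([c * psi[0] + s * psi[1], s * psi[0] + c * psi[1]])
    theta_plus, theta_minus = np.pi / 4 - mu / 2, -np.pi / 4 - mu / 2
    # S1: (S1 psi)^L_{p,q} = psi^L_{p+1,q}, (S1 psi)^R_{p,q} = psi^R_{p-1,q}
    psi = np.stack([np.roll(psi[0], -1, axis=0), np.roll(psi[1], 1, axis=0)])
    psi = rotate(psi, theta_plus)
    # S2: same shifts along the q direction
    psi = np.stack([np.roll(psi[0], -1, axis=1), np.roll(psi[1], 1, axis=1)])
    psi = rotate(psi, theta_minus)
    return phase[None, :, :] * psi  # position-dependent oracle phase

def coulomb_phase(M, Q=0.9, e=-1.0):
    """Oracle phase exp(-i e phi), where e*phi = e*Q/dist is the Coulomb
    term of Eq. (coulomb function), centered at (M/2 - 1/2, M/2 - 1/2)."""
    p, q = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    e_phi = e * Q / np.sqrt((p - (M / 2 - 0.5)) ** 2 + (q - (M / 2 - 0.5)) ** 2)
    return np.exp(-1j * e_phi)
```

Iterating this step from the fully delocalized state and summing $\Vert\Psi\Vert^2$ over the four central nodes yields the localization probability $P_j(N)$.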
Other initial superpositions for the coin part were considered in Ref.\ \cite{ZD21}. Now, the fully delocalized initial condition forces us to pay attention to boundary conditions. In our work, we choose periodic boundary conditions. From a computer-science point of view, one can expect a database to consist of a list of addresses on a graph whose ends are connected, which corresponds exactly to periodic boundary conditions. \section{Noiseless case: long times} \label{sec:Noiseless} In Ref.\ \cite{ZD21}, it is shown that for a `small' grid (up to $N=2.5 \times 10^5$), the first maximum occurs at $j_{1}(N) =82=O_{N\rightarrow\infty}(1)$ with a localization probability $P_{j_{1}(N)}(N)$ scaling as $O(1/N)$. According to Fig.\ \ref{fig:Scaling law}, the result $j_1(N) = O(1)$ actually holds up to $N= 9 \times 10^6 \simeq 10^7$. And the left panel of Fig.\ \ref{fig:Renormalized} shows that $P_{j_{1}(N)}(N) = O(1/N)$ is valid up to $N = 900 \times 900 \simeq 10^6$. Now, $P_j(N)$ with fixed $N$ presents several other maxima as $j$ varies, and Fig.\ \ref{fig:Renormalized} shows in particular that there is a prominent second maximum. This second maximum occurs at a time $j_{2}(N)$ which, according to Fig.\ \ref{fig:Scaling law} and to the right panel of Fig.\ \ref{fig:Renormalized}, scales as $O(\sqrt{N})$. The right panel of Fig.\ \ref{fig:Renormalized} also shows that the localization probability scales as $P_{j_{2}(N)}(N) = O(1/\ln N)$. This result matches the state-of-the-art result in $2$D DQW search algorithms before amplitude amplification \cite{Tulsi08}. \begin{figure} \includegraphics[height=5cm,width=8cm]{Figures/Scaling_law_two_peaksZD.pdf} \caption{ Times $j_1$ (green) and $j_2$ (blue) at which the localization probability $P_j(N)$ reaches a maximum, plotted as a function of $\sqrt{N}$ for $m=0$, $e=-1$ and $Q=0.9$.
} \label{fig:Scaling law} \end{figure} \begin{figure*} \includegraphics[height=5cm,width=15cm]{Figures/RenormalizedPJ.pdf} \caption{Rescaled localization probability $P_j(N)$ for $m=0$, $e=-1$ and $Q=0.9$. Left panel: $P_j(N)\times N$ as a function of $j$, for several values of $N$. Right panel: $P_j(N)\times \ln{N}$ as a function of $j$, for different values of $N$. } \label{fig:Renormalized} \end{figure*} \section{Resource Analysis} \label{sec: Comparison} Since the evolution operator of the Coulomb walk is built out of two $1$D shift operators, one for each spatial direction, the Coulomb walk only requires a $2$-dimensional coin space. On the contrary, Tulsi's walk (see Ref.\ \cite{Tulsi08}) uses a 2D shift operator, which requires a 4-dimensional Hilbert space for the coin, so encoding this walk requires one more qubit than encoding the walk studied in the present article. Also note that Tulsi's algorithm uses an ancilla qubit to allow a part of the probability amplitude to remain on the same site after one evolution step\footnote{Technically, Tulsi's walk uses a controlled shift operator and a controlled coin operator with respect to the ancilla.}. Thus, in total, Tulsi's algorithm needs two more qubits than the Coulomb walk to perform a quantum spatial search on a database of the same size. Roget et al.'s walk, presented in Ref.\ \cite{RGAM20}, is a DQW, as is the Coulomb walk. It also uses two $1$D shift operators and dispenses with the ancilla qubit. The difference with the Coulomb walk lies in the choice of oracle. The Coulomb walk uses an artificial electric field as oracle, while Roget et al.'s walk views the node to be found as a defect and therefore replaces, on the defect, the rotation $R(\theta)$ of Eq.\ \eqref{eq:Coinop} by the identity operator.
A scheme implementing efficiently (up to a given precision $\epsilon$) position-dependent diagonal unitaries similar to the electric potential oracle can be found in Welch et al.\ (Ref.\ \cite{WGM14}). The total number of one-qubit and two-qubit quantum operations used in this scheme scales as $O(\frac{1}{\epsilon})$ and is actually independent of the number $n$ of qubits. However, the implementation of the shift operators $\mathcal{S}_{1,2}$ (see Eq.\ (\ref{eq:Shifts})) requires a number of primitive quantum operations which does depend on $n$ and scales as $O(n^2)$, because implementing shift operators requires performing Quantum Fourier Transforms (QFTs) \cite{Shakeel20}. Note that each coin operation $R(\theta^-)$ and $R(\theta^+)$ in Eq.\ (\ref{Eq:U}) can be implemented as a single quantum gate on the coin qubit. \section{Oracle noise} \label{sec:Oracle Noise} Today, one of the main goals in quantum computing is having fault-tolerant algorithms which can be implemented on NISQ devices \cite{DMN13, HF19, Roffe19}. In the scheme developed by Welch et al.\ in Ref.\ \cite{WGM14}, the final quantum circuit of the oracle is composed of $CNOT$ and $R_Z$ gates. The rotation angles are only implementable up to a finite accuracy due to hardware limitations. This generates fluctuations in the potential $\phi$, and we model these fluctuations by a white noise. More precisely, we replace $\phi_{p,q}$ by $\phi^B_{j,p,q}=\phi_{p,q}+B_{j, p,q}$, where $B$ is a white noise in all its variables. To make things as simple as possible, given a point $(j, p, q)$, $B_{j,p,q}$ is chosen randomly with uniform distribution in an interval $(-B_{\text{max}}, B_{\text{max}})$ independent of $(j, p, q)$. Noise which depends on time only does not modify the probability distribution. All noises considered in this article will therefore be space-dependent. We will first focus on time-independent, but space-dependent noise, and then switch to both time- and space-dependent noise.
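As an illustration of this noise model (our own sketch; the helper name and the use of NumPy's default random generator are assumptions, not from Ref.\ \cite{ZD21}), the noisy oracle phase can be generated as follows, with the noise amplitude $B_{\text{max}}$ specified through the noise-to-signal ratio $r$ defined below.

```python
import numpy as np

def noisy_oracle_phase(phi, r, rng, e=-1.0):
    """Oracle phase exp(-i e (phi + B)), where B is a white noise drawn
    uniformly in (-B_max, B_max), with B_max = r * max|phi|
    (r is the noise-to-signal ratio)."""
    b_max = r * np.max(np.abs(phi))
    noise = rng.uniform(-b_max, b_max, size=phi.shape)
    return np.exp(-1j * e * (phi + noise))
```

Drawing the noise once and reusing the resulting phase at every time step gives the time-independent, spatial noise; drawing a fresh noise array at each step gives the spatiotemporal case.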
Note that decoherence noise on the free-walk part and on Grover search has already been studied in Refs.\ \cite{CSB07, BSCR08, AAWM14, OPD06, DMD16, MCBK19, PWY21}. The amplitude of the noise is best characterized by the noise-to-signal ratio: \begin{equation} r \vcentcolon= \frac{B_{\text{max}}}{\max_{p,q}\mid \phi_{p,q} \mid } \, . \end{equation} \subsection{Spatial oracle noise} \label{subsec:SPD oracle Noise} In this subsection, all observables are averaged over 50 realizations of the noise. Fig.\ \ref{fig:Noisy200TI} presents results obtained for $N = 200^2$ and $N = 500^2$. When the noise-to-signal ratio $r$ is not too high, say $r \lesssim 0.5$, both peaks still exist and the second one occurs slightly later, with approximately the same time delay with respect to the noiseless situation. The amplitude of the peaks is also affected by the noise. In particular, for large enough $N$ (see the right panel in Fig.\ \ref{fig:Noisy200TI}), the amplitude of the first peak decreases while the amplitude of the second peak actually increases. Thus, weak noise favours, and even enhances, the second peak, at least for large enough values of $N$. Increasing the noise-to-signal ratio $r$ erases the first peak and, to a certain extent, also the second one. Note however that, for large enough $N$, the probability $P^r_j(N)$ still exhibits a (rather flat) maximum \emph{in lieu} of the second peak. So, in any case, noise favours the second peak. All in all, the algorithm studied in this article shows strong robustness to spatial noise. It is also instructive to investigate this robustness through the probability distribution over space $d^r_{j,p,q}(N)$, and this is done in Sec.\ \ref{sec: Presence probability} below.
\begin{figure*} \includegraphics[height=5cm,width=15cm]{Figures/Oracle_Noise.pdf} \caption{Localization probability $P_j^r(N)$ with spatial noise as a function of $j$, for different noise-to-signal ratios $r$, for $m=0$, $e=-1$, $Q=0.9$ and $\sqrt{N}=200$ (left) and $\sqrt{N}=500$ (right). } \label{fig:Noisy200TI} \end{figure*} \subsection{Spatiotemporal oracle noise} In this subsection, all observables are averaged over 10 realizations of the noise. Numerical results are presented in Fig.\ \ref{fig:Noise200Tdep}. One first observes a global decrease in the localization probability, which gets lower with increasing $r$. But it also appears that the first peak is less impacted by the noise than the rest of the curve, and especially the second peak. This can be understood in the following way. Since the noise we are considering is white in both space and time, the central limit theorem applies. The walk will therefore exhibit diffusive behaviour in the `long'-time limit (see for example Ref.\ \cite{DMD16}). But, the shorter the time, the smaller the perturbation induced by the noise on the walk's behaviour. The striking robustness of the first peak, which always occurs at $j = 82$, indicates that $j = 82$ is a `short' time, at least for noise-to-signal ratios not exceeding $0.5$. \begin{figure} \includegraphics[height=5cm,width=8cm]{Figures/Moyenne_tdep.pdf} \caption{Localization probability $P_j^r(200^2)$ as a function of $j$ for different spatiotemporal noise-to-signal ratios $r$, with $m=0$, $e=-1$ and $Q=0.9$. } \label{fig:Noise200Tdep} \end{figure} \section{Probability distribution in space} \label{sec: Presence probability} We now investigate the probability distribution $D_j = \{ d_{j,p,q}, (p, q) \in [\![0,M-1]\!]^2\}$ of the walk at the localization times $j_1$ and $j_2$ corresponding to the first and the second peak.
The \emph{height ratio} $\eta$ between the peak and the background is defined as \begin{equation} \eta_j(N)\vcentcolon=\frac{d_{j,\frac{M}{2}-1,\frac{M}{2}-1}(N)}{d_{j,1,1}(N)} \, , \end{equation} where $d_{j,\frac{M}{2}-1,\frac{M}{2}-1}(N)$ is the probability of being on one of the four nodes of interest (where the potential is maximal), and where $d_{j,1,1}(N)$ is the probability of being where the potential is weakest. \subsection{Noiseless case} \begin{figure*}[t] \includegraphics[height=9cm,width=15cm]{Figures/1st-2nd_peak.pdf} \caption{Probability distribution $D_j$ in the noiseless case for $\sqrt{N}=200,500,1000$, $j = j_1$ (top plots) and $j = j_2$ (bottom plots), and $m=0$, $e=-1$, $Q=0.9$. Height ratios for $j_1$: $\eta_{j_1}(200^2)=160$, $\eta_{j_1}(500^2)=163$ and $\eta_{j_1}(1000^2)=163$. Height ratios for $j_2$: $\eta_{j_2}(200^2)=127$, $\eta_{j_2}(500^2)=242$ and $\eta_{j_2}(1000^2)=1933$. } \label{fig: dj12,p,q Noiseless} \end{figure*} The noiseless case is presented in Fig.\ \ref{fig: dj12,p,q Noiseless}. The probability distributions are sharply peaked on the nodes of interest for both $j = j_1$ (top plots) and $j = j_2$ (bottom plots). For a small grid size (i.e., $\sqrt{N}=200$), the height ratio is better for the first peak than for the second peak (the precise values are given in the figure caption). For a larger grid size (i.e., $\sqrt{N}=500$ and $\sqrt{N}=1000$), the height ratio of the first peak is large, but that of the second peak is substantially larger (see the figure caption). \subsection{Spatial oracle noise} \begin{figure*}[t] \includegraphics[height=9cm,width=15cm]{Figures/withdiff1st-2nd_Noisy_peak.pdf} \caption{Top plots: Probability distribution $d^{r=1/3}_{j,p,q}(N)$ for $\sqrt{N}=200$ and $500$, at $j_1$ (left plots) and $j_2$ (right plots), averaged over 50 realizations of the spatial noise, with $m=0$, $e=-1$, $Q=0.9$.
Height ratios for $j_1$: $\eta^{r=1/3}_{j_1}(200^2)=87$ and $\eta^{r=1/3}_{j_1}(500^2)=115$. Height ratios for $j_2$: $\eta^{r=1/3}_{j_2}(200^2)=142$ and $\eta^{r=1/3}_{j_2}(500^2)=218$. Bottom plots: Difference $d^{r=1/3}_{j,p,q}(N) - d_{j,p,q}(N)$ between the noisy and the noiseless cases, for $m=0$, $e=-1$, $Q=0.9$. } \label{fig: dj12,p,q Noisy} \end{figure*} Let us now investigate the probability density $D^r_j = \{ d^r_{j,p,q}, \{p, q\} \in [\![0, M]\!]^2\}$ in the presence of noise with noise-to-signal ratio $r$. Fig.\ \ref{fig: dj12,p,q Noisy} displays $D^r_{j}$ (top plots) and $D^r_{j} - D_{j}$ (bottom plots) at $j = j_1$ (left plots) and $j = j_2$ (right plots). On the top plots of Fig.\ \ref{fig: dj12,p,q Noisy}, which display $D^{1/3}_{j}$, one observes that the overall shapes of the peaks, and in particular their widths, are not affected by the noise. The height ratios (given in the caption of Fig.\ \ref{fig: dj12,p,q Noisy}) are still very large, even in the presence of a substantial amount of noise $(r=1/3)$. This shows that the walk is not merely robust, but \emph{highly robust} to spatial noise. Looking at the bottom plots of Fig.\ \ref{fig: dj12,p,q Noisy}, one observes that noise makes the first peak lower (two bottom left plots), while it makes the second peak (two bottom right plots) higher for a small grid size ($M=200$), or more balanced between the four nodes of interest for a larger grid size $(M=500)$. These observations are of course consistent with the curves of Fig.\ \ref{fig:Noisy200TI}.
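As an illustration of the definition of $\eta_j(N)$, the height ratio can be read off any stored distribution. The following sketch is an assumption about the data layout (a nested list indexed by $(p,q)$), not the analysis code behind the figures.

```python
def height_ratio(d, M):
    """eta = d[M/2-1][M/2-1] / d[1][1]: probability at a node of interest
    (where the potential is maximal) over the probability at the node
    where the potential is weakest."""
    return d[M // 2 - 1][M // 2 - 1] / d[1][1]

# toy 8x8 distribution: sharp peak at node (3, 3), flat tiny background
M = 8
d = [[0.03125] * M for _ in range(M)]
d[M // 2 - 1][M // 2 - 1] = 0.5
eta = height_ratio(d, M)  # 0.5 / 0.03125 = 16.0
```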
\section{Conclusions and discussion} In this paper, we have shown that the 2D electric Dirac DQW presented in Ref.\ \cite{ZD21} has at least two different localization peaks: (i) one at short times ($O_{N\rightarrow\infty}(1)$ with $N$ the number of nodes on the 2D grid), for which the localization probability scales as $O(1/N)$, and (ii) another at a time scaling as $O(\sqrt{N})$ with localization probability in $O(1/\ln N)$, which matches the state-of-the-art result in spatial search with 2D DQWs before amplitude amplification \cite{Tulsi08,RGAM20}. This dynamics was studied numerically up to $N=9\times10^6 \simeq 2^{23}$. This quantum spatial search also presents a memory advantage by formally requiring two qubits fewer than Tulsi's algorithm. In terms of quantum operations, the oracle can be efficiently implemented on a quantum circuit up to an error $\epsilon$ using $O(\frac{1}{\epsilon})$ primitive quantum gates, allowing its implementation on current NISQ devices and future fault-tolerant universal quantum computers. We have also explored the effect of oracle noise by adding a white noise to the electric potential. This white noise can be viewed, for example, as a model of the fluctuations induced by the finite-accuracy implementation of the quantum rotations involved in the oracle quantum circuit \cite{WGM14}. Our results demonstrate that the algorithm is highly robust to oracle noise. The second peak is not only highly robust to, but actually slightly amplified by, spatial noise. The second peak is admittedly less robust to spatiotemporal noise, but the first peak turns out to be highly robust to this type of noise. This study is thus very encouraging for the future implementation of quantum spatial search with electric potential on universal quantum computers and NISQ devices. Adapting to the present walk the ancilla technique used in Tulsi's walk may make the second peak appear sooner and might eventually help the walk reach Grover's lower bound.
Also, studying the evolution of the localization probability under other kinds of noise is a promising way to extend the robustness analysis of quantum spatial search with electric potential. Finally, extending all results to higher dimensions and to walks using other fields as oracles will certainly prove interesting. \renewcommand{\thesection}{Appendix \Alph{section}} \setcounter{section}{0}
\section{Introduction} The existence of a unique minimal \emph{deterministic} acceptor is an important property of regular languages. Establishing a similar result for \emph{non-deterministic} acceptors is significantly more difficult, but nonetheless of great practical importance, as non-deterministic automata (NFA) can be exponentially more succinct than deterministic ones (DFA). The main issue is that a regular language can be accepted by several size-minimal NFAs that are not isomorphic. A number of sub-classes of non-deterministic automata have been identified in the literature to tackle this issue, which all admit canonical representatives: the \emph{\'atomaton}~\cite{BrzozowskiT14}, the \emph{canonical residual finite-state automaton} (\emph{canonical RFSA} for short, also known as \emph{jiromaton})~\cite{DenisLT02}, the \emph{minimal xor automaton}~\cite{VuilleminG210}, and the \emph{distromaton}~\cite{MyersAMU15}. In this paper we provide a general categorical framework that unifies constructions of canonical non-deterministic automata and unveils new ones. Our framework adopts the well-known representation of side-effects via \emph{monads}~\cite{moggi1991notions} to generalise non-determinism in automata. For instance, an NFA (without initial states) can be represented as a pair $\langle X, k \rangle$, where $X$ is the set of states and $k \colon X \to 2 \times \mathcal{P}(X)^A$ combines the function classifying each state as accepting or rejecting with the function giving the set of next states for each input. The powerset forms a monad $\langle \mathcal{P}, \{-\}, \mu \rangle$, where $\{-\}$ creates singleton sets and $\mu$ takes the union of a set of sets.
This allows describing the classical powerset construction, converting an NFA into a DFA, in categorical terms \cite{silva2010generalizing} as depicted on the left of \Cref{gen-det-diagrams}, \begin{figure*}[t] \centering \begin{tikzcd}[ ampersand replacement=\&] X \ar{d}[swap]{k} \ar{r}{\{-\}} \& \mathcal{P}(X) \ar{dl}{k^\sharp} \ar[dashed]{r}{\textnormal{obs}} \& 2^{A^*} \ar{d}{\langle \varepsilon, \delta \rangle} \\ 2 \times \mathcal{P}(X)^A \ar[dashed]{rr}[below]{2 \times \textnormal{obs}^A} \& \& 2 \times (2^{A^*})^A \end{tikzcd} \qquad \begin{tikzcd}[ampersand replacement=\&] X \ar{d}[swap]{k} \ar{r}{\eta} \& TX \ar{dl}{k^\sharp} \ar[dashed]{r}{\textnormal{obs}} \& \Omega \ar{d}{\omega} \\ FTX \ar[dashed]{rr}[below]{F\textnormal{obs}} \& \& F\Omega \end{tikzcd}. \caption{Generalised determinisation of automata with side-effects in a monad.} \label{gen-det-diagrams} \end{figure*} where $k^\sharp \colon \mathcal{P}(X) \to 2 \times \mathcal{P}(X)^A$ represents an equivalent DFA, obtained by taking the subsets of $X$ as states, and $\langle \varepsilon, \delta \rangle : 2^{A^*} \rightarrow 2 \times (2^{A^*})^A$ is the automaton of languages. There then exists a unique automaton homomorphism $\textnormal{obs}$, assigning a language semantics to each set of states. As seen on the right of \Cref{gen-det-diagrams} this perspective further enables a \emph{generalised determinisation} construction \cite{silva2010generalizing}, where $2 \times (-)^A$ is replaced by any (suitable) functor $F$ describing the automaton structure, and $\mathcal{P}$ by a monad $T$ describing the automaton side-effects. $\Omega \xrightarrow{\omega} F \Omega$ is the so-called \emph{final coalgebra}, providing a semantic universe that generalises the automaton of languages. 
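The left-hand diagram can be run directly: the following Python sketch (the dict/frozenset encodings are illustrative assumptions) computes $k^\sharp$ on the reachable part of $\mathcal{P}(X)$, i.e., the classical subset construction, and evaluates the semantics map $\textnormal{obs}$ on words.

```python
def determinise(alphabet, accept, delta, initial):
    """Subset construction: lift an NFA k = <accept, delta> : X -> 2 x P(X)^A
    to a DFA k# on the reachable subsets of X."""
    start = frozenset(initial)
    dfa_accept, dfa_delta = {}, {}
    todo, seen = [start], {start}
    while todo:
        u = todo.pop()
        dfa_accept[u] = any(accept(x) for x in u)             # output: join in 2
        for a in alphabet:
            v = frozenset(y for x in u for y in delta(x, a))  # mu: union
            dfa_delta[u, a] = v
            if v not in seen:
                seen.add(v)
                todo.append(v)
    return dfa_accept, dfa_delta, start

def accepts(dfa_accept, dfa_delta, start, word):
    """Language semantics of a determinised state: run the DFA on a word."""
    u = start
    for a in word:
        u = dfa_delta[u, a]
    return dfa_accept[u]

# NFA for (a+b)*a: state 0 loops on a,b and guesses the final a into state 1
accept = lambda x: x == 1
delta = lambda x, a: {0: {'a': {0, 1}, 'b': {0}},
                      1: {'a': set(), 'b': set()}}[x][a]
dfa = determinise('ab', accept, delta, {0})
```

On this NFA only the subsets $\{0\}$ and $\{0,1\}$ are reachable, so the determinised automaton has two states.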
Our work starts from the observation that the deterministic automata resulting from this generalised determinisation construction have \emph{additional algebraic structure}: the state space $\mathcal{P}(X)$ of the determinised automaton defines a free complete join-semilattice (CSL) over $X$, and $k^\sharp$ and $\textnormal{obs}$ are CSL homomorphisms. More generally, $TX$ defines a (free) algebra for the monad $T$, and $k^\sharp$ and $\textnormal{obs}$ are $T$-algebra homomorphisms. With this observation in mind, our question is: can we exploit the additional algebraic structure to ``reverse'' these constructions? In other words, can we convert a deterministic automaton with additional algebraic structure over a given monad to an equivalent succinct automaton with side-effects, possibly over another monad? To answer this question, the paper makes the following contributions: \begin{itemize} \item We present a general categorical framework based on bialgebras and distributive law homomorphisms that allows deriving canonical representatives for a wide class of succinct automata with side-effects in a monad. \item We strictly improve the expressivity of previous work \cite{HeerdtMSS19, arbib1975fuzzy}: our framework instantiates not only to well-known examples such as the canonical RFSA (\Cref{canonicalrfsaexample}) and the minimal xor automaton (\Cref{minimalxorexmaple}), but also includes the \'atomaton (\Cref{atomatonexample}) and the distromaton (\Cref{distromatonexample}), which were not covered in \cite{HeerdtMSS19, arbib1975fuzzy}. While other frameworks restrict themselves to the category of sets \cite{HeerdtMSS19}, we are able to include canonical acceptors in other categories, such as the \textit{canonical nominal RFSA} (\Cref{nominalexample}).
\item We relate vector spaces over the unique two element field with complete atomic Boolean algebras and consequently discover a previously unknown canonical mod-2 weighted acceptor for regular languages---the \emph{minimal xor-CABA automaton} (\Cref{minimalxorcabaexample})---that in some sense is to the minimal xor automaton what the \'atomaton is to the canonical RFSA (\Cref{minimalxorcabadiagram}). \item We introduce an abstract notion of \emph{closedness} for succinct automata that is parametric in two monads (\Cref{closedsuccinctdef}), and prove that every regular language satisfying a suitable property admits a canonical size-minimal representative among closed acceptors (\Cref{minimalitytheorem}). By instantiating the latter we subsume known minimality results for canonical automata, prove the xor-CABA automaton minimal, and establish a size comparison between different acceptors (\Cref{minimalityimplications}). \end{itemize} \ifdefined\else An extended version of this paper is available at \cite{arxiv}.\fi \section{Overview of the approach} \label{overview} In this section, we give an overview of the ideas of the paper through an example. We show how our methodology allows recovering the construction of the \'atomaton for the regular language $ \mathcal{L} = (a+b)^*a $, which consists of all words over $A = \lbrace a, b \rbrace$ that end in $a$. For each step, we hint at how it is generalised in our framework. The classical construction of the \'atomaton for $\mathcal{L}$ consists in closing the \emph{residuals}\footnote{A language is a \textit{residual} or \textit{left quotient} of $\mathcal{L} \subseteq A^*$, if it is of the form $v^{-1}\mathcal{L} = \lbrace u \in A^* \mid vu \in \mathcal{L} \rbrace$ for some $v \in A^*$. 
} of $\mathcal{L}$ under all Boolean operations, and then forming a non-deterministic automaton whose states are the \emph{atoms}\footnote{A non-zero element $a \in B$ is called \emph{atom}, if for all $x \in B$ such that $x \leq a$ one finds $x = 0$ or $x = a$.} of the ensuing complete atomic Boolean algebra (CABA)---that is, non-empty intersections of complemented or uncomplemented residuals. In our categorical setting, this construction is obtained in several steps, which we now describe. \subsection{Computing residuals} We first construct the minimal DFA accepting $\mathcal{L}$ as a coalgebra of type $ M_\mathcal{L} \to 2 \times (M_\mathcal{L})^{A} \enspace $. By the well-known Myhill-Nerode theorem~\cite{nerode1958linear}, $M_\mathcal{L}$ is the set of residuals for $\mathcal{L}$. The automaton is depicted in \Cref{m(l)}. \begin{figure*} \tiny \center \begin{tikzpicture}[node distance=6em] \node[state, shape=rectangle, initial, initial text=] (x) {$x$}; \node[state, shape=rectangle, right of=x, accepting] (y) {$y$}; \path[->] (x) edge[loop above] node{$b$} (x) (y) edge[loop right] node{$a$} (y) (x) edge[above, bend left] node{$a$} (y) (y) edge[below, bend left] node{$b$} (x) ; \end{tikzpicture}. \caption{The minimal DFA for $\mathcal{L} = (a+b)^*a$.} \label{m(l)} \end{figure*} In our framework, we consider coalgebras over an arbitrary endofunctor $F \colon \mathscr{C} \to \mathscr{C}$ ($F = 2 \times (-)^{A}$ and $\mathscr{C} = \textnormal{Set}$ in this case). Minimal realisations, generalising minimal DFAs, exist for a wide class of functors $F$ and categories $\mathscr{C}$, including all the examples in this paper. \subsection{Taking the Boolean closure} We close the minimal DFA under all Boolean operations, generating an equivalent deterministic automaton that has additional algebraic structure: its state space is a CABA. 
This is achieved via a double powerset construction---where sets of sets are interpreted as full disjunctive normal form---and the resulting coalgebra is of type $ \mathcal{P}^2(M_\mathcal{L}) \to 2 \times (\mathcal{P}^2(M_\mathcal{L}))^{A} $. Our construction relies on the so-called \emph{neighbourhood monad} $\mathcal{H}$, whose algebras are precisely CABAs, and yields a (free) \emph{bialgebra} capturing both the coalgebraic and the algebraic structure; the interplay of these two structures is captured via a \emph{distributive law}. We then minimise this DFA to identify Boolean expressions evaluating to the same language. As desired, the resulting state space is precisely the Boolean closure of the residuals of $\mathcal{L}$. Formally, we obtain the minimal bialgebra for $\mathcal{L}$ depicted in \Cref{overlinem(l)atom}. This step in our framework is generalised as closure of an $F$-coalgebra w.r.t.\ (the algebraic structure induced by) any monad $S$ for which a suitable distributive law $\lambda$ with the coalgebra endofunctor $F$ exists. The first step of the closure yields a free $\lambda$-bialgebra, comprised of both an $F$-coalgebra and an $S$-algebra over the same state space. In a second step, minimisation is carried out in the category of $\lambda$-bialgebras, which guarantees simultaneous preservation of the algebraic structure and of the language semantics. \subsection{Constructing the \'atomaton} This step is the key technical result of our paper. Atoms have the property that their Boolean closure generates the entire CABA. In our framework, this property is generalised via the notion of \emph{generators} for algebras over a monad, which allows one to represent a bialgebra as an equivalent \emph{free} bialgebra over its generators, and hence to obtain succinct canonical representations (\Cref{forgenerator-isharp-is-bialgebra-hom}).
In \Cref{succinctbialgebra1} we apply this result to obtain the canonical RFSA, the canonical nominal RFSA, and the minimal xor automaton for a given regular language. However, to recover the \'atomaton from the minimal CABA-structured DFA of the previous step, in addition a subtle change of perspective is required. In fact, we are still working with the ``wrong'' side-effect: the non-determinism of bialgebras so far is determined by $\mathcal{H}$, whereas we are interested in an NFA, whose non-determinism is captured by the powerset monad $\mathcal{P}$. As is well-known, every element of a CABA can be obtained as the join of the atoms below it. In other words, those atoms are also generators of the underlying CSL, which is an algebra for $\mathcal{P}$. We formally capture this idea as a map between monads $\mathcal{H} \to \mathcal{P}$. Crucially, we show that this map lifts to a \emph{distributive law homomorphism} and allows translating a bialgebra over $\mathcal{H}$ to a bialgebra over $\mathcal{P}$, which can be represented as a free bialgebra over atoms---the \'atomaton for $\mathcal{L}$, which is shown in \Cref{atomaton}. In \Cref{succinctbialgebra2} we generalise this idea to the situation of two monads $S$ and $T$ involved in distributive laws with the coalgebra endofunctor $F$. In particular, \Cref{generatorbialgebrahom} is our free representation result, spelling out a condition under which a bialgebra over $S$ can be represented as a free bialgebra over $T$, and hence admits an equivalent succinct representation as an automaton with side-effects in $T$. Besides the \'atomaton and the examples in \Cref{succinctbialgebra1}, this construction allows us to capture the distromaton and a newly discovered canonical acceptor that relates CABAs with vector spaces over the two element field. 
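The passage from a CABA to its atoms can be made concrete on a finite carrier. The following generic Python sketch (not tied to the paper's encodings) computes the atoms of the powerset algebra $\mathcal{P}(\{0,1,2\})$ and checks the fact used above: every element is the join of the atoms below it.

```python
def atoms(carrier, leq, bottom):
    """Atoms of a finite lattice: non-bottom elements x whose only lower
    bounds in the carrier are bottom and x itself."""
    return [x for x in carrier if x != bottom
            and all(y in (bottom, x) for y in carrier if leq(y, x))]

# the CABA P({0,1,2}), ordered by inclusion
carrier = [frozenset(s) for s in
           [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]]
ats = atoms(carrier, lambda y, x: y <= x, frozenset())

# every element is the join (here: union) of the atoms below it
decomposes = all(frozenset().union(*(a for a in ats if a <= x)) == x
                 for x in carrier)
```

Here the atoms are exactly the singletons, mirroring how the \'atomaton's states arise as the atoms of the Boolean closure of the residuals.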
\section{Preliminaries} \begin{figure*} \center \tiny \adjustbox{valign=m}{\begin{tikzpicture}[node distance= 8.5em] \node[state, initial, shape=rectangle, initial text=] (1) {$\lbrack \lbrace \lbrace x \rbrace, \lbrace x, y \rbrace \rbrace \rbrack $}; \node[state, shape=rectangle, right of = 1] (3) {$\lbrack \emptyset \rbrack$}; \node[state, shape=rectangle,below of = 1, accepting] (5) {$\lbrack\lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace \rbrace \rbrack$}; \node[state, shape=rectangle,right of = 5, accepting] (7) {$\lbrack\lbrace \lbrace y \rbrace \rbrace \rbrack$}; \node[state, shape=rectangle,right of = 3] (9) {$\lbrack\lbrace \emptyset \rbrace \rbrack$}; \node[state, shape=rectangle,right of = 9] (11) {$\lbrack \lbrace \lbrace x, y \rbrace, \emptyset \rbrace \rbrack$}; \node[state, shape=rectangle,right of = 7, accepting] (13) {$\lbrack \lbrace \lbrace y \rbrace, \emptyset \rbrace \rbrack$}; \node[state, shape=rectangle, right of = 13, accepting] (15) {$\lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace, \emptyset \rbrace \rbrack$}; \path[->] (3) edge[loop above] node{$a,b$} (3) (1) edge[loop above] node{$b$} (1) (1) edge[right, bend left] node{$a$} (5) (7) edge[left] node{$a,b$} (3) (5) edge[loop below] node{$a$} (5) (5) edge[left, bend left] node{$b$} (1) (9) edge[bend left, right] node{$b$} (13) (9) edge[loop above] node{$a$} (9) (11) edge[left] node{$a,b$} (15) (13) edge[left, bend left] node{$a$} (9) (13) edge[loop below] node{$b$} (13) (15) edge[loop below] node{$a,b$} (15) ; \end{tikzpicture}} \qquad \adjustbox{valign=m}{\resizebox{0.35 \columnwidth}{!}{% \begin{tabular}[]{ c|c|c|c|c|c|c|c|c } $\wedge$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ \\ \hline $1$ & $1$ & $2$ & $2$ & $1$ & $1$ & $2$ & $2$ & $1$ \\ \hline $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ & $2$ \\ \hline $3$ & $2$ & $2$ & $3$ & $3$ & $2$ & $2$ & $3$ & $3$ \\ \hline $4$ & $1$ & $2$ & $3$ & $4$ & $1$ & $2$ & $3$ & $4$ \\ \hline $5$ & 
$1$ & $2$ & $2$ & $1$ & $5$ & $6$ & $6$ & $5$ \\ \hline $6$ & $2$ & $2$ & $2$ & $2$ & $6$ & $6$ & $6$ & $6$ \\ \hline $7$ & $2$ & $2$ & $3$ & $3$ & $6$ & $6$ & $7$ & $7$ \\ \hline $8$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ \end{tabular} \qquad \begin{tabular}[]{ c|c} & $\neg$\\ \hline $1$ & $7$ \\ \hline $2$ & $8$ \\ \hline $3$ & $5$ \\ \hline $4$ & $6$ \\ \hline $5$ & $3$ \\ \hline $6$ & $4$ \\ \hline $7$ & $1$ \\ \hline $8$ & $2$ \end{tabular} }} \caption{The minimal CABA-structured DFA for $\mathcal{L} = (a+b)^*a$, where $1 \equiv \lbrack \lbrace \lbrace x \rbrace, \lbrace x, y \rbrace \rbrace \rbrack$, $2 \equiv \lbrack \emptyset \rbrack$, $3 \equiv \lbrack \lbrace \emptyset \rbrace \rbrack$, $4 \equiv \lbrack \lbrace \lbrace x, y \rbrace, \emptyset \rbrace \rbrack$, $5 \equiv \lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace \rbrace \rbrack$, $6 \equiv \lbrack \lbrace \lbrace y \rbrace \rbrace \rbrack$, $7 \equiv \lbrack \lbrace \lbrace y \rbrace, \emptyset \rbrace \rbrack$, $8 \equiv \lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace, \emptyset \rbrace \rbrack$.} \label{overlinem(l)atom} \end{figure*} \label{preliminaries} We assume basic knowledge of category theory (including functors, natural transformations, and adjunctions)~\cite{awodey2010category}. In this section we recall the relevant notions for our technical development: coalgebras, monads, algebras over a monad, distributive laws, and bialgebras. Unpointed deterministic automata are basic examples of coalgebras in the category of sets and functions: they are of the type $k \colon X \to FX$, where $FX = 2 \times X^A$ and $k$ pairs the final state function and the transition function assigning a next state to each letter $a\in A$. Coalgebra has emerged as a unifying framework to study infinite data types and state-based systems \cite{rutten2000universal}. 
\begin{definition}\deftag{Coalgebra} A \emph{coalgebra} for an endofunctor $F$ in a category $\mathscr{C}$ is a tuple $\langle X, k \rangle$ consisting of an object $X$ in $\mathscr{C}$ and a morphism $k\colon X \rightarrow FX$. \end{definition} Crucial in the theory of coalgebras is the notion of homomorphism, which allows one to relate states of coalgebras of the same type. A homomorphism $f\colon \langle X, k_X \rangle \rightarrow \langle Y,k_Y \rangle$ between $F$-coalgebras is a morphism $f\colon X \rightarrow Y$ satisfying $k_Y \circ f = Ff \circ k_X$. The category of $F$-coalgebras and homomorphisms is denoted by $\textnormal{Coalg}(F)$. If it exists, the final object of this category is of particular importance. \begin{definition}\deftag{Final coalgebra} An $F$-coalgebra $\langle \Omega, k_{\Omega} \rangle$ is \emph{final} if every $F$-coalgebra $\langle X, k \rangle$ admits a unique homomorphism $\textnormal{obs}_{\langle X, k \rangle}: \langle X, k \rangle \rightarrow \langle \Omega, k_{\Omega} \rangle$. \end{definition} The unique final coalgebra homomorphism can be understood as the observable behaviour of a system. For example, for the functor $FX = 2 \times X^A$, the final $F$-coalgebra is the set of all languages $\mathcal{P}(A^*)$ and the final coalgebra homomorphism assigns to a state $x$ of an unpointed deterministic automaton the language in $\mathcal{P}(A^*)$ it accepts\footnote{For a deterministic automaton given by $\varepsilon: X \rightarrow 2$ and $\delta: X \rightarrow X^A$, acceptance is coinductively defined as a function $\textnormal{obs}: X \rightarrow 2^{A^*}$ by $\textnormal{obs}(x)(\varepsilon) = \varepsilon(x)$ and $\textnormal{obs}(x)(av) = \textnormal{obs}(\delta(x)(a))(v)$.} when given the initial state $x$.
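The footnote's coinductive definition of acceptance translates line by line into a recursive function. A minimal sketch, on the minimal DFA for $(a+b)^*a$ from the running example (the string-keyed dictionaries are an illustrative encoding):

```python
def obs(eps, delta, x, word):
    """Acceptance, coinductively: obs(x)(epsilon) = eps(x) and
    obs(x)(a.v) = obs(delta(x)(a))(v)."""
    if not word:
        return eps[x]
    return obs(eps, delta, delta[x][word[0]], word[1:])

# minimal DFA for (a+b)*a: x --a--> y, x --b--> x, y --a--> y, y --b--> x
eps = {'x': False, 'y': True}
delta = {'x': {'a': 'y', 'b': 'x'},
         'y': {'a': 'y', 'b': 'x'}}
```

Then `obs(eps, delta, 'x', w)` is exactly the image of the state `x` under the unique homomorphism into the final coalgebra, evaluated at the word `w`.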
In the context of computer science, monads have been introduced by Moggi as a general perspective on exceptions, side-effects, non-determinism, and continuations \cite{moggi1988computational, moggi1990abstract, moggi1991notions}. \begin{definition}\deftag{Monad} A \emph{monad} on a category $\mathscr{C}$ is a tuple $\langle T, \eta, \mu \rangle$ consisting of an endofunctor $T: \mathscr{C} \rightarrow \mathscr{C}$ and natural transformations $ \eta: \textnormal{id}_{\mathscr{C}} \Rightarrow T$ and $\mu: T^2 \Rightarrow T$ satisfying $\mu \circ T\mu = \mu \circ \mu_T$ and $\mu \circ \eta_T = \textnormal{id}_T = \mu \circ T\eta$. \end{definition} By a slight abuse of notation we will refer to a monad simply by its underlying endofunctor. Non-determinism is typically modelled by the \emph{powerset monad} $\mathcal{P} $, whose underlying endofunctor $\mathcal{P}$ assigns to a set $X$ the set of subsets $\mathcal{P}X$; whose unit maps an element $x$ to the singleton $\eta_X(x) = \lbrace x \rbrace$; and whose multiplication flattens subsets by taking their union $\mu_X(\Phi) = \bigcup_{U \in \Phi} U$. Other monads that play a role for us are the \emph{nominal powerset monad} $ \mathcal{P}_{\textnormal{n}}$ \cite{moerman2019residual}, the \emph{neighbourhood monad} $\mathcal{H}$ \cite{jacobs2015recipe}, the \emph{monotone neighbourhood monad} $\mathcal{A}$ \cite{jacobs2015recipe}, and the \emph{free vector space monad} $\mathcal{R}$ over the unique two element field \cite{jacobs2011bases}. The formal definitions are given in \ifdefined\Cref{monaddefs}\else \cite[Definition~40]{arxiv}\fi. The concept of a monad can also be seen as an alternative to Lawvere theory as a category theoretic formulation of universal algebra \cite{eilenberg1965adjoint, linton1966some}. 
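The unit and multiplication of the powerset monad, together with the monad laws just stated, can be checked mechanically on a small carrier; a sketch:

```python
def eta(x):
    """Unit of the powerset monad: x |-> {x}."""
    return frozenset([x])

def mu(phi):
    """Multiplication: flatten a set of sets by taking the union."""
    return frozenset(y for u in phi for y in u)

def pmap(f, u):
    """Functor action P(f) on a subset u."""
    return frozenset(f(x) for x in u)

# unit laws mu . eta_T = id = mu . T eta, on a sample subset u in P(X)
u = frozenset([1, 2, 3])
law1 = mu(eta(u)) == u
law2 = mu(pmap(eta, u)) == u
# associativity mu . T mu = mu . mu_T, on a sample element of T^3 X
phi = frozenset([frozenset([frozenset([1]), frozenset([2])]),
                 frozenset([frozenset([2, 3])])])
law3 = mu(pmap(mu, phi)) == mu(mu(phi))
```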
\begin{definition}\deftag{Algebra over a monad} An \emph{algebra} over a monad $T$ on $\mathscr{C}$ is a tuple $\langle X, h \rangle$ consisting of an object $X$ in $\mathscr{C}$ and a morphism $h: TX \rightarrow X$ satisfying $h \circ \mu_X = h \circ Th$ and $h \circ \eta_X = \textnormal{id}_X$. \end{definition} Every object admits a \emph{free} algebra $\langle TX, \mu_X \rangle$. A homomorphism $f: \langle X, h_X \rangle \rightarrow \langle Y, h_Y \rangle$ between $T$-algebras is a morphism $f: X \rightarrow Y$ satisfying $h_Y \circ Tf = f \circ h_X$. The category of $T$-algebras and homomorphisms is denoted by $\textnormal{Alg}(T)$. \begin{example} \begin{itemize} \item The category $\textnormal{Alg}(\mathcal{P})$ is isomorphic to the category of complete join-semilattices (CSL) and functions that preserve all joins \cite{jacobs2011bases}. \item The category $\textnormal{Alg}(\mathcal{H})$ is isomorphic to the category of complete atomic Boolean algebras (CABA) and Boolean algebra homomorphisms that preserve all meets and all joins \cite{jacobs2015recipe}. \item The category $\textnormal{Alg}(\mathcal{A})$ is isomorphic to the category of completely distributive lattices (CDL) and functions that preserve all meets and all joins \cite{jacobs2015recipe}. \item The category $\textnormal{Alg}(\mathcal{R})$ is isomorphic to the category of vector spaces over the unique two-element field ($\mathbb{Z}_2$-Vect) and linear maps \cite{jacobs2011bases}.
\end{itemize} \end{example} \begin{figure*} \center \tiny \begin{tikzpicture}[node distance= 8.5em] \node[state, initial, shape=rectangle, initial text=] (1) {$\lbrack \lbrace \lbrace x \rbrace, \lbrace x, y \rbrace \rbrace \rbrack $}; \node[state, shape=rectangle,right of = 1, accepting] (2) {$\lbrack\lbrace \lbrace y \rbrace \rbrace \rbrack$}; \node[state, shape=rectangle,right of = 2] (3) {$\lbrack\lbrace \emptyset \rbrace \rbrack$}; \path[->] (1) edge[loop above] node{$a,b$} (1) (1) edge[above] node{$a$} (2) (3) edge[loop above] node{$a,b$} (3) (3) edge[above] node{$b$} (2) ; \end{tikzpicture}. \caption{The \'atomaton for $\mathcal{L} = (a+b)^*a$.} \label{atomaton} \end{figure*} Distributive laws originally arose as a way to compose monads \cite{beck1969distributive}, but nowadays exist in a wide range of other forms \cite{Street2009}. For our purposes it suffices to consider distributive laws between a monad and an endofunctor, sometimes referred to as \emph{Eilenberg-Moore laws}~\cite{jacobs2015trace}. \begin{definition}\deftag{Distributive law} A \emph{distributive law} between a monad $T$ and an endofunctor $F$ on $\mathscr{C}$ is a natural transformation $\lambda: TF \Rightarrow FT$ satisfying $F \eta_X = \lambda_X \circ \eta_{FX}$ and $\lambda_X \circ \mu_{FX} = F\mu_X \circ \lambda_{TX} \circ T{\lambda_X}$. \end{definition} For example, every algebra $h: TB \rightarrow B$ for a set monad $T$ induces a distributive law $\lambda^h$ between $T$ and $F$ with $FX = B \times X^A$ defined by \begin{equation} \label{induceddistrlaweq} \lambda^{h}_X := (h \times \textnormal{st}) \circ \langle T\pi_1, T\pi_2 \rangle, \end{equation} where $\textnormal{st}$ denotes the usual strength function\footnote{For any two sets $X,A$ the strength function $\textnormal{st}: T(X^A) \rightarrow (TX)^A$ is defined by $ \textnormal{st}(U)(a) = T(\textnormal{ev}_a)(U)$, where $\textnormal{ev}_{a}(f) = f(a)$.} \cite{jacobs2005bialgebraic}.
We are particularly interested in canonical algebra structures for the output set $B = 2$. For instance, consider the algebra structures defined by $h^{\mathcal{P}}(\varphi) = h^{\mathcal{R}}(\varphi) = \varphi(1)$ and $h^{\mathcal{H}}(\Phi) = h^{\mathcal{A}}(\Phi) = \Phi(\textnormal{id}_2)$, where we identify subsets with their characteristic functions. In these cases we will abuse notation and write $\lambda^T$ instead of $\lambda^{h^T}$. \begin{example}\deftag{Generalized determinisation \cite{rutten2013generalizing}} \label[example]{determinisationexample} Given a distributive law, one can model the determinisation of a system with dynamics in $F$ and side-effects in $T$ (sometimes referred to as a \emph{succinct} automaton) by lifting an $FT$-coalgebra $\langle X, k \rangle$ to the $F$-coalgebra $\langle TX, k^{\sharp} \rangle$, where $k^{\sharp} := (F \mu_X \circ \lambda_{TX}) \circ Tk$. As one verifies, the latter is in fact a $T$-algebra homomorphism of type $k^{\sharp}: \langle TX, \mu_X \rangle \rightarrow \langle FTX, F\mu_X \circ \lambda_{TX} \rangle$. For instance, if the distributive law $\lambda$ is induced by the disjunctive $\mathcal{P}$-algebra $h^{\mathcal{P}}: \mathcal{P}2 \rightarrow 2$ with $h^{\mathcal{P}}(\varphi) = \bigvee_{u \in \varphi} u = \varphi(1)$, the lifting $k^{\sharp}$ is the DFA in CSL obtained from an NFA $k$ via the classical powerset construction. \end{example} The example above illustrates the concept of a bialgebra: the algebraic part $(TX, \mu_X)$ and the coalgebraic part $(TX, k^{\sharp})$ of a lifted automaton are compatible along the distributive law $\lambda$. \begin{definition}\deftag{Bialgebra} A $\lambda$\emph{-bialgebra} is a tuple $\langle X, h, k \rangle$ consisting of a $T$-algebra $\langle X,h \rangle$ and an $F$-coalgebra $\langle X, k \rangle$ satisfying $Fh \circ \lambda_X \circ Tk = k \circ h$.
\end{definition} A homomorphism between $\lambda$-bialgebras is a morphism between the underlying objects that is simultaneously a $T$-algebra homomorphism and an $F$-coalgebra homomorphism. The category of $\lambda$-bialgebras and homomorphisms is denoted by $\textnormal{Bialg}(\lambda)$. The existence of a final $F$-coalgebra is equivalent to the existence of a final $\lambda$-bialgebra, as the next result shows. \begin{lemma}\tagcite{jacobs2012trace} If $\langle \Omega, k_{\Omega} \rangle$ is the final $F$-coalgebra, then $\langle \Omega, h_{\Omega}, k_{\Omega} \rangle$ with $h_{\Omega}:= \textnormal{obs}_{\langle T\Omega, \lambda_{\Omega} \circ T k_{\Omega} \rangle}$ is the final $\lambda$-bialgebra, and it satisfies $\textnormal{obs}_{\langle X, h, k \rangle} = \textnormal{obs}_{\langle X, k \rangle}$. Conversely, if $\langle \Omega, h_{\Omega}, k_{\Omega} \rangle$ is the final $\lambda$-bialgebra, then $\langle \Omega, k_{\Omega} \rangle$ is the final $F$-coalgebra. \end{lemma} For instance, for the distributive law in \Cref{determinisationexample}, the final bialgebra is carried by the final coalgebra $\mathcal{P}(A^*)$ and also has a free $\mathcal{P}$-algebra structure that takes the union of languages. The generalized determinisation procedure in \Cref{determinisationexample} can now be rephrased in terms of a functor between the category of coalgebras with dynamics in $F$ and side-effects in $T$ on the one side, and the category of bialgebras on the other side. \begin{lemma}\tagcite{jacobs2012trace} \label[lemma]{expfunctor} Defining $\textnormal{exp}_T(\langle X, k \rangle) := \langle TX, \mu_X, (F \mu_X \circ \lambda_{TX}) \circ Tk \rangle$ and $\textnormal{exp}_T(f) := Tf$ yields a functor $\textnormal{exp}_T: \textnormal{Coalg}(FT) \rightarrow \textnormal{Bialg}(\lambda)$. \end{lemma} We will sometimes refer to the functor which arises from the one above by precomposition with the canonical embedding of $F$-coalgebras into $FT$-coalgebras.
\begin{corollary} Defining $\textnormal{free}_T(\langle X, k \rangle) := \langle TX, \mu_X, \lambda_X \circ Tk \rangle$ and $\textnormal{free}_T(f) := Tf$ yields a functor $\textnormal{free}_T: \textnormal{Coalg}(F) \rightarrow \textnormal{Bialg}(\lambda)$ satisfying $\textnormal{free}_T(\langle X, k \rangle) = \textnormal{exp}_T(\langle X, F\eta_X \circ k\rangle)$. \end{corollary} \section{Succinct automata from bialgebras} \label{succinctbialgebra1} In this section we introduce the foundations of our theoretical contributions. We begin with the notion of a \textit{generator} \cite{arbib1975fuzzy} for an algebra over a monad and demonstrate how it can be used to translate a bialgebra into an equivalent free bialgebra. While the treatment is very general, we are particularly interested in the case in which the bialgebra is given by a deterministic automaton that has additional algebraic structure over a given monad, and the translation results in an automaton with side-effects in that monad. We will demonstrate that the theory in this section instantiates to the canonical RFSA \cite{DenisLT02}, the canonical \emph{nominal} RFSA \cite{moerman2019residual}, and the minimal xor automaton \cite{VuilleminG210}. \begin{definition}\deftag{Generator and basis} A \emph{generator} for a $T$-algebra $\langle X, h \rangle$ is a tuple $\langle Y, i, d \rangle$ consisting of an object $Y$, a morphism $i \colon Y \rightarrow X$, and a morphism $d \colon X \rightarrow TY$ such that $(h \circ Ti) \circ d = \textnormal{id}_X$. A generator is called a \emph{basis} if it additionally satisfies $d \circ (h \circ Ti) = \textnormal{id}_{TY}$. \end{definition} A generator for an algebra is called a \textit{scoop} by Arbib and Manes~\cite{arbib1975fuzzy}. Here, we additionally introduce the notion of a basis. 
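To make the two conditions concrete, the following small sketch (illustrative Python; the encoding of the three-element chain $0 \leq x \leq y$ and the helper names are ours, not notation from the text) checks that the join-irreducibles generate a finite join-semilattice, i.e. a finite $\mathcal{P}$-algebra, while the resulting decompositions are not unique, so this generator fails to be a basis:

```python
from itertools import chain, combinations

# A finite P-algebra is a complete join-semilattice; we encode the
# three-element chain 0 <= x <= y by rank, so that the join h is max.
rank = {"0": 0, "x": 1, "y": 2}

def h(phi):                     # h : P(L) -> L, the join (empty join = bottom)
    return max(phi, key=rank.get, default="0")

J = ["x", "y"]                  # the join-irreducibles J(L)
i = lambda y: y                 # i : J(L) -> L, the inclusion
d = lambda x: frozenset(y for y in J if rank[y] <= rank[x])

# Generator condition: (h . P(i)) . d = id_L holds.
assert all(h({i(y) for y in d(x)}) == x for x in rank)

# Basis condition d . (h . P(i)) = id fails: {y} and {x, y} are distinct
# formal combinations that both evaluate to y, so decomposition is not unique.
subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(J, r) for r in range(len(J) + 1))]
assert any(d(h({i(y) for y in S})) != S for S in subsets)
```

Running the script confirms that $(h \circ \mathcal{P}i) \circ d = \textnormal{id}$ holds while $d \circ (h \circ \mathcal{P}i) = \textnormal{id}$ fails, matching the distinction drawn in the definition above.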
Intuitively, one calls a set $Y$ that is embedded into an algebraic structure $X$ a generator for the latter if every element $x$ in $X$ admits a decomposition $d(x) \in TY$ into a formal combination of elements of $Y$ that evaluates to $x$. If the decomposition is moreover \emph{unique}, that is, $h \circ Ti$ is not only a \emph{surjection} with right-inverse $d$, but a \emph{bijection} with two-sided inverse $d$, then a generator is called a basis. Every algebra is generated by itself using the generator $\langle X, \textnormal{id}_X, \eta_X \rangle$, but not every algebra admits a basis. We are particularly interested in classes of set-based algebras for which \emph{every} algebra admits a \emph{size-minimal} generator, that is, no generator has a carrier of smaller size. In such a situation we will also speak of \emph{canonical} generators. \begin{figure}[t] \centering \begin{subfigure}[c]{\columnwidth} \centering \tiny \begin{subfigure}[b]{.6 \columnwidth} \centering \centering \adjustbox{valign=m}{ \begin{tikzpicture}[node distance=6em] \node[state, shape=rectangle, ] (0) {$\lbrack \emptyset \rbrack$}; \node[state,shape=rectangle, right of=0, initial, initial text=] (x) {$\lbrack \lbrace x \rbrace \rbrack$}; \node[state, shape=rectangle, right of=x, accepting] (y) {$\lbrack \lbrace y \rbrace \rbrack$}; \path[->] (0) edge[loop above] node{$a,b$} (0) (x) edge[loop above] node{$b$} (x) (x) edge[above, bend left] node{$a$} (y) (y) edge[below, bend left] node{$b$} (x) (y) edge[loop right] node{$a$} (y) ; \end{tikzpicture} } \qquad \adjustbox{valign=m}{ \resizebox{0.4 \columnwidth}{!}{% \begin{tabular}{ c|c|c|c } $\vee$ & $\lbrack \lbrace x \rbrace \rbrack$ & $ \lbrack \lbrace y \rbrace \rbrack$ & $\lbrack \emptyset \rbrack$ \\ \hline $\lbrack \lbrace x \rbrace \rbrack$ & $\lbrack \lbrace x \rbrace \rbrack$ & $\lbrack \lbrace y \rbrace \rbrack$ & $\lbrack \lbrace x \rbrace \rbrack$ \\ \hline $\lbrack \lbrace y \rbrace \rbrack$ & $\lbrack \lbrace y \rbrace \rbrack$ 
& $\lbrack \lbrace y \rbrace \rbrack$ & $\lbrack \lbrace y \rbrace \rbrack$ \\ \hline $\lbrack \emptyset \rbrack$ & $\lbrack \lbrace x \rbrace \rbrack$ & $\lbrack \lbrace y \rbrace \rbrack$ & $\lbrack \emptyset \rbrack$ \end{tabular} }} \caption{} \label{overlineml} \end{subfigure} \begin{subfigure}[b]{.3 \columnwidth} \tiny \centering \adjustbox{valign=m}{ \begin{tikzpicture}[node distance=6em] \node[state, shape=rectangle, initial, initial text=] (x) {$\lbrack \lbrace x \rbrace \rbrack$}; \node[state, shape=rectangle, right of=x, accepting] (y) {$\lbrack \lbrace y \rbrace \rbrack$}; \path[->] (x) edge[loop above] node{$a,b$} (x) (x) edge[above, bend left] node{$a$} (y) (y) edge[below, bend left] node{$a,b$} (x) (y) edge[loop right] node{$a$} (y) ; \end{tikzpicture} } \caption{} \label{jiromaton} \end{subfigure} \end{subfigure} \caption{(a) The minimal CSL-structured DFA for $\mathcal{L} = (a+b)^*a$; (b) The canonical RFSA for $\mathcal{L} = (a+b)^*a$.} \end{figure} \begin{example} \begin{itemize} \item A tuple $\langle Y, i, d \rangle$ is a generator for a $\mathcal{P}$-algebra $L = \langle X,h \rangle \simeq \langle X, \vee^h \rangle$ iff $x = \bigvee^h_{y \in d(x)} i(y) $ for all $x \in X$. Note that if $Y \subseteq X$ is a subset, then $i(y) = y$ for all $y \in Y$. If $L$ satisfies the descending chain condition, which is in particular the case if $X$ is finite, then defining $i(y) = y$ and $d(x) = \lbrace y \in J(L) \mid y \leq x \rbrace$ turns the set of join-irreducibles\footnote{A non-zero element $x\in L$ is called \emph{join-irreducible} if for all $y,z \in L$ such that $x=y \vee z$ one finds $x = y $ or $x = z$.} $J(L)$ into a size-minimal generator $\langle J(L), i, d \rangle$ for $L$, cf. \ifdefined\Cref{joinirreducstateminimal}\else \cite[Lemma~55]{arxiv}\fi. 
\item A tuple $\langle Y, i, d \rangle$ is a generator for an $\mathcal{R}$-algebra $V = \langle X,h \rangle \simeq \langle X, +^h, \cdot^h \rangle$ iff $x = \sum^h_{y \in Y} d(x)(y) \cdot^h i(y)$ for all $x \in X$. Since it is well-known that every vector space can be equipped with a basis, every $\mathcal{R}$-algebra $V$ admits a basis. One can show that a basis is size-minimal, cf. \ifdefined\Cref{xorbasisstateminimal}\else \cite[Lemma~52]{arxiv}\fi. \end{itemize} \end{example} It is enough to find generators for the underlying algebra of a bialgebra to derive an equivalent free bialgebra. This is because the algebraic and coalgebraic components are tightly intertwined via a distributive law. \begin{proposition} \label[proposition]{forgenerator-isharp-is-bialgebra-hom} Let $\langle X, h, k\rangle$ be a $\lambda$-bialgebra and let $\langle Y, i, d \rangle$ be a generator for the $T$-algebra $\langle X,h \rangle$. Then $h \circ Ti \colon \textnormal{exp}_T(\langle Y, Fd \circ k \circ i\rangle ) \rightarrow \langle X, h, k \rangle$ is a $\lambda$-bialgebra homomorphism. \end{proposition} Intuitively, the bialgebra $\langle X, h, k \rangle$ is a deterministic automaton with additional algebraic structure in the monad $T$ and, say, initial state $x \in X$, while the equivalent free bialgebra is the determinisation of the succinct automaton $Fd \circ k \circ i \colon Y \rightarrow FTY$ with side-effects in $T$ and initial state $d(x) \in TY$. The following result further observes that if one considers a basis for the underlying algebraic structure of a bialgebra, rather than just a generator, then the equivalent free bialgebra is in fact isomorphic to the original bialgebra. \begin{proposition} \label[proposition]{forbasis-isharp-is-bialgebra-iso} Let $\langle X, h, k\rangle$ be a $\lambda$-bialgebra and let $\langle Y, i, d \rangle$ be a basis for the $T$-algebra $\langle X,h \rangle$.
Then $h \circ Ti \colon \textnormal{exp}_T(\langle Y, Fd \circ k \circ i\rangle ) \rightarrow \langle X, h, k \rangle$ is a $\lambda$-bialgebra isomorphism. \end{proposition} We conclude this section by illustrating how \Cref{forgenerator-isharp-is-bialgebra-hom} can be used to construct the canonical RFSA \cite{DenisLT02}, the canonical nominal RFSA \cite{moerman2019residual}, and the minimal xor automaton \cite{VuilleminG210} for a regular language $\mathcal{L}$ over some alphabet $A$. All examples follow three analogous steps: \begin{enumerate} \item We construct the minimal\footnote{Minimal in the sense that every state is reachable by an element of $A^*$ and no two different states observe the same language.} pointed coalgebra $M_{\mathcal{L}}$ for the (nominal) set endofunctor $F = 2 \times (-)^{A}$ accepting $\mathcal{L}$. For the case $A = \lbrace a, b \rbrace$ and $\mathcal{L} = (a+b)^*a$, the coalgebra $M_{\mathcal{L}}$ is depicted in \Cref{m(l)}. \item We equip the former with additional algebraic structure in a monad $T$ (which is related to $F$ via a canonically induced distributive law $\lambda$) by generating the $\lambda$-bialgebra $\textnormal{free}_T(M_{\mathcal{L}})$. By identifying semantically equivalent states we consequently derive the minimal\footnote{Minimal in the sense that every state is reachable by an element of $T(A^*)$ and no two different states observe the same language.} (pointed) $\lambda$-bialgebra $\langle X, h, k \rangle$ for $\mathcal{L}$. \item We identify canonical generators $\langle Y, i, d \rangle$ for $\langle X, h \rangle$ and use \Cref{forgenerator-isharp-is-bialgebra-hom} to derive an equivalent succinct automaton $\langle Y, Fd \circ k \circ i \rangle$ with side-effects in $T$. 
\end{enumerate} \begin{figure*} \tiny \centering \begin{tikzpicture}[node distance=6em] \node[state, shape=rectangle, initial, initial text=] (L) {$\mathcal{L}$}; \node[state, shape=rectangle, right of=L] (aL) {$a^{-1}\mathcal{L}$}; \node[state, accepting, shape=rectangle, right of=aL] (A) {$A^*$}; \path[->] (L) edge[loop above] node{$A$} (L) (L) edge[above, bend left] node{$a$} (aL) (aL) edge[above, bend left] node{$a$} (A) (aL) edge[loop above] node{$A$} (aL) (aL) edge[above, bend left] node{$A$} (L) (A) edge[loop above] node{$A$} (A) (A) edge[above, bend left=45] node{$A$} (L) (A) edge[above, bend left] node{$A$} (aL) ; \end{tikzpicture} \caption{The orbit-finite representation of the canonical nominal RFSA for $\mathcal{L} = \lbrace v a w a u \mid v, w, u \in A^*, a \in A \rbrace$.} \label{fig:canonialnominalrfsa} \end{figure*} \begin{example}\deftag{The canonical RFSA} \label[example]{canonicalrfsaexample} Using the $\mathcal{P}$-algebra structure $h^{\mathcal{P}}: \mathcal{P}2 \rightarrow 2$ with $h^{\mathcal{P}}(\varphi) = \varphi(1)$, we derive a canonical distributive law $\lambda^{\mathcal{P}}$ between $F$ and the powerset monad $\mathcal{P}$. The minimal pointed $\lambda^{\mathcal{P}}$-bialgebra for $\mathcal{L} = (a+b)^*a$ with its underlying CSL structure is depicted in \Cref{overlineml}; the construction can be verified with the help of \ifdefined\Cref{freepowersetbialgebrastructure}\else \cite[Lemma~47]{arxiv}\fi. The partially ordered state space $L = \lbrace \lbrack \emptyset \rbrack \leq \lbrack \lbrace x \rbrace \rbrack \leq \lbrack \lbrace y \rbrace \rbrack \rbrace$ is necessarily finite, thus satisfies the descending chain condition, which turns the set of join-irreducibles into a size-minimal generator $\langle J(L), i, d\rangle$ with $i(y) = y$ and $d(x) = \lbrace y \in J(L) \mid y \leq x \rbrace$, cf. \ifdefined\Cref{joinirreducstateminimal}\else \cite[Lemma~55]{arxiv}\fi. In this case, the join-irreducibles are given by all non-zero states. 
The $\mathcal{P}$-succinct automaton consequently induced by \Cref{forgenerator-isharp-is-bialgebra-hom} is depicted in \Cref{jiromaton}; it can be recognised as the canonical RFSA, cf. e.g. \cite{MyersAMU15}. \end{example} \begin{example}\deftag{The canonical nominal RFSA} \label[example]{nominalexample} It is not hard to see that $F$ extends to a functor on the category of nominal sets; the usual strength function is equivariant \ifdefined(\Cref{equivariantstrength})\else\cite[Lemma~46]{arxiv}\fi; and $h^{\mathcal{P}_{\textnormal{n}}}: \mathcal{P}_{\textnormal{n}}2 \rightarrow 2$ with $h^{\mathcal{P}_{\textnormal{n}}}(\varphi) = \varphi(1)$ defines a $\mathcal{P}_{\textnormal{n}}$-algebra, which induces a canonical distributive law $\lambda^{\mathcal{P}_{\textnormal{n}}}$ between $F$ and the nominal powerset monad $\mathcal{P}_{\textnormal{n}}$. As in \cite{moerman2019residual}, let $\mathcal{L} = \lbrace v a w a u \mid v, w, u \in A^*, a \in A \rbrace$, then $a^{-n}\mathcal{L} = a^{-2}\mathcal{L} = A^*$ for $n \geq 2$, and $v^{-1}\mathcal{L} = \cup_{a \in A} a^{-\vert v \vert_a} \mathcal{L}$, where $\vert v \vert_a$ denotes the number of $a$'s that occur in $v$. In consequence, the nominal CSL underlying the minimal pointed $\lambda^{\mathcal{P}_{\textnormal{n}}}$-bialgebra is generated by the orbit-finite nominal set of join-irreducibles $\lbrace \mathcal{L} \rbrace \cup \lbrace a^{-1} \mathcal{L} \mid a \in A \rbrace \cup \lbrace A^* \rbrace$, which is equipped with the obvious $\textnormal{Perm}(A)$-action and satisfies the inclusion $\mathcal{L} \subseteq a^{-1} \mathcal{L} \subseteq A^*$. The orbit-finite representation of the $\mathcal{P}_{\textnormal{n}}$-succinct automaton consequently induced by \Cref{forgenerator-isharp-is-bialgebra-hom} is depicted in \Cref{fig:canonialnominalrfsa}. 
\end{example} \begin{example}\deftag{The minimal xor automaton} \label[example]{minimalxorexmaple} The $\mathcal{R}$-algebra structure $h^{\mathcal{R}}: \mathcal{R}2 \rightarrow 2$ with $h^{\mathcal{R}}(\varphi) = \varphi(1)$ induces a canonical distributive law $\lambda^{\mathcal{R}}$ between $F$ and the free vector space monad $\mathcal{R}$ over the two element field. The minimal pointed $\lambda^{\mathcal{R}}$-bialgebra accepting $\mathcal{L} = (a+b)^*a$ is depicted in \Cref{fm(l)xor} and coincides with the bialgebra freely generated by the $F$-coalgebra in \Cref{m(l)}. The construction can be verified using \ifdefined\Cref{freexorbialgebrastructure}\else\cite[Lemma~50]{arxiv}\fi. The underlying vector space structure necessarily has a basis; we choose the size-minimal generator $\langle Y, i, d \rangle$ with $Y = \lbrace \lbrace x \rbrace, \lbrace x, y \rbrace \rbrace$, $i(y) = y$, and $d(\emptyset) = \emptyset$, $d(\lbrace x \rbrace) = \lbrace \lbrace x \rbrace \rbrace$, $d(\lbrace y \rbrace) = \lbrace \lbrace x \rbrace$, $\lbrace x, y \rbrace \rbrace$, $d(\lbrace x, y \rbrace) = \lbrace \lbrace x, y \rbrace \rbrace$, which is sufficient by \ifdefined\Cref{xorbasisstateminimal}\else\cite[Lemma~52]{arxiv}\fi. The $\mathcal{R}$-succinct automaton induced by \Cref{forgenerator-isharp-is-bialgebra-hom} is depicted in \Cref{xorautomaton}; it can be recognised as the minimal xor automaton, cf. e.g. \cite{MyersAMU15}. \end{example} \section{Changing the type of succinct automata} \label{succinctbialgebra2} This section contains a generalisation of the approach in \Cref{succinctbialgebra1}. The extension is based on the observation that in the last section we implicitly considered \textit{two} types of monads: (i) a monad $S$ that describes the additional algebraic structure of a given deterministic automaton; and (ii) a monad $T$ that captures the side-effects of the succinct automaton that is obtained by the generator-based translation. 
In \Cref{forgenerator-isharp-is-bialgebra-hom}, the main result of the last section, the monads coincided, but to recover, for instance, the \'atomaton \cite{BrzozowskiT14} we will have to extend \Cref{forgenerator-isharp-is-bialgebra-hom} to a situation where $S$ and $T$ can differ. \subsection{Relating distributive laws} We now introduce the main technical ingredient of our extension: \textit{distributive law homomorphisms}. As before, we present the theory on the level of arbitrary bialgebras, even though we will later focus on the case where the coalgebraic dynamics are those of deterministic automata. Distributive law homomorphisms will allow us to shift a bialgebra over a monad $S$ to an equivalent bialgebra over a monad $T$, for which we can then find, analogous to \Cref{succinctbialgebra1}, an equivalent succinct representation. The notion we use is an instance of a much more general definition that allows one to relate distributive laws on two different categories. We restrict ourselves to the case where both distributive laws are given over the same behavioural endofunctor $F$. \begin{definition}\deftag{Distributive law homomorphism \cite{watanabe2002well, power2002combining}} Let $\lambda^{S}: SF \rightarrow FS$ and $\lambda^{T}: TF \rightarrow FT$ be distributive laws between monads $S$ and $T$ and an endofunctor $F$, respectively. A \textnormal{distributive law homomorphism} $\alpha: \lambda^{S} \rightarrow \lambda^{T}$ consists of a natural transformation $\alpha: T \Rightarrow S$ satisfying $\mu^S \circ \alpha_S \circ T \alpha = \alpha \circ \mu^T$, $\alpha \circ \eta^T = \eta^S$ and $\lambda^S \circ \alpha_F = F\alpha \circ \lambda^T$.
\end{definition} \begin{figure}[t] \centering \begin{subfigure}[c]{\columnwidth} \centering \begin{subfigure}[b]{.7 \columnwidth} \tiny \adjustbox{valign=m}{ \begin{tikzpicture}[node distance=6em] \node[state, shape=rectangle] (0) {$\emptyset$}; \node[state, shape=rectangle, right of=0, accepting] (xory) {$\lbrace x,y \rbrace$}; \node[state, shape=rectangle, below of=0, initial, initial text=] (x) {$\lbrace x \rbrace$}; \node[state, shape=rectangle, right of=x, accepting] (y) {$\lbrace y \rbrace$}; \path[->] (0) edge[loop above] node{$a,b$} (0) (xory) edge[above] node{$a,b$} (0) (x) edge[loop above] node{$b$} (x) (x) edge[above, bend left] node{$a$} (y) (y) edge[below, bend left] node{$b$} (x) (y) edge[loop right] node{$a$} (y) ; \end{tikzpicture} } \qquad \adjustbox{valign=m}{ \resizebox{0.45 \columnwidth}{!}{% \begin{tabular}[]{ c|c|c|c|c } $\oplus$ & $ \lbrace x \rbrace $ & $ \lbrace y \rbrace$ & $\lbrace x, y \rbrace$ & $\emptyset$ \\ \hline $ \lbrace x \rbrace $ & $\emptyset$ & $\lbrace x, y \rbrace$ & $\lbrace y \rbrace$ & $\lbrace x \rbrace$ \\ \hline $ \lbrace y \rbrace$ & $\lbrace x, y \rbrace$ & $\emptyset$ & $\lbrace x \rbrace$ & $\lbrace y \rbrace$\\ \hline $\lbrace x, y \rbrace$ & $\lbrace y \rbrace$ & $\lbrace x \rbrace$ & $\emptyset$ & $\lbrace x, y \rbrace$\\ \hline $\emptyset$ & $ \lbrace x \rbrace $ & $ \lbrace y \rbrace$ & $\lbrace x, y \rbrace$ & $\emptyset$ \end{tabular} }} \caption{} \label{fm(l)xor} \end{subfigure} \begin{subfigure}[b]{.2 \columnwidth} \tiny \adjustbox{valign=m}{ \begin{tikzpicture}[node distance=6em] \node[state, shape=rectangle, initial, initial text=] (x) {$\lbrace x \rbrace$}; \node[state, shape=rectangle, right of=x, accepting] (xory) {$\lbrace x,y \rbrace$}; \path[->] (x) edge[loop above] node{$a,b$} (x) (x) edge[above] node{$a$} (xory) ; \end{tikzpicture} } \caption{} \label{xorautomaton} \end{subfigure} \end{subfigure} \caption{(a) The minimal $\mathbb{Z}_2$-Vect structured DFA for $\mathcal{L} = (a+b)^*a$ 
(freely-generated by the DFA in \Cref{m(l)}); (b) Up to the choice of a basis, the minimal xor automaton for $\mathcal{L} = (a+b)^*a$.} \end{figure} The above definition is such that $\alpha$ induces a functor between the categories of $\lambda^S$- and $\lambda^T$-bialgebras. \begin{lemma}\tagcite{klin2015presenting, bonsangue2013presenting} \label[lemma]{inducedbialgebra} Let $\alpha: \lambda^S \rightarrow \lambda^T$ be a distributive law homomorphism. Then $\alpha \langle X, h, k\rangle := \langle X, h \circ \alpha_X, k \rangle$ and $\alpha(f) := f$ defines a functor $\alpha: \textnormal{Bialg}(\lambda^S) \rightarrow \textnormal{Bialg}(\lambda^T)$. \end{lemma} The next result is a straightforward consequence of \Cref{forgenerator-isharp-is-bialgebra-hom}, and may be strengthened to an isomorphism in case one is given a basis instead of a generator, analogous to \Cref{forbasis-isharp-is-bialgebra-iso}. It can be seen as a road map to the approach we propose in this section. \begin{corollary} \label[corollary]{generatorbialgebrahom} Let $\alpha: \lambda^S \rightarrow \lambda^T$ be a homomorphism between distributive laws and $\langle X,h,k \rangle$ a $\lambda^S$-bialgebra. If $\langle Y, i, d \rangle$ is a generator for the $T$-algebra $\langle X, h \circ \alpha_X \rangle$, then $ (h \circ \alpha_X) \circ Ti: \textnormal{exp}_T(\langle Y, Fd \circ k \circ i \rangle) \rightarrow \langle X, h \circ \alpha_X, k \rangle $ is a $\lambda^T$-bialgebra homomorphism. \end{corollary} \subsection{Deriving distributive law relations} We now turn to the procedure of deriving a distributive law homomorphism. In practice, coming up with a natural transformation and proving that it lifts to a distributive law homomorphism can be quite cumbersome. Fortunately, for certain cases, there is a way to simplify things significantly. 
For instance, as the next result shows, if, as in \eqref{induceddistrlaweq}, the involved distributive laws are induced by algebra structures $h^{S}$ and $h^{T}$ for an output set $B$, respectively, then one of the conditions is implied by a less convoluted constraint. \begin{lemma} \label[lemma]{distributivelawaxiomeasier} Let $\alpha: T \Rightarrow S$ be a natural transformation satisfying $h^{S} \circ \alpha_B = h^{T}$, then $\lambda^{S} \circ \alpha_F = F \alpha \circ \lambda^{T}$. \end{lemma} The next result shows that for the neighbourhood monad there exists a family of \textit{canonical} choices of distributive law homomorphisms parametrised by Eilenberg-Moore algebra structures on the output set $B = 2$. While it is well-known that such algebras induce a monad morphism, for instance in the coalgebraic modal logic community \cite{klin2004coalgebraic, schroder2008expressivity, hansen2014strong}, its commutativity with canonical distributive laws has not been observed before. Moreover, we provide a new formalisation in terms of the strength function, which allows the result to be lifted to strong monads and arbitrary output objects in categories other than that of sets and functions. \begin{proposition} \label[proposition]{algebrainduceddistributivellawhom} Any algebra $h: T2 \rightarrow 2$ over a set monad $T$ induces a homomorphism $\alpha^{h}: \lambda^{\mathcal{H}} \rightarrow \lambda^{h}$ between distributive laws by $\alpha^{h}_X := h^{2^X} \circ \textnormal{st} \circ T(\eta^{\mathcal{H}}_X)$.\end{proposition} The rest of the section is concerned with using \Cref{algebrainduceddistributivellawhom} and \Cref{generatorbialgebrahom} to derive canonical acceptors based on induced distributive law homomorphisms.
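Unwinding \Cref{algebrainduceddistributivellawhom} for $T = \mathcal{P}$ and $h = h^{\mathcal{P}}$ yields the pointwise description $\alpha^{h}_X(S)(\psi) = \bigvee_{x \in S} \psi(x)$. The following sketch (illustrative Python; the finite encoding is ours) checks the unit law of a monad morphism and the constraint $h^{\mathcal{H}} \circ \alpha_2 = h^{\mathcal{P}}$ from \Cref{distributivelawaxiomeasier} on a two-element carrier:

```python
from itertools import product

X = [0, 1]                      # finite carrier; it doubles as the output set 2
PSI = [dict(zip(X, bits)) for bits in product([0, 1], repeat=len(X))]
SUBSETS = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]

def h_P(phi):                   # disjunctive algebra h^P : P2 -> 2
    return int(any(phi))

def alpha(S):                   # alpha^h_X(S) : (2^X -> 2), computed pointwise
    return lambda psi: h_P(psi[x] for x in S)

# Unit law of a monad morphism: alpha(eta^P(x)) = eta^H(x) is evaluation at x.
assert all(alpha(frozenset({x}))(psi) == psi[x] for x in X for psi in PSI)

# Constraint of the lemma: h^H(alpha_2(phi)) = h^P(phi), where the
# neighbourhood algebra h^H evaluates its argument at the identity id_2.
id_2 = {0: 0, 1: 1}
assert all(alpha(phi)(id_2) == h_P(phi) for phi in SUBSETS)
```

Here the carrier doubles as the output set $2 = \lbrace 0, 1 \rbrace$, so the same dictionaries serve both as predicates $\psi$ and as the identity $\textnormal{id}_2$.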
\begin{figure}[t] \centering \begin{subfigure}[c]{\columnwidth} \centering \begin{subfigure}[b]{0.5 \columnwidth} \center \tiny \begin{tikzpicture}[node distance= 2em, ] \node[state, initial, shape=rectangle, initial text=] (1) {$\lbrack \lbrace \lbrace x \rbrace, \lbrace x, y \rbrace \rbrace \rbrack$}; \node[state, shape=rectangle, right = of 1] (3) {$\lbrack \emptyset \rbrack$}; \node[state, shape=rectangle, below = of 1, accepting] (5) {$\lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace \rbrace \rbrack$}; \node[state, shape=rectangle, right = of 5, accepting] (15) {$\lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace, \emptyset \rbrace \rbrack$}; \path[->] (3) edge[loop above] node{$a,b$} (3) (1) edge[loop above] node{$b$} (1) (1) edge[right, bend left] node{$a$} (5) (5) edge[loop below] node{$a$} (5) (5) edge[left, bend left] node{$b$} (1) (15) edge[loop below] node{$a,b$} (15) ; \end{tikzpicture} \\ \resizebox{0.3 \columnwidth}{!}{% \begin{tabular}[]{ c|c|c|c|c} $\vee$ & $1$ & $2$ & $3$ & $4$ \\ \hline $1$ & $1$ & $1$ & $3$ & $4$ \\ \hline $2$ & $1$ & $2$ & $3$ & $4$ \\ \hline $3$ & $3$ & $3$ & $3$ & $4$ \\ \hline $4$ & $4$ & $4$ & $4$ & $4$\\ \end{tabular} } \resizebox{0.3 \columnwidth}{!}{% \begin{tabular}[]{ c|c|c|c|c} $\wedge$ & $1$ & $2$ & $3$ & $4$ \\ \hline $1$ & $1$ & $2$ & $1$ & $1$ \\ \hline $2$ & $2$ & $2$ & $2$ & $2$ \\ \hline $3$ & $1$ & $2$ & $3$ & $3$ \\ \hline $4$ & $1$ & $2$ & $3$ & $4$ \end{tabular} } \caption{} \label{overlinem(l)distro} \end{subfigure} \begin{subfigure}[b]{0.4 \columnwidth} \centering \tiny \begin{tikzpicture}[node distance=2em] \node[state, initial, shape=rectangle, initial text=] (1) {$\lbrack \lbrace \lbrace x \rbrace, \lbrace x, y \rbrace \rbrace \rbrack$}; \node[state, shape=rectangle,below= of 1, accepting] (5) {$\lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace \rbrace \rbrack$}; \node[state, shape=rectangle, right= of 5, accepting] (15) 
{$\lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace, \emptyset \rbrace \rbrack$}; \path[->] (1) edge[loop above] node{$a,b$} (1) (1) edge[right, bend left] node{$a$} (5) (5) edge[loop below] node{$a$} (5) (5) edge[left, bend left] node{$a,b$} (1) (15) edge[loop below] node{$a,b$} (15) (15) edge[bend right, right] node{$a,b$} (1) (15) edge[below] node{$a,b$} (5) ; \end{tikzpicture} \caption{} \label{distromaton} \end{subfigure} \end{subfigure} \caption{(a) The minimal CDL-structured DFA for $\mathcal{L} = (a+b)^*a$, where $1 \equiv \lbrack \lbrace \lbrace x \rbrace, \lbrace x, y \rbrace \rbrace \rbrack$, $2 \equiv \lbrack \emptyset \rbrack$, $3 \equiv \lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace \rbrace \rbrack$, $4 \equiv \lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace, \emptyset \rbrace \rbrack$; (b) The distromaton for $\mathcal{L} = (a+b)^*a$.} \end{figure} \subsection{Example: The \'atomaton} \label{atomatonexample} We will now justify the previous informal construction of the \'atomaton. As hinted before, the \'atomaton can be recovered by relating the neighbourhood monad $\mathcal{H}$---whose algebras are complete \emph{atomic} Boolean algebras (CABAs)---and the powerset monad $\mathcal{P}$. Formally, as a consequence of \Cref{algebrainduceddistributivellawhom} we obtain the following. \begin{corollary} \label[corollary]{alphapowersetneighbourhooddistrlaw} Let $\alpha_X: \mathcal{P}X \rightarrow \mathcal{H}X$ satisfy $\alpha_X(\varphi)(\psi) = \bigvee_{x \in X} \varphi(x) \wedge \psi(x)$, then $\alpha$ constitutes a distributive law homomorphism $\alpha: \lambda^{\mathcal{H}} \rightarrow \lambda^{\mathcal{P}}$. \end{corollary} The next statement follows from a well-known Stone-type representation theorem for CABAs \cite{taylor2002subspaces}.
\begin{lemma} \label[lemma]{basisshiftecabapowerset} Let $\alpha_X: \mathcal{P}X \rightarrow \mathcal{H}X$ satisfy $\alpha_X(\varphi)(\psi) = \bigvee_{x \in X} \varphi(x) \wedge \psi(x)$. If $B = \langle X, h \rangle$ is a $\mathcal{H}$-algebra, then $\langle \textnormal{At}(B), i, d \rangle$ with $i(a) = a$ and $d(x) = \lbrace a \in \textnormal{At}(B) \mid a \leq x \rbrace$ is a basis for the $\mathcal{P}$-algebra $\langle X, h \circ \alpha_X \rangle$. \end{lemma} The \'atomaton for the regular language $\mathcal{L} = (a+b)^*a$, for example, can now be obtained as follows. First, we construct the minimal pointed $\lambda^{\mathcal{H}}$-bialgebra accepting $\mathcal{L}$, which is depicted in \Cref{overlinem(l)atom} together with its underlying CABA structure $B$. The construction can be verified with the help of \ifdefined\Cref{freeneighbourhoodbialgebrastructure}\else\cite[Lemma~48]{arxiv}\fi. Using the distributive law homomorphism $\alpha$ of \Cref{alphapowersetneighbourhooddistrlaw}, it can be translated into an equivalent pointed $\lambda^{\mathcal{P}}$-bialgebra with underlying CSL-structure $\alpha(B)$. By \Cref{basisshiftecabapowerset} the atoms $\textnormal{At}(B)$ of $B$ form a basis for $\alpha(B)$. In this case the atoms are given by $\lbrack \lbrace \lbrace x \rbrace, \lbrace x, y \rbrace \rbrace \rbrack, \lbrack \lbrace \lbrace y \rbrace \rbrace \rbrack$ and $\lbrack \lbrace \emptyset \rbrace \rbrack$. The $\mathcal{P}$-succinct automaton consequently induced by \Cref{generatorbialgebrahom} is depicted in \Cref{atomaton}; it can be recognised as the \'atomaton, cf. e.g. \cite{MyersAMU15}. \subsection{Example: The distromaton} \label{distromatonexample} We shall now use our framework to recover another canonical non-deterministic acceptor: the \textit{distromaton} \cite{MyersAMU15}. 
As the name suggests, it can be constructed by relating the monotone neighbourhood monad $\mathcal{A}$---whose algebras are completely \textit{distributive} lattices---and the powerset monad $\mathcal{P}$. Formally, the relationship can be established by the same natural transformation we used for the \'atomaton. \begin{corollary} \label[corollary]{neighbourhoodpowersetmorphism} Let $\alpha_X: \mathcal{P}X \rightarrow \mathcal{A}X$ satisfy $\alpha_X(\varphi)(\psi) = \bigvee_{x \in X} \varphi(x) \wedge \psi(x)$, then $\alpha$ constitutes a distributive law homomorphism $\alpha: \lambda^{\mathcal{A}} \rightarrow \lambda^{\mathcal{P}}$. \end{corollary} The distromaton for the regular language $\mathcal{L} = (a+b)^*a$, for example, can now be obtained as follows. First, we construct the minimal pointed $\lambda^{\mathcal{A}}$-bialgebra for $\mathcal{L}$, depicted in \Cref{overlinem(l)distro} with its underlying CDL structure $h$. The construction can be verified with the help of \ifdefined\Cref{freealternatingbialgebrastructure}\else\cite[Lemma~49]{arxiv}\fi. Using the distributive law homomorphism $\alpha$ in \Cref{neighbourhoodpowersetmorphism}, it can be translated into an equivalent pointed $\lambda^{\mathcal{P}}$-bialgebra with underlying CSL structure $L = h \circ \alpha_X$. Its partially ordered state space $ \lbrack \emptyset \rbrack \leq \lbrack \lbrace \lbrace x \rbrace, \lbrace x, y \rbrace \rbrace \rbrack \leq \lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace \rbrace \rbrack \leq \lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace, \emptyset \rbrace \rbrack $ is necessarily finite, which turns the set of join-irreducibles into a size-minimal generator $\langle J(L), i, d \rangle$ for $L$, where $i(y) = y$ and $d(x) = \lbrace y \in J(L) \mid y \leq x \rbrace$. In this case, the join-irreducibles are given by all non-zero states. 
The $\mathcal{P}$-succinct automaton consequently induced by \Cref{generatorbialgebrahom} is depicted in \Cref{distromaton} and can be recognised as the distromaton, cf. \cite{MyersAMU15}. \subsection{Example: The minimal xor-CABA automaton} \label{minimalxorcabaexample} We conclude this section by relating the neighbourhood monad $\mathcal{H}$ with the free vector space monad $\mathcal{R}$ over the unique two element field $\mathbb{Z}_2$. In particular, we derive a new canonical succinct acceptor for regular languages, which we call the \emph{minimal xor-CABA automaton}. Intuitively, the next result says that every CABA can be equipped with a symmetric-difference-like operation that turns it into a vector space over the two element field. \begin{corollary} \label[corollary]{alphaxorneighbourhooddistrlaw} Let $\alpha_X: \mathcal{R}X \rightarrow \mathcal{H}X$ satisfy $\alpha_X(\varphi)(\psi) = \bigoplus_{x \in X} \varphi(x) \cdot \psi(x)$, then $\alpha$ constitutes a distributive law homomorphism $\alpha: \lambda^{\mathcal{H}} \rightarrow \lambda^{\mathcal{R}}$. \end{corollary} Since every vector space admits a basis, the above result leads to the definition of a new acceptor of regular languages. Let $\alpha$ denote the homomorphism in \Cref{alphaxorneighbourhooddistrlaw} and $F$ the endofunctor given by $FX = 2 \times X^{A}$.
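A basis for such a $\mathbb{Z}_2$-vector space can be extracted mechanically by Gaussian elimination. The following sketch (illustrative Python; encoding subsets as bitmasks is our own convention) reproduces the basis $\lbrace \lbrace x \rbrace, \lbrace x, y \rbrace \rbrace$ and the decomposition of $\lbrace y \rbrace$ used in \Cref{minimalxorexmaple}:

```python
def insert_vec(basis, v):
    """Standard xor-basis insertion: basis maps leading bit -> vector.
    Returns True iff bitmask v is Z_2-independent of the vectors kept."""
    while v:
        lead = v.bit_length() - 1
        if lead not in basis:
            basis[lead] = v
            return True
        v ^= basis[lead]        # xor = symmetric difference of subsets
    return False

def decompose(v, basis):
    """Express v as a xor of basis vectors (assumes v lies in the span)."""
    used = []
    while v:
        lead = v.bit_length() - 1
        used.append(basis[lead])
        v ^= basis[lead]
    return used

# States of the minimal bialgebra for (a+b)^*a, with bit 0 = x and bit 1 = y.
empty, x, y, xy = 0b00, 0b01, 0b10, 0b11

basis = {}
kept = [v for v in (x, xy, y, empty) if insert_vec(basis, v)]
assert kept == [x, xy]                        # the basis {{x}, {x, y}}
assert set(decompose(y, basis)) == {x, xy}    # d({y}) = {{x}, {x, y}}
```

The routine `insert_vec` is the classical linear-independence test over $\mathbb{Z}_2$; feeding it the states in any order yields some basis, and `decompose` recovers the coordinates that define the succinct automaton.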
\begin{figure*}[t] \centering \begin{tikzpicture}[node distance=9em] \node[] (H) {$\mathcal{H}$}; \node[left of = H] (P) {$\mathcal{P}$}; \node[right of = H] (X) {$\mathcal{R}$}; \node[left of = P] (A) {$\mathcal{A}$}; \path[->] (H) edge[above] node{\tiny \'atomaton} (P) (H) edge[above, dashed] node{\tiny minimal xor-CABA} (X) (P) edge[loop below] node{\tiny canonical RFSA} (P) (X) edge[loop below] node{\tiny minimal xor} (X) (A) edge[above] node{\tiny distromaton} (P) ; \end{tikzpicture} \caption{The minimal xor-CABA automaton is to the minimal xor automaton what the \'atomaton is to the canonical RFSA.} \label{minimalxorcabadiagram} \end{figure*} \begin{definition}\deftag{Minimal xor-CABA automaton} Let $\langle X, h, k \rangle$ be the minimal $x$-pointed $\lambda^{\mathcal{H}}$-bialgebra accepting a regular language $\mathcal{L} \subseteq A^*$, and $B = \langle Y, i, d \rangle$ a basis for the $\mathcal{R}$-algebra $\langle X, h \circ \alpha_X \rangle$. The \emph{minimal xor-CABA automaton} for $\mathcal{L}$ with respect to $B$ is the $d(x)$-pointed $\mathbb{Z}_2$-weighted automaton $Fd \circ k \circ i$. \end{definition} In \Cref{minimalxorcabadiagram} it is indicated how the canonical acceptors of this paper, including the minimal xor-CABA automaton, are based on relations between pairs of monads. For the regular language $\mathcal{L} = (a+b)^*a$, the above definition instantiates as follows. First, as for the \'atomaton, we construct the minimal pointed $\lambda^{\mathcal{H}}$-bialgebra $\langle X, h, k\rangle$ for $\mathcal{L}$; it is depicted in \Cref{overlinem(l)atom}. As one easily verifies, the $\mathbb{Z}_2$-vector space $\langle X, h \circ \alpha_X \rangle$ is induced by the symmetric difference operation $\oplus$ on subsets.
Using the notation in \Cref{overlinem(l)atom}, we choose the basis $\langle Y, i, d \rangle$ with $Y = \lbrace 4,6,7,8 \rbrace$; $i(y) = y$; and $d(1) = 7 \oplus 8$, $d(2) = \emptyset$, $d(3) = 6 \oplus 7$, $d(4) = 4$, $d(5) = 6 \oplus 7 \oplus 8$, $d(6) = 6$, $d(7) = 7$, $d(8) = 8$. The induced $d(1) = 7 \oplus 8$-pointed $\mathcal{R}$-succinct automaton accepting $\mathcal{L}$, i.e. the minimal xor-CABA automaton, is depicted in \Cref{minimalxorcaba}. \section{Minimality}\label{minimality} \newcommand{\textnormal{im}}{\textnormal{im}} \newcommand{\textnormal{gen}}{\textnormal{gen}} \newcommand{\textnormal{obs}}{\textnormal{obs}} \newcommand{\textnormal{ext}}{\textnormal{ext}} \newcommand{\textnormal{exp}}{\textnormal{exp}} In this section we restrict ourselves to the category of (nominal) sets. We show that every language satisfying a suitable property parametric in monads $S$ and $T$ admits a size-minimal succinct automaton of type $T$ accepting it. As a main result we obtain \Cref{minimalitytheorem}, which is a generalisation of parts of \cite[Theorem 4.8]{MyersAMU15}. In \Cref{minimalityimplications} we instantiate the former to subsume known minimality results for canonical automata, to prove the xor-CABA automaton minimal, and to establish a size-comparison between different acceptors. Given a distributive law homomorphism $\alpha: \lambda^S \rightarrow \lambda^T$, let $\textnormal{ext}: \textnormal{Coalg}(FT) \rightarrow \textnormal{Coalg}(FS)$ be the functor given by $\textnormal{ext}(\langle X, k \rangle) = \langle X, F\alpha_X \circ k \rangle$ and $\textnormal{ext}(f) = f$. Moreover, let $\textnormal{exp}_U: \textnormal{Coalg}(FU) \rightarrow \textnormal{Bialg}(\lambda^U)$ for $U \in \lbrace S, T \rbrace$ denote the functor introduced in \Cref{expfunctor}. \begin{proposition} \label[proposition]{alphaunderlies} Let $\alpha: \lambda^{S} \rightarrow \lambda^T$ be a distributive law homomorphism. 
Then $\alpha_X: TX \rightarrow SX$ underlies a natural transformation $\alpha: \textnormal{exp}_T \Rightarrow \alpha \circ \textnormal{exp}_S \circ \textnormal{ext}$ between functors of type $\textnormal{Coalg}(FT) \rightarrow \textnormal{Bialg}(\lambda^T)$. \end{proposition} In the above situation a $T$-succinct automaton admits \emph{two} semantics, induced by lifting it to a bialgebra over either $\lambda^S$ or $\lambda^T$. The next definition introduces a notion of \emph{closedness} that captures those cases in which the images of both semantics coincide. \begin{definition}\deftag{$\alpha$-closed succinct automaton} \label[definition]{closedsuccinctdef} Let $\alpha: \lambda^{S} \rightarrow \lambda^T$ be a distributive law homomorphism. We say that a $T$-succinct automaton $\mathcal{X}$ is \emph{$\alpha$-closed} if the unique diagonal below is an isomorphism: \[ \begin{tikzcd} \textnormal{exp}_T(\mathcal{X}) \arrow[twoheadrightarrow]{r}{\textnormal{obs}} \arrow{d}[left]{\textnormal{obs} \circ \alpha_X} & \textnormal{im}(\textnormal{obs}_{\textnormal{exp}_T(\mathcal{X})}) \arrow[dashed]{dl}{} \arrow{d}{} \\ \textnormal{im}(\textnormal{obs}_{\alpha(\textnormal{exp}_S(\textnormal{ext}(\mathcal{X})))}) \arrow[hookrightarrow]{r}{} & \Omega \end{tikzcd}. \] \end{definition} Next we show that succinct automata obtained from certain generators are $\alpha$-closed. \begin{lemma} \label[lemma]{generatorclosed} Let $\alpha: \lambda^{S} \rightarrow \lambda^T$ be a distributive law homomorphism and $\langle X, h, k \rangle$ a $\lambda^S$-bialgebra. If $\langle Y, i, d \rangle$ is a generator for $\langle X, h \circ \alpha_X \rangle$, then $\langle Y, Fd \circ k \circ i\rangle$ is $\alpha$-closed. \end{lemma} We are now able to state our main result, which is a generalisation of parts of \cite[Theorem 4.8]{MyersAMU15}.
\begin{theorem}[Minimal succinct automata] \label[theorem]{minimalitytheorem} Let $\mathcal{L} \in \Omega$ be a language such that there exists a minimal pointed $\lambda^S$-bialgebra $\mathbb{M}$ accepting $\mathcal{L}$ and the underlying algebra of $\alpha(\mathbb{M})$ admits a size-minimal generator. Then there exists a pointed $\alpha$-closed $T$-succinct automaton $\mathcal{X}$ accepting $\mathcal{L}$ such that: \begin{itemize} \item for any pointed $\alpha$-closed $T$-succinct automaton $\mathcal{Y}$ accepting $\mathcal{L}$ we have that $\textnormal{im}(\textnormal{obs}_{\textnormal{exp}_T(\mathcal{X})}) \subseteq \textnormal{im}(\textnormal{obs}_{\textnormal{exp}_T(\mathcal{Y})})$; \item if $\textnormal{im}(\textnormal{obs}_{\textnormal{exp}_T(\mathcal{X})}) = \textnormal{im}(\textnormal{obs}_{\textnormal{exp}_T(\mathcal{Y})})$, then $\vert X \vert \leq \vert Y \vert$, where $X$ and $Y$ are the carriers of $\mathcal{X}$ and $\mathcal{Y}$, respectively. \end{itemize} \end{theorem} For a $T$-succinct automaton $\mathcal{X}$ let us write $\textnormal{obs}^{\dag}_{\mathcal{X}} := \textnormal{obs}_{\textnormal{exp}_T(\mathcal{X})} \circ \eta^T_X: X \rightarrow \Omega$ for a generalisation of the semantics of non-deterministic automata. The next result provides an equivalent characterisation of $\alpha$-closedness in terms of $\textnormal{obs}^{\dag}$ that will be particularly useful in \Cref{minimalityimplications}. \begin{lemma} \label[lemma]{imgobssuccinctsem} Let $\alpha: \lambda^{S} \rightarrow \lambda^T$ be a distributive law homomorphism.
For any $T$-succinct automaton $\mathcal{X}$ it holds that $\textnormal{im}(\textnormal{obs}_{\exp_T(\mathcal{X})}) = \textnormal{im}(h \circ \alpha_{\Omega} \circ T(\textnormal{obs}^{\dag}_{\mathcal{X}}))$ and $\textnormal{im}(\textnormal{obs}_{\alpha(\exp_S(\textnormal{ext}(\mathcal{X})))}) = \textnormal{im}(h \circ S(\textnormal{obs}^{\dag}_{\mathcal{X}}))$, where $\langle \Omega, h, k \rangle$ is the final $\lambda^S$-bialgebra. \end{lemma} \begin{figure}[t] \tiny \centering \begin{tikzpicture}[node distance=3em] \node[state, initial, shape=rectangle, initial text=, accepting] (7) {$\lbrack \lbrace \lbrace y \rbrace, \emptyset \rbrace \rbrack$}; \node[state, shape=rectangle, right = of 7, accepting] (6) {$\lbrack \lbrace \lbrace y \rbrace \rbrace \rbrack$}; \node[state, shape=rectangle, initial, right = of 6, accepting, initial text=] (8) {$\lbrack \lbrace \lbrace x \rbrace, \lbrace y \rbrace, \lbrace x, y \rbrace, \emptyset \rbrace \rbrack$}; \node[state, shape=rectangle, right = of 8] (4) {$\lbrack \lbrace \lbrace x, y \rbrace, \emptyset \rbrace \rbrack$}; \path[->] (7) edge[loop above] node{$a,b$} (7) (7) edge[above] node{$a$} (6) (8) edge[loop above] node{$a,b$} (8) (4) edge[above] node{$a,b$} (8) ; \end{tikzpicture} \caption{The minimal xor-CABA automaton for $\mathcal{L} = (a+b)^*a$.} \label{minimalxorcaba} \end{figure} \subsection{Applications to canonical automata} \label{minimalityimplications} In this section we instantiate \Cref{minimalitytheorem} to characterise a variety of canonical acceptors from the literature as size-minimal representatives among subclasses of $\alpha$-closed succinct automata, i.e. those automata whose images of the two semantics induced by $\alpha$ coincide. We begin with the canonical RFSA and the minimal xor automaton, for which $\alpha$ is the identity and $\alpha$-closedness is therefore trivial.
In \cite{DenisLT02} the canonical RFSA for $\mathcal{L}$ was characterised as size-minimal among those NFAs accepting $\mathcal{L}$ for which states accept a residual of $\mathcal{L}$. More recently, it was shown that the class can in fact be extended to those NFAs accepting $\mathcal{L}$ for which states accept a \emph{union} of residuals of $\mathcal{L}$ \cite{MyersAMU15}. The next result recovers the latter as a consequence of the second point in \Cref{minimalitytheorem}. We write $\overline{Y}$ for the algebraic closure\footnote{If $Y = \textnormal{im}(f)$ for some morphism $f$ with codomain $\langle X,h \rangle$, the closure is given by the induced $T$-algebra structure on $\textnormal{im}(h \circ Tf)$.} of a subset $Y \subseteq X$ of some $T$-algebra $X$. \begin{corollary} \label[corollary]{canonicalrfsaminimal} The canonical RFSA for $\mathcal{L}$ is size-minimal among non-deterministic automata $\mathcal{Y}$ accepting $\mathcal{L}$ with $ \overline{\textnormal{im}(\textnormal{obs}^{\dag}_{\mathcal{Y}})}^{\textnormal{CSL}} \subseteq \overline{\textnormal{Der}(\mathcal{L})}^{\textnormal{CSL}} $. \end{corollary} The second condition in \Cref{minimalitytheorem} is always satisfied for a \emph{reachable} succinct automaton $\mathcal{Y}$. Since for $\mathbb{Z}_2$-weighted automata it is possible to find an equivalent reachable $\mathbb{Z}_2$-weighted automaton with at most as many states (which is not necessarily the case for NFAs), the minimal xor automaton is minimal among \emph{all} $\mathbb{Z}_2$-weighted automata, as was already known, see for instance \cite{VuilleminG210}. \begin{corollary} \label[corollary]{minimalxor} The minimal xor automaton for $\mathcal{L}$ is size-minimal among $\mathbb{Z}_2$-weighted automata accepting $\mathcal{L}$.
\end{corollary} For the \'atomaton, the distromaton, and the minimal xor-CABA automaton the distributive law homomorphism $\alpha$ in play is non-trivial; $\alpha$-closedness translates to the equalities between closures stated below. In all three cases it is possible to drop the inclusion induced by the second point in \Cref{minimalitytheorem}. \begin{corollary} \label[corollary]{minimalityatomaton} The \'atomaton for $\mathcal{L}$ is size-minimal among non-deterministic automata $\mathcal{Y}$ accepting $\mathcal{L}$ with $\overline{\textnormal{im}(\textnormal{obs}^{\dag}_{\mathcal{Y}})}^{\textnormal{CSL}} = \overline{\textnormal{im}(\textnormal{obs}^{\dag}_{\mathcal{Y}})}^{\textnormal{CABA}}$. \end{corollary} The above result can be shown to be similar to \cite[Theorem 4.9]{MyersAMU15}, which characterises the \'atomaton as size-minimal among non-deterministic automata whose accepted languages are \emph{closed under complement}. The result below is very similar to a characterisation of the distromaton as size-minimal among non-deterministic automata whose accepted languages are \emph{closed under intersection} \cite[Theorem 4.13]{MyersAMU15}. \begin{corollary} \label[corollary]{distromatonminimal} The distromaton for $\mathcal{L}$ is size-minimal among non-deterministic automata $\mathcal{Y}$ accepting $\mathcal{L}$ with $ \overline{\textnormal{im}(\textnormal{obs}^{\dag}_{\mathcal{Y}})}^{\textnormal{CSL}} = \overline{\textnormal{im}(\textnormal{obs}^{\dag}_{\mathcal{Y}})}^{\textnormal{CDL}}$. \end{corollary} The size-minimality result for the newly discovered minimal xor-CABA automaton is analogous to the ones for the \'atomaton and the distromaton.
\begin{corollary} \label[corollary]{corminimalxorcaba} The minimal xor-CABA automaton for $\mathcal{L}$ is size-minimal among $\mathbb{Z}_2$-weighted automata $\mathcal{Y}$ accepting $\mathcal{L}$ with $ \overline{\textnormal{im}(\textnormal{obs}^{\dag}_{\mathcal{Y}})}^{\mathbb{Z}_2\textnormal{-Vect}} = \overline{\textnormal{im}(\textnormal{obs}^{\dag}_{\mathcal{Y}})}^{\textnormal{CABA}}$. \end{corollary} We conclude with a size-comparison between acceptors that is parametric in the closure of derivatives. \begin{corollary} \label[corollary]{sizecomparison} \begin{itemize} \item If $\overline{\textnormal{Der}(\mathcal{L})}^{\mathbb{Z}_2\textnormal{-Vect}} = \overline{\textnormal{Der}(\mathcal{L})}^{\textnormal{CABA}}$, then the minimal xor automaton and the minimal xor-CABA automaton for $\mathcal{L}$ are of the same size. \item If $\overline{\textnormal{Der}(\mathcal{L})}^{\textnormal{CSL}} = \overline{\textnormal{Der}(\mathcal{L})}^{\textnormal{CDL}}$, then the canonical RFSA and the distromaton for $\mathcal{L}$ are of the same size. \item If $\overline{\textnormal{Der}(\mathcal{L})}^{\textnormal{CSL}} = \overline{\textnormal{Der}(\mathcal{L})}^{\textnormal{CABA}}$, then the canonical RFSA and the \'atomaton for $\mathcal{L}$ are of the same size. \end{itemize} \end{corollary} \section{Related work} \label{relatedwork} One of the main motivations for the present paper is provided by active learning algorithms for the derivation of succinct state-based models \cite{angluin1987learning}. A major challenge in learning non-deterministic models is the lack of a canonical target acceptor for a given language \cite{DenisLT02}. The problem has been independently approached for different variants of non-determinism, often with the idea of finding a subclass admitting a unique representative \cite{esposito2002learning,berndt2017learning} such as the canonical RFSA, the minimal xor automaton, or the \'atomaton.
A more general and unifying perspective on learning automata that may not have a canonical target was given by Van Heerdt \cite{van2017learning, van2016master, van2020phd}. One of the central notions in this work is the concept of a scoop, originally introduced by Arbib and Manes \cite{arbib1975fuzzy} and here referred to as a generator. The main contribution in \cite{van2017learning} is a general procedure to find irreducible sets of generators, which thus restricts the work to the category of sets. In the present paper we generally work over arbitrary categories, although we assume the existence of a minimal set-based generator in \Cref{minimalitytheorem}. Furthermore, the work of Van Heerdt contains no size-minimality results. Closely related to the present paper is the work of Myers et al.\ \cite{MyersAMU15}, who present a coalgebraic construction for canonical non-deterministic automata. They cover the canonical RFSA, the minimal xor automaton, the \'atomaton, and the distromaton. The underlying idea in \cite{MyersAMU15} for finding succinct representations is similar to ours: first they build the minimal DFA for a regular language in a locally finite variety, then they apply an equivalence between the category of finite algebras and a suitable category of finite structured sets and relations. On the one hand, the category of finite algebras in a locally finite variety can be translated into our setting by considering a category of algebras over a monad preserving finite sets. In fact, modulo this translation, many of the categories considered here already appear in \cite{MyersAMU15}, e.g.\ vector spaces, Boolean algebras, complete join-semilattices, and distributive lattices. On the other hand, their construction seems to be restricted to the category of sets and non-deterministic automata, while we work over arbitrary monads on arbitrary categories.
Their work does not provide a general algorithm to construct a succinct automaton, i.e., the specifics vary with the equivalences considered, while we give a general definition and a soundness argument in \Cref{generatorbialgebrahom}. While Myers et al.\ give minimality results for a wide range of acceptors, each proof follows case-specific arguments. In \Cref{minimalitytheorem} we provide a unifying minimality result for succinct automata that generalises parts of \cite[Theorem 4.8]{MyersAMU15} and subsumes most of their results \cite[Theorem 4.9, Theorem 4.10, Corollary 4.11, Theorem 4.13]{MyersAMU15}. \section{Discussion and future work} \label{discussion} We have presented a general categorical framework based on bialgebras and distributive law homomorphisms for the derivation of canonical automata. The framework instantiates to a wide range of well-known examples from the literature and has allowed us to discover a previously unknown canonical acceptor for regular languages. Finally, we presented a theorem that subsumes previously independently proven minimality results for canonical acceptors, implies new characterisations, and allows us to make size-comparisons between canonical automata. In the future, we would like to cover other examples, such as the canonical probabilistic RFSA \cite{esposito2002learning} and the canonical alternating RFSA \cite{berndt2017learning, angluin2015learning}. Probabilistic automata of the type in \cite{esposito2002learning} are typically modelled as $TF$-coalgebras instead of $FT$-coalgebras \cite{jacobs2012trace}, and thus will need a shift in perspective. For alternating RFSAs we expect that a canonical form can be constructed in the spirit of this paper, from generators for algebras over the neighbourhood monad, by interpreting the join-dense atoms of a CABA as a full meet of ground elements.
Generally, it would be valuable to have a more systematic treatment of the range of available monads and distributive law homomorphisms \cite{zwart2019no}, making use of the fact that distributive law homomorphisms compose. Further generalisation in another direction could be achieved by distributive laws between monads and endofunctors on different categories. For instance, we expect that operations on automata such as the product can be captured by homomorphisms between distributive laws of this more general type. Finally, we would like to lift existing double-reversal characterisations of the minimal DFA \cite{brzozowski1962canonical}, the \'atomaton \cite{BrzozowskiT14}, the distromaton \cite{MyersAMU15}, and the minimal xor automaton \cite{VuilleminG210} to general canonical automata. The work in \cite{bonchi2012brzozowski, bonchi2014algebra} gives a coalgebraic generalisation of Brzozowski's algorithm based on dualities between categories, but does not cover the cases we are interested in. The framework in \cite{adamek2012coalgebraic} recovers the \'atomaton as the result of a minimisation procedure, but does not consider other canonical acceptors. \bibliographystyle{eptcs}
\section{Introduction}\label{section-Introduction} We consider the following nonlinear heat equation (NLH) \begin{equation} \label{NLH} \left\{ \begin{array}{l} u_t=\Delta u + |u|^{p-1}u,\\ u(.,0)=u_0\in L^\infty (\R^N,\R), \end{array} \right. \mbox{ } \end{equation} where $p>1$ and $u(x,t):\R^N\times [0,T)\to \R$. Equation \eqref{NLH} is considered as a model for many physical situations, such as heat transfer, combustion theory, thermal explosion, etc. (see more in Kapila \cite{KSJAM80}, Kassoy and Poland \cite{KPsiam80,KPsiam81}, Bebernes and Eberly \cite{BEbook89}). Firstly, note that equation \eqref{NLH} is well-posed in $L^\infty$. More precisely, for each $u_0 \in L^\infty(\R^N)$, one of the following statements holds: \begin{itemize} \item either the solution is global in time; \item or the maximum existence time is finite, i.e. $T_{max} <+\infty$, and \begin{equation}\label{norm-infty-u-infty} \lim_{t \to T_{max}} \left\| u(\cdot, t) \right\|_{L^\infty} = +\infty. \end{equation} \end{itemize} In particular, $ T_{max} >0$ is called the blowup time of the solution and a point $a \in \mathbb{R}^N$ is called a blowup point of the solution if there exists a sequence $(a_n,t_n) \to (a,T)$ as $ n \to +\infty$ such that $$ \left| u(a_n, t_n) \right| \to +\infty \text{ as } n \to + \infty. $$ \iffalse We can see that we have the trivial blowup solution to \eqref{NLH} given as follows $$ \psi_T(t) = \kappa (T-t)^{-\frac{1}{p-1}} \text{ where } \kappa = (p-1)^{-\frac{1}{p-1}}. $$ Accordingly to the classification investigated by Merle and Matano \cite{MMcpam04}, the blowup solutions of the equation \eqref{NLH} satisfy the following estimate \begin{equation}\label{defi-type-I} \left\| u(\cdot, t) \right\|_{L^\infty(\mathbb R^N)} \le C \psi_T(t), \forall t \in [0,T), \end{equation} called by \textit{Type I} blowup solutions, otherwise, they are of \textit{ Type II}. In the context of the paper, we aim to study the \textit{Type I}.
There is a huge of literature concerned by the study of blow-solution of \textit{Type I}, we cite for example ...\\ \fi \medskip Blowup for equation \eqref{NLH} has been studied intensively by many mathematicians and no list can be exhaustive. This is the case for the question of deriving blowup profiles, which is completely understood in one space dimension (see in particular Herrero and Vel\'azquez \cite{HVasps92,HVdie92,HVaihn93,HVcras94}), unlike the higher dimensional case, where much less is known (see for example Vel\'azquez \cite{Vcpde92,VELtams93,VELiumj93}, Zaag \cite{ZAAaihp02,ZAAcmp02, Zdmj06,Zmme02} together with the recent contributions by Merle and Zaag \cite{MZimrn21,MZ22}). \medskip In the one dimensional case, Herrero and Vel\'azquez proved the following, unless the solution is space independent (see also Filippas and Kohn \cite{FKcpam92}): \begin{itemize} \item[$(i)$] Either \begin{equation*} \sup_{|x-a| \le K \sqrt{(T-t)|\ln(T-t)|}} \left| (T-t)^{\frac{1}{p-1}}u(x,t) - \varphi\left(\frac{x-a}{\sqrt{(T-t)|\ln(T-t)|}} \right) \right| \to 0 \text{ as } t \to T, \end{equation*} where $\varphi(z)=(p-1+b_g|z|^{2})^{-\frac{1}{p-1}}$ and the constant $b_g= \frac{(p-1)^2}{4p}$ is unique (note that Herrero and Vel\'azquez proved in \cite{HVcras94,HVasnsp92} that this behavior is generic). \item[$(ii)$] Or \begin{equation*} \sup_{|x-a| \le K (T-t)^{\frac{1}{2k}}} \left| (T-t)^{\frac{1}{p-1}}u(x,t) - \varphi_k\left(\frac{x-a}{(T-t)^{\frac{1}{2k}}} \right) \right| \to 0 \text{ as } t \to T, \end{equation*} where $\varphi_k(z)=(p-1+b|z|^{2k})^{-\frac{1}{p-1}}$, where $b$ is an \textit{arbitrary} positive number. \end{itemize} In particular, we are interested in constructing blowup solutions with a prescribed behavior, via a ``generic approximation'', called the blowup profile of the solution. \\ The existence of such solutions was observed by Vel\'{a}zquez, Galaktionov and Herrero \cite{VelGalHer91} who indicated formally how one might find these solutions.
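The profile in case (ii) is a stationary solution of the rescaled flow: writing $f_b(y)=(p-1+b\,y^{2k})^{-\frac{1}{p-1}}$ for the profile used in the construction below, a direct computation gives $-\frac{y}{2k}f_b' - \frac{f_b}{p-1} + f_b^p = 0$, the equation obtained in self-similar variables once the asymptotically small diffusion term is neglected. A quick numerical sanity check of this identity (an illustration of ours, with sample parameters):

```python
# Check that f_b(y) = (p-1 + b y^{2k})^{-1/(p-1)} solves
#   -y/(2k) f' - f/(p-1) + f^p = 0
# (the profile equation in self-similar variables, diffusion neglected).
def residual(y, p=3.0, k=2, b=0.7):   # sample parameters, ours
    A = p - 1 + b * y**(2 * k)
    f = A**(-1 / (p - 1))
    df = -(2 * k * b / (p - 1)) * y**(2 * k - 1) * A**(-1 / (p - 1) - 1)
    return -y / (2 * k) * df - f / (p - 1) + f**p

assert all(abs(residual(y)) < 1e-12 for y in (0.0, 0.5, 1.0, 3.0, 10.0))
```

Algebraically, all three terms share the factor $(p-1+by^{2k})^{-\frac{p}{p-1}}$ and their coefficients cancel exactly, for every $p>1$, $k\ge 1$ and $b>0$.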
Later, Bricmont and Kupiainen \cite{BKnon94} gave a rigorous construction of such profiles (see also Herrero and Vel\'azquez \cite{HVdie92} for the profile $\varphi_4$). In \cite{AVjfdram97}, Angenent and Vel\'{a}zquez give a construction of blowup solutions for the mean curvature flow inspired by the construction in (ii). Most of the constructions are made in one dimension $N=1$. In higher dimensions $N\geq 2$, Merle and Zaag recently gave the construction of a new type I profile for a superlinear power in the Sobolev subcritical range; for more details see \cite{MZ22}. \medskip In this paper we revisit the construction of (ii) given in Section 4 of \cite{BKnon94}. Our construction has the advantage that it uses the modulation parameter. We shall use a topological ``shooting'' argument to prove the existence of the solutions constructed in Theorem \ref{Theorem-principal}. The construction is essentially an adaptation of Wazewski's principle (see \cite{Conbook78}, chapter II and the references given there). The use of topological methods in the analysis of singularities for blow-up phenomena seems to have been introduced by Bressan in \cite{Breiumj90}. \medskip \noindent The following is the main result of this paper. \begin{theorem} \label{Theorem-principal} Let $p>1$ and $k \in \mathbb N$, $k \ge 2$. Then there exist $\delta_0>0$ and $ \tilde T >0$ such that for all $\delta \in (0,\delta_0)$ and $T \in (0,\tilde T)$, we can construct an initial datum $u_0 \in L^\infty(\R)$ such that the corresponding solution to equation \eqref{NLH} blows up in finite time $T$, and only at the origin. Moreover, there exists a flow $b(t) \in C^1(0,T)$ such that the following description is valid:\\ (i) For all $t\in [0,T)$, it holds that \begin{equation}\label{theorem-intermediate} \left \| (T-t)^{\frac{1}{p-1}} u(\cdot, t) - f_{b(t)}\left(\frac{\cdot}{(T-t)^{\frac{1}{2k}}} \right)\right \|_{L^\infty(\mathbb{R})}\lesssim (T-t)^{\frac{\delta}{2}(1-\frac{1}{k})}.
\end{equation} (ii) There exists $b^*>0$ such that $b(t)\to b^*$ as $t\to T$ and \begin{equation}\label{estimate-b-t-b-*} \left| b(t) - b^* \right| \lesssim (T-t)^{\delta (1-\frac 1k)}, \forall t \in (0,T), \end{equation} where $f_{b(t)}$ is defined by \begin{equation} f_{b(t)}(y)= \left(p-1+ b(t) y^{2k}\right)^{-\frac{1}{p-1}}. \end{equation} \end{theorem} \begin{rem} One of the most important steps of the proof is to project the linearized partial differential equation \eqref{equation-q} on the $H_m$, given by \eqref{eigenfunction-Ls}. We note that this is technically different from the work of Bricmont and Kupiainen \cite{BKnon94}, where the authors project the integral equation. Consequently, we will have additional difficulty coming from the projection of the different terms, see for example Lemma \ref{Lemma-Pn_partialq} and Lemma \ref{Lemma-P-n-mathcal-L-s}. \end{rem} \begin{rem} Using $ \frac{b_0}{2} \le b(t) \le 2 b_0 $ and \eqref{estimate-b-t-b-*}, it holds that \begin{equation*} \left\| \left( p-1 + b(t) y^{2k} \right)^{-\frac{1}{p-1}} - \left( p-1 + b^* y^{2k} \right)^{-\frac{1}{p-1}} \right\|_{L^\infty(\mathbb{R})} \lesssim \left| b(t) - b^* \right| \lesssim (T-t)^{\delta\left( 1 -\frac{1}{k}\right)}, \end{equation*} which yields \begin{equation} \left \| (T-t)^{\frac{1}{p-1}} u(\cdot, t) - f_{b^*}\left(\frac{\cdot}{(T-t)^{\frac{1}{2k}}} \right)\right \|_{L^\infty(\mathbb{R})}\lesssim (T-t)^{\frac{\delta}{2}(1-\frac{1}{k})}, \forall t \in (0,T). \end{equation} \end{rem} The paper is organised as follows. In Sections \ref{section-Formulation} and \ref{Section-Spectral-properties-Ls}, we give the formulation of the problem. In Section \ref{section-Proof-assuming-estimates} we give the proof of the existence of the profile assuming some technical results. In particular, we construct a shrinking set and give an example of initial data giving rise to the blow-up profile, and at the end of the section we give the proof of Theorem \ref{Theorem-principal}.
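The Lipschitz estimate in the last remark uses that $\partial_b f_b(y) = -\frac{1}{p-1}\,y^{2k}\left(p-1+b y^{2k}\right)^{-\frac{p}{p-1}}$ is bounded uniformly in $y$ as long as $b$ stays in $[b_0/2, 2b_0]$, since $y^{2k}(p-1+by^{2k})^{-\frac{p}{p-1}} \sim C y^{-\frac{2k}{p-1}} \to 0$ as $|y| \to \infty$. A quick numerical illustration (all parameters here are ours):

```python
# Numerical illustration (ours): the b-derivative of
# f_b(y) = (p-1 + b y^{2k})^{-1/(p-1)} is bounded uniformly in y for
# b in [b0/2, 2*b0], giving |f_{b1} - f_{b2}| <= C |b1 - b2|.
p, k, b0 = 3.0, 2, 0.5

def f(b, y):
    return (p - 1 + b * y**(2 * k))**(-1 / (p - 1))

ys = [i / 10 for i in range(0, 2001)]            # grid up to y = 200
b1, b2 = 0.4, 0.6                                 # both in [b0/2, 2*b0]
# uniform bound on |d/db f_b|, monotone in b, so evaluated at b = b0/2
C = max(y**(2 * k) * (p - 1 + (b0 / 2) * y**(2 * k))**(-p / (p - 1)) / (p - 1)
        for y in ys)
gap = max(abs(f(b1, y) - f(b2, y)) for y in ys)
assert gap <= C * abs(b1 - b2) + 1e-12
```

The assertion is the mean value inequality on the grid; the bound degenerates as $b_0 \to 0$, which is why $b(t)$ must stay in a fixed interval $[b_0/2, 2b_0]$.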
The topological argument of Section \ref{section-Proof-assuming-estimates} uses a number of estimates given by Proposition \ref{proposition-ode}; we give the proof of this proposition in Section \ref{Section-proof-proposition-ode}. \textbf{Acknowledgement:} The author Giao Ky Duong is supported by the scientific research project of An Giang University under the Grant 22.05.SP. \section{Formulation of the problem}\label{section-Formulation} Let $T>0$ and $k \in \N$ with $k \ge 2$, and let $u$ be a solution to \eqref{NLH} which blows up in finite time $T$. Then, we introduce the following \textit{blow-up variables}: \begin{equation}\label{change-variable} w(y,s)=(T-t)^{-\frac{1}{p-1}}u(x,t),\;y=\frac{x}{(T-t)^{\frac{1}{2k}}},\;\; s=-\ln (T-t). \end{equation} Since $u$ solves \eqref{NLH} for all $(x,t)\in\R^N\times[0,T)$, the function $w(y,s)$ satisfies the following equation \begin{equation}\label{equation-w} \frac{\partial w}{\partial s}=I^{-2}(s) \Delta w - \frac{1}{2k} y \cdot \nabla w -\frac{1}{p-1} w +|w|^{p-1}w, \end{equation} where $I(s)$ is defined by \begin{equation}\label{defi-I-s} I(s) = e^{\frac{s}{2}\left(1-\frac{1}{k} \right)}. \end{equation} Adopting the \textit{setting} investigated by \cite{BKnon94}, we consider a $C^1$ flow $b$ and introduce \begin{equation}\label{decompose-equa-w-=q} w (y,s)= f_{b(s)}(y) \left(1 + e_{b(s)}(y)q(y,s) \right) \end{equation} where $f_b$ and $e_b$ are respectively defined by \begin{equation}\label{defi-profile} f_b(y)= \left(p-1+ b y^{2k}\right)^{-\frac{1}{p-1}}, \end{equation} and \begin{eqnarray} e_b(y) = \left( p-1 + b |y|^{2k} \right)^{-1} \label{defi-e-b}. \end{eqnarray} The flow $b$ will arise as an unknown function that will be constructed together with the linearized solution $q$. Since $f_b e_b=f_b^{p}$, by \eqref{decompose-equa-w-=q} $q$ can be written as follows \begin{eqnarray}\label{decom-q-w-} q=wf_b^{-p}- (p-1+by^{2k}).
\end{eqnarray} \noindent In the following we consider $(q,b)(s)$ which satisfies the following equation \beqtn\label{equation-q} \pa_s q =\mathcal{L}_s q+ \mathcal{N} (q) +\mathcal{D}_s(\nabla q)+\mathcal{R}_s(q) +b'(s)\mathcal{M}(q), \eeqtn where \begin{eqnarray} \mathcal{L}_s q & = & I^{-2}(s) \Delta q-\frac{1}{2k}y \cdot \nabla q+q,\;\;\ I(s)=\dsp e^{\frac s2(1-\frac 1k)},\label{operator-Ls}\\ \mathcal{N}(q)&=&\left| 1+e_bq \right|^{p-1}(1+e_bq)-1-p e_b q \label{nonlinear-term}\\\mathcal{D}_s(\nabla q)&=&-\frac{4pkb}{p-1}I^{-2}(s) e_by^{2k-1}\nabla q, \label{equation-Ds} \\ \mathcal{R}_s(q)&=& I^{-2}(s)y^{2k-2} \left (\alpha_1+\alpha_2 y^{2k}e_b+(\alpha_3+\alpha_4 y^{2k}e_b)q \right), \label{equation-Rs} \\ \mathcal{M} (q) & = &\frac{p}{p-1}y^{2k} (1+ e_bq) \label{new-term} \end{eqnarray} and the constants $\alpha_i$ are given by \begin{equation}\label{defi-constant-in-R} \begin{matrix} \alpha_1 =-2k(2k-1)\frac{b}{p-1}; & \alpha_2=4pk^2\frac{b^2}{(p-1)^2}; & \alpha_3=-2pk(2k-1)\frac{b}{p-1};\alpha_4 =4p(2p-1)k^2\frac{b^2}{(p-1)^2} .\\ \end{matrix} \end{equation} \iffalse Our goal is to prove the following Proposition: \begin{prop} There exists $ s_1<\infty$ and $\varepsilon >0$, such that for $s_0 > s_1$ and $g $ in $C^0(\mathbb{R})$ such that the equation \eqref{equation-w} with initial data \eqref{initial-data} has a unique classical solution, which satisfies \begin{equation}\label{theorem-intermediate} \left \|w(.,s)- f_{b(s)}(.)\right \|_{\infty}\to 0\mbox{ as $s\to \infty$}, \end{equation} and $$ b(s) = b(T, b_0, p, k) + O(e^{-s(k-1)}), \text{ and } b(T, b_0, p, k) >0. $$ \begin{equation}\label{defi-profile} f_b(y)= \left(p-1+ b y^{2k}\right)^{-\frac{1}{p-1}},\;\; k>1, \end{equation} and $f_b$ satisfy \begin{equation} 0=-\frac k 2\nabla f_b-\frac{1}{p-1}f_b+|f_b|^{p-1}f_b. \end{equation} \end{prop} First we introduce the derivation of $w$ from $f_b$. 
It is convenient to write $w $ in the form \beqtn\label{definition-q} w(y,s)=f_{b}(y) \left (1+e_b(y)q(y,s)\right), \eeqtn where, \begin{equation}\label{defi-e-b} e_b(y)=\left (p-1+by^{2k}\right)^{-1} . \end{equation} \fi \iffalse \begin{rem} From \eqref{definition-q}, we can write \[q=(w-f_b) (f_b e_b)^{-1}\] we note that $f_b e_b=f_b^{p}$, then we obtain \[q=(w-f_b) \left (p-1+by^{2k}\right )^{\frac{p}{p-1}}\] \textcolor{blue}{\[q=wf_b^{-p}-\left (p-1+by^{2k}\right )\] } \en{rem} \medskip \fi \section{Decomposition of the solution}\label{Section-Spectral-properties-Ls} \subsection{Fundamental solution associated with $\mathcal{L}_s$} Let us define the Hilbert space $L^2_{\rho_s}(\R)$ by \beqtn\label{define-L-2-rho-s} L^2_{\rho_s}(\R)=\{f \in L^2(\R),\; \int_{\R}f^2\rho_s dy<\infty\}, \eeqtn where \begin{equation}\label{defi-rho-s} \displaystyle \rho_s=\frac{I(s)}{\sqrt{4\pi}} e^{-\frac{I^{2}(s)y^2}{4}}, \end{equation} and $I(s)$ is defined by \eqref{defi-I-s}.\\ In addition, we denote \beqtn\label{eigenfunction-Ls} H_m(y,s)=I^{-m}(s)h_m(I(s) y)=\sum_{\ell=0}^{[\frac{m}{2}]}\frac{m!}{\ell!(m-2\ell)!}(-I^{-2}(s))^\ell y^{m-2\ell} \eeqtn where $h_m(z)$ is the Hermite polynomial (physicists' version) \beqtn\label{definition-h-n-z} h_m(z)=\sum_{\ell=0}^{[\frac{m}{2}]}\frac{m!}{\ell!(m-2\ell)!}(-1)^\ell z^{m-2\ell}. \eeqtn In particular, it is well known that \[\int h_n(I(s)y)h_m(I(s)y)\rho_s(y) dy=2^nn!\delta_{nm},\] which yields \beqtn\label{scalar-product-hm} \dsp ({H}_n(.,s),{H}_m(.,s))_s=\int {H}_n(y){H}_m(y)\rho_s(y)dy=I^{-2n}2^n n!\delta_{nm}. \eeqtn \textbf{Jordan block decomposition of $\mathcal{L}_s$} \medskip By a simple computation (relying on fundamental identities of Hermite polynomials), we have \beqtn\label{Ls-Hm} \mathcal{L}_s H_m(y,s)= \left\{ \begin{array}{lll} & \left(1-\frac{m}{2k} \right)H_m(y,s)+m(m-1)(1-\frac{1}{k})I^{-2}(s)H_{m-2}(y,s)&\mbox{ if }m\geq 2\\ & \left(1-\frac{m}{2k} \right)H_m(y,s)&\mbox{ if }m\in\{0,1\} \end{array} \right. .
\eeqtn We define $\mathcal{K}_{s\sigma}$ as the fundamental solution to \beqtn \pa_s \mathcal{K}_{s\sigma}=\mathcal{L}_s \mathcal{K}_{s\sigma} \text{ for } s > \sigma \mbox{ and }\mathcal{K}_{\sigma\sigma}=Id. \eeqtn By using Mehler's formula, we can explicitly write its kernel as follows \beqtn\label{Kernel-Formula} \dsp \mathcal{K}_{s\sigma}(y,z)=e^{s-\sigma}\mathcal{F} \left ( e^{-\frac{s-\sigma}{2k}}y-z \right ) \eeqtn where \beqtn\label{Kernel-Formula-F} \dsp \mathcal{F}(\xi)=\frac{L(s,\sigma)}{\sqrt{4\pi}}e^{-\frac{L^2(s,\sigma)\xi^2}{4}}\mbox{ where } L(s, \sigma) =\frac{I(\sigma)}{\sqrt{1-e^{-(s-\sigma)}}}\mbox{ and }I(s)=\dsp e^{\frac s2(1-\frac 1k)}. \eeqtn In addition, we have the following identities \beqtn \mathcal{K}_{s\sigma}H_n(.,\sigma)=e^{(s-\sigma)(1-\frac{n}{2k})}H_n(.,s), n \ge 0. \label{kernel-Hn} \eeqtn \iffalse b) \textit{Multi-dimensional case:} Let $N \ge 2$ and the case is a natural extension of the setting in the one dimensional one. Indeed, we introduce $L^2_{\rho_s}(\mathbb{R}^N)$ as in \eqref{define-L-2-rho-s} with $$ \rho_s (y) = \frac{I^N(s)}{(4\pi)^\frac{N}{2}} e^{- \frac{I^2(s)|y|^2}{4}}, y \in \mathbb{R}^N.$$\\ In addition, let $\alpha$ be a multi-index in $\mathbb{N}^N$, $\alpha = (\alpha_1,...,\alpha_N)$ and $|\alpha|= \alpha_1+...+\alpha_N$. Similarly the one dimensional case, we have Jordan's block's decomposition \begin{equation} \mathcal{L}_s H_\alpha (y) = \left\{ \begin{array}{rcl} \left( 1 - \frac{|\alpha|}{2k} \right) H_\alpha(y) + \end{array} \right. \end{equation} Corresponding to eigenvalue $\lambda_m = 1 - \frac{m}{2k}$, eigenspace $\mathcal{E}_m$ is given by $$\mathcal{E}_m = \left\langle H_\alpha(y), |\alpha| =m \right\rangle, $$ where $H_\alpha$ defined by \begin{eqnarray*} H_\alpha(y,s) = \Pi_{i=1}^N H_{\alpha_i}(y_i,s) \text{ with } H_{\alpha_i} \text{ given in } \eqref{eigenfunction-Ls}.
\end{eqnarray*} In particular, semigroup $\mathcal{K}_{s,\sigma}$ has the same structure as the first case that its kernel explicitly given by $$ \mathcal{K}_{s, \sigma}(y,z) = \frac{e^{s- \sigma} L^N(s, \sigma) }{(4\pi)^\frac{N}{2}} e^{-\frac{L^2(s,\sigma)}{4} \left|e^{-\frac{s -\sigma}{2k} y - z} \right|}.$$ \fi \subsection{ Decomposition of $q$.} For the sake of controlling the unknown function $q \in L^2_{\rho_s}$, we will expand it with respect to the polynomials $H_m(y,s)$. We start by writing \begin{equation}\label{decomposition-q2} q(y,s) = \sum_{m=0}^{[M]}q_m(s) H_m(y,s)+ q_-(y,s) \equiv \dsp q_+(y,s)+q_-(y,s), \end{equation} where constant $[M]$ be the largest integer less than $M$ with \begin{equation}\label{defi-M} M=\frac{2kp}{p-1} . \end{equation} From \eqref{scalar-product-hm}, we have \beqtn\label{defi-q_m} \begin{array}{rcl} q_m(s) = P_m(q) = \dsp \frac{\left\langle q,H_m \right\rangle_{L^2_{\rho_s}}}{\langle H_m,H_m\rangle_{L^2_{\rho_s}}}, \end{array} \eeqtn as the projection of $q$ on $H_m$. In addition, $q_-(y,s)$ can be seen as the projection of $q$ onto $\{H_m, m \ge [M]+1\}$ and we also denote as follow \beqtn\label{projector-P-} q_-=P_-(q). \eeqtn \subsection{Equivalent norms } Let us consider $L^\infty_M$ defined by \begin{equation}\label{defi-L-M} L^{\infty}_M(\R)=\{g \text{ such that } (1+|y|^M)^{-1} g \in L^\infty (\R)\}, \end{equation} and $L^\infty_M$ is complete with the norm \begin{equation}\label{defi-norm-L-M} \|g\|_{L^\infty_M} = \|(1+|y|^M)^{-1} g \|_{L^\infty}, \end{equation} \iffalse Considering $C^0(\R)$ which we introduce the norms for: Let us consider $q\in C^0(\R)$ with the decomposition in \eqref{decomposition-q2}, \fi we introduce \beqtn\label{norm-q} \|q\|_s=\sum_{m=0}^{[M]}|q_m|+|q_-|_s, \eeqtn where \beqtn\label{norm-q-||-s} |q_-|_s=\displaystyle \sup_{y}\frac{|q_-(y,s)|}{I(s)^{-M}+|y|^M}. 
\eeqtn It is straightforward to check that \[C_1(s)\|q\|_{L^\infty_M} \leq \|q\|_s\leq C_2(s)\|q\|_{L^\infty_M}\mbox{ for some constants }C_1(s), C_2(s) >0.\] In particular, $L^\infty_M(\mathbb{R})$ is also complete with the norm $\|.\|_s$. \section{The existence assuming some technical results}\label{section-Proof-assuming-estimates} As mentioned before, we only give the proof in the one dimensional case. This section is devoted to the proof of Theorem \ref{Theorem-principal}. We proceed in five steps, each of them making up a separate subsection. \begin{itemize} \item In the first subsection, we define a shrinking set $V_{\delta,b_0}(s)$ and translate our goal of making $q(s)$ go to $0$ in $L^\infty_M(\mathbb{R})$ in terms of belonging to $V_{\delta,b_0}(s)$. \item In the second subsection, we exhibit a $2k$-parameter family of initial data for equation \eqref{equation-q} whose coordinates are very small (with respect to the requirements of $V_{\delta,b_0}(s)$), except possibly for the $2k$ first components $q_0,..,q_{2k-1}$. \item In the third subsection, we solve the local in time Cauchy problem for equation \eqref{equation-q} coupled with some orthogonality condition. \item In the fourth subsection, using the spectral properties of equation \eqref{equation-q}, we reduce our goal from the control of $q(s)$ (an infinite dimensional variable) in $V_{\delta,b_0}(s)$ to the control of its $2k$ first components $(q_0,..,q_{2k-1})$ (a $2k$-dimensional variable) in $\left[ -I(s)^{-\delta}, I(s)^{-\delta} \right]^{2k}$.
\item In the last subsection, we solve the finite dimensional problem using a topological shooting lemma and conclude the proof of Theorem \ref{Theorem-principal}. \end{itemize} \subsection{Definition of the shrinking set $V_{\delta,b_0}(s)$} In this part, we introduce the shrinking set that controls the asymptotic behavior of our solution. \begin{definition}\label{definition-shrinking-set} Let us consider an integer $k > 1$, the reals $ \delta >0 $, $ b_0 >0$ and $M$ given by \eqref{defi-M}. We define $V_{\delta,b_0}(s)$ to be the set of all $(q,b) \in L^\infty_M \times \mathbb{R}$ satisfying \begin{equation}\label{bound-for-q-m} \left| q_{m} \right| \le I^{-\delta }(s) ,\quad \forall\; 0\leq m \leq [M],\;\; m\not = 2k, \end{equation} \begin{eqnarray} \left| q_{2k} \right| \le I^{-2\delta } (s) , \end{eqnarray} \begin{equation}\label{bound-for-q--} \left| q_- \right|_s \le I^{-\delta}(s), \end{equation} and \begin{equation}\label{bound-b} \frac{b_0}{2}\leq b \leq 2 b_0, \end{equation} where $q_m$ and $q_-$ are defined in \eqref{decomposition-q2}, $I(s)$ is defined in \eqref{defi-I-s} and the norm $|\cdot |_s$ is defined in \eqref{norm-q-||-s}. \end{definition} \subsection{Preparation of initial data} In this part, we aim to give a suitable family of initial data for our problem.
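Before constructing the family, the spectral quantities of the previous section can be checked numerically. The following sketch (illustrative only, not part of the proofs; it assumes Python with SciPy available, and the parameter values $k=2$, $s_0=4$, $\delta=1$ and the coefficients $d_i$ are arbitrary choices) verifies the scalar products \eqref{scalar-product-hm} by quadrature, and computes the coefficients of a polynomial datum $\psi=\sum_i d_i I^{-\delta}(s_0)y^i$ in the basis $H_m$:

```python
# Numerical sanity check (illustrative only) of \eqref{scalar-product-hm}
# and of the decomposition of a polynomial datum on the basis H_m.
import math
from scipy.integrate import quad

k, s0, delta = 2, 4.0, 1.0                    # sample values (so 2k = 4)

def I(s):                                      # I(s) = e^{(s/2)(1 - 1/k)}
    return math.exp(0.5 * s * (1.0 - 1.0 / k))

def H(m, y, s):                                # H_m(y,s) = I^{-m}(s) h_m(I(s) y)
    return sum(math.factorial(m) // (math.factorial(l) * math.factorial(m - 2 * l))
               * (-I(s) ** -2) ** l * y ** (m - 2 * l)
               for l in range(m // 2 + 1))

def rho(y, s):                                 # Gaussian weight rho_s
    return I(s) / math.sqrt(4 * math.pi) * math.exp(-(I(s) * y) ** 2 / 4)

def inner(f, g, s):                            # <f, g>_{L^2_{rho_s}}
    return quad(lambda y: f(y) * g(y) * rho(y, s), -math.inf, math.inf)[0]

def proj(q, n, s):                             # q_n = <q, H_n> / <H_n, H_n>
    Hn = lambda y: H(n, y, s)
    return inner(q, Hn, s) / inner(Hn, Hn, s)

# <H_n, H_n>_s should equal I^{-2n}(s) 2^n n!
gram_err = max(abs(inner(lambda y, n=n: H(n, y, s0), lambda y, n=n: H(n, y, s0), s0)
                   - I(s0) ** (-2 * n) * 2 ** n * math.factorial(n))
               for n in range(5))

# Coefficients of psi = sum_i d_i I^{-delta}(s0) y^i in the H_m basis:
# psi_n ~ d_n I^{-delta}(s0) for n < 2k, and psi_n = 0 for n >= 2k.
d = [0.5, -0.3, 0.2, 0.1]                      # 2k = 4 free parameters
psi = lambda y: sum(di * I(s0) ** -delta * y ** i for i, di in enumerate(d))
coeffs = [proj(psi, n, s0) for n in range(6)]
```

The computed `coeffs` reproduce, in this toy setting, the statement of the lemma below: the first $2k$ coefficients match $d_n I^{-\delta}(s_0)$ up to $O(I^{-\delta-2}(s_0))$, and the remaining ones vanish.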
Let us consider $(d_0, d_1,...,d_{2k-1}) \in \R^{2k}$, $\delta>0 $ and $ b_0 >0$; we then define \begin{equation}\label{initial-data-new} \psi(d_0,...,d_{2k-1},y,s_0)=\sum_{i=0}^{2k-1} d_i I^{-\delta }(s_0) y^i. \end{equation} Then, we have the following result. \begin{lemma}[Decomposition of initial data in different components]\label{lemma-initial-data} Let us consider $(d_i)_{0\le i \le 2k-1} \in \R^{2k}$ satisfying $ \max_{0 \le i \le 2k-1 } \left| d_i \right| \le 1 $ and $b_0 >0$ given arbitrarily. Then, there exists $ \delta_1(b_0)$ such that for all $\delta \le \delta_1$, there exists $s_1(\delta_1, b_0) \ge 1$ such that for all $s_0 \ge s_1$, the following properties are valid with $\psi(d_0,...,d_{2k-1})$ defined in \eqref{initial-data-new}: \begin{itemize} \item[(i)] There exists a quadrilateral $ \mathbb{D}_{s_0} \subset \left[-2,2\right]^{2k} $ such that the mapping \begin{equation}\label{defi-mapping-Gamma-initial-data} \begin{gathered} \Gamma: \mathbb{D}_{s_0} \to \mathbb{R}^{2k} \hfill \\ \hspace{-1.5cm} (d_0,...,d_{2k-1}) \mapsto (\psi_0,...,\psi_{2k-1}) \hfill \\ \end{gathered} \end{equation} is linear and one-to-one from $ \mathbb{D}_{s_0}$ to $\hat{\mathcal{V}}(s_0)$, with \begin{equation}\label{define-hat-V-A-s} \hat{\mathcal{V}}(s) = \left[ -I(s)^{-\delta}, I(s)^{-\delta} \right]^{2k}, \end{equation} where $(\psi_0,...,\psi_{2k-1})$ are the coefficients of the initial data $\psi(d_0,...,d_{2k-1})$ given by the decomposition \eqref{decomposition-q2}. In addition, we have \begin{equation}\label{des-Gamma-boundary-ne-0} \left. \Gamma \right|_{\partial \mathbb{D}_{s_0}} \subset \partial \hat{\mathcal{V}}(s_0) \text{ and } \text{deg}\left(\left. \Gamma \right|_{\partial \mathbb{D}_{s_0}} \right) \ne 0.
\end{equation} \item[(ii)] For all $(d_0,...,d_{2k-1}) \in \mathbb{D}_{s_0}$, the following estimates are valid: \begin{eqnarray} \left| \psi_0 \right| \le I^{-\delta}(s_0),\ldots, \left| \psi_{2k-1} \right| \le I^{-\delta}(s_0), \quad \psi_{2k} = \cdots = \psi_{[M]} =0 \text{ and } \psi_-\equiv 0. \end{eqnarray} \end{itemize} \end{lemma} \begin{proof} The proof of the Lemma is quite similar to \cite[Proposition 4.5]{TZpre15}. - \textit{The proof of item (i):} From \eqref{initial-data-new}, the definition of $H_n $ in \eqref{eigenfunction-Ls} and \eqref{defi-q_m}, we get \begin{eqnarray} \left| \psi_n (s_0) - d_n I^{-\delta}(s_0) \right| \le C(d_0,...,d_{2k-1}) I^{-\delta - 2}(s_0), \end{eqnarray} which concludes item (i). - \textit{ The proof of item (ii):} From \eqref{initial-data-new}, $\psi(d_0,...,d_{2k-1},s_0)$ is a polynomial of degree $2k-1$, so it follows that $$ \psi_{n} =0, \forall n \in \{2k,..,[M]\} \text{ and } \psi_{-} \equiv 0.$$ In addition, since $(d_0,...,d_{2k-1}) \in \mathbb{D}_{s_0}$, we use item (i) to deduce that $(\psi_0,...,\psi_{2k-1}) \in \hat{\mathcal{V}}(s_0)$ and $$ \left| \psi_{n} \right| \le I^{-\delta}(s_0), \forall n \in \{ 0,...,2k-1\}, $$ which concludes item (ii) and the proof of Lemma \ref{lemma-initial-data}. \end{proof} \begin{rem} Note that $s_0= -\ln T$ is the \textit{master constant}: in almost every argument in this paper, it is assumed to be sufficiently large depending on the choice of all the other constants ($\delta$ and $b_0$). In addition, we denote by $C$ a universal constant that is independent of $b_0$ and $s_0$. \end{rem} \subsection{Local in time solution for the problem \eqref{equation-q} $ \&$ \eqref{Modulation-condition}} As set up at the beginning, besides the main unknown $q$, the modulation parameter $b$ plays an important role in our analysis, since it allows us to cancel the perturbation along the neutral mode corresponding to the eigenvalue $\lambda_{2k} = 0$ of the linear operator $\mathcal{L}_s$.
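To make the role of the modulation explicit, here is the purely heuristic computation behind the previous sentence (it is not used in the proofs): projecting the linearized flow on $H_{2k}$ and using \eqref{Ls-Hm} with $m=2k$ gives, at leading order,

```latex
% Heuristic: the mode q_{2k} sees the eigenvalue \lambda_{2k} = 1 - 2k/(2k) = 0,
% so at leading order
q_{2k}'(s) \approx \left( 1-\frac{2k}{2k} \right) q_{2k}(s) = 0 .
```

Hence the linearized flow produces neither decay nor growth along $H_{2k}$, and this mode cannot be controlled by the spectral gap; this is why we impose $q_{2k}\equiv 0$ through the choice of $b$ instead.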
In particular, the modulation flow is one of the main contributions of our paper. The flow $b$ is uniquely determined via the following orthogonality condition \beqtn\label{Modulation-condition} \langle q, H_{2k} \rangle_{L^2_{\rho_s}} = 0. \eeqtn This cancellation ensures that $q_{2k} =0$, where $q_{2k}$ is the projection of the solution on $ H_{2k}$, corresponding to the eigenvalue $ \lambda_{2k} =0 $: the neutral mode would otherwise obstruct the control of our solution. Consequently, our problem given by \eqref{equation-q} is coupled with the condition \eqref{Modulation-condition}. In the following, we aim to establish local existence and uniqueness. \begin{prop}[Local existence of the coupled problem \eqref{equation-q} $\&$ \eqref{Modulation-condition}] Let $(d_{i})_{0\leq i\leq 2k-1} \in \mathbb{R}^{2k} $ satisfy $\max_{0\le i \le 2k-1} |d_i| \le 2$ and let $\delta >0, b_0 >0$. Then there exists $s_2 ( \delta, b_0) \ge 1$ such that for all $ s_0 \ge s_2$, the following property holds: If we choose initial data $\psi$ as in \eqref{initial-data-new}, then there exists $s^* > s_0$ such that the coupled problem \eqref{equation-q} $ \&$ \eqref{Modulation-condition} has a unique solution on $[s_0, s^* ]$. Assume furthermore that the solution satisfies $(q,b)(s) \in V_{ \delta, b_0}(s)$ for all $s \in [s_0,s^*]$; then the solution can be extended beyond the time $s^*$, i.e. the existence and uniqueness of $(q,b)$ are valid on $[s_0,s^*+\varepsilon]$ for some small $\varepsilon >0$. \end{prop} \begin{proof} Let us consider the initial data $w_0$ defined as in \eqref{decompose-equa-w-=q} with $q(s_0) = \psi(d_0,d_1,...,d_{2k-1}) $ given as in \eqref{initial-data-new}. Since equation \eqref{NLH} is locally well-posed in $L^\infty$, the solution $w$ to equation \eqref{equation-w} exists on $\left[s_0, \tilde s \right]$ for some $\tilde s > s_0 $.
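Before introducing the functional used in the implicit function argument, we record that the decomposition \eqref{definition-q} can be inverted algebraically: since $f_b e_b = f_b^p$, we have $f_b^{1-p} = p-1+by^{2k} = e_b^{-1}$ by \eqref{defi-e-b}, hence

```latex
w\, f_b^{-p}
   = f_b^{\,1-p}\left( 1 + e_b\, q \right)
   = \left( p-1+b y^{2k} \right) + q,
\qquad \text{i.e.} \qquad
q = w\, f_b^{-p} - \left( p-1+b y^{2k} \right).
```

In particular, the orthogonality condition \eqref{Modulation-condition} is exactly the vanishing of $\left\langle w f_b^{-p} - \left( p-1 +b y^{2k} \right), H_{2k} \right\rangle_{L^2_{\rho_s}}$.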
Next, we need to prove that $w$ is uniquely decomposed as in \eqref{decompose-equa-w-=q} and that $(q,b)(s) $ solves \eqref{equation-q} and \eqref{Modulation-condition}. The result follows from the implicit function theorem. Let us define the functional $\mathscr{F}$ by \begin{equation}\label{defimathscr-F-functional} \mathscr{F}(s,b) = \left\langle w f_b^{-p} - \left( p-1 +b y^{2k} \right), H_{2k} \right\rangle_{L^2_{\rho_s}}. \end{equation} For $b_0 >0$ and at $s=s_0$, from the definition of $\psi(d_0,...,d_{2k-1})$ in \eqref{initial-data-new}, it directly follows that \begin{equation}\label{equality-F=0} \mathscr{F}(s_0, b_0) =0. \end{equation} \noindent \medskip Next, we aim to verify that \begin{eqnarray}\label{partial-F-s-0ne-0} \frac{ \partial \mathscr{F}}{\partial b } (s_0, b_0) \ne 0. \end{eqnarray} From \eqref{defimathscr-F-functional}, we obtain \begin{eqnarray} \frac{\partial \mathscr{F}}{\partial b}(s,b) = \left\langle w \frac{p y^{2k}}{p-1} f_b^{-1} - y^{2k}, H_{2k} \right\rangle_{L^2_{\rho_s}}.\label{formula-partial-F} \end{eqnarray} \noindent According to \eqref{decompose-equa-w-=q}, we express $w(s_0)$ as follows $$ w(y,s_0) = f_{b_0} \left( 1 + I^{-\delta}(s_0)f_{b_0}^{p-1}(y,s_0)\sum_{i=0}^{2k-1} d_i y^i \right).
$$ Then, we have \begin{eqnarray} & & \frac{\partial \mathscr{F}}{\partial b}(s_0,b_0) = I^{-\delta}(s_0)\frac{p}{p-1}\left\langle f_{b_0}^{p-1}(y,s_0) \sum_{i=0}^{2k-1} d_i y^{i+2k}, H_{2k} \right\rangle_{L^2_{\rho_{s_0}}} \label{scalar-product-modulation-1}\\ &+&\frac{1}{p-1} \left\langle y^{2k}, H_{2k} \right\rangle_{L^2_{\rho_{s_0}}} :=A+B.\nonumber \end{eqnarray} Using \eqref{eigenfunction-Ls} and \eqref{scalar-product-hm}, we immediately have $$ B=\frac{2^{4k}(2k)!}{p-1}I^{-4k}(s_0).$$ \\ In addition, we use \eqref{defi-e-b} to get the expansion \begin{equation}\label{eb0-modulation} e_{b_0}(y) = (p-1)^{-1} \left( \sum_{l=0}^L \left(- \frac{b_0 y^{2k}}{p-1} \right)^l + O\left( \left|\frac{b_0 y^{2k}}{p-1}\right|^{L+1} \right) \right) \text{ as } y \to 0, \end{equation} for an arbitrary integer $L \ge 2$. \noindent Now, we decompose the part $A$ in \eqref{scalar-product-modulation-1} as \[\begin{array}{l} A=I^{-\delta}(s_0)\frac{p}{p-1}\left\langle f_{b_0}^{p-1}(y,s_0) \sum_{i=0}^{2k-1} d_i y^{i+2k}, H_{2k} \right\rangle_{L^2_{\rho_{s_0}}}\\ =\dsp \frac{p}{p-1}I^{-\delta}(s_0)\sum_{i=0}^{2k-1} d_i \left (\int_{|y|\leq 1}e_{b_0}(y) y^{i+2k} H_{2k} \rho_{s_0}(y)dy+ \int_{|y|\geq 1}e_{b_0}(y) y^{i+2k} H_{2k} \rho_{s_0}(y)dy\right ) \\ =A_1+A_2.\\ \end{array} \] Since $e_{b_0}y^{2k}$ is bounded, we apply Lemma \ref{small-integral-y-ge-I-delta} to get $$ |A_2| \lesssim I^{-4k-\delta}(s_0),$$ provided that $s_0 \ge s_{2,2}(\delta)$. Besides that, we use \eqref{eb0-modulation} with an arbitrary $L\ge 2$ and write $A_1$ as follows \begin{eqnarray*} A_1 = \frac{p}{(p-1)^2} I^{-\delta}(s_0) \sum_{i=0}^{2k-1} d_i \int_{|y| \le 1} \left[ \sum_{l=0}^L \left( -\frac{b_0 y^{2k}}{p-1} \right)^l +O\left( \left|\frac{b_0 y^{2k}}{p-1} \right|^{L+1}\right) \right] y^{i +2k} H_{2k}(y,s_0) \rho_{s_0} dy. \end{eqnarray*} Using Lemmas \ref{lemma-scalar-product-H-m} and \ref{small-integral-y-ge-I-delta}, we get \begin{equation*} |A_1|\lesssim I^{-4k - \delta}(s_0).
\end{equation*} Adding all the related terms, we obtain \begin{equation*} \frac{ \partial \mathscr{F}}{ \partial b} (s_0, b_0) = I^{-4k}(s_0)2^{4k} (2k)! \left(1+ O(I^{-\delta}(s_0))\right) \ne 0, \end{equation*} provided that $s_0 \ge s_{2,3}(\delta, b_0)$. Thus, \eqref{partial-F-s-0ne-0} follows. By \eqref{equality-F=0} and \eqref{partial-F-s-0ne-0}, the implicit function theorem yields the existence of $s^* > s_0$ and of a unique $b \in C^1([s_0,s^*])$ such that $q$, defined as in \eqref{decom-q-w-}, satisfies \eqref{equation-q} and the orthogonality condition \eqref{Modulation-condition}. Moreover, if we assume furthermore that $(q,b)$ is trapped in the set $V_{\delta,b_0}(s)$ for all $s \in [s_0,s^*]$, then we can repeat the above computation, using the bounds given in Definition \ref{definition-shrinking-set}, and we obtain \begin{eqnarray*} \frac{ \partial \mathscr{F}}{ \partial b} \left. \right|_{(s,b) = (s^*, b(s^*))} = I^{-4k}(s^*) 2^{4k} (2k)! \left( 1 + O(I^{-\delta}(s^*)) \right) \ne 0. \end{eqnarray*} Then, we can apply the implicit function theorem again to get the existence and uniqueness of $(q,b)$ on the interval $[s^*,s^* +\varepsilon]$ for some small $\varepsilon >0$, and the conclusion of the proposition follows. \end{proof} \subsection{Reduction to a finite dimensional problem} Having defined the shrinking set $V_{\delta, b_0 }$ in Definition \ref{definition-shrinking-set}, it is sufficient to prove that there exists a global solution $(q,b)$ on $[s_0, +\infty)$, for some $s_0 $ sufficiently large, such that $$ (q,b)(s) \in V_{\delta, b_0}(s), \forall s \ge s_0.$$ In particular, we show in this part that the control of the infinite dimensional problem can be reduced to a finite dimensional one. To get this key result, we first prove the following a priori estimates. \begin{prop}[A priori estimates] \label{proposition-ode} Let $b_0 > 0$ and $k \in \mathbb{N}, k \ge 2$; then there exists $\delta_{3}(k, b_0) > 0$ such that for all $\delta \in (0,\delta_3)$, there exists $s_3(\delta, b_0)$ such that for all $s_0 \ge s_3$, the following property holds: Assume $(q,b)$ is a solution to problem \eqref{equation-q} $\&$ \eqref{Modulation-condition} such that $(q,b)(s) \in V_{\delta, b_0}(s) $ and $q_{2k}(s)=0$ for all $s\in[\tau, \bar s]$, where $s_0 \le \tau \le \bar s$. Then, for all $s\in [\tau,\bar s]$, the following properties hold: \begin{itemize} \item[(i)] (ODEs on the finite modes). For all $j \in \{ 0,...,[M] \}$, we have $$\left |q_j'(s)-\left( 1-\frac{j}{2k}\right)q_j(s) \right |\leq CI^{-2\delta}(s). $$ \item[$(ii)$] (Smallness of the modulation parameter $b(s)$). It satisfies \begin{equation*}\label{estimat-derivative-b} \left| b'(s) \right| \leq C I^{-\delta}(s)\mbox{ and }\frac 34b_0\leq b(s)\leq \frac 54b_0. \end{equation*} \item[$(iii)$] (Control of the infinite-dimensional part $q_-$). We have the following a priori estimate \[ \begin{array}{lll} \left| q_-(s)\right|_s &\le & e^{-\frac{s-\tau}{p-1}} \left| q_-(\tau)\right|_\tau + C \left( I^{-\frac{3}{2} \delta}(s) + e^{-\frac{s-\tau}{p-1}} I^{-\frac{3}{2}\delta}(\tau)\right).
\end{array} \] \end{itemize} \end{prop} \begin{proof}[Proof of Proposition \ref{proposition-ode}] This result plays an important role in our proof. Its proof is based on a long and technical computation; to ease the reading of the paper, we postpone it to Section \ref{Section-proof-proposition-ode}. \end{proof} Consequently, we have the following result. \begin{prop}[Reduction to a finite dimensional problem]\label{propositionn-transversality} Let $b_0 >0$ and $ k \in \mathbb{N}, k \ge 2 $; then there exists $\delta_4(b_0)$ such that for all $ \delta \in (0,\delta_4)$, there exists $s_4(b_0, \delta)$ such that for all $s_0 \ge s_4$, the following property holds: Assume that $(q,b)$ is a solution to \eqref{equation-q} $\&$ \eqref{Modulation-condition} with initial data $(q,b)(s_0) = (\psi(d_0,...,d_{2k-1},\cdot,s_0), b_0)$, where $\psi$ is defined as in \eqref{initial-data-new} with $ \max_{0 \le i \le 2k-1} |d_i| \le 2 $, and that $(q,b)(s)\in V_{\delta,b_0}(s)$ for all $s \in [s_0, \bar s]$, for some $\bar s > s_0$ such that $(q,b)( \bar s) \in \partial V_{\delta,b_0}( \bar s)$. Then the following properties are valid: \begin{itemize} \item[(i)] \textbf{(Reduction to finite modes)}: Let $q_0,...,q_{2k-1}$ be the projections defined as in \eqref{defi-q_m}. Then, we have $$\left (q_0,..,q_{2k-1}\right )(\bar s) \in \partial \hat{\mathcal{V}}(\bar s),$$ where $\hat{\mathcal{V}}(s)$ is defined in \eqref{define-hat-V-A-s}.\\ \item[(ii)] \textbf{(Transverse crossing)} There exist $m\in\{0,..,2k-1\}$ and $\omega \in \{-1,1\}$ such that \[\omega q_m(\bar s)=I(\bar s)^{-\delta}\mbox{ and }\omega \frac{d q_m}{ds}(\bar s)>0.\] \end{itemize} \end{prop} \begin{rem} In (ii) of Proposition \ref{propositionn-transversality}, we show that the solution $q(s)$ crosses the boundary $\partial V_{\delta, b_0}(s)$ at $\bar s$ with positive speed; in other words, all points on $\pa V_{\delta, b_0}(\bar s)$ are \textit{strict exit points} in the sense of \cite[Chapter 2]{Conbook78}.
\end{rem} \begin{proof} Let us start the proof of Proposition \ref{propositionn-transversality}, assuming Proposition \ref{proposition-ode}. Let us consider $\delta \le \delta_3$ and $s_0 \ge s_3$ such that Proposition \ref{proposition-ode} holds.\\ \noindent - \textit{Proof of item (i)}: To get the conclusion of this item, we aim to show that for all $s \in [s_0, \bar s]$, \begin{equation}\label{improve-q-j-ge-2k+1} \left| q_j(s) \right| \le \frac{1}{2} I^{-\delta}(s), \forall j \in \{ 2k+1,...,[M] \} \quad (\text{note that } q_{2k} \equiv 0), \end{equation} and \begin{equation}\label{improve-q_-} \left| q_-(s) \right|_s \le \frac{1}{2} I^{-\delta}(s). \end{equation} + For \eqref{improve-q-j-ge-2k+1}: From item (i) of Proposition \ref{proposition-ode}, we have $$ \left[ q_j(s) \pm \frac{1}{2} I^{-\delta}(s) \right]' = \left( 1 - \frac{j}{2k} \right) q_j(s) \pm \frac{\delta}{2} \left( \frac{1}{2k} - \frac{1}{2} \right) I^{-\delta}(s) + O(I^{-2 \delta}(s)). $$ Hence, with $ j > 2k$, $\delta \le \delta_{4,1}$ and initial data $q_j(s_0) =0$, so that $ q_j(s_0) \in \left( -\frac{1}{2}I^{-\delta}(s_0), \frac{1}{2} I^{-\delta}(s_0) \right) $, it follows that $$ q_j(s) \in \left( -\frac{1}{2}I^{-\delta}(s), \frac{1}{2} I^{-\delta}(s) \right), \forall s \in [s_0, \bar s], $$ which concludes \eqref{improve-q-j-ge-2k+1}. + For \eqref{improve-q_-}: We distinguish two cases: $ s - s_0 \le s_0 $ and $s - s_0 \ge s_0$. In the first case, we apply item (iii) of Proposition \ref{proposition-ode} with $\tau = s_0$ to get \begin{eqnarray*} \left| q_-(s) \right|_s \le C \left( I^{-\frac{3}{2}\delta}(s) + e^{-\frac{s-s_0}{p-1}} I^{-\frac{3}{2}\delta}(s_0) \right) \le \frac{1}{2} I^{-\delta}(s), \end{eqnarray*} provided that $\delta \le \delta_{4,2}$ and $s_0 \ge s_{4,2}(\delta) $.
In the second case, we use item (iii) again with $\tau = s - s_0 $, and we obtain \begin{eqnarray*} \left| q_-(s) \right|_s & \le & e^{-\frac{s_0}{p-1}} I^{-\delta}(\tau) + C\left( I^{-\frac{3}{2}\delta}(s) + e^{-\frac{s_0}{p-1}} I^{-\frac{3}{2}\delta}(\tau) \right)\\ & \le & C \left( e^{-\frac{s_0}{p-1}} I^{\delta}(s) I^{-\frac{3}{2}\delta}(\tau) + I^{-\frac{1}{2}\delta}(s) \right) I^{-\delta}(s) \le \frac{1}{2} I^{-\delta}(s). \end{eqnarray*} Thus, \eqref{improve-q_-} follows. Finally, using the definition of $V_{\delta, b_0}(s)$, the fact that $(q,b)(\bar s) \in \partial V_{\delta, b_0}(\bar s)$, estimates \eqref{improve-q-j-ge-2k+1} and \eqref{improve-q_-}, and item (ii) of Proposition \ref{proposition-ode}, we get the conclusion of item (i). \noindent - \textit{Proof of item (ii)}: From item (i), there exist $m\in\{0,..,2k-1\}$ and $\omega=\pm 1$ such that $q_m(\bar s)=\omega I(\bar s)^{-\delta}$. By item (i) of Proposition \ref{proposition-ode}, we see that for $\delta>0$ small enough, \[\omega q_m'(\bar s)\geq \left(1-\frac{m}{2k}\right)\omega q_m(\bar s)-CI^{-2\delta}(\bar s)= \left(1-\frac{m}{2k}\right)I^{-\delta}(\bar s)-CI^{-2\delta}(\bar s)>0,\] which concludes the proof of Proposition \ref{propositionn-transversality}. It remains to prove Proposition \ref{proposition-ode}; this will be done in Section \ref{Section-proof-proposition-ode}.
\end{proof} \subsection{Topological \textit{``shooting method''} for the finite dimensional problem and proof of Theorem \ref{Theorem-principal}} In this part, we give the complete proof of Theorem \ref{Theorem-principal} by using a topological \textit{shooting method}. \begin{proof}[The proof of Theorem \ref{Theorem-principal}] Let $\delta>0$ and $T > 0$ (with $T= e^{-s_0}$). We aim to find $(d_0,..,d_{2k-1}) \in \mathbb{D}_{s_0}$ such that problem \eqref{equation-q} $\&$ \eqref{Modulation-condition} with initial data $ \psi(d_0,...,d_{2k-1},s_0)$, defined as in \eqref{initial-data-new}, has a solution $(q(s),b(s))_{d_0,..,d_{2k-1}}$ defined for all $s \in [s_0,\infty)$ such that \beqtn \|q(s)\|_{L^\infty_M} \leq C I^{-\delta}(s)\mbox{ and } |b(s)-b^*|\leq C I^{-2\delta}(s), \label{goal-of-the proof} \eeqtn for some $b^*>0$. Let $b_0,\delta$ and $ s_0$ be such that Lemma \ref{lemma-initial-data}, Proposition \ref{propositionn-transversality} and Proposition \ref{proposition-ode} hold, and denote $T= e^{-s_0}$ (small since $s_0$ is large enough).
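To illustrate the mechanism of the argument below, here is a one-dimensional caricature (a hypothetical toy model, unrelated to the actual system): for $q'=q$, $q(s_0)=d$, only $d=0$ keeps $q(s)=de^{s-s_0}$ inside the shrinking interval $[-e^{-s},e^{-s}]$, the exit position depends continuously on $d$ with opposite signs at the two endpoints, and a bisection on the exit sign, which is the one-dimensional version of the degree argument, locates the trapped datum:

```python
# Toy shooting argument in dimension one (illustrative only).
import math

def exit_sign(d, s0=0.0, horizon=60.0):
    """Sign of q at the first time |q(s)| > e^{-s}; 0 if trapped up to horizon."""
    if d == 0.0:
        return 0
    # |d| e^{s - s0} > e^{-s}  <=>  s > (s0 - ln|d|)/2
    s_exit = (s0 - math.log(abs(d))) / 2.0
    return 0 if s_exit > horizon else (1 if d > 0 else -1)

a, b = -1.0, 0.7                      # exit signs -1 and +1 at the endpoints
for _ in range(60):                   # bisect on the exit sign
    m = 0.5 * (a + b)
    sgn = exit_sign(m)
    if sgn == 0:                      # trapped beyond the horizon: done
        a = b = m
        break
    if sgn > 0:
        b = m
    else:
        a = m
d_star = 0.5 * (a + b)                # converges to the trapped datum d = 0
```

In the actual proof the bisection is replaced by a degree argument in dimension $2k$, since the intermediate value theorem no longer applies.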
We proceed by contradiction: in view of item (ii) of Lemma \ref{lemma-initial-data}, we assume that for all $(d_0,...,d_{2k-1}) \in \mathbb{D}_{s_0}$ there exists $s_*=s_*(d_0,..,d_{2k-1}) < +\infty$ such that \begin{equation*} \begin{array}{ll} q_{d_0,..,d_{2k-1}}(s)\in V_{\delta,b_0}(s), & \forall s\in [s_0, s_*], \\ q_{d_0,..,d_{2k-1}}(s_*)\in \pa V_{\delta,b_0}(s_*).& \end{array} \end{equation*} By using item (i) of Proposition \ref{propositionn-transversality}, we get $(q_0,..,q_{2k-1})(s_*) \in \pa \hat{\mathcal{V}}(s_*)$, and we introduce the mapping $\Phi$ by \[\Phi: \begin{array}{ll} \mathbb{D}_{s_0}\to \pa [-1,1]^{2k}&\\ (d_0,..d_{2k-1})\mapsto I^{\delta}(s_*)(q_0,..,q_{2k-1})(s_*), \end{array} \] which is well defined and satisfies the following properties: \begin{itemize} \item[$(i)$] $\Phi$ is continuous from $\mathbb{D}_{s_0}$ to $\pa [-1,1]^{2k}$, thanks to the continuity in time of $q$ on the one hand, and the continuity of $s_*$ in $(d_0,...,d_{2k-1})$ on the other hand, the latter being a direct consequence of the transversality in item (ii) of Proposition \ref{propositionn-transversality}. \item[(ii)] $\Phi \left. \right|_{\partial \mathbb{D}_{s_0}}$ has nonzero degree. Indeed, for all $(d_0,...,d_{2k-1}) \in \partial \mathbb{D}_{s_0}$, we derive from item (i) of Lemma \ref{lemma-initial-data} that $s_*(d_0,...,d_{2k-1}) =s_0$ and $$ \text{ deg}\left( \Phi \left. \right|_{\partial \mathbb{D}_{s_0}} \right) \ne 0. $$ \end{itemize} By Wazewski's principle in degree theory, such a $\Phi$ cannot exist. Thus, there exists $(d_0,...,d_{2k-1}) \in \mathbb{D}_{s_0}$ such that the corresponding solution satisfies $(q,b)(s) \in V_{\delta, b_0}(s), \forall s \ge s_0$.
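The contradiction invoked above rests on a standard no-retraction fact from degree theory: if a continuous map $\Phi$ sends the whole of $\mathbb{D}_{s_0}$ into $\partial[-1,1]^{2k}$, then its boundary restriction extends continuously to $\mathbb{D}_{s_0}$ with values in $\partial[-1,1]^{2k}$, and therefore

```latex
\deg \left( \Phi \big|_{\partial \mathbb{D}_{s_0}} \right) = 0 ,
```

which contradicts property (ii) above.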
In particular, we derive from \eqref{decompose-equa-w-=q}, $M=\frac{2kp}{p-1}$, and the estimate \[|f_be_b|=f_b^{p}\leq C\left(1+|y|^{\frac{2kp}{p-1}}\right)^{-1}= C\left(1+|y|^{M}\right)^{-1} \] that \[\|w(y,s)-f_{b}\|_{L^\infty}=\|f_{b}e_bq\|_{L^\infty} \leq C\|q\|_{L^\infty_M}\leq C I^{-\delta}(s).\] So, we conclude item (i) of Theorem \ref{Theorem-principal}.\\ The proof of item (ii): From item (ii) of Proposition \ref{proposition-ode}, since $I^{-\delta}$ is integrable at infinity, it immediately follows that there exists $b^* \in \mathbb{R}^*_+$ such that $$ b(s) \to b^* \text{ as } s \to +\infty, $$ which is equivalent to $$ b(t) \to b^* \text{ as } t \to
T.$$ In particular, by integrating the first inequality in item (ii) of Proposition \ref{proposition-ode} between $s$ and $+\infty$ and using the fact that $b(s)\to b^*$ (see \eqref{convegence-b-s}), we obtain \[|b(s)-b^*|\leq Ce^{-\delta s(1-\frac{1}{k})} = CI^{-2\delta}(s).\] Note that $s = -\ln(T-t)$; then \eqref{goal-of-the proof} follows, which concludes item (ii) of Theorem \ref{Theorem-principal}. \\ \end{proof} \section{Proof of Proposition \ref{proposition-ode} }\label{Section-proof-proposition-ode} In this section, we prove Proposition \ref{proposition-ode}. We just have to project equation \eqref{equation-q} to get the equations satisfied by the different coordinates of the decomposition \eqref{decomposition-q2}. More precisely, the proof will be carried out in two subsections: \begin{itemize} \item In the first subsection, we write the equations satisfied by $q_j$, $0\le j\leq [M]$; then, we prove items (i) and (ii) of Proposition \ref{proposition-ode}. \item In the second subsection, we derive from equation \eqref{equation-q} an equation satisfied by $q_-$ and prove item (iii) of Proposition \ref{proposition-ode}. \end{itemize} \subsection{The proof of items (i) and (ii) of Proposition \ref{proposition-ode} }\label{subsection-proof-i-ii} \begin{itemize} \item In Part 1, we project equation \eqref{equation-q} to get the equations satisfied by $q_j$ for $0 \leq j\leq [M]$. \item In Part 2, we use the precise estimates from Part 1 to conclude items (i) and (ii) of Proposition \ref{proposition-ode}. \end{itemize} \medskip \textbf{Part 1: The projection of equation \eqref{equation-q} on the eigenfunctions of the operator $\mathcal{L}_s$.} Let $(q,b)$ be a solution to problem \eqref{equation-q} $\&$ \eqref{Modulation-condition} trapped in $V_{\delta, b_0}(s)$ for all $s \in [s_0, \bar s]$, for some $\bar s > s_0$.
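In the projections below, the logarithmic derivative of the weight appears repeatedly; from \eqref{defi-rho-s} and $I(s)=e^{\frac{s}{2}(1-\frac{1}{k})}$ (see \eqref{Kernel-Formula-F}), a direct computation gives

```latex
\frac{\partial_s \rho_s}{\rho_s}
  = \partial_s \left( \ln I(s) - \frac{I^2(s)\, y^2}{4} \right)
  = \frac{1}{2}\left( 1-\frac{1}{k} \right) \left( 1 - \frac{I^2(s)\, y^2}{2} \right),
```

where we used $I'(s)=\frac{1}{2}\left(1-\frac{1}{k}\right) I(s)$.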
Then, we have the following: \medskip \textbf{a) First term $\pa_s q$:} In this part, we aim to compute the difference between $\partial_s q_n(s)$ and $P_n(\partial_s q)$ in the following lemma. \begin{lemma}\label{Lemma-Pn_partialq} For all $n \in \{0,1,...,[M]\}$, it holds that $$ P_{n} (\partial_s q)=\partial_s q_n (s) + \left (1-\frac 1k\right )(n+1)(n+2) I^{-2}(s) q_{n+2} (s), \forall s \in [s_0, \bar s]. $$ \end{lemma} \begin{proof}We only give the proof for $n\geq 2$; for $n=0,1$ it is easy to derive the result.
Using \eqref{defi-q_m}, we have the following equality $$\langle H_n, H_n \rangle_{L^2_{\rho_s}} q_n(s) = \langle q,H_n(s)\rangle_{L^2_{\rho_s}},$$ which implies \begin{eqnarray*} \langle H_n, H_n \rangle_{L^2_{\rho_s}} \partial_s q_n(s) & = &\langle \partial_s q, H_n \rangle_{L^2_{\rho_s}} + \langle q, \partial_s H_n (s)\rangle_{L^2_{\rho_s}} + \left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \\ & - & \partial_s \langle H_n,H_n \rangle_{\rho_s} q_n, \end{eqnarray*} which yields \begin{eqnarray*} P_n(\pa_s q) & = & \partial_s q_n - \langle q, \partial_s H_n (s)\rangle_{L^2_{\rho_s}}\langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} \\ & & - \left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} + \partial_s \langle H_n,H_n \rangle_{\rho_s} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} q_n. \end{eqnarray*} \medskip \begin{eqnarray*} \partial_s q_n & = &\langle \partial_s q, H_n \rangle_{L^2_{\rho_s}}\langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} + \langle q, \partial_s H_n (s)\rangle_{L^2_{\rho_s}}\langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} \\ & & + \left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} - \partial_s \langle H_n,H_n \rangle_{\rho_s} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} q_n. \end{eqnarray*} Thus, we can write \begin{equation}\label{estimate-q-s-partial-sn} \partial_s q_n = P_{n}( \partial_s q) + \tilde L, \end{equation} where \begin{eqnarray*} \tilde L = \langle q, \partial_s H_n (s)\rangle_{L^2_{\rho_s}}\langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} + \left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} - \partial_s \langle H_n,H_n \rangle_{\rho_s} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} q_n. 
\end{eqnarray*} We now aim to estimate $\tilde L$, provided that $(q(s),b(s)) \in V_{\delta,b_0}(s) $, and we also recall that $$ q = \sum_{j=0}^{[M]} q_j H_j + q_-. $$ + For $\partial_s \langle H_n,H_n \rangle_{\rho_s} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} q_n $: We use the facts that $$\langle H_n,H_n \rangle_{\rho_s} = I^{-2n}(s) 2^n n!, \text{ and } I(s) = e^{\frac{s}{2}\left(1 -\frac{1}{k} \right) } , $$ which imply \begin{eqnarray*} \partial_s \langle H_n, H_n \rangle_{L^2_{\rho_s}} = - n \left( 1 -\frac{1}{k} \right) \langle H_n, H_n \rangle_{L^2_{\rho_s}}. \end{eqnarray*} So, we obtain $$ \partial_s \langle H_n,H_n \rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} q_n(s) = - n \left( 1 -\frac{1}{k} \right) q_n(s).$$ + For $\left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}}^{-1} $: We use the fact that $$ \partial_s \rho_s = \frac{1}{2} \left( 1 -\frac{1}{k}\right) \rho_s - \frac{1}{4} \left( 1 - \frac{1}{k} \right) I^2(s) y^2 \rho_s , $$ which yields \begin{eqnarray*} \left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} & = & \frac{1}{2} \left( 1 -\frac{1}{k} \right) \langle q, H_n(s) \rangle_{L^2_{\rho_s}} - \frac{1}{4} \left( 1 -\frac{1}{k} \right) \langle q, I^2(s) y^2 H_n(s) \rangle_{L^2_{\rho_s}}\\ & = & \frac{1}{2} \left( 1 -\frac{1}{k} \right) q_n \langle H_n,H_n \rangle_{L^2_{\rho_s}} - \frac{1}{4}\left( 1 -\frac{1}{k} \right) \langle q, I^2(s) y^2 H_n(s) \rangle_{L^2_{\rho_s}} .
\end{eqnarray*} Thus, we derive \begin{eqnarray*} \left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}}^{-1} = \frac{1}{2} \left(1 -\frac{1}{k} \right) q_n - \frac{1}{4} \left(1 -\frac{1}{k} \right) \langle q, I^2(s) y^2 H_n(s) \rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}}^{-1}. \end{eqnarray*} Using the Hermite polynomial identities, we obtain \[z^2h_n=zh_{n+1}+2nzh_{n-1}=h_{n+2}+2(2n+1)h_n+4n(n-1)h_{n-2},\] and we find the following identity \begin{equation*} y^2 H_{n}(y,s) = H_{n+2} (y,s) + (4n+2) I^{-2}(s) H_n(y,s) + 4 n(n-1) I^{-4}(s) H_{n-2}(y,s). \end{equation*} This implies that \begin{eqnarray*} & & \langle q, I^2(s) y^2 H_n(s) \rangle_{L^2_{\rho_s}} \\ & = & I^{2}(s) \left[ q_{n+2} \| H_{n+2}\|^2_{L^2_{\rho_s}} + (4n+2) I^{-2}(s) q_n (s) \|H_n\|^2_{L^2_{\rho_s}} + 4n(n-1) q_{n-2} I^{-4}(s) \|H_{n-2}\|^2_{L^2_{\rho_s}} \right], \end{eqnarray*} which yields \begin{eqnarray*} & & \left\langle q, H_n (s) \frac{ \partial_s \rho_s}{\rho_s} \right\rangle_{L^2_{\rho_s}} \langle H_n, H_n \rangle_{L^2_{\rho_s}}^{-1} \\ &= & -n \left(1 - \frac{1}{k} \right)q_n - n(n-1) \left(1-\frac{1}{k} \right) q_{n-2} I^{-2}(s) \frac{\|H_{n-2}\|^2_{L^2_{\rho_s}}}{\|H_{n}\|^2_{L^2_{\rho_s}}} \\ &-&\left(1-\frac{1}{k} \right) (n+2)(n+1) I^{-2}(s) q_{n+2}, \end{eqnarray*} for all $ n \in \{0,...,[M]\} \text{ and } \forall s \in [s_0,\bar s]$ (with the convention that $q_j =0$ if $j<0$).
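As an independent sanity check, the two Hermite identities used above can be verified symbolically. The script below is a verification sketch, not part of the proof; it assumes the normalization $h_0=1$, $h_1=z$, $h_{n+1}(z)=z\,h_n(z)-2n\,h_{n-1}(z)$ for the polynomials $h_n$:

```python
# Sanity check of the Hermite identities
#   z h_{n-1} = h_n + 2(n-1) h_{n-2}
#   z^2 h_n   = h_{n+2} + 2(2n+1) h_n + 4n(n-1) h_{n-2}
# for the normalization h_{n+1} = z h_n - 2n h_{n-1} (assumed here).
# Polynomials are lists of integer coefficients, index = degree.

def add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    return [c * a for a in p]

def shift(p, k):
    # multiply the polynomial p by z^k
    return [0] * k + p

def h(n):
    # n-th Hermite polynomial in the normalization above
    hs = [[1], [0, 1]]
    for m in range(1, n):
        hs.append(add(shift(hs[m], 1), scale(-2 * m, hs[m - 1])))
    return hs[n]

def trim(p):
    while p and p[-1] == 0:
        p = p[:-1]
    return p

for n in range(2, 10):
    lhs1 = shift(h(n - 1), 1)
    rhs1 = add(h(n), scale(2 * (n - 1), h(n - 2)))
    assert trim(lhs1) == trim(rhs1)
    lhs2 = shift(h(n), 2)
    rhs2 = add(add(h(n + 2), scale(2 * (2 * n + 1), h(n))),
               scale(4 * n * (n - 1), h(n - 2)))
    assert trim(lhs2) == trim(rhs2)
print("both identities hold for n = 2..9")
```

The same recurrence gives $h_2(z)=z^2-2$ and $h_3(z)=z^3-6z$, consistent with $\langle h_n,h_n\rangle_{L^2_\rho}=2^n n!$.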
\\ + $ \langle q, \partial_s H_n (s)\rangle_{L^2_{\rho_s}}\langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} $: We compute \begin{eqnarray*} \partial_s H_n(s) &=& - n I'(s) I^{-n-1}(s) h_n(I(s)y) + I'(s) y h'_n(I(s)y) I^{-n}(s) \\ & = & - \frac{n}{2} \left(1-\frac{1}{k} \right) H_n(s) + \frac{n}{2} \left( 1-\frac{1}{k} \right) y H_{n-1}(s). \end{eqnarray*} Let us recall the following identity for the Hermite polynomials \begin{equation} y H_{n-1} (y,s) = H_n(y,s) + I^{-2} (s) 2(n-1) H_{n-2}(y,s). \end{equation} So, we can rewrite $\partial_s H_n$ as follows \begin{equation}\label{formula-partia-s-Hn} \partial_s H_n (y,s) = n (n-1) \left(1 -\frac{1}{k} \right) I^{-2}(s) H_{n-2}(y,s). \end{equation} Thus, we obtain \begin{eqnarray*} \langle q, \partial_s H_n (s)\rangle_{L^2_{\rho_s}}\langle H_n, H_n \rangle_{L^2_{\rho_s}} ^{-1} &=& n (n-1) \left(1-\frac{1}{k} \right) I^{-2}(s)q_{n-2} \frac{\|H_{n-2}\|^2_{L^2_{\rho_s}}}{ \| H_{n}\|^2_{L^2_{\rho_s}}}. \end{eqnarray*} Summing the three contributions in $\tilde L$, the $q_n$ and $q_{n-2}$ terms cancel (note that $I^{-2}(s)\|H_{n-2}\|^2_{L^2_{\rho_s}}/\|H_n\|^2_{L^2_{\rho_s}} = \frac{I^{2}(s)}{4n(n-1)}$), and we obtain \begin{equation*} \partial_s q_n = P_{n}( \partial_s q) - \left (1-\frac 1k\right )(n+1)(n+2) I^{-2}(s) q_{n+2}, \forall n \in \{0,1....,[M]\}, \end{equation*} which concludes the proof of the Lemma. \end{proof} \textbf{ b) Second term $\mathcal{L}_s (q)$} \begin{lemma}\label{Lemma-P-n-mathcal-L-s} For all $0\leq n\leq [M]$, it holds that \begin{equation}\label{P-n-mathcal-L-n-s-q} P_n( \mathcal{L}_s q)= \left(1-\frac{n}{2k}\right)q_n+\left(1-\frac{1}{k}\right)(n+1)(n+2) I^{-2}(s) q_{n+2}. \end{equation} \end{lemma} \begin{proof} As in the proof of Lemma \ref{Lemma-Pn_partialq}, we only give the proof for $n\geq 2$; for $n=0,1$ it is easy to derive the result. We write $P_n( \mathcal{L}_s q)$ as follows: \[\begin{array}{lll} P_n(\mathcal{L}_s q)&=& \dsp \|H_n\|_{L^2_{\rho_s}}^{-2}\left (\int\left ( I^{-2}(s) \Delta q-\frac{1}{2}y \cdot \nabla q+q \right )H_n \rho_s dy+ \int\frac 12\left (1-\frac{1}{k}\right )y\nabla q H_n \rho_s dy \right )\\ & =&\dsp \|H_n\|_{L^2_{\rho_s}}^{-2}\left (A_1+\frac 12\left (1-\frac{1}{k}\right )A_2\right ).
\end{array} \] In the following, we will use the Hermite polynomial identity \eqref{Hermite-identities-ell=2} given by Lemma \ref{Hermite_Identies}. Using integration by parts and the polynomial identities, we obtain \[ \begin{array}{rcl} \dsp A_1 &=& \dsp \int \left (I^{-2}(s) \Delta q-\frac{1}{2}y \cdot \nabla q+q\right ) H_n \rho_s dy\\ &=&\dsp \int I^{-2}\div{(\nabla q\rho_s)} H_{n} dy+q_n\|H_n\|_{L^2_{\rho_s}}^2\\ &=& -\dsp I^{-2} \int\nabla q\, nH_{n-1}\rho_s dy+q_n\|H_n\|_{L^2_{\rho_s}}^2\\ &=&\dsp n(n-1)I^{-2}\int q H_{n-2} \rho_s dy-\frac n 2 \int y qH_{n-1}\rho_s dy+q_n\|H_n\|_{L^2_{\rho_s}}^2\\ &=&\dsp \left (1-\frac{n}{2}\right ) q_n \|H_n\|_{L^2_{\rho_s}}^2, \end{array} \] where in the last step the $q_{n-2}$ contributions cancel thanks to the identity $yH_{n-1}=H_n+2(n-1)I^{-2}H_{n-2}$. By a similar computation, using the change of variables $z=Iy$ and introducing $\rho(z)=I^{-1}\rho_s(y)$, we get \[\begin{array}{l} A_2=\dsp \int y\nabla q H_n\rho_s dy\\ = \dsp - \int q H_{n}\rho_s dy- \int q\, nyH_{n-1}\rho_s dy + \frac{I^2}{2}\int qy^2H_{n} \rho_s dy \\ = \dsp -q_n\|H_n\|^2-I^{-n} n\int q z h_{n-1}\rho dz + \frac{1}{2} I^{-n}\int z^2 q h_{n}\rho dz \\ = \dsp -q_n\|H_n\|^2-I^{-n} n\int q (h_n+2(n-1)h_{n-2})\rho dz + \frac{1}{2} I^{-n}\int z^2 q h_{n}\rho dz. \\ \end{array} \] Using the polynomial Hermite identity \[z^2h_n=zh_{n+1}+2nzh_{n-1}=h_{n+2}+2(2n+1)h_n+4n(n-1)h_{n-2},\] we obtain \[\begin{array}{rcl} A_2 &=&\dsp -q_n\|H_n\|^2-I^{-n} n\int q (h_n+2(n-1)h_{n-2})\rho dz \\ &+& \frac{1}{2} I^{-n}\int q [h_{n+2}+2(2n+1)h_n+4n(n-1)h_{n-2}]\rho dz \\ & = & -q_n\|H_n\|^2-nq_n\|H_n\|^2-2n(n-1) I^{-2}q_{n-2}\|H_{n-2}\|^2+\frac{1}{2}q_{n+2}I^2\|H_{n+2}\|^2\\ &+& (2n+1) q_n\|H_n\|^2+2n(n-1)q_{n-2}I^{-2}\|H_{n-2}\|^2\\ &=&\left (n q_n+ 2(n+2)(n+1) I^{-2} q_{n+2}\right )\|H_n\|^2.
\end{array} \] Thus, we obtain by adding all related terms that \[\dsp P_n( \mathcal{L}_s q)= \left(1-\frac{n}{2k}\right)q_n+\left(1-\frac{1}{k}\right)(n+2)(n+1) I^{-2}(s) q_{n+2}, \] which concludes the proof of Lemma \ref{Lemma-P-n-mathcal-L-s}. \end{proof} \textbf{c) Third term, the nonlinear term $\mathcal{N} (q)$}\\ In this part, we aim to estimate the projection of $\mathcal{N}(q)$ on $H_n$, for $n \in \{0,1,...,[M] \} $. More precisely, we have the following Lemma: \begin{lemma}\label{projecion-H-n-N} Let $b_0 > 0$, then, there exists $ \delta_5(b_0)>0$ such that for all $ \delta \in (0, \delta_5)$ there exists $s_5(b_0,\delta) \ge 1$ such that for all $s_0 \ge s_5$, the following property is valid: Assume $(q,b)(s) \in V_{\delta, b_0}(s)$ for all $s \in [s_0,\bar s]$ for some $\bar s > s_0$, then we have \beqtn\label{bound-N} \left| P_n(\mathcal{N}) \right| \leq CI^{-2\delta }(s), \forall s \in \left[s_0, \bar s \right] \text{ and } \forall n \in \{0,1,..., [M]\}. \eeqtn \end{lemma} \begin{proof} We argue as in \cite{BKnon94}. First, let us recall that the nonlinear term $\mathcal{N}$ and the projection $ P_n(\mathcal{N}) $ are defined in \eqref{nonlinear-term} and \eqref{defi-q_m}, respectively. The main goal is to use the estimates defining $V_{\delta, b_0}(s)$ to get an improved bound on $P_n(\mathcal{N})$. Firstly, we recall the following identity \begin{equation}\label{e-b-identity-L} e_b(y)=(p-1)^{-1}\left (\sum_{\ell=0}^{L}\left (-\frac{by^{2k}}{p-1}\right )^\ell + (p-1)\left (-\frac{by^{2k}}{p-1}\right )^{L+1}e_b(y)\right), \forall L \in \mathbb{N}^*. \end{equation} From the fact that $ (q,b)(s) \in V_{\delta, b_0}(s) $ for all $s \in [s_0, \bar s]$, we get the following \begin{eqnarray} \left| e_{b}(y) q(y) \right| = | e_{b}(y)| \left| \left( \sum_{m=0}^{[M]} q_m(s) H_m(y,s) +q_-(y,s) \right) \right| \le C I^{-\delta }(s) (1 + |y|^M), \label{rough-esti-e-b-q} \end{eqnarray} which implies \begin{eqnarray} \left|\mathcal{N}(q)(y,s) \right| \le C \left( 1+ \left| e_{b}(y,s) q(y,s) \right|^p \right) \le C[1+I^{-p\delta}(s)(1 + |y|^{Mp})]. \label{rough-estimate-N-q} \end{eqnarray} By applying Lemma \ref{small-integral-y-ge-I-delta} with $f(y) =\mathcal{N}(y)$ and $K =pM, \delta =0$, we obtain \begin{eqnarray}\label{projection-xc-N} \left| \int_{|y| \ge 1} \mathcal{N}(y,s) H_n(y,s)\rho_{s}(y)dy \right| \le C e^{-\frac{I(s)}{8}}, \forall n \in \{0,1,...,[M]\}, \end{eqnarray} from which it follows that \begin{eqnarray} \left| \int_{|y| \ge 1} \mathcal{N}(y,s) H_n(y,s)\rho_{s}(y)dy \right| \le C I^{-2\delta -2n}, \forall n \in \{0,1,...,[M]\},\label{esti-integral-N-lea-I-delta} \end{eqnarray} provided that $s_0 \ge s_{1,1}(\delta, M)$. We now claim that the following estimate \begin{eqnarray} \left|\int_{|y| \le 1} \mathcal{N}(y,s) H_n(y,s)\rho_{s}(y)dy \right| \le C I^{-2\delta -2n}, \forall n \in \{ 0,1..., [M]\}, \label{integral-N-H-n-les-I-delta} \end{eqnarray} directly concludes the proof of Lemma \ref{projecion-H-n-N}.
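The identity \eqref{e-b-identity-L} can be checked numerically. The sketch below is not part of the proof; it assumes $e_b(y)=(p-1+b\,y^{2k})^{-1}$ for the profile factor, with the remainder term weighted by $(p-1)$, and the values of $p$, $b$, $k$ are arbitrary illustrative choices:

```python
# Numerical sketch of the geometric-series identity for e_b, assuming
# e_b(y) = (p - 1 + b*y^(2k))^(-1).  With X = -b*y^(2k)/(p-1) and |X| < 1:
#   e_b = (p-1)^(-1) * ( sum_{l=0}^{L} X^l + (p-1) * X^(L+1) * e_b ).
p, b, k = 3.0, 0.5, 2  # illustrative choices, not from the paper

def e_b(y):
    return 1.0 / (p - 1 + b * y ** (2 * k))

for y in [0.1, 0.5, 1.0]:
    X = -b * y ** (2 * k) / (p - 1)
    for L in [0, 1, 5]:
        partial_sum = sum(X ** l for l in range(L + 1))
        rhs = (partial_sum + (p - 1) * X ** (L + 1) * e_b(y)) / (p - 1)
        assert abs(rhs - e_b(y)) < 1e-12  # exact up to rounding
print("identity verified for all tested y and L")
```

The identity is algebraically exact, so the tolerance only absorbs floating-point rounding.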
Indeed, let us assume that \eqref{esti-integral-N-lea-I-delta} and \eqref{integral-N-H-n-les-I-delta} hold, then we derive $$ \left| \left\langle \mathcal{N}, H_n(y,s) \right\rangle_{L^2_{\rho_s} } \right| \le C I^{-2\delta -2n}(s), \forall n \in \{0,1...,[M]\}, $$ which implies $$ |P_n(\mathcal{N})| \le CI^{-2\delta}(s), \forall s \in [s_0,\bar s] \text{ and } n \in \{0,1...,[M]\}, $$ which concludes \eqref{bound-N} and also Lemma \ref{projecion-H-n-N}. Now, it remains to prove \eqref{integral-N-H-n-les-I-delta}. From \eqref{rough-esti-e-b-q}, we have \begin{eqnarray}\label{esti-e-b-q-yle-1} \left| e_{b(s)}(y) q(y,s) \right| \le CI^{-\delta}(s), \forall s \in [s_0,\bar s] \text{ and } |y| \leq 1. \end{eqnarray} Then, we apply a Taylor expansion to the function $\mathcal{N}(q)$ in the variable $z = q e_{b}$ (here we write $b$ for $b(s)$) and we get \begin{eqnarray} \mathcal{N}(q)&=&|1+e_bq|^{p-1}(1+e_bq)-1-p e_b q =\sum_{j=2}^{K}c_j (e_bq)^j + R_K, \label{defi-R-K} \end{eqnarray} where $K $ will be fixed later, and the reader should bear in mind that we only consider $|y| \leq 1$ in this part.
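The expansion \eqref{defi-R-K} can be sanity-checked numerically: for $1+z>0$ one has $|1+z|^{p-1}(1+z)=(1+z)^p$, so the coefficients $c_j$ are the generalized binomial coefficients and the remainder is of order $K+1$. A minimal sketch (the exponent $p$ below is an arbitrary illustrative choice):

```python
# Check that N(z) = (1+z)^p - 1 - p*z equals sum_{j=2}^K c_j z^j + O(z^(K+1)),
# with c_j = p(p-1)...(p-j+1)/j! the generalized binomial coefficients.

def binom(p, j):
    out = 1.0
    for i in range(j):
        out *= (p - i) / (i + 1)
    return out

p, K = 2.3, 6  # illustrative choices
for z in [0.2, 0.1, 0.05]:
    N = (1 + z) ** p - 1 - p * z
    taylor = sum(binom(p, j) * z ** j for j in range(2, K + 1))
    # the remainder is of order K+1 = 7
    assert abs(N - taylor) <= 10 * abs(z) ** (K + 1)
print("remainder R_K is of order z^(K+1) on the tested values")
```

We avoid very small $z$ in the check, where floating-point cancellation in $N$ would dominate the genuine remainder.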
For the remainder $R_K$, we have \begin{eqnarray} \left| R_K(y,s) \right| \le C \left| e_{b}(y)q(y,s) \right|^{K+1} \le C I^{-\delta (K+1)}(s).\label{property-R-K} \end{eqnarray} Besides that, we recall from \eqref{decomposition-q2} that $ q = q_+ + q_-$, and we can then write \begin{eqnarray*} \sum_{j=2}^K c_j (e_{b} q )^j = \sum_{j=2}^{K}d_{j} (e_bq_+)^j + \sum_{j=2}^K \sum_{\ell=0}^{j-1} \tilde d_{j,\ell} e_{b}^j(q_+)^\ell (q_-)^{j-\ell} = A + S, \end{eqnarray*} where \begin{eqnarray} A = \sum_{j=2}^{K}d_{j} (e_bq_+)^j \text{ and } S =\sum_{j=2}^K \sum_{\ell=0}^{j-1} \tilde d_{j,\ell} e_{b}^j(q_+)^\ell (q_-)^{j-\ell}, \text{ for some } d_j, \tilde d_{j,\ell} \in \mathbb{R}.\label{defi-A-S} \end{eqnarray} From the above expressions, we can decompose $\mathcal{N}$ as $$ \mathcal{N} = A + S + R_K,$$ and we also have \begin{eqnarray*} \int_{|y| \le 1 } \mathcal{N}(y,s) H_n(y,s) \rho_s(y) dy &=& \int_{|y| \le 1} A H_n(y,s) \rho_s(y) dy +\int_{|y| \le 1} S H_n(y,s) \rho_s(y) dy \\ &+& \int_{|y| \le 1} R_K H_n(y,s) \rho_s(y) dy. \end{eqnarray*} - \textit{The integral for $R_K$:} Note that $H_n$ defined in \eqref{eigenfunction-Ls} satisfies $$ \left| H_n(y,s) \right| \le C (1 + |y|^n) \le C, \forall |y| \le 1, $$ hence, it follows from \eqref{property-R-K} that \begin{eqnarray} \left|\int_{|y| \le 1 } R_K(y,s) H_n(y,s)\rho_s(y) dy\right| &\le & C I^{-\delta(K+1)}(s) \int_{|y| \le 1} e^{-\frac{I^2(s)y^2}{4}} I(s) dy \nonumber\\ & \le & C I^{1-\delta (K+1)}(s) \le CI^{-2\delta - 2n }(s), \forall s \in [s_0,\bar s],\label{estimate-on-integral-R-K} \end{eqnarray} provided that $K \ge K_1(\delta,M)$.
\noindent - \textit{ The integral for $S$:} Since $(q,b)(s) \in V_{\delta,b_0}(s)$ for all $s \in [s_0,\bar s]$, we can estimate as follows \begin{eqnarray*} \left| q_+(y,s) \right|^\ell + |q_-(y,s)|^\ell \le \left| \sum_{m=0}^{[M]} q_m(s) H_{m}(y,s) \right|^\ell + C I^{-\ell \delta}(s)(I^{-M}(s)+|y|^M)^\ell \le C I^{-\ell \delta}(s), \end{eqnarray*} for all $ |y| \le 1 \text{ and } \ell \in \mathbb{N}.$ Regarding \eqref{defi-A-S}, we can estimate as follows \begin{eqnarray*} \left| S(y,s) \right| \le C \left(\left|q_+(y,s) \right| |q_-(y,s)| +|q_-(y,s)|^2 \right)\le C I^{-2\delta}(s) ( I^{-M}(s) + |y|^M ), \end{eqnarray*} provided that $s_0 \ge s_{1,3}(K)$. Thus, we derive \begin{eqnarray*} \left| \int_{|y| \le 1} S(y,s) H_n(y,s) \rho_s(y) dy \right| \le C I^{-2 \delta}(s) \int_{|y| \le 1} \left(I^{-M}(s) + |y|^M \right) |H_n(y,s)| e^{-\frac{I^2(s) y^2 }{4}} I(s) dy. \end{eqnarray*} According to \eqref{eigenfunction-Ls} and using the change of variable $z = I(s) y$, we have \begin{eqnarray} & & \hspace{-0.8cm} \int_{|y| \le 1} \left(I^{-M}(s) + |y|^M \right) |H_n(y,s)| e^{-\frac{I^2(s) y^2 }{4}} I(s) dy \label{changin-variable-z}\\ &=& I^{-M-n}(s) \int_{|z| \le I(s)} (1+|z|^M) |h_n(z)| e^{-\frac{|z|^2}{4}} dz \le C I^{-M-n}(s). \nonumber \end{eqnarray} Finally, we have \begin{eqnarray} \left| \int_{|y| \le 1} S(y,s) H_n(y,s) \rho_s(y) dy \right| \le CI^{-2\delta-2n}(s), \forall n \le [M],\label{integral-S-H-n} \forall s \in [s_0,\bar s], \end{eqnarray} provided that $s_0 \ge s_{1,3}(K)$. \noindent - \textit{The integral for $A$}: From \eqref{decomposition-q2} and \eqref{eb0-modulation}, we write \begin{eqnarray*} e_{b}(y) = \sum_{\ell =0}^{K-1} E_\ell b^\ell y^{2\ell k } + O(y^{2kK }), \text{ with } E_\ell \in \R. \end{eqnarray*} Then, with $q_+ $ defined as in \eqref{decomposition-q2}, we conclude \begin{eqnarray*} \left( e_{b}q_+ \right)^j = \left( \sum_{\ell =0}^{K-1} E_\ell b^\ell y^{2\ell k } \right)^j \left( \sum_{m=0}^{\left[ M \right]} q_m H_m \right)^j + O(|q_+|^2 y^{2kK} ), \forall j \ge 2. \end{eqnarray*} By using the technique in \eqref{changin-variable-z} (the change of variable $z = I(s) y$), we obtain \begin{eqnarray} \int_{|y| \le 1 } |y|^{2kK} |q_+(y,s)|^2 \rho_s (y) dy & \le & C I^{-2\delta}(s) \int_{|y| \le 1 } |y|^{2kK} \left( \sum_{m=0}^{\left[ M\right]} \left|H_m(y,s) \right| \right)^2 \rho_s dy \nonumber\\ &\le & C I^{-2\delta -2kK }(s) \le C I^{-2\delta - 2n}(s), \label{estimates-the-errors-y-K} \end{eqnarray} provided that $ K \ge K_2(\delta, M)$ is large enough. In addition, we derive from the definition of $H_m$ in \eqref{eigenfunction-Ls} that \begin{eqnarray*} \left( \sum_{\ell=0}^{K-1} E_\ell b^\ell y^{2k\ell} \right)^j \left( \sum_{m=0}^{\left[ M \right]} q_m H_m \right)^j = \sum_{i=0}^{L} \mathcal{A}_i(s) y^i \text{ where } L = j\left( \left[M \right] + 2k(K-1)\right), \end{eqnarray*} and the coefficients $\mathcal{A}_i$ satisfy \begin{eqnarray*} \left| \mathcal{A}_i(s) \right| \le C I^{-2\delta}(s). \end{eqnarray*} Now, we apply Lemmas \ref{lemma-scalar-product-H-m} and \ref{small-integral-y-ge-I-delta} to deduce \begin{eqnarray} \left| \int_{|y| \le 1 } \left( \sum_{\ell=0}^{K-1} E_\ell b^\ell y^{2k\ell} \right)^j \left( \sum_{m=0}^{ [M] } q_m H_m \right)^j H_n(y,s)\rho_s(y) dy \right| \le C I^{-2\delta -2n} (s). \label{estinate-polynomial-q-+} \end{eqnarray} Thus, we get \begin{eqnarray} \left| \int_{|y| \le 1} A(y,s) H_n(y,s) \rho_s(y) dy \right| \le CI^{-2\delta-2n}(s), \forall n \le [M], \forall s \in [s_0,\bar s].
\label{estimate-A} \end{eqnarray} According to \eqref{estimate-on-integral-R-K}, \eqref{integral-S-H-n} and \eqref{estimate-A}, we have \begin{eqnarray} \left| \int_{|y| \le 1} \mathcal{N}(q) H_n(y,s) \rho_s(y) dy \right| \le C I^{-2\delta -2n}(s),\label{estimate-} \end{eqnarray} provided that $s_0 \ge s_{1,3}(K)$ and $K \ge K_2$. Thus, \eqref{integral-N-H-n-les-I-delta} follows, which concludes the proof of the Lemma. \end{proof} \medskip \textbf{d) Fourth term $b'(s)\mathcal{M} (q)$.} Recall the definition of $\mathcal{M}$: \[\mathcal{M}(q)=\frac{p}{p-1}y^{2k} (1+e_bq).\] We then have the following result: \begin{lemma}\label{lemma-P-n-M} Let $b_0 >0$, then there exists $\delta_6(b_0)>0$ such that for all $ \delta \in (0, \delta_6)$ there exists $ s_6(\delta, b_0) \ge 1$ such that for all $s_0 \ge s_6$ the following holds: Assume $(q,b)(s) \in V_{\delta,b_0}(s), \forall s \in [s_0, \bar s]$ for some arbitrary $\bar s$, then it holds that \begin{equation}\label{project-P-n-M} P_{n} \left( \mathcal{M} (q) (s) \right) = \left\{ \begin{array}{rcl} \frac{p}{p-1} + O(I^{-\delta}(s)) & \text{ if } & n = 2k \\ O(I^{-\delta}(s)) & \text{ if } & n \ne 2k, n \in \{0,1,...,[M]\} \end{array} \right. , \end{equation} for all $s \in [s_0, \bar s]$. \end{lemma} \begin{proof} We first decompose $$ \left\langle \mathcal{M}, H_n(y,s) \right\rangle_{ L^2_{\rho_s}} = \left\langle \frac{p}{p-1} y^{2k} , H_n(y,s) \right\rangle_{ L^2_{\rho_s}} + \left\langle \frac{p}{p-1}y^{2k}e_b(y) q , H_n(y,s) \right\rangle_{ L^2_{\rho_s}}.$$ From \eqref{eigenfunction-Ls}, we get the following \begin{eqnarray}\label{part-1-M} \left\langle \frac{p}{p-1} y^{2k} , H_n(y,s) \right\rangle_{ L^2_{\rho_s}} = \frac{p}{p-1} \left\{ \begin{array}{rcl} \|H_{2k}\|^2_{L^2_{\rho_s}} & \text{ if } & n=2k\\[0.2cm] O(I^{-2k-2}(s)) & \text{ if } & n < 2k \\ 0 & \text{ if } & n > 2k \end{array} \right. . \end{eqnarray} Now we focus on the scalar product $$ \left\langle \frac{p}{p-1}y^{2k}e_b(y) q , H_n(y,s) \right\rangle_{ L^2_{\rho_s}} .$$ We decompose \begin{eqnarray*} \left\langle \frac{p}{p-1}y^{2k}e_b(y) q , H_n(y,s) \right\rangle_{ L^2_{\rho_s}} &=& \int_{|y| \le 1} \frac{p}{p-1} y^{2k} e_b(y) q H_n(y,s) \rho_s (y) dy \\ &+&\int_{|y| \ge 1} \frac{p}{p-1} y^{2k} e_b(y) q H_n(y,s) \rho_s (y) dy.
\end{eqnarray*} Since $q \in V_{ \delta, b_0}(s)$ for all $ s \in [s_0,\bar s]$, the following estimate holds \begin{eqnarray*} \left| \frac{p}{p-1} y^{2k} e_b(y) q \right| \le C I^{-\delta}(s) |y|^{2k} (1+ |y|^{M}). \end{eqnarray*} Using Lemma \ref{small-integral-y-ge-I-delta}, we conclude \begin{eqnarray} & & \left| \int_{|y| \ge 1 } \frac{p}{p-1} y^{2k} e_b(y) q H_n(y,s) \rho_s (y) dy \right| \label{integral_M-I-ge-I-delta} \\ &\le & C I^{-\delta}(s) e^{-\frac{1}{8} I(s)} \le C I^{-2\delta}(s), \forall s \in [s_0,\bar s],\nonumber \end{eqnarray} provided that $s_0 \ge s_3(\delta)$. Let us decompose \begin{eqnarray*} \frac{p}{p-1} y^{2k} e_b(y) q = \frac{p}{p-1} y^{2k} e_b(y) q_+ + \frac{p}{p-1} y^{2k} e_b(y) q_- . \end{eqnarray*} Since $q \in V_{\delta, b_0}(s)$ and $e_b$ is bounded, we get \begin{eqnarray*} \left| \frac{p}{p-1} y^{2k} e_b(y) q_- \right| \le C I^{-\delta}(s) |y|^{2k} (I^{-M}(s) + |y|^M). \end{eqnarray*} By the same technique as in \eqref{changin-variable-z}, we obtain \begin{equation}\label{bound-for-y-2k-e-b-q-} \left| \int_{|y| \le 1 } \frac{p}{p-1} y^{2k} e_b(y) q_- H_n (y,s) \rho_s(y) dy \right| \le CI^{-2\delta -2n}(s), \forall s \in [s_0,\bar s] \text{ and } n \in \{ 0,1...,[M]\}. \end{equation} In addition, using \eqref{decomposition-q2} and \eqref{eb0-modulation}, we write \begin{eqnarray*} \frac{p}{p-1} y^{2k} e_b(y) q_+ = \sum_{i=0}^{[M]} \sum_{j=1}^{K} m_{i,j} b^j q_i(s) y^{2kj} H_i(y,s) + O\left( I^{-\delta}(s) y^{2k(K+1)} (1 + |y|^M ) \right). \end{eqnarray*} Repeating the technique in \eqref{changin-variable-z} (the change of variable $z = I(s) y$), we obtain \begin{eqnarray*} \left| \int_{|y| \le 1} I^{-\delta}(s) y^{2k(K+1)} (1 + |y|^M ) H_n(y,s) \rho_s(y) dy \right| \le CI^{-2\delta -2n} (s), \forall s \in [s_0,\bar s], n \in \{ 0,1,..., [M]\}, \end{eqnarray*} provided that $ K $ is large enough.
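The change of variable $z=I(s)y$ invoked in \eqref{changin-variable-z} and again above encodes the rule that each factor $|y|$ in an integral against $\rho_s$ costs one factor $I^{-1}(s)$. The sketch below checks this numerically; it assumes the normalization $\rho_s(y)=\frac{I(s)}{\sqrt{4\pi}}e^{-I^2(s)y^2/4}$:

```python
# Numerical check of the scaling int |y|^m rho_s(y) dy = I^{-m} * (m-th moment
# of rho), for rho_s(y) = I/sqrt(4*pi) * exp(-I^2 y^2 / 4) (assumed form).
from math import exp, pi, sqrt

def moment(m, I, n_steps=50000, half_width=30.0):
    # midpoint Riemann sum of int |y|^m rho_s(y) dy over |y| <= half_width/I
    a = -half_width / I
    h = (2 * half_width / I) / n_steps
    total = 0.0
    for i in range(n_steps):
        y = a + (i + 0.5) * h
        total += abs(y) ** m * I / sqrt(4 * pi) * exp(-((I * y) ** 2) / 4) * h
    return total

for m in [0, 2, 4]:
    ref = moment(m, 1.0)  # the z-variable moment (I = 1)
    for I in [2.0, 5.0]:
        # every power of y contributes one power of I^{-1}
        assert abs(moment(m, I) - ref / I ** m) < 1e-4 * max(ref, 1.0)
print("scaling verified: moments decay like I^{-m}")
```

With this normalization the $m=0$ moment is $1$ and the $m=2$ moment is $2$, matching $\langle h_n,h_n\rangle_{L^2_\rho}=2^n n!$ for $n=0,1$.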
Besides that, we use the fact that $ q \in V_{\delta, b_0}(s) $ to get \begin{eqnarray*} \left| q_j(s) \right| \le CI^{-\delta}(s), \end{eqnarray*} and, since $H_i$ can be written as a polynomial in $y$, we apply Lemma \ref{lemma-scalar-product-H-m} and Lemma \ref{small-integral-y-ge-I-delta} to derive \begin{eqnarray*} & & \left| \int_{|y| \le 1} \left( \sum_{i=0}^{[M]} \sum_{j=1}^{K} m_{i,j} b^j q_i(s) y^{2kj} H_i(y,s) \right) H_n(y,s) \rho_s(y) dy \right| \\ &\le & CI^{-\delta -2n}(s), \forall s \in [s_0,\bar s] \text{ and } n \in \{ 0,1,..., [M]\}. \end{eqnarray*} Finally, we get \begin{equation}\label{bound-y-2k-e-b-q+} \left| \int_{|y| \le 1 } \frac{p}{p-1} y^{2k} e_b(y) q_+ H_n (y,s) \rho_s(y) dy \right| \le CI^{-\delta -2n}(s), \forall s \in [s_0,\bar s] \text{ and } n \in \{ 0,1...,[M]\}. \end{equation} Now, we combine \eqref{bound-for-y-2k-e-b-q-} with \eqref{bound-y-2k-e-b-q+} to obtain \begin{equation}\label{integra-M-y-le-I-delta} \left| \int_{|y| \le 1 } \frac{p}{p-1} y^{2k} e_b(y) q H_n (y,s) \rho_s(y) dy \right| \le CI^{-\delta -2n}(s), \forall s \in [s_0,\bar s] \text{ and } n \in \{ 0,1...,[M]\}. \end{equation} We use \eqref{integral_M-I-ge-I-delta} and \eqref{integra-M-y-le-I-delta} to conclude \begin{equation}\label{part-2-M} \left| \left\langle \frac{p}{p-1}y^{2k}e_b(y) q , H_n(y,s) \right\rangle_{ L^2_{\rho_s}} \right| \le CI^{-\delta-2n}(s), \forall s \in [s_0,\bar s] \text{ and } n \in \{ 0,1...,[M]\}. \end{equation} Finally, by \eqref{part-1-M} and \eqref{part-2-M}, we conclude the proof of the Lemma.
\end{proof} \medskip \textbf{e) Fifth term $\mathcal{D}_s (q)$}\\ \begin{lemma}[Estimation of $P_n(\mathcal{D}_s)$] \label{lemma-P-n-mathcal-D} Let $b_0 > 0$, then there exists $\delta_7(b_0) > 0$ such that for all $\delta \in (0,\delta_7)$, there exists $s_7(\delta, b_0)$ such that for all $s_0 \ge s_7$, the following property holds: Assume $(q,b)(s) \in V_{\delta, b_0}(s)$ for all $ s \in [s_0,\bar s]$ for some $\bar s \ge s_0$, then we have \begin{equation}\label{projec-P-n-mathcal-D} \left| P_n(\mathcal{D}_s(q)) \right| \leq C I^{-2\delta} (s), \text{ for all } s \in [s_0,\bar s], \end{equation} for all $ 0 \le n \le [M] $. \end{lemma} \begin{proof} Let us now recall from \eqref{equation-Ds} that \[\mathcal{D}_s(\nabla q)=-\frac{4pkb}{p-1}I^{-2}(s) y^{2k-1}e_b\nabla q.\] From \eqref{defi-q_m} and \eqref{scalar-product-hm}, it suffices to estimate \begin{eqnarray*} \left\langle \mathcal{D}_s, H_n(y,s) \right\rangle_{L^2_{\rho_s} } = -\frac{4pkb}{p-1} I^{-2}(s) \int_{\mathbb R} y^{2k-1} e_b \nabla q H_n(y,s) \rho_s(y) dy. \end{eqnarray*} From the facts that $\nabla (H_n)=nH_{n-1}$ and $ \rho_s(y) = \frac{I(s)}{\sqrt{4\pi}} e^{-\frac{I^2(s) y^2}{4}}$, we use integration by parts to derive \begin{eqnarray*} & & \langle \mathcal{D}_s, H_n(y,s) \rangle_{L^2_{\rho_s} } \\ & = & \frac{4pkb}{p-1}I^{-2}(s)\left (\int \nabla (y^{2k-1}e_b)q H_n \rho_s(y)dy + n\int y^{2k-1}e_b q H_{n-1} \rho_s dy -\frac 1 2I^2(s)\int y^{2k} e_b q H_n \rho_s dy \right ). \end{eqnarray*} Then, we explicitly write the scalar product as four integrals as follows \begin{eqnarray*} \left\langle \mathcal{D}_s, H_n(y,s) \right\rangle_{L^2_{\rho_s} } &=&\frac{4pkb}{p-1}I^{-2}(s)\left \{(2k-1)\int y^{2k-2}e_bq H_n \rho_s(y)dy\right . -2kb\int y^{4k-2} e_b^{2}q H_n \rho_s(y)dy\\ &&+n\int y^{2k-1}e_b q H_{n-1} \rho_s dy \left . -\frac 1 2I^2(s)\int y^{2k} e_b q H_n \rho_s dy \right\}.
\end{eqnarray*} By the technique established in Lemma \ref{lemma-P-n-M}, we can prove $$ \left| \left\langle \mathcal{D}_s, H_n(y,s) \right\rangle_{L^2_{\rho_s} } \right| \le C I^{-2\delta -2n}(s), \forall s \in [s_0,\bar s] \text{ and } n \in \{0,1,\ldots,[M] \}, $$ which yields \eqref{projec-P-n-mathcal-D}, and the conclusion of the Lemma follows. \end{proof} \iffalse Then we write $$ \begin{array}{rcl} P_n(\mathcal{D}_s)&=& \dsp \int_{|y|\leq b(s)^{-\frac{1}{2k}}} \mathcal{D}_s H_n\rho_s dy+ \int_{|y|\geq b(s)^{-\frac{1}{2k}}} \mathcal{D}_s H_n\rho_s dy \\ & =&\mathcal{D}_1+\mathcal{D}_2. \end{array} $$ Arguing as in the proof of Claim \ref{estimation-M1-M2} and using the properties of our shrinking set \eqref{definition-shrinking-set}, we obtain \beqna |P_n(\mathcal{D}_s)| \leq C I^{-2\delta}(s). \eeqna \fi \medskip \textbf{f) Sixth term $\mathcal{R}_s (q)$} \begin{lemma}[Estimation of $P_n(\mathcal{R}_s)$]\label{Lemma-Rs-n} Let $b_0 > 0$, then there exists $\delta_8(b_0) >0$ such that for all $ \delta \in (0, \delta_8)$, there exists $s_8(b_0, \delta) \ge 1$ such that for all $ s_0 \ge s_8$, the following holds \begin{equation}\label{bound-P-n-mathcal-R-s} \left| P_n(\mathcal{R}_s(q)) \right| \leq C I^{-2\delta}(s), \end{equation} for all $s \in [s_0,\bar s]$ and $0 \le n \le M$. \end{lemma} \begin{proof} The technique is quite the same as for the other terms above. Firstly, we write the definition of $\mathcal{R}_s$ given in \eqref{equation-Rs} as follows \[ \mathcal{R}_s(q)= I^{-2}(s)y^{2k-2}\left (\alpha_1+\alpha_2 y^{2k}e_b+(\alpha_3+\alpha_4 y^{2k}e_b)q \right), \] then, we have the following \begin{eqnarray*} P_n(\mathcal{R}_s) = \frac{\left\langle \mathcal{R}_s, H_n(y,s) \right\rangle_{L^2_{\rho_s}} }{ \left\| H_n(s) \right\|_{L^2_{\rho_s}}^2 }, \end{eqnarray*} where $\left\| H_n(s) \right\|_{L^2_{\rho_s}}^2$ is computed in \eqref{scalar-product-hm}.
In particular, we observe that \eqref{bound-P-n-mathcal-R-s} immediately follows from \begin{eqnarray} \left| \left\langle \mathcal{R}_s, H_n(y,s) \right\rangle_{L^2_{\rho_s}} \right| \le CI^{-2\delta- 2n}(s), \forall s \in [s_0,\bar s] \text{ and } \forall n \in \{ 0,1,\ldots,[M]\}.\label{scalar-product-mathcal-R-H-n} \end{eqnarray} Besides that, the proof of \eqref{scalar-product-mathcal-R-H-n} proceeds as in Lemma \ref{lemma-P-n-M}. For that reason, we refer the reader to that proof for the details, and we finish the proof of the Lemma. \end{proof} \iffalse projecting $\mathcal{R}_s$ on $H_m$ gives: \[\begin{array}{lll} P_n(\mathcal{R}_s) &=&\dsp \int_{|y|\leq b(s)^{-\frac{1}{2k}} }\mathcal{R}_sH_n \rho_s dy+ \int_{|y|\geq b(s)^{-\frac{1}{2k}} } \mathcal{R}_sH_n \rho_s dy\\ &&=\mathcal{R}_1+\mathcal{R}_2. \end{array} \] To get the estimation for $\mathcal{R}_1$ and $\mathcal{R}_2$, we proceed as in the proof of (i) of Claim \ref{estimation-M1-M2}.\\ For $\mathcal{R}_2$, we note from the definition of $\mathcal{R}$ that when $b(s)y^{2k}\geq 1$ \[|\mathcal{R}_s|\leq CI_s^{-2\delta}|y|^{M-2},\] which allows us to conclude as in the proof of (ii) of Claim \ref{estimation-M1-M2}. \fi \textbf{Part 2: Proof of (i) and (ii) of Proposition \ref{proposition-ode}}: \medskip \noindent \textit{- Proof of (i) of Proposition \ref{proposition-ode}:}\\ Combining Lemmas \ref{Lemma-Pn_partialq}--\ref{Lemma-Rs-n} with the estimates defining $V_{\delta, b_0}(s)$, we obtain (i) of Proposition \ref{proposition-ode} \[\forall n \in \{0,\ldots,[M]\},\;\;\left |\pa_s q_n-\left( 1-\frac{n}{2k} \right)q_n\right |\leq CI^{-2\delta}(s), \forall s \in [s_0, \bar s],\] provided that $\delta \le \delta_3$ and $s_0 \ge s_3(\delta, b_0)$. Thus, we conclude item (i). \medskip \noindent \textit{- Proof of (ii) of Proposition \ref{proposition-ode}: Smallness of the modulation parameter.
}\\ Let us recall the equation satisfied by $q$: \beqtn\label{equation-q-bis} \pa_s q =\mathcal{L}_s q+b'(s)\mathcal{M}(q) +\mathcal{N} (q)+\mathcal{D}_s(\nabla q)+\mathcal{R}_s(q). \eeqtn This part aims to obtain an estimation of the modulation parameter $b(s)$. For this, we project equation \eqref{equation-q-bis} on $H_{2k}$ and take into consideration that $q_{2k}=0$, and we obtain \beqtn\label{modulation-equation} 0=\frac{p}{p-1}b'(s)\left (1+ P_{2k}(y^{2k}e_bq)\right )+P_{2k}(\mathcal{N}) +P_{2k}(\mathcal{D}_s)+P_{2k}(\mathcal{R}_s). \eeqtn Using the estimations given by equation \eqref{bound-N} and Lemmas \ref{lemma-P-n-M}, \ref{lemma-P-n-mathcal-D} and \ref{Lemma-Rs-n}, we obtain \beqtn\label{inequality-b} |b'(s)|\leq CI(s)^{-2\delta}=C e^{\delta\frac{1-k}{k}s}, \eeqtn where $0<\delta\leq \min_{5\le j\le 8} \delta_j$ is a strictly positive real, which gives us the smallness of the modulation parameter in (ii) of Proposition \ref{proposition-ode}, and we obtain \beqtn b(s)\to b^*\mbox{ as }s\to \infty,\;\; (t\to T). \label{convegence-b-s} \eeqtn Integrating inequality \eqref{inequality-b} between $s_0$ and infinity, we obtain \[ |b^*-b_0|\leq C e^{\delta\frac{1-k}{k}s_0}.\] We conclude that there exists $s_{9}$ such that for $s_0\geq s_9$ big enough, we have \[\frac{3}{4} b_0\leq b^*\leq \frac{5}{4} b_0,\] which is (ii) of Proposition \ref{proposition-ode}. \subsection{The proof of item (iii) of Proposition \ref{proposition-ode} }\label{proof-item-iii} Here, we prove the last identity of Proposition \ref{proposition-ode}. As in the previous subsection, we proceed in two parts: \begin{itemize} \item In Part 1, we project equation \eqref{equation-q} using the projector $P_-$ defined in \eqref{projector-P-}. \item In Part 2, we prove the estimate on $q_-$ given by (iii) of Proposition \ref{proposition-ode}.
\end{itemize} \textbf{Part 1: The projection of equation \eqref{equation-q} using the projector $P_-$.} Let $(q,b)$ be a solution to problem \eqref{equation-q} $\&$ \eqref{Modulation-condition} trapped in $V_{\delta, b_0}(s)$ for all $s \in [s_0, \bar s]$ for some $\bar s > s_0$. Then, we have the following results: \medskip \textbf{First term $\pa_s q$.}\\ \begin{lemma} For all $s \in [s_0, \bar s]$, it holds that \begin{equation}\label{esti-par-q-m-P-partial-s-q_-} P_-(\partial_s q)=\partial_s q_- - I^{-2}\left(1-\frac{1}{k}\right)\sum_{n=[M]-1}^{[M]}(n+1)(n+2)q_{n+2}(s)H_n. \end{equation} \end{lemma} \begin{proof} We firstly have \[ \begin{array}{lll} P_-(\partial_s q)-\partial_s q_- &= &-\displaystyle \left (\partial_s q-P_-(\partial_s q)\right )+\left (\partial_s q-\partial_s q_-\right ) , \\ &=&-\displaystyle \sum_{n=0}^{[M]} P_n(\partial_s q)H_n+\sum_{n=0}^{[M]}\partial_s (q_n H_n),\\ &=&-\displaystyle \sum_{n=0}^{[M]} P_n(\partial_s q)H_n+\sum_{n=0}^{[M]}\partial_s q_n H_n+\sum_{n=2}^{[M]}q_n\partial_s H_n. \end{array} \] We recall from \eqref{formula-partia-s-Hn} that for all $n\ge 2$ \[ \partial_s H_n (y,s) = n (n-1) \left(1 -\frac{1}{k} \right) I^{-2}(s) H_{n-2}(y,s),\] then by Lemma \ref{Lemma-Pn_partialq}, we obtain the desired result \[P_-(\partial_s q)=\partial_s q_- -I^{-2}\left(1-\frac{1}{k}\right)\sum_{n=[M]-1}^{[M]}(n+1)(n+2)q_{n+2}(s)H_n.\] \end{proof} \textbf{Second term $\mathcal L_s q$.}\\ By the spectral properties given in Section \ref{Section-Spectral-properties-Ls}, we can write \begin{lemma} For all $s \in [s_0, \bar s]$, it holds that \[P_-(\mathcal{L}_s q)=\mathcal{L}_s q_- -I^{-2} \left(1-\frac{1}{k}\right) \displaystyle\sum_{n=[M]-1}^{[M]}(n+1)(n+2)q_{n+2} H_n.\] \end{lemma} \begin{proof} We write \[ \begin{array}{lll} P_-(\mathcal{L}_s q)-\mathcal{L}_s q_- &= &-\displaystyle \left (\mathcal{L}_s q-P_-(\mathcal{L}_s q)\right )+\left (\mathcal{L}_s q-\mathcal{L}_s q_-\right ) , \\ &=&-\displaystyle \sum_{n=0}^{[M]} P_n(\mathcal{L}_s q) H_n+ \mathcal{L}_s
\left (q-q_-\right ),\\ &=&-\displaystyle \sum_{n=0}^{[M]} P_n(\mathcal{L}_s q) H_n+ \sum_{n=0}^{[M]}q_n\mathcal{L}_s(H_n) . \end{array} \] From \eqref{Ls-Hm}, we obtain \[\begin{array}{lll} \displaystyle \sum_{n=0}^{[M]}q_n\mathcal{L}_s (H_n)&=&\displaystyle q_0+\left(1-\frac{1}{2k}\right)q_1 H_1+\sum_{n=2}^{[M]}q_n\left [\left(1-\frac{n}{2k}\right)H_n+I^{-2} n(n-1)\left(1-\frac{1}{k}\right)H_{n-2}\right ],\\ &=& \displaystyle\sum_{n=0}^{[M]} \left(1-\frac{n}{2k}\right)q_n H_n+ I^{-2} \left(1-\frac{1}{k}\right) \displaystyle\sum_{n=0}^{[M]-2}(n+1)(n+2)q_{n+2} H_n. \end{array} \] We deduce from Lemma \ref{Lemma-P-n-mathcal-L-s} that \[P_-(\mathcal{L}_s q)-\mathcal{L}_s q_-=- I^{-2} \left(1-\frac{1}{k}\right)\left [ [M]([M]+1)q_{[M]+1} H_{[M]-1} +([M]+1)([M]+2)q_{[M]+2} H_{[M]} \right ].\] \end{proof} \medskip \textbf{Third term $\mathcal{N}$.}\\ \begin{lemma}\label{lemma-estimation-P--N} Let $b_0 >0$, then there exists $\delta_{10}(b_0)$ such that for all $ \delta \in (0, \delta_{10})$, there exists $ s_{10}(\delta, b_0) \ge 1$ such that for all $s_0 \ge s_{10}$ the following holds: Assume $(q,b)(s) \in V_{\delta,b_0}(s), \forall s \in [s_0, \bar s]$ for some $\bar s$ arbitrary, then it holds that \[|P_-(\mathcal{N})|\leq C\left (I(s)^{-2\delta}+I(s)^{-p\delta} \right )\left (I(s)^{-M}+|y|^M\right ). \] \end{lemma} \begin{proof} We argue as in \cite{BKnon94}. We recall from \eqref{nonlinear-term} that \[\mathcal{N}(q)=|1+e_bq|^{p-1}(1+e_bq)-1-p e_b q. \] We proceed in a similar fashion as in the projection $P_n(\mathcal{N})$: we will give estimations in the outer region $|y|\geq 1$ and the inner region $|y|\leq 1$. Let us first define $\chi_0 \in C_0^{\infty}(\R^+,[0,1])$, with $\mathrm{supp}(\chi_0) \subset [0,2]$ and $\chi_0=1$ on $[0,1]$, and we define \beqtn\label{def-chi} \chi (y)=\chi_0\left (|y|\right ).
\eeqtn Using the fact that \[\mathcal{N}= \chi \mathcal{N}+\chi^c\mathcal{N}, \] we claim the following: \begin{cl}\label{estimation-P-N} \begin{eqnarray} (i)\;\; \left |P_-(\chi^c \mathcal{N} )\right |&\leq &C I(s)^{-\delta p}\left (I(s)^{-M}+|y|^M\right ),\\ (ii)\;\; \left |P_-(\chi \mathcal{N} )\right |&\leq &C I(s)^{-2\delta}\left (I(s)^{-M}+|y|^M\right ). \end{eqnarray} \end{cl} \begin{proof} First, we will estimate $P_-(\chi^c \mathcal{N})$, then $P_-(\chi \mathcal{N})$, and conclude the proof of the lemma.\\ (i) Let us first write \[ \begin{array}{lll} P_-(\chi^c \mathcal{N})&=&\chi^c \mathcal{N} -\sum_{n\leq [M]+1} P_n(\chi^c \mathcal{N})H_n\\ &=&\dsp \chi^c \mathcal{N} -\sum_{n\leq [M]+1}\frac{\int_{|y|\geq 1 } \mathcal{N} H_n \rho_s dy}{\|H_n\|^2_{L^{2}_{\rho_s}}} H_n. \end{array} \] Using the definition of the shrinking set, we can write \[|\chi^c(\mathcal{N})|\leq |\chi^c (CI^{-\delta}e_b|y|^M)^p|=|\chi^c \left (CI^{-\delta}(e_by^{2k})|y|^{\frac{2k}{p-1}}\right )^p|. \] By the fact that $|e_by^{2k}|\leq C$ and $M=\frac{2kp}{p-1}$, we have \[|\chi^c(\mathcal{N})|\leq CI^{-\delta p}|y|^M. \] Then, using \eqref{projection-xc-N}, we deduce (i) of Claim \ref{estimation-P-N}: \beqtn |P_-(\chi^c \mathcal{N})|\leq CI(s)^{-\delta p}\left ( I(s)^{-M}+|y|^M\right ). \label{bound-P-Ncchi} \eeqtn (ii) In the inner region $|y|\leq 1$, we proceed as in the proof of Lemma \ref{projecion-H-n-N}. For $|y|\leq 1$, using the Taylor expansion as in \eqref{defi-R-K}, we write \[\chi \mathcal{N}=\chi\left (A+S+R_K\right ),\] where $A$ and $S$ are given by \eqref{defi-A-S} \[ A =\chi \sum_{j=2}^{K}d_{j} (e_bq_+)^j \text{ and } S =\chi \sum_{j=2}^K \sum_{\ell=0}^{j-1} \tilde d_{j,\ell} e_{b}^j(q_+)^\ell (q_-)^{j-\ell}, \text{ for some } d_j, \tilde d_{j,\ell} \in \mathbb{R}. \] We get for $K$ large, \beqtn\label{bound-RK2} |\chi R_K|\leq I^{-\delta}(s) I^{-M}(s).
\eeqtn We proceed in a similar fashion as in the proof of Lemma \ref{projecion-H-n-N}, and we write $A$ as \beqtn \begin{array}{lll}\label{A1-A2} A&=&\chi \sum_{j=2}^{K}d_j\left ( e_b q_+ \right )^j \\ & =&\dsp\chi \sum_{\textbf{n},p} c_{\textbf{n},p}b(s)^{\frac{p}{2k}}y^p \displaystyle \Pi_{i=1}^{[M]}q_{i}^{n_i}H_i^{n_i}+I(s)^{-2\delta}b(s)^{\frac{2k(L+1)}{2k}} y^{2k(L+1)}\chi Q,\\ & = &A_1+A_2, \end{array} \eeqtn where $\chi Q$ is bounded. Then, we divide the sum $A_1$ as follows \beqtn \begin{array}{lll} A_1&=&\chi\dsp \sum_{\textbf{n},p} c_{\textbf{n},p}b(s)^{\frac{p}{2k}}y^p \displaystyle \Pi_{i=1}^{[M]}q_{i}^{n_i}H_i^{n_i},\\ & =&\dsp \chi\dsp \sum_{\textbf{n},p, p+\sum n_i\leq M} c_{\textbf{n},p}b(s)^{\frac{p}{2k}}y^p \displaystyle \Pi_{i=1}^{[M]}q_{i}^{n_i}H_i^{n_i} +\dsp \chi\sum_{\textbf{n},p,p+\sum n_i> M} c_{\textbf{n},p}b(s)^{\frac{p}{2k}}y^p \displaystyle \Pi_{i=1}^{[M]}q_{i}^{n_i}H_i^{n_i}\\ &=& A_{1,1}+A_{1,2}. \end{array} \eeqtn In the first sum, $A_{1,1}$, we replace $\chi=1-\chi^c$ by $-\chi^c$, since $1$ will not contribute to $A_-$. Using the fact that $|y|\geq 1$ on the support of $\chi^c$ and by \eqref{bound-b}, we get \[\dsp \chi^c \left |y^p \Pi_{i=1}^{[M]}H_i^{n_i}\right |\leq C |y|^M.\] Since $H_m$ is bounded as follows \[|H_m(y,s)|\leq C(I(s)^{-m}+|y|^m),\] we obtain by \eqref{bound-b} \[\dsp \chi \left |y^p \Pi_{i=1}^{[M]}H_i^{n_i}\right |\leq C(I(s)^{-M}+|y|^M). \] We conclude by the definition of the shrinking set given by \eqref{definition-shrinking-set} that \beqtn |A_{1,2}|\leq CI(s)^{-2\delta}\chi(y) \left (I(s)^{-M}+|y|^{M} \right ). \eeqtn By the properties of the shrinking set and the bound for $q_-$, we obtain the bound for the term $A_2$, defined by \eqref{A1-A2}; more precisely, we have \[ |A_2|\leq CI(s)^{-2\delta}\chi(y) \left (I(s)^{-M}+|y|^{M} \right ).\] Then, we conclude that \beqtn\label{bound-A-} |P_-(A)|=|A_-|\leq C I(s)^{-2\delta}(I(s)^{-M}+|y|^M), \eeqtn which yields the conclusion of item (ii).
\end{proof} Now, we return to the proof of the Lemma. We deduce by \eqref{bound-P-Ncchi}, \eqref{bound-RK2} and \eqref{bound-A-} the following estimation for $P_-(\mathcal{N})$ \beqtn |P_-(\mathcal{N})|=|\mathcal{N}_-|\leq C (I(s)^{-2\delta}+I(s)^{-p\delta})( I(s)^{-M}+|y|^M), \eeqtn which ends the proof of Lemma \ref{lemma-estimation-P--N}. \end{proof} \textbf{Fourth term $b'(s)\mathcal{M} (q)$.}\\ \begin{lemma}\label{lemma-estimation-P--M} Let $b_0 >0$, then there exists $\delta_{11}(b_0)$ such that for all $ \delta \in (0, \delta_{11})$, there exists $ s_{11}(\delta, b_0) \ge 1$ such that for all $s_0 \ge s_{11}$ the following holds: Assume $(q,b)(s) \in V_{\delta,b_0}(s), \forall s \in [s_0, \bar s]$ for some $\bar s$ arbitrary, then it holds that \[|P_-(\mathcal{M})|\leq C I(s)^{-\delta}\left (I(s)^{-M}+|y|^M\right ). \] \end{lemma} We recall that \[\mathcal{M}=\frac{p}{p-1}y^{2k} (1+e_bq),\] then, we can write \[ P_-\left (\mathcal{M}(q)\right )=\frac{p}{p-1}P_-(y^{2k}e_bq).\] Let us write \[P_-(y^{2k}e_bq)= P_-(\chi y^{2k}e_bq)+P_-(\chi^c y^{2k}e_bq), \] and we claim the following: \begin{cl}\label{estimation-P-M} \begin{eqnarray} (i)\;\; \left |P_-(\chi^c y^{2k} e_b q )\right |&\leq &C I(s)^{-\delta}\left (I(s)^{-M}+|y|^M\right ),\\ (ii)\;\; \left |P_-(\chi y^{2k} e_b q)\right |&\leq & C I(s)^{-\delta}\left (I(s)^{-M}+|y|^M\right ).
\end{eqnarray} \end{cl} \begin{proof} (i) Let us first write \[ \begin{array}{lll} P_-(\chi^c y^{2k} e_b q)&=&\chi^c y^{2k} e_b q -\sum_{n\leq [M]+1} P_n(\chi^c y^{2k} e_b q)H_n\\ &=&\dsp \chi^c y^{2k} e_b q -\sum_{n\leq [M]+1}\frac{\int_{|y|\geq b(s)^{-\frac{1}{2k}} } y^{2k} e_b q H_n \rho_s dy}{\|H_n\|^2_{L^{2}_{\rho_s}}}H_n. \end{array} \] When $ |y|\geq 1$, using \eqref{definition-shrinking-set}, we can write \[|y^{2k}e_b q|\leq C |q|\leq \frac{CI(s)^{-\delta}}{b(s)}|y|^M \leq CI(s)^{-\delta}|y|^M ,\] which yields (i). (ii) As for (i), we write \[P_-(\chi y^{2k}e_b q)=\chi y^{2k} e_b q-\sum_{n\leq [M]+1} P_n(\chi y^{2k} e_b q)H_n.\] By Lemma \ref{lemma-P-n-M}, we have $\left |\sum_{n\leq [M]+1} P_n(\chi y^{2k} e_b q) \|H_n\|^{-2}_{L^{2}_{\rho_s}} \right |\leq C I(s)^{-\delta}$. \\ We conclude using the definition of the shrinking set, and we obtain the following estimation \[|\chi y^{2k} e_b q|\leq C I(s)^{-\delta}. \] \end{proof} \textbf{Fifth term $\mathcal{D}_s (\nabla q)$} \begin{lemma} Let $b_0 >0$, then there exists $\delta_{12}(b_0)$ such that for all $ \delta \in (0, \delta_{12})$, there exists $ s_{12}(\delta, b_0) \ge 1$ such that for all $s_0 \ge s_{12}$ the following holds: Assume $(q,b)(s) \in V_{\delta,b_0}(s), \forall s \in [s_0, \bar s]$ for some $\bar s$ arbitrary, then it holds that \[|P_-(\mathcal{D}_s)|\leq C I^{-2\delta}\left (I(s)^{-M}+|y|^M\right ).\] \end{lemma} \begin{proof} Let us first write \[\begin{array}{lll} P_-(\mathcal{D}_s) &=&\mathcal{D}_s-\dsp \sum_{n=0}^{[M]} P_n(\mathcal{D}_s)H_n. \end{array}\] Since we are using the properties given by the shrinking set in Definition \ref{definition-shrinking-set}, it will be more convenient to estimate \beqtn \begin{array}{lll} d &=&\dsp \int_{\sigma}^{s}d\tau \int dz\, \mathcal{K}_{s,\tau }(y,z)\mathcal{D}_s(\nabla q)(z,\tau).
\\ \end{array} \eeqtn Using integration by parts, we obtain \beqtn \begin{array}{lll} d&=&\dsp 4pkb(p-1)^{-1}\int_{\sigma}^{s}d\tau I({\tau})^{-2} \int dz \partial _z\left (\mathcal{K}_{s,\tau }(y,z) e_b(z) z^{2k-1}\right )q(z,\tau),\\ &=&\dsp 4pkb(p-1)^{-1}\int_{\sigma}^{s}d\tau I({\tau})^{-2} \int dz \mathcal{K}_{s,\tau }(y,z)\partial _z\left ( e_b(z) z^{2k-1}\right )q(z,\tau)\\ &&\dsp +4pkb(p-1)^{-1}\int_{\sigma}^{s}d\tau I({\tau})^{-2} \int dz \partial _z(\mathcal{K}_{s,\tau }(y,z)) e_b(z) z^{2k-1}q(z,\tau),\\ &=&d_1+d_2. \end{array} \label{decomposition-integral-d} \eeqtn For the estimation of the first term $d_1$, we argue in a similar fashion as in the projection $P_n(\mathcal{M})$, see Lemma \ref{lemma-P-n-M}. For the second term, we argue as in Bricmont and Kupiainen \cite{BKnon94}. Indeed, we need to bound $\partial _z \mathcal{K}_{s,\tau}$. From equation \eqref{Kernel-Formula}, we obtain \beqtn |\partial _z\mathcal{K}_{s,\tau }(y,z)|\leq C L\mathcal{F}_{\frac 12 L^2}\left (e^{\frac{s-\tau}{2k}}y-z\right )\leq \frac{CI(s)}{\sqrt{s-\tau}}\mathcal{F}_{\frac 12 L^2}\left (e^{\frac{s-\tau}{2k}}y-z\right ), \eeqtn where $L=\frac{I(s)^{2}}{(1-e^{-(s-\sigma)})}$, $\mathcal{F}$ is defined by \eqref{Kernel-Formula-F}, and $I(s)=\dsp e^{\frac s2(1-\frac 1k)}$. Then, by Definition \ref{definition-shrinking-set}, we obtain \[|d_2|\leq CI(s)^{-1} I(s)^{-\delta},\] and we conclude that there exists $\delta_{12}$ such that for all $0<\delta\leq \delta_{12}$, \beqtn\label{P--mathcalDs-part1} |d|\leq CI^{-2\delta }\left (I(s)^{-M}+|y|^M\right ). \eeqtn \medskip On the other hand, by Lemma \ref{lemma-P-n-mathcal-D}, we obtain \beqtn\label{P--mathcalDs-part2} |\sum_{n=0}^{[M]} P_n(\mathcal{D}_s)H_n|\leq C I^{-2\delta}\left (I(s)^{-M}+|y|^M\right ).
\eeqtn We conclude from \eqref{P--mathcalDs-part1} and \eqref{P--mathcalDs-part2} that \[|P_-(\mathcal{D}_s)|\leq C I^{-2\delta}\left (I(s)^{-M}+|y|^M\right ).\] \end{proof} \medskip \textbf{Sixth term $\mathcal{R}_s(q)$} \begin{lemma}\label{P--mathcal-Rs} Let $b_0 >0$, then there exists $\delta_{13}(b_0)$ such that for all $ \delta \in (0, \delta_{13})$, there exists $ s_{13}(\delta, b_0) \ge 1$ such that for all $s_0 \ge s_{13}$ the following holds: Assume $(q,b)(s) \in V_{\delta,b_0}(s), \forall s \in [s_0, \bar s]$ for some $\bar s$ arbitrary, then it holds that \beqna |P_-(\mathcal{R}_s (q))| \leq C I(s)^{-2\delta}\left (I(s)^{-M}+|y|^M\right ). \eeqna \end{lemma} \begin{proof} By \eqref{equation-Rs}, \[ \mathcal{R}_s (q)= I(s)^{-2}y^{2k-2}\left (\alpha_1+\alpha_2 y^{2k}e_b+(\alpha_3+\alpha_4 y^{2k}e_b)q \right), \] and we proceed as for the estimation of $P_-(\mathcal{M})$. \end{proof} \textbf{Part 2: Proof of the identity (iii) in Proposition \ref{proposition-ode} (estimate on $q_-$)} If we apply the projector $P_-$ to equation \eqref{equation-q}, we obtain \begin{eqnarray*} \partial_s q_-=\mathcal{L}_s q_-+ P_-\left (\mathcal{N} (q) +\mathcal{D}_s(\nabla q)+\mathcal{R}_s (q)+ b'(s)\mathcal{M}(q)\right ). \end{eqnarray*} Using the kernel of the semigroup generated by $\mathcal{L}_s$, we get, for all $s\in [\tau, s_1]$, the integral equation \[ \begin{array}{lll} q_-(s)&=&\mathcal{K}_{s\tau} q_-(\tau)\\ &&+\displaystyle \int_{\tau}^{s} \mathcal{K}_{s s' } \left (P_-\left [\mathcal{N} (q)+\mathcal{D}_s(\nabla q)+\mathcal{R}_s (q)+b'(s')\mathcal{M}(q)\right ]\right )ds'.
\end{array} \] Using Lemma \ref{lemma-estimation-K-phi}, we get \[ \begin{array}{lll} |q_-(s)|_{s}&\leq& e^{-\frac{1}{p-1}(s-\tau)}|q_-(\tau)|_{\tau}\\ && +\dsp \int_{\tau}^{s} e^{-\frac{1}{p-1}(s-s')} \left |P_-\left [\mathcal{N} (q)+\mathcal{D}_s(\nabla q)+\mathcal{R}_s (q)+b'(s')\mathcal{M}(q)\right ]\right |_{s} ds'. \end{array} \] By Lemma \ref{lemma-estimation-P--N}, Lemma \ref{lemma-estimation-P--M}, Lemma \ref{P--mathcal-Rs}, equations \eqref{P--mathcalDs-part1}, \eqref{P--mathcalDs-part2}, and the smallness of the modulation parameter $b(s)$ given by (ii) of Proposition \ref{proposition-ode}, we obtain \[ \begin{array}{lll} |q_-(s)|_{s} &\leq& e^{-\frac{1}{p-1}(s-\tau)}|q_-(\tau)|_{\tau}+\dsp \int_{\tau}^{s} e^{-\frac{1}{p-1}(s-s')} I(s')^{-\delta\frac{\min(p,2)+1}{2}} ds'. \end{array} \] Then, for $\delta \le \delta_3$, it holds that $$ \left| q_-(s)\right|_s \le e^{-\frac{s-\tau}{p-1}} \left| q_-(\tau)\right|_\tau + C \left( I^{-\frac{3}{2} \delta}(s) + e^{-\frac{s-\tau}{p-1}} I^{-\frac{3}{2}\delta}(\tau)\right), $$ which concludes the proof of the last identity of Proposition \ref{proposition-ode}.
\section{Introduction} In the big data era, it is necessary to spread machine learning tasks across multiple servers and transform centralized systems into distributed ones~\cite{verbraeken2020survey}. These distributed systems confront new challenges, and one of the unsolved challenges is privacy~\cite{PMP4MLDS18}. Privacy computing is a kind of technique that performs data computation without specified information leakage. To outsource private computations, a cryptographic tool called fully homomorphic encryption (FHE) is exploited. FHE is a special form of encryption that permits users to perform computations on encrypted data without first decrypting it. FHE can be split into two categories: \emph{single-key} fully homomorphic encryption and \emph{multi-key} fully homomorphic encryption. \emph{Single-key} FHE only allows a server to perform addition and multiplication on data encrypted under the same key. In contrast, \emph{multi-key} FHE (MKFHE), proposed in~\cite{Lopez12}, enables users to encrypt their own data under their own keys, while during the decryption of MKFHE, all secret keys of all participants are used. This prevents collusion between a user and a server to steal the data of other users. For multi-key fully homomorphic encryption over torus (MKTFHE), researchers care about the decryption algorithm and the evaluation algorithm. As for the study on the evaluation algorithm, Chen et al.~\cite{MKTFHE} developed a library implementing MKTFHE in which the evaluation algorithm supports a NAND gate. As an improvement, Jiang et al.~\cite{MKTFHE-op} extended the evaluation algorithm with arithmetic operators, including an adder, subtractor, multiplier, and divider, so that MKTFHE can support linear multi-key homomorphic evaluation. However, MKTFHE cannot evaluate non-linear operations, such as the Sigmoid function, which means that more complex machine learning schemes like logistic regression and neural networks cannot be implemented.
As for the study on the decryption algorithm, Chen et al. provided a naive decryption algorithm in the original MKTFHE, which requires all the secret keys of the users to be input together. However, in a real scenario, no one should have access to the secret keys of others, even during decryption. Then, Lee et al.~\cite{LeeP19} proposed a distributed decryption algorithm that splits the decryption algorithm in~\cite{MKTFHE} into two sub-algorithms, partial decryption and final decryption, in which each user uses only their own secret key in partial decryption. However, with the ciphertext and a partial decryption of a user $u_{i}$, the existing MKTFHE scheme leaks information about the secret key $s_{i}$ of the user $u_{i}$. In detail, suppose a ciphertext $(a_{1},\ldots,a_{k},b)$ with $b=\frac{1}{4}m-\sum^{k}_{j=1}\langle a_{j},s_{j}\rangle+e$, where $k$ is the number of users in total, $m$ is a bit message, and $e$ is an error to randomize $b$. Then, a partial decryption $p_{i}=b+\langle a_{i},s_{i}\rangle$ together with $b$ from the ciphertext gives out $\langle a_{i},s_{i}\rangle$, which leaks at least a bit of information about $s_{i}$. In addition, if an external adversary obtains all partial decryption results, the computation result under multi-key encryption can finally be obtained by computing $\sum_{i=1}^{k}p_i-\left(k-1\right)b$. These problems can cause security concerns. On the other hand, to better apply multi-key homomorphic encryption to more practical machine learning schemes, we first propose a distributed decryption protocol for MKTFHE, then design an MKTFHE-friendly activation function and utilize it to implement privacy-preserving logistic regression and neural networks. In this paper, we make the following contributions: \begin{enumerate}[1.] \item We construct a secure distributed decryption protocol for MKTFHE by introducing a secret sharing scheme to solve the information leakage problem.
We define our security goal for MKTFHE against a possible static adversary, and then prove the correctness of our protocol. Our idea is that each data provider uses their secret key to run partial decryption and secret sharing is utilized to finish the final decryption, so that each user does not have access to the secret keys of the other users in the system and the external adversary cannot get the partial decryption results; thus we protect the decryption and the users' keys from both internal and external adversaries. \item We utilize MKTFHE with our proposed secure distributed decryption protocol to train and evaluate logistic regression and neural network models. We design a $homogenizer$ to modify the bit length of the operands in order to use operators of fewer bits to reduce the operation time. In addition, to accelerate the computation of the activation function, we design a $compare\ quads$ to implement a kind of MKTFHE-friendly activation function. Experimental results show that the efficiency of our function is 10 times higher than directly using 7th-order Taylor polynomials, and the accuracy of the trained model is similar to that of a scheme using a high-order polynomial as the activation function. \end{enumerate} \section{Related Work} There are numerous works on privacy-preserving machine learning prediction~\cite{CryptoNets16, BourseMMP18, BoemerLCW19, TianNYY21, CHET19} and training~\cite{ChenGHHJLL18, KimS0XJ18, CheonKKS18}. These solutions are based on single-key FHE, which cannot support data participants using different secret keys to encrypt their own data. Moreover, prediction can support more complex models such as logistic regression and even neural networks on the order of seconds, but training only focuses on simpler models such as logistic regression, on the order of hours or more.
Besides, we also note that there are other approaches based on secure multi-party computation (MPC), e.g.~\cite{mohassel2017secureml, makri2017pics}, and compared with the above works based on FHE, the performance of the solutions based on MPC is very impressive. But they need interaction between the data participants and the computation parties, which may lead to many problems such as network latency or high bandwidth usage. Considering the above downsides, we focus on FHE, especially multi-key FHE. In 2012, the concept of multi-key fully homomorphic encryption was first proposed by Lopez et al.~\cite{Lopez12}, intended to apply to on-the-fly multiparty computation based on NTRU. In 2015, the first LWE-based MKFHE was constructed by Clear et al.~\cite{Clear15}, and in 2016, it was improved by Mukherjee and Wichs~\cite{Mukherjee16}. These schemes are single-hop MKFHE schemes, which means all the participants must be known in advance. In 2016, multi-hop MKFHE schemes were proposed by Peikert et al.~\cite{Peikert16} and Brakerski et al.~\cite{Bra16}, but their schemes are impractical and without implementation. In 2019, the first implementation of an MKFHE scheme was achieved by Chen et al.~\cite{MKTFHE}, named MKTFHE, which is a variant of TFHE~\cite{ChillottiGGI16, ChillottiGGI17, ChillottiGGI20}. Their scheme only provided a bootstrapped NAND gate to evaluate. Then, Lee and Park~\cite{LeeP19} first formalized distributed decryption for MKFHE and improved the decryption part of MKTFHE, but a passive adversary can still recover the decryption result through the partial results. In 2021, Jiang et al.~\cite{MKTFHE-op} designed other bootstrapped gates, utilized them to build arithmetic operators including an adder, subtractor, multiplier, and divider, and then implemented privacy-preserving linear regression with the GD method. However, using arithmetic operators alone cannot directly compute a non-linear activation function like the Sigmoid function.
So, there is still a gap between MKTFHE and the implementation of more complex privacy-preserving machine learning such as logistic regression and neural networks. \section{Preliminaries} \textbf{Notation}: In the rest of this paper, $\mathbb{R} $ denotes the real numbers, $\mathbb{Z}$ denotes the integers, and $\mathbb{T}$ indicates $ \mathbb{R}/\mathbb{Z}$, the torus of real numbers modulo $1$. We use TLWE to denote the (scalar) binary Learning With Error problem over the torus, and TRLWE for the ring mode. We define $params$ as the parameter set in TFHE, and $ mkparams $ as that in MKTFHE and our scheme. Besides, $ k$ is used to represent the number of participants in MKTFHE and $l $ is used to represent the bit length of a message or ciphertext. Then we use a bold letter, e.g. $\bm{a}$, to denote a vector and use $\langle \bm{a},\bm{b}\rangle $ to represent the inner product between vector $\bm{a} $ and vector $\bm{b}$. \subsection{Multi-key Fully Homomorphic Encryption over Torus}\label{AA} The MKTFHE scheme is the multi-key version of the TFHE scheme. TFHE, constructed by Chillotti et al.~\cite{ChillottiGGI16, ChillottiGGI17, ChillottiGGI20}, is a fast fully homomorphic encryption (FHE) scheme over the torus, which generalizes and improves the FHE based on GSW~\cite{GentrySW13} and its ring variants. In the TFHE scheme, bootstrapped binary gates are designed to represent the functions developers need.
The main idea of TFHE is to bootstrap after every binary gate evaluation to refresh the ciphertext in order to make it usable for the following operations, with the result that arbitrarily deep circuits can be homomorphically evaluated. The entire homomorphic evaluation of a circuit takes time proportional to the number of binary gates used. The message space of the TFHE bootstrapping gates is $\mathbb{T}$. A TLWE ciphertext $\left(\bm{a},b\right)\in\mathbb{T}^{n+1}$ encrypts a message $\mu\in\mathbb{T}$ with noise parameter $\alpha$. In the TFHE scheme, the homomorphic evaluation of a binary gate is achieved with operations between TLWE samples, followed by a gate bootstrapping (except for the NOT gate). By using this approach, all the basic gates can be evaluated with a single gate bootstrapping ($\mathsf{GB}$) process: \begin{itemize} \item $\mathsf{TFHE.NAND}\left({ct}_1,{ct}_2\right)=\mathsf{GB}\left(\left(0,\frac{5}{8}\right)-{ct}_1-{ct}_2\right)$ \item $\mathsf{TFHE.AND}\left({ct}_1,{ct}_2\right)=\mathsf{GB}\left(\left(0,-\frac{1}{8}\right)+{ct}_1+{ct}_2\right)$ \item $\mathsf{TFHE.OR}\left({ct}_1,{ct}_2\right)=\mathsf{GB}\left(\left(0,\frac{1}{8}\right)+{ct}_1+{ct}_2\right)$ \item $\mathsf{TFHE.XOR}\left({ct}_1,{ct}_2\right)=\mathsf{GB}\left(2\left({ct}_1-{ct}_2\right)\right)$ \item $\mathsf{TFHE.NOT}\left(ct\right)=\left(0,\frac{1}{4}\right)-ct$ \end{itemize} The TFHE scheme has the advantages of fast bootstrapping, efficient homomorphic logic circuit evaluation, and so on. Its multi-key version, named MKTFHE, was constructed by Chen et al.~\cite{MKTFHE} in 2019. MKTFHE is the first attempt in the literature to implement an MKFHE scheme in code. In the MKTFHE scheme, the ciphertext length increases linearly with the number of users, and a homomorphic NAND gate with bootstrapping is given.
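The gate equations listed above can be sanity-checked at the level of plaintext phases. The sketch below is an illustrative simplification, not the real scheme: noiseless phases stand in for ciphertexts, and $\mathsf{GB}$ is modelled as the idealized map that outputs $1$ exactly when the phase lies in $(1/4,3/4)$ modulo $1$, matching the scaling factor $1/4$ used here.

```python
# Plaintext-phase check of the TFHE gate combinations (toy model only:
# no encryption, no noise; GB is replaced by an ideal phase threshold).

def gb(phase):
    # Idealized gate bootstrapping: 1 iff the phase is in (1/4, 3/4) mod 1.
    return 1 if 0.25 < phase % 1.0 < 0.75 else 0

def enc(bit):
    # Phase of a fresh "encryption" of a bit, with scaling factor 1/4.
    return 0.25 * bit

def nand(c1, c2): return gb(5 / 8 - c1 - c2)   # (0, 5/8) - ct1 - ct2
def and_(c1, c2): return gb(-1 / 8 + c1 + c2)  # (0, -1/8) + ct1 + ct2
def or_(c1, c2):  return gb(1 / 8 + c1 + c2)   # (0, 1/8) + ct1 + ct2
def xor_(c1, c2): return gb(2 * (c1 - c2))     # GB(2 (ct1 - ct2))
def not_(c):      return 0.25 - c              # (0, 1/4) - ct, no GB needed

# Exhaustive truth-table check over all bit pairs.
for b1 in (0, 1):
    for b2 in (0, 1):
        c1, c2 = enc(b1), enc(b2)
        assert nand(c1, c2) == 1 - (b1 & b2)
        assert and_(c1, c2) == (b1 & b2)
        assert or_(c1, c2) == (b1 | b2)
        assert xor_(c1, c2) == (b1 ^ b2)
        assert not_(c1) == enc(1 - b1)
```

In the real scheme each linear combination is applied to noisy TLWE samples, and $\mathsf{GB}$ simultaneously rounds the phase and resets the noise, which is what allows the next gate to be evaluated on the output.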
The MKTFHE scheme is comprised of the following algorithms: \begin{itemize} \item $mkparams\gets \mathsf{MKTFHE.SETUP}\left(1^\lambda\right)$: Take a security parameter $\lambda$ as input, and output the public parameter set $mkparams$. \item $\{sk_i,pk_i\}\gets \mathsf{MKTFHE.KEYGEN} \left(mkparams\right)$: Take $mkparams$ as input, and output the secret key $sk_i$ and public key $pk_i$ of a single participant $i$. \item $ct\gets \mathsf{MKTFHE.ENC}\left(\mu\right)$: Encrypt an input bit $\mu\in\{0,1\}$ and output a TLWE ciphertext with scaling factor $\frac{1}{4}$. The output ciphertext $ct=\left(\bm{a},b\right)\in\mathbb{T}^{n+1}$ satisfies $ b+\langle \bm{a},\bm{s}\rangle \approx\frac{1}{4}\mu$. \item $\mu\gets \mathsf{MKTFHE.DEC}\left(ct,\{sk_i\}_{i\in[k]}\right)$: Input a TLWE ciphertext $ct=(\bm{a_1},\cdots,$ $\bm{a_k},b)\in\mathbb{T}^{kn+1}$ and a set of secret keys $\{sk_i\}_{i\in[k]}$, and output the message $\mu\in\{0,1\}$ which satisfies $b+\sum_{i=1}^{k}{\langle \bm {a_i},sk_i\rangle }\approx\frac{1}{4}\mu\pmod{1}$. \item $ct\gets \mathsf{MKTFHE.NAND}\left(ct_1,ct_2\right)$: Input two TLWE ciphertexts $ct_1=\mathsf{MKTFHE.}$ $\mathsf{ENC}\left(\mu_1\right)\in\mathbb{T}^{n+1}$, $ct_2=\mathsf{MKTFHE.ENC}\left(\mu_2\right)\in\mathbb{T}^{n+1}$, which may be produced by different participants, and output the multi-key ciphertext result $ct=\mathsf{MKTFHE.ENC}\left(\lnot\left(\mu_1\land\mu_2\right)\right)\in\mathbb{T}^{kn+1}$: \begin{itemize} \item[-] Extend $ct_1$ and $ct_2$ to $ct_1^\prime$ and $ct_2^\prime$, which are encrypted under the multi-key $\{sk_i\}_{i\in[k]}$, by putting zeros in the empty extended slots. \item[-] Evaluate $\mathsf{GB}\left(\left(0,\cdots,0,\frac{5}{8}\right)-ct_1^\prime-ct_2^\prime\right)$ and return the result. \end{itemize} \end{itemize} We do not discuss the $\mathsf{GB}$ procedure in this paper and refer the reader to the original paper.
We call the evaluated ciphertext a multi-key ciphertext; its dimension grows with the number of participants in the MKTFHE scheme. \subsection{Distributed Decryption} The decryption algorithm of the existing MKTFHE assumes a single decryptor who holds the secret keys $\{sk_i\}_{i\in[k]}$ of all participants. However, in practical use, for security reasons, the decryptor should not hold any participant's secret key $sk_i$. Therefore, distributed decryption, in which all participants jointly decrypt a multi-key ciphertext, is more practical. The most common distributed decryption for MKFHE~\cite{mukherjee2016two} is defined as follows: \begin{itemize} \item $p_i\gets \mathsf{PartDec}\left(ct,sk_i\right)$: Input a multi-key ciphertext $ct$ under a set of secret keys $\{sk_i\}_{i\in[k]}$ of all participants, and the $i$-th secret key $sk_i$, and output a partial decryption result $p_i$; \item $m\gets \mathsf{FinDec}\left(p_1,\cdots,p_k\right)$: Input the set of partial decryption results $\{p_i\}_{i\in[k]}$ of all participants and output the plaintext of the multi-key ciphertext. \end{itemize} \subsection{Homomorphic Gates and Operators Based on MKTFHE} Based on the TFHE scheme and its multi-key variant, Jiang et al.~\cite{MKTFHE-op} designed other binary gates with the same efficiency as the NAND gate in MKTFHE, and used them to implement $l$-bit complement arithmetic operators, so that addition, subtraction, multiplication, and division of both positive and negative numbers can be evaluated in MKTFHE. They also used these integer operators to implement the training of a naive linear regression model by the gradient descent method.
The experiments show that the time consumption of linear operators such as adders and subtracters increases linearly with the bit length, while that of array operators such as multipliers and dividers increases exponentially with the bit length. Besides, the bit lengths of all the above operators, and hence of their operands, must be predefined. The definition of binary gates and operators in MKTFHE is as follows: \begin{itemize} \item $ct\gets \mathsf{MKAND}\left(ct_1,ct_2\right)$: Input two TLWE ciphertexts $ct_1=\mathsf{MKTFHE.ENC}\left(\mu_1\right)$, $ct_2=\mathsf{MKTFHE.ENC}\left(\mu_2\right)$, extend $ct_1$ and $ct_2$ to multi-key ciphertexts $ct_1^\prime$ and $ct_2^\prime$, then evaluate $\mathsf{GB}\left(\left(0,\cdots,0,-\frac{1}{8}\right)+ct_1^\prime+ct_2^\prime\right)$ and return the evaluated result $ct=\mathsf{MKTFHE.ENC}\left(\mu_1\land\mu_2\right)$. \item $ct\gets \mathsf{MKOR}\left(ct_1,ct_2\right)$: Input two TLWE ciphertexts $ct_1=\mathsf{MKTFHE.ENC}\left(\mu_1\right)$, $ct_2=\mathsf{MKTFHE.ENC}\left(\mu_2\right)$, extend $ct_1$ and $ct_2$ to multi-key ciphertexts $ct_1^\prime$ and $ct_2^\prime$, then evaluate $\mathsf{GB}\left(\left(0,\cdots,0,\frac{1}{8}\right)+ct_1^\prime+ct_2^\prime\right)$ and return the evaluated result $ct=\mathsf{MKTFHE.ENC}\left(\mu_1\vee\mu_2\right)$. \item $ct\gets \mathsf{MKNOT}\left(ct_1\right)$: Input a TLWE ciphertext $ct_1=\mathsf{MKTFHE.ENC}\left(\mu_1\right)$, extend $ct_1$ to the multi-key ciphertext $ct_1^\prime$, then evaluate $\left(0,\cdots,0,\frac{1}{4}\right)-ct_1^\prime$ and return the evaluated result $ct=\mathsf{MKTFHE.ENC}\left(\lnot\mu_1\right)$. \item $c\left[l\right]\gets \mathsf{MKENC}\left(l,m\right)$: Input a bit length $l$ and an $l$-bit plaintext integer $m$, encode $m$ as the complement $m^\prime\left[l\right]\in{\{0,1\}}^l$, and call $\mathsf{MKTFHE.ENC}\left(m^\prime\left[i\right]\right)$ for each $i\in\left[l\right]$ to construct the $l$-bit TLWE ciphertext $c\left[l\right]$, which is returned.
\item $ct\left[l\right]\gets \mathsf{MKADD}\left(ct_1\left[l\right],ct_2\left[l\right]\right)$: Input two $l$-bit TLWE ciphertexts $ct_1\left[l\right]$ and $ct_2\left[l\right]$, which encrypt the complement encodings of $m_1$ and $m_2$ bit by bit via $\mathsf{MKTFHE.ENC}$, and output an $l$-bit TLWE ciphertext $ct\left[l\right]$ which is a multi-key ciphertext of $m_1+m_2$. \item $ct\left[l\right]\gets \mathsf{MKSUB}\left(ct_1\left[l\right],ct_2\left[l\right]\right)$: Input two $l$-bit TLWE ciphertexts $ct_1\left[l\right]$ and $ct_2\left[l\right]$, which encrypt the complement encodings of $m_1$ and $m_2$ bit by bit via $\mathsf{MKTFHE.ENC}$, and output an $l$-bit TLWE ciphertext $ct\left[l\right]$ which is a multi-key ciphertext of $m_1-m_2$. \item $ct\left[2l\right]\gets \mathsf{MKMUL}\left(ct_1\left[l\right],ct_2\left[l\right]\right)$: Input two $l$-bit TLWE ciphertexts $ct_1\left[l\right]$ and $ct_2\left[l\right]$, which encrypt the complement encodings of $m_1$ and $m_2$ bit by bit via $\mathsf{MKTFHE.ENC}$, and output a $2l$-bit TLWE ciphertext $ct\left[2l\right]$ which is a multi-key ciphertext of $m_1\times m_2$. \item $ct\left[l\right]\gets \mathsf{MKDIV}\left(ct_1\left[2l\right],ct_2\left[l\right]\right)$: Input a $2l$-bit TLWE ciphertext $ct_1\left[2l\right]$ and an $l$-bit TLWE ciphertext $ct_2\left[l\right]$, which encrypt the complement encodings of $m_1$ and $m_2$ bit by bit via $\mathsf{MKTFHE.ENC}$, and output an $l$-bit TLWE ciphertext $ct\left[l\right]$ which is a multi-key ciphertext of $m_1\div m_2$. \end{itemize} Note that the inputs of the gate circuits are single-bit ciphertexts, while the inputs of the operators are multi-bit ciphertexts. Therefore, a multi-bit integer is first encoded as a complement and then encrypted bit by bit.
In addition, the input and output bit lengths of multiplication and division differ, which can lead to data overflow or to operands whose bit lengths do not match the operator. \subsection{Secret Sharing Based on Arithmetic Circuit} The secret sharing protocol on arithmetic circuits is carried out over a finite ring. In secure 2-party computation, an $l$-bit value $x$ is shared into two elements of the ring $\mathbb{Z}_{2^l}$, which are sent to the two computing parties $P_0$ and $P_1$ respectively. Let $\left[x\right]_i^A$ denote the share held by computing party $P_i$; the superscript $A$ indicates a share on the arithmetic circuit. Secret sharing on arithmetic circuits works in $\mathbb{Z}_{2^l}$ and satisfies $\left[x\right]_0^A+\left[x\right]_1^A\equiv x \pmod{2^l}$ where $\left[x\right]_0^A$, $\left[x\right]_1^A\in\mathbb{Z}_{2^l}$. So, the participants can share and reconstruct the secret $x$ by the following algorithms: \begin{itemize} \item $\{\left[x\right]_0^A,\left[x\right]_1^A\}\gets Share^A\left(x\right)$: Input an $l$-bit secret $x$, randomly choose $r\in\mathbb{Z}_{2^l}$, set $\left[x\right]_0^A=x-r$, $\left[x\right]_1^A=r$, and then output $\left[x\right]_0^A,\left[x\right]_1^A$. \item $x\gets Rec^A\left(\left[x\right]_0^A,\left[x\right]_1^A\right)$: Input $\left[x\right]_0^A$, $\left[x\right]_1^A$, compute $\left[x\right]_0^A+\left[x\right]_1^A$, then output the result. \end{itemize} In this protocol, if one party deviates from the rules and sends a wrong value during the final reconstruction, the honest party cannot reconstruct the secret, while the cheating party can still reconstruct the real secret. Therefore, the protocol is only secure in the semi-honest model, and the participants are required to follow its rules.
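At the plaintext level, $Share^A$ and $Rec^A$ (together with local addition of shares, which the protocol also uses) can be sketched as follows; the bit length $16$ is an arbitrary illustrative choice:

```python
import secrets

L = 16                 # bit length l; all arithmetic is mod 2^l
MOD = 1 << L

def share(x):
    # Share^A: pick r uniformly in Z_{2^l}, output ([x]_0 = x - r, [x]_1 = r).
    r = secrets.randbelow(MOD)
    return (x - r) % MOD, r

def rec(x0, x1):
    # Rec^A: the two shares sum to x modulo 2^l.
    return (x0 + x1) % MOD

def add_local(x_sh, y_sh):
    # Each party adds its own shares locally; no communication is needed.
    return (x_sh[0] + y_sh[0]) % MOD, (x_sh[1] + y_sh[1]) % MOD

x_sh, y_sh = share(1234), share(4321)
assert rec(*x_sh) == 1234
assert rec(*add_local(x_sh, y_sh)) == (1234 + 4321) % MOD
```

Each share alone is uniformly distributed over $\mathbb{Z}_{2^l}$, which is why a single computing party learns nothing about $x$.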
Besides, addition in this protocol is free: both computing parties can directly perform the calculation locally, as follows: \begin{itemize} \item $\{\left[z\right]_0^A,\left[z\right]_1^A\}\gets Add^A\left(\left[x\right]_0^A,\left[x\right]_1^A,\left[y\right]_0^A,\left[y\right]_1^A\right)$: Input the secret shares of $x$ and $y$, compute ${\left[z\right]_0^A=\left[x\right]}_0^A+\left[y\right]_0^A$, $\left[z\right]_1^A=\left[x\right]_1^A+\left[y\right]_1^A$, and output $\left[z\right]_0^A$, $\left[z\right]_1^A$. \end{itemize} The addition of this protocol satisfies the equation below: \begin{equation} \begin{split} \left[z\right]^A&=\left[x+y\right]^A \\ &=\left(\left[x\right]_0^A+\left[y\right]_0^A\right)+\left(\left[x\right]_1^A+\left[y\right]_1^A\right) \\ &=\left[z\right]_0^A+\left[z\right]_1^A. \end{split} \nonumber \end{equation} \subsection{Machine Learning}\label{MLP} \subsubsection{Gradient Descent} Gradient descent (GD) is a classic iterative method for approximating a minimum of a function $J\left(\Theta\right)$, widely used in the training of machine learning models. Given a training data set, a good model is obtained after many iterations. The iterative equation is as follows: \begin{equation} \Theta^{\left(n+1\right)}=\Theta^{\left(n\right)}-\alpha\nabla\Theta, \nonumber \end{equation} \begin{equation} \nabla\Theta=\frac{\partial J\left(\Theta\right)}{\partial\Theta}, \nonumber \end{equation} where $\alpha$ is the learning rate, which controls the step size of each iteration, $\nabla\Theta$ is the gradient of the coefficients $\Theta$, and $\Theta$ can be initialized as a vector of random values. In practice, several GD variants have been derived. For example, the most classic Batch Gradient Descent (BGD) uses all the training data to update the coefficients in each iteration, making full use of all the data.
So its updates move more accurately towards the extremum. Assuming that the sample size of the training data set is $n$, the gradient expression of BGD is as follows: \begin{equation} \nabla\Theta=\frac{1}{n}\sum_{i=1}^{n}\frac{\partial J_i\left(\Theta\right)}{\partial\Theta}. \nonumber \end{equation} However, BGD processes the whole training set in each iteration, which consumes much time when the training set is large. Therefore, Stochastic Gradient Descent (SGD) was introduced, which updates the coefficients using a single training sample in each iteration. The gradient expression of SGD is as follows: \begin{equation} \nabla\Theta=\frac{\partial J_i\left(\Theta\right)}{\partial\Theta}. \nonumber \end{equation} But SGD is strongly affected by individual samples of the training set, which produces oscillation and reduces efficiency. For this reason, Gradient Descent with Momentum (GDM) was proposed, which takes the previous gradient into account and updates the coefficients in a weighted way. The gradient expression of GDM is as follows: \begin{equation} \nabla\Theta^{\left(n+1\right)}=\beta\nabla\Theta^{\left(n\right)}+\left(1-\beta\right)\frac{\partial J_i\left(\Theta\right)}{\partial\Theta}, \nonumber \end{equation} where $\nabla\Theta^{\left(0\right)}$ can be initialized to zero. Since each weighted-average gradient contains information from the previous gradients, the oscillation amplitude of the coefficients is smaller than in SGD, so the optimization is generally better. \subsubsection{Logistic Regression} Logistic regression is a generalized linear regression model. It is a classical method to solve the binary classification problem by using an activation function. In a binary classification problem the predicted value $y$ takes only two values ($0$ or $1$). E.g., a spam filtering system takes the features of a mail as input and predicts whether the mail is spam or normal.
Therefore, the activation function is used to bound the prediction output between $0$ and $1$. In traditional logistic regression, the activation function is the Sigmoid function $f\left(x\right)=\frac{1}{1+e^{-x}}$. As shown in Fig.~\ref{LG_fig}, the two tails of the logistic function converge to 0 and 1. At the same time, even with the activation function, the BGD method for logistic regression still obtains the global optimal solution rather than a local one. The BGD method for logistic regression updates the coefficients in each iteration as follows: \begin{equation} z=\theta_0+\theta_1x_1+\cdots+\theta_nx_n=\theta^Tx \nonumber \end{equation} \begin{equation} h_\theta\left(x\right)=f\left(\theta^Tx\right)=\frac{1}{1+e^{-\theta^Tx}} \nonumber \end{equation} \begin{equation} \theta_j=\theta_j-\alpha\frac{1}{n}\sum_{i=1}^{n}\left(h_\theta\left(x_i\right)-y_i\right)x_i^j. \nonumber \end{equation} The phase computing the predicted output $h_\theta\left(x_i\right)$ is called $forward\ propagation$, and the phase computing the gradient $\frac{1}{n}\sum_{i=1}^{n}\left(h_\theta\left(x_i\right)-y_i\right)x_i^j$ is called $backward\ propagation$. \begin{figure} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=0.8\linewidth]{new_fig1-1.png} \caption{Sigmoid function} \label{LG_fig} \end{minipage}% \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=0.8\linewidth]{new_fig2.png} \caption{BP neural networks} \label{BP_fig} \end{minipage} \end{figure} \subsubsection{Neural Networks} Neural networks are a more general regression model than logistic regression, able to learn more complex relationships between high-dimensional input data and multiple output labels. Fig.~\ref{BP_fig} shows an example of such a neural network.
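The BGD update for logistic regression can be sketched at the plaintext level in pure Python; the one-feature toy data set below is our own illustrative choice:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bgd_step(theta, xs, ys, alpha):
    # One BGD iteration:
    #   theta_j <- theta_j - alpha * (1/n) * sum_i (h_theta(x_i) - y_i) * x_i^j
    n = len(xs)
    grad = [0.0] * len(theta)
    for x, y in zip(xs, ys):
        h = sigmoid(sum(t * xj for t, xj in zip(theta, x)))
        for j, xj in enumerate(x):
            grad[j] += (h - y) * xj
    return [t - alpha * g / n for t, g in zip(theta, grad)]

# Toy data: feature vectors (1, x) with label 1 iff x >= 2.
xs = [(1.0, 0.0), (1.0, 1.0), (1.0, 2.0), (1.0, 3.0)]
ys = [0, 0, 1, 1]
theta = [0.0, 0.0]
for _ in range(3000):
    theta = bgd_step(theta, xs, ys, alpha=0.5)
preds = [1 if sigmoid(sum(t * xj for t, xj in zip(theta, x))) > 0.5 else 0
         for x in xs]
assert preds == ys
```

The constant feature $1.0$ plays the role of the bias coefficient $\theta_0$ in the equations above.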
Each node (also called a neuron) in the hidden layer and the output layer is an instance of logistic regression and is associated with an activation function and a coefficient vector. Typical activation functions are the Sigmoid function $f\left(x\right)$ or the ReLU function. Standard error Back Propagation (BP) neural networks can be trained by the GDM method, so that the coefficients converge faster than with BGD and more stably than with SGD. By the chain rule, the coefficients are updated as follows: $$ o_i=f(\sum_{j=1}^m w_{ij}x_j^k),{\hat{y}}_j^k=\sum_{i=1}^{n}{o_iv_{ji}},$$ $$ \Delta v_{ji}^{(n+1)}=\beta_1 \Delta v_{ji}^{(n)}+(1-\beta_1)(\hat{y}_j^k-y_j^k)o_i, $$ $$ \Delta w_{ij}^{\left(n+1\right)}=\beta_2\Delta w_{ij}^{\left(n\right)}+\left(1-\beta_2\right)o_i\left(1-o_i\right)x_j^k\sum_{t=1}^{p}{\left({\hat{y}}_t^k-y_t^k\right)v_{ti}}, $$ $$ v_{ji}^{\left(n+1\right)}=v_{ji}^{\left(n\right)}-\alpha_1\Delta v_{ji}^{\left(n+1\right)},{w_{ij}^{\left(n+1\right)}=w_{ij}^{\left(n\right)}-\alpha_2\Delta w}_{ij}^{\left(n+1\right)}, $$ where $m$, $n$ and $p$ are the numbers of neurons in the input layer, hidden layer, and output layer respectively, $x_i^k$ and $y_j^k$ are the $i$-th feature and the $j$-th label of the $k$-th input datum, $o_i$ is the output of the $i$-th neuron in the hidden layer, ${\hat{y}}_j^k$ is the output of the $j$-th neuron in the output layer on the $k$-th input datum, $\alpha_1$, $\alpha_2$, $\beta_1$, $\beta_2$ are the four real-valued parameters of GDM, $v_{ji}^{\left(n\right)}$ is the $i$-th coefficient of the $j$-th neuron in the output layer in the $n$-th iteration, and $w_{ij}^{\left(n\right)}$ is the $j$-th coefficient of the $i$-th neuron in the hidden layer in the $n$-th iteration. \section{Distributed Decryption Protocol} \subsection{Our Security Goal} In an MKFHE scheme, participants generally generate their own secret keys independently, encrypt their own data under their own secret keys, and jointly decrypt the multi-key ciphertext with all secret keys.
Therefore, our security goal for MKFHE decryption is to protect both the individual single-key encrypted messages and the common multi-key encrypted messages. We already know that MKFHE is IND-CPA secure~\cite{cryptoeprint:2018:1156}, so the multi-key ciphertext itself is secure against both kinds of adversaries. However, in the existing MKTFHE scheme, a participant can obtain the partial decryption results $p_i$ of the other participants and compute $\langle \bm{a_i},\bm{s_i} \rangle = p_i-b$. After obtaining multiple partial decryption results, a participant can thus compute the secret keys of the other participants, and hence obtain their clear messages. In addition, if an external adversary obtains all partial decryption results, it can obtain the computation result under multi-key encryption by computing $\sum_{i=1}^{k}p_i-\left(k-1\right)b$. In order to solve these problems, we introduce the secret sharing technique and propose a new distributed decryption protocol. So far, we can observe that there are at least two kinds of static (passive) adversaries: an internal adversary, who is one of the participants in the scheme, and an external adversary, who is not. Both adversaries want to learn information about the participants' messages, and the external adversary also hopes to obtain the computation result. To simplify, we treat both of them together as a \emph{semi-honest} adversary $\mathcal{A}$. We assume that the adversary $\mathcal{A}$ can corrupt any subset of the participants and servers, as long as at least two participants and one server remain uncorrupted, and that $\mathcal{A}$ learns only the data of the corrupted participants and nothing else about the remaining honest participants' data. We define security in the Universal Composability (UC) framework~\cite{canetti2001universally}.
\subsection{Our Distributed Decryption Protocol for MKTFHE} Denote the multi-key ciphertext by $\hat{ct}=\left(\bm{a_1},\ldots,\bm{a_k},b\right)\in\mathbb{T}^{kn+1}$, which satisfies $b=\frac{1}{4}m-\sum_{j=1}^{k}{\langle \bm{a_j},\bm{s_j}\rangle }+e \pmod 1$~\cite{MKTFHE}, where $m\in\{0, 1\}$ is the plaintext after decryption of the ciphertext, $k$ is the number of participants, and $\bm{s_j}$ is the secret key of the $j$-th participant. The secure distributed decryption algorithm based on MKTFHE is defined as follows: \begin{itemize} \item[-] Partial decryption algorithm $p_i\gets \mathsf{Part\_Dec}(\hat{ct}, s_i)$: The input is the multi-key ciphertext $\hat{ct}$ and the secret key $s_i$ of the $i$-th participant. The output $p_i$ is the partial decryption result of the $i$-th participant, computed as $p_i=b+\langle \bm{a_i},\bm{s_i}\rangle$. \item[-] Final decryption algorithm $\frac{1}{4}m+\bar{e}\gets \mathsf{Fin\_Dec}({\{p_i\}}_{i\in{k}})$: The input ${\{p_i\}}_{i\in{k}}$ is the set of partial decryption results of all participants, and the output is the plaintext with noise before rounding, computed as $\frac{1}{4}m+\bar{e}=\sum_{i=1}^{k}p_i-(k-1)b$. \end{itemize} The protocol can be divided into four steps, as shown in Fig.~\ref{DD_fig} (taking two participants as an example): \begin{enumerate}[Step 1] \item Partial decryption: each participant $P_1,...,P_k$ uses its own secret key $s_i$ to compute the partial decryption $\mathsf{Part\_Dec}(\hat{ct}, s_i)$, obtaining its personal partial decryption result $p_i$; \item Secret sharing: each participant runs secret sharing on its own partial decryption result $p_i$
and gets the secret shares $\left[p_i\right]_0^A$ and $\left[p_i\right]_1^A$, which are sent to the cloud server and the decryption party respectively; \item Offline computing: the cloud server and the decryption party take the secret shares received from the participants and offline compute the final decryptions $\mathsf{Fin\_Dec}({\{\left[p_i\right]_0^A\}}_{i\in{k}})$ and $\mathsf{Fin\_Dec}({\{\left[p_i\right]_1^A\}}_{i\in{k}})$ respectively, so as to obtain the secret shares of the final decryption result, $\left[\frac{1}{4}m+\bar{e}\right]_0^A$ and $\left[\frac{1}{4}m+\bar{e}\right]_1^A$; \item Secret reconstruction: the decryption parties $DP_1$ and $DP_2$ send the secret shares of the final decryption result, $\left[\frac{1}{4}m+\bar{e}\right]_0^A$ and $\left[\frac{1}{4}m+\bar{e}\right]_1^A$, to the participants, and the participants use the secret reconstruction protocol to recover the final decryption result $\frac{1}{4}m+\bar{e}$. \end{enumerate} \begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{fig3.png} \caption{Our proposed distributed decryption protocol} \label{DD_fig} \end{figure} \subsubsection{Correctness Proof} The correctness of the protocol follows from: \begin{equation} \begin{split} &\left(\sum_{i=1}^{k}\left[p_i\right]_0^A-\left(k-1\right)\left[b\right]_0^A\right)+\left(\sum_{i=1}^{k}\left[p_i\right]_1^A-\left(k-1\right)\left[b\right]_1^A\right) \\ &=\left(\sum_{i=1}^{k}\left[p_i\right]_0^A+\sum_{i=1}^{k}\left[p_i\right]_1^A\right)-\left(\left(k-1\right)\left[b\right]_0^A+\left(k-1\right)\left[b\right]_1^A\right)\\ &=\sum_{i=1}^{k}p_i-\left(k-1\right)b=\sum_{i=1}^{k}\left(b+\langle \bm{a_i},\bm{s_i}\rangle \right)-\left(k-1\right)b\\ &=b+\sum_{i=1}^{k}{\langle \bm{a_i},\bm{s_i}\rangle }=\frac{1}{4}m+e \nonumber \end{split} \end{equation} If the error $e$ is less than $\frac{1}{8}$, the decryption works correctly.
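The four protocol steps and the correctness equation can be checked with a toy plaintext-level simulation, discretising the torus as $\mathbb{Z}_Q$ with $Q=2^{32}$; the dimensions, key distribution, and noise value below are arbitrary illustrative choices:

```python
import secrets

Q = 1 << 32          # torus T discretised as Z_Q (fixed point)
n, k = 8, 3          # LWE dimension and number of participants (toy values)

def inner(a, s):
    return sum(x * y for x, y in zip(a, s)) % Q

def share(x):        # additive secret sharing over Z_Q
    r = secrets.randbelow(Q)
    return (x - r) % Q, r

# Toy multi-key ciphertext: b = m/4 - sum_j <a_j, s_j> + e  (mod 1)
m, e = 1, 5
s = [[secrets.randbelow(2) for _ in range(n)] for _ in range(k)]
a = [[secrets.randbelow(Q) for _ in range(n)] for _ in range(k)]
b = (m * Q // 4 - sum(inner(a[j], s[j]) for j in range(k)) + e) % Q

# Step 1: partial decryption p_i = b + <a_i, s_i>
p = [(b + inner(a[i], s[i])) % Q for i in range(k)]

# Step 2: each p_i is secret-shared between the two servers; the public
# value b is shared likewise, matching [b]_0^A and [b]_1^A in the proof.
p_shares = [share(pi) for pi in p]
b0, b1 = share(b)

# Step 3: each server computes its share of Fin_Dec offline.
t0 = (sum(s0 for s0, _ in p_shares) - (k - 1) * b0) % Q
t1 = (sum(s1 for _, s1 in p_shares) - (k - 1) * b1) % Q

# Step 4: reconstruction yields m/4 + e, as in the correctness proof.
assert (t0 + t1) % Q == (m * Q // 4 + e) % Q
```

Neither server alone holds anything but uniformly random shares, which mirrors the security argument that follows.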
\subsubsection{Security Proof} In the UC framework, security is defined by comparing the \emph{real} world and the \emph{ideal} world. The \emph{real} world involves the protocol, the adversary $\mathcal{A}$, and the honest participants. The \emph{ideal} world includes the trusted party representing the protocol, the simulator $\mathcal{S}$ simulating the \emph{ideal} world, and the honest participants. If the views of the \emph{real} world and the \emph{ideal} world are indistinguishable, the protocol is secure. We consider security in the semi-honest model, in which all participants and servers follow the protocol exactly. We assume that the two servers are non-colluding. We consider an adversary $\mathcal{A}$ who corrupts one server $DP_1$ and all but two of the participants $\{P_1, ..., P_{k-2}\}$, an example that covers all scenarios in our security goal. The simulator $\mathcal{S}$ simulates the above in the \emph{ideal} world: it submits the partial decryptions of the participants and receives the final decryption from the trusted party. During the simulation, $\mathcal{S}$ sends, on behalf of the honest participants, randomized partial decryption shares $[p_i]_0^A$, $[p_i]_1^A$ in $\mathbb{Z}_{2^l}$ to $DP_1$ and $DP_2$. This is the only phase where the participants are involved. Then each server evaluates independently until the final decryption is recovered. We can briefly argue that the views of the \emph{real} world and the \emph{ideal} world are indistinguishable because of the security of arithmetic secret sharing. The shares of the partial decryptions are generated randomly by the participants. In particular, all messages sent, received, and reconstructed in our protocol are generated using uniformly random shares, both in the \emph{real} world running our protocol and in the \emph{ideal} world simulated by $\mathcal{S}$, so the views of both worlds are identically distributed, which concludes our argument.
\section{Privacy-Preserving Distributed Machine Learning} \subsection{Pre-Work} \subsubsection{Extract Sign Bit} Since encryption and evaluation in MKTFHE are performed bit by bit, we can easily extract any bit of a multi-bit ciphertext. In complement coding, the highest bit represents the sign of the operand: if the highest bit is 0 the operand is positive, and if it is 1 the operand is negative. Therefore, we can use this property to extract the sign bit of any ciphertext. The operation of extracting the sign bit is defined as follows: \begin{itemize} \item $ct_{sign}\gets \mathsf{Extract\_Sign}\left(ct\left[l\right]\right)$: Input an $l$-bit TLWE ciphertext $ct\left[l\right]$, and output its sign bit $ct_{sign}$. \end{itemize} \subsubsection{Cut off and Expand} The bit lengths of the existing arithmetic operators in MKTFHE are predefined, and the larger the bit length, the higher the time consumption. Therefore, flexibly adjusting the bit length of ciphertexts and selecting operators with smaller bit lengths can improve the efficiency and accuracy of the overall machine learning scheme based on MKTFHE. We continue to use the encoding method of the MKTFHE arithmetic operators~\cite{MKTFHE-op}, namely complement encoding of both positive and negative integers. When a ciphertext operand with more bits is input to an arithmetic operator with fewer bits, the extra bits of the operand are automatically cut off; in other words, only the part of the data with the same bit length as the operator is calculated. For example, an 8-bit multiplier with two 8-bit ciphertext operands yields a 16-bit ciphertext product. If this product does not overflow 8 bits, it can be fed directly into the next 8-bit arithmetic operator at little computational cost.
But if the 16-bit ciphertext product overflows 8 bits, the next calculation must use a 16-bit arithmetic operator, which costs more time, and the corresponding other 8-bit operand must be expanded from 8 to 16 bits. Once the operand returns to 8 bits through subsequent operations such as division, we can again use the operator with fewer bits. In order to flexibly expand the bit length of an operand while keeping its sign, we design and implement a device named $homogenizer$, as shown in Fig.~\ref{HO_fig}. The specific design is as follows: MKTFHE does not allow a ciphertext to be copied directly (it is considered unsafe), so we use the trivial sample $\mathsf{TLWE}(0)$ and the sign bit $ct_{sign}$ of the original small-bit ciphertext operand to calculate $\mathsf{MKAND}\left(\mathsf{TLWE}\left(0\right),ct_{sign}\right)$, and fill the resulting ciphertexts into the high bits. The expanding operation is defined as follows: \begin{itemize} \item $ct\left[l^\prime\right]\gets \mathsf{Homogenizer}\left(ct\left[l\right],l^\prime\right)$: Input an $l$-bit TLWE ciphertext $ct\left[l\right]$ and a bit length $l^\prime$, and output an $l^\prime$-bit ciphertext $ct\left[l^\prime\right]$ whose plaintext is the same as that of $ct\left[l\right].$ \end{itemize} \begin{figure}[htbp] \centering \includegraphics[width=0.4\linewidth]{fig4.png} \caption{Expand the bit-length of ciphertext} \label{HO_fig} \end{figure} \subsubsection{Compare}\label{com_qua} Practical machine learning schemes usually require a comparison operation, but the existing MKTFHE scheme cannot support comparison without decryption. We believe that comparisons can be divided into two categories: one needs to reveal the comparison result, as in the millionaires' problem; the other uses the comparison result to determine the next calculation, similar to branch selection.
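At the plaintext level, the bit-width expansion performed by the homogenizer corresponds to ordinary sign extension of the complement encoding. The following sketch is our own illustration of the intended effect, not the ciphertext-level routine:

```python
def to_bits(x, l):
    # Two's-complement (complement) encoding, least significant bit first.
    return [(x >> i) & 1 for i in range(l)]

def from_bits(bits):
    x = sum(b << i for i, b in enumerate(bits))
    return x - (1 << len(bits)) if bits[-1] else x

def homogenize(bits, l_new):
    # Expand an l-bit operand to l_new bits while keeping its sign:
    # the new high bits are filled with copies of the sign bit.
    return bits + [bits[-1]] * (l_new - len(bits))

# Expanding from 8 to 16 bits preserves the encoded value and its sign.
for x in (-128, -5, -1, 0, 7, 127):
    assert from_bits(homogenize(to_bits(x, 8), 16)) == x
```

For positive operands the fill bits are all zero, and for negative operands they are all one, which is exactly the "keep its sign" requirement stated above.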
At present, the comparisons in machine learning schemes are mainly of the second category. Therefore, we utilize the Boolean and arithmetic operations of MKTFHE to design and implement the basic element of the comparison operation, named $Compare\ Quads$, which picks one of two ciphertext operands based on the result of comparing the other two ciphertext operands, as shown in Fig.~\ref{COM_fig}. A $Compare\ Quads$ can be constructed in the following steps: \begin{enumerate}[1.] \item Calculate $c_a\left[l\right]-c_b\left[l\right]$ by calling $c_{mid-result}\left[l\right]\gets \mathsf{MKSUB}\left(c_a\left[l\right],c_b\left[l\right]\right);$ \item Extract the sign bit $c_{sign}$ of $c_{mid-result}\left[l\right]$ by calling $c_{sign}\gets \mathsf{Extract\_Sign}$ $\left(c_{mid-result}\left[l\right]\right);$ \item Reverse $c_{sign}$ by calling $\lnot c_{sign}\gets \mathsf{MKNOT}\left(c_{sign}\right);$ \item Calculate $c_{sign}\land c_c\left[l\right]$ and $\lnot c_{sign}\land c_d\left[l\right]$ bit by bit, calling $\mathsf{MKAND}\left(c_{sign}, c_c\left[t\right]\right)$ and $\mathsf{MKAND}\left(\lnot c_{sign}, c_d\left[t\right]\right)$, where $t$ ranges from 1 to $l$; these calls can be computed in parallel; \item Calculate $c_{sign}\land c_c\left[l\right]+\lnot c_{sign}\land c_d\left[l\right]$ by calling $\mathsf{MKADD}(c_{sign}\land c_c\left[l\right],\lnot c_{sign}\land c_d\left[l\right])$ and return the result. \end{enumerate} We use Algorithm 1 to evaluate $Compare\ Quads$. The comparison operation is defined as follows: \begin{itemize} \item $c_{result}\left[l\right]\gets \mathsf{Compare\_Quads}\left(c_a\left[l\right],c_b\left[l\right],c_c\left[l\right],c_d\left[l\right]\right)$: Input four $l$-bit ciphertexts $c_a\left[l\right]$, $c_b\left[l\right]$, $c_c\left[l\right]$, $c_d\left[l\right]$; if the plaintext of $c_a\left[l\right]$ is not less than that of $c_b\left[l\right]$, output $c_d\left[l\right]$, else output $c_c\left[l\right]$.
\end{itemize} \begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{fig5.png} \caption{Select a ciphertext by $Compare\ Quads$} \label{COM_fig} \end{figure} \begin{table*}[htb] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{rl} \hline \multicolumn{2}{l}{\textbf{Algorithm 1} $Compare\ Quads$}\\ \hline \textbf{Input}:&MKTFHE parameter set $mkparams$, four $l$-bit ciphertexts ${ c}_a\left[l\right]=\mathsf{MKENC}\left(a,l\right)$,\\ &$c_b\left[l\right]=\mathsf{MKENC}\left(b,l\right)$, $c_c\left[l\right]=\mathsf{MKENC}\left(c,l\right)$, $c_d\left[l\right]=\mathsf{MKENC}\left(d,l\right)$, and the public \\ & keys of all participants $\{pk_i\}_k$\\ \textbf{Output}:&$l$-bit ciphertext $c_{result}\left[l\right]=\mathsf{MKENC}\left(\left(a\geq\ b\right):d:c,l\right)$\\ 1.& Calculate $c_a\left[l\right]-c_b\left[l\right]$\\ 2.& Extract the sign bit $c_{sign}$ of $c_a\left[l\right]-c_b\left[l\right]$ \\ 3.& Calculate the result $c_{result}\left[l\right]=c_{sign}\land c_c\left[l\right]+\left(\lnot c_{sign}\right)\land c_d\left[l\right]$\\ \hline \end{tabular}} \end{table*} \subsection{MKTFHE Friendly Activation Function} At present, the existing MKTFHE only supports integer linear operations and Boolean operations, so computing the Sigmoid function in logistic regression and neural networks becomes a main additional challenge. Prior work shows that polynomials can be used to fit the Sigmoid function~\cite{aono2016scalable}, and that high-degree polynomials can achieve very high accuracy~\cite{livni2014computational}. Hence, we could use this method to implement the Sigmoid function, but high-degree polynomials would seriously reduce efficiency, while low-degree polynomials would lose much accuracy. We adopt the idea of SecureML~\cite{mohassel2017secureml}, which shows that a piecewise function can also achieve high accuracy. Hence, we design a new MKTFHE-friendly activation function $g\left(x\right)$.
In addition, to improve the accuracy, we fit the Sigmoid function with its tangent at the origin, as shown in Fig.~\ref{ACF_fig}. Considering that MKTFHE only supports integers, we scale the new activation function by a factor of 16. The function is defined as follows: $$ g(x) = \begin{cases} 16,&x > 2 \\ 4x+8,&-2\leq x\leq 2\\ 0,&x < -2\\ \end{cases} $$ \begin{figure}[htbp] \centering \includegraphics[width=0.5\linewidth]{new_fig6-1.png} \caption{Our function $g(x)$ and the Sigmoid function $f(x)$} \label{ACF_fig} \end{figure} Note that the comparison in our proposed activation function belongs to the second category of comparison discussed in Subsection~\ref{com_qua}: the comparison results are used in subsequent calculations rather than being revealed. Therefore, we can use two $Compare\ Quads$ from Subsection~\ref{com_qua} to implement the activation function, which can be evaluated by Algorithm 2 and is illustrated in Fig.~\ref{ACFC_fig}.
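In plaintext, the scaled piecewise function can be written directly, and the two-selection evaluation of Algorithm 2 reduces to two nested conditionals. The sketch below (our own illustration, not the homomorphic circuit) checks that both forms agree on integer inputs.

```python
def g(x):
    """The MKTFHE-friendly activation: tangent of the Sigmoid at 0, scaled by 16."""
    if x > 2:
        return 16
    if x < -2:
        return 0
    return 4 * x + 8

def g_two_selects(x):
    """Evaluate g as two selections, mirroring Algorithm 2's Compare Quads calls."""
    mid = (4 * x + 8) if x >= -2 else 0   # select between c(4x+8) and c(0)
    return 16 if mid >= 16 else mid       # clamp the result at c(16)

assert all(g(x) == g_two_selects(x) for x in range(-10, 11))
```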
\begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{fig7.png} \caption{Structure of our activation function} \label{ACFC_fig} \end{figure} \begin{table*}[htb] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{rl} \hline \multicolumn{2}{l}{\textbf{Algorithm 2} New activation function}\\ \hline \textbf{Input}:& MKTFHE parameter set $mkparams$, ciphertext $ct\left[l\right]=\mathsf{MKENC}\left(x,l\right)$ and the \\ & public keys of all participants $\{pk_i\}_k$\\ \textbf{Output}:&Ciphertext $c_{result}\left[l\right]=\mathsf{MKENC}\left(g\left(x\right),l\right)$\\ 1.& Prepare the ciphertexts $c\left(-2\right),c\left(16\right), c\left(0\right)$ and $c(4x+8)$\\ 2.& Calculate $c_{mid-result}\left[l\right]=\mathsf{Compare\_Quads}\left(ct\left[l\right],c\left(-2\right),c\left(0\right),c\left(4x+8\right)\right)$ \\ 3.& Calculate $c_{result}\left[l\right]=\mathsf{Compare\_Quads}\left(c_{mid-result}\left[l\right],c\left(16\right),c_{mid-result}\left[l\right],c\left(16\right)\right)$ \\ \hline \end{tabular}} \end{table*} \subsection{Privacy-Preserving Machine Learning based on MKTFHE} Having proposed the new MKTFHE-friendly activation function, we utilize it to compute the back propagation in logistic regression and neural networks. For accuracy, we continue to use the Sigmoid function to calculate the partial derivative and maintain the structure of the iterative equation. Prior research also shows that if we instead compute the partial derivative of the linear activation function, the cross-entropy function is no longer convex and the training accuracy incurs additional loss~\cite{mohassel2017secureml}. In addition, since MKTFHE supports only integer and Boolean operations, it is necessary to zoom the learning rate and other parameters into integers and modify the relevant calculation equations, so as to ensure that the model coefficients after training are also enlarged in proportion.
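The proportional-enlargement claim can be illustrated on a single gradient step. In the sketch below (all numbers and names are ours, not from the paper), one integer-only update with the zoomed learning rate $\alpha' = \alpha q$ reproduces the floating-point update scaled by the expansion factor $q$.

```python
from fractions import Fraction

def float_step(theta, alpha, grad):
    # standard update: theta <- theta - alpha * grad
    return theta - alpha * grad

def scaled_step(theta, alpha_int, grad, q):
    # integer-only analogue: theta' <- theta' * q - alpha' * grad
    return theta * q - alpha_int * grad

q = 16
alpha = Fraction(1, q)        # fractional learning rate 1/16
alpha_int = int(alpha * q)    # zoomed integer learning rate alpha' = 1
theta, grad = 5, 3
# one scaled step equals q times the floating-point step
assert scaled_step(theta, alpha_int, grad, q) == q * float_step(theta, alpha, grad)
```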
We set the expansion factor $q$, use the integer learning rate $\alpha^\prime$, and the new iterative computation for logistic regression is: $$ h_\theta\left(x\right)=g(x) $$ $$\theta_j=\theta_j q-\alpha^\prime\frac{1}{m}\sum_{i=1}^{m}{\left(h_\theta\left(x_i\right)-y_i\right)x_i^j}$$ There are $m$ neurons in the input layer, $n$ neurons in the hidden layer, and $p$ neurons in the output layer. As in the logistic regression above, we set the expansion factor $q$, use the integer learning rates $\alpha_1^\prime$, $\alpha_2^\prime$, $\beta_1^\prime$, $\beta_2^\prime$, and keep the remaining definitions as in Section~\ref{MLP}. The iterative computation for neural networks is: $$ o_i=g\Big(\sum_{j=1}^{m} w_{ij} x_j^k q\Big),\quad{\hat{y}}_j^k=\sum_{i=1}^{n}{o_i v_{ji} q}$$ $$\Delta v_{ji}^{\left(n+1\right)}=\beta_1^\prime \Delta v_{ji}^{\left(n\right)}+\left(q-\beta_1^\prime\right)\left({\hat{y}}_j^k-y_j^k q\right)o_i$$ $$\Delta w_{ij}^{\left(n+1\right)}=\beta_2^\prime\Delta w_{ij}^{\left(n\right)}+\left(q-\beta_2^\prime\right)o_i\left(q-o_i\right)x_j^k\sum_{j=1}^{p}{\left({\hat{y}}_j^k-y_j^kq\right)v_{ji}}$$ $$v_{ji}^{\left(n+1\right)}=v_{ji}^{\left(n\right)}q-\alpha_1^\prime\Delta v_{ji}^{\left(n+1\right)},\quad w_{ij}^{\left(n+1\right)}=w_{ij}^{\left(n\right)}q-\alpha_2^\prime\Delta w_{ij}^{\left(n+1\right)}$$ \subsection{Our Framework} After implementing privacy-preserving logistic regression and neural network training, we replace the original decryption with our proposed distributed decryption protocol and finally propose a distributed privacy-preserving machine learning framework based on MKTFHE, including four types of entities: participants, a cloud server, a CRS server, and a decryption party. The participants want to outsource computation; each of them holds its own part of the data for model training, which should not be learnt by the cloud server or the other participants. The cloud server is usually composed of one or more high-performance servers, which hold no data of their own and only provide computing power.
The CRS server is only responsible for generating the public parameters (that is, the common reference string) of the framework; this role can also be played by the cloud server. The decryption party only joins in the distributed decryption and does not need much computing power; this role can be played by a participant or a single server. We take two participants as an example. The steps of the whole scheme are shown in Fig.~\ref{FLM}: \begin{enumerate}[Step 1] \item Set up parameters: The CRS server calls $\mathsf{MKTFHE.SETUP}(1^\lambda)$ to generate $mkparams$ and agrees with the participants on the expansion factor $q$, then sends the set of parameters and factors $\{mkparams,q\}$ to each participant and the cloud server. \item Preprocess and encrypt data: Each participant uses the expansion factor $q$ to zoom or round the original data, uses $mkparams$ to call $\mathsf{MKTFHE.KEYGEN}\left(mkparams\right)$ to generate its own secret key and public key, then utilizes its secret key to encrypt the preprocessed data bit by bit by calling $\mathsf{MKTFHE.ENC}\left(m\right)$, and finally sends the public key and the ciphertext to the cloud server. \item Train ML models: The cloud server first expands the single-key ciphertext from each participant to a multi-key ciphertext, then uses the arithmetic operators $\mathsf{MKADD}$, $\mathsf{MKSUB}$, $\mathsf{MKMUL}$, $\mathsf{MKDIV}$, and our newly designed $Homogenizer$ and $Compare\ Quads$ to train the model. In the end, the cloud server sends the ciphertext model to the decryption party and all participants. \item Decrypt data: All the participants, the decryption party, and the cloud server run the distributed decryption protocol together, and the participants obtain the plaintext in the end.
\end{enumerate} \begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{new_fig8.png} \caption{Framework of privacy-preserving machine learning} \label{FLM} \end{figure} \section{Implementation and Experiment} \subsection{Implementation of Distributed Decryption Protocol} \subsubsection{The Implementation and Experimental Environment} Our code for the distributed decryption protocol is written in C++, mainly using the arithmetic secret sharing of ABY~\cite{demmler2015aby}, a very efficient two-party secure computation framework. Since the final decryption only involves addition and subtraction without multiplication, this experiment neither needs oblivious transfer (OT) or other operations that require online interaction, nor a semi-honest third party (STP) to assist in generating multiplication triples. Note that the ABY library only uses unsigned numbers, so negative numbers cannot be computed directly. Therefore, we regard the unsigned numbers as two's-complement encodings and convert them back into signed values after the final operation. In addition, since the underlying structure used in the MKTFHE scheme is a 32-bit discretized torus, we directly use the 32-bit arithmetic secret sharing in the ABY library for computation. The experiments run in a Linux environment with the following configuration. \begin{enumerate}[(1)] \item Cloud servers $S_0$ and $S_1$ are configured with an Intel Xeon Gold 5220 @2.2GHz processor and 256GB memory, running Ubuntu 18.04 LTS; \item The client is a Windows PC with an Intel Core i7-8750H@2.20GHz processor and 16GB memory, running Windows 10. \end{enumerate} In the LAN experiments, we use two servers located in the same area. The network bandwidth is 512MB/s and the network delay is 0.35ms.
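The 32-bit arithmetic sharing and two's-complement reinterpretation described above can be sketched in a few lines. This is a stand-alone illustration, not the ABY API: shares are combined by local addition without interaction, and the reconstructed unsigned sum is reinterpreted as a signed value.

```python
import secrets

MOD = 1 << 32  # 32-bit ring, matching the discretized torus of MKTFHE

def share(x):
    """Split x into two additive shares modulo 2^32."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(s0, s1):
    """Recombine shares and reinterpret as a signed (two's-complement) value."""
    v = (s0 + s1) % MOD
    return v - MOD if v >= MOD // 2 else v

a0, a1 = share(123 % MOD)
b0, b1 = share(-45 % MOD)
# each party adds its shares locally; no interaction is needed
assert reconstruct((a0 + b0) % MOD, (a1 + b1) % MOD) == 78
assert reconstruct(*share(-7 % MOD)) == -7
```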
\subsubsection{Accuracy and Efficiency Analysis} In this experiment, we compare against the original decryption scheme in MKTFHE, setting the number of participants $k$ to 2, 4, and 8. We run 10 groups of tests for both decryption schemes, each group including 1000 ciphertext bits, and record the average decryption time. In the implementation of the distributed decryption protocol, we use the SIMD technique in the ABY library for parallel optimization to improve efficiency. The experimental results are shown below: \begin{table*}[htb] \caption{Implementation of distributed decryption} \label{tab1} \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}} \toprule {Participants $k$} & {MKTFHE/s} & {Our protocol/s} & {Accuracy}\\ \midrule 2 & 0.024 & 0.261 & {100\%} \\ 4 & 0.050 & 0.268 & {100\%} \\ 8 & 0.112 & 0.263 & {100\%} \\ \bottomrule \end{tabular*} \end{table*} The results in Table \ref{tab1} show that, compared with the original MKTFHE, the efficiency of our scheme is lower but still acceptable. We attribute the overhead mainly to setting up the secret sharing scheme and to ciphertext transmission, which the original scheme does not require. Thanks to the SIMD technique in the ABY library, our decryption time remains essentially constant as the number of participants increases, whereas the decryption time of the original MKTFHE scheme grows linearly with the number of participants. Therefore, in the multi-party setting, our scheme has advantages in both security and efficiency. \subsection{Implementation of Privacy Preserving Distributed Machine Learning} \subsubsection{Data preprocessing} Considering that MKTFHE only supports integer and Boolean operations, we need to preprocess the input data. We have two preprocessing methods: rounding and zooming.
The input data of logistic regression lie in a rather large range while the input data of neural networks are relatively small, so we apply the rounding method to logistic regression and the zooming method to neural networks to keep the data precise. We store the zooming factor for the subsequent computation to keep the accuracy loss within an acceptable range. \subsubsection{Implementation of Privacy Preserving Logistic Regression} The input data are generated by ourselves: several sets of linear data with small random noise. We mainly use 16-bit and 32-bit operators in this implementation. We first use the 7-order Taylor polynomial approximation of the Sigmoid function (a sufficiently high order) and our proposed activation function as the activation function in logistic regression to train the models on \emph{plaintext} integer and floating-point data. Table~\ref{tab2} shows that, for both integer and floating-point data, the accuracy of using the 7-order Taylor polynomial as the activation function is the highest, and using our proposed activation function comes close to it. Then, we utilize the operators and other tools in MKTFHE to train the logistic regression models with the above activation functions in \emph{ciphertext}. In addition to recording the training accuracy and time in Table~\ref{tab3}, we also compare the computation time of the different activation functions under MKTFHE in Table~\ref{tab4}. The experiments show that our scheme has no accuracy loss, meaning that the model trained in \emph{ciphertext} is the same as that trained in \emph{plaintext}; the loss only occurs in the integer conversion stage. Compared with the 7-order Taylor polynomial, our proposed activation function shortens the activation-function computation time in \emph{ciphertext} by a factor of 10 and significantly shortens the training time, with accuracy close to it.
Note that we also compare the 3-order Taylor polynomial with our proposed function in both \emph{plaintext} and \emph{ciphertext}; the results show that, compared with the 3-order Taylor polynomial, our function shortens the activation computation time by a factor of 5 with much better accuracy. \begin{table*}[htb] \caption{Logistic regression accuracy in plaintext} \label{tab2} \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}} \toprule {Data type} & {7-order Taylor polynomial} & {3-order Taylor polynomial} & {Our function} \\ \midrule Floating data & 98\% & 85\% & {95\%} \\ Integer data & 95\% & 80\% & {92\%} \\ \bottomrule \end{tabular*} \end{table*} \begin{table*}[htb] \caption{Logistic regression accuracy in ciphertext} \label{tab3} \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}ccc@{}} \toprule Activation function & Accuracy & Training time/iter/piece/s \\ \midrule 7-order Taylor polynomial & 95\% & 4049 \\ 3-order Taylor polynomial & 80\% & 2549 \\ Our function & 92\% & 611 \\ \bottomrule \end{tabular*} \end{table*} \begin{table*}[htb] \caption{Computing activation function in ciphertext} \label{tab4} \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}} \toprule & {7-order Taylor polynomial} & {3-order Taylor polynomial} & {Our function} \\ \midrule Time/s & 1440 & 549 & 130 \\ \bottomrule \end{tabular*} \end{table*} \subsubsection{Implementation of Privacy Preserving Neural Networks} We use the Iris data set in sklearn~\cite{varoquaux2015scikit} as input data for the neural networks, half for training and the rest for prediction. As with logistic regression, we implement the neural networks in both plaintext and ciphertext. In \emph{plaintext}, we use the different functions above as activation functions to train the model with both integer and floating-point data, and likewise in \emph{ciphertext}. Note that we also compute every neuron in the same layer in parallel to optimize the code.
The results of the experiments are shown in Table \ref{tab5}. As shown in Table \ref{tab6}, using the 7-order Taylor polynomial as the activation function is more accurate but costs more time, while using our proposed activation function greatly reduces the training time with accuracy close to it. Note that we also compare our function with the 3-order Taylor polynomial; the results show that we shorten the training time with much better accuracy as well. \begin{table*}[htb] \caption{Neural networks accuracy in plaintext} \label{tab5} \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}cccc@{}} \toprule {Data type} & {7-order Taylor polynomial} & {3-order Taylor polynomial} & {Our function} \\ \midrule Floating data & 96.23\% & 72.17\% & {95.46\%} \\ Integer data & 94.67\% & 68.12\% & {94.15\%} \\ \bottomrule \end{tabular*} \end{table*} \begin{table*}[htb] \caption{Neural networks accuracy in ciphertext} \label{tab6} \begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}ccc@{}} \toprule Activation function & Accuracy & Training time/iter/piece/s \\ \midrule 7-order Taylor polynomial & 94.67\% & 7301 \\ 3-order Taylor polynomial & 68.12\% & 6736 \\ Our function & 94.15\% & 4654 \\ \bottomrule \end{tabular*} \end{table*} \subsubsection{Parameters} We choose the same parameters as MKTFHE~\cite{MKTFHE}. The estimated security level of our scheme is 110 bits, while the dimension of the TLWE problem is $k=1$. \section{Conclusion and Discussion} In this paper, we propose privacy-preserving logistic regression and neural networks with a distributed decryption protocol based on MKTFHE. Firstly, we introduce secret sharing to protect the partial decryption and final decryption. Secondly, we design the $Homogenizer$ and $Compare\ Quads$ to implement our proposed MKTFHE-friendly activation function. Then, we utilize them to train privacy-preserving logistic regression and neural network models.
Finally, we formalize our distributed privacy-preserving machine learning framework. The experimental results show that the efficiency of our distributed decryption protocol is acceptable. Compared with the Sigmoid function, our activation function greatly improves efficiency while keeping accuracy essentially unchanged. \section*{Acknowledgment} This work is supported by National Natural Science Foundation of China (No. 61872109), China (No. JCYJ20200109113405927) and National Science and Technology Major Project Carried on by Shenzhen, China (No. CJGJZD20200617103\\000001). \bibliographystyle{splncs04}
\section{Introduction} Index coding is a canonical problem in network information theory with close connections to many important problems such as network coding \cite{effrosrouayheblangberg15} and distributed storage \cite{Shanmugam--Dimakis2014}. Index coding aims to find the optimal broadcast rate and optimal coding schemes for broadcasting $n$ unique messages from a server to $n$ receivers with (possibly differing) side information at each receiver \cite{birk1998informed}. Characterizing the capacity region of a general index coding problem remains elusive. This paper is concerned with a class of index coding problems where there is, in addition to $n$ legitimate receivers, an eavesdropper who may have side information about some messages and wants to obtain the rest. We aim to characterize inner and outer bounds on the secure index coding capacity region under the restricted security requirement that there is no leakage of information about any single message that is unknown to the eavesdropper. The secure variant of the index coding problem was first studied in \cite{dau2012security}, where the conditions for a linear code to be a valid secure index code were investigated. Later in \cite{ong2016secure}, non-linear secure index codes that use side information as secret keys were proposed. The connection between secure network coding and secure index coding (analogous to the relationship between non-secure versions \cite{effrosrouayheblangberg15}) was developed in \cite{ong:kliewer:vellambi:isit2018}. In \cite{mojahedian2017perfectly}, the authors studied the minimum key length to achieve perfect secrecy where the eavesdropper has no additional side information, but it must not learn any information whatsoever about the messages (namely, zero mutual information). 
The private index coding problem with linear codes was studied in \cite{varun2018private} where the aim is to allow legitimate receivers to only learn about messages they want, but nothing of other unknown messages. Finally, \cite{fragouli:isit2017, fragouli:isit2018} considered the case in which the identity of the demanded message and the side information of each receiver should be kept private from other receivers. In this paper, we examine the fundamental limits of using side information as the main protection mechanism to effect security in the index coding problem. After introducing the system model and problem setup in Section \ref{sec:model}, Section \ref{sec:secure_outer} presents a newly developed outer bound on the secure index coding capacity region. Section \ref{sec:SCOMP} presents an achievable rate region using a secure random coding scheme for index coding. The proposed scheme is based on the existing composite coding scheme \cite{arbabjolfaei2013capacity,liu2017capacity}. For all securely feasible index coding problems with $n\leq 4$ messages, the inner and outer bounds match, yielding the corresponding secure capacity regions. However, we note that for the achievability of the secure composite coding scheme, a secret key with vanishingly small rate may be needed so that each legitimate receiver who wants the same message as the eavesdropper knows at least two more messages than the eavesdropper. \section{System Model} \label{sec:model} Throughout the paper, we let $[n]\triangleq\{1,2,\dots,n\}$ and use $2^{[n]}$ to denote the power set of $[n]$. The aim of the index coding problem depicted in Figure \ref{fig:index_coding} is to devise a coding scheme that allows a server to communicate $n$ independent and uniformly distributed messages, ${\bf X}_i\in\{0,1\}^{t_i}$, $i\in [n]$, to their corresponding receivers over a noiseless broadcast link with unit capacity in the presence of an eavesdropper $e$.
Each receiver has prior knowledge of the realization ${\bf x}_{A_i}$ of ${\bf X}_{A_i}$, where $A_i \subseteq [n] \backslash \{i\}$. The set $B_i\triangleq [n]\setminus(A_i \cup \{i\})$ denotes the set of \emph{interfering} messages at receiver $i$. The eavesdropper has access to ${\bf X}_{A_e}, A_e \subset [n]$. The encoder should be designed to prevent the eavesdropper from learning any single message ${\bf X}_{j}$, $j\in A_e^c = [n] \backslash A_e$. \begin{figure} [b] \begin{center} \vspace{-3mm} \includegraphics[scale=0.45]{problem_setup_new} \caption{Problem setup for secure index coding.} \vspace{-3mm} \label{fig:index_coding} \end{center} \end{figure} To compactly represent a non-secure index coding problem we use $(i|A_i), i\in [n]$, to indicate that legitimate receiver~$i$ has messages ${\bf x}_{A_i}$ and wants to decode message ${\bf x}_i$. With a slight abuse of notation, $(e|A_e)$ denotes what the eavesdropper has, but note that it wants to learn all other messages. A $({\bf t},r) = ((t_1, \ldots, t_n), r)$ index code is defined by: \begin{itemize} \item One encoder at the server $\phi: \prod_{i=1}^n \{0,1\}^{t_i} \rightarrow \{0,1\}^r$ that uses all messages to generate the transmitted codeword ${\bf Y}\triangleq\phi({\bf X}_1,\ldots, {\bf X}_n)$ of length $r$ bits. \item For each legitimate receiver $i\in [n]$, a decoder function $\psi_i: \{0,1\}^r \times \prod_{k\in A_i} \{0,1\}^{t_k} \rightarrow\{0,1\}^{t_i}$ that takes the received sequence ${\bf Y}$ together with the side information at receiver $i$ and maps them to ${\hat{{\bf X}}}_i\triangleq\psi_i({\bf Y},{\bf X}_{A_i})$. 
\end{itemize} A rate tuple $(R_1, \ldots, R_n)$ is said to be \emph{securely} achievable if for every $\delta,\epsilon >0$, there exists a $({\bf t},r)$ index code such that the following three constraints are met: \begin{align} &\textrm{Rate: }\qquad\,\,\label{eq:rate_def} R_i \le \textstyle{\frac{t_i}{r}}, \quad i\in [n]; \\ &\textrm{Decoding: }\,\,\label{eq:error} \P\{({\hat{{\bf X}}}_1,\ldots, {\hat{{\bf X}}}_n) \ne ({\bf X}_1, \ldots, {\bf X}_n)\} \le \epsilon;\\ &\textrm{Security: }\,\,\,\,\,\,\label{eq:secu} I({\bf X}_i; {\bf Y} |{\bf X}_{A_e}) \le \delta, \quad i \in A_e^c. \end{align} We define the secure capacity region $\mathcal{C}^{\mathrm{s}}$ as the closure of the set of all securely achievable rate tuples. Note that for a sequence of codes operating at a securely achievable rate tuple, the decoding condition along with Fano's inequality ensure that at receiver $i$, $\lim_{\epsilon\to0} H({\bf X}_i|{\bf Y},{\bf X}_{A_i})= 0$. Note that an index coding problem is not securely feasible when there is no securely achievable rate tuple. This happens when $A_i \subseteq A_e$ for some $i \in A^c_e$, that is, when the side information of the eavesdropper is as strong as or stronger than that of some receiver. Otherwise, the secure index coding problem is said to be \emph{securely feasible}. \section{Polymatroidal Outer Bound} \label{sec:secure_outer} We present the following outer bound to the secure index coding capacity region.
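The feasibility condition is easy to check mechanically: a problem is securely infeasible exactly when $A_i \subseteq A_e$ for some $i \in A_e^c$. A small sketch (function names ours):

```python
def securely_feasible(A, A_e, n):
    """A problem (i|A_i), (e|A_e) is securely feasible iff no receiver demanding
    a message unknown to the eavesdropper has side information A_i contained
    in the eavesdropper's side information A_e."""
    wanted_by_eve = set(range(1, n + 1)) - set(A_e)
    return all(not set(A[i]) <= set(A_e) for i in wanted_by_eve)

# (1|-), (2|3), (3|2) with eavesdropper (e|1) is securely feasible
assert securely_feasible({1: [], 2: [3], 3: [2]}, [1], 3)
# but if receiver 2's side information were {1} = A_e, the problem is infeasible
assert not securely_feasible({1: [], 2: [1], 3: [2]}, [1], 3)
```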
\begin{theorem} [Secure Outer Bound]\label{theo:SPM} Any securely achievable rate tuple for the index coding problem $(i|A_i)$, $i \in [n]$, and $(e|A_e)$ must lie in $\mathcal{R}_{\mathrm{g}}^\mathrm{s}$ that consists of all rate tuples satisfying \begin{align}\label{eq:R:sec} R_i = g(B_i \cup \{i\})-g(B_i), ~~i \in [n], \end{align} for some set function $g: 2^{[n]}\rightarrow [0,1]$ such that for any $J\subseteq [n]$ and $i,k\notin J$, \begin{align} \label{pm_as_1} &g(\emptyset) = 0,\\ \label{pm_as_2} &g([n]) \leq 1, \\ \label{pm_as_3} &g(J) \leq g(J,\{i\}), \\ \label{pm_as_4} &g(J) + g(J\cup \{i,k\}) \leq g(J \cup \{i\}) + g(J \cup \{k\}),\\ \label{pm_as_5} &g(B_i \cup \{i\}) - g(B_i) = g(\{i\}), \end{align} and additionally for $i\in A_e^c$, \begin{align} \label{item:security} &g(A_e^c\setminus \{i\}) \geq g(A_e^c). \end{align} $\hfill \blacksquare$ \end{theorem} The proof is given in Appendix \ref{proof:SPM}. A few remarks are in order. First, we note that the rate constraint \eqref{eq:R:sec} and polymatroidal constraints \eqref{pm_as_1}-\eqref{pm_as_4} appeared in \cite{arbabjolfaei2013capacity} to form an outer bound on the non-secure index coding capacity region\footnote{In \cite{arbabjolfaei2013capacity}, inequalities in \eqref{eq:R:sec} and equality in \eqref{pm_as_2} were used. This is immaterial to the non-secure capacity region outer bound. See Remark \ref{rem:eqaulaity} at the end of this section for its impact on the secure counterpart.}, which is known to be tight for all index coding problems with $n \le 5$ messages. Constraint \eqref{pm_as_5} captures \emph{additional decoding conditions} for each legitimate receiver $i \in [n]$, since \begin{align} H({\bf X}_i|{\bf Y},{\bf X}_{A_i}) = H({\bf X}_i|{\bf Y},{\bf X}_{A_i},{\bf X}_{C}) = 0, \end{align} for any $C \subseteq B_i$. Due to the submodularity constraint \eqref{pm_as_4}, it suffices to write the additional decoding condition for $C = B_i$ only. See Appendix \ref{proof:SPM} for more details. 
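For small instances, the constraints of Theorem \ref{theo:SPM} can be verified exhaustively for a candidate set function $g$. The checker below is our own illustration: it tests the boundary, monotonicity, submodularity, decoding, and security constraints \eqref{pm_as_1}--\eqref{item:security}, and the candidate $g$ in the test corresponds to the rate tuple $(1/2, 1/2, 1/2)$ of Example \ref{exp:SPM}.

```python
from fractions import Fraction
from itertools import combinations

def subsets(n):
    items = range(1, n + 1)
    return [frozenset(c) for r in range(n + 1) for c in combinations(items, r)]

def check_g(g, n, B, A_e):
    """Check the constraints of Theorem 1 for a candidate set function g,
    given the interfering sets B_i and the eavesdropper side information A_e."""
    ok = g[frozenset()] == 0 and g[frozenset(range(1, n + 1))] <= 1
    for J in subsets(n):
        for i in range(1, n + 1):
            if i in J:
                continue
            ok &= g[J] <= g[J | {i}]                      # monotonicity
            for k in range(i + 1, n + 1):
                if k not in J:                            # submodularity
                    ok &= g[J] + g[J | {i, k}] <= g[J | {i}] + g[J | {k}]
    for i in range(1, n + 1):                             # decoding condition
        Bi = frozenset(B[i])
        ok &= g[Bi | {i}] - g[Bi] == g[frozenset({i})]
    unknown = frozenset(range(1, n + 1)) - frozenset(A_e)
    for i in unknown:                                     # security condition
        ok &= g[unknown - {i}] >= g[unknown]
    return bool(ok)

# Candidate g for Example 2: (1|-), (2|3), (3|2), (e|1), rates (1/2, 1/2, 1/2)
h = Fraction(1, 2)
g = {frozenset(): Fraction(0), frozenset({1}): h, frozenset({2}): h,
     frozenset({3}): h, frozenset({1, 2}): Fraction(1),
     frozenset({1, 3}): Fraction(1), frozenset({2, 3}): h,
     frozenset({1, 2, 3}): Fraction(1)}
assert check_g(g, 3, {1: {2, 3}, 2: {1}, 3: {1}}, {1})
```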
We note that the same constraint appeared in a similar outer bound to the non-secure index coding capacity region in \cite{lexico}. Finally, \eqref{item:security} captures the security constraint \eqref{eq:secu} with $\delta = 0$. An \textit{explicit} outer bound to the index coding capacity region is derived from Theorem \ref{theo:SPM} by means of Fourier-Motzkin elimination (FME)~\cite{elgamal_yhk}, eliminating the variables $g(J), J\subseteq [n]$, which are viewed purely as intermediate variables. Let us consider a non-secure and a secure example. \begin{example} \label{exp:NonSPM} Consider the \emph{non-secure} index coding problem $(1|-), (2|3), (3|2)$ in the absence of the eavesdropper. Invoking Theorem \ref{theo:SPM} without \eqref{item:security} and eliminating the variables $g(J), J\subseteq [n]$, via FME yields \begin{align} R_1 + R_2 \leq 1, ~~~R_1 + R_3 \leq 1, \end{align} which is the explicit outer bound on the non-secure index coding capacity region. $\hfill \blacksquare$ \end{example} \begin{example} \label{exp:SPM} Consider the index coding problem $(1|-), (2|3), (3|2)$, with the eavesdropper~$(e|1)$. The explicit outer bound on the secure capacity region derived from Theorem \ref{theo:SPM} is \begin{align*} \qquad\qquad\qquad R_2 = R_3, ~~ R_1+R_3 \leq 1. \qquad\qquad\qquad\quad \blacksquare \end{align*} \end{example} In Example \ref{exp:SPM}, the security requirement imposes the equality $R_2 = R_3$. Since the eavesdropper already has ${\bf x}_1$ as side information, it is not possible to protect ${\bf x}_2$ or ${\bf x}_3$ using ${\bf x}_1$. Therefore, the only way to guarantee secrecy is to protect ${\bf x}_2$ with ${\bf x}_3$ and vice versa at the same rate. This can be achieved using a simple linear code, ${\bf x}_1$ (of length $t_1$), ${\bf x}_2 \oplus {\bf x}_3$ (of length $t_2$), illustrated below.
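The secrecy of this linear code can be confirmed by exhaustive enumeration: given the eavesdropper's observation $({\bf y}, {\bf x}_1)$, the posterior on ${\bf x}_2$ remains uniform because ${\bf x}_3$ acts as a one-time pad (and symmetrically for ${\bf x}_3$). A one-bit sketch:

```python
from collections import Counter
from itertools import product

def encode(x1, x2, x3):
    # the linear code of Example 2: transmit x1 and x2 XOR x3
    return (x1, x2 ^ x3)

posteriors = {}
for x1, x2, x3 in product([0, 1], repeat=3):
    observation = (encode(x1, x2, x3), x1)   # codeword plus eavesdropper side info
    posteriors.setdefault(observation, Counter())[x2] += 1

# for every observation, x2 = 0 and x2 = 1 are equally likely: zero leakage
assert all(cnt[0] == cnt[1] for cnt in posteriors.values())
```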
\begin{figure} [b] \vspace{-8mm} \begin{center} \includegraphics[scale=0.9]{linear_code_exp2} \vspace{-1mm} \caption{A linear code achieves the capacity region for Example \ref{exp:SPM}.} \vspace{-1mm} \label{fig:exp2_code} \end{center} \end{figure} We summarize a few important observations. \begin{enumerate} \item The outer bound on the non-secure capacity region of the index coding problem is at least as large as that on the secure capacity region. \item In some secure index coding problems, it is possible for a stronger receiver (with more side information) to have its rate bounded by that of a weaker receiver (with less side information). Equivalently, it is possible that \begin{equation} \nonumber \exists i,j \in[n] \text{ such that } A_i \subset A_j, R_i \geq R_j. \end{equation} For the problem $(1|3),(2|1,3),(3|1),(e|-)$, the outer bound on the secure capacity region stipulates $R_2 \leq R_3$, while $A_3 =\{3\} \subset \{1, 3\} = A_2$. \item The additional decoding constraint \eqref{pm_as_5} in Theorem \ref{theo:SPM} is essential for deriving a tighter secure capacity region outer bound for some index coding problems. For example, if we exclude \eqref{pm_as_5}, the outer bound for the index coding problem $ (1|3),(2|3),(3|2), (e|-)$ is \begin{equation} \nonumber R_1+R_2\leq 1, \quad R_2 = R_3. \end{equation} However, with \eqref{pm_as_5} included, the outer bound is \begin{equation} \nonumber R_1+R_2\leq 1, \quad R_2 = R_3, \quad R_1 \leq R_3. \end{equation} \item To obtain equality relationships between the message rates, $R_i = g(B_i \cup \{i\})-g(B_i)$ should be used in Theorem \ref{theo:SPM}, instead of $R_i \leq g(B_i \cup \{i\})-g(B_i)$. For example, if the latter is used, then $R_2 = R_3$ will not be captured in Example \ref{exp:SPM}. 
To ensure the convex envelope of the capacity outer bound is obtained, we then use $g([n]) \leq 1$ in Theorem \ref{theo:SPM}, instead of $g([n]) = 1$.\label{rem:eqaulaity} \end{enumerate} \section{Secure Composite Coding Inner Bound} \label{sec:SCOMP} Before proposing the secure composite coding scheme, we first recap the original composite coding scheme established in \cite{arbabjolfaei2013capacity}. For ease of exposition, the scheme is described for a fixed decoding configuration, which is a tuple of subsets of messages ${\bf D} = (D_i, i\in[n])$ such that for each $i\in [n]$, $D_i\subseteq [n] \backslash A_i$ and $i\in D_i$. Let $r \in \mathbb{N}$. Let for each $i \in [n]$, $t_i = \lceil{rR_i\rceil}$. Denote $s_K = \lceil{rS_K\rceil}$, where $S_K\in[0,1]$ is the rate of composite index for subset $K$. By convention, $S_\emptyset = 0$. \textbf{Codebook generation:} \textbf{(1)} For each $K \subseteq [n]$ and ${\bf x}_K$, a corresponding composite index $W_{K}({\bf x}_K)$ is drawn uniformly at random from $[2^{s_K}]$. \textbf{(2)} For every tuple $(w_{K}, K\in 2^{[n]})$, the codeword to be transmitted, ${\bf Y}((w_{K}, K\in 2^{[n]}))$, is drawn uniformly at random from $[2^r]$. The random codebooks (message-to-composite indices and composite indices-to-codeword maps) are revealed to all parties. \textbf{Encoding:} To communicate a realization ${\bf x}_{[n]}$, the transmitter sends ${\bf Y}((W_{K}({\bf x}_K), K\in 2^{[n]}))$. \textbf{Decoding:} Upon receiving the codeword realization $\mathbf y$: \textbf{(1)} Each legitimate receiver $i$ finds the unique tuple of composite indices $(\hat{w}_{K}, K\in 2^{[n]})$ such that ${\bf y} = {\bf Y}((\hat{w}_{K}, K\in 2^{[n]}))$, and declares an error if a unique tuple is not found. \textbf{(2)} Assuming composite index tuple decoding is successful, receiver $i$ finds the unique message tuple ${\hat{\xv}}$ such that $w_{K} = W_{K}({\hat{\xv}}_K)$, for all $K \subseteq D_i \cup A_i$. 
An error is declared if a unique tuple is not found. The following result from \cite{arbabjolfaei2013capacity} quantifies the constraints on the message rates and composite index rates for successful decoding (in the non-secure setting). \begin{proposition}\label{thm:compcod} A rate tuple $(R_i, i\in [n])$ is achievable for the index coding problem $(i| A_i)$, $i \in [n]$, if for each $i\in [n]$: \begin{align} \label{deco_1} &\sum_{J \not\subseteq A_i} S_J < 1,\\ \label{deco_2} &\sum_{i \in K} R_i < \sum_{J \subseteq D_i \cup A_i : J \cap K \neq \emptyset} S_J, \quad K \subseteq D_i. \end{align} \end{proposition} Now we move on to develop the \emph{secure} composite coding scheme. Recall the security condition \begin{align} \label{secure_eq} I({\bf X}_i ; {\bf Y}|{\bf X}_{A_e}) < \delta, ~~ i \in A_e^c. \end{align} Using the chain rule for mutual information and the independence between different messages we have \begin{align}\label{secure_eq_new} I({\bf X}_i; {\bf Y}, {\bf X}_{A_e}) < \delta, ~~ i \in A_e^c. \end{align} Since the eavesdropper can generate all composite indices $\{{\bf w}_J: J \subseteq A_e\}$ from ${\bf X}_{A_e}$, it will be useful to define $T = \{K: K \subseteq [n], K \not\subseteq A_e\}$. Then for any $Q \subseteq T$, $P_Q = \bigcup_{J \in Q} J \backslash A_e$ is the set of messages from $Q$ that are unknown to the eavesdropper. We assume that the eavesdropper learns the codebook and is also able to decode all the composite indices in the first step of decoding. Condition \eqref{secure_eq_new} becomes: \begin{align} \label{final_secure_eq2} I({\bf X}_i ; \{W_K: K \in T\}, {\bf X}_{A_e}) &< \delta, ~~ i \in A_e^c. \end{align} Applying Theorem 1 from \cite{6283010} and Lemma 2.7 from \cite{csiszar2011information}, we obtain the following random-coding based achievable rate region. The proof will be provided in Appendix \ref{proof:SCOMP} for the more general secure enhanced composite coding scheme described in Proposition \ref{theo:SCOMPenhanced}. 
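The index sets $T$ and $P_Q$ used in the security analysis can be computed mechanically. The small sketch below (helper names ours) illustrates them for a three-message problem in which the eavesdropper knows message 1.

```python
from itertools import combinations

def powerset(items):
    items = list(items)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def composites_T(n, A_e):
    """T: composite indices the eavesdropper cannot form itself,
    i.e. K subseteq [n] with K not a subset of A_e."""
    A_e = frozenset(A_e)
    return [K for K in powerset(range(1, n + 1)) if not K <= A_e]

def P(Q, A_e):
    """P_Q: messages unknown to the eavesdropper that feed the composites in Q."""
    return frozenset().union(*Q) - frozenset(A_e) if Q else frozenset()

T = composites_T(3, {1})
assert frozenset({1}) not in T and frozenset({2}) in T and frozenset({1, 2}) in T
assert P([frozenset({1, 2}), frozenset({3})], {1}) == frozenset({2, 3})
```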
\begin{theorem} \label{theo:SCOMP} A rate tuple $(R_i, i\in [n])$ is securely achievable for the index coding problem $(i|A_i), i\in [n], (e|A_e)$ if \begin{align} \label{deco_1s} & \sum_{J \not\subseteq A_i} S_J< 1, \quad i\in [n],\\ \label{deco_2s} &\sum_{i \in K} R_i< \sum_{\substack{J \subseteq D_i \cup A_i \\ J \cap K \neq \emptyset}} S_J, \quad K \subseteq D_i, i\in [n],\\ \label{eq:sec_comp} &\sum_{\substack{K \subseteq P_Q\cup A_e\\ K \not\subseteq A_e}} S_K < \sum_{j \in (P_Q\backslash \{i\})} R_j, \quad Q \subseteq T, i \in A_e^c. \end{align} \end{theorem} Note that when Theorem \ref{theo:SCOMP} gives an inequality of the form $S_J <0$, we set $S_J = 0$. For each index coding problem with $n\leq5$ messages, a single \emph{natural} decoding configuration $\underline {\bf D}$ \cite{liusimplified} was shown to be sufficient to achieve the non-secure capacity region. We will also use the natural decoding configuration in this paper, which will be sufficient to achieve the secure capacity region for all index coding problems with $n \leq 4$ messages. However, more than one decoding configuration might be necessary for larger problems. Secure composite coding with multiple decoding configurations is detailed below. \subsubsection{Secure Enhanced Composite Coding Scheme} Following similar lines to \cite{liu2017capacity}, let $\Delta$ be the set of all decoding configurations, i.e., $\Delta = \{{\bf D}: D_i\subseteq [n] \backslash A_i, i\in D_i\}$. Let $r \in \mathbb{N}$, and for each ${\bf D}\in\Delta$ and $i \in [n]$, let $t_i({\bf D}) = \lceil rR_i({\bf D})\rceil$, where $R_i({\bf D})$ is the rate of message $i$ communicated via decoding configuration ${\bf D}$. Let ${\bf X}_i({\bf D})\in [2^{t_i({\bf D})}]$ be the part of message $i$ communicated via decoding configuration ${\bf D}$. For each $K \subseteq [n]$ and ${\bf D}\in \Delta$, let $S_K({\bf D})\in[0,1]$.
Denote $s_K({\bf D}) = \lceil rS_K({\bf D})\rceil$, $K \subseteq [n]$, where $S_K({\bf D})$ is the rate of the composite index for subset $K$ and configuration ${\bf D}$. By convention, $S_\emptyset({\bf D}) = 0$ for each ${\bf D}\in\Delta$. \textbf{Codebook generation:} \textbf{(1)} For each $K \subseteq [n]$, ${\bf D}\in\Delta$, and ${\bf x}_K({\bf D})$, a corresponding composite index $W_{K,{\bf D}}({\bf x}_K({\bf D}))$ is drawn uniformly at random from $[2^{s_K({\bf D})}]$. \textbf{(2)} For every tuple $(w_{K,{\bf D}}, (K,{\bf D})\in 2^{[n]}\times \Delta)$, the codeword to be transmitted, ${\bf Y}((w_{K,{\bf D}}, (K,{\bf D})\in 2^{[n]}\times \Delta))$, is drawn uniformly at random from $[2^r]$. The random codebooks (message-to-composite-index and composite-index-to-codeword maps) are revealed to all parties. \textbf{Encoding:} To communicate a realization ${\bf x}_{[n]}$, the transmitter sends ${\bf Y}((W_{K,{\bf D}}({\bf x}_K({\bf D})), (K,{\bf D})\in 2^{[n]}\times \Delta))$. \textbf{Decoding:} Upon receiving the codeword realization $\mathbf y$: \textbf{(1)} Each legitimate receiver $i$ finds the unique tuple of composite indices $(\hat{w}_{K,{\bf D}}, (K,{\bf D})\in 2^{[n]}\times \Delta)$ such that ${\bf y} = {\bf Y}((\hat{w}_{K,{\bf D}}, (K,{\bf D})\in 2^{[n]}\times \Delta))$, and declares an error if a unique tuple is not found. \textbf{(2)} Assuming composite index tuple decoding is successful, for each ${\bf D}\in\Delta$, receiver $i$ finds the unique message tuple ${\hat{\xv}}_{D_i}({\bf D})$ such that $\hat{w}_{K,{\bf D}} = W_{K,{\bf D}}({\hat{\xv}}_K({\bf D}))$, for all $K \subseteq D_i \cup A_i$. An error is declared if a unique tuple is not found.
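As a concrete illustration of the two-step random coding structure above, the following toy Python sketch instantiates the scheme for a single (hypothetical) decoding configuration with $n = 2$ messages and arbitrarily chosen small index rates; it is a structural sketch only, not a capacity-achieving choice of parameters.

```python
import itertools
import random

random.seed(0)

# Toy instance: n = 2 messages, one (hypothetical) decoding configuration.
# t[i] = bits per message, s[K] = bits per composite index, r = codeword bits.
n = 2
t = {1: 2, 2: 2}
s = {frozenset({1}): 2, frozenset({2}): 2, frozenset({1, 2}): 1}
r = 6
subsets = list(s)

# Codebook step (1): each message tuple x_K gets an i.i.d. uniform
# composite index W_K(x_K) in [2^{s_K}].
W = {K: {x: random.randrange(2 ** s[K])
         for x in itertools.product(*(range(2 ** t[i]) for i in sorted(K)))}
     for K in subsets}

# Codebook step (2): each composite-index tuple gets an i.i.d. uniform
# r-bit codeword.
Y = {w: random.randrange(2 ** r)
     for w in itertools.product(*(range(2 ** s[K]) for K in subsets))}

def encode(x):
    """Map the message tuple x to its codeword via the composite indices."""
    w = tuple(W[K][tuple(x[i - 1] for i in sorted(K))] for K in subsets)
    return Y[w], w

def decode_step1(y):
    """All composite-index tuples consistent with the received codeword."""
    return [w for w, cw in Y.items() if cw == y]

def decode_step2(w, x2):
    """Receiver 1 (side information x_2): candidates for x_1 matching all W_K."""
    return [x1 for x1 in range(2 ** t[1])
            if all(W[K][tuple((x1, x2)[i - 1] for i in sorted(K))] == w[j]
                   for j, K in enumerate(subsets))]

x = (3, 1)
y, w_true = encode(x)
assert w_true in decode_step1(y)           # the true tuple is always consistent
assert x[0] in decode_step2(w_true, x[1])  # the true message is always a candidate
```

With these toy rates the uniqueness required in each decoding step can fail (that is precisely the error event controlled by the rate constraints of Proposition \ref{theo:SCOMPenhanced}); the assertions only confirm that the transmitted composite tuple and message always remain among the candidates.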
\begin{proposition} \label{theo:SCOMPenhanced} A rate tuple $(R_i, i\in [n])$ is securely achievable for the index coding problem $(i|A_i), i\in [n], (e|A_e)$ if \begin{align} &R_i = \sum_{{\bf D} \in \Delta} R_i({\bf D}), \quad i\in [n], \\ \label{deco_1se} &\sum_{{\bf D} \in \Delta} \sum_{J \not\subseteq A_i} S_J({\bf D}) < 1, \quad i\in [n],\\ \label{deco_2se} &\sum_{i \in K} R_i({\bf D}) < \sum_{\substack{J \subseteq D_i \cup A_i \\ J \cap K \neq \emptyset}} S_J({\bf D}), \quad K \subseteq D_i, i\in [n],\\ \label{eq:sec_comp_e} &\sum_{\substack{K \subseteq P_Q\cup A_e\\ K \not\subseteq A_e}} S_K({\bf D}) < \sum_{j \in (P_Q\backslash \{i\})} R_j({\bf D}), \quad Q \subseteq T, i \in A_e^c. \end{align} \end{proposition} Note that in Proposition \ref{theo:SCOMPenhanced}, if we set $S_K({\bf D})$, $K \in 2^{[n]}$, and $R_j({\bf D})$, $j \in [n]$, to zero for all but one particular ${\bf D}$, we recover Theorem \ref{theo:SCOMP}. \subsection{Secure Composite Coding with a Secret Key} Theorem \ref{theo:SCOMP} and Proposition \ref{theo:SCOMPenhanced} may generate conflicting constraints for some index coding problems, as shown below. \begin{example} \label{exp:SCOMP} Consider the same setting as in Example \ref{exp:NonSPM}. Set $D_1 = \{1\}$, $D_2 = \{1,2\}$, and $D_3 = \{1,3\}$. The set of active inequalities generated by Theorem \ref{theo:SCOMP} is \begin{align*} &S_1+S_2+S_{12}+S_3+S_{13}+S_{23}+S_{123} < 1, \notag \\ &R_1 < S_1,\notag\\ &R_2 < S_2+S_{12}+S_{23}+S_{123}, \notag\\ &R_3 < S_3+S_{13}+S_{23}+S_{123},\\ &R_1+R_2 < S_1+S_2+S_{12}+S_{13}+S_{23}+S_{123},\notag \\ &R_1+R_3 < S_1+S_{12}+S_3+S_{13}+S_{23}+S_{123},\notag \\ &S_2+S_{12}+S_{13}+S_{23}+S_{123} < R_2,\\ &S_{12}+S_3+S_{13}+S_{23}+S_{123} < R_3,\\ ~&S_2 =0, ~ S_3 = 0, ~S_{12}=0,~S_{13}=0. \notag \end{align*} Clearly, there are conflicting constraints for $R_2$ and $R_3$. $\hfill\blacksquare$ \end{example} For $n=3$ messages, there are a total of 20 securely feasible index coding problems.
Of these, only the problem $(1|2,3), (2|1,3), (3|1,2), (e|-)$ does not have conflicting inequalities. For $n=4$ messages, 43 out of 833 securely feasible index coding problems have no conflicting inequalities. For all such non-conflicting cases, the secure composite coding inner bound matches the secure polymatroidal outer bound, thereby establishing the corresponding secure capacity region. In each of these problems, every receiver $i$ whose message must be protected from the eavesdropper knows at least two more messages as side information than the eavesdropper, i.e., $\forall \, i \in A_e^c$, $|A_i \backslash A_e| \geq 2$. We now show that the conflicting cases can be resolved by using a secret key of arbitrarily small rate shared between the server and legitimate receivers. Assume there is an independent secret key ${\bf M}$ at rate $\zeta_M$, shared between the transmitter and all the legitimate receivers. For each $K \subseteq [n]$, ${\bf x}_{K}({\bf D}) \cup {\bf M}$ is mapped into a composite index $W_{K,M,{\bf D}}({\bf x}_K({\bf D})\cup {\bf M})$ drawn uniformly at random from $[2^{s_{K,M}({\bf D})}]$. The second step of codebook generation, encoding, and decoding are the same as before. \begin{theorem} \label{theo:SCOMP_KEY} A rate tuple $(R_i, i\in [n])$ is securely achievable for the index coding problem $(i|A_i), i\in [n], (e|A_e)$ if \begin{align} &R_i = \sum_{{\bf D} \in \Delta} R_i({\bf D}), \quad i\in [n], \\ \label{deco_1skey} &\sum_{{\bf D} \in \Delta} \sum_{J \not\subseteq A_i} S_{J,M}({\bf D}) < 1, \quad i\in [n],\\ \label{deco_2skey} &\sum_{i \in K} R_i({\bf D}) < \sum_{\substack{J \subseteq D_i \cup A_i \\ J \cap K \neq \emptyset}} S_{J,M}({\bf D}), \quad K \subseteq D_i, i\in [n],\\ \label{eq:sec_compskey} &\sum_{\substack{K \subseteq P_Q \cup A_e\\ K \not\subseteq A_e}} S_{K,M}({\bf D}) < \sum_{j \in (P_Q\backslash \{i\})} R_j({\bf D})+\zeta_M,\\\nonumber &\quad\quad \quad Q \subseteq T, i \in A_e^c.
\end{align} \end{theorem} For the secure index coding problem described in Example \ref{exp:SCOMP}, the secure achievable rate region with a secret key becomes \begin{align} &R_3 - \zeta_M < R_2 < R_3 + \zeta_M, \notag \\ &R_1+ R_2 < 1, \quad R_1 + R_3 < 1, \notag \end{align} which matches the polymatroidal outer bound as $\zeta_M \rightarrow 0$. We now summarize our key observations. \begin{enumerate} \item For cases where each receiver $i\in A_e^c$ has at least two more messages as side information than the eavesdropper to protect its desired message (i.e., $\forall\, i\in A_e^c, |A_i \backslash A_e| \geq 2$), the proposed secure composite coding achieves capacity without the need for a shared secret key. For $n=3$, there is 1 such problem out of 20 securely feasible problems. For $n=4$, there are 43 such problems out of 833 securely feasible problems. \item For the remaining cases, conflicting inequalities can be resolved by means of a secret key of vanishingly small rate. The secret key acts as a second message unknown to the eavesdropper, ensuring $\forall\, i\in A_e^c, |A_i \backslash A_e| \geq 2$. \ifitw \item Table \ref{table:SPM-3} lists the secure capacity region for 20 securely feasible index coding problems with $n=3$ messages. \else \item Appendix \ref{app:table3} lists the secure capacity region for all 20 securely feasible index coding problems with $n=3$ messages. \fi \end{enumerate} \ifitw \else \section{Equivalence between Two Capacity Region Outer Bounds} \label{sec:equi} First, let us specialize Theorem \ref{theo:SPM} to the non-secure index coding.
\begin{corollary} [Non-secure Outer Bound]\label{theo:PM} Any achievable rate tuple for the index coding problem $(i|A_i)$, $i \in [n]$ must lie in $\mathcal{R}_{\mathrm{g}}$, which consists of all rate tuples satisfying \begin{align}\label{pm_a_0} R_i \leq g(B_i \cup \{i\})-g(B_i), ~~i \in [n], \end{align} for some set function $g: 2^{[n]}\rightarrow [0,1]$ such that for any $J\subseteq [n]$ and $i,k\notin J$, \begin{align} \label{pm_a_1} &g(\emptyset) = 0,\\ \label{pm_a_2} &g([n]) = 1, \\ \label{pm_a_3} &g(J) \leq g(J \cup \{i\}), \\ \label{pm_a_4} &g(J) + g(J\cup \{i,k\}) \leq g(J \cup \{i\}) + g(J \cup \{k\}),\\ \label{pm_a_5} &g(B_i \cup \{i\}) - g(B_i) = g(\{i\}). \end{align} $\hfill \blacksquare$ \end{corollary} We have used inequalities in \eqref{pm_a_0} and equality in \eqref{pm_a_2}.\footnote{This is technically needed in the proof of Theorem \ref{Prop_hg_equi}, but it is immaterial to the outer bound itself.} Let ${\bf X}_0 ={\bf Y}$ denote a random variable over $\{0,1\}^r$ representing the output of the index code. Denote $N = \{0\} \cup [n]$ and define the entropic set function $h: 2^{\{0\}\cup [n]} \rightarrow \mathbb{R}_{\geq 0}$ as \begin{align} h(J) = H({\bf X}_J). \end{align} The following is an outer bound on the non-secure capacity region of the index coding problem that captures all Shannon-type inequalities of the entropy function.
\begin{theorem} \label{thm:h:outerbound} Any achievable rate tuple for the index coding problem $(i|A_i)$, $i \in [n]$ must lie in $\mathcal{R}_{\mathrm{h}}$, which consists of all rate tuples $(R_i, i\in [n])$ that satisfy \begin{align} \label{SH_rate} R_i \leq \frac{h(\{i\})}{h(\{0\})}, ~~i \in [n], \end{align} for some set function $h: 2^{\{0\}\cup [n]} \rightarrow \mathbb{R}_{\geq 0}$ such that \begin{align} \label{SH_1} &h(\emptyset) = 0, \\ \label{SH_2} &h([n]) = \sum_{i\in [n]} h(\{i\}), \\ \label{SH_4} &h(N \setminus \{i\}) = h(N) = 1,~~i\in [n],\\ \label{SH_5} &h(\{i\} \cup A_i \cup \{0\}) = h(A_i \cup \{0\}), ~~i\in [n], \\ \label{SH_6} &h(J) \leq h(J\cup \{i\}),\quad J\subseteq N, i\in N\backslash J,\\ \label{SH_7} &h(J) + h(J \cup \{i,k\}) \leq h(J \cup \{i\}) + h(J \cup \{k\}),\\ &\phantom{{} = ~~~~} J\subseteq N,i,k \not\in J, i\neq k. \notag \end{align} \end{theorem} We now state the main result of this section. \begin{theorem} \label{Prop_hg_equi} $\mathcal{R}_{\mathrm{h}} = \mathcal{R}_{\mathrm{g}}$. \end{theorem} The proof of Theorem \ref{Prop_hg_equi} is shown in detail in Appendix \ref{proof:g_to_h}, and an outline is presented here. \begin{itemize} \item To prove $\mathcal{R}_{\mathrm{g}} \subseteq \mathcal{R}_{\mathrm{h}}$, we first take a set function $g$ which satisfies \eqref{pm_a_0} to \eqref{pm_a_5}. We then define $h: 2^{\{0\} \cup [n]} \rightarrow \mathbb{R}_{\geq 0}$ as follows. For $J \subseteq [n], i\in [n]$, \begin{align} \label{g_to_h_1} h(\emptyset) &= 0, \\ \label{g_to_h_2} h(\{i\}) &= \frac{g(\{i\})}{\sum_{j=1}^n g(\{j\})}, \\ \label{g_to_h_3} h(J) &= \sum_{i \in J} h(\{i\}),\\ \label{g_to_h_4} h(J \cup \{0\}) &= h(J)+\frac{g(\overline{J})}{\sum_{j=1}^n g(\{j\})}, \end{align} where for notational convenience, we use $\overline{J}$ to denote $J^c = [n]\setminus J$. We then prove \eqref{SH_rate}--\eqref{SH_7} using the constraints of Corollary \ref{theo:PM}.
\item To prove $\mathcal{R}_{\mathrm{h}} \subseteq \mathcal{R}_{\mathrm{g}}$, we take a set function $h$ that satisfies \eqref{SH_rate} to \eqref{SH_7} and define \begin{align}\label{h_to_g} g(J) = \frac{h(\overline{J} \cup \{0\}) - h(\overline{J})}{h(\{0\})}, \quad J \subseteq [n]. \end{align} We then prove \eqref{pm_a_0} to \eqref{pm_a_5} using the constraints of Theorem \ref{thm:h:outerbound}. \end{itemize} \fi \newcommand{\noopsort}[1]{}
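The first direction of the outline can be sanity-checked mechanically: take a concrete $g$ satisfying \eqref{pm_a_1}--\eqref{pm_a_5} (the modular choice $g(J) = |J|/n$ below), build $h$ via \eqref{g_to_h_1}--\eqref{g_to_h_4}, and verify the constraints of Theorem \ref{thm:h:outerbound} by exhaustive enumeration. A minimal Python sketch for $n=3$, with a hypothetical side information pattern $A_i$:

```python
from itertools import combinations

n = 3
full = frozenset(range(1, n + 1))
N = full | {0}

def powerset(S):
    S = sorted(S)
    return [frozenset(c) for m in range(len(S) + 1) for c in combinations(S, m)]

# Modular g: g(J) = |J|/n satisfies (pm_a_1)-(pm_a_5) for any B_i.
g = {J: len(J) / n for J in powerset(full)}

# Hypothetical side information sets of a 3-message problem (illustrative only).
A = {1: {2}, 2: {3}, 3: {1}}

norm = sum(g[frozenset({i})] for i in full)

# Build h on all subsets of N = {0} u [n] via (g_to_h_1)-(g_to_h_4).
h = {}
for J in powerset(full):
    h[J] = sum(g[frozenset({i})] for i in J) / norm   # (g_to_h_1)-(g_to_h_3)
    h[J | {0}] = h[J] + g[full - J] / norm            # (g_to_h_4)

eps = 1e-12
assert abs(h[frozenset()]) < eps                                   # (SH_1)
assert abs(h[full] - sum(h[frozenset({i})] for i in full)) < eps   # (SH_2)
for i in full:                                                     # (SH_4)
    assert abs(h[N - {i}] - 1) < eps and abs(h[N] - 1) < eps
for i in full:                                                     # (SH_5)
    Ai0 = frozenset(A[i]) | {0}
    assert abs(h[Ai0 | {i}] - h[Ai0]) < eps
for J in powerset(N):                                              # (SH_6), (SH_7)
    for i in N - J:
        assert h[J] <= h[J | {i}] + eps
        for k in (N - J) - {i}:
            assert h[J] + h[J | {i, k}] <= h[J | {i}] + h[J | {k}] + eps
print("h derived from g satisfies all constraints of the h-outer bound")
```

This is only a spot check for one admissible $g$, of course; the inclusion $\mathcal{R}_{\mathrm{g}} \subseteq \mathcal{R}_{\mathrm{h}}$ itself is proved in Appendix \ref{proof:g_to_h}.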
\section{Introduction} As Adamchik has expressed \cite{Adamchik2002} (cf.\ \cite{CampbellChen2022}), there is a great history concerning the study of series of the following form: \begin{equation}\label{Sfunction} S(r) = \sum_{k=0}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{ 1 }{ k + r }. \end{equation} As in \cite{Adamchik2002,CampbellChen2022}, we recall that Ramanujan was interested in the $S$-function shown above \cite[p.\ 39]{Berndt1989}, and we record Ramanujan's identity whereby \begin{equation}\label{SfunctionRamanujan} S(r) = \frac{16^{r}}{ \pi r^2 \binom{2r}{r}^2 } \sum_{k=0}^{r - 1} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \end{equation} for $r \in \mathbb{N}_{0}$. Ramanujan's discoveries concerning the $S$-function in \eqref{Sfunction}, together with the large amount of research, over the years, concerning this function \cite{Adamchik2002}, have inspired the development of \emph{Ramanujan-like} series given by introducing summand factors, such as harmonic or harmonic-type numbers, in the series shown in \eqref{Sfunction} \cite{Campbell2018,WangChu2020}. In this article, we solve a problem recently considered by Wang and Chu \cite{WangChu2022} concerning a family of series resembling \eqref{Sfunction} and involving harmonic-type numbers, and we introduce many related evaluations. We write $O_{n} = 1 + \frac{1}{3} + \cdots + \frac{1}{2 n - 1}$ to denote the $n^{\text{th}}$ odd harmonic number, by analogy with the usual notation $H_{n} = 1 + \frac{1}{2} + \cdots + \frac{1}{n}$ for the $n^{\text{th}}$ entry in the classical sequence of harmonic numbers. In the recent article \cite{WangChu2022}, Wang and Chu left the problem of evaluating series of the form \begin{equation} \label{mainproblem} \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2 }{(1 + 2n - 2 \lambda)^2 } \end{equation} as an open problem, letting $\lambda \in \mathbb{N}$. 
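Ramanujan's evaluation \eqref{SfunctionRamanujan} is easily confirmed numerically; the following short Python sketch compares a direct truncation of \eqref{Sfunction} against the closed form (truncation level chosen ad hoc):

```python
from math import pi, comb

def S_series(r, terms=200_000):
    """Direct truncation of S(r); c tracks (1/16)^k * C(2k,k)^2 iteratively."""
    total, c = 0.0, 1.0
    for k in range(terms):
        total += c / (k + r)
        c *= ((2 * k + 1) / (2 * k + 2)) ** 2
    return total

def S_closed(r):
    """Ramanujan: 16^r / (pi r^2 C(2r,r)^2) * sum_{k<r} (1/16)^k C(2k,k)^2."""
    front = 16 ** r / (pi * r ** 2 * comb(2 * r, r) ** 2)
    return front * sum(comb(2 * k, k) ** 2 / 16 ** k for k in range(r))

for r in (1, 2, 3):
    # terms decay like 1/(pi k^2), so 2*10^5 terms suffice for 1e-4 accuracy
    assert abs(S_series(r) - S_closed(r)) < 1e-4
```

For instance, $r = 1$ gives the familiar value $S(1) = 4/\pi$.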
Even for the base case $\lambda = 1$, evaluating the series in \eqref{mainproblem} seems to be quite difficult, and it appears that the coefficient-extraction methods of Wang and Chu \cite{WangChu2020,WangChu2022} do not apply to \eqref{mainproblem}. We offer explicit solutions to Wang and Chu's problem for the {\color{black} cases whereby $\lambda \in \{ 1, 2, 3, 4 \}$}, using recent results from references such as \cite{Campbell2022,XuZhao2022}, and, {\color{black} furthermore, our techniques, as in Section \ref{sectionnotConclusion}, allow us to express \eqref{mainproblem} as the sum of \begin{equation*} \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2 }{(1 + 2n - 2 (\lambda - 1))^2 } \end{equation*} and expressions that admit explicit finite sum evaluations for $\lambda \in \mathbb{N}$, thus providing a complete solution to Wang and Chu's open problem on \eqref{mainproblem}.} We also settle two other open problems from \cite{WangChu2022}, namely, the evaluation of the series \begin{align} \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n} }{(1 + 2n - 2 \lambda)^2 } , \label{firstproblem} \\ \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2 }{1 + 2n - 2 \lambda }. \label{secondproblem} \end{align} \subsection{Organization of the article} In Section \ref{subsectionproof}, we introduce a full proof of a symbolic evaluation for Wang and Chu's series in \eqref{mainproblem}, for the $\lambda = 1$ case. Our strategy in Section \ref{subsectionproof} has the advantage of producing many new and nontrivial series/integral evaluations closely related to \eqref{mainproblem}, and is also of interest in terms of the many different mathematical techniques involved, including the Wilf--Zeilberger method \cite{PetkovsekWilfZeilberger1996}, and with reference to recent advances in the study of Ap\'{e}ry-type series and colored multiple zeta values \cite{XuZhao2022}.
Furthermore, with particular regard to the formula in \eqref{odd2kp1}, the methods and formulas involved in our proof in Section \ref{subsectionproof} are required to generalize our solution for the $\lambda = 1$ case so that it applies for $\lambda > 1$. In Section \ref{subsectionalternate}, we consider a method, based on the recent work of Xu and Zhao \cite{XuZhao2022}, for generalizing our evaluation for \eqref{mainproblem}, according to higher powers of odd harmonic numbers, again in the $\lambda = 1$ case. In Section \ref{sectionnotConclusion}, we succeed in determining a recursion for \eqref{mainproblem} of the form described above, and we provide explicit evaluations for \eqref{mainproblem} for the $\lambda \in \{ 2, 3, 4 \}$ cases. Many closely related integral/series identities are introduced in Section \ref{sectionFurther}. \section{The $\lambda = 1$ case} We adopt notation from \cite{CampbellLevrieNimbran2021}, writing $$ \mathcal{G} := \Im\left( \text{Li}_{3}\left( \frac{i+1}{2} \right) \right) $$ to denote the Catalan-like constant explored in \cite{CampbellLevrieNimbran2021}, letting $ G:= \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{(2n-1)^2} $ denote the ``original'' Catalan constant, and writing $\text{Li}_{s}(z) = z + \frac{z^{2}}{2^{s}} + \frac{z^{3}}{3^{s}} + \cdots$ to denote the polylogarithm function. \subsection{A full proof}\label{subsectionproof} We again emphasize the importance of our evaluation in \eqref{odd2kp1} in the following proof, which we use directly to obtain an explicit solution for the $\lambda = 2$ case that may be generalized for $\lambda > 2$. \begin{theorem}\label{maintheorem} The infinite series $$ \sum_{k=1}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{O_{k}^{2}}{(2k-1)^2} $$ admits the symbolic form \begin{equation*} -\frac{8 G}{\pi }-\frac{8 G \ln (2)}{\pi }-\frac{48 \mathcal{G}}{\pi }+\frac{9 \pi ^2}{8}+\frac{\pi }{6}+\frac{4 \ln ^2(2)}{\pi }+\frac{3 \ln ^2(2)}{2}.
\end{equation*} \end{theorem} \begin{proof} Using the Wilf--Zeilberger method \cite{PetkovsekWilfZeilberger1996}, Campbell \cite{Campbell2022} recently proved the following equality, noting the appearance of a copy of the base case of \eqref{mainproblem} for $\lambda = 1$: \begin{align} & 4 \sum _{k=0}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2\frac{O_{k}}{2 k+1} - \sum _{k=0}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{O_{k}^2}{2 k-1}-2 \sum _{k = 0}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{O_{k}^2}{(2 k-1)^2} \nonumber \\ & \hspace{1cm} = \frac{12 G}{\pi }+\frac{32 \mathcal{G} }{\pi } - \frac{6 \ln ^2(2)}{\pi } - \frac{3 \pi ^2}{4}-\frac{\pi }{4} - \ln ^2(2). \label{wantcentral} \end{align} Using a recursive proof approach, Campbell \cite{Campbell2022} also evaluated the ``central'' series in \eqref{wantcentral}: \begin{equation}\label{Basel} \sum _{k=0}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{O_{k}^2}{2 k-1} = \frac{4G}{\pi} - \frac{\pi}{12} - \frac{2\ln^2 (2)}{\pi}, \end{equation} giving us that the following equality holds: \begin{align} & 4 \sum _{k=0}^{\infty}\left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{O_{k} }{2 k+1} - 2 \sum _{k=0}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2\frac{ O_{k}^{2} }{(2 k-1)^2} \label{nocentral} \\ & = \frac{16 G}{\pi }+\frac{32 \mathcal{G} }{\pi }-\frac{3 \pi ^2}{4}-\frac{\pi }{3}-\frac{8 \ln ^2(2)}{\pi }-\ln ^2(2). \nonumber \end{align} So, the problem of evaluating the base case for \eqref{mainproblem} is equivalent to the problem of evaluating $$ \sum _{k=0}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{ O_{k} }{2 k+1}.
$$ In this direction, we claim that the following equality holds: \begin{equation}\label{Paulbrilliant} \sum _{k=0}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{2 O_{k} - H_k}{2 k+1} = -\frac{16 \mathcal{G} }{\pi }+\frac{\pi ^2}{8}+\frac{8 G \ln (2)}{\pi }+\frac{\ln ^2(2)}{2}. \end{equation} To prove this, we need the following integrals: \begin{align} & \int_{0}^{\pi/4} (\ln (\sin(t)) - \ln(\cos (t)) ) dt = - G, \label{eq:1} \\ & \int_{0}^{\pi/4} (\ln^2 (\sin(t)) - \ln^2 (\cos (t)) ) dt = \frac{5\pi^3}{64}+\frac{1}{16}\pi \ln^2(2)-2\mathcal{G}+ G \ln(2). \label{eq:2} \end{align} {\color{black}The first is a well-known expression for Catalan's constant}, whereas the second one can be found in \cite{CampbellLevrieNimbran2021}. We will also use the following result \cite{NimbranLevrie2022}: \begin{equation}\label{eq:7} \int_0^{\arcsin(x)}\ln^{n}(\sin u)\, {du} = \sum_{i=0}^{n} \left[(-1)^i i! \binom{n}{i} \ln^{n-i} (x) \sum_{k = 0}^\infty \frac{\binom{2k}{k}\, x^{2k+1}}{(2k+1)^{i+1} \,2^{2k}}\right]. \end{equation} For $n=1$, \eqref{eq:7} reduces to: $$\int_0^{\arcsin(x)}\ln(\sin u)\, {du} = \ln (x) \sum_{k=0}^\infty \frac{\binom{2k}{k}\, x^{2k+1}}{(2 k + 1) \,2^{2k}} - \sum_{k=0}^\infty \frac{\binom{2k}{k}\, x^{2k+1}}{(2k+1)^{2} \,2^{2k}}. $$ We divide by $x$, replace $x$ by $\sin v$, and integrate the result between $0$ and $\tfrac{\pi}{2}$: \begin{align} & \int_0^{\pi/2} \left(\frac{1}{\sin v} \int_0^v \ln(\sin u)\, {du} \right){dv} = \nonumber \\ & \sum_{k=0}^\infty \frac{\binom{2k}{k}}{(2k+1) \,2^{2k}} \int_0^{\pi/2} \ln (\sin v) \sin^{2k} v \, dv - \frac{\pi}{2}\sum_{k=0}^\infty \frac{\binom{2k}{k}^2}{(2k+1)^{2} \,2^{4k}}.
\label{eq:8} \end{align} In the double integral on the left-hand side, we change the order of integration: \begin{align*} \int_0^{\pi/2} \left(\frac{1}{\sin v} \int_0^v \ln(\sin u)\, {du} \right){dv} & = \int_0^{\pi/2} \left(\ln(\sin u) \int_u^{\pi/2} \frac{1}{\sin v}\, {dv} \right) {du} \\ & = -\int_0^{\pi/2} \ln(\sin u) \ln (\tan \tfrac{u}{2}) {du}. \end{align*} Using the substitution $u=2t$, the integral can be rewritten as: \begin{align} &-2 \int_0^{\pi/4} \left[ \ln 2 \cdot (\ln (\sin(t)) - \ln(\cos (t)) ) + \ln^2 (\sin(t)) - \ln^2 (\cos (t)) \right] {dt} \label{eq:9}\\ & \hspace{3cm}= -\frac{5}{32} \pi^3 - \frac{1}{8}\pi \ln^2 2 + 4 \mathcal{G} \nonumber \end{align} using \eqref{eq:1} and \eqref{eq:2}. To calculate the integral on the right-hand side of \eqref{eq:8}, we replace $\sin v$ by $x$ and use integration by parts. This leads to the following recurrence: $$I_k:=\int_0^{\pi/2} \ln (\sin v) \sin^{2k} v \, dv \ \ \mbox{satisfies:} \hspace{1cm} \ 2k I_k = \frac{\pi}{2} \cdot \frac{1}{2k} \cdot \frac{\binom{2k-2}{k-1}}{2^{2k-2}} + (2k-1) I_{k-1} $$ with initial value $I_0 = -\frac{1}{2} \pi \ln 2$. If we put $I_k = \frac{\pi}{2} \frac{1}{2^{2k}} \binom{2k}{k} Z_k$, then $Z_k$ satisfies the recurrence $Z_k = \frac{1}{2k-1}-\frac{1}{2k} + Z_{k-1}$ with $Z_0 = -\ln 2$. It is easy to prove that the solution $Z_k$ is given by: $$Z_k = O_k - \frac{1}{2}H_k - \ln 2 \ \ \Rightarrow \ \ I_k = \frac{\pi}{2} \frac{1}{2^{2k}} \binom{2k}{k} (O_k - \frac{1}{2}H_k - \ln 2) .$$ Hence, the right-hand side of \eqref{eq:8} reduces to: $$\frac{\pi}{2}\sum_{k=0}^\infty \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 (O_k - \frac{1}{2}H_k - \ln 2) \frac{1}{2k+1} - \frac{\pi}{2}\sum_{k=0}^\infty \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{1}{(2k+1)^{2} }.$$ Multiplying both sides of \eqref{eq:8} by $\frac{4}{\pi}$ and using \eqref{eq:1}, \eqref{eq:2} and \eqref{eq:9}, we obtain an equivalent form of \eqref{Paulbrilliant}.
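The closed form for $I_k$ just derived can be checked numerically; a small Python sketch comparing a midpoint-rule quadrature of $I_k$ against $\frac{\pi}{2} \frac{1}{2^{2k}}\binom{2k}{k}(O_k - \frac{1}{2}H_k - \ln 2)$ (step counts chosen ad hoc):

```python
from math import comb, log, pi, sin

def I_num(k, npts=200_000):
    """Midpoint-rule quadrature of I_k = int_0^{pi/2} ln(sin v) sin^{2k} v dv."""
    h = (pi / 2) / npts
    return h * sum(log(sin((j + 0.5) * h)) * sin((j + 0.5) * h) ** (2 * k)
                   for j in range(npts))

def I_closed(k):
    """(pi/2) 4^{-k} C(2k,k) (O_k - H_k/2 - ln 2), from the recurrence for Z_k."""
    O = sum(1.0 / (2 * j - 1) for j in range(1, k + 1))   # odd harmonic O_k
    H = sum(1.0 / j for j in range(1, k + 1))             # harmonic H_k
    return (pi / 2) * comb(2 * k, k) / 4 ** k * (O - H / 2 - log(2))

for k in (1, 2, 3):
    assert abs(I_num(k) - I_closed(k)) < 1e-4
```

For example, $k = 1$ recovers the standard value $I_1 = \frac{\pi}{8} - \frac{\pi}{4}\ln 2$.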
So, it remains to evaluate the following series: \begin{equation}\label{remainsHk} \sum _{k=0}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{H_{k} }{2 k+1}. \end{equation} Using a reindexing argument, we obtain the following: \begin{align*} & \sum _{k=0}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{H_{k} }{2 k+1} = \sum _{k = 1}^{\infty} \left( \frac{1}{16} \right)^{k-1} \binom{2(k-1)}{k-1}^2 \frac{H_{k-1}}{2 k-1} \\ & = -\frac{4 (2 G - 1)}{\pi } + \sum _{k = 1}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^2 H_k\left(\frac{1}{(2k-1)^3}+\frac{2}{(2k-1)^2}+\frac{1}{2k-1}\right). \end{align*} Using previously known Ramanujan-like series introduced in \cite{Campbell2018,Chen2016}, we obtain that the series in \eqref{remainsHk} equals: $$ \frac{24}{\pi }-\frac{8 G}{\pi }-\frac{24 \ln (2)}{\pi } + \sum _{k=1}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^2 \frac{ H_k }{(2 k-1)^3}. $$ We claim that \begin{equation}\label{equ:useItIntegral} \sum _{k=1}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^2 \frac{ H_k }{(2 k-1)^3} \end{equation} equals $$ \frac{8 G}{\pi }-\frac{16 G \ln (2)}{\pi }-\frac{16 \mathcal{G} }{\pi }+\frac{5 \pi^2}{8}-\frac{24}{\pi }+\frac{\ln ^2(2)}{2}+\frac{24 \ln (2)}{\pi }. $$ To show this, we use an iterated integral-based approach, using results from \cite{XuZhao2022}. We recall that for any real numbers $a,b$ and $1$-forms $f_j(t)\, dt$ ($j=1,\dots, d$) with $d>1$, we may recursively define the iterated integral \begin{equation*} \int_a^b f_1(t)\, dt\, f_2(t)\, dt \cdots f_d(t)\, dt:=\int_a^b f_1(u) \Big(\int_a^u f_2(t)\, dt \cdots f_d(t)\, dt \Big)\, du.
\end{equation*} By the proof of \cite[Theorem 6.1]{XuZhao2022} we may use iterated integrals to compute \eqref{equ:useItIntegral} as follows: \begin{align*} I_s:=&\,\sum_{k>m>0} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2\frac{1}{(2k-1)^{s+2}(2m)}\\ =&\,\frac{2}{\pi} \int_0^{\pi/2} \sin t\, dt (\cot t\,dt)^{s}\, \cot^2 t\,dt\,d(\sec t) \Big(\csc t-\cot t\Big) \,dt\\ =&\,\frac{2}{\pi} \int_0^{\pi/2} \sin t\, dt (\cot t\,dt)^s\, (\csc^2 t-1)\,dt\,d(\sec t) \Big(\csc t-\cot t\Big) \,dt = X_s-Y_s-I_{s-1}, \end{align*} where \begin{align*} X_s:=&\,\frac{2}{\pi} \int_0^{\pi/2} \sin t\, dt (\cot t\,dt)^s\, \sec t\,dt\, \Big(\csc t-\cot t\Big) \,dt,\\ Y_s:=&\, \frac{2}{\pi} \int_0^{\pi/2} \sin t\, dt (\cot t\,dt)^s\,dt\,d(\sec t) \Big(\csc t-\cot t\Big) \,dt. \end{align*} Set 1-forms $\ta=dt/t$, $\tx_\mu=dt/(\mu-t)$, $\td_{\mu,\nu}=\tx_{\mu}-\tx_{\nu}$ for any roots of unity $\mu$ and $\nu$, and $\ty=\tx_{i}+\tx_{-i}-\tx_{1}-\tx_{-1}$. Then \begin{align*} X_0=&\,\frac{2}{\pi} \int_0^{\pi/2} dt\, \Big(\csc t-\cot t\Big) \,dt = \frac{2}{\pi} \int_0^1 i \Big(2\tx_{-1}-\tx_{i}-\tx_{-i}\Big) \td_{-i,i}, \\ X_1=&\,\frac{2}{\pi} \int_0^{\pi/2} (\cos t\cot t)\, dt \, \sec t\,dt\, \Big(\csc t-\cot t\Big) \,dt\\ =&\,\frac{2}{\pi} \int_0^{\pi/2} (\csc t-\sin t)\, dt \, \sec t\,dt\, \Big(\csc t-\cot t\Big) \,dt\\ =&\,-X_0+\frac{2}{\pi} \int_0^{\pi/2} \csc t\, dt \, \sec t\,dt\, \Big(\csc t-\cot t\Big) \,dt\\ =&\,-X_0-(-1)^s \frac{2}{\pi} \int_0^1 \Big(2\tx_{-1}-\tx_{i}-\tx_{-i}\Big)\ta \td_{-1,1} \end{align*} by the change of variables $t\to \sin^{-1}(1-t^2)/(1+t^2)$ at the last step for both $X_0$ and $X_1$. 
Similarly, \begin{align*} Y_0:=&\, \frac{2}{\pi} \int_0^{\pi/2} \sin t\, dt \,dt\,d(\sec t) \Big(\csc t-\cot t\Big) \,dt\\ =&\, \frac{2}{\pi} \int_0^{\pi/2} \cos t\, dt\,d(\sec t) \Big(\csc t-\cot t\Big) \,dt\\ =&\,\frac{2}{\pi} \int_0^{\pi/2} dt \Big(\csc t-\cot t\Big)\,dt -\frac{2}{\pi} \int_0^{\pi/2} \Big(1-\sin t\Big)\Big(\csc t\sec t -\csc t\Big)\,dt \\ =&\,\frac{2}{\pi} \int_0^1 i \Big(2\tx_{-1}-\tx_{i}-\tx_{-i}\Big) \td_{-i,i}+i \td_{-i,i}-2\tx_{-1}, \\ Y_1:=&\, \frac{2}{\pi} \int_0^{\pi/2} \sin t\, dt \cot t\,dt \,dt\,d(\sec t) \Big(\csc t-\cot t\Big) \,dt\\ =&\, \frac{2}{\pi} \int_0^{\pi/2} (\cos t\cot t)\, dt \,dt\,d(\sec t) \Big(\csc t-\cot t\Big) \,dt\\ =&\, -Y_0+\frac{2}{\pi} \int_0^{\pi/2} \csc t\, dt \,dt\,d(\sec t) \Big(\csc t-\cot t\Big) \,dt\\ =&\, -Y_0+\frac{2}{\pi} \int_0^{\pi/2} \csc t\, dt \Big[\sec t\, dt\, \Big(\csc t-\cot t\Big)\, dt -dt\, \Big(\csc t\sec t-\csc t\Big) \,dt\Big]\\ = &\, -Y_0+\frac{2}{\pi} \int_0^1 \Big(2\tx_{-1}-\tx_{i}-\tx_{-i}\Big) \ta \td_{-1,1} -\frac{2}{\pi} \int_0^{\pi/2} i \Big(\ta+2\tx_{-1}\Big)\td_{-i,i} \td_{-1,1} . \end{align*} Thus \begin{align*} I_1=X_1-Y_1-I_0=&\, Y_0-X_0-I_0+\frac{2}{\pi}\int_0^1 \Big(2\tx_{-1}-\tx_{i}-\tx_{-i}\Big)\ta \td_{-1,1}\\ &\,-\frac{2}{\pi} \int_0^1 \Big(2\tx_{-1}-\tx_{i}-\tx_{-i}\Big) \ta \td_{-1,1} +\frac{2}{\pi} \int_0^{\pi/2} i \Big(\ta+2\tx_{-1}\Big)\td_{-i,i} \td_{-1,1} \\ =&\, -I_0+\frac{2}{\pi} \int_0^1 i \Big(2\tx_{-1}-\tx_{i}-\tx_{-i}\Big) \td_{-i,i}+i \td_{-i,i}-2\tx_{-1} \\ &\, -\frac{2}{\pi} \int_0^1 i \Big(2\tx_{-1}-\tx_{i}-\tx_{-i}\Big) \td_{-i,i} +\frac{2}{\pi} \int_0^{\pi/2} i \Big(\ta+2\tx_{-1}\Big)\td_{-i,i} \td_{-1,1}. \end{align*} Note that $I_0$ is given by \cite[Example B.8]{XuZhao2022}. 
Hence we obtain the following formula by using Au's Mathematica package of colored multiple zeta values \cite{Au2020}: \begin{align*} &\,\sum_{k>m>0} \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^2 \frac{1}{(2k-1)^3(2m)} \\ = &\,\frac{2}{\pi} \bigg(\frac5{32}\pi^3 - 4 \mathcal{G} -2G(1+2\ln2)+6\ln2+\frac{\pi}8\Big(8\ln2+\ln^22-12\Big) \bigg). \end{align*} Now, using partial fraction decomposition, we see that \begin{align*} &\,\sum_{k>0} \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^2 \frac{1}{(2k-1)^3 (2k)}\\ =&\,\sum_{k>0} \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^2 \left(\frac{1}{(2k-1)^3}-\frac{1}{(2k-1)^2}+\frac{1}{2k-1}-\frac{1}{2k}\right)\\ =&\,\frac{2}{\pi}\left(\frac{\pi}{2}+2G-3\right) -\frac2{\pi}\Big(2-\frac{\pi}2\Big) +\frac{2}{\pi} \Big(\frac{\pi}{2}-1\Big) -\frac{2}{\pi} (\pi\ln2-2G) \\ =& \frac2{\pi}\Big( \frac{3\pi}2+4G-\pi\ln2-6 \Big) \end{align*} by Examples B.1 and B.3 of \cite{XuZhao2022}. Therefore, \begin{align*} &\,\sum_{k>0} \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^2 \frac{H_k}{(2k-1)^3}\\ =&\,2\sum_{k>0} \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^2 \frac{1}{(2k-1)^3 (2k)} +2\sum_{k>m>0} \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^2 \frac{1}{(2k-1)^3(2m)} \\ =&\, \frac{4}{\pi} \bigg(\frac5{32}\pi^3 - 4 \mathcal{G} +2G(1-2\ln2)+6\ln2+\frac{\pi}8 \ln^22 -6 \bigg). \end{align*} \noindent So, we obtain that: \begin{equation}\label{afterJay} \sum _{k=1}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^2 \frac{H_k}{2k+1} = -\frac{16 G \ln (2)}{\pi }-\frac{16 \mathcal{G} }{\pi }+\frac{5 \pi ^2}{8}+\frac{\ln ^2(2)}{2}. \end{equation} Thus, according to \eqref{Paulbrilliant}, we find that: \begin{equation}\label{odd2kp1} \sum _{k=1}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^2 \frac{O_{k} }{2 k+1} = -\frac{4 G \ln (2)}{\pi }-\frac{16 \mathcal{G} }{\pi }+\frac{3 \pi ^2}{8}+\frac{\ln ^2(2)}{2}. 
\end{equation} So, according to our relation for \eqref{nocentral}, as derived from \cite{Campbell2022}, we obtain the desired result. \end{proof} \subsection{Alternate approaches}\label{subsectionalternate} The series in \eqref{mainproblem} may be reduced to a $\mathbb{Q}$-combination of the series treated in \cite{XuZhao2022}. We offer a proof sketch below based on this alternate approach. First, for all $k \in \mathbb{N}$ and $\bfs=(s_1,\dots,s_d)\in \mathbb{N}^d$, we define the multiple $t$-sums as \begin{align*} t_k(\bfs):= \sum_{k\ge k_1>\dots>k_d>0} \frac{1}{(2k_1-1)^{s_1}\cdots (2k_d-1)^{s_d}}. \end{align*} They satisfy the stuffle relations. For example, \begin{align*} t_k(1)^2= 2t_k(1,1)+ t_k(2). \end{align*} Since $O_k=t_k(1)$, the $\gl=1$ case of \eqref{mainproblem} is \begin{align*} \sum_{k>0} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2\frac{O_k^2}{(2k-1)^2} =2\sum_{k>0} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2\frac{t_k(1,1)}{(2k-1)^2} +\sum_{k>0} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2\frac{t_k(2)}{(2k-1)^2}. \end{align*} Now, for all $\bfs=(s_1,\dots,s_d) \in \mathbb{N}^d$ and $m \in \mathbb{N}$, \begin{align*} &\, \sum_{k>0} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2\frac{t_k(\bfs)}{(2k-1)^m} \\ =&\, \sum_{k>k_2>\cdots>k_d>0} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2\frac{1}{(2k-1)^{s_1+m}(2k_2-1)^{s_2}\cdots (2k_d-1)^{s_d}} \\ &\, +\sum_{k>k_1>\cdots>k_d>0} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2\frac{1}{(2k-1)^{m}(2k_1-1)^{s_1}\cdots (2k_d-1)^{s_d}}, \end{align*} and this can be handled by the approach in \cite{XuZhao2022}. It is not hard to see that the above procedure can be generalized to treat arbitrary powers of $O_k$, not only its square. In an unpublished online posting\footnote{\url{https://mathoverflow.net/questions/323287}}, Martin Nicholson described a technique, using results from \cite{CantariniDAurizio2019}, that may be used to evaluate the series shown in \eqref{remainsHk} and \eqref{odd2kp1}.
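Independently of either derivation, closed forms such as \eqref{odd2kp1} are easy to confirm numerically; a Python sketch, with $\mathcal{G}$ computed from the defining polylogarithm series and truncation levels chosen ad hoc:

```python
from math import log, pi

# Catalan's constant G and the constant Im Li_3((1+i)/2), from their series.
G = sum((-1) ** m / (2 * m + 1) ** 2 for m in range(200_000))
z = (1 + 1j) / 2
calG = sum(z ** m / m ** 3 for m in range(1, 400)).imag

# Truncation of the left-hand side of (odd2kp1); c tracks (1/16)^k C(2k,k)^2
# and O tracks the odd harmonic number O_k.
lhs, c, O = 0.0, 1.0, 0.0
for k in range(1, 300_001):
    c *= ((2 * k - 1) / (2 * k)) ** 2
    O += 1.0 / (2 * k - 1)
    lhs += c * O / (2 * k + 1)

rhs = -4 * G * log(2) / pi - 16 * calG / pi + 3 * pi ** 2 / 8 + log(2) ** 2 / 2
assert abs(lhs - rhs) < 1e-3  # both sides are approximately 0.2300
```

The same loop, with the obvious changes to the summand, confirms \eqref{remainsHk} via \eqref{afterJay} as well.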
So, in view of the WZ-derived identity involved in our proof of Theorem \ref{maintheorem}, along with the rest of this proof, we may formulate an alternative proof of Theorem \ref{maintheorem}, using Nicholson's arguments to evaluate the series in \eqref{remainsHk} and \eqref{odd2kp1}. \section{The general case}\label{sectionnotConclusion} \textcolor{black}{Starting from the base case of \eqref{mainproblem} with $\lambda=1$, we are now able, using a reindexing argument, to derive an evaluation for} \begin{equation} \sum_{k=1}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2\frac{O_{k}^{2}}{(2k-3)^2} \end{equation} from Theorem \ref{maintheorem}. \textcolor{black}{In the process we obtain an evaluation of} \begin{equation}\label{wouldrequire} \sum_{k=1}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{O_{k}}{(2 k - 1)^3}. \end{equation} \subsection{The $\lambda = 2$ case}\label{subsectionlambda2} \begin{theorem}\label{theoremlambda2} The series \begin{equation*} \sum_{k=1}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{O_{k}^{2}}{(2k-3)^2} \end{equation*} admits the following symbolic form: $$ -\frac{32 G}{27 \pi }-\frac{32 G \ln (2)}{9 \pi }-\frac{64 \mathcal{G} }{3 \pi }+\frac{\pi ^2}{2}+\frac{11 \pi }{162}+\frac{16}{27 \pi }+\frac{44 \ln ^2(2)}{27 \pi }+\frac{2 \ln ^2(2)}{3}-\frac{52 \ln (2)}{27 \pi}. $$ \end{theorem} \begin{proof} Applying a reindexing argument to the $\lambda = 1$ case of \eqref{mainproblem}, we find that $$ \sum _{k = 1}^{\infty} \left(\frac{1}{16}\right)^k \binom{2 k}{k}^2 O_{k - 1}^{2} \left(\frac{9}{4 (2 k - 3)^2}-\frac{3}{4 (2 k - 3)}+\frac{1}{4 (2 k - 1)^2}+\frac{3}{4 (2 k - 1)}\right)$$ admits the same symbolic form as in the $\lambda = 1$ case.
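The bracketed weight is a partial-fraction rewrite of $4k^2/\left((2 k - 1)^2 (2 k - 3)^2\right)$, the factor produced by the index shift. The following short script (an illustrative sketch with our own function names, using exact rational arithmetic) confirms the rewrite:

```python
from fractions import Fraction

def weight_partial_fractions(k):
    """The partial-fraction form appearing in the reindexed sum."""
    k = Fraction(k)
    return (Fraction(9, 4) / (2*k - 3)**2 - Fraction(3, 4) / (2*k - 3)
            + Fraction(1, 4) / (2*k - 1)**2 + Fraction(3, 4) / (2*k - 1))

def weight_closed_form(k):
    """The factor 4 k^2 / ((2k-1)^2 (2k-3)^2) produced by shifting k -> k-1."""
    k = Fraction(k)
    return 4*k**2 / ((2*k - 1)**2 * (2*k - 3)**2)

# Both rational functions agree exactly at every integer k >= 2; since the
# difference is a rational function of bounded degree, this proves equality.
assert all(weight_partial_fractions(k) == weight_closed_form(k) for k in range(2, 50))
```

Since both sides are rational functions of $k$ of bounded degree, exact agreement at sufficiently many integers establishes the identity.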
Using the ${}_{4}F_{3}(1)$-evaluation \begin{equation}\label{20220520520PM1A} {}_{4}F_{3}\!\!\left[ \begin{matrix} -\frac{1}{2}, -\frac{1}{2}, -\frac{1}{2}, -\frac{1}{2} \vspace{1mm}\\ \frac{1}{2}, \frac{1}{2}, 1 \end{matrix} \ \Bigg| \ 1 \right] = -\frac{8 G}{\pi }-\frac{16 \mathcal{G} }{\pi }+\frac{3 \pi ^2}{8}+\frac{8}{\pi }+\frac{\ln ^2(2)}{2}, \end{equation} which was given in an equivalent form in \cite{Nimbran2015}, and which may be proved using a reindexing argument from \cite{Nimbran2015} together with a Fourier--Legendre-based argument from \cite{CantariniDAurizio2019}, we obtain the following, using results from \cite{Campbell2018,Campbell2019,Campbell2022}: The expression \begin{equation}\label{almostforlambda2} -\frac{1}{2} \sum _{k=1}^{\infty} \left(\frac{1}{16}\right)^k \binom{2 k}{k}^2 \frac{ O_{k} }{(2 k-1)^3} + \frac{9}{4} \sum _{k=1}^{\infty} \left(\frac{1}{16}\right)^k \binom{2 k}{k}^2 \frac{ O_{k}^{2} }{(2 k-3)^2} \end{equation} admits the symbolic form $$ \frac{4 G}{3 \pi }-\frac{6 G \ln (2)}{\pi }-\frac{32 \mathcal{G} }{\pi }+\frac{3 \pi ^2}{4}+\frac{11 \pi }{72}+\frac{4}{3 \pi }+\frac{11 \ln ^2(2)}{3 \pi }+\ln ^2(2)-\frac{22 \ln (2)}{3 \pi }. $$ We proceed to reindex \eqref{odd2kp1}: \begin{align*} & \sum _{k=0}^{\infty} \left(\frac{1}{16}\right)^k \binom{2 k}{k}^2\frac{O_{k} }{2 k+1} = \sum_{k=0}^{\infty}\left(\frac{1}{16}\right)^k \binom{2 k}{k}^2 \frac{ O_{k+1}-\frac{1}{2k+1}}{2 k+1} \\ & =\sum _{k=1}^{\infty}\left(\frac{1}{16}\right)^k \binom{2 k}{k}^2 O_k \left(\frac{1}{2k-1} + \frac{2}{(2k-1)^2}+\frac{1}{(2k-1)^3} \right) - \sum_{k=0}^\infty \left(\frac{1}{16}\right)^k \binom{2 k}{k}^2 \frac{1}{(2k+1)^2} \\ & =\frac{2 \ln(2)}{\pi} + \frac{8(G-\ln(2))}{\pi} + \sum _{k=1}^{\infty} \left(\frac{1}{16}\right)^k \binom{2 k}{k}^2 \frac{O_{k} }{(2k-1)^3} - \left(- \frac{16 \mathcal{G} }{\pi } + \frac{3 \pi ^2}{8} +\frac{\ln ^2(2)}{2} \right). 
\end{align*} The first two series evaluations are from \cite{WangChu2022}, and the sum of the last series is given in \eqref{eq:5}. In this manner, we obtain: \begin{equation} \label{thirdpower} \sum_{k=1}^{\infty} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{O_{k}}{(2 k - 1)^3}= -\frac{4 G (\ln (2)+2)}{\pi }-\frac{32 \mathcal{G} }{\pi }+ \frac{6 \ln (2)}{\pi}+\frac{3 \pi ^2}{4}+\ln ^2(2). \end{equation} So, in view of our symbolic form for \eqref{almostforlambda2}, we obtain the desired result. \end{proof} \subsection{Further reindexings} {\color{black}We will need the following Lemma: \begin{lemma} \label{Lemma} For $m \in \mathbb{N}_{0}$, we have that \begin{equation}\label{Lemma1} \sum_{n=0}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{1}{2n+1-2m} = -\frac{16^{m}}{ 2\pi m^2 \binom{2m}{m}^2 } \sum_{k=0}^{m - 1} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 =: A(m) \end{equation} \begin{align} \sum_{n=0}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{1}{(2n+1-2m)^2} = & \frac{16^{m}}{m^2 \binom{2m}{m}^2 } \sum_{k=1}^{m -1} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{k}{2k+1} A(k) \label{Lemma2}\\ & +\frac{16^{m}}{ \pi m^2 \binom{2m}{m}^2 } \sum_{k=0}^{m -1} \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{1}{2k+1} .\nonumber \end{align} \end{lemma} \noindent The proof of these two results is similar to that of \eqref{SfunctionRamanujan}. See also \cite{Nimbran2015}. } Note that reindexing proves to be a very powerful tool. Using an alternative method of reindexing we are able to deduce recurrences for the series \eqref{mainproblem}, \eqref{firstproblem}, \eqref{secondproblem}. 
For instance, for \eqref{mainproblem} we start from $$\sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2 }{(1 + 2n - 2 \lambda)^2 } = \sum_{n=0}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{\left(O_{n+1}-\frac{1}{2n+1}\right)^2}{(1 + 2n - 2 \lambda)^2 } .$$ Expanding the square in the numerator gives us three terms. The third one, which contains no factor $O_n$, can be split using a partial fraction expansion: $$ \frac{1}{(2n+1)^2(1 + 2n - 2 \lambda)^2 } = \frac{1}{4\lambda^3} \left(\frac{1}{2n+1}-\frac{1}{2n+1- 2 \lambda}+\frac{\lambda}{(2n+1)^2}+\frac{\lambda}{(2n+1- 2 \lambda)^2} \right) .$$ This leads to four series for which the values are well known \cite{Nimbran2015}. For the first and second terms we use reindexing: \begin{align*} \sum_{n=0}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n+1}^2}{(1 + 2n - 2 \lambda)^2 } & = \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n-1} \binom{2n-2}{n-1}^2 \frac{O_{n}^2}{(1 + 2n - 2 (\lambda+1))^2 } \\ & = \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{4n^2}{(2n-1)^2(1 + 2n - 2 (\lambda+1))^2 }O_{n}^2, \end{align*} \begin{align*} -2\sum_{n=0}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 & \frac{O_{n+1}}{(2n+1)(1 + 2n - 2 \lambda)^2 } \\ & = -2\sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{4n^2}{(2n-1)^3(1 + 2n - 2 (\lambda+1))^2 }O_{n}. \end{align*} Now using partial fraction expansions both these series can be written as linear combinations of series of the form \eqref{mainproblem}, \eqref{firstproblem}, \eqref{secondproblem}, and some other series which can be found in \cite{WangChu2022}. For the last one we also need \eqref{thirdpower}. The same method may be applied to \eqref{firstproblem} and \eqref{secondproblem}. 
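The partial fraction expansion displayed above can be confirmed exactly in a few lines (an illustrative sketch; the function names are ours):

```python
from fractions import Fraction

def lhs(n, lam):
    """Left-hand side 1 / ((2n+1)^2 (2n+1-2*lam)^2), exactly."""
    n, lam = Fraction(n), Fraction(lam)
    return 1 / ((2*n + 1)**2 * (2*n + 1 - 2*lam)**2)

def rhs(n, lam):
    """Right-hand side of the partial-fraction decomposition."""
    n, lam = Fraction(n), Fraction(lam)
    a, b = 2*n + 1, 2*n + 1 - 2*lam
    return (1/a - 1/b + lam/a**2 + lam/b**2) / (4 * lam**3)

# For integer lambda, 2n+1-2*lambda is odd and hence never zero,
# so the decomposition holds for every n >= 0.
assert all(lhs(n, lam) == rhs(n, lam) for lam in range(1, 6) for n in range(0, 30))
```

Exact rational arithmetic avoids any floating-point ambiguity in the check.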
Rearranging the terms in the results leads to the following three recurrences:\\ {\bf For the series \eqref{firstproblem}}: \begin{align*} \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 & \frac{O_{n}}{(1 + 2n - 2 (\lambda+1))^2} = \frac{4\lambda^2}{(1+2\lambda)^2}\sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}}{(1 + 2n - 2 \lambda)^2}\\ & + \frac{1}{\lambda(1+2\lambda)}\sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}}{1 + 2n - 2 (\lambda+1)} \\ & - \frac{3+2\lambda}{4\lambda^2(1+2\lambda)}\sum_{n=0}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{1}{1 + 2n - 2 (\lambda+1)} \\ & + \frac{4\lambda^2}{(1+2\lambda)^2}\sum_{n=0}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{1}{(1 + 2n - 2 (\lambda+1))^2}\\ & - \frac{4\ln(2)\lambda+3}{2\lambda^2(1+2\lambda)^2\pi}. \end{align*} The value of the second series on the RHS can be found in \cite{WangChu2022}, {\color{black}and the sums of the third and fourth series can be found using Lemma \ref{Lemma}}. The first few values are given by: \begin{align*} & \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}}{(2n-3)^2} = -\frac{44 \ln \left(2\right)}{27\pi}+\frac{26}{27\pi}+\frac{16 G}{9\pi}, \\ & \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}}{(2n-5)^2} = -\frac{3388 \ln \! \left(2\right)}{3375 \pi}+\frac{2954}{3375 \pi}+\frac{256 G}{225 \pi}, \\ & \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}}{(2n-7)^2} = -\frac{92804 \ln \!
\left(2\right)}{128625 \pi}+\frac{32642}{42875 \pi}+\frac{1024 G}{1225 \pi}, \end{align*} confirming the results obtained in \cite{WangChu2022} using Mathematica.\\ {\bf For the series \eqref{secondproblem}}: \begin{align*} \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 & \frac{O_{n}^2}{1 + 2n - 2 (\lambda+1)} = \frac{4\lambda^2}{(1+2\lambda)^2}\sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2}{1 + 2n - 2 \lambda}\\ & + \frac{1}{\lambda}\sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}}{1 + 2n - 2 (\lambda+1)} \\ & - \frac{1}{4\lambda^2}\sum_{n=0}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{1}{1 + 2n - 2 (\lambda+1)} \\ & - \frac{(24\ln^2(2)+\pi^2)\lambda^2+24\ln(2)\lambda+6}{12\lambda^2(1+2\lambda)^2\pi} . \end{align*} {\color{black} Using the case $\lambda=1$ given in \eqref{Basel}, we obtain} \begin{align*} & \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2}{2n-3} = -\frac{10 \ln^2(2)}{9 \pi}-\frac{5 \pi}{108}+\frac{8 \ln \! \left(2\right)}{9 \pi}+\frac{16 G}{9 \pi}-\frac{2}{9 \pi} ,\\ & \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2}{2n-5} = -\frac{178 \ln^2(2)}{225 \pi}-\frac{89 \pi}{2700}+\frac{208 \ln \! \left(2\right)}{225 \pi}+\frac{256 G}{225 \pi}-\frac{74}{225 \pi}, \\ & \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2}{2n-7} = -\frac{762 \ln^2(2)}{1225 \pi}-\frac{127 \pi}{4900}+\frac{3208 \ln \! \left(2\right)}{3675 \pi}+\frac{1024 G}{1225 \pi}-\frac{818}{2205 \pi}. 
\end{align*} {\bf For the series \eqref{mainproblem}}: \begin{align*} \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 & \frac{O_{n}^2}{(1 + 2n - 2 (\lambda+1))^2} = \frac{4\lambda^2}{(1+2\lambda)^2}\sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2}{(1 + 2n - 2 \lambda)^2}\\ & + \frac{1}{\lambda(1+2\lambda)}\sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2}{1 + 2n - 2 (\lambda+1)}\\ & - \frac{3+2\lambda}{2\lambda^2(1+2\lambda)}\sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}}{1 + 2n - 2 (\lambda+1)} \\ & + \frac{1}{\lambda}\sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}}{(1 + 2n - 2 (\lambda+1))^2} \\ & + \frac{1+\lambda}{2\lambda^3(1+2\lambda)}\sum_{n=0}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{1}{1 + 2n - 2 (\lambda+1)} \\ &- \frac{1}{4\lambda^2}\sum_{n=0}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{1}{(1 + 2n - 2 (\lambda+1))^2}\\ & + \frac{(24\ln^2(2)+\pi^2)\lambda^2+36\ln(2)\lambda+12}{12\lambda^3(1+2\lambda)^2\pi} . \end{align*} Note that for this last one we need the results from the previous two recurrences. {\color{black}The case $\lambda=2$ is discussed in Theorem \ref{theoremlambda2}. For $\lambda=3$ and $\lambda=4$ we obtain:} \begin{align*} & \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2}{(2n-5)^2} = \frac{32 \ln^2(2)}{75}+\frac{8 \pi^{2}}{25}+\frac{3388 \ln^2(2)}{3375 \pi}-\frac{512 G \ln \! \left(2\right)}{225 \pi} \\ & \hspace{5cm} +\frac{847 \pi}{20250}-\frac{5908 \ln \! \left(2\right)}{3375 \pi}-\frac{256 G}{3375 \pi}-\frac{1024 \mathcal{G}}{75 \pi}+\frac{2624}{3375 \pi}, \\ & \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2}{(2n-7)^2} = \frac{384 \ln^2(2)}{1225}+\frac{288 \pi^{2}}{1225}+\frac{92804 \ln^2(2)}{128625 \pi}-\frac{2048 G \ln \! 
\left(2\right)}{1225 \pi}\\ & \hspace{3cm} +\frac{23201 \pi}{771750}-\frac{65284 \ln \! \left(2\right)}{42875 \pi}+\frac{11264 G}{42875 \pi}-\frac{12288 \mathcal{G} }{1225 \pi}+\frac{188144}{231525 \pi}. \end{align*} \section{Further evaluations}\label{sectionFurther} Although our work is mainly devoted to Wang and Chu's problem concerning \eqref{mainproblem}, we remark that many of the formulas involved in our proof of Theorem \ref{maintheorem} are of interest in their own right and may be applied to evaluate intractable integrals over elementary expressions. The formula in \eqref{afterJay}, in particular, together with the integration technique from \cite{Campbell2018}, gives us that $$ \int_0^1 \frac{\sin ^{-1}(x) \ln \left(1-x^2\right)}{x \sqrt{1-x^2}} \, dx = 4 G \ln (2)+8 \mathcal{G} -\frac{5 \pi ^3}{16}-\frac{ \pi \ln ^2(2)}{4}, $$ but current computer algebra systems, including the 2022 version of Mathematica, can evaluate neither this definite integral nor the underlying indefinite integral. {\color{black}Note that \begin{equation} \label{logcos} \int_0^1 \frac{\sin ^{-1}(x) \ln \left(1-x^2\right)}{x \sqrt{1-x^2}} \, dx =2 \int_0^{\pi/2} \frac{\theta}{\sin \theta} \ln (\cos \theta) d \theta .
\end{equation}} Similarly, using \eqref{odd2kp1} together with the integration method from \cite{Campbell2019}, we may obtain the following remarkable evaluation: \begin{align*} \int_0^1 \frac{\sqrt{1-x^2} \sin ^{-1}(x) \ln (x)}{x} \, dx & {\color{black} = \int_0^{\pi/2} \frac{\theta\cos^2 \theta}{\sin \theta} \ln (\sin \theta) d \theta} \\ & = -2 G- 4 \mathcal{G} + \frac{\pi ^3}{32}+2+\frac{\pi \ln ^2(2)}{8}, \end{align*} {\color{black} which together with \eqref{logcos} leads to $$ G = 1 + \frac{1}{2}\int_0^{\pi/2} \theta \, {\sin \theta} \ln (\sin \theta) d \theta .$$} Much of our proof in Section \ref{subsectionproof} relied on the evaluation of series involving summand factors of the form $$ \left( \frac{1}{16} \right)^{k} \binom{2 k}{k}^{2} \frac{1}{2k+1} $$ for $k \in \mathbb{N}_{0}$, so it seems worthwhile to consider further evaluations for such sums and related summations. In this regard, setting $\beta(4)=\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)^4}$, let us reproduce the following identities from \cite{CampbellLevrieNimbran2021,CantariniDAurizio2019}: \begin{align} & \sum _{k=0}^{\infty } \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{1}{2 k+1} = \frac{4G}{\pi}, \label{eq:4}\\ & \sum _{k=0}^{\infty } \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{1}{(2 k+1)^2} = - \frac{16 \mathcal{G} }{\pi } + \frac{3 \pi ^2}{8} +\frac{\ln ^2(2)}{2}, \label{eq:5}\\ & \sum _{k=0}^{\infty } \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 \frac{1}{(2 k+1)^3} = -64\frac{\Im \left(\text{Li}_{4} (1-i)\right)}{\pi} - \pi^2 \ln(2)-\frac{2}{3} \ln^3 (2) - 48 \frac{\beta(4)}{\pi}. \label{20220518359PM1A} \end{align} {\color{black}Such evaluations are closely related to the evaluation of series of the form $\sum_{k=1}^\infty \frac{\binom{2k}{k}^2}{k^n \, 16^k}$.
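Identities such as \eqref{eq:4} are easy to check numerically to high accuracy. The short script below (an illustrative sketch, not part of our derivations) sums the series by updating the central binomial factor with the recurrence $\binom{2k+2}{k+1}^2/16^{k+1} = \binom{2k}{k}^2/16^k \cdot \left(\tfrac{2k+1}{2k+2}\right)^2$:

```python
import math

CATALAN = 0.9159655941772190  # Catalan's constant G

def series_eq4(n_terms):
    """Partial sum of sum_{k>=0} (1/16)^k * C(2k,k)^2 / (2k+1)."""
    total, t = 0.0, 1.0  # t = (1/16)^k * C(2k,k)^2, starting at k = 0
    for k in range(n_terms):
        total += t / (2 * k + 1)
        t *= ((2 * k + 1) / (2 * k + 2)) ** 2
    return total

# The terms decay like 1/(2*pi*k^2), so 10^5 terms leave a tail
# of roughly 1/(2*pi*10^5), i.e. agreement at about the 1e-6 level.
err = abs(series_eq4(100_000) - 4 * CATALAN / math.pi)
```

Since every term is positive, the partial sums increase monotonically toward $4G/\pi$, which gives a second, qualitative check.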
For example, consider the following analogue of \eqref{eq:5}.} \begin{theorem}\label{thm:alpha2} The evaluation $$ {\sum_{k=1}^\infty \frac{\binom{2k}{k}^2}{k^2 \, 16^k} = \frac{3 \pi^2}{2} - \frac{ 64 \mathcal{G}}{\pi} - 6 \ln^2 (2)} $$ holds true. \end{theorem} \begin{proof} For our proof, we have to calculate the following: \begin{equation}\label{break2int} \sum_{k=1}^\infty \frac{\binom{2k}{k}^2}{k^2 \, 16^k} \cdot \frac{\pi}{2}= \int_0^{\pi/2} 2\, {\rm Li}_2 (\tfrac{1-\cos (t)}{2}) {dt}- \int_0^{\pi/2}\ln^2 (\cos(\tfrac{t}{2}) ) {dt} . \end{equation} The second integral in \eqref{break2int} is equivalent to \begin{equation} \int_0^{\pi/4}\ln^2 (\cos(\tfrac{t}{2}) ) {dt}= -\frac{7\pi^3}{64}+\frac{7}{32}\pi \ln^2(2)+\mathcal{G}- \frac{G \ln(2)}{2} \label{eq:2.3} \end{equation} which can be found in \cite{CampbellLevrieNimbran2021}. For the first one, we use integration by parts: \begin{align*} \int_0^{\pi/2} 2\, {\rm Li}_2 (\tfrac{1-\cos (t)}{2}) {dt} & = \frac{1}{12} \pi^3 - \frac{1}{2}\pi \ln^2(2) + 2\int_0^{\pi/2} t \cot (\tfrac{t}{2}) \ln (\cos^2 (\tfrac{t}{2})) {dt} \\ & = \frac{1}{12} \pi^3 - \frac{1}{2}\pi \ln^2(2) + 8\int_0^{\pi/4} t \cot (t) \ln (\cos^2 (t)) {dt} . \end{align*} For this integral we use the substitution $u = \tan (t)$: \begin{align*} \int_0^{\pi/4} t \cot (t) \ln (\cos^2 (t)) {dt} & = -\int_0^1 \arctan (u) \ln (1+u^2) \frac{1}{u (1+u^2)} {du} \\ & = -\int_0^1 \arctan (u) \ln (1+u^2) \left( \frac{1}{u}-\frac{u}{1+u^2} \right) {du}. \end{align*} The first integral can be found in \cite{CampbellLevrieNimbran2021}. The second one requires one integration by parts in combination with \cite{CampbellLevrieNimbran2021}: \begin{align*} & \int_{0}^{\pi/4} \ln^2 (\sin(t)) dt = - \mathcal{G} +\frac{1}{2} G \ln (2) +\frac{23}{384} \pi ^3 +\frac{9}{32} \pi \ln^2(2) \end{align*} and \eqref{eq:2}. Bringing everything together, we obtain the desired result.
\end{proof} \begin{remark} Another method to evaluate the series in Theorem~\ref{thm:alpha2} is given in \cite[Example B.1]{XuZhao2022} using different basis elements. Moreover, Xu and Zhao proved in loc.\ cit.\ that \begin{equation}\label{equ:re2} \sum_{k=1}^\infty \frac{\binom{2k}{k}^2}{k^3 \, 16^k} =- \frac{512}{\pi} \Im \left(\text{Li}_4\Big(\frac{1+i}{2}\Big) \right) - 6 \pi^2 \ln(2) +8 \ln^3 (2) + 384 \frac{\beta(4)}{\pi} + 4\zeta(3). \end{equation} \end{remark} Noting that $$\sum_{k=1}^\infty \frac{\binom{2k}{k}^2}{k^n \, 16^k} = \sum_{k=0}^\infty \frac{4(2k+1)^2\binom{2k}{k}^2}{(k+1)^{n+2} \, 16^{k+1}} = \sum_{k=0}^\infty \frac{4(2(k+1)-1)^2\binom{2k}{k}^2}{(k+1)^{n+2} \, 16^{k+1}}, $$ we obtain, using previous results, that \begin{align*} \sum_{k=0}^\infty \frac{\binom{2k}{k}^2}{(k+1)^3 \, 16^k} & = \frac{48}{\pi} - 16 - \frac{32G}{\pi} - 16\ln (2), \\ \sum_{k = 0}^\infty \frac{\binom{2 k}{k}^2}{(k+1)^4 \, 16^k} & = \frac{128}{\pi}-48-\frac{128G}{\pi}+64\ln(2) + 6 \pi^2 - \frac{256\mathcal{G}}{\pi}-24 \ln^2 (2), \\ \sum_{k=0}^\infty \frac{\binom{2k}{k}^2}{(k+1)^5 \, 16^k} & = \frac{320}{\pi}-128-\frac{384G}{\pi}+192\ln(2) + 24 \pi^2 + \frac{1024\mathcal{G}}{\pi} -96 \ln^2 (2) \\ & - \frac{2048}{\pi} \Im \left(\text{Li}_4\Big(\frac{1+i}{2}\Big) \right) -24 \pi^2 \ln(2) + 32 \ln^3 (2) + \frac{1536\beta(4)}{\pi} + 16\zeta(3). \end{align*} Our applications, as above, of series as in \eqref{eq:4}--\eqref{20220518359PM1A} motivate the study and application of identities as below. 
\begin{theorem}\label{thm-Varz} For any integer $p\geq 0$, we have \begin{align}\label{equ-int-log-sin-1} \sum_{n=0}^\infty \binom{2n}{n}\frac{z^{2n+1}}{(2n+1)^{p+1}}=\frac{\theta}{2} \frac{\ln^p(2\sin\theta)}{p!}+\frac{1}{4p!} \sum_{j=1}^p (-1)^{j-1} \binom{p}{j} \ln^{p-j}(2\sin\theta)\Ls_{j+1}(2\theta), \end{align} where $\theta:=\arcsin(2z)$ and \begin{align} \Ls_j^{(k)}(\theta):=-\int_0^\theta t^k\ln^{j-k-1}\left(2\sin\frac{t}{2}\right)dt\quad \text{and}\quad \Ls_j(\theta):=\Ls_j^{(0)}(\theta). \end{align} \end{theorem} \begin{proof} From \cite{Berndt1989}, we have \begin{align}\label{binom-arcsin-1} \sum_{n=0}^\infty \binom{2n}{n} \frac{z^{2n+1}}{2n+1}=\frac1{2}\arcsin(2z). \end{align} Hence, we get \begin{align*} &\sum_{n=0}^\infty \binom{2n}{n}\frac{z^{2n+1}}{(2n+1)^{p+1}}=\frac{1}{2(p-1)!} \int_0^z \frac{\ln^{p-1}\left(\frac{z}{t}\right)\arcsin(2t)}{t}dt\\ &=\frac{1}{2(p-1)!} \int_0^z \frac{\ln^{p-1}\left(\frac{4z}{4t}\right)\arcsin(2t)}{4t}d(4t)\\ &=\frac{1}{2(p-1)!} \sum_{j=0}^{p-1} \binom{p-1}{j}(-1)^j \ln^{p-1-j}(4z) \int_0^z \frac{\ln^j(4t)\arcsin(2t)}{4t}d(4t)\\ &=\frac{1}{2(p-1)!} \sum_{j=0}^{p-1} \binom{p-1}{j}\frac{(-1)^j}{j+1} \ln^{p-1-j}(4z) \int_0^z \arcsin(2t)d\ln^{j+1}(4t)\\ &=\frac{1}{2(p-1)!} \sum_{j=0}^{p-1} \binom{p-1}{j}\frac{(-1)^j}{j+1} \ln^{p-1-j}(4z) \int_0^\theta xd\ln^{j+1}(2\sin x)\\ &=\frac{1}{2(p-1)!} \sum_{j=0}^{p-1} \binom{p-1}{j}\frac{(-1)^j}{j+1} \ln^{p-1-j}(4z) \left(\theta\ln^{j+1}(2\sin\theta)-\int_0^\theta \ln^{j+1}(2\sin x)dx\right)\\ &=\frac{1}{2(p-1)!}\theta \ln^p(2\sin\theta)\sum_{j=0}^{p-1} \binom{p-1}{j}\frac{(-1)^j}{j+1}+\frac{1}{4p!} \sum_{j=1}^p (-1)^{j-1} \binom{p}{j} \ln^{p-j}(2\sin\theta)\Ls_{j+1}(2\theta). \end{align*} Then, noting the fact that \[\sum_{j=0}^{p-1} \binom{p-1}{j}\frac{(-1)^j}{j+1}=\frac1{p},\] we obtain the desired evaluation. 
\end{proof} Setting $p\leq 2$ in \eqref{equ-int-log-sin-1}, we deduce {\color{black}(with $\theta:=\arcsin(2z)$)} \begin{align}\label{equ-paulExample} &\sum_{n=0}^\infty \binom{2n}{n} \frac{z^{2n+1}}{2n+1}=\frac1{2}\theta,\\ &\sum_{n=0}^\infty \binom{2n}{n} \frac{z^{2n+1}}{(2n+1)^2}=\frac1{2}\theta\ln(2\sin\theta)+\frac1{4}\cl_2(2\theta),\\ &\sum_{n=0}^\infty \binom{2n}{n} \frac{z^{2n+1}}{(2n+1)^3}=\frac{1}{4}\theta\ln^2(2\sin\theta)+\frac1{4}\ln(2\sin\theta)\cl_2(2\theta)-\frac{1}{8}\Ls_3(2\theta), \end{align} where we used the relation $\Ls_2(\theta)=\cl_2(\theta)$ and for any positive integer $m$ the Clausen function $\cl_m(\theta)$ is defined by \begin{align*} &{\cl}_{2m-1}(\theta):=\sum_{n=1}^\infty \frac{\cos(n\theta)}{n^{2m-1}}\quad \text{and}\quad {\cl}_{2m}(\theta):=\sum_{n=1}^\infty \frac{\sin(n\theta)}{n^{2m}} \qquad \forall \theta\in [0,\pi]. \end{align*} If we replace $z$ by $\frac{\sin \theta}{2}$ in \eqref{equ-paulExample}, divide by $\sin \theta$, multiply by $\ln (\sin \theta)$ and integrate between 0 and $\frac{\pi}{2}$, we get the following result: \begin{align*} \int_0^{\pi/2} \frac{\theta}{\sin \theta} \ln \sin (\theta)\, {d\theta} & = \frac{\pi}{2}\sum_{k=0}^\infty \left( \frac{1}{16} \right)^{k} \binom{2k}{k}^2 (O_k - \frac{1}{2}H_k - \ln 2) \frac{1}{2k+1} \\ & = - 4 \mathcal{G} + \frac{\pi^3}{32} + \frac{\pi \ln^2 (2)}{8} \end{align*} using \eqref{Paulbrilliant}. This integral is a companion to \eqref{logcos}. \section{Some concluding remarks} In this paper, we have studied a few variations of Ramanujan-like series as in \eqref{Sfunction}, and we have provided a solution to a problem raised by Wang and Chu in \cite{WangChu2022}. The main result gives closed evaluations of the series of the form \begin{equation*} \sum_{n=1}^{\infty} \left( \frac{1}{16} \right)^{n} \binom{2n}{n}^2 \frac{O_{n}^2 }{(1 + 2n - 2 \lambda)^2 } \end{equation*} for $\lambda=1,2,3,4$.
Our method works for general $\lambda$ although the computation will be more complicated as $\lambda$ increases. In the process, we have utilized a combination of several different approaches to calculate many other series and a few highly non-trivial definite integrals in the intermediate steps, which are of interest in their own right. Our last result, Theorem~\ref{thm-Varz}, differs from the others in flavor in two aspects: First, the binomial coefficients appear as a first power instead of a square; second, the series involves a variable $z$ so that its specializations may offer many series identities involving log-sine integrals. We wonder if Theorem~\ref{thm-Varz} can be generalized to evaluate the following two series: \begin{align}\label{sum-defn} \sum_{n=0}^\infty \binom{2n}{n}\frac{z^{2n+1}}{(2n+1)^p}H_{n}=? \quad \text{and}\quad \sum_{n=0}^\infty \binom{2n}{n}\frac{z^{2n+1}}{(2n+1)^p}H_{2n+1}=?\quad (p\in \mathbb{N}). \end{align} These would lead to possible solutions of some conjectures of Z.-W. Sun \cite{Sun2021}. For example, Prof. Sun conjectured the following two identities of Ap\'ery-like sums (see \cite[Conjectures 10.59 and 10.60]{Sun2021}): \begin{align} &\sum_{n=0}^\infty \frac{\binom{2n}{n}}{(2n+1)^2(-16)^n}\left(5H_{2n+1}+\frac{12}{2n+1}\right)\overset{\text{?}}{=}14\zeta(3),\label{conj1}\\ &\sum_{n=0}^\infty \frac{\binom{2n}{n}}{(2n+1)^3 16^n}\left(9H_{2n+1}+\frac{32}{2n+1}\right)\overset{\text{?}}{=}40\beta(4)+\frac{5}{12}\pi \zeta(3).\label{conj2} \end{align} \subsection*{Competing interests statement} There are no competing interests to declare. \subsection*{Acknowledgement} Ce Xu is supported by the National Natural Science Foundation of China (Grant No. 12101008), the Natural Science Foundation of Anhui Province (Grant No. 2108085QA01), and the University Natural Science Research Project of Anhui Province. The corresponding Grant Number for the last case is KJ2020A0057. J. Zhao is supported by the Jacobs Prize from The Bishop's School.
\section{Introduction} \label{sec:intro} The third observing run (O3) of the Advanced LIGO \citep{adLIGO} and Virgo \citep{adVirgo} detectors has brought the number of compact binary merger observations up to 90 events with a probability of astrophysical origin $>0.5$ \citep{gwtc1,gwtc2, gwtc21, gwtc3_catalogue}. In particular, the 63 confident detections of binary black hole (BBH) mergers (with a false alarm rate FAR$<0.25$~yr$^{-1}$) lead to more accurate constraints on the mass and spin distribution of these systems \citep{gwtc3_population}. The intrinsic distribution of primary black hole (BH) masses inferred by the LIGO--Virgo--KAGRA collaboration (hereafter, LVK) shows several sub-structures, including a main peak at $\approx{10}$ M$_\odot$, a secondary peak at $\approx{30-40}$~M$_\odot$, and a long tail extending up to $\sim{80}$ M$_\odot$ \citep[e.g.,][]{gwtc3_population}. The inferred distribution of mass ratios has a strong preference for equal-mass systems, but several BBHs are confidently unequal-mass \citep[e.g., GW190517;][]{GW190517}. Focusing on BH spins, we can safely exclude that all BHs are maximally spinning \citep{Farr_2017,Farr_2018, gwtc1}. Typical spin magnitudes in BBHs are small, with $\sim{50}$\% of BHs having $\chi \lesssim{} 0.3$ \citep[e.g.,][]{Wysocki_2019,gwtc2}, although not all BHs in the LVK sample have zero spin \citep{Roulet_2019,Miller_2020}. For example, GW151226 \citep{abbott2016GW151226} and GW190517 \citep{gwtc3_population} confidently possess spin. LVK data also support some mild evidence for spin-orbit misalignment \citep[e.g.,][]{Tiwari_2018,gwtc2,gwtc3_population,Venumadhav_2020,Olsen_2022,Callister_2021,Callister_2022}.
These results provide crucial insights to understand BBH formation and evolution \citep[e.g.,][]{Gerosa_2013,Stevenson_2015,Rodriguez_2016,Stevenson_2017,Talbot_2017,Fishbach_2017,Vitale_2017,Zevin_2017,Farr_2018,Barrett_2018,Taylor_2018,Sedda_2019,Roulet_2019,Wysocki_2019,Bouffanais_2019,Bouffanais_2021a,Bouffanais_2021b,Kimball_2020,Baibhav_2020,Sedda_2020,Zevin_2021,Mapelli_2021,Mapelli_2022}. Moreover, the mass and spin of BHs carry the memory of their progenitor stars and therefore are a key to unravel the details of massive star evolution and collapse \citep[e.g.,][]{Fryer_2001,Heger_2003,Belczynski_2010,Mapelli_2013,Fragos_2015,Marchant_2016,Eldridge_2016,Demink_2016,Spera_2017,Bavera_2020,Belczynski_2020,Mandel_2021,Fryer_2022,Olejak_2022,Chattopadhyay_2022,Vanson_2022,Briel_2022,Stevenson_2022, Broekgaarden_2022,Broekgaarden_2022b}. In particular, the spin magnitude of a stellar-origin BH should retain the imprint of the spin of the core of its progenitor star \citep[e.g.,][]{Qin_2018,Qin_2019,Fuller_2019,Bavera_2020,Belczynski_2020,Olejak_2021,Stevenson_2022b}. Several models have been proposed to infer the spin magnitude of the BH from that of the progenitor star. The main open question concerns the efficiency of angular momentum transport within a star \citep[e.g.,][]{Maeder_2000,Cantiello_2014,Fuller_2019b}. If angular momentum is efficiently transferred from the core to the outer layers, mass loss by stellar winds can dissipate most of it, leading to a low-spinning stellar core and then to a low-spinning BH. If instead the core retains most of its initial angular momentum until the final collapse, the BH will be fast spinning. In the shellular model \citep{Zahn_1992,Ekstroem_2012,Limongi_2018,Costa_2019}, angular momentum is mainly transported by meridional currents and shear instabilities, leading to relatively inefficient spin dissipation.
In contrast, according to the Tayler-Spruit dynamo mechanism \citep{Spruit_2002}, differential rotation induces the formation of an unstable magnetic field configuration, leading to an efficient transport of angular momentum via magnetic torques. Building upon the Tayler-Spruit mechanism, \cite{Fuller_2019} derived a new model with an even more efficient angular momentum dissipation, predicting that the core of a single massive star might end its life with almost no rotation. Electromagnetic observations yield conflicting results. Asteroseismology favours slowly rotating cores in the late evolutionary stages, but the vast majority of stars with an asteroseismic estimate of the spin are low-mass stars \citep{Mosser_2012,Gehan_2018,Aerts_2019}. Spins of BHs in high-mass X-ray binaries derived from continuum fitting are extremely high \citep[e.g.,][]{Reynolds_2021,Miller-jones_2021,Fishbach_2022}, but such measurements might be affected by substantial observational biases \citep[e.g.,][]{Reynolds_2021}. Finally, BH spins inferred from quasi-periodic oscillations yield notably smaller values than continuum fitting. For example, the estimate of the dimensionless spin of the BH in GRO~J1655--40 is $\chi=0.7\pm{}0.1$ and $0.290\pm{} 0.003$ from continuum fitting \citep{Shafee_2006} and quasi-periodic oscillations \citep{Motta_2014}, respectively. In a binary system, the evolution of the spin is further affected by tidal forces and accretion, which tend to spin up a massive star, whereas non-conservative mass transfer and common-envelope ejection enhance mass loss, leading to more efficient spin dissipation \citep{Kushnir_2016,Hotokezaka_Piran_2017,Zaldarriaga_2018,Qin_2018}. For example, the model by \cite{Bavera_2020} shows that the second-born BH can be highly spinning if its progenitor was tidally spun up while it was a Wolf-Rayet star orbiting the first-born BH.
Furthermore, the orientation of the BH spin with respect to the orbital angular momentum of the binary system encodes information about binary evolution processes. In a tight binary system, tides and mass transfer tend to align the stellar spins with the orbital angular momentum (\citealt{Gerosa_2018}, but see \citealt{Stegmann_2021} for a possible spin flip process induced by mass transfer). If the binary system is in the field, the supernova kick is the main mechanism that can misalign the spin of a compact object with respect to the orbital angular momentum, by tilting the orbital plane \citep[e.g.,][]{Kalogera_2000}. Finally, the spins of BHs in dynamically formed binary systems are expected to be isotropically distributed, because close encounters in a dense stellar cluster reset any previous signature of alignment \citep[e.g.,][]{Rodriguez_2016,Mapelli_2021}. Here, we perform a model-selection hierarchical Bayesian analysis on confident LVK BBHs ($p_{\rm astro}\,>\,0.9$ and ${\rm FAR}\,<\,0.25\,{\rm yr}^{-1}$). We consider models of field BBHs for three of the most widely used angular-momentum transport models: (i) the shellular model as implemented in the Geneva stellar evolution code \citep{Ekstroem_2012}, (ii) the Tayler-Spruit dynamo model as implemented in the {\sc mesa} code \citep{Cantiello_2014}, and (iii) the model by \cite{Fuller_2019}. Hereafter, we will refer to these three models simply as GENEVA (G), MESA (M) and FULLER (F) models, following the description in \cite{Belczynski_2020}. For each of these models, we consider an additional variation accounting for the Wolf-Rayet (WR) star tidal spin-up mechanism described by \cite{Bavera_2020}. Also, we account for spin tilts induced by core-collapse supernova explosions. This paper is organized as follows. \Sec{sec:models} presents our population-synthesis models. \Sec{sec:analysis} describes the hierarchical Bayesian framework we used and discusses the LVK events used in our study.
We lay down the results in \Sec{sec:results}, and summarize our conclusions in \Sec{sec:conclusion}. \section{Astrophysical Models}\label{sec:models} \subsection{{\sc mobse} and natal kicks} \label{sec:mobse} We simulated our binary systems with the code {\sc mobse} \citep{Mapelli_2017,Giacobbo_2018}. {\sc mobse} is a custom and upgraded version of {\sc bse} \citep{Hurley_2000,Hurley_2002}, in which we introduced metallicity-dependent stellar winds for OB \citep{Vink_2001}, WR \citep{Belczynski_2010}, and luminous blue variable stars \citep{GiacobboMapelli_2018}. {\sc mobse} includes a formalism for electron-capture \citep{GiacobboMapelli_2019}, core-collapse \citep{Fryer_2012}, and (pulsational) pair-instability supernovae \citep{Mapelli_2020}. Here, we adopt the rapid core-collapse supernova prescription, which enforces a gap between the maximum mass of neutron stars and the minimum mass of BHs (2--5 M$_\odot$, \citealt{Oezel_2010,Farr_2011}). We model natal kicks of neutron stars and BHs according to three different models, as shown in Fig.~\ref{fig:kick_distri}: \begin{itemize} \item A unified kick model, in which both neutron stars and BHs receive a kick $v_{\rm kick}\propto{}m_{\rm ej}/m_{\rm rem}$, where $m_{\rm ej}$ is the mass of the ejecta and $m_{\rm rem}$ the mass of the compact remnant \citep[][hereafter GM20]{GiacobboMapelli_2020}. This model naturally produces low kicks for electron-capture, stripped and ultra-stripped supernovae \citep{Tauris_2015,Tauris_2017}. Hereafter, we call this model GM20. \item A model in which compact-object kicks are drawn from a Maxwellian curve with one-dimensional root-mean-square $\sigma=265$ km s$^{-1}$, consistent with observations of Galactic pulsars \citep{Hobbs_2005}. This represents a realistic upper limit for BH natal kicks. Hereafter, we name this model $\sigma{}265$. \item A model in which compact-object kicks are drawn from a Maxwellian curve with $\sigma=150$ km s$^{-1}$.
This value of $\sigma$ is closer to the values suggested by indirect measurements of Galactic BH kicks \citep[e.g.,][]{Repetto_2017,Atri_2019}. Hereafter, we refer to this model as $\sigma{}150$. \end{itemize} \begin{figure} \includegraphics[width=\columnwidth]{kick_pdf.pdf} \caption{Probability distribution function (PDF) of the binary kick velocities in the centre of mass ($V_{\rm CM}$), for our sample of simulated BBH mergers. The centre-of-mass kick velocity takes into account both the first and the second supernova event in each binary system \protect\citep{Perna2022}. Dashed dark-cyan line: model GM20; solid black line: $\sigma{}150$; dotted red line: $\sigma{}265$. This figure only shows the kick velocity of the stellar progenitors of BBHs that merge within the lifetime of the Universe.} \label{fig:kick_distri} \end{figure} For more details about {\sc mobse}, see \cite{GiacobboMapelli_2018}. {\sc mobse} is an open-source code and can be downloaded from \href{https://gitlab.com/micmap/mobse_open}{this link}. \subsection{Spin magnitude} \label{sec:spins} We have implemented four models for the spin magnitude in {\sc mobse}: the first three from \cite{Belczynski_2020}, and the fourth from \cite{Bouffanais_2019}. Given the large uncertainties on angular momentum transport, we do not claim that these four models are a complete description of the underlying physics: our models must be regarded as toy models, which bracket the current uncertainties on BH spins.
\subsubsection{Geneva (G) model} In the Geneva (hereafter, G) model, the dimensionless natal spin magnitude of a BH ($\chi{}$) can be approximated as: \begin{equation} \chi{} = \begin{cases} 0.85 & M_\textup{CO} \leq m_{1} \\ a\,{}M_\textup{CO} + b & m_{1} < M_\textup{CO} < m_{2} \\ a_\textup{low} & M_\textup{CO} \geq m_{2} \end{cases} \label{eq:Gmodel} \end{equation} where $a = -0.088$ for all models, $M_{\rm CO}$ is the final carbon-oxygen mass of the progenitor star, while the values of $b$, $m_1$, $m_2$, and $a_{\rm low}$ depend on metallicity, as indicated in Table~\ref{tab:Gspins}. This model springs from a fit by \cite{Belczynski_2020} to some evolutionary tracks by the Geneva group \citep{Ekstroem_2012}, in which angular momentum transport is relatively inefficient. \begin{table} \begin{tabular}{ccccc} \hline{} $b$ & $m_1\,{}({\rm M}_\odot)$ & $m_2\,{}({\rm M}_\odot)$ & $a_{\rm low}$ & $Z$ \\ \hline{} 2.58 & 16.0 & 24.2 & 0.13 & $\ge{}0.010$\\ 3.578 & 31.0 & 37.8 & 0.25 & $[0.004,\,{}0.010)$\\ 2.434 & 18.0 & 27.7 & 0.0 & $[0.0012,0.004)$\\ 3.666 & 32.0 & 38.8 & 0.25 & $<0.0012$\\ \hline{} \end{tabular} \caption{Parameters adopted in model G. See Eq.~\ref{eq:Gmodel} for details.} \label{tab:Gspins} \end{table} \subsubsection{MESA (M) model} In the M model, we use the fits done by \cite{Belczynski_2020} to a set of stellar tracks run with the {\sc mesa} code. {\sc mesa} models the transport of angular momentum according to the Tayler-Spruit magnetic dynamo (\citealt{Spruit_2002}, see also \citealt{Cantiello_2014}). This yields a dimensionless natal BH spin \begin{equation} \chi{} = \begin{cases} a_1\,{}M_\textup{CO} + b_1 & {\rm if}\,{} M_\textup{CO} \leq m_{1} \\ a_2\,{}M_\textup{CO} + b_2 & {\rm if}\,{} M_\textup{CO}> m_{1}, \end{cases} \label{eq:Mmodel} \end{equation} where $a_1$, $b_1$, $a_2$, $b_2$, and $m_1$ are given in Table~\ref{tab:Mspins}.
\begin{table} \begin{tabular}{cccccc} \hline{} $a_1$ & $b_1$ & $a_2$ & $b_2$ & $m_1\,{}({\rm M}_\odot)$ & $Z$ \\ \hline{} $-0.0016$ & 0.115 & -- & -- & $\infty$ & $\ge{}0.010$ \\ $-0.0006$ & 0.105 & -- & -- & $\infty$ & $[0.004,\,{}0.010)$ \\ $0.0076$ & 0.050 & $-0.0019$ & 0.165 & $12.09$ & $[0.0012,0.004)$\\ $-0.0010$ & 0.125 & -- & -- & $\infty$ & $\le{}0.0012$\\ \hline{} \end{tabular} \caption{Parameters adopted in model M. See Eq.~\ref{eq:Mmodel} for details.} \label{tab:Mspins} \end{table} \subsubsection{Fuller (F) model} \cite{Fuller_2019} predict that angular momentum transport can be even more efficient than that predicted by the Tayler-Spruit dynamo. \cite{Belczynski_2020} summarize the results of the model by \cite{Fuller_2019} simply as $\chi{}= 0.01$ for all single stars and metallicities. \subsubsection{Maxwellian model (Max)} Finally, we also introduce a toy model in which we represent the spin of a BH as a random number drawn from a Maxwellian curve with one-dimensional root-mean-square $\sigma{}_\chi=0.1$, truncated at $\chi_{\rm max}=1.0$. This model was first introduced by \cite{Bouffanais_2019} because it is a good match to the distribution arising from LVK data \citep[e.g.,][]{gwtc1,gwtc2,gwtc3_population}. Hereafter, we will indicate this Maxwellian toy model as Max, for brevity. \subsection{Tidal spin up} \label{sec:spinup} The progenitor star of the second-born BH can be substantially spun up by tidal interactions. In the scenario explored by \cite{Bavera_2020}, a common-envelope or an efficient stable mass transfer episode can lead to the formation of a BH--WR binary system, in which the WR star is the result of mass stripping. The orbital period of this BH--WR binary system can be sufficiently short to lead to efficient tidal synchronisation and spin-orbit coupling. The WR star is then efficiently spun up. If the WR star then collapses to a BH directly, the final spin of the BH will retain the imprint of the final WR spin.
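The four natal-spin prescriptions above (G, M, F, and Max) can be summarized in a few lines of code. The following is an illustrative sketch, not the {\sc mobse} implementation; for brevity it hard-codes only the $Z\ge{}0.010$ rows of Tables~\ref{tab:Gspins} and \ref{tab:Mspins} as defaults.

```python
import math
import random

def chi_geneva(m_co, b=2.58, m1=16.0, m2=24.2, a_low=0.13, a=-0.088):
    """Eq. (1): piecewise fit to the Geneva tracks (Z >= 0.010 defaults)."""
    if m_co <= m1:
        return 0.85
    if m_co < m2:
        return a * m_co + b
    return a_low

def chi_mesa(m_co, a1=-0.0016, b1=0.115, a2=None, b2=None, m1=float("inf")):
    """Eq. (2): linear fit(s) to the MESA (Tayler-Spruit) tracks."""
    if m_co <= m1:
        return a1 * m_co + b1
    return a2 * m_co + b2

def chi_fuller(m_co):
    """Fuller model: very efficient transport, chi = 0.01 for all stars."""
    return 0.01

def chi_maxwellian(rng, sigma=0.1, chi_max=1.0):
    """Max model: Maxwellian with 1D rms sigma, truncated at chi_max."""
    while True:
        chi = math.sqrt(sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)))
        if chi <= chi_max:
            return chi
```

The Maxwellian draw uses rejection sampling to implement the truncation; with $\sigma_\chi=0.1$ the rejection rate is negligible.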
Based on the simulations by \cite{Bavera_2020}, \cite{Bavera_2021} derive a fitting formula to describe the spin-up of the WR star and the final spin of the second-born BH: \begin{equation} \chi{} = \begin{cases} \alpha_{\rm WR}\,{} \log_{10}^2{(P/\textup{day})} + \beta_{\rm WR}\,{} \log_{10}{(P/\textup{day})} & {\rm if}\,{} P \leq 1 \,{}\textrm{day}\\ 0 & \textrm{otherwise}, \end{cases} \label{eq:bavera1} \end{equation} where $P$ is the orbital period of the BH--WR system, $\alpha_{\rm WR} = f\left(M_{\rm WR}, c^{\alpha}_{1},c^{\alpha}_{2},c^{\alpha}_{3}\right)$ and $\beta_{\rm WR}= f\left(M_{\rm WR}, c^{\beta}_{1},c^{\beta}_{2},c^{\beta}_{3}\right)$. In this definition, \begin{equation} f\left(M_{\rm{WR}},c_1,c_2,c_3\right) = \frac{-c_1}{c_2 + \exp{\left(-c_3 M_{\rm{WR}}/{\rm M}_\odot\right)}}, \end{equation} where $M_{\rm WR}$ is the mass of the WR star, while the coefficients $c_1$, $c_2$ and $c_3$ have been determined through non-linear least-squares minimization and can be found in \cite{Bavera_2021}. In {\sc mobse}, we can use these fits for the spin of the second-born BH, while still adopting one of the models presented in the previous subsections (G, M, F, and Max) for the first-born BH. \subsection{Spin orientation} We assume that natal kicks are the only source of misalignment between the orbital angular momentum vector of the binary system and the direction of BH spins \citep{Rodriguez_2016,Gerosa_2018}. Furthermore, we conservatively assume that accretion onto the first-born BH cannot change the direction of its spin \citep{Maccarone_2007}. For simplicity, we also neglect the spin-flip process recently described by \citet{Stegmann_2021}.
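Eq.~\ref{eq:bavera1} and the fitting function $f$ translate directly into code. In the sketch below, the coefficient triples are placeholders (the fitted values are tabulated in \citealt{Bavera_2021}):

```python
import math

def f_bavera(m_wr, c1, c2, c3):
    """Sigmoid-like dependence of the fit coefficients on M_WR (in Msun)."""
    return -c1 / (c2 + math.exp(-c3 * m_wr))

def chi_tidal_spinup(period_days, m_wr, c_alpha, c_beta):
    """Eq. (4): natal spin of the second-born BH after tidal spin-up (B21).

    c_alpha and c_beta are the (c1, c2, c3) coefficient triples for
    alpha_WR and beta_WR; the real values come from Bavera et al. (2021).
    """
    if period_days > 1.0:
        return 0.0  # wide orbits: tides are inefficient, no spin-up
    log_p = math.log10(period_days)
    alpha_wr = f_bavera(m_wr, *c_alpha)
    beta_wr = f_bavera(m_wr, *c_beta)
    return alpha_wr * log_p ** 2 + beta_wr * log_p
```

Note that the fit is continuous at $P=1$~day, where $\log_{10}(P/\textup{day})=0$ and the spin-up vanishes.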
Under such assumptions, we can derive the angle between the direction of the spins of the two compact objects and that of the orbital angular momentum of the binary system as \citep{Gerosa_2013,Rodriguez_2016} \begin{equation} \cos{\delta{}}=\cos{(\nu{}_1)}\,{}\cos{(\nu{}_2)}+\sin{(\nu{}_1)}\,{}\sin{(\nu{}_2)}\,{}\cos{(\phi{})}, \end{equation} where $\nu{}_i$ is the angle between the new (${\vec L}_{\rm new}$) and the old (${\vec L}_{\rm old}$) orbital angular momentum after a supernova ($i=1,\,{}2$ corresponding to the first and second supernova), so that $\cos{(\nu{})}={\vec L}_{\rm new}\cdot{}{\vec L}_{\rm old}/(L_{\rm new}\,{}L_{\rm old})$, while $\phi$ is the phase of the projection of the orbital angular momentum into the orbital plane. \subsection{Setup of {\sc mobse} runs} \label{sec:sims} \begin{table} \begin{tabular}{lccl} \hline{} Model Name & Spin Magnitude\footnotesize{$^a$} & B21\footnotesize{$^{b}$} & Kick Model\footnotesize{$^{c}$}\\ \hline{} G & Geneva (G) & no & GM20, $\sigma{}265$, $\sigma{}150$ \\ G\_B21 & Geneva (G) & yes & GM20, $\sigma{}265$, $\sigma{}150$ \vspace{0.1cm}\\ M & MESA (M) & no & GM20, $\sigma{}265$, $\sigma{}150$ \\ M\_B21 & MESA (M) & yes & GM20, $\sigma{}265$, $\sigma{}150$\vspace{0.1cm} \\ F & Fuller (F) & no & GM20, $\sigma{}265$, $\sigma{}150$ \\ F\_B21 & Fuller (F) & yes & GM20, $\sigma{}265$, $\sigma{}150$ \vspace{0.1cm}\\ Max & Maxwellian (Max) & no & GM20, $\sigma{}265$, $\sigma{}150$\\ Max\_B21 & Maxwellian (Max) & yes & GM20, $\sigma{}265$, $\sigma{}150$\vspace{0.1cm}\\ \hline{} \end{tabular} \caption{Description of the runs performed for this work. $^{a}$Model for the spin magnitude (Section~\ref{sec:spins}). $^{b}$Correction of the spin magnitude accounting for tidal spin up, as described in B21 (Section~\ref{sec:spinup}). 
$^{c}$Model for the natal kick (Section~\ref{sec:mobse}).} \label{tab:runs} \end{table} Hereafter, we consider eight possible models for the spins (see also Table~\ref{tab:runs}): \begin{itemize} \item the first four models (hereafter, G, M, F, and Max) adopt the Geneva, MESA, Fuller and Maxwellian models for both the first- and second-born BHs, \item the other four models (hereafter, G\_B21, M\_B21, F\_B21, and Max\_B21) adopt the fits by \cite{Bavera_2021} for the second-born BH and the Geneva, MESA, Fuller and Maxwellian models for the first-born BH. \end{itemize} For each of these eight spin models we consider three different kick models: the GM20, $\sigma{}265$, and $\sigma{}150$ models discussed in Section~\ref{sec:mobse}. Finally, for each of these 24 models, we considered 12 metallicities ($Z=0.0002$, 0.0004, 0.0008, 0.0012, 0.0016, 0.002, 0.004, 0.006, 0.008, 0.012, 0.016, and 0.02). For each metallicity, we ran $10^7$ ($2\times{}10^7$) binary systems if $Z\leq{}0.002$ ($Z\geq{}0.004$). Hence, for each model we ran $1.8\times{}10^8$ binary systems, for a total of $4.32\times{}10^9$ binary systems encompassing all 24 models. We sampled the initial conditions for each binary system as follows. We randomly drew the zero-age main-sequence mass of the primary star from a Kroupa \citep{Kroupa_2001} initial mass function in the range $5$--$150$ M$_\odot$. The initial orbital parameters (semi-major axis, orbital eccentricity and mass ratio) of binary stars were randomly drawn as described in \cite{Santoliquido_2021}. In particular, we derive the mass ratios $q \equiv{} m_2 / m_1$ (with $m_2\leq{}m_1$) from $\mathcal{F} (q) \propto q^{ -0.1}$ with $q \in [0.1,\,{} 1]$, the orbital period $P$ from $\mathcal{F}(\Pi)\propto{}\Pi^{-0.55}$ with $\Pi = \log_{10}{(P/{\rm d})} \in [0.15,\,{} 5.5]$ and the eccentricity $e$ from $\mathcal{F}(e)\propto{}e ^{-0.42}$ with $0\leq{}e\leq{}0.9$ \citep{Sana_2012}.
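Since all the initial-condition distributions above are truncated power laws, they can be sampled by inverse-transform sampling. A minimal sketch (not the {\sc mobse} sampler; the primary-mass draw uses the single Kroupa slope $-2.3$, which is the relevant one above $0.5$ M$_\odot$):

```python
import random

def sample_power_law(rng, exponent, lo, hi):
    """Inverse-transform sample from pdf(x) proportional to x**exponent on [lo, hi]."""
    a1 = exponent + 1.0
    u = rng.random()
    return (lo ** a1 + u * (hi ** a1 - lo ** a1)) ** (1.0 / a1)

def sample_binary(rng):
    """One set of initial conditions, following the distributions of Sana et al. (2012)."""
    m1 = sample_power_law(rng, -2.3, 5.0, 150.0)     # primary ZAMS mass [Msun]
    q = sample_power_law(rng, -0.1, 0.1, 1.0)        # mass ratio m2/m1
    log_p = sample_power_law(rng, -0.55, 0.15, 5.5)  # Pi = log10(P/day)
    ecc = sample_power_law(rng, -0.42, 0.0, 0.9)     # orbital eccentricity
    return m1, q, log_p, ecc
```

The closed-form inverse CDF used here is valid because every exponent differs from $-1$; for the eccentricity, the lower bound can be taken to zero since the exponent $+1 = 0.58$ is positive.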
As to the main binary evolution parameters, here we use $\alpha=1$ for common envelope, while the parameter $\lambda{}$ depends on the stellar structure as described in \cite{Claeys_2014}. The other binary evolution parameters are set up as described in \cite{Santoliquido_2021}. \subsection{Merger rate density} \label{sec:cosmorate} We estimate the evolution of BBH mergers with redshift by using our semi-analytic code {\sc Cosmo}$\mathcal{R}${\sc ate}{} \citep{Santoliquido_2020,Santoliquido_2021}. With {\sc Cosmo}$\mathcal{R}${\sc ate}{}, we convolve our {\sc mobse{}}{} catalogues (Section~\ref{sec:sims}) with an observation-based metallicity-dependent star formation rate (SFR) density evolution of the Universe, SFRD$(z,Z)$, in order to estimate the merger rate density of BBHs as \begin{equation} \label{eq:mrd} \mathcal{R}_{\rm BBH}(z) = \int_{z_{{\rm{max}}}}^{z}\left[\int_{Z_{{\rm{min}}}}^{Z_{{\rm{max}}}} {\rm{SFRD}}(z',Z)\,{} \mathcal{F}(z',z,Z) \,{}{\rm{d}}Z\right]\,{} \frac{{{\rm d}t(z')}}{{\rm{d}}z'}\,{}{\rm{d}}z', \end{equation} where \begin{equation} \frac{{\rm{d}}t(z')}{{\rm{d}}z'} = [H_{0}\,{}(1+z')]^{-1}\,{}[(1+z')^3\Omega_{M}+ \Omega_\Lambda]^{-1/2}. \end{equation} In the above equation, $H_0$ is the Hubble constant, while $\Omega_M$ and $\Omega_\Lambda$ are the matter and dark energy density parameters, respectively. We adopt the values in \cite{Planck2018}. The term $\mathcal{F}(z',z,Z)$ is given by: \begin{equation} \mathcal{F}(z',z,Z) = \frac{1}{\mathcal{M}_{{\rm{TOT}}}(Z)}\frac{{\rm{d}}\mathcal{N}(z',z, Z)}{{\rm{d}}t(z)}, \end{equation} where $\mathcal{M}_{{\rm{TOT}}}(Z)$ is the total simulated initial stellar mass, and ${\rm{d}}\mathcal{N}(z',z, Z)/{\rm{d}}t(z)$ is the rate of BBHs forming from stars with initial metallicity $Z$ at redshift $z'$ and merging at $z$, extracted from our {\sc mobse{}}{} catalogues.
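The cosmological factor ${\rm d}t/{\rm d}z'$ can be evaluated directly. The sketch below assumes round parameter values close to those of \cite{Planck2018} (the exact values adopted in the paper differ slightly):

```python
import math

# Approximate flat Lambda-CDM parameters (close to Planck 2018).
H0_KM_S_MPC = 67.7
OMEGA_M = 0.31
OMEGA_L = 0.69
KM_S_MPC_TO_INV_GYR = 1.0227e-3  # 1 km/s/Mpc expressed in Gyr^-1

def dt_dz(z):
    """|dt/dz| in Gyr: [H0 (1+z)]^-1 [(1+z)^3 Omega_M + Omega_L]^-1/2."""
    h0 = H0_KM_S_MPC * KM_S_MPC_TO_INV_GYR
    e_z = math.sqrt((1.0 + z) ** 3 * OMEGA_M + OMEGA_L)
    return 1.0 / (h0 * (1.0 + z) * e_z)
```

As a sanity check, ${\rm d}t/{\rm d}z$ at $z=0$ equals the Hubble time $1/H_0 \approx 14.4$ Gyr and decreases monotonically with redshift.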
In {\sc Cosmo}$\mathcal{R}${\sc ate}{}, ${\rm{SFRD}}(z,Z)$ is given by \begin{equation} {\rm{SFRD}}(z',Z) = \psi(z')\,{}p(z',Z), \end{equation} where $\psi(z')$ is the cosmic SFR density at formation redshift $z'$, and $p(z',Z)$ is the log-normal distribution of metallicities $Z$ at fixed formation redshift $z'$, with average $\mu(z')$ and spread $\sigma_{Z}$. Here, we take both $\psi{}(z)$ and $\mu{}(z)$ from \cite{Madau_2017}. Finally, we assume a metallicity spread $\sigma_Z = 0.3$. \subsection{Hyper-parametric model description} \label{sec:hyper} For each of our models (Table~\ref{tab:runs}), described by their hyper-parameters $\lambda$, we predict the distributions of BBH mergers \begin{equation} \frac{\mathrm{d}N}{\mathrm{d}\theta}(\lambda) = N_\lambda \,{}p(\theta|\lambda), \end{equation} where $\theta$ are the merger parameters, and $N_\lambda$ is the total number of mergers predicted by the model. Assuming an instrumental horizon redshift $z_{\rm max} =1.5$, $N_\lambda$ can be calculated as \begin{equation} N_\lambda = \int_0^{z_{\rm max}} \mathcal{R}(z)\,{}\frac{{\rm d}V_{\rm c}}{{\rm d}z}\,{} \frac{T_{\rm obs}}{(1+z)}\,{}{\rm d}z, \label{eq:Nlambda} \end{equation} where $\frac{{\rm d}V_{\rm c}}{{\rm d}z}$ is the differential comoving volume and $T_{\rm obs}$ the observation duration. To model the population of merging BBHs, we have chosen five observable parameters $\theta = \{\mathcal{M}_{\rm c},\,{} q,\,{} z, \,{}\chi_{\rm eff},\,{}\chi_{\rm p} \}$, where $\mathcal{M}_{\rm c} = (m_1\,{}m_2)^{3/5}/(m_1+m_2)^{1/5}$ is the chirp mass in the source frame, with $m_1$ ($m_2$) the mass of the primary (secondary) BH of the binary, $q= m_2/m_1$ is the mass ratio, and $z$ is the redshift of the merger. In addition, we used two spin parameters: the effective spin ($\chi_{\rm eff}$) and the precessing spin ($\chi_{\rm p}$).
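Eq.~\ref{eq:Nlambda} is a one-dimensional integral and can be approximated by simple quadrature once $\mathcal{R}(z)$ and ${\rm d}V_{\rm c}/{\rm d}z$ are available. A sketch with user-supplied callables (hypothetical function names, trapezoidal rule):

```python
def n_lambda(rate_fn, dvc_dz_fn, t_obs, z_max=1.5, n=1000):
    """Expected number of mergers out to z_max (Eq. for N_lambda).

    rate_fn:   merger rate density R(z) in comoving units
    dvc_dz_fn: differential comoving volume dVc/dz in matching units
    t_obs:     observing time in yr
    """
    dz = z_max / n
    total = 0.0
    for i in range(n + 1):
        z = i * dz
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal end-point weights
        total += w * rate_fn(z) * dvc_dz_fn(z) * t_obs / (1.0 + z)
    return total * dz
```

The $(1+z)^{-1}$ factor converts the source-frame rate to the detector frame; with constant unit inputs the integral reduces to $\ln(1+z_{\rm max})$, a convenient check.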
The effective spin $\chi_{\rm eff}$ is the mass-weighted projection of the two individual BH spins on the binary orbital angular momentum $\vec L$ \begin{equation} \chi_{\rm eff} = \frac{(\vec \chi_1 + q\,{}\vec \chi_2)}{1+q} \cdot{} \frac{\vec L}{L}, \end{equation} where $\vec \chi_{1,2} = {\vec s_{1,2}\,{}c/(G\,{}m_{1,2}^2)}$ are the dimensionless spin parameters of the two BHs. The precessing spin $\chi_{\rm p}$ is defined as \begin{equation} \chi_{\rm p} = {\rm max}\left(\chi_{1,\perp},\,{} A\, \chi_{2,\perp}\right), \end{equation} where $\chi_{1,\perp}$ ($\chi_{2,\perp}$) is the spin component of the primary (secondary) BH perpendicular to the orbital angular momentum vector $\vec{L}$, and $A = \left({4\,{}q+3}\right)\,{}q/\left({4+3\,{}q}\right)$. To compute the distributions $p(\theta|\lambda)$, we constructed a catalogue of $10^6$ sources for all possible combinations of hyper-parameters $\lambda$, using the merger rate density and the metallicity given by {\sc Cosmo}$\mathcal{R}${\sc ate}{}. From these catalogues, we derived continuous estimates of $p(\theta|\lambda)$ by means of a Gaussian kernel density estimation with a bandwidth of 0.15. \begin{figure} \includegraphics[width=\columnwidth]{Mc_distrib_a_0.4.png} \caption{Predicted detectable distribution of chirp mass, for each kick model: GM20 (solid dark-cyan line), $\sigma{}150$ (dotted black line) and $\sigma265$ (dashed red line). By detectable distribution, we mean the distribution of simulated BBHs with sufficiently high signal-to-noise ratio (Section~\ref{sec:analysis}). The shaded gray area is the distribution we obtain by stacking the posterior samples of the 59 confident detections from GWTC-3 (Appendix~\ref{sec:events}).} \label{fig:Mc_distri} \end{figure} \begin{figure*} \includegraphics[width=2\columnwidth]{chip_chieff_distrib_V3.png} \caption{Predicted detectable distribution of $\chi_{\rm p}$ (left) and $\chi_{\rm eff}$ (right) for all of our models.
Different colours refer to the spin model: G, M, F and Max. Solid (dashed) lines include (do not include) the tidal spin-up model by B21. From top to bottom: GM20, $\sigma{}150$, and $\sigma{}265$. The shaded gray areas are the distributions we obtain by stacking the posterior samples of the 59 confident detections from GWTC-3 (Appendix~\ref{sec:events}).} \label{fig:chi_distri} \end{figure*} \section{Hierarchical Bayesian inference}\label{sec:analysis} Given a set $\mathcal{H}=\{h^k\}_{k=1}^{N_{\rm obs}}$ of $N_{\rm obs}$ GW observations, the posterior distribution of a set of hyper-parameters $\lambda$ associated with an astrophysical model can be described as an inhomogeneous Poisson distribution \citep[e.g.,][]{Loredo_2004,Mandel_2019,Thrane_2019,Bouffanais_2019,Bouffanais_2021a,Bouffanais_2021b}: \begin{equation} p(\lambda, N_\lambda|\mathcal{H}) = e^{-\mu_\lambda}\,\pi(\lambda, N_\lambda)\prod^{N_{\rm obs}}_{k=1}N_\lambda\int\mathcal{L}^k(h^k|\theta)\,p(\theta|\lambda)\,\mathrm{d}\theta, \label{eq:p1} \end{equation} where $N_{\rm obs}$ is the number of events observed by the LVK, each described by an ensemble of parameters $\theta$, $N_\lambda$ is the number of mergers predicted by the model (as calculated in eq.~\ref{eq:Nlambda}), $\mu_\lambda$ is the number of observations predicted given a model and a detector, $\pi(\lambda,\,{} N_\lambda)$ are the prior distributions on $\lambda$ and $N_\lambda$, and $\mathcal{L}^k(h^k|\theta)$ is the likelihood of the $k^{\rm th}$ observation. The predicted number of events $\mu_\lambda$ can be written in terms of the detection efficiency $\beta(\lambda)$ for a given model: \begin{equation} \mu_\lambda = N_\lambda\,\beta(\lambda) , \quad \mathrm{with} \quad \beta(\lambda) = \int_{\theta}p(\theta|\lambda)\,{}p_{\rm det}(\theta)\,{}\mathrm{d}\theta, \label{eq:p2} \end{equation} where $p_{\rm det}(\theta{})$ is the detection probability for a set of parameters $\theta$.
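The Monte Carlo estimate of $\beta(\lambda)$ in eq.~\ref{eq:p2} amounts to averaging $p_{\rm det}$ over draws $\theta_i \sim p(\theta|\lambda)$, e.g. over a simulated catalogue. A minimal sketch (hypothetical helper names):

```python
def detection_efficiency(samples, p_det):
    """Monte Carlo beta(lambda): average detection probability over
    catalogue draws theta_i ~ p(theta|lambda)."""
    return sum(p_det(theta) for theta in samples) / len(samples)

def expected_detections(n_lambda, samples, p_det):
    """mu_lambda = N_lambda * beta(lambda): predicted number of detections."""
    return n_lambda * detection_efficiency(samples, p_det)
```

In practice, `samples` would be the $10^6$-source catalogues of Section~\ref{sec:hyper} and `p_det` the SNR-threshold detection probability described below.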
This probability can be inferred by computing the optimal signal-to-noise ratio (SNR) of the sources and comparing it to a detection threshold. In our case, we chose as reference a threshold $\rho_{\rm thr} = 8$ in the LIGO Livingston detector, for which we approximated the sensitivity using the measurements for the three runs separately \citep{LIGO_2010, LIGO_2016, Wysocki_2018}. The values of the events' log-likelihoods were derived from the posterior and prior samples released by the LVK. Hence, the integral in eq.~\ref{eq:p1} is approximated with a Monte Carlo approach as \begin{equation} \mathcal{I}^k = \int \mathcal{L}^k(h^k|\theta)\,{}p(\theta|\lambda)\,{}\mathrm{d}\theta \approx \frac{1}{N^k_s}\sum^{N^k_s}_{i=1}\frac{p(\theta_i^k|\lambda)}{\pi^k(\theta^k_i)}, \label{eq:indiv_logLike} \end{equation} where $\theta^k_i$ is the $i^{\rm th}$ posterior sample of the $k^{\rm th}$ detection and $N_s^k$ is the total number of posterior samples for the $k^{\rm th}$ detection. To compute the prior term in the denominator, we also used a Gaussian kernel density estimation. Finally, we can also choose to neglect the information coming from the number of sources predicted by the model when estimating the posterior distribution. By doing so, we can gain some insight into the impact of the rate on the analysis. In practice, this can be done by marginalising eq.~\ref{eq:p1} over $N_\lambda$ using a prior $\pi(N_\lambda)\sim 1/N_\lambda$ \citep{Fishbach_2018}, which yields the following expression for the model log-likelihood \begin{equation} \mathcal{L} = p(\lambda|\{h\}^k)\sim \pi(\lambda)\,{}\prod_{k=1}^{N_{\rm obs}}\left[\frac{ \mathcal{I}^k}{\beta(\lambda)}\right].
\label{eq:p4} \end{equation} We adopted the formalism described in eqs.~\ref{eq:p1}--\ref{eq:p4} to perform a hierarchical Bayesian inference to compare the astrophysical models presented in Sec.~\ref{sec:models} with the third gravitational-wave transient catalogue (GWTC-3), the most up-to-date catalogue of gravitational-wave events from the LVK \citep{gwtc3_catalogue, gwtc3_population}. GWTC-3 contains 90 event candidates with probability of astrophysical origin $p_{\rm astro}>0.5$. From GWTC-3, we extract 59 confident detections of BBHs with a false alarm rate ${\rm FAR}<0.25$~yr$^{-1}$. In this sub-sample, we do not include binary neutron stars and neutron star -- BH systems, and we also exclude the other BBH candidates with a higher FAR. Our chosen FAR threshold ensures a sufficiently pure sample for our analysis \citep{gwtc3_population}. A list of the events used in this study is available in Appendix \ref{sec:events}. For the observable parameters $\theta$, we use the choice described in Section~\ref{sec:hyper}, namely $\theta = \{\mathcal{M}_{\rm c},\,{} q,\,{} z, \,{}\chi_{\rm eff},\,{}\chi_{\rm p} \}$. \section{Results}\label{sec:results} \subsection{Chirp mass}\label{sec:chirp} The chirp mass distribution (Fig.~\ref{fig:Mc_distri}) does not depend on the spin model, by construction. Therefore, we only show the dependence on the natal kick model. Models $\sigma{}150$ and $\sigma{}265$ show a similar distribution of chirp masses, with two peaks of similar height: one at $\mathcal{M}_{\rm c}\approx{8}$~M$_\odot$ and the other (broader) peak at $\mathcal{M}_{\rm c}\approx{15}$~M$_\odot$. In contrast, model GM20 has a much stronger preference for low-mass BHs, with a dominant peak at $\mathcal{M}_{\rm c}\approx{8}$~M$_\odot$. The reason for this difference is that all BHs in tight binary systems receive slow natal kicks in model GM20 (Fig.~\ref{fig:kick_distri}).
This happens because stars in tight binary systems lose their envelope during mass transfer episodes; hence, the mass of supernova ejecta ($m_{\rm ej}$) is small, triggering low kicks in model GM20. Figure~\ref{fig:Mc_distri} also compares the detectable distribution of our models with the stacked posterior samples from the confident BBH detections in GWTC-3. This figure highlights two main differences between the population synthesis models and the posterior samples: the peak at $\mathcal{M}_{\rm c}\approx{15}$ M$_\odot$ is stronger in the models than it is in the data, while the data present a more significant excess at $\mathcal{M}_{\rm c}\approx{25-30}$ M$_\odot$ than the models. Finally, the peak at $\mathcal{M}_{\rm c}\approx{9}$ M$_\odot$ in the data approximately matches the peak at $\mathcal{M}_{\rm c}\approx{8}$ M$_\odot$ in the models. The main features of our population synthesis models (in particular, the peaks at $\mathcal{M}_{\rm c}\approx{8-10}$ M$_\odot$ and $\mathcal{M}_{\rm c}\approx{15-20}$ M$_\odot$) are also common to other population-synthesis models \citep[e.g.,][]{Belczynski_2020,Vanson_2022} and mostly spring from the core-collapse SN prescriptions by \cite{Fryer_2012}. Alternative core-collapse SN models \citep[e.g.,][]{Mapelli_2020,Mandel_2021,Patton_2022,Olejak_2022} produce different features and deserve further investigation (Iorio et al., in prep.). \subsection{Spin parameters}\label{sec:spin} Figure~\ref{fig:chi_distri} shows the detectable distribution of spin parameters $\chi_{\rm p}$ and $\chi_{\rm eff}$ for all of our models. By construction, large spins are much more common in models G and G\_B21, while models F and F\_B21 have a strong predominance of vanishingly small spins. Models M, M\_B21, Max and Max\_B21 are intermediate between the other two extreme models. 
Whether or not we include the correction by B21 has a negligible impact on the distribution of $\chi_{\rm p}$ and $\chi_{\rm eff}$ for models G, because of the predominance of large spin magnitudes. In contrast, introducing the spin-up correction by B21 has a key impact on models F, because it is the only way to account for mild to large spins in these models. The correction by B21 is important also for models M and Max, as it is responsible for the large-spin wings of their distributions. Finally, our model with slow kicks (GM20) results in a distribution of $\chi_{\rm p}$ that is more peaked at zero (for models G, M and Max) than the other two kick models ($\sigma{}150$ and $\sigma{}265$). In fact, the supernova kicks in model GM20 are not large enough to appreciably misalign BH spins (see Fig.~\ref{fig:kick_distri}). A similar effect is visible in the distribution of $\chi_{\rm eff}$: model $\sigma{}265$ produces a distribution of $\chi_{\rm eff}$ that is less asymmetric about zero than models $\sigma{}150$ and especially GM20. \subsection{Model Selection} \label{sec:model_sel} Figure~\ref{fig:results} and Table~\ref{tab:results} report the values of the log-likelihood $\log\mathcal{L}$ defined in Eq.~\ref{eq:p4}. We can quantify the difference between two models A and B by computing the average absolute difference in percentage \begin{equation} \Delta{\rm log}\mathcal{L}({\rm A,\,B}) = \left<\frac{2\left|{\rm log}\mathcal{L}_i^{\rm A}-{\rm log}\mathcal{L}_i^{\rm B}\right|}{{\rm log}\mathcal{L}_i^{\rm A}+{\rm log}\mathcal{L}_i^{\rm B}}\right>_{var}, \label{eq:average_diff} \end{equation} where the average is taken over the variations $var$ not involved in the comparison: $var$ runs over the kick models if A and B are spin models, and over the spin models if A and B are kick models. For example, to compare the two models G and G\_B21, we set A = G\_B21, B = G, and $var = \{{\rm GM}20,\, \sigma150, \,\sigma265\}$. The tidal spin-up mechanism (B21) affects the spin of a small part of the population of each model (Fig.~\ref{fig:chi_distri}).
However, it improves the likelihood of the F and M models significantly (e.g., $\Delta{\rm log}\mathcal{L}({\rm M\_B21},\,{}{\rm M}) =89\%$, Table~\ref{tab:results}). This improvement of the log-likelihood can be explained by the presence of higher values of $\chi_{\rm p}$ and $\chi_{\rm eff}$ in the distributions of populations M\_B21 and F\_B21 compared to M and F (Fig.~\ref{fig:chi_distri}). The F model yields $\mathcal{L}({\rm F})=-\infty{}$ if we do not include the tidal spin-up correction, regardless of the kick model. This indicates that the LVK data do not support vanishingly small BH spins for the entire BBH population. However, it is sufficient to inject a tiny sub-population of spinning BHs, by switching on the B21 correction, for the F model to become one of the best models considered here. In fact, the F\_B21 models include only 0.4\% of BHs with $\chi > 0.01$ and achieve $\log{\mathcal{L}} > 200$ (for kick models $\sigma{150}$ and $\sigma{265}$). The G and G\_B21 spin models exhibit lower log-likelihood values than the others for all kick models: ${\rm log}\mathcal{L} \leqslant 150$ for $\sigma{150}$ and $\sigma265$, and ${\rm log}\mathcal{L} < 0$ for GM20. This happens because the distribution of $\chi_{\rm eff}$ has non-negligible support for extreme values $\chi_{\rm eff}<-0.5$ and $\chi_{\rm eff}>0.5$ (Fig.~\ref{fig:chi_distri}). The kick models $\sigma{150}$ and $\sigma265$ show similar results ($\Delta{\rm log}\mathcal{L}({\rm \sigma{150},{}\sigma265})<3\%$) for all spin assumptions. Also, for all spin assumptions, the GM20 kick model scores a significantly lower likelihood than the $\sigma{150}$ and $\sigma265$ models, with $\Delta{\rm log}\mathcal{L}({\rm \sigma150,{}{\rm GM20}})\,\sim\, \Delta{\rm log}\mathcal{L}({\rm \sigma265,{}GM20})\,\sim\,$150\%.
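The comparison metric of Eq.~\ref{eq:average_diff} is straightforward to compute. A sketch; plugging in the M\_B21 and M columns of Table~\ref{tab:results} reproduces the quoted 89\%:

```python
def delta_log_like(loglike_a, loglike_b):
    """Average absolute fractional difference between two models,
    taken over the variations var (here: the list index)."""
    terms = [2.0 * abs(a - b) / (a + b) for a, b in zip(loglike_a, loglike_b)]
    return sum(terms) / len(terms)
```

Note that the metric is undefined when ${\rm log}\mathcal{L}^{\rm A}_i + {\rm log}\mathcal{L}^{\rm B}_i = 0$ and unbounded when one of the log-likelihoods vanishes, which is why single terms can exceed 100\%.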
This result can be explained by the high peak of model GM20 at low chirp masses ($\mathcal{M}_{\rm c}\sim 8 {\rm M}_\odot$, see Sec.~\ref{sec:chirp} and Fig.~\ref{fig:Mc_distri}) and by the low values of $\chi{}_{\rm p}$ compared to the other kick models (Fig.~\ref{fig:chi_distri}). Models Max and Max\_B21 are possibly the best match to the data, but this is not surprising, because they were built as a toy model to visually match the data. Among the astrophysically-motivated models (i.e., after excluding the Max model), M, M\_B21 and F\_B21 (with kick models $\sigma{150}$ and $\sigma{}265$) are the most favoured by the data. This might be interpreted as support for the Tayler-Spruit instability mechanism (adopted in models M) and for the tidal spin-up model by B21. \subsection{Importance of $\chi_{\rm p}$} The $\chi_{\rm p}$ parameter encodes information on the spin component in the orbital plane. Its impact on gravitational-wave signals is much lower than that of $\chi_{\rm eff}$, and therefore its measurement is less precise. To understand the impact of $\chi_{\rm p}$ on our results, we re-ran the analysis without this parameter. The results are shown in Table~\ref{tab:results_nochip} and in Fig.~\ref{fig:results} with empty markers. Fig.~\ref{fig:results} shows that, if we do not include $\chi_{\rm p}$, the models M and M\_B21 have almost the same log-likelihood, and even the F model yields a positive log-likelihood. Furthermore, the analysis without $\chi_{\rm p}$ results in significantly larger values of $\mathcal{L}$ for the kick model GM20. Our results demonstrate that the measured $\chi_{\rm p}$ of GWTC-3 BBHs carries substantial information, despite the large uncertainties.
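The two spin parameters used throughout the analysis follow directly from their definitions in Section~\ref{sec:hyper}. A sketch, assuming the aligned ($\chi_{i,z}$) and in-plane ($\chi_{i,\perp}$) spin components are given:

```python
def chi_eff(chi1_z, chi2_z, q):
    """Effective spin: mass-weighted projection of the two spins on L."""
    return (chi1_z + q * chi2_z) / (1.0 + q)

def chi_p(chi1_perp, chi2_perp, q):
    """Precessing spin: max(chi1_perp, A * chi2_perp), A = q(4q+3)/(4+3q)."""
    a = (4.0 * q + 3.0) * q / (4.0 + 3.0 * q)
    return max(chi1_perp, a * chi2_perp)
```

For equal masses ($q=1$) the weighting factor $A$ equals one, so $\chi_{\rm p}$ reduces to the larger of the two in-plane components.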
\begin{table} \centering \caption{Log-likelihood $\mathcal{L}$ (Eq.~\ref{eq:p4}) estimated with five merger parameters $\theta = \left\{\mathcal{M}_{\rm c}\,, z\,, \chi_{\rm eff}\,, q\,,\chi_{\rm p} \right\}$.} \label{tab:results} \begin{tabular}{lccc} \hline Model Name & GM20 & $\sigma$150 & $\sigma$265 \\ \hline G & -1 & 149 & 145 \\ G\_B21 & -12 & 150 & 141 \vspace{0.1cm} \\ M & 0 & 162 & 171 \\ M\_B21 & 36 & 232 & 232 \vspace{0.1cm} \\ F & -$\infty$ & -$\infty$ & -$\infty$ \\ F\_B21 & 88 & 250 & 242 \vspace{0.1cm} \\ Max & 92 & 255 & 254 \\ Max\_B21 & 106 & 257 & 250 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Log-likelihood $\mathcal{L}$ (Eq.~\ref{eq:p4}) estimated with four merger parameters $\theta = \left\{\mathcal{M}_{\rm c}\,, z\,, \chi_{\rm eff}\,, q \right\}$. Here, we ignore $\chi_{\rm p}$.} \label{tab:results_nochip} \begin{tabular}{lccc} \hline Model Name & GM20 & $\sigma$150 & $\sigma$265 \\ \hline G & 35 & 146 & 147 \\ G\_B21 & 47 & 149 & 154 \vspace{0.1cm} \\ M & 141 & 192 & 190 \\ M\_B21 & 130 & 199 & 180 \vspace{0.1cm} \\ F & 85 & 146 & 138 \\ F\_B21 & 185 & 207 & 180 \vspace{0.1cm} \\ Max & 161 & 208 & 155 \\ Max\_B21 & 160 & 206 & 200 \\ \hline \end{tabular} \end{table} \begin{figure*} \includegraphics[width=13cm]{results_final_all.png} \caption{Values of the log-likelihood $\mathcal{L}$ defined in Eq.~\ref{eq:p4} for the four different models Geneva (G), MESA (M), Fuller (F), and Maxwellian (Max), with/without the tidal spin-up mechanism (B21). Blue crosses: GM20; dark pluses: $\sigma{}150$; red circles: $\sigma{}265$.} \label{fig:results} \end{figure*} \section{Discussion} \label{sec:conclusion} The spin magnitude of BHs is largely uncertain, mostly because we do not fully understand angular momentum transport in massive stars. 
Here, we have taken a number of spin models that bracket the main uncertainties, implemented them in our population-synthesis code {\sc mobse}, and compared them against GWTC-3 data within a hierarchical Bayesian framework. The data do not support models in which the entire BH population has vanishingly small spins (model F). This result is mainly driven by the $\chi_{\rm p}$ parameter. This is in agreement with, e.g., the complementary analysis presented in \cite{Callister_2022}. They employed a variety of methods to measure the distribution of spin magnitudes and orientations of BBH mergers, and concluded that the existence of a sub-population of BHs with vanishing spins is not required by current data. \cite{Callister_2022} find that the fraction of non-spinning BHs can comprise up to $\sim{60 - 70}$\% of the total population. In our F\_B21 models, $\sim{}99.6$\% of BHs have $\chi<0.01$. Recently, \cite{Roulet_2021} and \cite{Galaudage_2021} claimed the existence of a sub-population of zero-spin BHs. From our analysis, we cannot exclude the existence of such a sub-population, as the F model with the B21 correction (F\_B21) still represents a good match to the data. Similarly to \cite{Belczynski_2020} and \cite{Gerosa_2018}, we find that models with large spins (G, G\_B21) are less favoured by the data, but they are still acceptable if we allow for large kicks. Overall, we find a preference for large natal kicks. This result goes in the same direction as the work by \cite{Callister_2021}. However, this preference for large natal kicks is degenerate with the adopted formation channel. Had we included the dynamical formation channel in dense star clusters, we would have added a sub-population of isotropically oriented spins (see, e.g., Figure~8 of \citealt{Mapelli_2022}). In a forthcoming study, we will extend our analysis to multiple formation channels.
While it is unlikely that BBH mergers originate from a single channel, adding more formation channels to a hierarchical Bayesian analysis dramatically increases the number of parameters, making it more difficult to reject some portions of the parameter space. \section{Summary} \label{sec:summary} The origin of BH spins is still controversial, and angular momentum transport inside massive stars is one of the main sources of uncertainty. Here, we apply hierarchical Bayesian inference to derive constraints on spin models from the 59 most confident BBH merger events in GWTC-3. We consider five parameters: chirp mass, mass ratio, redshift, effective spin, and precessing spin. For model selection, we use a set of binary population synthesis simulations spanning different assumptions for black hole spins and natal kicks. In particular, our spin models account for relatively inefficient (G), efficient (Max and M), and very efficient angular-momentum transport (F). A higher efficiency of angular momentum transport is associated with lower BH spins. In particular, model F predicts vanishingly small spins for the entire BH population. For each of our models, we also include the possibility that some BHs are tidally spun-up (B21). We consider three different natal kick models: according to models $\sigma{}265$ and $\sigma{}150$, we randomly draw the kicks from a Maxwellian curve with $\sigma{}=265$ and 150 km~s$^{-1}$, respectively; in the third model (GM20), we also derive the kicks from a Maxwellian curve with $\sigma{}=265$ km~s$^{-1}$, but the kick magnitude is then modulated by the ratio between the mass of the ejecta and the mass of the BH. We summarize our main results as follows. \begin{itemize} \item The data from GWTC-3 do not support models in which the entire BH population has vanishingly small spins (model F). 
\item In contrast, models in which most spins are vanishingly small, but that also include a sub-population of tidally spun-up BHs (model F\_B21) are a good match to the data. \item The models in which angular momentum transport is relatively inefficient (G and G\_B21) yield log-likelihood values that are much lower than models with efficient angular momentum transport (M, M\_B21, Max, and Max\_B21). \item Models with large BH kicks ($\sigma{}150$ and $\sigma{}265$) are favoured by our analysis with respect to low-kick models (GM20). \item Our results show that the precessing spin parameter $\chi_{\rm p}$ plays a crucial role in constraining the spin distribution of BBH mergers. \end{itemize} \section*{Acknowledgements} MM, CP, FS and YB acknowledge financial support from the European Research Council for the ERC Consolidator grant DEMOBLACK, under contract no. 770017. This research made use of \textsc{NumPy} \citep{Harris_2020}, and \textsc{SciPy} \citep{SciPy_2020}. For the plots we used \textsc{Matplotlib} \citep{Hunter_2007}. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. The latest public version of {\sc mobse} can be downloaded from \href{https://gitlab.com/micmap/mobse_open}{this repository}. {\sc Cosmo}$\mathcal{R}${\sc ate}{} can be downloaded from \href{https://gitlab.com/Filippo.santoliquido/cosmo_rate_public}{this link}. \bibliographystyle{mnras}
\section{Matrix Facts} \newtheorem*{fact:ne}{Fact~\ref{lem:ne}} \begin{fact:ne} \lemne \end{fact:ne} \begin{proof} Compared with Lemma~B.2 of~\cite{cohen2017almost}, we need to show that under the condition \eq{2\xx^\top\AA\yy \leq \xx^\top\UU\xx + \yy^\top\UU\yy, \qquad \forall \xx, \yy \in \Real^n,} we have \eq{\ker\pr{\UU} \sleq \ker\pr{\AA} \cap \ker\pr{\AA^\top}.} By the assumption, we have for any $\vv\in \ker\pr{\UU} \dele \dr{\mathbf{0} }$, \eq{ 2\xx^\top\AA\vv \leq \xx^\top\UU\xx,\quad 2\vv^\top\AA\yy \leq \yy^\top\UU\yy,\qquad \forall \xx, \yy\in \Real^n. } By choosing $\xx = c\AA\vv$, $\yy = c\AA^\top\vv$, we have \eq{ 2c \nt{\AA\vv}^2 \leq c^2 \nt{\UU}\nt{\AA}^2\nt{\vv}^2,\quad 2c \nt{\AA^\top\vv}^2 \leq c^2\nt{\UU}\nt{\AA}^2\nt{\vv}^2,\qquad \forall c > 0. } By letting $c \arr 0^+$, we have $\nt{\AA\vv} = 0$ and $\nt{\AA^\top\vv} = 0$, i.e., $\vv\in \ker\pr{\AA}\cap \ker\pr{\AA^\top}$. Thus, $\ker\pr{\UU} \sleq \ker\pr{\AA} \cap \ker\pr{\AA^\top}. $ \end{proof} \begin{fact}\label{fact:aleqsum} If $\AA \aleq a\CC$, $\BB \aleq b\CC$, then $\AA + \BB \aleq \pr{a + b}\CC$. \end{fact} \begin{fact}\label{fact:aleqpleq1} If $\AA \aleq \BB$, $\BB \preccurlyeq \CC$, then $\AA \aleq \CC$. \end{fact} \begin{fact}\label{fact:aleqU} If $\AA\aleq \BB$, then $\U{\AA} \preccurlyeq \BB$. \end{fact} \begin{fact}\label{fact:ESchurE} Schur complements of Eulerian Laplacians are Eulerian Laplacians; Schur complements of strongly connected Eulerian Laplacians are strongly connected Eulerian Laplacians. \end{fact} \begin{fact} For any Eulerian Laplacian $\LL$, $\U{\LL}$ is PSD. \end{fact} \begin{fact}\label{fact:alpRCDDPSDpPD1} If matrix $\AA\in \MS{n}{n}$ is $\alp$-RCDD $(\alp \geq 0)$, then $\U{\AA}$ is PSD. If $\alp > 0$, then $\U{\AA}$ is PD. \end{fact} \begin{fact}\label{fact:strcrank1} Any \strc\ Laplacian $\LL\in\MS{n}{n}$ has rank $n-1$. 
\end{fact} \begin{fact}\label{fact:scprvpleq} (Lemma~B.1 of~\cite{miller2013approximate}) Suppose that $\AA, \BB\in\MS{n}{n} $ are PSD, $F, C$ is a partition of $[n]$, where $\AA_{FF}, \BB_{FF}$ are nonsingular and $\AA\preccurlyeq \BB$. Then, $\sc{\AA, F} \preccurlyeq \sc{\BB, F}. $ \end{fact} \begin{fact}\label{fact:sctran} (Lemma~C.2 of~\cite{cohen2018solving}) Let $\MM$ be an $n$-by-$n$ matrix and $F, C$ a partition of $[n]$ such that $\MM_{FF}$ is nonsingular. Let $F_1, F_2$ be a partition of $F$ such that $\MM_{F_1 F_1}$ is nonsingular. Then, \eq{ \sc{\sc{\MM, F_1}, F_2} = \sc{\MM, F}. } \end{fact} \begin{fact}\label{fact:Schurxusmall} For any symmetric matrix $\UU\in \MatSize{n}{n}$ and $F, C$ a partition of $[n]$, where $\UU_{FF}$ is positive definite, for any $\xx\in \Real^{\abs{C}}, \xtil\in \Real^n, \text{with } \xtil_C = \xx$, we have \eq{ \nA{\UU}{\xtil}^2 = \nA{\sc{\UU, F}}{\xx}^2 + \nA{\UU_{FF}}{\xtil_F + \UU_{FF}\inv\UU_{FC}\xx}^2 \geq \nA{\sc{\UU, F}}{\xx}^2. } \end{fact} \begin{proof} Define \eq{ \xu = \vc{ - \UU_{FF}\inv\UU_{FC}\xx \\ \xx }. } Then, \eq{ \UU\xu = \vc{ \zerov{F} \\ \sc{\UU, F}\xx }. } Since $\pr{\xtil - \xu}_C = \mathbf{0}$ and $\pr{\UU\xu}_F = \mathbf{0}$, \eq{ \pr{\xtil - \xu}^\top\UU\xu = 0. } Thus, \eq{ &\xtil^\top\UU\xtil = \xu^\top\UU\xu + \pr{\xtil - \xu}^\top\UU\pr{\xtil - \xu} + 2\pr{\xtil - \xu}^\top\UU\xu \\ =& \xu^\top\UU\xu + \pr{\xtil - \xu}^\top\UU\pr{\xtil - \xu} = \nA{\sc{\UU, F}}{\xx}^2 + \nA{\UU_{FF}}{\xtil_F + \UU_{FF}\inv\UU_{FC}\xx}^2. } \end{proof} \begin{fact}\label{fact:kerAsmltkUAaPSD} For matrix $\AA\in \MS{n}{n}$, if $\U{\AA}$ is PSD, then we have $\ker\pr{\AA} \sleq \ker\pr{\U{\AA}}$. \end{fact} \begin{proof} Since $\U{\AA}$ is PSD, we have \eq{ \AA\xx = \mathbf{0} \ \Rightarrow \ \xx^\top\U{\AA}\xx = 0 \ \Rightarrow \ \U{\AA}^{\dagger/2}\xx = \mathbf{0} \ \Rightarrow \ \U{\AA}\xx = \mathbf{0}, } i.e., $\ker\pr{\AA} \sleq \ker\pr{\U{\AA}}. 
$ \end{proof} \begin{fact}\label{fact:LUL} (Lemma~B.9 of~\cite{cohen2017almost}) For any matrix $\LL\in \MatSize{n}{n}$ with $\U{\LL} \succcurlyeq \mathbf{0} $ and $\ker\pr{\LL} = \ker\pr{\LL^\top} = \ker\pr{\U{\LL}}$, we have \eq{ \U{\LL} \preccurlyeq \LL\U{\LL}^{\dagger}\LL^\top. } \end{fact} \begin{fact}\label{fact:LDL} (Lemma~4.5 of~\cite{cohen2018solving}) For Eulerian Laplacian $\LL $ and $\DD = \Diag{\LL}$, we have \eq{ \LL^\top\DD\inv\LL \preccurlyeq 2\U{\LL}. } \end{fact} \begin{fact}\label{fact:scz} Consider symmetric matrices $\AA\in \MS{n}{n}$, $F, C$ a partition of $[n]$ and $\BB\in \MS{\abs{F}}{\abs{F}}$, where $\AA_{FF}$ is PD. Then, $\BB \preccurlyeq \sc{\AA, F}$ is equivalent to \eq{ \mx{ \zerom{F}{F} & \zerom{F}{C} \\ \zerom{C}{F} & \BB } \preccurlyeq \AA, } or equivalently, \eq{ \xx^\top \mx{ \zerom{F}{F} & \zerom{F}{C} \\ \zerom{C}{F} & \BB } \xx \leq \xx^\top \AA \xx,\ \forall \xx\in \Real^n. } \end{fact} \begin{proof} If $\BB \preccurlyeq \sc{\AA, F} $, then by Fact~\ref{fact:Schurxusmall}, for any $\xx\in \Real^n$, \eq{ \xx^\top \mx{ \zerom{F}{F} & \zerom{F}{C} \\ \zerom{C}{F} & \BB } \xx = \xx_C^\top \BB \xx_C \leq \xx_C^\top \sc{\AA, F} \xx_C \leq \xx^\top \AA \xx. } This gives $\mx{ \zerom{F}{F} & \zerom{F}{C} \\ \zerom{C}{F} & \BB } \preccurlyeq \AA$. If $\yy^\top \mx{ \zerom{F}{F} & \zerom{F}{C} \\ \zerom{C}{F} & \BB } \yy \leq \yy^\top \AA \yy,\ \forall \yy\in \Real^n$, then for any $\xx\in \Real^{\abs{C}}$, define $ \xtil = \vc{ -\AA_{FF}\inv\AA_{FC}\xx \\ \xx }. $ Then, $\xx^\top\BB\xx = \xtil^\top\mx{ \zerom{F}{F} & \zerom{F}{C} \\ \zerom{C}{F} & \BB }\xtil \leq \xtil^\top\AA\xtil = \xx^\top\sc{\AA, F}\xx$. \end{proof} The following fact is from Lemma~4.4 of~\cite{cohen2018solving} and Fact~\ref{fact:scz}. 
\begin{fact}\label{lem:scrobust} For \strc\ Eulerian Laplacian $\LL\in \MatSize{n}{n} $ and $F, C$ a partition of $[n]$ such that $\LL_{FF}$ is $\alp$-RCDD, we have \eq{ \U{\sc{\LL, F}} \preccurlyeq \pr{3 + \frac{2}{\alp}}\sc{\U{\LL}, F} } \end{fact} \begin{fact}\label{fact:ninobnt} (Lemma~B.4 of~\cite{cohen2017almost}) For positive diagonal matrices $\DD_1\in\MS{m}{m}$, $\DD_2\in \MS{n}{n}$ and arbitrary $\MM\in \MS{m}{n}$, we have \eq{ \nt{\DD_1\MM\DD_2} \leq \max\dr{\ni{\DD_1^2\MM}, \no{\MM\DD_2^2}}. } \end{fact} \begin{fact}\label{fact:DLD} For any Eulerian Laplacian $\LL\in\MS{n}{n}$, let $\DD = \Diag{\LL}$, then, $ \nt{\DD^{-1/2}\LL\DD^{-1/2}} \leq 2. $ \end{fact} \begin{proof} Since $\LL$ is an Eulerian Laplacian, $\ni{\DD\inv\LL} \leq 2$, $\no{\LL\DD\inv} \leq 2$. Then, the result follows by Fact~\ref{fact:ninobnt}. \end{proof} \begin{fact}\label{fact:scUpleqUsc} For any matrix $\LL\in\MS{n}{n}$, denote $\UU = \U{\LL}$. If $F, C$ is a partition of $[n]$ such that $\UU_{FF}$ is PD, then \eq{ \sc{\UU, F} \preccurlyeq \U{\sc{\LL, F}}. } \end{fact} \begin{proof} By Fact~\ref{fact:kerAsmltkUAaPSD}, $\LL_{FF}$ is nonsingular. For any $\xx\in \Real^{\abs{C}}$, define $ \xtil = \vc{ - \LL_{FF}\inv \LL_{FC} \xx \\ \xx }. $ Then, by Fact~\ref{fact:Schurxusmall}, \eq{ \xx^\top\sc{\UU, F}\xx \leq \xtil^\top\UU\xtil = \xtil^\top\LL\xtil = \xx^\top \sc{\LL, F} \xx = \xx^\top \U{\sc{\LL, F}} \xx, } i.e., $ \sc{\UU, F} \preccurlyeq \U{\sc{\LL, F}}. $ \end{proof} \begin{fact}\label{fact:MpinvABC} (Lemma~C.3 of~\cite{cohen2018solving}) Consider matrices $\AA \in \MS{m}{m}$, $\BB\in \MS{m}{n}$ and $\CC\in \MS{n}{n}$. Let $\MM = \AA\BB\CC$. Let $\PP_{\MM}$, $\PP_{\MM^\top}$ denote the orthogonal projection matrix onto the column space of $\MM$, $\MM^\top$, respectively. If $\AA, \CC$ are nonsingular, then \eq{ \MM\dg = \PP_{\MM}\CC\inv\BB\dg\AA\inv\PP_{\MM^\top}. 
} \end{fact} \begin{fact}\label{fact:la2scup1} Let $\UU \in \MS{n}{n} \ (n \geq 2)$ be a \strc\ symmetric Laplacian, then, for any partition $F, C$ of $[n]$, we have \eq{ \lambda_2\pr{\UU} \leq \lambda_2\pr{\sc{\UU, F}}. } And \eq{ \min_{i\in [n]} \UU_{ii} \geq \frac{1}{2}\lambda_2\pr{\UU}. } \end{fact} \begin{proof} By Lemma~C.1 of~\cite{cohen2018solving}, $\sc{\UU, F} = \pr{\PP_S \UU\dg \PP_S }\dg$, where $\PP_{S}$ is the projection matrix onto the image space of $\sc{\UU, F}$. Then, $\lambda_2\pr{\sc{\UU, F}} = \frac{1}{\lambda_{\max}\pr{\PP_S \UU\dg \PP_S}} \geq \frac{1}{\lambda_{\max}\pr{\UU\dg}} = \lambda_2\pr{\UU}, $ where the inequality uses the fact that $\PP_S$ is a projection matrix. Let $\dr{\ee_i}_{i=1}^n$ be the standard basis of $\Real^n$, then $ \lambda_2\pr{\UU} = \inf_{\nt{\xx} = 1, \xx\perp \mathbf{1}} \xx^\top \UU \xx = \inf_{\xx \neq \frac{1}{n}\pr{\xx^\top\mathbf{1}}\mathbf{1}} \frac{\xx^\top \UU \xx}{\nt{\xx - \frac{\mathbf{1}^\top\xx}{n}\mathbf{1}}^2 } \leq \min_{i\in[n]} \frac{\ee_i^\top \UU \ee_i}{\nt{\ee_i - \frac{\mathbf{1}^\top\ee_i}{n}\mathbf{1}}^2 } \leq \min_{i\in[n]} \frac{\UU_{ii}}{1 - \frac{2}{n} + \frac{1}{n}} \leq 2\min_{i\in [n]} \UU_{ii}, $ where the last inequality is from $n \geq 2$. \end{proof} \begin{fact}\label{fact:sequenceproductelemta} For two sequences of matrices $\AA_1, \AA_2, \cdots, \AA_N \in \MS{n}{n}$ and $\BB_1, \BB_2, \cdots, \BB_N\in \MS{n}{n}$ and any $1\leq k\leq N$, we have \eq{ &\ni{\AA_1 \cdots \AA_N - \BB_1 \cdots \BB_N} \\ \leq& \pr{n\sum_{i=1}^{k}\ni{\AA_i - \BB_i} + \sum_{i=k+1}^{N}\no{\AA_i - \BB_i}}\prod_{i=1}^{k}\max\dr{1, \ni{\AA_i} + \ni{\AA_i - \BB_i}} \\ & \cdot \prod_{i=k+1}^{N}\max\dr{1, \no{\AA_i} + \no{\AA_i - \BB_i} }. } \end{fact} \begin{proof} It follows directly from the expansion \eq{ &\AA_1 \cdots \AA_N - \BB_1 \cdots \BB_N = \sum_{i=1}^{N}\prod_{j=1}^{i-1}\AA_j \pr{\AA_i - \BB_i} \prod_{j=i+1}^{N} \BB_j } and the fact that $\no{\CC} \leq n\ni{\CC}$ for any matrix $\CC \in \MS{n}{n} $. 
\end{proof} \begin{fact}\label{fact:FLFgood} (Lemma~6.3 of~\cite{cohen2018solving}, paraphrased) Consider matrix $\AA \in \MS{n}{n}$, where $\ker\pr{\AA} = \ker\pr{\AA^\top}$. Let $F_1, F_2, \cdots, F_d$ be a partition of $[n]$. Denote $E_1 = \emptyset$, $E_i = \cup_{j=1}^{i-1} F_j \ (2\leq i\leq d)$ and $C_0 = [n]$, $C_i = [n] \dele \pr{E_i \cup F_i} \ (1\leq i\leq d)$. Suppose that $\U{\sc{\AA, E_i}}_{F_i F_i}$ is PD for any $1\leq i\leq d - 1$ and $\U{\sc{\AA, E_i}}$ is PSD for any $1\leq i\leq d$. Let $\dr{\theta_i}_{i=1}^d$ be nonnegative numbers such that $\sum_{i=1}^{d}\theta_i = 1$. Define the matrix $\BB = \sum_{i=1}^{d} \theta_i\put{\sc{\AA, E_i}, C_{i-1}, C_{i-1}, n} $, then, \eq{ \tpp{\AA\dg} \BB \AA\dg \preccurlyeq \BB\dg. } \end{fact} \begin{fact}\label{fact:repprvpleq} If $\AA\preccurlyeq \BB$, then $ \rep{k, C, \AA} \preccurlyeq \rep{k, C, \BB}, $ $ \repp{k, C, \AA, N} \preccurlyeq \repp{k, C, \BB, N}. $ \end{fact} \begin{fact}\label{fact:repatimesb1} $\rep{b, C, \rep{a, C, \AA}} = \rep{a \cdot b, C, \AA}. $ \end{fact} \def\putb#1{#1\Fn + \Cn} \begin{fact}\label{fact:permuB1} Consider symmetric matrices $\AA\in \MS{n}{n}$, $\BB\in \MS{\pr{\putb{d}}}{\pr{\putb{d}}}$ and sets $F = \dr{n - \Fn +1, \cdots, n-1, n}$, $C = \dr{1, 2, \cdots, n - \Fn}$ and \eq{ F_a = \dr{b\in \mathbb{Z}:\ (a-1)\Fn+\Cn+1, \cdots, a\Fn + \Cn-1, a\Fn + \Cn },\ \forall a\geq 0. } If $\BB_{-C, -C}$ is positive definite and $\AA\preccurlyeq \sc{\BB, - [n]}$, then, for any $k\geq 0$, there is a permutation matrix $\PP\in \MS{\pr{\putb{kd}}}{\pr{\putb{kd}}}$ such that \eq{ \repp{k, C, \AA, \putb{kd}} \preccurlyeq \PP\rep{k, C, \BB}\PP^\top. 
} In addition, the permutation matrix $\PP$ also satisfies \begin{itemize} \item $\pr{\PP\rep{k, C, \BB}\PP^\top}_{-C, -C} $ is $\alp$-RCDD if $\BB_{-C, -C}$ is $\alp$-RCDD; \item $\Diag{\rep{k, C, \BB}} = \PP\Diag{\rep{k, C, \BB}}\PP^\top $ if $\Diag{\BB}_{-C, -C} = \rep{d, \emptyset, \DD}$ for some diagonal matrix $\DD\in \MS{\Fn}{\Fn }$; \item for any $\xx \in \Real^n$, the vector \eq{ \xtil = ( \underbrace{\xx_{F}^\top \ \cdots \ \xx_{F}^\top }_{\text{$kd$ repetitions of $\xx_{F}^\top$}} \ \xx_{C}^\top )^\top, } satisfies $\xtil^\top\PP\rep{k, C, \BB}\PP^\top\xtil = \xtil^\top \rep{k, C, \BB} \xtil $. \end{itemize} \end{fact} \begin{proof} Define the set \eq{ F^+ = \left[\putb{d}\right]\dele \left[n\right] = \cup_{a = 2}^{d} F_a, } and a mapping $\phi: [\putb{kd}] \arr [\putb{kd}] $: \eq{ \phi\pr{i} = \left\{ \begin{split} & i,\ i\in C \\ & i - v\pr{d-1}\Fn,\ i\in F_a,\ a\in \dr{1 + vd: 0\leq v\leq k-1} \\ & i + \pr{k - v - 1}\Fn,\ i\in F_a,\ a\in \dr{u + vd: 0\leq v \leq k-1, 2\leq u\leq d} \end{split} \right. } Let $\FF_{i, j}$ denote a matrix whose $(i,j)$-th entry is $1$, and $0$ everywhere else. The permutation matrix $\PP$ is then given by \eq{ \PP = \sum_{i\in \putb{kd} } \FF_{\phi\pr{i}, i}. } In other words, $\PP$ is a permutation matrix such that \eq{ \PP\rep{k, C, \BB}\PP^\top = \mx{ \BB_{CC} & \BB_{CF} & \cdots & \BB_{CF} & \BB_{C F^+} & \cdots & \BB_{C F^+} \\ \BB_{FC} & \BB_{FF} & & & & & \\ \vdots & & \ddots & & & & \\ \BB_{FC} & & & \BB_{FF} & & & \\ \BB_{F^+ C} & & & & \BB_{F^+ F^+} & & \\ \vdots & & & & & \ddots & \\ \BB_{F^+ C} & & & & & & \BB_{F^+ F^+ } }. } Define the set $F^{++} = \cup_{a=k+1}^{kd} F_a $, then \eq{ \sc{\PP\rep{k, C, \BB}\PP^\top, F^{++}} = \rep{k, C, \sc{\BB, F^+}}. } By the condition $\AA\preccurlyeq \sc{\BB, F^+}$ and Fact~\ref{fact:repprvpleq}, we have \eq{ \rep{k, C, \AA} \preccurlyeq \rep{k, C, \sc{\BB, F^+}} = \sc{\PP\rep{k, C, \BB}\PP^\top, F^{++}}. 
} Then, by Fact~\ref{fact:scz}, \eq{ \repp{k, C, \AA, \putb{kd}} \preccurlyeq \PP\rep{k, C, \BB}\PP^\top. } The remaining properties of $\PP$ follow by the fact that $\PP$ is a permutation among the $\dr{F_a}_{a = 1}^{kd}$ blocks that keeps the rows and columns indexed by $C$ invariant. \end{proof} \section{Exact Partial Block Elimination }\label{sec:exactPBE} Throughout this section, let $\PP$ be a permutation matrix such that $\PP\LL\PP^\top = \mx{\LL_{FF} & \LL_{FC} \\ \LL_{CF} & \LL_{CC} } $. We initialize $\Lt{0} = \LL $, $\DD = \Diag{\Lt{0}}$ and $\At{0} = \DD - \Lt{0}$, and then update for $k = 1, 2, \cdots, $ \begin{subequations}\label{eq:randomwalk1} \begin{align} \Lt{k} &= \PP^\top\mx{ \DD_{FF} & - \At{k-1}_{FC} \\ - \At{k-1}_{CF} & 2\Lt{k-1}_{CC} }\PP - \At{k-1}_{:, F}\DD_{FF}\inv\At{k-1}_{F,:} \notag \\ &= \PP^\top\mx{ \DD_{FF} - \At{k-1}_{FF} \DD_{FF}\inv\At{k-1}_{FF } & - \pr{\II + \At{k-1}_{FF}\DD_{FF}\inv}\At{k-1}_{FC} \\ - \At{k-1}_{CF}\pr{\II + \DD_{FF}\inv\At{k-1}_{FF}} & 2\Lt{k-1}_{CC} - \At{k-1}_{CF}\DD_{FF}\inv\At{k-1}_{FC} }\PP \label{line:densebiclique} \\ \At{k} &= \PP^\top\mx{\DD_{FF} & \\ & \Diag{\Lt{k}}}\PP - \Lt{k}. \label{line:denseA} \end{align} \end{subequations} The following lemmas characterize the performance of the ideal but inefficient scheme~\eqref{eq:randomwalk1}. \begin{lemma}\label{lem:scLtkequal1} Let $\Lt{k}$ be the output of running $k$ steps of $\textsc{IdealSchur}(\LL, F)$. At any step we have: \eql{\label{scLtk}}{ \sc{\Lt{k}, F} = 2^k\sc{\LL, F}. 
} \end{lemma} \begin{proof} By~\eqref{eq:D-A}, \eq{ &2\sc{\Lt{k-1}, F} = 2\Lt{k-1}_{CC} - 2\At{k-1}_{CF} \pr{\Lt{k-1}_{FF}}\inv \At{k-1}_{FC} \\ =& 2\Lt{k-1}_{CC} - 2\At{k-1}_{CF} \pr{\DD_{FF} - \At{k-1}_{FF}}\inv \At{k-1}_{FC} \\ =& 2\Lt{k-1}_{CC} - \At{k-1}_{CF} \DD_{FF}\inv \At{k-1}_{FC} \\ & - \At{k-1}_{CF}\pr{\II + \DD_{FF}\inv\At{k-1}_{FF}} \pr{\DD_{FF} - \At{k-1}_{FF}\DD_{FF}\inv\At{k-1}_{FF}}\inv \pr{\II + \At{k-1}_{FF}\DD_{FF}\inv}\At{k-1}_{FC} \\ =& \Lt{k}_{CC} - \At{k}_{CF} \pr{\Lt{k}_{FF}}\inv \At{k}_{FC} = \sc{\Lt{k}, F}. } Then, the relation~\eqref{scLtk} follows by induction. \end{proof} \begin{lemma}\label{lem:LtkE} For any $k\geq 0$, $\Lt{k}$ is an Eulerian Laplacian. \end{lemma} \begin{proof} We prove it by induction. Firstly, $\Lt{0} = \LL$ is an Eulerian Laplacian. Assuming that $\Lt{k-1}$ is an Eulerian Laplacian, we prove that $\Lt{k}$ is also an Eulerian Laplacian. By the induction hypothesis, $\Lt{k-1 }\mathbf{1} = \mathbf{0}$, i.e., \eq{ &\At{k-1}_{F, :}\mathbf{1} = \At{k-1}_{FF}\mathbf{1} + \At{k-1}_{FC}\mathbf{1} = \DD_{FF}\mathbf{1} \\ &\At{k-1}_{CF}\mathbf{1} = \Lt{k-1}_{CC}\mathbf{1}. } Thus, \eq{ \PP\Lt{k}\PP^\top\mathbf{1} =& \mx{ \DD_{FF} & - \At{k-1}_{FC} \\ - \At{k-1}_{CF} & 2\Lt{k-1}_{CC} }\mathbf{1} - \At{k-1}_{:,F}\DD_{FF}\inv\At{k-1}_{F,:}\mathbf{1} \\ =& \vc{ \DD_{FF}\mathbf{1} - \At{k-1}_{FC}\mathbf{1} \\ - \At{k-1}_{CF}\mathbf{1} + 2\Lt{k-1}_{CC}\mathbf{1} } - \vc{ \At{k-1}_{FF}\mathbf{1} \\ \At{k-1}_{CF}\mathbf{1} } \\ =& \mathbf{0}. } Since $\PP$ is a permutation matrix, we have $\Lt{k}\mathbf{1} = \mathbf{0}$. Similarly, $\mathbf{1}^\top\Lt{k} = \mathbf{0}^\top$. Therefore, $\Lt{k}$ is an Eulerian Laplacian. The result then follows by induction. \end{proof} \begin{lemma}\label{lem:LtkCClimequal1} The result of the update formula~\eqref{eq:randomwalk1} satisfies $ \lim_{k \arr +\infty}\frac{1}{2^k}\Lt{k}_{CC} = \sc{\LL, F}. 
$ \end{lemma} \begin{proof} Since $\LL_{FF} $ is $\alp$-RCDD, $\ni{\DD_{FF}\inv\At{0}_{FF}} \leq \frac{1}{1 + \alp} $. By the equality $\Lt{k}_{FF} = \DD_{FF} - \At{k-1}_{FF}\DD_{FF}\inv\At{k-1}_{FF}$ from~\eqref{line:densebiclique}, we have $ \DD_{FF}\inv\At{k}_{FF} = \DD_{FF}\inv\At{k-1}_{FF}\DD_{FF}\inv\At{k-1}_{FF}. $ Thus, by induction, \eql{\label{eq:DinvAsuperl}}{ \ni{\DD_{FF}\inv\At{k}_{FF}} \leq \pr{\frac{1}{1 + \alp}}^{2^k}. } By Lemma~\ref{lem:LtkE}, $\ni{\At{k}_{CF}} \leq n\no{\At{k}_{CF}} \leq n\nt{\DD_{FF}}$, $\ni{\DD_{FF}\inv\At{k}_{FC}} \leq 1$. Combining the above arguments, \eql{\label{eq:exactAkzero}}{ &\lim_{k\arr +\infty}\frac{1}{2^k}\ni{\At{k}_{CF}\pr{\Lt{k}_{FF}}\inv\At{k}_{FC}} \\ =& \lim_{k\arr +\infty}\frac{1}{2^k}\ni{\At{k}_{CF}\pr{\II - \DD_{FF}\inv\At{k}_{FF}}\inv\DD_{FF}\inv\At{k}_{FC}} \\ \leq& \limsup_{k\arr +\infty}\frac{1}{2^k}\ni{\At{k}_{CF}}\ni{\pr{\II - \DD_{FF}\inv\At{k}_{FF}}\inv}\ni{\DD_{FF}\inv\At{k}_{FC}} \\ \leq& n \nt{\DD_{FF}} \cdot \limsup_{k\arr +\infty}\frac{1}{2^k} \cdot \limsup_{k\arr +\infty}\ni{\pr{\II - \DD_{FF}\inv\At{k}_{FF}}\inv} = 0, } i.e., \eq{ \lim_{k \arr +\infty} \frac{1}{2^k}\At{k}_{CF}\pr{\Lt{k}_{FF}}\inv\At{k}_{FC} = \mathbf{0}. } Then, using Lemma~\ref{lem:scLtkequal1} and the above equation, we have \eq{ &\lim_{k \arr +\infty}\frac{1}{2^k}\Lt{k}_{CC} = \lim_{k \arr +\infty}\frac{1}{2^k}\sc{\Lt{k}, F} + \lim_{k \arr +\infty}\frac{1}{2^k}\At{k}_{CF}\iv{\Lt{k}_{FF}}\At{k}_{FC} = \sc{\LL, F}. } \end{proof} \def\Lm#1{\widetilde{\mathcal{L}}^{\pr{#1}}} \section{Sparsifying Directed Laplacians}\label{sec:sparsify} First, we check that the Eulerian Laplacian sparsifier in Section~3 of~\cite{cohen2017almost} meets the requirements of Theorem~\ref{thm:SparEoracle1}. 
This procedure can be briefly summarized as: \begin{enumerate} \item decompose $\LL = \sum_{i=1}^{K} \calLt{i} $ such that each $\U{\calLt{i}} $ is an expander; \item sample the entries in the adjacency matrix of each $\calLt{i}$ and use a patch matrix to keep the row sums and the column sums invariant. \end{enumerate} This procedure was analyzed in~\cite{cohen2017almost} by (1) using matrix concentration inequalities to bound the errors in each adjacency matrix with respect to the in-degree and out-degree diagonal matrix; (2) using the property of the expander to bound the errors with respect to $\U{\calLt{i}}$ and in turn $\U{\LL}$. Next, we give a precise bound on the running time of directed Laplacian sparsification by combining the expander decomposition in~\cite{saranurak2019expander} with the degree-fixing on expanders routine from Section~3 of~\cite{cohen2017almost}. By setting $\phi = O\pr{1/\log^3 n}$ in Theorem~4.1 of~\cite{saranurak2019expander} and deleting the edges recursively, we obtain a $\pr{s, \phi, 1} $-decomposition (Definition~3.14 of~\cite{cohen2017almost}) of the original directed graph $\mathcal{G}[\LL]$, denoted by $\dr{\calLt{i}}_{i=1}^K$, where each $\U{\calLt{i}}$ is a $\phi$-expander. Here $s = n\log n$ is the sum of the sizes of the subgraphs $\calLt{i}$. The running time of this step is $O\pr{m\log^8 n}$. By Cheeger's inequality, the spectral gap $\delta$ of each $\phi$-expander is $\Omega\pr{\phi^2}$. Then, by Lemma~3.13 of~\cite{cohen2017almost}, we obtain $\Lm{i}$ by sampling edges in $\calLt{i}$, such that $\sum_{i=1}^{K}\nnz{\Lm{i}} \leq s\log n / \delta^2 = O\pr{n\log^{14} n}. $ Each $\Lm{i}$ is an $O(1)$-asymmetric approximation of $\calLt{i}$. By summing up $\widetilde{\mathcal{L}} = \sum_{i=1}^{K}\Lm{i}$, we obtain an $O(1)$-asymmetric approximation of $\LL$, with $\nnz{\widetilde{\mathcal{L}}} = O(n\log^{14} n)$. 
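Step~2 of the outline above, i.e. patching a sampled adjacency matrix so that every row sum and column sum is preserved exactly, can be illustrated with a toy Python sketch. The sketch only drops entries (so all degree deficits are nonnegative) and then adds a rank-one nonnegative patch; the actual routine of~\cite{cohen2017almost} instead reweights sampled entries and relies on the expander structure for its error analysis, none of which is reproduced here.

```python
import random

def sparsify_with_patch(A, keep_prob, rng):
    """Drop each entry of a nonnegative matrix A with probability
    1 - keep_prob, then add a rank-one nonnegative patch that restores
    every row sum and column sum exactly (toy illustration only)."""
    n = len(A)
    S = [[A[i][j] if rng.random() < keep_prob else 0.0 for j in range(n)]
         for i in range(n)]
    # Deficits are nonnegative because we only removed mass.
    row_def = [sum(A[i]) - sum(S[i]) for i in range(n)]
    col_def = [sum(A[i][j] for i in range(n)) - sum(S[i][j] for i in range(n))
               for j in range(n)]
    total = sum(row_def)  # equals sum(col_def): both are the removed mass
    if total == 0.0:
        return S
    # The patch P[i][j] = row_def[i] * col_def[j] / total has i-th row sum
    # row_def[i] and j-th column sum col_def[j], so S + P fixes all degrees.
    return [[S[i][j] + row_def[i] * col_def[j] / total for j in range(n)]
            for i in range(n)]
```

The rank-one patch is the simplest object with prescribed row and column sums; the degree-fixing patch used in the actual analysis is chosen more carefully so that its contribution is small with respect to $\U{\calLt{i}}$.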
Putting these costs into Theorem~\ref{thm:TSENSEsolver1}, specifically setting $\TSE\pr{m, n, 1} = O\pr{m\log^8 n}$, $\NSE\pr{n, 1} = O\pr{n\log^{14} n}$, gives that the overall (construction + solve) running time of our algorithm is $O\pr{m\log^8 n + n\log^{15}n\log\frac{n}{\eps}} + \Otil{n\log^{23} n}$. With a slower preprocessing time, we can use short cycle based Eulerian sparsifiers to get a smaller Schur complement chain, and solve for each query vector faster. Given an Eulerian Laplacian $\LL$, the current best bound for the nonzero entries of its sparsifier is $\Otil{n \log^4{n} \epsilon^{-2}}$ edges (Lemma~\ref{lem:shortcycleexistsparseEL}). \begin{lemma}\label{lem:shortcycleexistsparseEL} (Existence of Eulerian Laplacian sparsifier) \cite{CGPSSW18} For any Eulerian Laplacian $\LL\in \MS{n}{n}$ with $\nnz{\LL} = m$ and error parameter $\eps \in (0, 1)$, there is an Eulerian Laplacian $\Lap$ such that $\nnz{\Lap} \leq O(n \log^{4}n \eps^{-2})$ and $\Lap - \LL \aleq \eps \cdot \U{\LL}. $ Such an $\Lap$ can be constructed in time $O(mn\log^{O(1)} n)$. \end{lemma} Invoking the Eulerian sparsifier routine of Lemma~\ref{lem:shortcycleexistsparseEL} above within each iteration of Algorithm~\ref{alg:SCC}, with error set to $\eps = O(\frac{1}{i^2})$ at the $i$-th iteration, we obtain, after $O(n^2 \log^{O(1)} n )$ preprocessing time, an $\{\alp, \frac{1}{16(1+\alp)}, \{\frac{O(1)}{i^2}\}_{i=1}^{d}\}$-Schur complement chain \[ \dr{\dr{\Stt{i}}_{i=1}^d, \dr{F_i}_{i=1}^d } \] with $\alp = O(1)$, $d = O(\log n)$ and $\sum_{i=1}^{d}\nnz{\Stt{i}} = O(n\log^4 n)$. Then, Corollary~\ref{coro:shortcyclep1} follows similarly to the proof of Theorem~\ref{thm:TSENSEsolver1}. \section{Supporting Lemmas and Omitted Proofs }\label{sec:someprfs} \begin{lemma}\label{lem:SE} There is a routine $\SE$ which takes in an Eulerian Laplacian $\LL\in\MS{n}{n}$, error parameter $\eps\in (0, 1)$ and a subset $F\sleq [n]$, where $\nnz{\LL} = m$. 
Then, $\SE$ runs in $O\pr{\TSE\pr{m, n, \dlt}}$ time to return an Eulerian Laplacian $\Ltil\in \MS{n}{n}$ such that $\Diag{\Ltil} = \Diag{\LL}$, $\Ltil_{FF}\mathbf{1} = \LL_{FF}\mathbf{1}$, $\Ltil_{FF}^\top\mathbf{1} = \LL_{FF}^\top\mathbf{1}$, $\nnz{\Ltil} = O\pr{\NSE\pr{n, \dlt} }$ and $ \Ltil - \LL \aleq \eps \cdot \U{\LL} $ with high probability. Analogously, under the same conditions as in Lemma~\ref{lem:SparP}, there is a routine $\SP$ which takes in vectors $\xx, \yy$, error parameter $\eps$, probability $p$ and a subset $F\sleq [n]$, and runs in $O\pr{m\eps^{-2}\log\frac{m}{p}}$ time to return with high probability a nonnegative matrix $\BB$ which possesses all the properties of $\AA$ in Lemma~\ref{lem:SparP}. In addition, $\BB_{FF}\mathbf{1} = \pr{\yy_F^\top\mathbf{1}}\xx_F$ and $\mathbf{1}^\top\BB_{FF} = \pr{\xx_F^\top\mathbf{1}}\yy_{F}^\top$. \end{lemma} \begin{proof} By applying $\SparE$ to the directed Laplacians \eq{ &\mx{\LL_{FF} - \Diag{\mathbf{1}^\top\LL_{FF}} & \zerom{F}{C} \\ \zerom{C}{F} & \zerom{C}{C}}, \mx{\zerom{F}{F} & \zerom{F}{C} \\ \zerom{C}{F} & \LL_{CC} - \Diag{\mathbf{1}^\top\LL_{CC}}}, \\ &\mx{-\Diag{\mathbf{1}^\top\LL_{CF}} & \zerom{F}{C} \\ \LL_{CF} & \zerom{C}{C} }, \mx{\zerom{F}{F} & \LL_{FC} \\ \zerom{C}{F} & -\Diag{\mathbf{1}^\top\LL_{FC}}} } respectively and summing up the resulting sparsified matrices, we obtain the $\SE$ in Lemma~\ref{lem:SE}. Analogously, by setting $\pr{\xx, \yy}$ as \eq{ \pr{\vc{\zerov{F} \\ \xx_{C}}, \vc{ \yy_{F} \\ \zerov{C}}}, \pr{\vc{\xx_{F} \\ \zerov{C} }, \vc{\yy_{F} \\ \zerov{C}}}, \pr{\vc{\xx_{F} \\ \zerov{C} }, \vc{\zerov{F} \\ \yy_{C} }}, \pr{\vc{\zerov{F} \\ \xx_{C}}, \vc{\zerov{C} \\ \yy_{C} }} } in $\SparP$ respectively, we obtain the $\SP$ in Lemma~\ref{lem:SE}. The performance of $\SE$, $\SP$ follows directly by the fact that $\SparE$ preserves the diagonal entries and the row sums, and $\SparP$ preserves the row and column sums. 
\end{proof} \begin{remark} By carefully designing a sampling rule, $\SE$ and $\SP$ can be replaced by a single sparsification procedure. Here, we use $\SparE$ and $\SparP$ to construct $\SE$ and $\SP$ merely for simplicity. When invoking $\SparP$, $\SP$ in this paper, we set $p = O\pr{\frac{1}{\poly{n}}}$ and omit the probability parameter $p$ for notational simplicity. For $\xx = \mathbf{0}$ or $\yy = \mathbf{0}$, both $\SparP$ and $\SP$ return $\mathbf{0}\in \MatSize{n}{n}$ naturally. \end{remark} We have the following properties of Algorithm~\ref{alg:SparSchur}. \begin{lemma}\label{lem:LapetcpropSparSchurCpmt1} With high probability, the following statements hold: \begin{enumerate}[(i)] \item $\dr{\Ltt{k}}_{k=0}^K$ are Eulerian Laplacians; \label{item:LttkE} \item $\dr{\Att{k}}_{k=0}^{K} $ are nonnegative matrices satisfying \eql{\label{eq:Attsuperl}}{ \ni{\DD_{FF}\inv\Att{k}_{FF}} \leq \pr{\frac{1}{1 + \alp}}^{2^k},\ \forall 0\leq k\leq K; } \label{item:Attsuperl} \item $\SS, \Sap$ are Eulerian Laplacians; \label{item:SapE} \item The matrix $\Rap$ satisfies $\Rap\mathbf{1} = \Rap^\top\mathbf{1} = \mathbf{0} $ and \eql{\label{eq:R}}{ \nt{\Rap} \leq \frac{n^2\nt{\DD_{FF}}}{2^{K-1} \alp}\pr{\frac{1}{1 + \alp}}^{2^K}. } \end{enumerate} \end{lemma} \begin{proof} We prove~(\ref{item:LttkE}) by induction. Firstly, $\Ltt{0} = \LL$ is an Eulerian Laplacian. Suppose that $\Ltt{k-1}$ is an Eulerian Laplacian; we prove the Eulerianness for $k$. Since $\SP$ preserves the row sum, \eq{\Ytt{k}\mathbf{1} = \sum_{i\in F }\frac{1}{\DD_{ii}}\Att{k-1}_{:,i}\Att{k-1}_{i,:}\mathbf{1} = \Att{k-1}_{:,F}\DD_{FF}\inv\Att{k-1}_{F,:}\mathbf{1}. } In this proof, we denote \eql{\label{eq:Wtk}}{ \Wt{k} \stackrel{\mathrm{def}}{=} \PP^\top\mx{ \DD_{FF} & - \Att{k-1}_{FC} \\ - \Att{k-1}_{CF} & 2\Ltt{k-1}_{CC} }\PP - \Att{k-1}_{:, F}\DD_{FF}\inv\Att{k-1}_{F,:}, } where $\PP$ is the permutation matrix defined in Section~\ref{sec:exactPBE}. 
By arguments similar to Lemma~\ref{lem:LtkE}, $\Wt{k}$ is an Eulerian Laplacian. Then, we have \eq{ \Ltt{k, 0}\mathbf{1} = \Wt{k}\mathbf{1} + \pr{\Att{k-1}_{:,F}\DD_{FF}\inv\Att{k-1}_{F, :} - \Ytt{k}}\mathbf{1} = \mathbf{0}. } By combining with the fact that $\Ytt{k}$ is a nonnegative matrix (from Lemma~\ref{lem:SparP}), $\Ltt{k, 0}$ is an Eulerian Laplacian. Then, (\ref{item:LttkE}) follows by Lemma~\ref{lem:SE} and induction. The nonnegativity of $\Att{k}$ follows directly by Lemma~\ref{lem:SE}. Since $\SP$ preserves the row sum and $\SE$ preserves the diagonal and row sum on the submatrix $\Ltt{k,0}_{FF}$, we have \eq{ &\ni{\DD_{FF}\inv\Att{k}_{FF}} = \DD_{FF}\inv\Att{k}_{FF}\mathbf{1} = \DD_{FF}\inv\sum_{i\in F }\frac{1}{\DD_{ii}}\Ytt{k,i}_{FF}\mathbf{1} = \DD_{FF}\inv\sum_{i\in F } \frac{1}{\DD_{ii}} \Att{k-1}_{F,i} \Att{k-1}_{i,F} \mathbf{1} \\ =& \DD_{FF}\inv\Att{k-1}_{FF}\DD_{FF}\inv \Att{k-1}_{FF}\mathbf{1} = \ni{\DD_{FF}\inv\Att{k-1}_{FF}\DD_{FF}\inv\Att{k-1}_{FF}} \leq \ni{\DD_{FF}\inv\Att{k-1}_{FF}}^2. } Then,~(\ref{item:Attsuperl}) can be shown by induction. For~(\ref{item:SapE}), it can be shown directly by the nonnegativity of $\Xap$ and~(\ref{item:LttkE}) that all off-diagonal entries of $\Stt{0}$ are non-positive. By Fact~\ref{fact:ESchurE}, $\sc{\Ltt{K}, F}$ is an Eulerian Laplacian. As $\SparP$ preserves the row sum, \eql{\label{eq:Sttone}}{ &2^K \Stt{0}\mathbf{1} = \sc{\Ltt{K}, F}\mathbf{1} + \Att{K}_{CF}\pr{\DD_{FF} - \Att{K}_{FF}}\inv\Att{K}_{FC}\mathbf{1} - \Xap\mathbf{1} \\ =& \Att{K}_{CF}\pr{\DD_{FF} - \Att{K}_{FF}}\inv\Att{K}_{FC}\mathbf{1} - \sum_{i\in F }\frac{1}{\DD_{ii}}\Att{K}_{C,i}\Att{K}_{i,C} \mathbf{1} \\ =& \Att{K}_{CF}\DD_{FF}\inv\Att{K}_{FF}\pr{\pr{\DD_{FF} - \Att{K}_{FF}}\inv - \DD_{FF}\inv}\Att{K}_{FC}\mathbf{1} \\ =& \Att{K}_{CF}\DD_{FF}\inv\Att{K}_{FF}\sum_{i=0}^{+\infty}\pr{\DD_{FF}\inv\Att{K}_{FF}}^i\DD_{FF}\inv\Att{K}_{FC}\mathbf{1}. } Then, $\Stt{0}\mathbf{1}$ is a nonnegative vector. 
Analogously, $\mathbf{1}^\top\Stt{0}$ is also nonnegative. So, $\Stt{0}$ is RCDD. Since $\Stt{0}\mathbf{1}$, $\mathbf{1}^\top\Stt{0}$ are nonnegative, from the way we compute the patching matrix $\RR$, off-diagonal entries of $\RR$ are non-positive. Thus, all off-diagonal entries of $\Sap$ are non-positive. It follows by the definition of $\RR_{1,1}$ and direct calculations that $\Sap\mathbf{1} = \Sap^\top\mathbf{1} = \mathbf{0}$. Then, we have shown $\Sap$ is an Eulerian Laplacian. Thus, $\SS$ is also an Eulerian Laplacian by the definition of the oracle $\SparE$. By~\eqref{eq:Sttone} and~\eqref{eq:defRap}, $\Stt{0}\mathbf{1} = \pr{\Rap - \RR}\mathbf{1}$. Then, as we have just shown $\Sap$ is an Eulerian Laplacian, $\Rap\mathbf{1} = \pr{\Stt{0} + \RR}\mathbf{1} = \Sap\mathbf{1} = \mathbf{0}$. Analogously, $\Rap^\top\mathbf{1} = \mathbf{0}$. As we have shown $\RR$ is non-positive, we have $\ni{\RR} \leq \mathbf{1}^\top\RR\mathbf{1} = \mathbf{1}^\top\Stt{0}\mathbf{1}$. As $ \Rap - \RR = \Att{K}_{CF}\DD_{FF}\inv\Att{K}_{FF}\pr{\pr{\DD_{FF} - \Att{K}_{FF}}\inv - \DD_{FF}\inv}\Att{K}_{FC} $ is nonnegative, we have $\ni{\Rap - \RR} \leq \mathbf{1}^\top\pr{\Rap - \RR}\mathbf{1} \comeq{\eqref{eq:Sttone}} \mathbf{1}^\top\Stt{0}\mathbf{1} $. Thus, $\ni{\Rap} \leq \ni{\RR} + \ni{\Rap - \RR} \leq 2 \cdot \mathbf{1}^\top\Stt{0}\mathbf{1}. $ As $\SP$ preserves the row sum, by Lemma~\ref{lem:LtkE}, $ \ni{\Att{K}_{CF}} \leq n\no{\Att{K}_{CF}} \leq n\nt{\DD_{FF}}. $ Since $\Ltt{K}$ is an Eulerian Laplacian, $\ni{\DD_{FF}\inv\Att{K}_{FC}} \leq 1$. Then, by~\eqref{eq:Sttone}, \eq{ \mathbf{1}^\top\Stt{0}\mathbf{1} \leq \frac{n\ni{\Stt{0}\mathbf{1}} }{2^K} \leq \frac{n^2\nt{\DD_{FF}}}{2^K }\pr{\frac{1}{1 + \alp}}^{2^K}\sum_{i=0}^{+\infty}\pr{\frac{1}{1 + \alp}}^{2^i} \leq \frac{n^2\nt{\DD_{FF}}}{\alp}\pr{\frac{1}{1 + \alp}}^{2^K}. } So, $\ni{\Rap} \leq \frac{n^2\nt{\DD_{FF}}}{2^{K-1}\alp}\pr{\frac{1}{1 + \alp}}^{2^K}. 
$ Analogously, $\no{\Rap} \leq \frac{n^2\nt{\DD_{FF}}}{2^{K-1} \alp}\pr{\frac{1}{1 + \alp}}^{2^K}. $ Then,~\eqref{eq:R} follows by Fact~\ref{fact:ninobnt}. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:EYEXEtt}] For any nonnegative vectors $\pmb{\mathit{a}}, \bb \in \Real^n$, we define $\Uvc{\pmb{\mathit{a}}, \bb}$ as the undirectification of a biclique as follows: \eq{ \Uvc{\pmb{\mathit{a}}, \bb} = \UG{\pr{\mathbf{1}^\top\pmb{\mathit{a}}}\Diag{\bb} - \pmb{\mathit{a}}\bb^\top } = \frac{1}{2}\pr{\pr{\bb^\top\mathbf{1}}\Diag{\pmb{\mathit{a}}} + \pr{\pmb{\mathit{a}}^\top\mathbf{1}}\Diag{\bb}} - \frac{1}{2}\pr{\pmb{\mathit{a}}\bb^\top + \bb \pmb{\mathit{a}}^\top}. } Then, by Lemma~\ref{lem:SparP}, Lemma~\ref{lem:SE} and Fact~\ref{lem:ne}, we have \eq{ 2\ex^\top \EY{k,i} \ey \leq \eps\pr{\ex^\top\Uvc{\Att{k-1}_{:,i}, \tpp{\Att{k-1}_{i,:}}}\ex + \ey^\top\Uvc{\Att{k-1}_{:,i}, \tpp{\Att{k-1}_{i,:}}}\ey}. } Then, summing over $i\in F $ yields \eql{\label{eq:exEYeyUt}}{ 2 \ex^\top \EY{k} \ey \leq \eps \pr{\ex^\top \Ut{k} \ex + \ey^\top \Ut{k} \ey}, } where $\Ut{k} = \sum_{i \in F }\frac{1}{\DD_{ii}}\Uvc{\Att{k-1}_{:,i}, \tpp{\Att{k-1}_{i,:}}}$. Since $\Ut{k}$ is a weighted sum of symmetric Laplacians, $\Ut{k}$ is itself a symmetric Laplacian. In this proof, we again define $\Wt{k}$ as in~\eqref{eq:Wtk}; we have already shown that $\Wt{k}$ is an Eulerian Laplacian. By setting $\LL = \Ltt{k-1}$ in Lemma~\ref{lem:Mti} and Lemma~\ref{lem:ULtm}, we have \eql{\label{cor:Wtk}}{\U{\Wt{k}} \preccurlyeq 2\pr{3 + \frac{2}{\alp}} \U{\Ltt{k-1}}. } By the definition of $\Wt{k}$, \eq{ \U{\Wt{k}} = \Zt{k} - \frac{1}{2}\pr{\Att{k-1}_{:, F}\DD_{FF}\inv\Att{k-1}_{F, :} + \tpp{\Att{k-1}_{F,:}}\DD_{FF}\inv\tpp{\Att{k-1}_{:, F}}}, } where $\Zt{k}$ is a matrix whose off-diagonal entries are all non-positive.
By the definition of $\Ut{k}$, we have \eq{ \Ut{k} = \Dt{k} - \frac{1}{2}\pr{\Att{k-1}_{:, F}\DD_{FF}\inv\Att{k-1}_{F, :} + \tpp{\Att{k-1}_{F,:}}\DD_{FF}\inv\tpp{\Att{k-1}_{:, F}}}, } where $\Dt{k}$ is a diagonal matrix. Thus, the off-diagonal entries of $\Gt{k} \stackrel{\mathrm{def}}{=} \U{\Wt{k}} - \Ut{k} = \Zt{k} - \Dt{k}$ are all non-positive. Since $\U{\Wt{k}}$ and $\Ut{k}$ are both symmetric Laplacians, we have $\Gt{k}\mathbf{1} = \tpp{\Gt{k}}\mathbf{1} = \mathbf{0}$. So, $\Gt{k}$ is a symmetric Laplacian. Then, $\Gt{k} \succcurlyeq \mathbf{0}$. Thus, we have \eq{ \U{\Wt{k}} = \Ut{k} + \Gt{k} \succcurlyeq \Ut{k}. } By combining with~\eqref{eq:exEYeyUt}, we have \eql{\label{eq:EYUWt}}{ 2\ex^\top \EY{k} \ey \leq \eps \pr{\ex^\top \U{\Wt{k}} \ex + \ey^\top \U{\Wt{k}} \ey}. } By Fact~\ref{lem:ne} and Fact~\ref{fact:aleqU}, $\U{\EY{k}} \preccurlyeq \eps \U{\Wt{k}}$. It follows by the definitions of $\Wt{k}$ and $\EY{k}$ that $ \Ltt{k, 0} = \Wt{k} + \EY{k}. $ Thus, \eq{ \U{\Ltt{k, 0}} = \U{\Wt{k}} + \U{\EY{k}} \preccurlyeq \pr{1 + \eps}\U{\Wt{k}}. } By Lemma~\ref{lem:SE}, \eql{\label{eq:Ettk011112}}{ 2\ex^\top \Ett{k, 0} \ey \leq \eps \pr{\ex^\top \U{\Ltt{k, 0}} \ex + \ey^\top \U{\Ltt{k, 0}} \ey} \leq \eps\pr{1 + \eps}\pr{\ex^\top \U{\Wt{k}} \ex + \ey^\top \U{\Wt{k}} \ey}. } By~\eqref{cor:Wtk},~\eqref{eq:EYUWt},~\eqref{eq:Ettk011112}, Fact~\ref{lem:ne} and the relation $\Ett{k} = \EY{k} + \Ett{k,0} $, we have \eq{ \Ett{k} \aleq \pr{\eps + \pr{1 + \eps}\eps} \U{\Wt{k}} \preccurlyeq 2\pr{3 + \frac{2}{\alp}}\pr{2\eps + \eps^2 }\U{\Ltt{k-1}}. } By the definition of $\epsz $,~\eqref{eq:EYLtt} follows. The inequality~\eqref{eq:EXscLtt} follows analogously. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:SCC}] By Theorem~\ref{thm:SparSchur}, we have \eq{ \St{i+1} - \sc{\Stt{i}, F_i} \aleq \dlt_{i+1}' \U{\sc{\Stt{i}, F_i}} = \frac{\dlt }{3 i^2 }\U{\sc{\Stt{i}, F_i}}.
} Then, $\pr{1 - \dlt_{i+1}'}\U{\sc{\Stt{i} , F_i}} \preccurlyeq \U{\St{i+1}} \preccurlyeq \pr{1 + \dlt_{i+1}'} \U{\sc{\Stt{i}, F_i}}. $ Thus, we have \eq{ \U{\Stt{i+1}} = \frac{1}{1 - \dlt_{i+1}' }\U{\St{i+1} } \succcurlyeq \U{\sc{\Stt{i}, F_i}} } and \eq{ &\Stt{i+1} - \sc{\Stt{i}, F_i} = \St{i+1} - \sc{\Stt{i}, F_i} + \frac{\dlt_{i+1}'}{1 - \dlt_{i+1}'}\U{\St{i+1}} \\ \aleq& \pr{\dlt_{i+1}' + \frac{\dlt_{i+1}'}{1 - \dlt_{i+1}'}\cdot \pr{1 + \dlt_{i+1}'}} \U{\sc{\Stt{i}, F_i}} \preccurlyeq \frac{\dlt}{ i^2} \U{\sc{\Stt{i}, F_i}}. } Analogously, $\U{\LL} \preccurlyeq \U{\Stt{1}} $, $\Stt{1} - \LL \aleq \dlt \cdot \U{\LL}. $ By Lemma~\ref{lem:FindRCDD}, $\bet = \frac{1}{16\pr{1 + \alp}}$. Thus, the loop will terminate in $d = O\pr{\log n}$ iterations. Since $\TSE\pr{m, n, \dlt}$ and $\NSE\pr{n, \dlt}$ depend on $m, n$ nearly linearly and on $\dlt\inv$ polynomially, the result follows by combining Theorem~\ref{thm:SparSchur} and Lemma~\ref{lem:FindRCDD} with the fact $\sum_{i=1}^{+\infty} \pr{1 - \bet}^{i-1} \poly{i^2 } = O(1) $. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:BBpleqo1Bap}] By~\eqref{eq:defLap}, \eq{ &\put{\sc{\Lap, F_1}, C_1, C_1, n} \\ =& \put{\sc{\Stt{1}, F_1}, C_1, C_1, n} + \put{\Stt{2} - \sc{\Stt{1}, F_1}, C_1, C_1, n} \\ & + \sum_{i=2}^{d-1} \put{\Stt{i+1} - \sc{\Stt{i}, F_{i}}, C_{i}, C_{i}, n} \\ =& \Stt{2} + \sum_{i=2}^{d-1} \put{\Stt{i+1} - \sc{\Stt{i}, F_{i}}, C_{i}, C_{i}, n} . } Repeating this process gives us that \eq{ &\put{\sc{\Lap, \cup_{j=1}^{i-1} F_j}, C_{i-1}, C_{i-1}, n} \\ =& \put{\Stt{i}, C_{i-1}, C_{i-1}, n } + \sum_{j=i}^{d-1} \put{\Stt{j+1} - \sc{\Stt{j}, F_{j}}, C_{j}, C_{j}, n},\ \forall i\in [d]. } Thus, \eq{ \Bap = \BB + \dlt_1\pr{\U{\Stt{1}} - \U{\LL}} + \sum_{i=1}^{d-1} \pr{\sum_{j=1}^{i+1}\dlt_j} \put{\U{\Stt{i+1}} - \U{\sc{\Stt{i}, F_i}}, C_i, C_i, n}.
} By the definition of a Schur complement chain, $\U{\Stt{1}} \succcurlyeq \U{\LL} $, $\U{\Stt{i+1}} \succcurlyeq \U{\sc{\Stt{i}, F_i}} $ and $\Stt{1} - \LL \aleq \dlt_1 \cdot \U{\LL} $, $\Stt{i+1} - \sc{\Stt{i}, F_i} \aleq \dlt_{i+1}\cdot \U{\sc{\Stt{i}, F_i}}. $ Combining with the condition $\sum_{i=1}^{d}\dlt_i\leq 1$, we have \eq{ \BB \preccurlyeq \Bap \preccurlyeq \BB + \dlt_1^2 \U{\LL} + \sum_{i=1}^{d-1} \dlt_{i+1} \pr{\sum_{j=1}^{i+1} \dlt_{j}} \U{\sc{\Stt{i}, F_i}} \preccurlyeq 2\BB. } \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:Mpt}] By the definition of $\Mpt{i, N}$, we have \eq{ \iv{\Stt{i}_{F_i F_i}} - \Mpt{i, N} = \frac{1}{2}\sum_{k=N}^{+\infty}\pr{\II - \frac{1}{2}\Dap_{F_i F_i}\inv\Stt{i}_{F_i F_i}}^k\Dap_{F_i F_i}\inv = \pr{\II - \frac{1}{2}\Dap_{F_i F_i}\inv\Stt{i}_{F_i F_i}}^N \iv{\Stt{i}_{F_i F_i}}. } By the Gershgorin circle theorem, the moduli of the eigenvalues of $\II - \frac{1}{2}\Dap_{F_i F_i}\inv\Stt{i}_{F_i F_i}$ are no greater than $\frac{2 + \alp}{2\pr{1 + \alp}}$. Then, the moduli of the eigenvalues of $\pr{\II - \frac{1}{2}\Dap_{F_i F_i}\inv\Stt{i}_{F_i F_i}}^N$ are no greater than $\pr{\frac{2 + \alp}{2\pr{1 + \alp}}}^N $. Thus, $\II - \pr{\II - \frac{1}{2}\Dap_{F_i F_i}\inv\Stt{i}_{F_i F_i}}^N$ is nonsingular. It follows that \eq{\Mpt{i, N} = \pr{\II - \pr{\II - \frac{1}{2}\Dap_{F_i F_i}\inv\Stt{i}_{F_i F_i}}^N} \iv{\Stt{i}_{F_i F_i}} } is nonsingular. Then, we have \eq{ &\ni{\iv{\Utilt{i, N}} - \iv{\Utilt{i, \infty}}} \\ =& \ni{\pr{\iv{\Stt{i}_{F_i F_i}} - \Mpt{i, N}}\Stt{i}_{F_i C_i}} = \frac{1}{2}\ni{\sum_{k=N}^{+\infty}\pr{\II - \frac{1}{2}\Dap_{F_i F_i}\inv\Stt{i}_{F_i F_i}}^k\Dap_{F_i F_i}\inv\Stt{i}_{F_i C_i} } \\ \leq& \frac{1}{2}\sum_{k=N}^{+\infty}\ni{\II - \frac{1}{2}\Dap_{F_i F_i}\inv\Stt{i}_{F_i F_i}}^k\ni{\Dap_{F_i F_i}\inv\Stt{i}_{F_i C_i} } \leq \frac{1}{2}\sum_{k=N}^{+\infty} \pr{\frac{2 + \alp}{2\pr{1 + \alp}}}^k = \frac{\pr{1 + \alp}}{\alp} \pr{\frac{2 + \alp}{2\pr{1 + \alp}}}^N.
} Analogously, we have $\no{\iv{\Ltilt{i, N}} - \iv{\Ltilt{i, \infty}}} \leq \frac{\pr{1 + \alp}}{\alp}\pr{\frac{2 + \alp}{2\pr{1 + \alp}}}^N $ and \\ $ \ni{\iv{\Stt{i}_{F_i F_i}} - \Mpt{i, N}} \leq \frac{\pr{1 + \alp}}{\alp}\pr{\frac{2 + \alp}{2\pr{1 + \alp}}}^N \ni{\Dap_{F_i F_i}\inv}. $ \end{proof} \end{appendices} \section{Introduction} The design of efficient solvers for systems of linear equations in graph Laplacian matrices and their extensions has been a highly fruitful topic in algorithms. Laplacian matrices directly correspond to undirected graphs: off-diagonal entries are negations of edge weights, while the diagonal entries contain weighted degrees. Solvers for Laplacian matrices led to breakthroughs in fundamental problems in combinatorial optimization. Tools developed during such studies have in turn influenced data structures, randomized numerical linear algebra, scientific computing, and network science~\cite{S10,T10}. An important direction in this Laplacian paradigm of designing graph algorithms is extending tools developed for undirected Laplacian matrices to directed graphs. Here a perspective from random walks and Markov chains leads to directed Laplacian matrices~\cite{CKPPSV16}. Such matrices have directed edge weights in off-diagonal entries, and weighted out-degrees on diagonals. In contrast to solving linear systems in undirected Laplacians, solving linear systems in directed Laplacians is significantly less well-understood. Almost-linear time~\cite{cohen2017almost} and nearly-linear time solvers~\cite{cohen2018solving} were developed only recently, and involve many more moving pieces. In particular, the nearly-linear time algorithm from~\cite{cohen2018solving} combined block Gaussian elimination with single-variable/vertex elimination, analyzed using matrix martingales.
In contrast, for undirected Laplacians, both block elimination~\cite{kyng2016sparsified} and matrix martingales~\cite{KS16} can give nearly-linear time solver algorithms, and there also exist more combinatorial approaches~\cite{KOSZ13}. In this paper, we simplify this picture for directed Laplacian solvers by providing an analog of the sparsified Cholesky/multi-grid solver from~\cite{kyng2016sparsified}. This algorithm's running time is close to the limit of sparsification based algorithms: the running time of invoking a sparsification routine on its own output. Formally, we show: \newcommand\StrcEulerianLapSolveBlk{ Given a \strc\ Eulerian Laplacian $\LL\in\MS{n}{n}$ and an error parameter $\eps\in (0, 1)$, we can process it in time $O\pr{\TSE\pr{m, n, 1} } + \Otil{\TSE\pr{\NSE\pr{n, 1}, n, 1}\log n }$ so that, with high probability, given any vector $\bb\in \Real^n$ with $\bb\perp \mathbf{1}$, we can compute a vector $\xx\in \Real^n$ in time $O\pr{\NSE\pr{n, 1}\log n \log\pr{n/\eps}}$ such that \eq{ \nA{\U{\LL}}{\xx - \LL\dg\bb} \leq \eps \nA{\U{\LL}}{\LL\dg \bb}, } where $\U{\LL} = \pr{\LL + \LL^\top}/2$. } \begin{theorem}\label{thm:TSENSEsolver1} \StrcEulerianLapSolveBlk \end{theorem} This result improves upon the $\Omega(\log^{5}n)$-factor overhead over sparsification incurred by the previous nearly-linear time directed Laplacian solver~\cite{cohen2018solving}, and is analogous to the current best overheads for sparsification based solvers for undirected Laplacians~\cite{kyng2016sparsified}. From the existence of sparsifiers of size $O(n\log^4 n\eps^{-2})$~\cite{CGPSSW18}, we also obtain the existence of $O(n \log^{5}n\log(n/\eps))$ time solver routines that require quadratic time preprocessing to compute. As with other improved solvers for directed Laplacians, our improvements directly apply to applications of such solvers, including random walk related quantities~\cite{CKPPSV16}, as well as PageRank / Perron-Frobenius vectors~\cite{AJSS19}.
Our result complements recent developments of better sparsifiers of Eulerian Laplacians~\cite{CGPSSW18,LSY19,PY19}. By analyzing pseudocode that is entirely analogous to the undirected block-elimination algorithm from~\cite{kyng2016sparsified}, we narrow the gap between Laplacian solvers for directed and undirected graphs. Our result also emphasizes the need for better directed sparsification routines. While there is a rich literature on undirected sparsification~\cite{BSST13}, the current best directed sparsification algorithms rely on expander decompositions, and so have rather large logarithmic factor overheads. We discuss such bounds in detail in Appendix~\ref{sec:sparsify}. Finally, our analysis of this more direct algorithm requires a better understanding of the accumulation of errors in Eulerian Laplacians and their partially eliminated states, known as Schur Complements. It was observed in~\cite{cohen2018solving} that these objects are significantly less robust than their undirected analogs. Our analysis of these objects relies on augmentations of matrices: constructing larger matrices whose Schur complements correspond to the final objects we wish to approximate, and bounding errors on these larger matrices instead. This approach has roots in symbolic computation, and can be viewed as a generalization of low-rank perturbation formulas such as Sherman-Morrison-Woodbury~\cite{B21}. We believe both this algebraic technique, and the additional robustness properties of directed Schur Complements we show, may be of independent interest. \subsection{Related Works} Directed Laplacian matrices arise in problems related to directed random walks / non-reversible Markov chains, such as computations of stationary distributions, hitting times and escape probabilities. A formal treatment of applying an Eulerian solver to these problems can be found in~\cite{CKPPSV16} and~\cite{AJSS19}.
Adaptations of Eulerian Laplacian solvers have also led to improved bounded-space algorithms for estimating random walk probabilities~\cite{AKMPS20}. Our algorithm is most closely related to the previous nearly-linear time directed Laplacian solver~\cite{cohen2018solving}. That algorithm is motivated by single variable elimination and a matrix martingale based analysis. However, it invokes both components of block elimination algorithms: finding strongly diagonally dominant subsets, and invoking sparsification as black-boxes. The runtime overhead of this routine over sparsification is at least $\log^{5}n$: in~\cite{cohen2018solving}\footnote{arXiv version 1~\url{https://arxiv.org/pdf/1811.10722v1.pdf}}, Lemma 5.1 gives that each phase (for a constant factor reduction) invokes sparsification $O(\log^{2}n)$ times, and each call is run with error at most $\frac{1}{O(\log^{3}n)}$ (divided by $\log^{2}n$ in Line 2 of Algorithm 2, and also by $\log{n}$ in Line 5 of Algorithm 3). While our algorithms are directed analogs of the undirected block elimination routines from~\cite{kyng2016sparsified}, our analyses rely on many structures developed in~\cite{cohen2018solving}. Specifically, our cumulative error during elimination steps is bounded via the matrix that is the sum of undirectifications of the intermediate directed matrices. On the other hand, we believe our algorithm is more natural: our sampling no longer needs to be locally unbiased, the per-step errors do not need to be decreased by polylog factors, and the algorithm is no longer divided into inner/outer phases. This more streamlined algorithm leads to our runtime improvements. Our Schur Complement sparsification algorithm is based on the partial block elimination routine from~\cite{kyng2016sparsified}, which is in turn based on a two-term decomposition formula for (pseudo-)inverses from~\cite{peng2014efficient}.
A subsequent algorithm~\cite{CCLPT15} replaces this decomposition with direct powering via random walks, and is also applicable to sparsifying undirected Schur Complements. However, that algorithm relies on sparsifying $3$-step random walk polynomials, which, to our knowledge, is a subroutine that has not been studied in directed settings. As a result, we are unable to utilize this later development directly. The existence of $O(n \log^{4}n)$ sized sparsifiers in~\cite{CGPSSW18} relies on decomposing unit weighted graphs into short cycles and $O(n)$ extra edges. While this decomposition has a simple $O(m^2)$ time algorithm (peel off all vertices with degree $<$ 3, then return the lowest cross-edge in the BFS tree), the current fastest construction of it takes $m^{1 + o(1)}$ time~\cite{CGPSSW18,PY19,LSY19}. As a result, we need to instead invoke the more expensive, graph decomposition based, algorithms from~\cite{cohen2017almost} for sparsification. Also, we can only use the naive $O(m^2)$ construction of $O(\log{n})$-length cycle decompositions (after an initial sparsification call to make $m = O(n\log^{O(1)}n)$) because the almost-linear time algorithm in~\cite{LSY19} produces $O(\log^{2}{n})$-length cycles. \section{Preliminaries}\label{sec:notation} \subsection{Notations} \begin{flushleft} \textbf{General Notations:} The notation $\Otil{\cdot}$ suppresses $\poly{\log\log n}$ factors in this paper. We let $[n] = \dr{1, 2, \cdots, n}$. For matrix $\AA$, $\nnz{\AA}$ denotes its number of nonzero entries. For matrix $\AA\in \MS{n}{n}$ and subsets $T_1, T_2\sleq [n]$, $\AA_{T_1 T_2}\in \MS{\abs{T_1}}{\abs{T_2 }}$ is the submatrix containing the entries with row and column indices in $T_1$ and $T_2$ respectively; and $\AA_{-T_1, - T_2}$ is the submatrix of $\AA$ obtained by removing the rows indexed by $T_1$ and the columns indexed by $T_2$. For vector $\vv\in \Real^n$ and subset $C\sleq [n]$, $\vv_{C}$ is the subvector of $\vv$ containing the entries indexed by $C$.
\\~\\ \textbf{Matrix:} We use $\II_{a}$, $\mathbf{0}_{b\times c}$ to denote the identity matrix of size $a$ and the $b$-by-$c$ zero matrix, and we sometimes omit the subscripts when their sizes can be determined from the context. For any matrix $\XX\in \MatSize{a}{b}$ and sets $T_1, T_2 \sleq [n]$ with $\abs{T_1} = a$, $\abs{T_2} = b$, $\put{\XX, T_1, T_2, n} $ denotes an $n$-by-$n $ matrix whose submatrix indexed by $\pr{T_1, T_2}$ equals $\XX$ and all the other entries equal $0$. In other words, $\put{\XX, T_1, T_2, n}$ can be regarded as replacing the submatrix indexed by $\pr{T_1, T_2}$ with $\XX$ in the zero matrix $\mathbf{0}_{{n}\times {n}}$. For a symmetric matrix $\AA\in\MS{n}{n}$, we use $\lambda_i\pr{\AA}$ to denote its $i$-th smallest eigenvalue. For symmetric matrices $\AA, \BB\in \MS{n}{n}$, we use $\AA \succcurlyeq \BB$ $(\AA \succ \BB)$ to indicate that for any $\xx\in \Real^n$, $\xx^\top\AA\xx \geq \xx^\top\BB\xx$ $(\xx^\top\AA\xx > \xx^\top\BB\xx)$. We define $\preccurlyeq$, $\prec$ analogously. A square matrix $\AA\in\MS{n}{n}$ is positive semidefinite (PSD) iff $\AA$ is symmetric and $\AA\succcurlyeq \mathbf{0}$; $\AA\in \MS{n}{n}$ is positive definite (PD) iff $\AA$ is symmetric and $\AA \succ \mathbf{0}$. For a PSD matrix $\AA$, $\AA^{1/2}$ is its square root; $\AA\dg$ denotes its Moore-Penrose pseudoinverse; $\AA^{\dagger/2}$ is the square root of its Moore-Penrose pseudoinverse. \\~\\ \textbf{Vector:} $\mathbf{1}_a$, $\mathbf{0}_b$ denote the $a$-dimensional all-ones vector and $b$-dimensional all-zeros vector; when their sizes can be determined from the context, we sometimes omit the subscripts. For matrix $\AA\in \MS{n}{n}$, $\Diag{\AA}$ is an $n$-by-$n$ diagonal matrix with the same diagonal entries as $\AA$. For vector $\xx\in \Real^n$, $\Diag{\xx}$ denotes an $n$-by-$n$ diagonal matrix with its $i$-th diagonal entry equalling $\xx_i$.
For any positive semidefinite matrix $\AA$, we define the vector norm $\nA{\AA}{\xx} = \sqrt{\xx^\top\AA\xx}$. $\nm{\cdot}_{p}$ denotes the $\ell^p$ norm. \\~\\ \textbf{Matrix norm:} For matrices, $\nm{\cdot}_p$ denotes the operator norm induced by the $\ell^p$ vector norm. For instance, for $\AA\in \MS{n}{n}$, $\nt{\AA} = \sqrt{\lambda_n\pr{\AA^\top\AA}}$; $\ni{\AA} = \max_{i\in [n]}\sum_{j=1}^{n}\abs{\AA_{ij}}$. For matrix $\BB\in \MS{n}{n}$ and PSD matrix $\AA\in \MS{n}{n}$, we denote $\narr{\AA}{\BB} = \sup_{\nA{\AA}{\xx}\neq 0}\frac{\nA{\AA}{\BB\xx}}{\nA{\AA}{\xx}}$. \\~\\ \textbf{Schur complement:} For $\AA\in \MS{n}{n}$ and $F, C$ a partition of $[n]$ such that $\AA_{FF}$ is nonsingular, the Schur complement of $F$ in $\AA$ is defined as $\sc{\AA, F} = \AA_{CC} - \AA_{CF}\AA_{FF}\inv\AA_{FC}$. When we need to emphasize the support set of the entries that remain, we also denote $\sc{\AA, - C} = \sc{\AA, F}$. \subsection{(Directed) Laplacians, Symmetrizations} A matrix $\LL\in \MS{n}{n}$ is called a directed Laplacian iff $\mathbf{1}^\top\LL = \mathbf{0}$ and all off-diagonal entries of $\LL$ are non-positive, i.e., $\LL_{ii} = - \sum_{j: j\neq i}\LL_{ji} $ for all $i\in [n]$ and $\LL_{ij} \leq 0$ for all $i\neq j$. A (directed) Laplacian $\LL$ can be associated with a (directed) graph $\mathcal{G}[{\LL}]$ whose adjacency matrix is $\Ahat = \Diag{\LL} - \LL^\top$. The in-degrees/out-degrees of $\LL$ are defined as the in-degrees/out-degrees of $\mathcal{G}[\LL]$. For a directed Laplacian, the out-degrees equal the diagonal entries. If $\mathcal{G}[{\LL}]$ is strongly connected, we say the (directed) Laplacian $\LL$ is strongly connected. In addition, if $\LL\mathbf{1} = \mathbf{0}$, we call $\LL$ an Eulerian Laplacian. These Laplacians have the property that the in-degree of each vertex equals its out-degree. The undirected Laplacian is a special case where $\LL = \LL^\top$. We often refer to these as symmetric Laplacians, or just Laplacians.
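As a quick numerical check of these definitions, the following sketch (using NumPy; the helper name is ours, not from the text) builds the directed Laplacian of a small digraph from its adjacency matrix and verifies both the directed condition $\mathbf{1}^\top\LL = \mathbf{0}$ and the Eulerian condition $\LL\mathbf{1} = \mathbf{0}$.

```python
import numpy as np

def directed_laplacian(A):
    """Directed Laplacian with Diag(L) = out-degrees and adjacency
    Ahat = Diag(L) - L^T, i.e. L = Diag(out-degrees) - Ahat^T,
    where Ahat[i, j] is the weight of edge i -> j."""
    return np.diag(A.sum(axis=1)) - A.T

# Directed 3-cycle 0 -> 1 -> 2 -> 0: every vertex has in-degree equal
# to its out-degree, so the Laplacian should be Eulerian.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
L = directed_laplacian(A)

assert np.allclose(np.ones(3) @ L, 0)        # 1^T L = 0: directed Laplacian
assert np.allclose(L @ np.ones(3), 0)        # L 1 = 0: Eulerian
assert np.all(L - np.diag(np.diag(L)) <= 0)  # off-diagonals non-positive
```

Redirecting one edge (say, adding an edge $0 \rightarrow 2$) keeps $\mathbf{1}^\top\LL = \mathbf{0}$ but breaks $\LL\mathbf{1} = \mathbf{0}$, illustrating that being Eulerian is a genuinely stronger condition.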
\\~\\ \textbf{Symmetrization:} For square matrix $\AA\in \MS{n}{n}$, we define its matrix symmetrization as $\U{\AA} = \frac{\AA + \AA^\top}{2}$. For a directed Laplacian $\LL\in \MS{n}{n}$, we define its undirectification as $\UG{\LL} = \frac{1}{2}\pr{\LL + \LL^\top - \Diag{\pr{\LL + \LL^\top}\mathbf{1}}}. $ $\UG{\LL}$ is called the undirectification because it is a symmetric Laplacian whose adjacency matrix is $\U{\Ahat}$, where $\Ahat = \Diag{\LL} - \LL^\top$ is the adjacency matrix of $\mathcal{G}[{\LL}]$. For an Eulerian Laplacian $\LL$, its matrix symmetrization coincides with its undirectification, i.e., $\U{\LL} = \UG{\LL}$. Eulerian Laplacians are critically important in solvers for directed Laplacians because they are the only setting in which the undirectification is positive semidefinite. \\~\\ \textbf{Row Column Diagonal Dominant (RCDD):} A square matrix $\AA\in \MS{n}{n}$ is $\alp$-RCDD iff $\sum_{j\in [n]\dele \dr{i} }\abs{\AA_{ij}} \leq \frac{1}{1 + \alp}\AA_{ii}$ and $\sum_{j\in [n]\dele \dr{i} }\abs{\AA_{ji}} \leq \frac{1}{1 + \alp}\AA_{ii}$ for any $i\in [n]$. We also say $\AA$ is RCDD if $\AA$ is $0$-RCDD. \end{flushleft} \subsection{Sparsification}\label{sec:sparseblkb} All almost-linear time or faster solvers for directed Laplacians to date are built around sparsification: the approximation of graphs by ones with fewer edges. As it's difficult to even approximate reachability of directed graphs, \cite{cohen2017almost} introduced the key idea of measuring approximations w.r.t. a symmetric PSD matrix. Such approximations are at the core of all subsequent algorithms, including ours. \begin{definition}\label{def:aleq1} (Asymmetrically bounded) Given a matrix $\AA\in \MS{n}{n}$ and a PSD matrix $\UU\in\MS{n}{n}$, $\AA$ is \asymb\ by $\UU$ iff $\ker\pr{\UU} \sleq \ker\pr{\AA^\top}\cap \ker\pr{\AA}$ and $\ndd{\UU}{\AA} \leq 1$. We denote it by $\AA \aleq \UU$. \end{definition} By our definition, $\AA \aleq \UU$ is equivalent to $-\AA \aleq \UU$. 
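Definition~\ref{def:aleq1} can be explored numerically. The sketch below (NumPy; the function name is ours, and we assume the norm in the definition is the standard $\nt{\UU^{\dagger/2}\AA\UU^{\dagger/2}}$ from~\cite{cohen2017almost}) computes the smallest $c$ with $\AA \aleq c\,\UU$ for a directed cycle Laplacian against its symmetrization, and then spot-checks the equivalent bilinear characterization stated in Fact~\ref{lem:ne}.

```python
import numpy as np

def asym_bound_norm(A, U):
    """||U^{+/2} A U^{+/2}||_2: A is asymmetrically bounded by U
    iff this is at most 1 (plus the kernel condition)."""
    w, V = np.linalg.eigh(U)                 # U is symmetric PSD
    inv_sqrt = np.where(w > 1e-12, 1.0 / np.sqrt(np.maximum(w, 1e-12)), 0.0)
    Uh = V @ np.diag(inv_sqrt) @ V.T         # pseudo inverse square root
    return np.linalg.norm(Uh @ A @ Uh, 2)

# Directed 3-cycle Eulerian Laplacian and its symmetrization U(L).
L = np.array([[1., 0., -1.],
              [-1., 1., 0.],
              [0., -1., 1.]])
U = (L + L.T) / 2
c = asym_bound_norm(L, U)   # so L is asymmetrically bounded by c * U

# Spot-check the bilinear form: 2 x^T L y <= c (x^T U x + y^T U y).
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    assert 2 * x @ L @ y <= c * (x @ U @ x + y @ U @ y) + 1e-9
```

Here the kernel condition holds because $\mathbf{1}$ spans the kernel of both $\LL$, $\LL^\top$, and $\U{\LL}$, so the components along $\mathbf{1}$ drop out of both sides of the bilinear inequality.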
The following fact is a slight modification of Lemma~B.2 of~\cite{cohen2017almost}. \newcommand\lemne{ For any matrix $\AA\in \MS{n}{n}$ and PSD matrix $\UU\in \MS{n}{n}$, the following statements are equivalent: \begin{itemize} \item $\AA \aleq \UU. $ \item $2\xx^\top\AA\yy \leq \xx^\top\UU\xx + \yy^\top\UU\yy,\ \forall \xx, \yy \in \Real^n. $ \end{itemize} } \begin{fact}\label{lem:ne} \lemne \end{fact} \begin{definition} \label{def:UApprox} (Approximation of directed Laplacians via undirectification) Given matrix $\AA\in \MS{n}{n}$ and directed Laplacian $\BB\in \MS{n}{n}$, $\AA$ is an \Lapap{\eps} of $\BB$ iff $\AA - \BB$ is \asymb\ by $\eps \cdot \UG{\BB}$. \end{definition} In particular, for \strc\ Eulerian Laplacians $\AA$ and $\BB$, $\AA$ is an \Lapap{\eps} of $\BB$ iff $\nd{\BB}{\pr{\AA - \BB}} \leq \eps. $ We will utilize sparsifiers for Eulerian Laplacians~\cite{cohen2017almost,CGPSSW18}, as well as implicit sparsifiers for products of directed adjacency matrices, as black boxes throughout our presentation. The formal statements of these black boxes are below. \begin{theorem}\label{thm:SparEoracle1} (Directed Laplacian sparsification oracle) Given a directed Laplacian $\LL\in \MS{n}{n}$ with $\nnz{\LL} = m$ and error parameter $\dlt \in (0, 1)$, there is an oracle $\SparE$ which runs in at most $\TSE\pr{m, n, \dlt}$ time, where $\TSE\pr{m, n, \dlt} = O((m\log^{O(1)}n + n\log^{O(1)}n) \dlt^{-O(1)})$, and returns with high probability a directed Laplacian $\Lap$ satisfying: \begin{enumerate}[(i)] \item $\nnz{\Lap} \leq \NSE\pr{n, \dlt}$ where $\NSE\pr{n, \dlt} = O(n\log^{O(1)}n \dlt^{-O(1)})$; \item $\Diag{\Lap} = \Diag{\LL}$; \label{enum:Diagnece} \item $\Lap - \LL \aleq \dlt\cdot \UG{\LL} $.
\label{enum:LapLLdltUdef1} \end{enumerate} \end{theorem} \begin{remark} Conditions~\eqref{enum:Diagnece},~\eqref{enum:LapLLdltUdef1} in Theorem~\ref{thm:SparEoracle1} above are equivalent to $\Lap$ and $\LL$ having the same in- and out-degrees, and to $\nG{\LL}{\pr{\Lap - \LL}} \leq \dlt $, respectively. By having the same in-degrees and out-degrees, we mean $\Diag{\Lap} = \Diag{\LL} $ and $\Lap\mathbf{1} = \LL\mathbf{1}$. \end{remark} \begin{lemma}\label{lem:SparP} (Lemma~3.18 of~\cite{cohen2017almost}) Let $\xx, \yy\in \Real^n$ be nonnegative vectors with $\nnz{\xx} + \nnz{\yy} = m$ and let $\eps, p\in (0, 1)$. We denote $\GG = \pr{\mathbf{1}^\top\xx}\Diag{\yy} - \xx\yy^\top$. Then, there is a routine $\SparP$ which computes with probability at least $1 - p$ a nonnegative matrix $\AA$ in $O\pr{m\eps^{-2}\log \frac{m}{p}}$ time such that $\nnz{\AA } = O\pr{m\eps^{-2}\log \frac{m}{p} }$, $\AA - \xx\yy^\top \aleq \eps\cdot \UG{\GG}$. \end{lemma} Given an Eulerian Laplacian $\LL\in \MS{n}{n} $ and a partition $F, C$ of $[n]$, by invoking \\ $\SparE$ on subgraphs with edges inside $\pr{F, F}$, $(F, C)$, $(C, F)$, $(C, C)$ respectively, we can get a Laplacian sparsification procedure $\SE$ so that the sparsified Eulerian Laplacians returned by $\SE$ not only satisfy all the properties mentioned in Theorem~\ref{thm:SparEoracle1}, but also preserve the in-degrees and out-degrees of the subgraph supported by $\pr{F, F}$. Analogously, a routine $\SP$ can be constructed by applying $\SparP$ four times. For explicit definitions of $\SE$ and $\SP$, see Lemma~\ref{lem:SE}. \subsection{Sufficiency of Solving Eulerian Systems to Constant Error} Previous works on solvers for directed Laplacians and their generalizations (to RCDD and M-matrices) established that it is sufficient to solve Eulerian systems to constant relative accuracy in their undirectification.
\begin{itemize} \item The iterative refinement procedure in~\cite{cohen2017almost} shows that a constant accuracy solver can be amplified to one with $\epsilon$ relative accuracy in $O(\log(1 / \epsilon))$ iterations. \item The stationary computation procedure in~\cite{CKPPSV16} showed that arbitrary strongly connected Laplacians with mixing time $T_{mix}$ can be solved to $2$-norm error $\epsilon$ by solving $O(\log(T_{mix} / \epsilon))$ systems in Eulerian Laplacians. This was subsequently simplified and extended to M-matrices and row-column-diagonally-dominant matrices in~\cite{AJSS19} (with an extra $\log{n}$ factor in running time). A purely random walk (instead of matrix perturbation) based outer loop is also given in the thesis of Peebles~\cite{Peebles19:thesis}. \end{itemize} \input{overview} \input{schur} \input{solver} \section{Overview} \label{sec:overview} Our algorithm is based on sparse Gaussian elimination. Before we discuss the block version, it is useful to first describe how the single variable version works on Eulerian Laplacians. Recall that Eulerian Laplacians store the (weighted) degrees on the diagonal, and the negation of the out-edge weights from $j$ in column $j$. Suppose we eliminate vertex $j$. Then we need to add a rescaled version of row $j$ to each row $i$ where $\LL_{i, j}$ is non-zero. Accounting for $\LL_{j, j} = \dd_{j}$, this weight for row $i$ is given by $\frac{\ww_{j \rightarrow i}}{\dd_{j}}$, and the corresponding decrease in entry $i, k$ is then \[ \frac{\ww_{j \rightarrow i} \ww_{k \rightarrow j}}{\dd_{j}}. \] In other words, when eliminating vertex $j$, we add an edge $k \rightarrow i$ for each triple of vertices $k \rightarrow j \rightarrow i$, with the weight given above. The effect of this elimination on the vector $\bb$ can also be described through this `distribution' of row $j$ onto its out-neighbors. However, to start with, it is useful to focus on what happens to the matrix.
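The single-vertex elimination step just described can be sketched directly (NumPy; the function name is ours). Eliminating $j$ is exactly taking the Schur complement onto the remaining vertices, and the sketch checks on a small example that the eliminated matrix is again an Eulerian Laplacian.

```python
import numpy as np

def eliminate_vertex(L, j):
    """Pivot on vertex j of an Eulerian Laplacian and return the matrix
    on the remaining vertices (the Schur complement onto [n] \\ {j}).
    The rank-one update adds, for every pair k -> j -> i, an edge
    k -> i of weight w(j->i) * w(k->j) / d_j, as described above."""
    keep = [i for i in range(L.shape[0]) if i != j]
    return L[np.ix_(keep, keep)] - np.outer(L[keep, j], L[j, keep]) / L[j, j]

# Eulerian Laplacian of the directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0.
A = np.diag(np.ones(3), 1)
A[3, 0] = 1.0
L = np.diag(A.sum(axis=1)) - A.T
S = eliminate_vertex(L, 3)          # eliminating 3 creates edge 2 -> 0

# The eliminated matrix is again an Eulerian Laplacian (here: a 3-cycle).
assert np.allclose(S.sum(axis=0), 0) and np.allclose(S.sum(axis=1), 0)
assert np.all(S - np.diag(np.diag(S)) <= 1e-12)
```

In this example the $4$-cycle collapses to the $3$-cycle on $\{0, 1, 2\}$: the two-step path $2 \rightarrow 3 \rightarrow 0$ becomes a single edge $2 \rightarrow 0$ of weight $1$.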
The key observation for elimination based Laplacian solvers is that this new matrix remains a graph. In fact, it can be checked that this process exactly preserves the in- and out-degrees of all neighbors of $j$, so the graph also remains Eulerian. However, without additional assumptions on the non-zero structures such as separators, directly performing the above process takes $O(n^{3})$ time: the newly added entries quickly increase the density of the matrix until each row has $\Theta(n)$ entries. So the starting point of elimination based Laplacian solvers is to address the following two problems: \begin{enumerate} \item Keeping the results of elimination sparse. \item Finding vertices that are easy to eliminate. \end{enumerate} \subsection{Block Cholesky} \newcommand\PrecPosition{ \vspace{0.2cm} \begin{algorithm}[H] \caption{$\PreC\pr{\dr{\dr{\Stt{i}}_{i=1}^d, \dr{F_i}_{i=1}^d}, \xx, N}$} \label{alg:precondition} \KwIn{ $\pr{\alp, \bet, \dr{\dlt_i}_{i=1}^d }$-Schur complement chain $\dr{\dr{\Stt{i}}_{i=1}^d, \dr{F_i}_{i=1}^d}$; vector $\xx\in \Real^n $; number of iterations $N$ } \KwOut{ vector $\xx\in \Real^n$ } \For{$i = 1, \cdots, d - 1$}{ $\xx_{F_i} \arl \PRI\pr{\Stt{i}_{F_i F_i}, \xx_{F_i}, \Diag{\Stt{i}}_{F_i F_i}\inv, \frac{1}{2}, N } $ \; $\xx_{C_i} \arl \xx_{C_i} - \Stt{i}_{C_i F_i}\xx_{F_i} $ } $\xx_{F_d} \arl \pr{\Stt{d}}\dg\xx_{F_d} $ \; \For{$i = d - 1, \cdots, 1 $}{ $\xx_{F_i} \arl \xx_{F_i} - \PRI\pr{\Stt{i}_{F_i F_i}, \Stt{i}_{F_i C_i}\xx_{C_i}, \Diag{\Stt{i}}_{F_i F_i}\inv, \frac{1}{2}, N } $ } Let $\xx \arl \xx - \frac{\mathbf{1}^\top\xx}{n}\cdot \mathbf{1} $ \; Return $\xx$ \end{algorithm} \vspace{0.2cm} } One possible solution to the issue above is to directly sample the edges formed after eliminating each vertex. This leads to sparsified/incomplete Cholesky factorization based algorithms~\cite{KS16,cohen2018solving}, including the first nearly-linear time solver for directed Laplacians.
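To illustrate the core sampling idea (this is only the expectation calculation, not the actual routines of~\cite{KS16,cohen2018solving}; all names are ours), the sketch below draws a sparse surrogate for the dense product clique created by one elimination, with edge probabilities proportional to the product weights so that the sample is correct in expectation.

```python
import numpy as np

def sample_elimination_clique(w_in, w_out, d, s, rng):
    """Eliminating a vertex of degree d with in-edge weights w_in (from
    vertices k) and out-edge weights w_out (to vertices i) creates the
    dense product w_in w_out^T / d.  Instead of adding all
    nnz(w_in) * nnz(w_out) edges, draw s edges k -> i with probability
    proportional to w_in[k] * w_out[i] and reweight each sample so the
    sparse estimate is unbiased."""
    p = np.outer(w_in, w_out).ravel()
    total = p.sum()
    idx = rng.choice(len(p), size=s, p=p / total)
    est = np.zeros_like(p)
    np.add.at(est, idx, total / (s * d))     # each sample carries equal mass
    return est.reshape(len(w_in), len(w_out))

rng = np.random.default_rng(1)
w_in, w_out, d = np.array([1., 2.]), np.array([3., 1.]), 4.0
exact = np.outer(w_in, w_out) / d
approx = sample_elimination_clique(w_in, w_out, d, 200000, rng)
assert np.allclose(approx, exact, atol=0.05)  # unbiased, so it concentrates
```

The real algorithms additionally need the sampled edges to approximately preserve degrees so that the intermediate matrices remain (close to) Eulerian; that is what the routines $\SP$ and $\SE$ of the next section provide.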
Our algorithm is based on eliminating blocks of vertices, and is closest to the algorithm from~\cite{kyng2016sparsified}. It aims to eliminate a block of $\Omega(n)$ vertices simultaneously, in parallel. This subset, which we denote using $F$, is chosen to be almost independent. That is, $F$ is picked so that each vertex in $F$ has at least a constant portion of its out-degree going to $V \setminus F$, which we denote as $C$. This property means that any random walk on $F$ exits it with constant probability. This intuition, when viewed from the iterative methods perspective, implies that the power method allows rapid simulation of elimination onto $C = V \setminus F$. From a matrix perspective, it means these matrices are well-approximated by their diagonal. So the subproblem $\LL_{FF}\inv\bb$ can be solved to high accuracy in $O\pr{\log n}$ iterations via the power method. We formalize the guarantees of such a procedure, \textsc{PreRichardson}, in Lemma~\ref{lem:PRIconverge1} in Section~\ref{sec:preconditioner}. \PrecPosition Compared to single-vertex elimination schemes, block elimination has the advantage of having less error accumulation. Single-vertex elimination can be viewed as eliminating $1$ vertex per step, while we will show that the block method eliminates $\Omega(n)$ vertices in $O(\log\log{n})$ steps. This smaller number of dependencies in turn provides us the ability to bound errors more directly. Formally, given a partition $F, C$ of $[n]$, the block Cholesky factorization of $\LL\in \MS{n}{n}$ is given as \eq{ \LL = \mx{ \II_{\abs{F}} & \zerom{F}{C} \\ \LL_{CF}\LL_{FF}\inv & \II_{\abs{C}} } \cdot \mx{ \LL_{FF} & \zerom{F}{C} \\ \zerom{C}{F} & \sc{\LL, F} } \cdot \mx{ \II_{\Fn} & \LL_{FF}\inv\LL_{FC} \\ \zerom{C}{F} & \II_{\abs{C}} }.
} Using the above factorization iteratively (with sparsification) generates a Schur complement chain $\dr{\dr{\Stt{i}}_{i=1}^d, \dr{F_i}_{i=1}^d }$ (Definition~\ref{def:SCC1}). The solver algorithm loops through these and solves the subproblems via the projection / prolongation maps defined via the random walks on the $F_i$, using the power-method based elimination procedure described above. Its pseudocode is given in Algorithm~\ref{alg:precondition} for completeness. We remark that in practice, the iteration numbers of the preconditioned Richardson iterations in Algorithm~\ref{alg:precondition} may differ from one another. Here, we set a uniform $N$ merely for simplicity. If we have unlimited (or quadratic) precomputation power, the method described above suffices to give us a fast solver. However, since the exact Schur complements are dense, the major remaining difficulty is to efficiently compute an approximate Schur complement. \subsection{Schur Complement Sparsification via Partial Block Elimination } \newcommand\SparSchurPseuC{ \vspace{0.2cm} \begin{algorithm}[H] \caption{$\SparseSchur\pr{\LL, F, \dlt}$} \label{alg:SparSchur} \KwIn{ strongly connected Eulerian Laplacian $\LL\in \MatSize{n}{n}$; partition $F, C$ of $[n]$; error parameter $\dlt \in (0, 1)$ } \KwOut{ Sparse approximate Schur complement $\SS$ } If $\nnz{\LL} \geq O\pr{\NSE\pr{n, \dlt}}$, call $\SparE$ to sparsify $\LL$ with error parameter $O\pr{\dlt}$ \; Find a permutation matrix $\PP$ such that $\PP\LL\PP^\top = \mx{\LL_{FF} & \LL_{FC} \\ \LL_{CF} & \LL_{CC} } $ \footnote{This permutation matrix $\PP$ is only used to simplify the pseudocodes in Lines~\ref{line:Lttk01},~\ref{line:Attk1}. We don't need to construct it in practice.
} \; Set $K \arl O\pr{\log\log \frac{n}{\dlt}}$, $\eps \arl O\pr{\frac{\dlt }{K}}$, $\Ltt{0} \arl \LL $, $\DD \arl \Diag{\Ltt{0}}$, $\Att{0} \arl \DD - \Ltt{0}$ \; \For{$k = 1, \cdots, K$}{ \For{$i \in F$}{ $\Ytt{k, i} \arl \SP\pr{\Att{k-1}_{:, i}, \tpp{\Att{k-1}_{i,:}}, \eps, F}$ } Let $\Ytt{k} \arl \sum_{i\in F} \frac{1}{\DD_{ii}}\Ytt{k,i}$, $\Ltt{k,0} \arrlf \PP^\top \mx{ \DD_{FF} & - \Att{k-1}_{FC} \\ - \Att{k-1}_{CF} & 2\Ltt{k-1}_{CC} }\PP - \Ytt{k} $ \label{line:Lttk01} \; $\Ltt{k} \arl \SE\pr{\Ltt{k,0}, \eps, F}$ and $\Att{k} \arl \PP^\top\mx{\DD_{FF} & \\ & \Diag{\Ltt{k}_{CC}} }\PP - \Ltt{k} $ \label{line:Attk1} \; } \For{$i \in F$}{ $\Xtt{i} \arl \SparP\pr{\Att{K}_{C,i}, \tpp{\Att{K}_{i, C}}, \eps } $ } Let $\Xap \arl \sum_{i\in F} \frac{1}{\DD_{ii}} \Xtt{i}$ and $\Stt{0} \arrlf \frac{1}{2^K}\pr{\Ltt{K}_{CC} - \Xap} $ \; Compute a patching matrix $\RR \in \MatSize{\Cn}{\Cn} $ with $\RR_{2:\Cn,1} = - \Stt{0}_{2:\Cn,:}\mathbf{1} $, $\RR_{1,2:\Cn} = - \mathbf{1}^\top\Stt{0}_{:, 2:\Cn}$, $\RR_{1, 1} = - \RR_{1,2:\Cn}\mathbf{1} - \mathbf{1}^\top\RR_{2:\Cn,1} - \mathbf{1}^\top\Stt{0}\mathbf{1} $, and $\RR_{ij} = 0$ for $i\neq 1$ and $j\neq 1$ \; Set $\Sap = \Stt{0} + \RR$ \; Return $\SS = \SparE\pr{\Sap, \dlt/8}$ \end{algorithm} \vspace{0.2cm} } Thus, the main bottleneck toward an efficient algorithm is the fast construction of approximate Schur complements. We will give such an algorithm, based on a partial block elimination process, whose running time is close to that of the sparsification primitives. In simple terms, a step of this process squares the $(F, F)$ block. Repeating this gives quadratic convergence. With $\alp = O(1)$, $O(\log\log{n})$ iterations suffice, so the resulting error accumulation is easier to control than with martingale-based analyses.
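The quadratic convergence of this squaring step can be seen on a toy example. The sketch below (a standalone NumPy illustration under a diagonal-dominance assumption of our own choosing, not the actual $\SparseSchur$ routine) repeatedly applies the update $\AA_{FF} \leftarrow \AA_{FF}\DD_{FF}^{-1}\AA_{FF}$ and tracks the contraction factor $\|\DD_{FF}^{-1}\AA_{FF}\|_\infty$:

```python
import numpy as np

rng = np.random.default_rng(1)
nF = 6
# toy (F, F) block: nonnegative off-diagonal part A, with diagonal D chosen
# as twice the row sums so that ||D^{-1} A||_inf = 1/2 exactly
A = rng.random((nF, nF))
np.fill_diagonal(A, 0.0)
D = np.diag(2.0 * A.sum(axis=1))
Dinv = np.linalg.inv(D)

norms = []
Ak = A.copy()
for k in range(4):
    norms.append(np.linalg.norm(Dinv @ Ak, ord=np.inf))
    Ak = Ak @ Dinv @ Ak       # one partial-elimination step: square the FF block

# after k steps the contraction factor is (1/2)^(2^k): 0.5, 0.25, 0.0625, ...
```

Since $\DD^{-1}\AA$ here is nonnegative with row sums exactly $1/2$, each squaring step squares the contraction factor, which is the quadratic convergence referred to above.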
\begin{lemma} [\cite{peng2014efficient}] For any diagonal matrix $\DD\in\MatSize{n}{n}$ and a matrix $\AA\in \MatSize{n}{n}$ with $\DD - \AA$ nonsingular, we have \eql{\label{eq:D-A}}{ \pr{\DD - \AA}\inv = \frac{1}{2}\pr{\DD\inv + \pr{\II + \DD\inv\AA}\pr{\DD - \AA\DD\inv\AA}\inv\pr{\II + \AA\DD\inv} }. } \end{lemma} The main identity~\eqref{eq:D-A} gives rise to our definition of the partial-block-eliminated Laplacian of $\LL$ \eq{ \Lt{1} = \mx{ \DD_{FF} & - \AA_{FC} \\ - \AA_{CF} & 2\LL_{CC} } - \AA_{:, F}\DD_{FF}\inv\AA_{F,:} } where $\DD = \Diag{\LL}$ and $\AA = \DD - \LL$. We will frequently use such decompositions of Laplacians in this paper. $\frac{1}{2}\Lt{1}$ is an Eulerian Laplacian which has the same Schur complement with respect to $F$ as $\LL$, i.e., \eq{ \sc{\LL, F} = \frac{1}{2}\sc{\Lt{1}, F }. } Repeating this process gives us a sequence of $\Lt{1}, \Lt{2}, \cdots, \Lt{K}$, which are called the first to the $K$-th partially-block-eliminated Laplacians. $\Lt{k}$ can also be regarded as a partially powered matrix of $\LL$, which uses the powering to obtain better spectral properties. Specifically, when we focus on the $(F, F)$ block of $\Lt{k}$, it is easy to see that $\ni{\DD_{FF}\inv\At{k}_{FF}}$ converges to $0$ at a quadratic rate, where $\At{k}_{FF} = \DD_{FF} - \Lt{k}_{FF}$. The formal construction of the $k$-th partially-block-eliminated Laplacians and their properties are deferred to Appendix~\ref{sec:exactPBE}. To counter the increasing density of $\Lt{k}$, the sparsification black boxes in Section~\ref{sec:sparseblkb} naturally accompany the partial block elimination to yield a Schur complement sparsification method (Algorithm~\ref{alg:SparSchur}). Our algorithm is essentially a directed variant of the one from~\cite{kyng2016sparsified}. A slight difference is that in the last step, we need to fix the degree discrepancies caused by approximating the strongly RCDD matrix by its diagonal.
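The identity~\eqref{eq:D-A} is easy to verify numerically. The following sketch checks it on a random diagonally dominant instance (a hypothetical example of ours, with $\DD$ chosen large enough that all the inverses involved exist); note that $\AA$ need not be symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.random((n, n))          # nonsymmetric, nonnegative off-diagonal part
np.fill_diagonal(A, 0.0)
D = np.diag(2.0 * A.sum(axis=1))  # strict dominance: D - A and D - A D^{-1} A invertible
Dinv = np.linalg.inv(D)
I = np.eye(n)

lhs = np.linalg.inv(D - A)
rhs = 0.5 * (Dinv
             + (I + Dinv @ A)
             @ np.linalg.inv(D - A @ Dinv @ A)
             @ (I + A @ Dinv))
# lhs and rhs agree entrywise
```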
The performance of Algorithm~\ref{alg:SparSchur} is shown in Theorem~\ref{thm:SparSchur}. The $k$-th iterate $\Ltt{k}$ of Algorithm~\ref{alg:SparSchur} is termed the approximate $k$-th partially-block-eliminated Laplacian, while its exact version is just the (exact) $k$-th partially-block-eliminated Laplacian $\Lt{k}$ defined above. To guarantee the performance of Algorithm~\ref{alg:SparSchur}, the key is to provide relatively tight bounds for the difference $\Ltt{k} - \Lt{k}$. \SparSchurPseuC \newcommand\thmSparSchur{ (Schur complement sparsification) For a \strc\ Eulerian Laplacian $\LL\in\MS{n}{n}$, let $F, C$ be a partition of $[n]$ such that $\LL_{FF}$ is $\alp$-RCDD $(\alp = O(1))$, and let $\dlt \in (0, 1)$ be an error parameter. Then the subroutine $\SparseSchur$ (Algorithm~\ref{alg:SparSchur}) runs in time \eq{ O\pr{\TSE\pr{m, n, \dlt}} + \Otil{\TSE\pr{\NSE\pr{n, \dlt}\dlt^{-2} , n, \dlt}\log n } } to return with high probability a \strc\ Eulerian Laplacian $\SS$ satisfying $\nnz{\SS} = O\pr{\NSE\pr{\abs{C}, \dlt}} $ and \eql{\label{eq:sttgood}}{ \SS - \sc{\LL, F} \aleq \dlt \cdot \U{\sc{\LL, F}}. } } \begin{theorem}\label{thm:SparSchur} \thmSparSchur \end{theorem} Compared to the undirected analog from~\cite{kyng2016sparsified}, powered directed matrices exhibit significantly more complicated spectral structures. To analyze them, we develop new interpretations of directed Schur complements based on matrix extensions. \subsection{Bounding Error Accumulations in Partially-Eliminated Laplacians by Augmented Matrices }\label{sec:oveaugm1} When considering the approximate partial block elimination, in each update step not only are new sparsification errors added into $\Ltt{k}$, but the errors accumulated from previous steps also multiply with each other and may be amplified. In addition, error accumulations in Schur complements of directed Laplacians are not as straightforward as for their undirected counterparts.
It is not the case that, for two directed Eulerian Laplacians with the same undirectification, the undirectifications of their Schur complements are the same. For instance, consider the undirected versus the directed cycle, eliminated until only two originally diametrically opposite vertices remain. The former has a Schur complement of weight $2/n$, while the latter has a Schur complement of weight $1$. By the definition of \Lapap{\eps}, we need to essentially show the following inequality in order to obtain the approximations needed for a nearly-linear time algorithm: \eq{ \frac{1}{2^k}\U{\Lt{k}} =& \frac{1}{2^k} \U{\mx{ \DD_{FF} - \At{k-1}_{FF}\DD_{FF}\inv\At{k-1}_{FF} & - \pr{\II + \At{k-1}_{FF}\DD_{FF}\inv}\At{k-1}_{FC} \\ - \At{k-1}_{CF}\pr{\II + \DD_{FF}\inv\At{k-1}_{FF}} & 2\Lt{k-1}_{CC} - \At{k-1}_{CF}\DD_{FF}\inv\At{k-1}_{FC} }} \\ \preccurlyeq& \frac{O\pr{1}}{2^{k-1}}\cdot \U{\Lt{k-1}} \preccurlyeq \cdots \preccurlyeq O\pr{1}\cdot \U{\LL}. } Here significant difficulties arise due to the already complicated formula of $\Lt{k}$. For the above reasons, directly bounding $\frac{1}{2^k}\pr{\Ltt{k} - \Lt{k}}$ from $\frac{1}{2^{k-1}}\pr{\Ltt{k-1} - \Lt{k-1}} $ can easily lead to a blow-up. So we instead express the exact and approximate partial block elimination as Schur complements of large augmented matrices. That is, for the exact and approximate $k$-th partially-block-eliminated matrices $\Lt{k}$, $\Ltt{k}$, we analyze augmented matrices $\Mt{0, k}$, $\Mtt{0, k}$ of size $2^{k} |F| + |C|$ directly. We start with the construction of a desirable $\Mt{0, k}$. To this end, we define a sequence of augmented matrices $\dr{\Mt{i, k}}_{i=0}^k$, where $\Mt{k,k} = \Lt{k}$ and each $\Mt{i, k}$ is a Schur complement of $\Mt{i-1, k}$. Here we only give an informal explanation of how we construct $\Mt{i,k}$. The formal definitions of these augmented matrices are given in Section~\ref{sec:reformPBEvAM1}.
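The cycle example can be reproduced numerically. The sketch below (a standalone NumPy illustration with $n = 16$) forms the Schur complements of the undirected and the directed $n$-cycle onto two diametrically opposite vertices; the surviving weight is constant in the directed case but only $\Theta(1/n)$ in the undirected case, even though the two Laplacians have the same undirectification up to scaling:

```python
import numpy as np

def schur(L, C):
    """Schur complement of L onto the index set C."""
    F = [i for i in range(L.shape[0]) if i not in C]
    LFF = L[np.ix_(F, F)]
    return L[np.ix_(C, C)] - L[np.ix_(C, F)] @ np.linalg.solve(LFF, L[np.ix_(F, C)])

n = 16
P = np.roll(np.eye(n), 1, axis=1)    # cyclic shift: edge i -> i+1 (mod n)
L_dir = np.eye(n) - P                # directed n-cycle (Eulerian Laplacian)
L_und = 2 * np.eye(n) - P - P.T      # undirected n-cycle

C = [0, n // 2]                      # two diametrically opposite vertices
S_dir = schur(L_dir, C)              # random walk exits with probability 1: weight 1
S_und = schur(L_und, C)              # two series paths of n/2 unit edges: weight O(1/n)
```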
To begin with, for some fixed $k\in [K]$, we define $ \Mt{k, k} \stackrel{\mathrm{def}}{=} \Lt{k}. $ And we write $F_1 = F$ in the remainder of Section~\ref{sec:overview} and the entire Section~\ref{sec:reformPBEvAM1} and Section~\ref{sec:Schurcplstable2}. Next, we take $\Mt{k-1, k}$ and $\Mt{k-2, k}$ as examples to show how we define such a sequence of matrices $\Mt{k-1, k}, \cdots, \Mt{0, k}$. Define $\Mt{k-1, k}$ as follows \eq{ \Mt{k-1, k} \stackrel{\mathrm{def}}{=} \begin{blockarray}{cccl} F_2 & F_1 & C & \arl \text{column indexes } \\ \begin{block}{[ccc]l} \DD_{FF} & - \At{k-1}_{FF} & -\At{k-1}_{FC} & \arl \text{rows indexed by } F_2 \\ - \At{k-1}_{FF} & \DD_{FF} & -\At{k-1}_{FC} & \arl \text{rows indexed by } F_1 \\ - \At{k-1}_{CF} & - \At{k-1}_{CF} & 2\Lt{k-1}_{CC} & \arl \text{rows indexed by } C \\ \end{block} \end{blockarray} } Then, it follows by direct calculations that \eq{ \sc{\Mt{k-1, k}, F_2} = \mx{ \DD_{FF} - \At{k-1}_{FF}\DD_{FF}\inv\At{k-1}_{FF} & - \pr{\II + \At{k-1}_{FF}\DD_{FF}\inv}\At{k-1}_{FC} \\ - \At{k-1}_{CF}\pr{\II + \DD_{FF}\inv\At{k-1}_{FF}} & 2\Lt{k-1}_{CC} - \At{k-1}_{CF}\DD_{FF}\inv\At{k-1}_{FC} } \comeq{\eqref{line:densebiclique}} \Lt{k}. } Next, we define $\Mt{k-2, k}$ as follows \eq{ \Mt{k-2, k} \stackrel{\mathrm{def}}{=} \begin{blockarray}{cccccl} F_4 & F_3 & F_2 & F_1 & C & \arl \text{column indexes } \\ \begin{block}{[ccccc]l} \DD_{FF} & & & - \At{k-2}_{FF} & -\At{k-2}_{FC} & \arl \text{rows indexed by } F_4 \\ & \DD_{FF} & -\At{k-2}_{FF} & & -\At{k-2}_{FC} & \arl \text{rows indexed by } F_3 \\ - \At{k-2}_{FF} & & \DD_{FF} & & -\At{k-2}_{FC} & \arl \text{rows indexed by } F_2 \\ & -\At{k-2}_{FF} & & \DD_{FF} & -\At{k-2}_{FC} & \arl \text{rows indexed by } F_1 \\ - \At{k-2}_{CF} & - \At{k-2}_{CF} & - \At{k-2}_{CF} & - \At{k-2}_{CF} & 4\Lt{k-2}_{CC} & \arl \text{rows indexed by } C \\ \end{block} \end{blockarray}. 
} It follows by direct calculations that \eq{ &\sc{\Mt{k-2, k}, F_3\cup F_4} \\ =& \mx{ \DD_{FF} & - \At{k-2}_{FF}\DD_{FF}\inv\At{k-2}_{FF} & - \pr{\II + \At{k-2}_{FF}\DD_{FF}\inv}\At{k-2}_{FC} \\ - \At{k-2}_{FF}\DD_{FF}\inv\At{k-2}_{FF} & \DD_{FF} & - \pr{\II + \At{k-2}_{FF}\DD_{FF}\inv}\At{k-2}_{FC} \\ - \At{k-2}_{CF}\pr{\II + \DD_{FF}\inv\At{k-2}_{FF}} & - \At{k-2}_{CF}\pr{\II + \DD_{FF}\inv\At{k-2}_{FF}} & 4\Lt{k-2}_{CC} - 2\At{k-2}_{CF}\DD_{FF}\inv\At{k-2}_{FC} } \\ =& \mx{ \DD_{FF} & - \At{k-1}_{FF} & -\At{k-1}_{FC} \\ - \At{k-1}_{FF} & \DD_{FF} & -\At{k-1}_{FC} \\ - \At{k-1}_{CF} & - \At{k-1}_{CF} & 2\Lt{k-1}_{CC} } \\ =& \Mt{k-1, k}. } Then, we attach sparsification errors to $\Mt{0, k}$ to define $\Mtt{0, k}$. Analyzing errors in $\Mt{0, k}$ is much friendlier than directly working with the update steps from $\frac{1}{2^{k-1}}\pr{\Ltt{k-1} - \Lt{k-1}}$ to $\frac{1}{2^k}\pr{\Ltt{k} - \Lt{k}}$. Under matrix norms related to a special kind of matrix termed a ``repetition matrix" (Section~\ref{sec:erroraccusml1}), the accumulation of error in the difference $\Mtt{0, k} - \Mt{0, k}$ as $k$ grows can be seen clearly. Now, we can derive relatively tight bounds for $\Ltt{k} - \Lt{k}$ by analyzing the difference $\Mtt{0, k} - \Mt{0, k}$ and using the robustness of Schur complements in this case (Section~\ref{sec:Schurcplstable2}). We believe this representation may be of independent interest. We also remark that these augmented matrices only arise during the analysis, and are not used in the algorithms. \section{Schur Complement Sparsification }\label{sec:Schur} \section{Partial Block Elimination via Augmented Matrices} \label{sec:reformPBEvAM1} In this section, we introduce our augmented-matrix-based view of partial block elimination. As we will show later, after $O\pr{\log\log n}$ steps of partial elimination, the $(F, F)$ block of the approximate partially-block-eliminated Laplacian $\Ltt{k}$ can be approximated by its diagonal ``safely".
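As a concrete check of the one-step identity $\sc{\Mt{k-1, k}, F_2} = \Lt{k}$, the sketch below (NumPy, with hypothetical random blocks playing the roles of $\DD_{FF}$, the blocks of $\At{k-1}$, and $\Lt{k-1}_{CC}$) builds the three-block augmented matrix and compares its Schur complement against the direct partial-block-elimination formula:

```python
import numpy as np

rng = np.random.default_rng(2)
nF, nC = 4, 3
AFF = rng.random((nF, nF))
AFC = rng.random((nF, nC))
ACF = rng.random((nC, nF))
LCC = rng.random((nC, nC)) + nF * np.eye(nC)
D = np.diag(2.0 * (AFF.sum(axis=1) + AFC.sum(axis=1)))  # RCDD-style dominance
Dinv = np.linalg.inv(D)

# augmented matrix M^{(k-1,k)} with block rows/columns (F_2, F_1, C)
M = np.block([[D,    -AFF, -AFC],
              [-AFF,  D,   -AFC],
              [-ACF, -ACF, 2 * LCC]])
# Schur complement eliminating the first block (F_2)
S = M[nF:, nF:] - M[nF:, :nF] @ Dinv @ M[:nF, nF:]

# direct one-step partial-block-elimination formula
I = np.eye(nF)
Lk = np.block([[D - AFF @ Dinv @ AFF,     -(I + AFF @ Dinv) @ AFC],
               [-ACF @ (I + Dinv @ AFF),   2 * LCC - ACF @ Dinv @ AFC]])
# S and Lk agree entrywise
```

The agreement is exact algebra, so the random choice of blocks is immaterial; it only serves to make the check nontrivial.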
So, what remains is to bound the error accumulations in the difference $\frac{1}{2^k}\pr{\sc{\Ltt{k}, F} - \sc{\Lt{k}, F}}$, which we do by bounding differences in $\frac{1}{2^k}\pr{\Ltt{k} - \Lt{k}}$. In Section~\ref{sec:ULtk}, we represent the exact $k$-th partially-block-eliminated Laplacian $\Lt{k}$ by a Schur complement of an augmented matrix $\Mt{0, k}\in \MS{\pr{2^k\abs{F} + \Cn}}{\pr{2^k\abs{F} + \Cn}}$. In Section~\ref{sec:erroraccusml1}, we define $\Mtt{0,k}$ as an inexact version of $\Mt{0,k}$ where sparsification errors accumulate. Then, we introduce a special type of augmented matrices, which we term \emph{repetition matrices}, and bound the difference $\Mtt{0, k} - \Mt{0, k}$ in norms based on these \emph{repetition matrices}. \subsection{A Reformulation of the Exact Partial Block Elimination }\label{sec:ULtk} To bound the difference $\frac{1}{2^k}\pr{\Ltt{k} - \Lt{k}}$, a direct way is to bound it by $\frac{1}{2^{k-1}}\pr{\Ltt{k-1} - \Lt{k-1}}$ recursively. However, our attempts to do so lead to errors that grow exponentially in $k$, the number of partial block elimination steps. In this section, we provide a reformulation of the exact version of partial block elimination which is more amenable to error analysis. To be specific, our strategy is to construct a large matrix $\Mt{0, k} \in \MS{\pr{\puts{k}}}{\pr{\puts{k}}}$ such that $\Lt{k}$ is a Schur complement of the large matrix $\Mt{0, k}$. Moreover, there is a partition of $\Mt{0, k}$ such that each block is either a zero matrix or equal to some submatrix of $\LL$. To construct $\Mt{0, k}$, we will construct a sequence of augmented matrices $\dr{\Mt{i, k}}_{i=0}^{k}$ satisfying Lemma~\ref{lem:Mti}. Later, by analyzing the large matrix $\Mt{0, k}$, we can derive tighter bounds for quantities related to $\Lt{k}$.
In Section~\ref{sec:reformPBEvAM1} and Section~\ref{sec:Schurcplstable2}, without loss of generality, we assume $F = \dr{n - \Fn +1, \cdots, n-1, n}$ and $C = \dr{1, 2, \cdots, \Cn} = [n]\dele F$. Now, we give a rigorous way to construct such a sequence of matrices $\dr{\Mt{i, k}}_{i=0}^k \ (0\leq k\leq K)$. To begin with, we define some sets to be used later. We define \eq{F_a = \dr{b\in \mathbb{Z}:\ \Cn + \pr{a - 1}\Fn + 1 \leq b \leq \Cn + a\Fn},\ \forall 1\leq a\leq 2^K. } Note that in our notation, $F = F_1$. Then, we construct a sequence of bijections $\dr{\psit{i}\pr{\cdot}}_{i=0}^K$ which indicate the ``positions" of the blocks equalling $- \At{i}_{FF}$ in the large augmented matrix $\Mt{k - i, k}$. We start with $\psit{0}\pr{\cdot}$ and will define these $\psit{i}$ iteratively. The mapping $\psit{0}\pr{\cdot}$ is defined as a trivial mapping from $\dr{1}$ to $\dr{1}$ with \eq{ \psit{0}\pr{1} = 1. } Then, assume we have defined $\psit{i-1}\pr{\cdot}$. Now, we define $\psit{i}$ as follows: \eq{ \psit{i}\pr{a} = \left\{ \begin{split} &a + 2^{i-1},\ a\in [2^{i-1}] \\ &\psit{i-1}\pr{a - 2^{i-1}},\ 2^{i-1}+1 \leq a\leq 2^{i} \end{split} \right. } If $\psit{i-1}\pr{\cdot}$ is a bijection from $[2^{i-1}]$ to $[2^{i-1}]$, then $\psit{i}\pr{\cdot} \Big|_{\dr{2^{i-1}+1, \cdots, 2^i}}$ is a bijection from $\dr{2^{i-1} + 1, \cdots, 2^i}$ to $[2^{i-1}]$. And by the definition, $\psit{i}\pr{\cdot}\Big|_{[2^{i-1}]}$ is a bijection from $[2^{i-1}]$ to $\dr{2^{i-1}+1, \cdots, 2^i}$. Then, $\psit{i}$ is a bijection from $[2^i]$ to $[2^i]$. It follows by induction that for any $k\in [K]$, $\psit{k}\pr{\cdot}$ is a bijection from $[2^k]$ to $[2^k]$. And by the fact that $2^{i-1}+1 \leq \psit{i}\pr{a} \leq 2^{i}$ for $a\in [2^{i-1}]$ and $\psit{i}\pr{a} \in [2^{i-1}]$ for $2^{i-1}+1 \leq a \leq 2^{i} $, we have the following relation \eql{\label{eq:Atnotdiag}}{ \psit{j}\pr{a} \neq a,\ \forall 1\leq j\leq K,\ a\in [2^{j}]. 
} With the notations defined above, we define the matrix $\Mt{i, k}$ as \eql{\label{eq:Mtik1}}{ \Mt{i, k} =& 2^{k-i}\put{\Lt{i}_{CC}, C, C, \puts{k-i}} + \sum_{a = 1}^{2^{k-i}}\Big(\put{\DD_{FF}, F_a, F_a, \puts{k-i} } \\ & + \put{- \At{i}_{FF}, F_a, F_{\psit{k-i}\pr{a}}, \puts{k-i}} + \put{- \At{i}_{FC}, F_a, C, \puts{k-i}} \\ & + \put{- \At{i}_{CF}, C, F_{a}, \puts{k-i}}\Big),\ \forall 0\leq k\leq K,\ 0\leq i\leq k. } where the notation $\put{\XX, A, B, n}$ has been defined in Section~\ref{sec:notation}, which means placing the matrix $\XX$ in the submatrix indexed by $\pr{A, B}$ of a zero matrix $\mathbf{0}_{n\times n}$; $\Lt{k}$ is the exact $k$-th partially-block-eliminated Laplacian, and formal definitions of $\Lt{k}, \At{k}$ are in Appendix~\ref{sec:exactPBE}. We have the following properties of $\dr{\Mt{i, k}}$. \begin{lemma}\label{lem:Mti} For any $0\leq k\leq K$, $0\leq i\leq k$, $\Mt{i, k}$ is an Eulerian Laplacian; $\Mt{i, k}_{ - [n], - [n]}$, $\Mt{i, k}_{- C, - C}$ are $\alp$-RCDD; the Schur complement satisfies \eql{\label{eq:scMFi+}}{ \sc{\Mt{i, k}, \cup_{a = 2^{k-i-1}+1}^{2^{k-i}} F_a } = \Mt{i+1, k}. } Further, \eql{\label{eq:scMFtili+}}{ \sc{\Mt{i, k}, - [n]} = \Lt{k}. } In addition, for any $\xx\in \Real^n$, let \eql{\label{eq:xtil2pk}}{ \xtil = \big(\underbrace{\xx_{F}^\top \ \cdots \ \xx_F^\top}_{\text{$2^k$ repetitions of $\xx_F^\top$}} \ \xx_C^\top \big)^\top, } then, \eql{\label{eq:morning1}}{ \xtil^\top\Mt{0, k}\xtil = 2^k \xx^\top \LL \xx. } \end{lemma} \begin{proof} We only need to prove the cases of $i < k$. Denote $F_i^+ = \cup_{a = 2^{i-1}+1}^{2^i } F_a$ in this proof. Since $\psit{k-i}\pr{\cdot}$ is a bijection from $[2^{k-i}]$ to $[2^{k-i}]$, for any $a\in [2^{k-i}]$, $\Mt{i, k}_{F_a, :} $ contains 3 nonzero blocks equaling $\DD_{FF}, -\At{i}_{FF}, -\At{i}_{FC}$, respectively; and the other blocks are all zero matrices.
Analogously, for any $a\in [2^{k-i}]$, $\Mt{i, k}_{:, F_a}$ contains 3 nonzero blocks equaling $\DD_{FF}, -\At{i}_{FF}, -\At{i}_{CF}$, respectively, while the other blocks are all zero matrices. Since $\Lt{i}$ is Eulerian (Lemma~\ref{lem:LtkE}), we have $\Mt{i,k}_{F_a, :}\mathbf{1} = \DD_{FF}\mathbf{1} - \At{i}_{FF}\mathbf{1} - \At{i}_{FC}\mathbf{1} = \Lt{i}_{F, :}\mathbf{1} = \mathbf{0} $. Analogously, $\mathbf{1}^\top\Mt{i,k}_{:, F_a} = \mathbf{0}^\top $. Also by the definition~\eqref{eq:Mtik1}, $\Mt{i, k}_{C, :}\mathbf{1} = 2^{k-i}\pr{\Lt{i}_{CC}\mathbf{1} - \At{i}_{CF}\mathbf{1}} = 2^{k-i}\Lt{i}_{C, :}\mathbf{1} = \mathbf{0}$. Analogously, $\mathbf{1}^\top\Mt{i, k}_{:, C} = \mathbf{0}^\top$. All off-diagonal entries of $\Mt{i,k}$ are non-positive by the definition~\eqref{eq:Mtik1}. Thus, $\Mt{i, k}$ is an Eulerian Laplacian. Notice that all blocks equaling $-\At{i}_{CF}$ are on the rows indexed by $C$, and all blocks equaling $- \At{i}_{FC}$ are on the columns indexed by $C$. Thus, on each block row or block column of $\Mt{i, k}_{- C, - C}$, there is exactly one block equaling $\DD_{FF}$ and exactly one block equaling $- \At{i}_{FF}$, and the other blocks are all-zero matrices. Thus, by~\eqref{eq:DinvAsuperl}, $\Mt{i, k}_{- C, - C}$ is $\alp$-RCDD. Then, $\Mt{i, k}_{- [n], - [n]}$ is also $\alp$-RCDD as it is a submatrix of $\Mt{i, k}_{ - C, - C}$. Also, as there are only 3 nonzero blocks on each row or column of $\Mt{i, k}$, when computing the Schur complement of $F_a \ (2^{k-i-1}+1\leq a \leq 2^{k-i})$, we only need to focus on the submatrix \eq{ \Mt{i, k}_{F_a\cup F_{a - 2^{k-i-1}} \cup C,\ F_a\cup F_{\psit{k-i}\pr{a}}\cup C} = \mx{ \DD_{FF} & - \At{i}_{FF} & - \At{i}_{FC} \\ - \At{i}_{FF} & \mathbf{0} & - \At{i}_{FC} \\ - \At{i}_{CF} & - \At{i}_{CF} & 2^{k-i}\Lt{i}_{CC} }, } where the block $\Mt{i, k}_{F_{a - 2^{k-i-1}}, F_{\psit{k-i}\pr{a}}} = \mathbf{0}$ is by~\eqref{eq:Atnotdiag}.
Then, by direct calculations, \eq{ &\sc{\Mt{i, k}_{F_a\cup F_{a - 2^{k-i-1}} \cup C,\ F_a\cup F_{\psit{k-i}\pr{a }}\cup C}, F_a} \\ =& \mx{ - \At{i}_{FF}\DD_{FF}\inv\At{i}_{FF} & - \pr{\II + \At{i}_{FF}\DD_{FF}\inv}\At{i}_{FC} \\ - \At{i}_{CF}\pr{\II + \DD_{FF}\inv\At{i}_{FF}} & 2^{k-i}\Lt{i}_{CC} - \At{i}_{CF}\DD_{FF}\inv\At{i}_{FC} } \\ =& \mx{ - \At{i+1}_{FF} & - \At{i+1}_{FC} \\ - \At{i+1}_{CF} & 2^{k-i}\Lt{i}_{CC} - \At{i}_{CF}\DD_{FF}\inv\At{i}_{FC} }. } When computing the Schur complement of $F_{k-i }^+ $, the term $\At{i}_{CF}\DD_{FF}\inv\At{i}_{FC} $ is subtracted from $\Mt{i, k}_{CC} = 2^{k-i}\Lt{i}_{CC}$ a total of $2^{k-i-1}$ times. By combining with the equality $\Lt{i+1}_{CC} = 2\Lt{i}_{CC} - \At{i}_{CF}\DD_{FF}\inv\At{i}_{FC}$ from~\eqref{line:densebiclique}, we have $ \sc{\Mt{i, k}, F_{k-i}^+}_{CC} = \Mt{i+1, k}_{CC}. $ To derive~\eqref{eq:scMFi+}, what remains is to show that the ``positions" of the blocks equaling $-\At{i+1}_{FF}$ are the same in $\sc{\Mt{i, k}, F^+_{k-i} }$ and $\Mt{i+1, k}$. This can be seen easily from the observation that for $2^{k-i-1}+1 \leq a\leq 2^{k-i}$, $ \sc{\Mt{i, k}_{F_a\cup F_{a - 2^{k-i-1}} \cup C,\ F_a\cup F_{\psit{k-i}\pr{a }}\cup C}, F_a} $ is supported on the submatrix indexed by \eq{ \pr{F_{a - 2^{k-i-1}} \cup C, F_{\psit{k-i}\pr{a }}\cup C}. } Thus, the ``positions" of blocks equaling $-\At{i+1}_{FF}$ in $\sc{\Mt{i, k}, F_{k-i}^+}$ are \eq{ &\dr{ \pr{F_{a - 2^{k-i-1}}, F_{\psit{k-i}\pr{a }}}: 2^{k-i-1}+1 \leq a \leq 2^{k-i}} \\ =&\dr{ \pr{F_{a - 2^{k-i-1}}, F_{\psit{k-i-1}\pr{a - 2^{k-i-1} }}}: 2^{k-i-1}+1 \leq a \leq 2^{k-i}} \\ =& \dr{ \pr{F_a, F_{\psit{k-i-1}\pr{a}}}: a\in [2^{k-i-1}]}, } which are the exact ``positions" of the blocks equaling $-\At{i+1}_{FF}$ in $\Mt{i+1, k}$ by~\eqref{eq:Mtik1}. Therefore, we have proved~\eqref{eq:scMFi+}.
The relation~\eqref{eq:scMFtili+} follows by Fact~\ref{fact:sctran} and induction: \eq{ \sc{\Mt{i, k}, - [n]} = \sc{\sc{\Mt{i, k}, F_{k-i}^+}, -[n]} = \sc{\Mt{i+1, k}, -[n]} = \cdots = \Mt{k, k} = \Lt{k}. } The relation~\eqref{eq:morning1} follows by the fact that $\psit{k}\pr{\cdot}$ is a bijection from $[2^k]$ to $[2^k]$ and~\eqref{eq:Mtik1}. \end{proof} The following lemma answers a question in Section~\ref{sec:oveaugm1}. That is, $\frac{1}{2^k}\U{\Lt{k}} \preccurlyeq O\pr{1}\U{\LL}$. \begin{lemma}\label{lem:ULtm} For any $0\leq k\leq K$, \eql{\label{eq:Ltm}}{ \frac{1}{2^k} \U{\Lt{k}} \preccurlyeq \pr{3 + \frac{2}{\alp}} \U{\LL}. } \end{lemma} \begin{proof} By Lemma~\ref{lem:Mti}, $\Mt{0, k}_{-[n], -[n]} $ is $\alp$-RCDD, and $\Mt{0, k}$ is an Eulerian Laplacian. Thus, using Fact~\ref{lem:scrobust}, \eq{ \U{\sc{\Mt{0, k}, -[n]}} \preccurlyeq \pr{3 + \frac{2}{\alp}}\sc{\U{\Mt{0, k }}, -[n]}. } By~\eqref{eq:scMFtili+}, $\U{\Lt{k}} \preccurlyeq \pr{3 + \frac{2}{\alp}}\sc{\U{\Mt{0, k}}, -[n] }. $ For any $\xx\in \Real^n$, define $\xtil$ as in~\eqref{eq:xtil2pk}. By Fact~\ref{fact:alpRCDDPSDpPD1}, $\U{\Mt{0, k}}_{-[n], -[n]}$ is PD. Then, by Fact~\ref{fact:Schurxusmall} and~\eqref{eq:morning1}, we have \eq{ \xx^\top\sc{\U{\Mt{0, k}}, -[n]}\xx \leq \xtil^\top\U{\Mt{0, k}}\xtil = 2^k \xx^\top \U{\LL} \xx, } i.e., $\sc{\U{\Mt{0, k}}, -[n] } \preccurlyeq 2^k \U{\LL} $. Combining the above equations yields that \eq{ \U{\Lt{k}} \preccurlyeq \pr{3 + \frac{2}{\alp}}\sc{\U{\Mt{0, k}}, -[n]} \preccurlyeq 2^k\pr{3 + \frac{2}{\alp}} \U{\LL}. } \end{proof} \subsection{Bounding Error Accumulation Using Repetition Matrices } \label{sec:erroraccusml1} Before we define $\Mtt{0,k}$, we introduce our notations for the errors induced by sparsification. 
\eq{ &\EY{k,i} = \Att{k-1}_{:,i}\Att{k-1}_{i,:} - \Ytt{k,i},\ \EY{k} = \sum_{i\in F }\frac{1}{\DD_{ii}}\EY{k,i} = \Att{k-1}_{:,F}\DD_{FF}\inv\Att{k-1}_{F,:} - \Ytt{k}, \\ &\Ett{k,0} = \Ltt{k,0} - \Ltt{k},\ \Ett{k} = \EY{k} + \Ett{k,0} } and \eq{ \EXX{i} = \Att{K}_{C,i}\Att{K}_{i,C} - \Xtt{i},\ \EX = \sum_{i\in F }\frac{1}{\DD_{ii}}\EXX{i} = \Att{K}_{CF}\DD_{FF}\inv\Att{K}_{FC} - \Xap. } We also denote in the rest of this paper, \eql{\label{eq:defRap}}{ \Rap = \RR + \frac{1}{2^K}\pr{\Att{K}_{CF}\pr{\DD_{FF} - \Att{K}_{FF}}\inv\Att{K}_{FC} - \Att{K}_{CF}\DD_{FF}\inv\Att{K}_{FC}}. } Some elementary facts about the results of Algorithm~\ref{alg:SparSchur} are given by Lemma~\ref{lem:LapetcpropSparSchurCpmt1} in Appendix~\ref{sec:someprfs}. By Lemma~\ref{lem:SparP} and Lemma~\ref{lem:SE}, we can provide bounds for the one-step errors in the next lemma. Its proof is deferred to Appendix~\ref{sec:someprfs}. \begin{lemma}\label{lem:EYEXEtt} The error matrices satisfy \al{ &\Ett{k} \aleq \epsz \U{\Ltt{k-1}}, \label{eq:EYLtt} \\ &\EX \aleq \eps \U{\sc{\Ltt{K}, F}}, \label{eq:EXscLtt} } where $\epsz = 2\pr{3 + \frac{2}{\alp}}\pr{2\eps + \eps^2}. $ \end{lemma} In the remainder of this paper, we write $\epsz = 2\pr{3 + \frac{2}{\alp}}\pr{2\eps + \eps^2}$. Recall that in Section~\ref{sec:ULtk} we defined an augmented matrix $\Mt{0, k}$ such that $\Lt{k}$ is one of its Schur complements. Now, we define $\Mtt{0, k}$, an inexact version of $\Mt{0, k}$, to analyze the properties of $\Ltt{k}$. We first define \eq{ \unierr{k, a, \Ett{i}} =& \put{\Ett{i}_{FF}, F_a, F_{\psit{k-i}\pr{a}}, \puts{k-i}} + \put{\Ett{i}_{FC}, F_a, C, \puts{k-i}} \\ & + \put{\Ett{i}_{CF}, C, F_{\psit{k-i}\pr{a}}, \puts{k-i}} + \put{\Ett{i}_{CC}, C, C, \puts{k-i} }. } Then, we define the error matrices \eql{\label{eq:defErrtik1}}{ \Errt{i, k} = \sum_{a=1}^{2^{k-i}} \unierr{k, a, \Ett{i}} } and \eq{ \Ers{k} = \sum_{i=1}^{k} \Errt{i, k}. } The matrix $\Mtt{0,k}$ is defined as follows \eq{ \Mtt{0, k} = \Mt{0, k} + \Ers{k}.
} \begin{lemma}\label{lem:Mtti} The Schur complement of $[\puts{k}]\dele [n]$ in $\Mtt{0, k}$ satisfies: \eql{\label{eq:Mttsc1}}{ \sc{\Mtt{0,k}, -[n]} = \Ltt{k}. } \end{lemma} \begin{proof} We define the following auxiliary matrices in this proof: \eq{ \Ntt{i, k} =& 2^{k-i}\put{\Ltt{i}_{CC}, C, C, \puts{k-i}} + \sum_{a = 1}^{2^{k-i}}\Big(\put{\DD_{FF}, F_a, F_a, \puts{k-i} } \\ & + \put{- \Att{i}_{FF}, F_a, F_{\psit{k-i}\pr{a}}, \puts{k-i}} + \put{- \Att{i}_{FC}, F_a, C, \puts{k-i}} \\ & + \put{- \Att{i}_{CF}, C, F_{a}, \puts{k-i}}\Big). } Since $\Ltt{0} = \LL$, we have $\Ntt{0, k} = \Mt{0, k}$. Also, by definition, $\Ntt{k, k} = \Ltt{k}$. We also denote $F_i^+ = \cup_{a = 2^{i-1}+1}^{2^{i} } F_a $ in this proof. As in the proof of Lemma~\ref{lem:Mti}, when computing the Schur complement of $F_a$ in $\Ntt{i,k} + \Errt{i+1,k}$ where $2^{k-i-1}+1 \leq a \leq 2^{k-i} $, we only need to focus on the following submatrix \eq{ &\pr{\Ntt{i, k} + \Errt{i+1, k}}_{F_a\cup F_{a - 2^{k-i-1}} \cup C,\ F_a\cup F_{\psit{k-i}\pr{a }}\cup C} \\ =& \mx{ \DD_{FF} & - \Att{i}_{FF} & - \Att{i}_{FC} \\ - \Att{i}_{FF} & \mathbf{0} & - \Att{i}_{FC} \\ - \Att{i}_{CF} & - \Att{i}_{CF} & 2^{k-i}\Ltt{i}_{CC} } + \mx{ \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \Ett{i+1}_{FF} & \Ett{i+1}_{FC} \\ \mathbf{0} & \Ett{i+1}_{CF} & 2^{k-i-1}\Ett{i+1}_{CC} }, } where the block $\Ntt{i,k}_{F_{a - 2^{k-i-1}}, F_{\psit{k-i}\pr{a}}} = \mathbf{0}$ is also by~\eqref{eq:Atnotdiag}.
By the definition of $\Ett{i+1 } $, we have \eq{ &\sc{\pr{\Ntt{i, k} + \Errt{i+1,k}}_{F_a\cup F_{a - 2^{k-i-1}} \cup C,\ F_a\cup F_{\psit{k-i }\pr{a }}\cup C}, F_a} \\ =& \mx{ - \Att{i}_{FF}\DD_{FF}\inv\Att{i}_{FF} + \Ett{i+1}_{FF} & - \pr{\II + \Att{i}_{FF}\DD_{FF}\inv}\Att{i}_{FC} + \Ett{i+1}_{FC} \\ - \Att{i}_{CF}\pr{\II + \DD_{FF}\inv\Att{i}_{FF}} + \Ett{i+1}_{CF} & 2^{k-i}\Ltt{i}_{CC} - \Att{i}_{CF}\DD_{FF}\inv\Att{i}_{FC} + 2^{k-i-1}\Ett{i+1}_{CC} } \\ =& \mx{ - \Att{i+1}_{FF} & - \Att{i+1}_{FC} \\ - \Att{i+1}_{CF} & \pr{2^{k-i} - 2}\Ltt{i}_{CC} + \Ltt{i+1}_{CC} }. } By arguments similar to those in Lemma~\ref{lem:Mti}, the ``positions" of the blocks equaling $- \Att{i+1}_{FF}$ in \\ $\sc{\Ntt{i, k} + \Errt{i+1,k}, F_{k-i}^+}$ are exactly the same as those in $\Ntt{i+1, k}$. Since $F_{k-i}^+$ contains $2^{k-i-1}$ sets of size $\abs{F}$, we have \eq{ \sc{\Ntt{i, k} + \Errt{i+1,k}, F_{k-i}^+}_{CC} = 2^{k-i-1}\pr{2\Ltt{i}_{CC} - \Att{i}_{CF}\DD_{FF}\inv\Att{i}_{FC} + \Ett{i+1}_{CC}} = 2^{k-i-1}\Ltt{i+1}_{CC}. } Thus, we have shown \eq{ \sc{\Ntt{i, k} + \Errt{i+1,k}, F_{k-i}^+} = \Ntt{i+1, k}. } Since the support set of $\sum_{j=i+2}^{k}\Errt{j, k} $ is $[\puts{k-i-2}]$, which is disjoint from $F_{k-i}^+ $, we have \eq{ \sc{\Ntt{i, k} + \sum_{j=i+1}^{k}\Errt{j,k}, F_{k-i}^+} = \Ntt{i+1, k} + \sum_{j=i+2}^{k}\Errt{j,k}. } By induction, \eq{ \sc{\Ntt{0, k} + \Ers{k}, -[n]} = \Ntt{k, k}. } Then, the relation~\eqref{eq:Mttsc1} follows by the fact that $\Mtt{0, k} = \Mt{0, k} + \Ers{k} = \Ntt{0, k} + \Ers{k} $ and $\Ntt{k, k} = \Ltt{k}$. \end{proof} To help bound $\Ers{k}$, we define a special class of matrices termed \emph{``repetition matrices"}. We construct the matrices $\dr{\XL{k}}$ as linear combinations of \emph{``repetition matrices"} in the proof of Lemma~\ref{lem:Ersepsz1}. \begin{definition} (``Repetition matrix") Consider a matrix $\AA\in \MS{m}{m}$, a subset $C \sleq [m]$, and $F = [m]\dele C$.
The $k$-\repmat\ of $\AA$ is defined as follows: \eq{ \rep{k, C, \AA} = \mx{ k \AA_{CC} & \AA_{CF} & \AA_{CF} & \cdots & \AA_{CF} \\ \AA_{FC} & \AA_{FF} & \mathbf{0} & \cdots & \mathbf{0} \\ \AA_{FC} & \mathbf{0} & \AA_{FF} & \ddots & \vdots \\ \vdots & \vdots & \ddots& \ddots & \mathbf{0} \\ \AA_{FC} & \mathbf{0} & \cdots & \mathbf{0} & \AA_{FF} } \in \MS{\pr{k\abs{F} + \abs{C}}}{\pr{k\abs{F} + \abs{C}}}, } where the repetition numbers of the blocks $\AA_{CF}, \AA_{FC}, \AA_{FF}$ are $k$. We also define $\repp{k, C, \AA, N}$ as a larger matrix by appending all-zero rows and columns to $\rep{k, C, \AA}$: \eq{ \repp{k, C, \AA, N} = \mx{ \rep{k, C, \AA} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} }\in \MS{N}{N}, } where $N \geq k\abs{F} + \abs{C}$ is used to indicate the size of $\repp{k, C, \AA, N}$. \end{definition} \begin{lemma}\label{lem:Ersepsz1} There is a sequence of symmetric matrices $\dr{\XL{k}}_{k=0}^K$ such that \\ $\XL{k}\in\MS{\pr{\puts{k}}}{\pr{\puts{k}}} \ (0\leq k\leq K)$ and for the error quantities defined as \eql{\label{eq:defgamk}}{ \gam_k = \sup_{\xx, \yy\notin \ker\pr{\sc{\XL{k }, -[n]}}} \frac{2\xx^\top\pr{\Ltt{k} - \Lt{k}}\yy}{\xx^\top\sc{\XL{k}, -[n]}\xx + \yy^\top\sc{\XL{k}, -[n]}\yy } , } they satisfy: \begin{enumerate}[(i)] \item $\XL{k}$ is a Laplacian; \label{enum:Q1} \item $\U{\Mt{0,k}} \preccurlyeq \pr{4 + \frac{2}{\alp} + \frac{\sum_{i=0}^{k-1}\gam_i}{k}}\XL{k} $; \label{enum:Q2} \item $\Diag{\XL{k}} = \Diag{\Mt{0,k}} $; \label{enum:Q3} \item $\XL{k}_{ - C, - C } $, $\XL{k}_{-[n], -[n]}$ are $\alp$-RCDD; \label{enum:Q4} \item $\nt{\pr{\XL{k}}^{1/2}\Diag{\Mt{0,k}}^{-1/2}}^2 \leq 2 $; \label{enum:Q5} \item for any vector $\xx\in \Real^{n}$, if we define $ \xtil = \big(\underbrace{\xx_{F}^\top \ \cdots \ \xx_F^\top}_{\text{$2^k$ repetitions of $\xx_F^\top$}} \ \xx_C^\top \big)^\top, $ then, $\xtil^\top\XL{k}\xtil = 2^k \xx^\top\U{\LL}\xx $; \label{enum:Q7} \item $\sc{\XL{k}, - C } \preccurlyeq 2^{k}\U{\sc{\LL, F}} $.
\label{enum:Q6} \end{enumerate} In addition, \eql{\label{eq:XLb1}}{ \Ers{k} \aleq \epsz \pr{4k + \frac{2k}{\alp} + \sum_{i=0}^{k-1}\gam_i} \XL{k}. } \end{lemma} \begin{proof} We only prove the case when all $\gam_k < +\infty$, and the proof for the case when some $\gam_k = +\infty$ follows trivially.\footnote{Actually, we will show in Lemma~\ref{lem:scLttyes} that all $\gam_k < +\infty$. } We define $\XL{0} = \U{\LL}$, and will construct $\XL{k}$ iteratively. That is, we want to show that given $\XL{0}, \cdots, \XL{k-1}$ satisfying these conditions, we can find $\XL{k}$. Firstly, we fix some $i\in \dr{0, 1, \cdots, k-1}$. By Lemma~\ref{lem:EYEXEtt} and Fact~\ref{lem:ne}, we have \eq{ 2\xx^\top \Ett{i+1} \yy \leq \epsz \pr{\xx^\top \U{\Ltt{i}} \xx + \yy^\top \U{\Ltt{i}} \yy},\ \qquad \forall \xx, \yy\in \Real^n. } Then, for any $\xx, \yy\in \Real^{\puts{k}}, $ by the definition of $\Errt{i+1, k}$ in~\eqref{eq:defErrtik1}, we have \eq{ &2\xx^\top \Errt{i+1, k} \yy = 2\sum_{a=1}^{2^{k-i-1} } \xx^\top \unierr{k, a, \Ett{i+1}} \yy = 2\sum_{a=1}^{2^{k-i-1} } \vc{\xx_{F_a} \\ \xx_C}^\top \Ett{i+1} \vc{\yy_{F_{\psit{k - i - 1}\pr{a}}} \\ \yy_C} \\ \leq& \epsz \sum_{a=1}^{2^{k-i-1} } \pr{ \vc{\xx_{F_a} \\ \xx_C}^\top \U{\Ltt{i}} \vc{\xx_{F_{a}} \\ \xx_C} + \vc{\yy_{F_{\psit{k-i-1}\pr{a}}} \\ \yy_C}^\top \U{\Ltt{i} } \vc{\yy_{F_{\psit{k-i-1}\pr{a}}} \\ \yy_C} }. 
} By the fact that $\psit{k-i-1}\pr{\cdot}$ is a bijection from $[2^{k-i-1}]$ to $[2^{k-i-1} ]$ and $\U{\Ltt{i}}$ is PSD (because $\Ltt{i}$ is an Eulerian Laplacian from Lemma~\ref{lem:LapetcpropSparSchurCpmt1}), we have \eql{\label{eq:xEy127}}{ &2\xx^\top \Errt{i+1, k} \yy \\ \leq& \epsz \sum_{a=1}^{2^{k-i-1} } \pr{ \vc{\xx_{F_a} \\ \xx_C}^\top \U{\Ltt{i}} \vc{\xx_{F_{a}} \\ \xx_C} + \vc{\yy_{F_{a}} \\ \yy_C}^\top \U{\Ltt{i} } \vc{\yy_{F_{a }} \\ \yy_C} } \\ \leq& \epsz \sum_{a=1}^{2^{k-i} } \pr{ \vc{\xx_{F_a} \\ \xx_C}^\top \U{\Ltt{i}} \vc{\xx_{F_{a}} \\ \xx_C} + \vc{\yy_{F_{a}} \\ \yy_C}^\top \U{\Ltt{i} } \vc{\yy_{F_{a }} \\ \yy_C} } \\ =& \epsz \pr{\xx^\top \repp{2^{k-i}, C, \U{\Ltt{i}}, \puts{k}}\xx + \yy^\top \repp{2^{k-i}, C, \U{\Ltt{i}}, \puts{k}}\yy }. } To bound the repetition matrix $\repp{2^{k-i}, C, \U{\Ltt{i}}, \puts{k}}$, we bound the matrices $\repp{2^{k-i}, C, \U{\Ltt{i} - \Lt{i}}, \puts{k}}$ and $\repp{2^{k-i}, C, \U{\Lt{i}}, \puts{k}} $, respectively. By the definition of $\gam_i $ in~\eqref{eq:defgamk}, \eq{ \U{\Ltt{i} - \Lt{i}} \preccurlyeq \gam_i \sc{\XL{i}, -[n] }. } Then, by Fact~\ref{fact:permuB1}, there is a permutation matrix $\Pt{i,k,1}$ such that \eql{\label{eq:XLLtt-Lt1}}{ &\repp{2^{k-i}, C, \U{\Ltt{i} - \Lt{i}}, \puts{k}} \\ \preccurlyeq& \gam_i \Pt{i,k,1}\rep{2^{k-i}, C, \XL{i}}\tpp{\Pt{i,k,1}}, } where we also used the fact that $\XL{i}\in \MS{\pr{\puts{i}}}{\pr{\puts{i}}}$ and \\ $\rep{2^{k-i}, C, \XL{i}} \in\MS{\pr{\puts{k}}}{\pr{\puts{k}}} . $ By Lemma~\ref{lem:Mti}, $\Mt{0, i}_{-[n], -[n] }$ is $\alp$-RCDD and $\sc{\Mt{0,i}, -[n]} = \Lt{i} $. Then, by Fact~\ref{lem:scrobust}, we have \eq{ \U{\Lt{i}} = \U{\sc{\Mt{0, i}, -[n]}} \preccurlyeq \pr{3 + \frac{2}{\alp}}\sc{\U{\Mt{0, i}}, -[n]}. 
} By Fact~\ref{fact:permuB1}, there is a permutation matrix $\Pt{i,k,2}$ such that \eql{\label{eq:XLLt1}}{ &\repp{2^{k-i}, C, \U{\Lt{i}}, \puts{k}} \\ \preccurlyeq& \pr{3 + \frac{2}{\alp}}\Pt{i,k,2}\rep{2^{k-i}, C, \U{\Mt{0, i}}}\tpp{\Pt{i,k,2}}, } where we also used the fact that $\Mt{0, i}\in \MS{\pr{\puts{i}}}{\pr{\puts{i}}}$ and $\rep{2^{k-i}, C, \Mt{0, i}}\in \MS{\pr{\puts{k}}}{\pr{\puts{k}}}. $ Now, we define $\XL{k}$ as: \eql{\label{eq:newday1}}{ \XL{k} \stackrel{\mathrm{def}}{=} & \frac{k}{4k + \frac{2k}{\alp} + \sum_{i=0}^{k-1}\gam_i} \U{\Mt{0,k}} \\ & + \frac{1}{4k + \frac{2k}{\alp} + \sum_{i=0}^{k-1}\gam_i} \sum_{i=0}^{k-1} \gam_i\Pt{i,k,1}\rep{2^{k-i}, C, \XL{i} }\tpp{\Pt{i,k,1}} \\ & + \frac{3 + \frac{2}{\alp}}{4k + \frac{2k}{\alp} + \sum_{i=0}^{k-1}\gam_i} \sum_{i=0}^{k-1} \Pt{i,k,2}\rep{2^{k-i}, C, \U{\Mt{0,i}} }\tpp{\Pt{i,k,2}}. } Then, by substituting~\eqref{eq:XLLtt-Lt1},~\eqref{eq:XLLt1} into~\eqref{eq:xEy127}, summing over $i=0, 1, \cdots, k-1$, and combining with~\eqref{eq:newday1} and Fact~\ref{lem:ne}, we have~\eqref{eq:XLb1}. We remark that the first term $\U{\Mt{0, k}}$ in~\eqref{eq:newday1} is only used to guarantee that $\XL{k} \succcurlyeq \frac{k }{4k + \frac{2k}{\alp} + \sum_{i=0}^{k-1}\gam_i}\U{\Mt{0,k}}$ (Property~\eqref{enum:Q2}). Without this term,~\eqref{eq:XLb1} still holds with only slight changes in the constants. We now show the remaining properties of $\XL{k}$, which can be derived easily from Lemma~\ref{lem:Mti} and Fact~\ref{fact:permuB1}. \begin{enumerate}[(i)] \item By Lemma~\ref{lem:Mti}, each $\Mt{0, i}$ is an Eulerian Laplacian. Then, by the definition of repetition matrices and the fact that $\Pt{i,k,2}$ is a permutation matrix, each matrix \\ $\Pt{i,k,2}\rep{2^{k-i}, C, \U{\Mt{0,i}} }\tpp{\Pt{i,k,2}}$ is a Laplacian. By the induction hypothesis, $\XL{i}$ is a Laplacian, so \\ $\Pt{i,k,1}\rep{2^{k-i}, C, \XL{i}}\tpp{\Pt{i,k,1}} $ is also a Laplacian. Thus, by~\eqref{eq:newday1}, $\XL{k}$ is also a Laplacian.
\item Follows directly from~\eqref{eq:newday1}. \item By the definition of $\Mt{0, i}$, $\Diag{\Mt{0, i}}_{-C, -C} = \Diag{\rep{2^i, \emptyset, \DD_{FF}}}$. By Fact~\ref{fact:permuB1}, $\Diag{\Pt{i,k,2}\rep{2^{k-i}, C, \U{\Mt{0,i}} }\tpp{\Pt{i,k,2}}}_{-C, -C} = \Diag{\rep{2^{k-i}, C, \U{\Mt{0,i}} }}_{-C, -C}$ $ = \rep{2^k, \emptyset, \DD_{FF}} = \Diag{\Mt{0, k}}_{-C, -C}$. By induction hypothesis, $\Diag{\XL{i}}_{-C, -C} = \Diag{\Mt{0, i}}_{-C, -C} $. Analogously, we have $\Diag{\Pt{i,k,1}\rep{2^{k-i}, C, \XL{i}}\tpp{\Pt{i,k,1}}}_{-C, -C} = \Diag{\Mt{0, k}}_{-C, -C}. $ By the definition of $\Mt{0, i}$, $\Diag{\rep{2^{k-i}, C, \U{\Mt{0, i}}}}_{CC} = 2^{k-i}\cdot 2^i\cdot\Diag{\Lt{0}}_{CC} = 2^k\Diag{\LL}_{CC} = \Diag{\Mt{0, k}}_{CC}. $ Combining with Fact~\ref{fact:permuB1}, we have \\ $\Diag{\Pt{i,k,2}\rep{2^{k-i}, C, \U{\Mt{0,i}} }\tpp{\Pt{i,k,2}}}_{CC} = \Diag{\rep{2^{k-i}, C, \U{\Mt{0,i}} }}_{CC} = \Diag{\Mt{0, k}}_{CC}. $ By induction hypothesis and similar arguments as above, \\ $\Diag{\Pt{i,k,1}\rep{2^{k-i}, C, \XL{i}}\tpp{\Pt{i,k,1}}}_{CC} = \Diag{\Mt{0, k}}_{CC} $. Then, Property~\eqref{enum:Q3} follows by the fact that the sum of coefficients of the terms on the RHS of~\eqref{eq:newday1} equals $1$. \item By Lemma~\ref{lem:Mti}, $\Mt{0, i}_{-C, -C}$ is $\alp$-RCDD. By Fact~\ref{fact:permuB1}, we have \\ $\pr{\Pt{i,k,2}\rep{2^{k-i}, C, \U{\Mt{0,i}} }\tpp{\Pt{i,k,2}}}_{-C, -C} $ is $\alp$-RCDD. By induction hypothesis, $\XL{i}_{-C, -C} $ is $\alp$-RCDD. Analogously, $\pr{\Pt{i,k,1}\rep{2^{k-i}, C, \XL{i}}\tpp{\Pt{i,k,1}}}_{-C, -C} $ is also $\alp$-RCDD. Then by the fact that the sum of the coefficients on the RHS of~\eqref{eq:newday1} equals $1$, we have $\XL{k}_{-C, -C}$ is $\alp$-RCDD. $\XL{k}_{-[n], -[n]}$ is also $\alp$-RCDD, as it is a submatrix of $\XL{k}_{-C, -C}$. \item Follows directly by Property~\eqref{enum:Q1}, Fact~\ref{fact:DLD} and Property~\eqref{enum:Q3}. 
\item Denote \eq{ \xhatt{i} = \big(\underbrace{\xx_{F}^\top \ \cdots \ \xx_F^\top}_{\text{$2^i$ repetitions of $\xx_F^\top$}} \ \xx_C^\top \big)^\top. } By induction hypothesis, $\tpp{\xhatt{i}} \XL{i}\xhatt{i} = 2^i\xx^\top\U{\LL} \xx$. Then, by Fact~\ref{fact:permuB1}, we have \\ $\tpp{\xhatt{k}}\Pt{i,k,1}\rep{2^{k-i}, C, \XL{i}}\tpp{\Pt{i,k,1}}\xhatt{k} = \tpp{\xhatt{k} } \rep{2^{k-i}, C, \XL{i}} \xhatt{k} = $ \\ $2^{k-i} \tpp{\xhatt{i}} \XL{i} \xhatt{i} = 2^k \xx^\top \U{\LL}\xx. $ By Lemma~\ref{lem:Mti} and similar arguments, $\tpp{\xhatt{k}} \Pt{i,k,2}\rep{2^{k-i}, C, \U{\Mt{0,i}} }\tpp{\Pt{i,k,2}} \xhatt{k} = 2^k\xx^\top\U{\LL}\xx. $ Then Property~\eqref{enum:Q7} follows from the fact that the sum of the coefficients on the RHS of~\eqref{eq:newday1} equals $1$. \item For any vector $\xx\in \Real^{n}$, we define $\xtil$ as in Property~\eqref{enum:Q7}. By Property~\eqref{enum:Q4} and Fact~\ref{fact:alpRCDDPSDpPD1}, $\XL{k}_{-[n], -[n]}$ is PD. Then, by Fact~\ref{fact:Schurxusmall} and Property~\eqref{enum:Q7}, we have \eq{ \xx^\top\sc{\XL{k}, -[n]}\xx \leq \xtil^\top\XL{k}\xtil = 2^k \xx^\top\U{\LL}\xx, } i.e., $\sc{\XL{k}, -[n]}\preccurlyeq 2^k\U{\LL}$. Then, using Fact~\ref{fact:scprvpleq} and Fact~\ref{fact:scUpleqUsc}, we have $\sc{\XL{k}, -C} \preccurlyeq 2^k \sc{\U{\LL}, F} \preccurlyeq 2^k \U{\sc{\LL, F}}. $ \end{enumerate} \end{proof} \section{Robustness of Schur Complements and Full Error Analysis}\label{sec:Schurcplstable2} In this section, we show additional robustness properties of Schur complements suitable for analyzing errors on the augmented matrices. Specifically, we establish conditions on $\AA, \BB, \UU$ where $\AA - \BB \aleq \eps\cdot \UU $, as well as the set to be eliminated, $F$, so that $\sc{\AA, F} - \sc{\BB, F} \aleq \dlt\cdot \sc{\UU, F} $. Using these properties, we bound the norms of errors in Schur complements of the $\XL{k}$, $\gam_k $ as defined in~\eqref{eq:defgamk}. 
Such bounds allow us to complete the proof of Theorem~\ref{thm:SparSchur}. \subsection{Schur Complement Robustness } The next lemma is a perturbed version of Fact~\ref{fact:LDL}. It is used to prove Lemma~\ref{lem:schurdUL1} below. \begin{lemma}\label{lem:pertbLDL} Suppose that $\LL\in \MatSize{n}{n}$ is an Eulerian Laplacian, $\DD = \Diag{\LL}$, $\WW$ is PSD, $\nt{\WW^{1/2}\DD^{-1/2}} \leq a $, and the matrix $\EE\in \MatSize{n}{n}$ satisfies $\EE \aleq b\WW$ with $a^2 b < 2$. Then the matrix $\MM = \LL + \EE$ satisfies: \eq{ &\MM^\top\DD\inv\MM \preccurlyeq \frac{1}{2 - a^2b}\pr{\pr{4 + 2a^2b}\U{\LL} + 2b \WW}, \\ &\MM\DD\inv\MM^\top \preccurlyeq \frac{1}{2 - a^2b}\pr{\pr{4 + 2a^2b}\U{\LL} + 2b \WW}. } \end{lemma} \begin{proof} For any $\xx \in \Real^n$, \eq{ \xx^\top\MM^\top\DD\inv\MM\xx = \xx^\top\LL^\top\DD\inv\LL\xx + \xx^\top\LL^\top\DD\inv\EE\xx + \xx^\top\EE^\top\DD\inv\MM\xx. } We bound each of these terms on the RHS separately. \begin{itemize} \item For the first term, by Fact~\ref{fact:LDL}, $ \xx^\top\LL^\top\DD\inv\LL\xx \leq 2\xx^\top \U{\LL} \xx. $ \item For the second term, by Fact~\ref{lem:ne}, the conditions $\EE \aleq b \WW $ and $\nt{\WW^{1/2}\DD^{-1/2}} \leq a $ and Fact~\ref{fact:LDL}, \eq{ &2\xx^\top\LL^\top\DD\inv\EE\xx \leq b\pr{\xx^\top \WW \xx + \xx^\top \LL^\top \DD\inv \WW \DD\inv \LL \xx} \\ =& b\pr{\xx^\top \WW \xx + \nt{\WW^{1/2}\DD\inv\LL\xx}^2 } \leq b\pr{\xx^\top \WW \xx + \nt{\WW^{1/2}\DD^{-1/2}}^2 \nt{\DD^{-1/2}\LL\xx}^2 } \\ \leq& b\pr{\xx^\top \WW \xx + a^2 \xx^\top \LL^\top \DD\inv \LL \xx } \leq b\pr{\xx^\top \WW \xx + 2 a^2 \xx^\top \U{\LL} \xx }.
} \item For the third term, using the conditions $ \EE \aleq b\WW $ and $\nt{\WW^{1/2}\DD^{-1/2}} \leq a $ again yields that \eq{ &2\xx^\top\EE^\top\DD\inv\MM\xx \leq b\pr{\xx^\top \WW \xx + \xx^\top \MM^\top \DD\inv \WW \DD\inv \MM \xx } \\ =& b\pr{\xx^\top \WW \xx + \nt{\WW^{1/2}\DD\inv\MM\xx}^2 } \leq b\pr{\xx^\top \WW \xx + a^2\nt{\DD^{-1/2}\MM\xx}^2 } \\ =& b\pr{\xx^\top \WW \xx + a^2 \xx^\top \MM^\top \DD\inv\MM\xx}. } By combining the above equations, we have \eq{ 2\xx^\top\MM^\top\DD\inv\MM\xx \leq \pr{4 + 2a^2b}\xx^\top\U{\LL} \xx + 2b \xx^\top\WW\xx + a^2 b \xx^\top \MM^\top\DD\inv\MM \xx. } \end{itemize} Rearranging the above equations yields that \eq{ \xx^\top\MM^\top\DD\inv\MM\xx \leq \frac{1}{2 - a^2b}\pr{\pr{4 + 2a^2b}\xx^\top\U{\LL}\xx + 2b \xx^\top\WW\xx}, } which gives the result for $\MM^\top\DD\inv\MM$. The bound for $\MM\DD\inv\MM^\top$ follows analogously. \end{proof} The following lemma shows the robustness of the Schur complements. It is used in the proof of Lemma~\ref{lem:scLttyes} to bound $\gam_k$. \begin{lemma}\label{lem:schurdUL1} Let $\NN\in \MatSize{n}{n}$ be an Eulerian Laplacian, let $\MM$ be an $n$-by-$n$ matrix, let $\UU \in\MatSize{n}{n}$ be PSD and $F, C$ a partition of $[n]$. Suppose that $\UU_{FF} $ is nonsingular, $\UU\mathbf{1} = \mathbf{0}$, $\NN_{FF}$ is $\rho$-RCDD $(\rho > 0)$, $\U{\NN}_{FF} \succcurlyeq \frac{1}{\mu}\UU_{FF}$, $\U{\NN} \preccurlyeq \beta \UU$, $\nt{\UU^{1/2}\Diag{\NN}^{-1/2}} \leq a$, and the matrix $\EE = \MM - \NN$ satisfies $\EE\aleq b \cdot \UU $ with $b < \min\dr{\frac{2}{a^2}, \frac{1}{\mu}}$. Then, $\MM_{FF}$, $\NN_{FF}$ are nonsingular and \eql{\label{eq:scdiff1}}{ {\sc{\MM, F} - \sc{\NN, F}} \aleq b \pr{1 + \frac{1}{\rho}}\frac{\mu\pr{\bet\pr{4 + 2a^2 b} + 2b}}{\pr{1 - \mu b}^2\pr{2 - a^2 b}} \cdot \sc{\UU, F}.
} \end{lemma} \begin{proof} By the condition $\MM - \NN \aleq b\cdot \UU $, we have $2\xx^\top\pr{\MM - \NN}\yy \leq b\pr{\xx^\top\UU\xx + \yy^\top\UU\yy},\ \forall \xx, \yy\in \Real^n $. Then, $\xx^\top\pr{\U{\MM}_{FF} - \U{\NN}_{FF}}\xx \leq b\xx^\top\UU_{FF}\xx,\ \forall \xx\in \Real^{\abs{F}}. $ Thus, $\U{\MM}_{FF} \succcurlyeq \U{\NN}_{FF} - b\UU_{FF} $. By the condition $\U{\NN}_{FF} \succcurlyeq \frac{1}{\mu}\UU_{FF}$, we have \eql{\label{eq:MFFpgeqNFF1}}{ \U{\MM}_{FF} \succcurlyeq \U{\NN}_{FF} - b\UU_{FF} \succcurlyeq \pr{1 - b\mu}\U{\NN}_{FF} } and \eql{\label{eq:MFpgeqUFF1}}{ \U{\MM}_{FF} \succcurlyeq \pr{1 - b\mu}\U{\NN}_{FF} \succcurlyeq \pr{\frac{1}{\mu} - b}\UU_{FF} \succ \mathbf{0}. } Since $\UU$ is PSD, $\UU_{FF}$ is also PSD. Since $\UU_{FF}$ is moreover nonsingular, $\UU_{FF} \succ \mathbf{0}$. Then, by the condition $b < \frac{1}{\mu} $ and~\eqref{eq:MFpgeqUFF1}, we have $\U{\MM}_{FF} \succ \mathbf{0} $. Then, by Fact~\ref{fact:kerAsmltkUAaPSD}, $\MM_{FF}$ and $\NN_{FF}$ are nonsingular. By Fact~\ref{fact:LUL}, we have \eql{\label{eq:MLUL1}}{\MM_{FF}\inv \U{\MM}_{FF} \tpp{\MM_{FF}\inv} \preccurlyeq \U{\MM}_{FF}\inv, } \eql{\label{eq:NUNNbyfactLUL}}{\tpp{\NN_{FF}\inv} \U{\NN}_{FF} \NN_{FF}\inv \preccurlyeq \U{\NN_{FF}}\inv. } For any $\xx, \yy\in \Real^{\abs{C}}$, define \eq{ \xn = \vc{ - \NN_{FF}\inv\NN_{FC}\xx \\ \xx }, \quad \xu = \vc{ - \UU_{FF}\inv\UU_{FC}\xx \\ \xx } } and \eq{ \ym = \vc{ - \tpp{\MM_{FF}\inv}\MM_{CF}^\top \yy \\ \yy }, \quad \yu = \vc{ - {\UU_{FF}\inv}\UU_{CF}^\top\yy \\ \yy }. } Then, \eq{ \NN\xn = \vc{ \zerov{F} \\ \sc{\NN, F}\xx },\quad \ym^\top\MM = \vc{ \zerov{F}^\top & \yy^\top\sc{\MM, F} }. } Thus, we have \eq{ \ym^\top\MM\xn = \vc{\zerov{F}^\top & \yy^\top\sc{\MM, F} } \vc{ - \NN_{FF}\inv\NN_{FC}\xx \\ \xx } = \yy^\top\sc{\MM, F}\xx. } Similarly, $ \ym^\top \NN \xn = \yy^\top\sc{\NN, F}\xx. $ Therefore, \eq{ \yy^\top\pr{\sc{\MM, F} - \sc{\NN, F}}\xx = \ym^\top\pr{\MM - \NN}\xn.
} Combining the above equation with $\EE \aleq b \cdot \UU $ and Fact~\ref{lem:ne} yields that \eq{ 2\yy^\top\pr{\sc{\MM, F} - \sc{\NN, F}}\xx \leq b \pr{\ym^\top\UU\ym + \xn^\top\UU\xn }. } Denote the projection matrix onto the image space of $\NN$ by $\Con $. It follows by direct calculation that $\pr{\xn}_F + \UU_{FF}\inv\UU_{FC}\xx = - \NN_{FF}\inv\pr{\NN\xu}_F = - \NN_{FF}\inv\pr{\Con\NN\xu}_F $. We denote the matrix \eq{ \PP = \Con \mx{ \tpp{\NN_{FF}\inv} \UU_{FF} \NN_{FF}\inv & \zerom{F}{C} \\ \zerom{C}{F} & \zerom{C}{C} } \Con } in this proof. Then by Fact~\ref{fact:Schurxusmall}, we have \eql{\label{eq:xnUP1}}{ &\xn^\top\UU\xn = \nA{\sc{\UU, F}}{\xx}^2 + \nA{\UU_{FF}}{{\NN_{FF}}\inv\pr{\Con\NN\xu}_F}^2 = \nA{\sc{\UU, F}}{\xx}^2 + \nA{\PP}{\NN\xu}^2. } By the condition $\U{\NN_{FF}} \succcurlyeq \frac{1}{\mu}\UU_{FF}$, we have \eql{\label{eq:NFUNbyNFbU}}{\tpp{\NN_{FF}\inv} \UU_{FF} \NN_{FF}\inv \preccurlyeq \mu\tpp{\NN_{FF}\inv} \U{\NN_{FF}} \NN_{FF}\inv. } Since $\NN_{FF}$ is $\rho$-RCDD, we have $\U{\NN_{FF}} \succcurlyeq \frac{\rho}{1 + \rho}\Diag{\NN}_{FF}$, i.e., \eql{\label{eq:DiagUNFF1}}{\U{\NN_{FF}}\inv \preccurlyeq \pr{1 + \frac{1}{\rho}}\Diag{\NN}_{FF}\inv. } Then, combining~\eqref{eq:NFUNbyNFbU},~\eqref{eq:NUNNbyfactLUL},~\eqref{eq:DiagUNFF1} with Fact~\ref{fact:LDL} yields that \eql{\label{eq:Pmorning1}}{ &\PP \preccurlyeq \pr{1 + \frac{1}{\rho}} \mu \Con \mx{ \Diag{\NN}_{FF}\inv & \zerom{F}{C} \\ \zerom{C}{F} & \zerom{C}{C} } \Con \preccurlyeq \pr{1 + \frac{1}{\rho}} \mu \Con\Diag{\NN}\inv\Con \\ =& \pr{1 + \frac{1}{\rho}}\mu \pr{\NN\dg}^\top\NN^\top\Diag{\NN}\inv\NN\NN\dg \preccurlyeq 2\pr{1 + \frac{1}{\rho}}\mu \pr{\NN\dg}^\top \U{\NN} \NN\dg \\ \preccurlyeq& 2\pr{1 + \frac{1}{\rho}}\mu \bet \pr{\NN\dg}^\top \UU \NN\dg.
} We also define the matrix \eq{ \QQ = \Contil \mx{ \MM_{FF}\inv \UU_{FF} \tpp{\MM_{FF}\inv} & \zerom{F}{C} \\ \zerom{C}{F} & \zerom{C}{C} } \Contil } in this proof, where $\Contil$ is the projection matrix onto the image space of $\MM^\top$. Similar to~\eqref{eq:xnUP1}, we have \eql{\label{eq:ymUQ1}}{ \ym^\top \UU \ym = \nA{\sc{\UU, F}}{\yu}^2 + \nA{\QQ}{\MM^\top\yu}^2. } Then, combining~\eqref{eq:MFpgeqUFF1},~\eqref{eq:MLUL1},~\eqref{eq:MFFpgeqNFF1},~\eqref{eq:DiagUNFF1} yields that \eql{\label{eq:QMc1}}{ &\MM_{FF}\inv \UU_{FF} \tpp{\MM_{FF}\inv} \preccurlyeq \frac{\mu}{1 - \mu b} \MM_{FF}\inv \U{\MM}_{FF} \tpp{\MM_{FF}\inv} \preccurlyeq \frac{\mu}{1 - \mu b}\U{\MM}_{FF}\inv \\ \preccurlyeq& \frac{\mu}{\pr{1 - \mu b}^2}\U{\NN}_{FF}\inv \preccurlyeq \frac{\mu}{\pr{1 - \mu b}^2}\pr{1 + \frac{1}{\rho}}\Diag{\NN}_{FF}\inv. } By~\eqref{eq:QMc1}, we have \eq{ \QQ &\preccurlyeq \frac{\mu}{\pr{1 - \mu b}^2}\pr{1 + \frac{1}{\rho}} \Contil \mx{ \Diag{\NN}_{FF}\inv & \zerom{F}{C} \\ \zerom{C}{F} & \zerom{C}{C} } \Contil \preccurlyeq \frac{\mu}{\pr{1 - \mu b}^2}\pr{1 + \frac{1}{\rho}} \Contil\Diag{\NN}\inv\Contil \\ &= \frac{\mu}{\pr{1 - \mu b}^2}\pr{1 + \frac{1}{\rho}} \MM\dg\MM\Diag{\NN}\inv\MM^\top\pr{\MM\dg}^\top. } Then, using Lemma~\ref{lem:pertbLDL} and $\U{\NN} \preccurlyeq \beta \UU$, we have \eql{\label{eq:QM511}}{ \QQ &\preccurlyeq \frac{\mu}{\pr{1 - \mu b}^2}\pr{1 + \frac{1}{\rho}}\MM\dg\pr{\frac{1}{2 - a^2b}\pr{\pr{4 + 2a^2b}\U{\NN} + 2b \UU}}\pr{\MM\dg}^\top \\ &\preccurlyeq \pr{1 + \frac{1}{\rho}}\frac{\mu\pr{\bet\pr{4 + 2a^2 b} + 2b}}{\pr{1 - \mu b}^2\pr{2 - a^2 b}} \MM\dg \UU \tpp{\MM\dg}. } With the above preparations, our proof of~\eqref{eq:scdiff1} proceeds in two steps. First, we prove~\eqref{eq:scdiff1} with an additional condition: $\ker\pr{\NN^\top}\cup\ker\pr{\MM} \sleq \ker\pr{\UU}$. Then, we remove this extra condition by taking limits.
To begin with, we prove~\eqref{eq:scdiff1} under the condition $\ker\pr{\NN^\top}\cup\ker\pr{\MM} \sleq \ker\pr{\UU}$. Since $\Con$ is the projection matrix onto the image space of $\NN$ and $\ker\pr{\NN^\top}\sleq \ker\pr{\UU}$, we have $\Con\UU\Con = \UU. $ Then, by~\eqref{eq:Pmorning1}, we have $\nA{\PP}{\NN\xu}^2 \leq 2\pr{1 + \frac{1}{\rho}}\mu\bet \xu^\top \NN^\top\tpp{\NN\dg}\UU\NN\dg\NN \xu = 2\pr{1 + \frac{1}{\rho}}\mu\bet \xu^\top\Con\UU\Con\xu = 2\pr{1 + \frac{1}{\rho}}\mu\bet \xu^\top\UU\xu. $ Then, combining with~\eqref{eq:xnUP1} yields that \eq{ \xn^\top \UU \xn \leq \nA{\sc{\UU, F}}{\xx}^2 + 2\pr{1 + \frac{1}{\rho}}\mu \bet \xu^\top \UU \xu = \pr{1 + 2\pr{1 + \frac{1}{\rho}}\mu \bet} \xx^\top \sc{\UU, F} \xx. } Analogously, we have $\Contil\UU\Contil = \UU$. Then, by~\eqref{eq:QM511} and~\eqref{eq:ymUQ1}, we have \eq{ \ym^\top \UU \ym \leq \pr{1 + \pr{1 + \frac{1}{\rho}}\frac{\mu\pr{\bet\pr{4 + 2a^2 b} + 2b}}{\pr{1 - \mu b}^2\pr{2 - a^2 b}}} \yy^\top \U{\sc{\UU, F}} \yy. } Combining the above equations yields that when $\ker\pr{\NN^\top}\cup\ker\pr{\MM} \sleq \ker\pr{\UU}$, \eql{\label{eq:kercupadditional1}}{ &2\yy^\top \pr{\sc{\MM, F} - \sc{\NN, F}} \xx \\ \leq& b \pr{\pr{1 + 2\pr{1 + \frac{1}{\rho}}\mu{\bet}}\xx^\top\sc{\UU, F}\xx + \pr{1 + \pr{1 + \frac{1}{\rho}}\frac{\mu\pr{\bet\pr{4 + 2a^2 b} + 2b}}{\pr{1 - \mu b}^2\pr{2 - a^2 b}}}\yy^\top\sc{\UU, F}\yy} \\ \leq& b \pr{1 + \pr{1 + \frac{1}{\rho}}\frac{\mu\pr{\bet\pr{4 + 2a^2 b} + 2b}}{\pr{1 - \mu b}^2\pr{2 - a^2 b}}}\pr{\xx^\top\sc{\UU, F}\xx + \yy^\top\sc{\UU, F}\yy},\ \forall \xx, \yy\in \Real^{\abs{C}}. } Next, we remove the condition $\ker\pr{\NN^\top}\cup\ker\pr{\MM} \sleq \ker\pr{\UU}$. The PSD matrix $\UU$ has the spectral decomposition $\UU = \sum_{i=1}^{n} \lambda_{i}\pr{\UU} \zz_i\zz_i^\top$. Since $\UU\mathbf{1} = \mathbf{0}$, without loss of generality, we may assume $\zz_1 = \mathbf{1}$. Denote $d = {\rm rank}\pr{\UU}$.
Then, by adding small symmetric perturbations of the form $\sum_{i=2}^{n-d}\dlt^{\pr{i}}\zz_i\zz_i^\top \ (\dlt^{(i)} > 0)$, we can have a sequence of PSD matrices $\dr{\Uzoo{j}}_{j\geq 0}$ such that for each $j \geq 0$, $\ker\pr{\Uzoo{j}} = \spanrm{\mathbf{1}} $, $\Uzoo{j} \succ \UU$ and $\lim_{j\arr +\infty} \Uzoo{j} = \UU$. By adding undirected edges with small weights in $\NN$, we can find a sequence of Eulerian Laplacians $\dr{\Nzoo{j}}_{j\geq 0}$ such that each $\Nzoo{j}$ is strongly connected, $\Nzoo{j}_{FF}$ is nonsingular and $\lim_{j\arr +\infty} \Nzoo{j} = \NN. $ Then, by Fact~\ref{fact:strcrank1}, \eq{ \ker\pr{\Nzoo{j}} = \ker\pr{\tpp{\Nzoo{j}}} = \spanrm{\mathbf{1}} = \ker(\Uzoo{j}). } We can let the weights of the new undirected edges in $\NN$ tend to zero faster than $\lambda_2\pr{\Uzoo{j} - \UU}$. Then, \eq{ \lim_{j\arr +\infty} \ndd{\pr{\Uzoo{j}}}{\pr{\NN - \Nzoo{j}}} = 0. } Since $\EE\aleq b\UU$ and $\UU\mathbf{1} = \mathbf{0}$, we have $\EE\mathbf{1} = \EE^\top\mathbf{1} = \mathbf{0}$. Thus, $\MM\mathbf{1} = \MM^\top\mathbf{1} = \mathbf{0}$. By adding small symmetric perturbations of the form $\sum_{i=2}^{n } \widehat{\dlt}^{(i)}\zz_i\zz_i^\top$ into $\MM$, we can find a sequence of matrices $\{\Mzoo{j}\}_{j\geq 0} $ such that $\Mzoo{j}_{FF}$ is nonsingular, $\lim_{j\arr +\infty} \Mzoo{j} = \MM $ and \eq{\ker(\Mzoo{j}) = \ker\pr{\tpp{\Mzoo{j}}} = \spanrm{\mathbf{1}} = \ker(\Uzoo{j}). } Also, by setting the perturbations added into $\MM$ to be small enough (with respect to the magnitudes of the perturbations in $\UU $), we can let \eq{ \lim_{j\arr +\infty} \ndd{\pr{\Uzoo{j}}}{\pr{\MM - \Mzoo{j}}} = 0. } Define $\bzoo{j} = b + \ndd{\pr{\Uzoo{j}}}{\pr{\NN - \Nzoo{j}}} + \ndd{\pr{\Uzoo{j}}}{\pr{\MM - \Mzoo{j}}}$. Then, combining the above equations and using the relation $\Uzoo{j} \succ \UU $ and $\MM - \NN \aleq b\UU $, we have $\lim_{j\arr +\infty} \bzoo{j} = b $ and \eq{ \Mzoo{j} - \Nzoo{j} \aleq \bzoo{j} \Uzoo{j}. 
} As the perturbations mentioned above tend to zero, we can also easily define real numbers \\ $\dr{\azoo{j}, \rhozoo{j}, \muzoo{j}, \betzoo{j}}_{j\geq J}$ such that the matrix sequence $\{\Mzoo{j}, \Nzoo{j}, \Uzoo{j}\}_{j\geq 0}$ satisfies \begin{itemize} \item $\Nzoo{j}_{FF} $ is $\rhozoo{j}$-RCDD $(\rhozoo{j} > 0)$, \item $\U{\Nzoo{j}}_{FF} \succcurlyeq \frac{1}{\muzoo{j}}\Uzoo{j}_{FF} $, \item $\U{\Nzoo{j}} \preccurlyeq \betzoo{j} \Uzoo{j} $, \item $\nt{\pr{\Uzoo{j}}^{1/2}\Diag{\Nzoo{j}}^{-1/2}} \leq \azoo{j} $, \end{itemize} for any $j \geq J$, and $\azoo{j}$, $\rhozoo{j}$, $\muzoo{j}$, $\betzoo{j}$ tend to $a$, $\rho$, $\mu$, $\bet$, respectively. Since $b < \min\dr{\frac{2}{a^2}, \frac{1}{\mu}}$ and $\bzoo{j}, \azoo{j}, \muzoo{j}$ tend to $b, a, \mu$ respectively, there exists a $J' > 0$ such that for any $j \geq J'$, $\bzoo{j} < \min\dr{\frac{2}{\pr{\azoo{j}}^2}, \frac{1}{\muzoo{j}} }. $ Then by~\eqref{eq:kercupadditional1}, we have for any $ j\geq \max\dr{J, J'}$ and $\xx, \yy\in \Real^{\abs{C}}$, \eql{\label{eq:perturbedscr1fknl1}}{ &2\yy^\top \pr{\sc{\Mzoo{j}, F} - \sc{\Nzoo{j}, F}} \xx \\ \leq& \bzoo{j} \pr{1 + \pr{1 + \frac{1}{\rhozoo{j}}}\frac{\muzoo{j}\pr{\betzoo{j}\pr{4 + 2(\azoo{j})^2 \bzoo{j}} + 2\bzoo{j}}}{\pr{1 - \muzoo{j} \bzoo{j}}^2\pr{2 - (\azoo{j})^2 \bzoo{j}}}}\pr{\xx^\top\sc{\Uzoo{j}, F}\xx + \yy^\top\sc{\Uzoo{j}, F}\yy}. } Since $\MM_{FF} $ is nonsingular, we have $\lim_{j\arr +\infty} \iv{\Mzoo{j}_{FF}} = \MM_{FF}\inv$. Thus, $\lim_{j\arr +\infty} \sc{\Mzoo{j}, F} = \sc{\MM, F}$. Analogously, $\lim_{j\arr +\infty} \sc{\Uzoo{j}, F} = \sc{\UU, F}$, $\lim_{j\arr +\infty} \sc{\Nzoo{j}, F} = \sc{\NN, F}$. Then, taking limits on both sides of~\eqref{eq:perturbedscr1fknl1} and combining with Fact~\ref{lem:ne} lead to~\eqref{eq:scdiff1}. \end{proof} \subsection{Inductive Accumulation of Errors} We can now obtain a relatively tight bound for $\sc{\Ltt{K}, F} - \sc{\Lt{K}, F}$ by bounding $\gam_k$ iteratively.
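Before stating the bound, it may help to make the central operation concrete. The following NumPy sketch is ours (the function name and example are illustrative, not part of the algorithm): it implements the Schur complement $\sc{\MM, F} = \MM_{CC} - \MM_{CF}\MM_{FF}^{-1}\MM_{FC}$ and checks on a directed $3$-cycle that eliminating a vertex from an Eulerian Laplacian again yields an Eulerian Laplacian (zero row and column sums), the structural fact used throughout this section.

```python
import numpy as np

def schur_complement(M, F):
    """Schur complement of M onto C = [n] \\ F:
    sc(M, F) = M_CC - M_CF @ inv(M_FF) @ M_FC,
    well defined whenever M_FF is nonsingular (e.g. alpha-RCDD)."""
    n = M.shape[0]
    C = [i for i in range(n) if i not in set(F)]
    M_FF = M[np.ix_(F, F)]
    M_FC = M[np.ix_(F, C)]
    M_CF = M[np.ix_(C, F)]
    M_CC = M[np.ix_(C, C)]
    return M_CC - M_CF @ np.linalg.solve(M_FF, M_FC)

# Eulerian Laplacian of the directed 3-cycle 0 -> 1 -> 2 -> 0:
L = np.array([[ 1., -1.,  0.],
              [ 0.,  1., -1.],
              [-1.,  0.,  1.]])
S = schur_complement(L, F=[0])
# Row and column sums remain zero: the result is again Eulerian.
assert np.allclose(S @ np.ones(2), 0) and np.allclose(S.T @ np.ones(2), 0)
```

On this example $\sc{\LL, \{0\}}$ equals the (undirected) Laplacian of a single edge of weight $1$, consistent with eliminating the middle vertex of a unit-weight cycle.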
\begin{lemma}\label{lem:scLttyes} For any $\dlt_0 \in (0, 1)$, with a small $\eps = O\pr{\frac{\dlt_0}{K}} $ in Algorithm~\ref{alg:SparSchur}, the exact and approximate $K$-th partially-block-eliminated Laplacians $\Lt{K}, \Ltt{K}$ satisfy \eql{\label{eq:scLttgood}}{ {\sc{\Ltt{K}, F} - \sc{\Lt{K}, F}} \aleq O\pr{2^K \dlt_0} \cdot \U{\sc{\LL, F}}. } \end{lemma} \begin{proof} First, we will prove $\gam_k \leq O\pr{\dlt_0} \ (\forall 0\leq k\leq K)$ by induction, where $\dr{\gam_k}$ are defined in~\eqref{eq:defgamk}. Since $\Ltt{0} = \LL $, we have $\gam_0 = 0$. Now, assume $\gam_{i} \leq O\pr{\dlt_0},\ \forall 0\leq i\leq k-1 $; we will show that $\gam_k \leq O\pr{\dlt_0}$. By Lemma~\ref{lem:Mti}, $\Mt{0, k}$ is an Eulerian Laplacian, and $\Mt{0, k}_{-C, -C}$, $\Mt{0, k}_{-[n], -[n]}$ are $\alp$-RCDD. By Lemma~\ref{lem:Ersepsz1}, $\XL{k}$ is a Laplacian; thus, $\XL{k}$ is PSD and $\XL{k}\mathbf{1} = \mathbf{0}$. By Lemma~\ref{lem:Ersepsz1}, $\XL{k}_{-C, -C}$, $\XL{k}_{-[n], -[n]}$ are $\alp$-RCDD. Thus, $\XL{k}_{-[n], -[n]} \preccurlyeq \frac{2 + \alp}{1 + \alp}\Diag{\XL{k}}_{-[n], -[n]} \preccurlyeq 2\Diag{\XL{k}}_{-[n], -[n]}. $ By combining with the fact $\Diag{\Mt{0, k}}_{-[n], -[n]} = \Diag{\XL{k}}_{-[n], -[n]}$ from Lemma~\ref{lem:Ersepsz1}, we have \eq{ \U{\Mt{0, k}}_{-[n], -[n]} \succcurlyeq \frac{\alp}{1 + \alp}\Diag{\Mt{0, k}}_{-[n], -[n]} = \frac{\alp}{1 + \alp}\Diag{\XL{k}}_{-[n], -[n]} \succcurlyeq \frac{\alp}{2 + 2\alp}\XL{k}_{-[n], -[n]}. } By the induction hypothesis, $\sum_{i=0}^{k-1}\gam_i \leq O(k\dlt_0)$. Then, by combining with Lemma~\ref{lem:Ersepsz1} and the condition $\epsz = O\pr{\frac{\dlt_0}{K}}$, we have \eq{ &\U{\Mt{0,k}} \preccurlyeq \pr{4 + \frac{2}{\alp} + O(\dlt_0)}\XL{k} \\ &\nt{\pr{\XL{k}}^{1/2}\Diag{\Mt{0,k}}^{-1/2}}^2 \leq 2 \\ &\Ers{k} \aleq \epsz \pr{4k + \frac{2k}{\alp} + k\dlt_0}\cdot \XL{k} \preccurlyeq O\pr{\dlt_0}\cdot \XL{k}.
} Now, we invoke Lemma~\ref{lem:schurdUL1} with $\NN:=\Mt{0,k}$, $\MM := \Mtt{0,k}$, $\UU := \XL{k}$, $\EE:= \Ers{k}$, $m:= 2^k \Fn + \Cn$, $F:= [\puts{k}]\dele [n] $, $C:= [n] $, $\rho := \alp$, $\mu := \frac{2 + 2\alp}{\alp}$, $\bet:= 4 + \frac{2}{\alp} + O\pr{\dlt_0} $, $a^2:= 2$, $b:= O(\dlt_0)$. By the arguments above, all the conditions of Lemma~\ref{lem:schurdUL1} are satisfied. Then, we have \eql{\label{eq:scMFtil0k}}{ &\sc{\Mtt{0,k}, -[n]} - \sc{\Mt{0,k}, -[n]} \\ \aleq& b \pr{1 + \pr{1 + \frac{1}{\rho}}\frac{\mu\pr{\bet\pr{4 + 2a^2 b} + 2b}}{\pr{1 - \mu b}^2\pr{2 - a^2 b}}} \cdot \sc{\XL{k}, -[n]} \\ \preccurlyeq& O\pr{\dlt_0} \cdot \sc{\XL{k}, -[n]}. } By combining with the fact $\Lt{k} = \sc{\Mt{0, k}, -[n]}$, $\Ltt{k} = \sc{\Mtt{0, k}, -[n]}$, the definition of $\gam_k$ in~\eqref{eq:defgamk} and Fact~\ref{lem:ne}, we have $\gam_k \leq O(\dlt_0)$. Then, by induction, we have $\gam_K\leq O\pr{\dlt_0}$. By setting $\NN:=\Mt{0,k}$, $\MM := \Mtt{0,k}$, $\UU := \XL{k}$, $\EE:= \Ers{k}$, $m:= 2^k \Fn + \Cn$, $F:= [\puts{k}]\dele C $, $C:= C $, $\rho := \alp$, $\mu := \frac{2 + 2\alp}{\alp}$, $\bet:= 4 + \frac{2}{\alp} + O\pr{\dlt_0} $, $a^2:= 2$, $b:= O(\dlt_0)$ in Lemma~\ref{lem:schurdUL1}, it is easy to check that all conditions of Lemma~\ref{lem:schurdUL1} are satisfied by similar arguments as above. Thus, similar to~\eqref{eq:scMFtil0k}, we have \eql{\label{eq:scGs}}{ & \sc{\Mtt{0,K}, -C} - \sc{\Mt{0,K}, -C} \aleq O\pr{\dlt_0}\cdot \sc{\XL{K}, -C}. } By Fact~\ref{fact:sctran}, Lemma~\ref{lem:Mti}, Lemma~\ref{lem:Mtti}, \eq{ &\sc{\Mt{0,K}, -C} = \sc{\sc{\Mt{0,K},-[n]}, F} = \sc{\Lt{K}, F}. \\ &\sc{\Mtt{0,K}, -C} = \sc{\sc{\Mtt{0,K}, -[n]}, F} = \sc{\Ltt{K}, F}. } By Lemma~\ref{lem:Ersepsz1}, we have $\sc{\XL{K}, - C } \preccurlyeq 2^{K}\U{\sc{\LL, F}} $. Substituting the above 3 equations into~\eqref{eq:scGs} and combining with Fact~\ref{fact:aleqpleq1} complete this proof. 
\end{proof} \begin{remark} Since $\Stt{0} = \frac{1}{2^K}\pr{\Ltt{K}_{CC} - \Xap} $ (in Algorithm~\ref{alg:SparSchur}), the $2^K$ factor on the RHS of~\eqref{eq:scLttgood} does not matter. \end{remark} Now, we are prepared to prove Theorem~\ref{thm:SparSchur}. \begin{proof}[Proof of Theorem~\ref{thm:SparSchur}] By Lemma~\ref{lem:scLtkequal1}, we have the expansion \eq{ &\SS - \sc{\LL, F} = \SS - \Sap + \Stt{0} + \RR - \frac{1}{2^K}\sc{\Lt{K}, F} \\ =& \SS - \Sap + \frac{1}{2^K}\pr{\Ltt{K}_{CC} - \Xap - \sc{\Lt{K}, F}} + \RR \\ =& \SS - \Sap + \frac{1}{2^K}\pr{\sc{\Ltt{K}, F} - \sc{\Lt{K}, F}} + \frac{1}{2^K}\pr{\Att{K}_{CF}\DD_{FF}\inv\Att{K}_{FC} - \Xap } \\ & + \frac{1}{2^K}\pr{\Att{K}_{CF}\pr{\DD_{FF} - \Att{K}_{FF}}\inv\Att{K}_{FC} - \Att{K}_{CF}\DD_{FF}\inv\Att{K}_{FC}} + \RR \\ =& \SS - \Sap + \frac{1}{2^K}\pr{\sc{\Ltt{K}, F} - \sc{\Lt{K}, F}} + \frac{1}{2^K}\EX + \Rap. } By Lemma~\ref{lem:scLttyes} and choosing a small $\eps = O\pr{\frac{\dlt}{K}}$ in Algorithm~\ref{alg:SparSchur}, we can ensure \eq{ \frac{1}{2^K}\pr{\sc{\Ltt{K}, F} - \sc{\Lt{K}, F}} \aleq \frac{\dlt}{4}\U{\sc{\LL, F}}. } Then, by Lemma~\ref{lem:scLtkequal1} and Fact~\ref{fact:aleqU}, \eq{ \U{\sc{\Ltt{K}, F}} \preccurlyeq \U{\sc{\Lt{K}, F}} + 2^K\cdot\frac{\dlt}{4}\U{\sc{\LL, F}} = 2^K\pr{1 + \frac{\dlt}{4}}\U{\sc{\LL, F}}. } Combining the above equation with~\eqref{eq:EXscLtt} and Fact~\ref{fact:aleqpleq1} yields that by choosing $\eps \leq \frac{\dlt}{4 + \dlt} $, we have \eq{ \frac{1}{2^K}\EX \aleq \frac{\eps}{2^K}\U{\sc{\Ltt{K}, F}} \preccurlyeq \pr{1 + \frac{\dlt}{4}}\eps \U{\sc{\LL, F}} \preccurlyeq \frac{\dlt}{4}\U{\sc{\LL, F}}. } By Fact~\ref{fact:scUpleqUsc}, Fact~\ref{fact:la2scup1} and Cheeger's inequality, we have $ \lambda_2\pr{\U{\sc{\LL, F}}} \geq \lambda_2\pr{\sc{\U{\LL}, F}} \geq \lambda_2\pr{\U{\LL}} \geq \frac{\min_{i\in [n]}\DD_{ii}\pr{\min_{(i,j): \LL_{ij}\neq 0}\abs{\LL_{ij}}}^2}{8\pr{\sum_{i\in [n]}\DD_{ii}}^2} = \Omega\pr{\frac{1}{\poly{n}}}.
$ Then, by choosing $K = O\pr{\log\log \frac{n}{\dlt}}$ and using~\eqref{eq:R}, we can ensure $ \nd{\sc{\LL, F}}{\Rap} \leq \frac{1}{\lambda_2\pr{\U{\sc{\LL, F}}}} \nt{\Rap} \leq \frac{\dlt}{4}. $ Since $\LL$ is strongly connected, $\sc{\LL, F}$ is strongly connected. So, $\ker\pr{\U{\sc{\LL, F}}} = \spanrm{\mathbf{1}} $. By combining with the fact $\Rap\mathbf{1} = \Rap^\top\mathbf{1} = \mathbf{0} $ from Lemma~\ref{lem:LapetcpropSparSchurCpmt1}, we have \eq{ \Rap \aleq \frac{\dlt}{4} \U{\sc{\LL, F}}. } Combining the above equations with Fact~\ref{fact:aleqsum} yields that \eq{ \Sap - \sc{\LL, F} \aleq \frac{3\dlt}{4}\U{\sc{\LL, F}}. } Then, $\U{\Sap}\preccurlyeq \pr{1 + \frac{3\dlt}{4}}\U{\sc{\LL, F}} \preccurlyeq 2\U{\sc{\LL, F}}$. Thus, \eq{ \Sap - \SS \aleq \frac{\dlt}{8}\U{\Sap} \preccurlyeq \frac{\dlt}{4}\U{\sc{\LL, F}}. } Then,~\eqref{eq:sttgood} follows. The connectivity of $\SS$ can be readily checked as follows. If $\SS$ is not \strc, then there is a vector $\xx\neq \mathbf{0}$, not parallel to $\mathbf{1}$, such that $\SS\xx = \mathbf{0}$. Since $\LL$ is a \strc\ Eulerian Laplacian, $\sc{\LL, F}$ is \strc; thus, $\xx^\top\U{\sc{\LL, F}}\xx > 0$. Then, by~\eqref{eq:sttgood}, \eq{ \xx^\top\U{\sc{\LL, F}}\xx = \xx^\top\pr{\U{\sc{\LL, F}} - \U{\SS}}\xx \leq \dlt \xx^\top\U{\sc{\LL, F}}\xx, } which contradicts the condition $\dlt\in (0, 1)$. Thus, $\SS$ is \strc. Since we call $\SE$ in each iteration and $\eps = \Otil{\dlt}$, we have $\nnz{\Ltt{k}} = \Otil{\NSE\pr{n, \dlt}}$. Since $\eps = \Otil{\dlt}$ and $K = \Otil{1}$, the total running time of $\SP$ and $\SparP$ is $\Otil{\NSE\pr{n, \dlt}\dlt^{-2}\log n }$ and $\nnz{\Ltt{k, 0}} = \Otil{\NSE\pr{n, \dlt}\dlt^{-2}\log n }$.
As $K = \Otil{1}$, $\eps = \Otil{\dlt}$, by Theorem~\ref{thm:SparEoracle1} and Lemma~\ref{lem:SE}, the total running time of $\SE$ is $O\pr{\TSE\pr{m,n,\dlt}}+ \Otil{\TSE\pr{\NSE\pr{n, \dlt}\dlt^{-2}\log n , n, \dlt} } = O\pr{\TSE\pr{m,n,\dlt}}+ \Otil{\TSE\pr{\NSE\pr{n, \dlt}\dlt^{-2} , n, \dlt}\log n } $, which gives the overall running time bound for Algorithm~\ref{alg:SparSchur}. \end{proof} \section{A Nearly-linear Time Solver }\label{sec:solver} In this section, we complete the Sparsified Schur Complement based algorithm by invoking the nearly-linear time Schur complement sparsification procedure derived above in Sections~\ref{sec:reformPBEvAM1} and~\ref{sec:Schurcplstable2}. We first call this Schur complement sparsification procedure repeatedly to construct a sparse Schur complement chain, in Section~\ref{sec:scc}. Then, in Section~\ref{sec:preconditioner}, we show that this Schur complement chain gives a preconditioner $\PreC$ for the initial Eulerian Laplacian matrix. The full high-accuracy solver then follows from invoking this preconditioner inside Richardson iteration.
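The solver's structure rests on the standard block-elimination identity: a solve on $[n]$ reduces to one solve with $\MM_{FF}$, one solve with the Schur complement $\sc{\MM, F}$ on $C$, and a back-substitution on $F$. A self-contained NumPy sketch of this identity on a nonsingular test matrix (function name and example ours; the actual solver replaces the exact inverses with sparsified approximations applied recursively down the chain):

```python
import numpy as np

def solve_by_block_elimination(A, b, F):
    """Solve A x = b by eliminating the block F:
    sc(A,F) x_C = b_C - A_CF A_FF^{-1} b_F, then back-substitute x_F."""
    n = A.shape[0]
    C = [i for i in range(n) if i not in set(F)]
    A_FF, A_FC = A[np.ix_(F, F)], A[np.ix_(F, C)]
    A_CF, A_CC = A[np.ix_(C, F)], A[np.ix_(C, C)]
    S = A_CC - A_CF @ np.linalg.solve(A_FF, A_FC)   # Schur complement
    x_C = np.linalg.solve(S, b[C] - A_CF @ np.linalg.solve(A_FF, b[F]))
    x_F = np.linalg.solve(A_FF, b[F] - A_FC @ x_C)  # back-substitution
    x = np.empty(n)
    x[F], x[C] = x_F, x_C
    return x

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)          # nonsingular PD test matrix
b = rng.standard_normal(4)
x = solve_by_block_elimination(A, b, F=[0, 1])
assert np.allclose(A @ x, b)
```

Applying this identity recursively to the Schur complements in the chain is what yields the block Cholesky structure of the preconditioner.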
\begin{algorithm}[H] \caption{Block Cholesky solver for directed Laplacians } \label{alg:DLap} \KwIn{strongly connected Eulerian Laplacian $\LL\in \MS{n}{n}$; query vectors $\dr{\bt{q}}_{q=1}^Q\sleq \Real^n$ with each $\bt{q} \perp \mathbf{1}$; error parameters $\dr{\eps_q}_{q=1}^Q\sleq (0, 1)$ } \KwOut{solutions $\dr{\xt{q }}_{q=1}^Q\sleq \Real^n$ } Call $\SCC\pr{\LL, 0.25, 0.1}$ to compute a $\dr{0.25, 0.05, \dr{\frac{ 0.1}{ i^2}}_{i=1}^{O(\log n)} }$-Schur complement chain $\dr{\dr{\Stt{i}}_{i=1}^d, \dr{F_i}_{i=1}^d}$ (Sections~\ref{sec:reformPBEvAM1},~\ref{sec:Schurcplstable2},~\ref{sec:scc}) \; Generate the operator $\ZZ\pr{\xx} = \PreC\pr{\dr{\dr{\Stt{i}}_{i=1}^d, \dr{F_i}_{i=1}^d}, \xx, O\pr{\log n } }$ (Section~\ref{sec:preconditioner}) \; Use preconditioned Richardson iteration with the preconditioner $\ZZ$ to solve the Laplacian systems: for each query vector $\bt{q}$, compute $\xt{q} \arl \PRI\pr{\LL, \bt{q}, \ZZ\pr{\cdot}, 1, O\pr{\log \pr{n/\eps_q}} } $ \end{algorithm} \subsection{Schur Complement Chains }\label{sec:scc} We first define Schur complement chains over directed graphs, which are a variant of the Schur complement chains for undirected graphs in~\cite{kyng2016sparsified}. \begin{definition}\label{def:SCC1} (Schur complement chain) Given a \strc\ Eulerian Laplacian $\LL\in\MS{n}{n}$, an $\pr{\alp, \bet, \dr{\dlt_i}_{i=1}^d}$-Schur complement chain of $\LL$ is a sequence of \strc\ Eulerian Laplacians and subsets $\dr{\dr{\Stt{i}}_{i=1}^d, \dr{F_i}_{i=1}^d }$ satisfying \begin{enumerate}[(i)] \item $\dr{F_i}_{i=1}^d$ is a partition of $[n]$; each $\Stt{i}$ is supported on $\pr{C_{i-1}, C_{i-1}} $, where $C_i \stackrel{\mathrm{def}}{=} [n]\dele \pr{\cup_{j=1}^{i} F_j} \ (i=0, 1, \cdots, d-1) $; $\abs{C_i} \leq \pr{1 - \bet}^i n$; $\abs{F_d} = \abs{C_{d-1}} = O\pr{1 }. $ \item For $1\leq i\leq d - 1 $, $\Stt{i}_{F_{i} F_{i}} $ is $\alp$-RCDD.
\item $\Stt{1} - \LL \aleq \dlt_1 \cdot \U{\LL} $ and $\Stt{i+1} - \sc{\Stt{i}, F_i} \aleq \dlt_{i+1} \cdot \U{\sc{\Stt{i}, F_i}},\ 1\leq i\leq d-1. $ \item $\U{\Stt{1}} \succcurlyeq \U{\LL}$ and $\U{\Stt{i+1}} \succcurlyeq \U{\sc{\Stt{i}, F_i} },\ 1\leq i\leq d-1$. \label{enum:ErrPSD1} \end{enumerate} \end{definition} We also denote $F_0 = C_d = \emptyset$, $C_0 = [n]$ for notational simplicity. \begin{remark} Compared with the Schur complement chains for undirected graphs from~\cite{kyng2016sparsified}, the only new condition is Condition~\eqref{enum:ErrPSD1}. It guarantees the positive semi-definiteness of the symmetrization of the sparsified approximate Eulerian Laplacian $\Lap$ and the error-bounding matrix $\Bap$ defined in Section~\ref{sec:preconditioner}. \end{remark} To construct a Schur complement chain, we first use the following lemma to find an $\alp$-RCDD subset $F_1$, and then apply the Schur complement sparsification method $\SparseSchur$ to compute $\Stt{2}$, an approximation of $\sc{\Stt{1}, F_1}$. We then repeat this process to obtain the desired Schur complement chain. \begin{lemma}\label{lem:FindRCDD} (Theorem~A.1 of~\cite{cohen2018solving}) Given an Eulerian Laplacian $\LL\in \MS{n}{n}$ with $\nnz{\LL} = m$, the routine $\FindRCDD$ outputs a subset $F\sleq [n]$ such that $\abs{F} \geq \frac{n}{16\pr{1 + \alp}}$ and $\LL_{FF}$ is $\alp$-RCDD in time $O\pr{m\log\frac{1}{p}}$ with probability at least $1 - p$. \end{lemma} By Lemma~\ref{lem:FindRCDD}, we can choose for instance $\alp = 0.1$ in practice, so we assume $\alp = O(1)$ when analyzing the complexities below. Our method to construct a Schur complement chain is given in Algorithm~\ref{alg:SCC}. Its performance is shown in Theorem~\ref{thm:SCC}, whose proof is deferred to Appendix~\ref{sec:someprfs}.
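For intuition, the exact (dense) Schur complement that $\SparseSchur$ approximates is $\LL_{CC} - \LL_{CF}\LL_{FF}^{-1}\LL_{FC}$. The following numpy sketch — the $4$-cycle and the eliminated set are illustrative choices, not taken from this paper — computes it and checks the standard fact that the Schur complement of an Eulerian Laplacian is again an Eulerian Laplacian, i.e., all row and column sums vanish:

```python
import numpy as np

def schur_complement(L, F):
    """Exact Schur complement of L onto the complement C of the index set F:
    sc(L, F) = L_CC - L_CF @ L_FF^{-1} @ L_FC."""
    n = L.shape[0]
    C = [i for i in range(n) if i not in F]
    return L[np.ix_(C, C)] - L[np.ix_(C, F)] @ np.linalg.solve(L[np.ix_(F, F)], L[np.ix_(F, C)])

# A small Eulerian directed Laplacian: the unit-weight directed 4-cycle
# 0 -> 1 -> 2 -> 3 -> 0, with L = D_out - A^T.
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[u, v] = 1.0
L = np.diag(A.sum(axis=1)) - A.T
S = schur_complement(L, [0])   # eliminate vertex 0
```

Row and column sums of $S$ are zero, so $S$ is again an Eulerian Laplacian (here, the directed $3$-cycle obtained from the shortcut edge $3\to 1$).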
\begin{algorithm} \caption{$\SCC\pr{\LL, \alp, \dlt }$ } \label{alg:SCC} \KwIn{strongly connected Eulerian Laplacian $\LL\in\MS{n}{n}$; parameters $\alp > 0$, $\dlt\in (0, 1] $ } \KwOut{$\pr{\alp, \frac{1}{16\pr{1 + \alp}}, \dr{\frac{\dlt}{i^2}}_{i=1}^d}$-Schur complement chain $\dr{\dr{\Stt{i}}_{i=1}^d, \dr{F_i}_{i=1}^d} $ } Set $\dlt_i' = \frac{\dlt}{3 i^2}$ for $i \geq 1$. \; Compute $\St{1} \arl \SparE\pr{\LL, \dlt_1'}$ \; Let $\Stt{1} \arl \St{1} + \frac{\dlt_1'}{1 - \dlt_1' }\U{\St{1}}. $ \; Set $i \arl 0$, $C_0 = [n]$ \; \While{$\abs{C_i} > 100$}{ $i \arl i + 1$ \; $F_i \arl \FindRCDD\pr{\Stt{i}, \alp}$ \; $C_i \arl C_{i-1}\dele F_i$ \; $\St{i+1} \arl \SparseSchur\pr{\Stt{i}, F_i, \dlt_{i+1}'} $ \; $\Stt{i+1} \arl \St{i+1} + \frac{\dlt_{i+1}' }{1 - \dlt_{i+1}' }\U{\St{i+1}} $ } Return $\dr{\dr{\Stt{i}}_{i=1}^d, \dr{F_i}_{i=1}^d} $ \end{algorithm} \begin{theorem}\label{thm:SCC} Given a \strc\ Eulerian Laplacian $\LL\in \MS{n}{n}$ and parameters $\alp = O(1)$, $\dlt\in (0, 1] $, the routine $\SCC$ runs in time \eq{ O\pr{\TSE\pr{m, n, \dlt}} + \Otil{\TSE\pr{\NSE\pr{n, \dlt}\dlt^{-2}, n, \dlt }\log n } } with high probability to return an $\pr{\alp, \frac{1}{16\pr{1 + \alp}}, \dr{\frac{\dlt}{i^2}}_{i=1}^d}$-Schur complement chain, where $d = O\pr{\log n}$. In addition, $\sum_{i=1}^{d}\nnz{\Stt{i}} = O\pr{\NSE\pr{n, \dlt}}$. \end{theorem} \subsection{Construction of the Preconditioner and the Solver }\label{sec:preconditioner} After constructing a desirable Schur complement chain, we use it to build a preconditioner and solve $\LL\xx = \bb$ via the preconditioned Richardson iteration. Consider a linear system $\AA\xx = \bb$, where $\bb$ is in the image space of $\AA$. Given a preconditioner $\ZZ$, the classical preconditioned Richardson iteration updates as follows: \eq{ \xt{k+1} \arl \xt{k} + \eta \ZZ\pr{\bb - \AA \xt{k}}. } We initialize $\xt{0} = \mathbf{0}$ for simplicity.
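As a minimal numerical illustration of this update rule — with a toy symmetric positive definite system, a Jacobi preconditioner $\ZZ = \Diag{\AA}^{-1}$, and $\eta = 1/2$, all chosen for the example rather than taken from our solver — one may write:

```python
import numpy as np

def preconditioned_richardson(A, b, Z, eta, N):
    """x^{(k+1)} = x^{(k)} + eta * Z @ (b - A @ x^{(k)}), initialized at x^{(0)} = 0."""
    x = np.zeros_like(b)
    for _ in range(N):
        x = x + eta * Z @ (b - A @ x)
    return x

# Toy SPD system with a Jacobi (diagonal) preconditioner.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
Z = np.diag(1.0 / np.diag(A))
x = preconditioned_richardson(A, b, Z, eta=0.5, N=200)
```

Convergence is governed by the spectral norm of $\PP_{\AA} - \eta\ZZ\AA$ appearing in the lemma below; for this toy instance it is below $1$, so the iterates converge to $\AA^{-1}\bb$ geometrically.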
This procedure is denoted by $\xt{N} = \PRI\pr{\AA, \bb, \ZZ, \eta, N}$. We will use the following fundamental lemma to guarantee the performance of the preconditioned Richardson iteration in our methods. \begin{lemma}\label{lem:PRIconverge1} (Lemma~4.2 of~\cite{cohen2017almost}) Let $\AA, \ZZ, \UU\in \MS{n}{n}$, where $\UU$ is PSD and $\ker\pr{\UU} \sleq \ker\pr{\ZZ} = \ker\pr{\ZZ^\top} = \ker\pr{\AA} = \ker\pr{\AA^\top}$. Let $\bb\in \Real^n$ be a vector inside the image space of $\AA$. Denote the projection onto the image space of $\AA$ by $\PA$. Denote $\xt{N} = \PRI\pr{\AA, \bb, \ZZ, \eta, N}$. Then, $\xt{N}$ satisfies \eq{ \nA{\UU}{\xt{N} - \AA\dg\bb} \leq \narr{\UU}{\PP_{\AA} - \eta\ZZ\AA}^N \nA{\UU}{\AA\dg\bb}. } In addition, the preconditioned Richardson iteration is a linear operator with \eql{\label{eq:PrecondiRichardsonitera}}{ \xt{N} = \eta \sum_{k=0}^{N-1}\pr{\PA - \eta\ZZ \AA }^{k}\ZZ\bb. } \end{lemma} Our construction for the preconditioner is illustrated in Algorithm~\ref{alg:precondition}. To analyze Algorithm~\ref{alg:precondition}, we define the following matrices. $\Con = \II - \frac{\mathbf{1}\one^\top}{n}$ is the projection matrix onto the image space of $\LL$. $\Dap$ is an $n$-by-$n$ diagonal matrix with $\Dap_{F_i F_i} = \Diag{\Stt{i}}_{F_i F_i}$ for $i\in [d]$. $\Mpt{i, N}$ is the linear operator corresponding to the preconditioned Richardson iterations \eq{ \Mpt{i, N} = \frac{1}{2}\sum_{k=0}^{N-1}\pr{\II - \frac{1}{2}\Dap_{F_i F_i}\inv\Stt{i}_{F_i F_i}}^k\Dap_{F_i F_i}\inv = \frac{1}{2}\sum_{k=0}^{N-1}\Dap_{F_i F_i}\inv\pr{\II - \frac{1}{2}\Stt{i}_{F_i F_i}\Dap_{F_i F_i}\inv}^k,\ i \in [d - 1]. 
} $\Ltilt{i, N}$ and $\Utilt{i, N}$ are block lower triangular and block upper triangular matrices of the block Cholesky factorization with \eq{ \Ltilt{i, N} = \mx{ \II_{\sum_{j=1}^{i-1}\abs{F_j}} & & \\ & \II & \\ & \Stt{i}_{C_i F_i}\Mpt{i, N} & \II },\ \Utilt{i, N} = \mx{ \II_{\sum_{j=1}^{i-1}\abs{F_j}} & & \\ & \II & \Mpt{i, N}\Stt{i}_{F_i C_i} \\ & & \II }, } where $\II_k$ denotes the $k$-by-$k$ identity matrix. $\DS{N}$ is the block diagonal matrix corresponding to the block Cholesky factorization \eq{ \DS{N} = \mx{ \iv{\Mpt{1, N}} & & & \\ & \ddots & & \\ & & \iv{\Mpt{d-1, N}} & \\ & & & \Stt{d}_{F_d F_d} }, } where the invertibility of $\Mpt{i, N}$ is given by Lemma~\ref{lem:Mpt}. Note that \eq{ \pr{\Ltilt{i, N}}\inv = \mx{ \II_{\sum_{j=1}^{i-1}\abs{F_j}} & & \\ & \II & \\ & - \Stt{i}_{C_i F_i}\Mpt{i, N} & \II },\ \pr{\Utilt{i, N}}\inv = \mx{ \II_{\sum_{j=1}^{i-1}\abs{F_j}} & & \\ & \II & - \Mpt{i, N}\Stt{i}_{F_i C_i} \\ & & \II }. } Then, the routine $\PRI$ is a linear operator, equivalent to multiplying the vector $\xx$ by the matrix $\Con\Zhat$, where $\Zhat\in\MS{n}{n}$ is defined as follows: \eq{ \Zhat = \pr{\Utilt{1, N}}\inv \bigcdot \cdots \bigcdot \iv{\Utilt{d-1, N}} \pr{\DS{N}}\dg \iv{\Ltilt{d-1, N}} \bigcdot \cdots \bigcdot \iv{\Ltilt{1, N}}. } We also define the following matrices, which are the counterparts of $\dr{\Ltilt{i, N}}$ when $N = +\infty$ in Algorithm~\ref{alg:precondition}: \eq{ \Ltilt{i, \infty} = \mx{ \II_{\sum_{j=1}^{i-1}\abs{F_j}} & & \\ & \II & \\ & \Stt{i}_{C_i F_i}\iv{\Stt{i}_{F_i F_i}} & \II }. } The matrices $\dr{\Utilt{i, \infty}}$, $\dr{\DS{\infty}}$ are defined similarly by replacing $\Mpt{i, N}$ with $\iv{\Stt{i}_{F_i F_i}}$ in $\dr{\Utilt{i, N}}$, $\dr{\DS{N}}$.
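These factors compose according to the standard one-level block elimination identity $M = \widetilde{L}\,\mathrm{diag}(M_{FF},\, M_{CC}-M_{CF}M_{FF}^{-1}M_{FC})\,\widetilde{U}$; a short numpy sketch (on a random well-conditioned test matrix, not a Laplacian) verifies this identity for a single level:

```python
import numpy as np

rng = np.random.default_rng(0)
n, f = 5, 2                                      # eliminate the first f indices
M = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
F, C = slice(0, f), slice(f, n)

S = M[C, C] - M[C, F] @ np.linalg.solve(M[F, F], M[F, C])    # Schur complement

Lo = np.eye(n); Lo[C, F] = M[C, F] @ np.linalg.inv(M[F, F])  # block lower factor
Up = np.eye(n); Up[F, C] = np.linalg.inv(M[F, F]) @ M[F, C]  # block upper factor
D = np.zeros((n, n)); D[F, F] = M[F, F]; D[C, C] = S         # block diagonal
```

Then $\mathrm{Lo}\cdot D\cdot \mathrm{Up} = M$ exactly, and the inverse of a unit block triangular factor is obtained by negating its off-diagonal block, mirroring the formulas noted above; iterating this elimination over the blocks $F_1,\ldots,F_{d-1}$ gives the multi-level factorization.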
Define $\Lap$ as an approximation for $\LL$ with the errors induced by the Schur complement sparsification procedure \eql{\label{eq:defLap}}{ \Lap = \Stt{1} + \sum_{i=1}^{d-1} \put{\Stt{i+1} - \sc{\Stt{i}, F_{i}}, C_{i}, C_{i}, n}, } where the notation $\put{\cdot}$, defined in Section~\ref{sec:ULtk}, denotes placing a matrix at the designated position in an all-zeros matrix of the designated size. Then, by direct calculation, \eql{\label{eq:LapLform1}}{ \Lap = \Ltilt{1, \infty} \cdots \Ltilt{d-1, \infty} \DS{\infty} \Utilt{d-1, \infty} \cdots \Utilt{1, \infty}. } The following matrices $\BB$ and $\Bap$ play the role of $\UU$ in Lemma~\ref{lem:PRIconverge1}: \eq{ &\BB = \dlt_1\U{\LL} + \sum_{i=2}^{d} \dlt_i\put{\U{\Stt{i}}, C_{i-1}, C_{i-1}, n}, \\ &\Bap = \dlt_1\U{\Lap} + \sum_{i=1}^{d-1} \dlt_{i+1} \put{\U{\sc{\Lap, \cup_{j=1}^{i}F_j}}, C_i, C_i, n }. } The proofs of the following lemmas are deferred to Appendix~\ref{sec:someprfs}. \begin{lemma}\label{lem:BBpleqo1Bap} If the input $\pr{\alp, \bet, \dr{\dlt_i}_{i=1}^d}$-Schur complement chain satisfies $\sum_{i=1}^{d}\dlt_i \leq 1 $, then $ \BB \preccurlyeq \Bap \preccurlyeq 2\BB. $ \end{lemma} \begin{lemma}\label{lem:Mpt} For $N \geq 1 $, $\Mpt{i, N}$ is nonsingular and $\ni{\iv{\Utilt{i, N}} - \iv{\Utilt{i, \infty}}} \leq \frac{\pr{1 + \alp}}{\alp}\pr{\frac{2 + \alp}{2\pr{1 + \alp}}}^N, $ $\no{\iv{\Ltilt{i, N}} - \iv{\Ltilt{i, \infty}}} \leq \frac{\pr{1 + \alp}}{\alp}\pr{\frac{2 + \alp}{2\pr{1 + \alp}}}^N, $ $ \ni{\iv{\Stt{i}_{F_i F_i}} - \Mpt{i, N} } \leq \frac{\pr{1 + \alp}}{\alp}\pr{\frac{2 + \alp}{2\pr{1 + \alp}}}^N \ni{\Dap_{F_i F_i}\inv}. $ \end{lemma} From~\eqref{eq:PrecondiRichardsonitera}, to analyze the quality of the preconditioner $\Con\Zhat$, we need to provide bounds on $\Con - \Con\Zhat\LL$.
\begin{lemma}\label{lem:precondiquality1} Given an $\pr{\alp, \bet, \dr{\dlt_i}_{i=1}^d}$-Schur complement chain with $d = O\pr{\log n}$ and $\sum_{i=1}^{d} \dlt_i \leq \frac{1}{4} $, by setting $N = O\pr{\log n }$ in Algorithm~\ref{alg:precondition}, we can ensure that $ \narr{\Bap}{\Con - \Con\Zhat\LL} \leq \frac{1}{2}. $ \end{lemma} \begin{proof} Since the $\dr{\Stt{i}}$ are all Eulerian Laplacians, by Fact~\ref{fact:ESchurE} we have $\Lap\mathbf{1} = \Lap^\top\mathbf{1} = \mathbf{0}$. By~\eqref{eq:LapLform1} and the strong connectivity of $\Stt{d}$, we have ${\rm rank}\pr{\Lap} = n-1$. Then, $\ker\pr{\Lap} = \ker\pr{\Lap^\top } = {\rm span}\pr{\mathbf{1}}$. Thus, $\Lap\Lap\dg = \Lap\dg\Lap = \Con$. Now, we expand $\Con - \Con\Zhat\LL$ as follows \eq{ \Con - \Con\Zhat\LL = \Lap\dg\pr{\Lap - \LL} + \pr{\Lap\dg - \Con\Zhat\Con}\LL. } By the fact that $\pr{\Con - \Con\Zhat\LL}\mathbf{1} = \mathbf{0} $ and $\Bap \succcurlyeq \U{\LL} $, we have $\ker\pr{\Con - \Con\Zhat\LL} \sgeq \ker\pr{\Bap} $. Then, by combining with the definition of $\narr{\Bap}{\cdot}$, we have \eql{\label{eq:percondiexpand1}}{ \narr{\Bap}{\Con - \Con\Zhat\LL}^2 &= \narr{\Bap}{\Lap\dg\pr{\Lap - \LL} + \pr{\Lap\dg - \Con\Zhat\Con}\LL}^2 \\ &= \nt{\Bap^{1/2}\pr{\Lap\dg\pr{\Lap - \LL} + \pr{\Lap\dg - \Con\Zhat\Con}\LL}\Bap^{\dagger/2}}^2 \\ &\leq 2\nt{\Bap^{1/2}\Lap\dg\pr{\Lap - \LL}\Bap^{\dagger/2}}^2 + 2\nt{\Bap^{1/2}\pr{\Lap\dg - \Con\Zhat\Con}\LL\Bap^{\dagger/2}}^2. } By the definitions of $\Lap$ and $\BB$, we have \eq{ &2\xx^\top \pr{\Lap - \LL} \yy \\ \leq& \xx^\top \pr{\dlt_1 \U{\LL} + \sum_{i=2}^{d} \dlt_i \put{\U{\Stt{i}}, C_{i-1}, C_{i-1}, n}} \xx \\ & + \yy^\top \pr{\dlt_1 \U{\LL} + \sum_{i=2}^{d} \dlt_i \put{\U{\Stt{i}}, C_{i-1}, C_{i-1}, n}} \yy \\ =& \xx^\top\BB\xx + \yy^\top\BB\yy. } By combining with Lemma~\ref{lem:BBpleqo1Bap}, we have \eql{\label{eq:Lap-LBap1}}{ \ndd{\Bap}{\pr{\Lap - \LL}} \leq \ndd{\BB}{\pr{\Lap - \LL}} \leq 1. } Next, we bound $\pr{\Lap\dg - \Con\Zhat\Con}\LL $.
From the definition of an $\pr{\alp, \bet, \dr{\dlt_i}_{i=1}^d}$-Schur complement chain, we have $\U{\Stt{i+1}} \succcurlyeq \U{\sc{\Stt{i}, F_i}}. $ Combining with Fact~\ref{fact:scUpleqUsc}, we have $\U{\Stt{i+1}} \succcurlyeq \sc{\U{\Stt{i}}, F_i}$. Then, by Fact~\ref{fact:la2scup1}, $\lambda_2\pr{\U{\Stt{i+1}}} \geq \lambda_2\pr{\sc{\U{\Stt{i}}, F_i}} \geq \lambda_2\pr{\U{\Stt{i}}}. $ By induction, $\lambda_2\pr{\U{\Stt{i}}} \geq \lambda_2\pr{\U{\LL}}. $ By Cheeger's inequality, $\lambda_2\pr{\U{\LL}} = \Omega\pr{\frac{1}{\poly{n}}}$. Then, for any $i\in [d]$, $\lambda_2\pr{\U{\Stt{i}}} = \Omega\pr{\frac{1}{\poly{n}}}$. By Fact~\ref{fact:la2scup1}, $\ni{\Dap_{F_i F_i}\inv } \leq \frac{2}{\lambda_2\pr{\U{\Stt{i}}} } = O\pr{\poly{n}}. $ Also, $\lambda_2\pr{\Bap} \geq \lambda_2\pr{\BB} = \Omega\pr{\frac{1}{\poly{n}}}$. It follows easily by induction that $\nt{\Bap} = O\pr{\poly{n}} $. By Fact~\ref{fact:MpinvABC} and~\eqref{eq:LapLform1}, we have \eq{ \Lap\dg = \Con\iv{\Utilt{1, \infty}} \cdots \iv{\Utilt{d-1, \infty}} \pr{\DS{\infty}}\dg \iv{\Ltilt{d-1, \infty}} \cdots \iv{\Ltilt{1, \infty}}\Con. } Since $\ni{\iv{\Stt{i}_{F_i F_i}}\Stt{i}_{F_i C_i}} = \frac{1}{2}\ni{\sum_{k=0}^{+\infty} \pr{\II - \frac{1}{2}\Dap_{F_i F_i}\inv\Stt{i}_{F_i F_i}}^k\Dap_{F_i F_i}\inv \Stt{i}_{F_i C_i} } \leq \frac{1 + \alp}{\alp}, $ we have $\no{\iv{\Ltilt{i, \infty}}} \leq \frac{1 + 2\alp}{\alp} $. Analogously, $\ni{\iv{\Utilt{i, \infty}}} \leq \frac{1 + 2\alp}{\alp} $. Then, by Fact~\ref{fact:sequenceproductelemta}, Lemma~\ref{lem:Mpt}, the fact that $\ni{\Dap_{F_i F_i}\inv} = O\pr{\poly{n}} $ and $\pr{\frac{1 + 2\alp}{\alp}}^{O\pr{\log n }} = O\pr{\poly{n}} $, $\lambda_2\pr{\Bap} = \Omega\pr{\frac{1}{\poly{n}}}$, we have \eq{ \nt{\Bap^{1/2}\pr{\Lap\dg - \Con\Zhat\Con}\LL\Bap^{\dagger/2}} \leq \pr{\frac{2 + \alp}{2\pr{1 + \alp}}}^N \cdot O\pr{\poly{n}}.
} Then, by setting $N = O\pr{\log n}$ in Algorithm~\ref{alg:precondition}, we can ensure that \eql{\label{eq:NexpZLinv1}}{ \nt{\Bap^{1/2}\pr{\Lap\dg - \Con\Zhat\Con}\LL\Bap^{\dagger/2}}^2 \leq \frac{1}{16}. } By Fact~\ref{fact:FLFgood}, we have \eql{\label{eq:LBL}}{ \tpp{\Lap\dg}\Bap \Lap\dg \preccurlyeq \pr{\sum_{i=1}^{d}\dlt_i}^2 \Bap\dg \preccurlyeq \frac{1}{16} \Bap\dg. } Continuing with~\eqref{eq:percondiexpand1}, \eq{ &\narr{\Bap}{\Con - \Con\Zhat\LL}^2 \\ \leq& 2\nt{\Bap^{1/2}\Lap\dg\pr{\Lap - \LL}\Bap^{\dagger/2}}^2 + 2\nt{\Bap^{1/2}\pr{\Lap\dg - \Con\Zhat\Con}\LL\Bap^{\dagger/2}}^2 \\ =& 2 \nt{\Bap^{\dagger/2} \pr{\Lap - \LL}^\top \tpp{\Lap\dg} \Bap \Lap\dg\pr{\Lap - \LL} \Bap^{\dagger/2}} + 2\nt{\Bap^{1/2}\pr{\Lap\dg - \Con\Zhat\Con}\LL\Bap^{\dagger/2}}^2 \\ \leq& \frac{1}{8 } \nt{\Bap^{\dagger/2} \pr{\Lap - \LL}^\top \Bap\dg \pr{\Lap - \LL} \Bap^{\dagger/2}} + 2\nt{\Bap^{1/2}\pr{\Lap\dg - \Con\Zhat\Con}\LL\Bap^{\dagger/2}}^2 \\ \leq& \frac{1}{8} + \frac{1 }{8} = \frac{1}{4 }, } where the second inequality is by~\eqref{eq:LBL}; the last inequality is from~\eqref{eq:Lap-LBap1} and~\eqref{eq:NexpZLinv1}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:TSENSEsolver1}] By Fact~\ref{lem:scrobust} and the definition of an $\pr{\alp, \bet, \dr{\dlt_i}_{i=1}^d}$-Schur complement chain, we have \eq{ \U{\Stt{i+1}} \preccurlyeq \pr{1 + \dlt_{i+1}} \pr{3 + \frac{2}{\alp}} \U{\sc{\Stt{i}, F_i}}. } It follows by Fact~\ref{fact:scprvpleq} and induction that $\U{\Stt{i}} \preccurlyeq \pr{3 + \frac{2}{\alp}}^{i-1}\prod_{j=2}^{i}\pr{1 + \dlt_{j}} \sc{\U{\Stt{1}}, \cup_{j=1}^{i-1}F_j} \preccurlyeq \pr{3 + \frac{2}{\alp}}^{i-1} \prod_{j=1}^{i}\pr{1 + \dlt_{j}} \sc{\U{\LL}, \cup_{j=1}^{i-1}F_j} $. Since $\sum_{i=1}^{d}\dlt_i = O(1)$, $d = O(\log n)$, we have $\pr{3 + \frac{2}{\alp}}^{d}\prod_{j=1}^{d}\pr{1 + \dlt_{j}} = O\pr{\poly{n}}, $ i.e., $\U{\Stt{i}} \preccurlyeq O\pr{\poly{n}}\cdot \sc{\U{\LL}, \cup_{j=1}^{i-1} F_j}$ for any $i\in [d]$.
Thus, by Fact~\ref{fact:scz}, we have $\put{\U{\Stt{i}}, C_{i-1}, C_{i-1}, n} \preccurlyeq O\pr{\poly{n}}\cdot\U{\LL}. $ By combining with Lemma~\ref{lem:BBpleqo1Bap}, we have \eq{\Bap \preccurlyeq 2\BB \preccurlyeq 2\pr{\U{\LL} + \sum_{i=2}^{d}\put{\U{\Stt{i}}, C_{i-1}, C_{i-1}, n}} = O\pr{\poly{n}}\cdot \U{\LL}. } By Lemma~\ref{lem:precondiquality1}, after running $N'$ iterations of the preconditioned Richardson iteration, $ \nA{\U{\LL}}{\xt{N'} - \LL\dg\bb} \leq \nA{\Bap}{\xt{N'} - \LL\dg\bb} \leq \narr{\Bap}{\Con - \Con\Zhat\LL}^{N'}\nA{\Bap}{\LL\dg\bb} \leq \pr{\frac{1}{2}}^{N'}\nA{\Bap}{\LL\dg\bb} \leq \pr{\frac{1}{2}}^{N'}\cdot O\pr{\poly{n}} \cdot \nA{\U{\LL}}{\LL\dg\bb}. $ By setting $N' = O\pr{\log\pr{n/\eps}}$, we can ensure that $ \nA{\U{\LL}}{\xt{N'} - \LL\dg\bb} \leq \eps \nA{\U{\LL}}{\LL\dg\bb}. $ The processing time follows directly from Theorem~\ref{thm:SCC}. By Theorem~\ref{thm:SCC}, $\sum_{i=1}^{d}\nnz{\Stt{i}} = \sum_{i=1}^d O\pr{\NSE\pr{\pr{1 - \bet}^{i-1}n, \frac{\dlt}{i^2}}} $. As $\dlt = O(1)$, $\bet = \frac{1}{16(1+\alp)} = O(1) $, we have $\sum_{i=1}^{d}\nnz{\Stt{i}} = O\pr{\NSE\pr{n, 1}}$. Thus, as we set $N = O\pr{\log n}$ in $\PreC$, a single run of $\PreC$ takes $O\pr{\NSE\pr{n, 1}\log n }$ time. Then, after obtaining a desirable Schur complement chain, an $\eps$-accurate vector $\xx$ can be computed in $O\pr{\NSE\pr{n, 1}\log n \log\pr{n/\eps} } $ time. \end{proof} Using the smaller Eulerian Laplacian sparsifiers based on short cycle decompositions to sparsify the approximate Schur complements returned by Algorithm~\ref{alg:SparSchur}, we get the following solver, which has quadratic processing time but faster solve time. Its proof is deferred to Appendix~\ref{sec:sparsify}. \begin{corollary}\label{coro:shortcyclep1} Given a strongly connected Eulerian Laplacian $\LL\in \MS{n}{n}$, we can process it in time $O(n^2\log^{O(1)} n)$.
Then, for each query vector $\bb\in \Real^n$ with $\bb \perp \mathbf{1}$, we can compute a vector $\xx\in \Real^n$ with $\nA{\U{\LL}}{\xx - \LL\dg\bb} \leq \eps \nA{\U{\LL}}{\LL\dg\bb}$ in time $O(n\log^5 n\log(n/\eps))$. \end{corollary} \begin{remark} Combining Theorem~\ref{thm:TSENSEsolver1} or Corollary~\ref{coro:shortcyclep1} with Appendix~D of~\cite{cohen2017almost} yields full solvers for strongly connected directed Laplacians. \end{remark}
\section{Introduction} Let $p$ be a prime, $q$ be a power of $p$, and $k$ be a positive integer. An {\it $[n,k,d]$-linear code} ${\mathcal C}$ is a $k$-dimensional subspace of $\F_q^n$ with minimum distance $d$. Each element of ${\mathcal C}$ is called a {\it codeword}. If every cyclic shift of each codeword in ${\mathcal C}$ is again in ${\mathcal C}$, the code is called {\it cyclic}. Determining all the nonzero weights and their frequencies of a given code is one of the main problems in algebraic coding theory. The {\it weight enumerator} of ${\mathcal C}$ is defined as the polynomial $ 1+\sum_{i=1}^{n}A_i x^i$, where $A_i$ is the number of codewords of weight $i$ in ${\mathcal C}$. Furthermore, the sequence $(A_1,A_2,\ldots,A_n)$ is called the {\it weight distribution} of the code ${\mathcal C}$. Many important families of cyclic codes have been extensively studied in the literature, but the weight distributions are generally difficult to compute, and there are only a few special families for which this has been done. We assume that the reader is familiar with the basic facts about coding theory; see for instance \cite{MS06} and \cite{S09}. Let $\alpha$ be a primitive element of $\F_{q^k}$, and let $h$ and $e$ be positive integers such that $e\,|\,h$ and $h\,|\,q-1$. Put $g=\alpha^{(q-1)/h}$, $\beta=\alpha^{(q^k-1)/e}$, and $n=h(q^k-1)/(q-1)$. Since the orders of $g^{-1}$ and $(\beta g)^{-1}$ are both equal to $n$, the minimal polynomials $f_1(x)$ and $f_2(x)$ of $g^{-1}$ and $(\beta g)^{-1}$ divide $x^n-1$. Furthermore, it is easy to show that $g^{-q^j} \ne(\beta g)^{-1}$ for any integer $j$, so we have $f_1(x)f_2(x)\,|\,x^n-1$. By Delsarte's Theorem \cite{D75}, the cyclic code ${\mathcal C}_{(q,k,h,e)}$ with $f_1(x)f_2(x)$ as its parity-check polynomial can be represented in the following trace form.
Let \[ {\bf c}(a,b)=(\Tr_{q^k/q}(a g^0+b(\beta g)^0),\Tr_{q^k/q}(a g^1+b(\beta g)^1),\ldots,\Tr_{q^k/q}(a g^{n-1}+b(\beta g)^{n-1})), \] where $\Tr _{q^k/q}$ is the relative trace from $\F_{q^k}$ to $\F_q$. Then, it holds that \[ {\mathcal C}_{(q,k,h,e)} =\{{\bf c}(a,b)\,|\,(a,b)\in \F_{q^k}^2\}. \] The dimension of the code is $2k$. Recent interest in the weight distributions of codes of this type started with \cite{MZ11} and was followed by \cite{D12}, \cite{WTQYX11}, \cite{X12}. The objective of this note is to compute the weight distribution of a further class of such codes. In general, evaluating the weight distribution of the code ${\mathcal C}_{(q,k,h,e)}$ is quite difficult, and most cases remain unsettled. The weight distribution is mostly rather complicated, but there are still cases where only a few nonzero weights are involved and a neat expression is available. Here we list the known cases in which the weight distributions have been explicitly evaluated: \begin{itemize} \item[(i)] $e>1$ and $m=1$ \cite{MZ11}, \item[(ii)] $e=2$ and $m=2$ \cite{MZ11}, \item[(iii)] $e=2$ and $m=3$ \cite{D12}, \item[(iv)] $e=2$ and $-1\in \langle p\rangle \,(\mod{m})$ \cite{D12}, \item[(v)] $e=3$ and $m=2$ \cite{WTQYX11}, \item[(vi)] $e=4$ and $m=2$ \cite{X12}, \end{itemize} where $m=\gcd{(\frac{q^k-1}{q-1},\frac{e}{h}(q-1))}$. Furthermore, if we set $h=q-1$ and drop the condition $e|h$, then these codes are related to primitive cyclic codes with two zeros, which have been extensively studied in the literature; see for instance \cite{BM10,C04,M04,MR07,S95,YCD06} and the references therein. The purpose of this note is to compute the weight distribution of ${\mathcal C}_{(q,k,h,e)}$ for the case where $e=2$, $m$ is a prime, the subgroup $\langle p\rangle$ generated by $p\in \Z_{m}^\ast$ has index $2$ in $\Z_{m}^\ast$ and $-1\not\in \langle p\rangle $.
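The arithmetic condition in this last case — $\langle p\rangle$ of index $2$ in $\Z_m^\ast$ with $-1\not\in\langle p\rangle$ — is easy to test by brute force; the following small sketch (the tested pairs are only illustrative) does so:

```python
from math import gcd

def is_index_two(p, m):
    """True iff the subgroup <p> of Z_m^* has index 2 and -1 is not in <p>."""
    if gcd(p, m) != 1:
        return False
    phi = sum(1 for a in range(1, m) if gcd(a, m) == 1)   # |Z_m^*|
    subgroup, x = {1}, p % m
    while x not in subgroup:                              # build the cyclic subgroup <p>
        subgroup.add(x)
        x = (x * p) % m
    return 2 * len(subgroup) == phi and (m - 1) not in subgroup
```

For instance, is_index_two(3, 11) holds, since $\ord_{11}(3)=5=\phi(11)/2$ and $-1\not\in\langle 3\rangle=\{1,3,9,5,4\}$; this is the setting of the example at the end of this note.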
Our evaluation is based on the explicit determination of certain index $2$ Gauss sums and the Davenport-Hasse theorem. \section{Index $2$ Gauss sums } Let $p$ be a prime, $f$ a positive integer, and $q=p^f$. The canonical additive character $\psi$ of $\F_q$ is defined by $$\psi\colon\F_q\to \C^{*},\qquad\psi(x)=\zeta_p^{\Tr _{q/p}(x)},$$ where $\zeta_p={\rm exp}(\frac {2\pi i}{p})$, and $\Tr _{q/p}$ is the absolute trace. For each multiplicative character $\chi$ of $\F_q^\ast$, we define a {\it Gauss sum} over $\F_q$ as follows: \[ G_q(\chi)=\sum_{x\in \F_q^\ast}\chi(x)\psi(x). \] Below are a few basic properties of Gauss sums \cite{LN97}: \begin{itemize} \item[(i)] $G_q(\chi)\overline{G_q(\chi)}=q$ if $\chi$ is nontrivial; \item[(ii)] $G_q(\chi^p)=G_q(\chi)$; \item[(iii)] $G_q(\chi^{-1})=\chi(-1)\overline{G_q(\chi)}$; \item[(iv)] $G_q(\chi)=-1$ if $\chi$ is principal. \end{itemize} In general, the explicit evaluation of Gauss sums is a very difficult problem, and there are only a few cases in which it has been carried out. The best-known case is the {\it quadratic} case, where the order of $\chi$ is two. In this case, it holds that \begin{equation}\label{eq:quad} G_{p^f}(\chi)=(-1)^{f-1}\left(\sqrt{p^\ast}\right)^f,\;p^\ast=(-1)^{\frac{p-1}{2}}p. \end{equation} The next well-studied case is the so-called {\it semi-primitive} case, where there exists an integer $j$ such that $p^j\equiv -1\,(\mod{N})$, with $N$ being the order of $\chi$. Please refer to \cite{BEW97,BMW82,CK86} for details on the explicit evaluation of Gauss sums in this case. The next interesting case is the index $2$ case, where the subgroup $\langle p\rangle$ generated by $p\in \Z_{N}^\ast$ has index $2$ in $\Z_{N}^\ast$ and $-1\not\in \langle p\rangle $. In this case, it is known that $N$ can have at most two odd prime divisors. Many authors have investigated this case, see e.g., \cite{BM73,L97,M98,MV03,YX10}.
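As a quick numerical sanity check (not part of the argument), the $f=1$ case of the quadratic evaluation (\ref{eq:quad}) can be verified directly from the definition of the Gauss sum:

```python
import cmath

def quadratic_gauss_sum(p):
    """G_p(chi) = sum_{x in F_p^*} chi(x) * zeta_p^x for the quadratic character chi."""
    squares = {(x * x) % p for x in range(1, p)}
    chi = lambda x: 1 if x % p in squares else -1
    return sum(chi(x) * cmath.exp(2j * cmath.pi * x / p) for x in range(1, p))
```

Numerically, quadratic_gauss_sum(5) is $\sqrt{5}$ and quadratic_gauss_sum(7) is $i\sqrt{7}$, in agreement with $G_p(\chi)=\sqrt{p^\ast}$ for $f=1$.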
In particular, a complete solution to the problem of explicitly evaluating index $2$ Gauss sums was recently given in \cite{YX10}. We record here the following result, which we shall use in the next section. \begin{theorem}\label{Sec2Thm1}(\cite{YX10}, Case A; Theorem~4.1) Let $N=p_1^\ell$, where $p_1$ is a prime $\equiv 3\,(\mod{4})$ with $p_1>3$. Assume that $p$ is a prime such that $\ord_{p_1^\ell}(p)=\phi(p_1^\ell)/2$. Let $f=\phi(N)/2$, $q=p^f$, and $\chi$ be a multiplicative character of order $N$ of $\F_q^\ast$. Then, for $0\le s\le \ell-1$, we have \begin{eqnarray*} G_q(\chi^{p_1^s})&=&p^{\frac{f-cp_1^s}{2}} \left(\frac{a+b\sqrt{-p_1}}{2}\right)^{p_1^s}, \end{eqnarray*} where $c$ is the class number of $\Q(\sqrt{-p_1})$, and $a$ and $b$ are integers determined by $a,b\not\equiv 0\,(\mod{p})$, $4p^{c}=a^2+p_1b^2$, and $a \equiv -2p^{\frac{f+c}{2}}\,(\mod{p_1})$. \end{theorem} Here, we should remark that index $2$ Gauss sums have been successfully applied to the determination of the weight distribution of certain {\it irreducible cyclic codes} in \cite{BM73}. They have also recently been used in the construction of new infinite families of combinatorial configurations, such as strongly regular graphs, skew Hadamard difference sets, and association schemes with nice properties (\cite{FX111,FX112,FX113,FMX11}). To obtain our main result, we will need the following theorems, the first known as the Davenport-Hasse theorem. \begin{theorem}\label{thm:lift}(\cite[Theorem 5.14]{LN97}) Let $\chi$ be a nonprincipal multiplicative character on $\F_q^\ast=\F_{p^f}^\ast$ and let $\chi'$ be the lifted character of $\chi$ to $\F_{q^s}^\ast $, i.e., $\chi'(\alpha):=\chi(\Norm_{\F_{q^s}/\F_q}(\alpha))$ for $\alpha\in \F_{q^s}$. Then, it holds that \[ G_{q^{s}}(\chi')=(-1)^{s-1}(G_{q}(\chi))^s.
\] \end{theorem} \begin{theorem}(\cite[Theorem 5.30]{LN97}) Let $\psi$ be the canonical additive character of $\F_q$ and $\chi$ be a multiplicative character of $\F_q$ of order $d\,|\,q-1$. Then, it holds that \[ \sum_{x\in \F_q}\psi(ax^d+b)=\psi(b)\sum_{i=1}^{d-1}\chi^{-i}(a)G_q(\chi^i) \] for any $a,b\in \F_q$ with $a\not=0$. \end{theorem} \section{The weight distribution} In this section, we shall use the same notations as in the Introduction. Moreover, we fix the settings as follows: \begin{itemize} \item[(i)] $e=2$; \item[(ii)] $m=\gcd{(\frac{q^k-1}{q-1},\frac{e}{h}(q-1))}$ is a prime $\equiv 3 \,(\mod{4})$, which we write as $p_1$; \item[(iii)] $q=p^f$, $fk$ is divisible by $\frac{p_1-1}{2}$, say $fk=s\frac{p_1-1}{2}$ for some positive integer $s$; \item[(iv)] $p$ is of index $2$ modulo $p_1$. \end{itemize} Under these assumptions, we will determine the weight distribution of the cyclic code ${\mathcal C}_{(q,k,h,e)}$. Let $\alpha$ be a fixed primitive element of $\F_{q^k}$. We define $C_i^{(\ell,q^k)}:=\alpha^{i}\langle \alpha^{\ell}\rangle$ for any $\ell\,|\,q^k-1$ and $i\in\Z$. Then, for any $a,b\in \F_{q^k}$, the Hamming weight of ${\bf c}(a,b)$ is $n-Z(q^k,a,b)$, where \[ Z(q^k,a,b)=|\{x\in C_0^{((q-1)/h,q^k)}\,|\,\Tr_{q^k/q}(ax+\beta^{\log_\alpha(x)}bx)=0\}|, \] and $\log_\alpha(x)$ denotes the discrete logarithm of $x$ to the base $\alpha$. From \cite{D12,MZ11}, we have the following formula for $Z(q^k,a,b)$: \begin{equation}\label{eq:weight1} Z(q^k,a,b)=\frac{h(q^k-1)}{q(q-1)}+\frac{h}{eq} m \sum_{i=0}^{e-1} \sum_{x\in C_{(q-1)i/h}^{(m,q^k)}}\psi((a+\beta^ib)x), \end{equation} where $\psi$ is the canonical additive character of $\F_{q^k}$.
\begin{remark}\label{re1} By Theorems~\ref{Sec2Thm1} and \ref{thm:lift}, the Gauss sum $G_{q^k}(\chi)$ with $\chi$ a multiplicative character of order $p_1$ of $\F_{q^k}$ is given as \begin{eqnarray*} G_{q^k}(\chi)&&=(-1)^{s-1}\left(G_{p^{(p_1-1)/2}}(\chi')\right)^s\\ &&=(-1)^{s-1}p^{\frac{(p_1-1-2c)s}{4}}\left(\frac{a+b\sqrt{-p_1}}{2}\right)^{s}\in\Q(\sqrt{-p_1}), \end{eqnarray*} where $a,b,$ and $c$ are as defined in Theorem~\ref{Sec2Thm1}, and $\chi'$ is a character of $\F_{p^{(p_1-1)/2}}$ whose lift to $\F_{q^k}^\ast$ is $\chi$. To ease the notation, we introduce the integers $a_s$, $b_s$ such that \[ \frac{a_s+b_s\sqrt{-p_1}}{2}:=\left(\frac{a+b\sqrt{-p_1}}{2}\right)^{s}. \] \end{remark} We comment that we allow $b_s$ to have a sign ambiguity of $\pm 1$. We are now ready to prove our main result. \begin{theorem} Let ${\mathcal C}_{(q,k,h,e)}$ be the $[n,2k]$ cyclic code satisfying the above assumptions (i)--(iv). Each codeword ${\bf c}(a,b)$ has weight $n-Z(q^k,a,b)$, and we associate to it the number \[ Y(q^k,a,b):=\frac{eq}{h}(Z(q^k,a,b)-\frac{h(q^k-1)}{q(q-1)})+2. \] Then the multiset $\{Y(q^k,a,b)\,|\,a,\,b\in\F_{q^k}\}$ has values and corresponding multiplicities as listed in Table~\ref{Tab1}. 
\begin{table}[h] \caption{ \label{Tab1} The values of $Y(q^k,a,b)$ and their corresponding multiplicities } $$ \begin{array}{|c|c|} \hline \mbox{$Y(q^k,a,b)$}&\mbox{frequency}\\ \hline 2q^k&1\\ \hline (-1)^sp^{\frac{s(p_1-1-2c)}{4}}(a_s-b_sp_1)&\left(\frac{p_1-1}{2}\right)^2\left(\frac{q^k-1}{p_1}\right)^2\\ \hline (-1)^sp^{\frac{s(p_1-1-2c)}{4}}(a_s+b_sp_1)&\left(\frac{p_1-1}{2}\right)^2\left(\frac{q^k-1}{p_1}\right)^2\\ \hline (-1)^sp^{\frac{s(p_1-1-2c)}{4}}(1-p_1)a_s&\left(\frac{q^k-1}{p_1}\right)^2\\ \hline (-1)^sp^{\frac{s(p_1-1-2c)}{4}}\frac{a_s-b_sp_1}{2}+q^k&\frac{(p_1-1)(q^k-1)}{p_1}\\ \hline (-1)^sp^{\frac{s(p_1-1-2c)}{4}}\frac{a_s+b_sp_1}{2}+q^k&\frac{(p_1-1)(q^k-1)}{p_1}\\ \hline (-1)^sp^{\frac{s(p_1-1-2c)}{4}}\frac{1-p_1}{2}a_s+q^k&\frac{2(q^k-1)}{p_1}\\ \hline (-1)^s p^{\frac{s(p_1-1-2c)}{4}}a_s&\frac{(p_1-1)^2}{2}\left(\frac{q^k-1}{p_1}\right)^2\\ \hline \frac{(-1)^s}{2} p^{\frac{s(p_1-1-2c)}{4}}(-a_s(-2+p_1)-b_s p_1)&(p_1-1)\left(\frac{q^k-1}{p_1}\right)^2\\ \hline \frac{(-1)^s}{2} p^{\frac{s(p_1-1-2c)}{4}} (-a_s(-2+p_1)+b_s p_1)&(p_1-1)\left(\frac{q^k-1}{p_1}\right)^2\\ \hline \end{array} $$ \end{table} \end{theorem} \proof By equation (\ref{eq:weight1}) above, it suffices to compute the sum \begin{equation}\label{eq:main} \sum_{i=0,1}\sum_{z\in C_{(q-1)i/h}^{(p_1,q^k)}}\psi((a+(-1)^ib)z). \end{equation} Let $E=\{0,1\}$ and $E_{0}^{a,b}=\{i\in E\,|\,a+(-1)^ib=0\}$. If $i\in E_{0}^{a,b}$, then the inner sum is $(q^k-1)/p_1$.
Therefore, we have \begin{eqnarray*} & &\sum_{i=0,1}\sum_{z\in C_{(q-1)i/h}^{(p_1,q^k)}}\psi((a+(-1)^i b)z)-\frac{q^k-1}{p_1}|E_0^{a,b}|\\ &=&\frac{1}{p_1}\sum_{i\in E\setminus E_0^{a,b}}\sum_{z\in \F_{q^k}^\ast}\psi((a+(-1)^ib) \alpha^{(q-1)i/h} z^{p_1})\\ &=&\frac{1}{p_1}\sum_{i\in E\setminus E_0^{a,b}}\left(\sum_{z\in \F_{q^k}}\psi((a+(-1)^ib) \alpha^{(q-1)i/h} z^{p_1})-1\right)\\ &=&-\frac{e-|E_0^{a,b}|}{p_1}+\frac{1}{p_1}\sum_{i\in E\setminus E_0^{a,b}}\sum_{j=1}^{p_1-1}\chi^{-j}(a+(-1)^ib)G_{q^k}(\chi^j), \end{eqnarray*} where $\chi$ is a fixed multiplicative character of order $p_1$. It follows that \[ Y(q^k,a,b)=q^k|E_0^{a,b}|+\sum_{i\in E\setminus E_0^{a,b}}\sum_{j=1}^{p_1-1}\chi^{-j}(a+(-1)^ib)G_{q^k}(\chi^j). \] Now, by Remark~\ref{re1}, the Gauss sum $G_{q^k}(\chi)$ is written as \[ G_{q^k}(\chi)=(-1)^{s-1}p^{\frac{(p_1-1-2c)s}{4}}\left(\frac{a_s+b_s\sqrt{-p_1}}{2}\right). \] Since $G_{q^k}(\chi^p)=G_{q^k}(\chi)$ and $G_{q^k}(\chi^{-1})=\chi(-1)\overline{G_{q^k}(\chi)}=\overline{G_{q^k}(\chi)}$, the second summand in $Y(q^k,a,b)$ is equal to \begin{eqnarray}\label{eq} & &\sum_{i\in E\setminus E_0^{a,b}}\left(G_{q^k}(\chi)\sum_{j\in \langle p\rangle}\chi^{-j}(a+(-1)^ib) +G_{q^k}(\chi^{-1})\sum_{j\in -\langle p\rangle}\chi^{-j}(a+(-1)^ib)\right)\nonumber\\ &=&(-1)^{s-1}p^{\frac{(p_1-1-2c)s}{4}}\sum_{i\in E\setminus E_0^{a,b}}2Re\left\{\left(\frac{a_s+b_s\sqrt{-p_1}}{2}\right)\sum_{j\in \langle p\rangle}\psi_{p_1}(-j\ell_{a+(-1)^ib})\right\}, \end{eqnarray} where $\psi_{p_1}$ is the canonical additive character of $\F_{p_1}$ and $\ell_{a+(-1)^i b}$ is the integer such that \[ {\ell_{a+(-1)^i b}}\equiv \log_\alpha (a+(-1)^i b)\,(\mod{p_1}). \] Now, we compute the sum $\sum_{j\in \langle p\rangle}\psi_{p_1}(jx)$. If $x\equiv 0\,(\mod{p_1})$, it is clear that \[ \sum_{j\in \langle p\rangle}\psi_{p_1}(jx)=\frac{p_1-1}{2}. \] Let $\eta$ be the quadratic character of $\F_{p_1}^\ast$.
If $x\not\equiv 0\,(\mod{p_1})$, by (\ref{eq:quad}), it holds that \begin{eqnarray*} \sum_{j\in \langle p\rangle}\psi_{p_1}(jx) &=& \frac{1}{2}\sum_{j\in \F_{p_1}^\ast}(1+\eta(j))\psi_{p_1}(xj)\\ &=& \frac{-1+\eta(x)G_{p_1}(\eta)}{2}= \frac{-1+\eta(x)\sqrt{-p_1}}{2}. \end{eqnarray*} Then, equation (\ref{eq}) can be rewritten as \begin{eqnarray}\label{eq:main2} & &(-1)^{s-1}2p^{\frac{(p_1-1-2c)s}{4}}Re\left\{\left(\frac{a_s+b_s\sqrt{-p_1}}{2}\right)\right.\\ & &\hspace{3cm}\left.\times \left(N_0\frac{-1-\sqrt{-p_1}}{2}+N_1\frac{-1+\sqrt{-p_1}}{2}+N_2\frac{p_1-1}{2}\right)\right\}\nonumber \end{eqnarray} where $N_0$ and $N_1$ are the numbers of nonzero squares and nonsquares modulo $p_1$ in $\{\ell_{a+(-1)^{i}b}\,|\,i\in E\setminus E_0^{a,b}\}$, respectively, and $N_2$ is the number of zeros modulo $p_1$ in $\{\ell_{a+(-1)^{i}b}\,|\,i\in E\setminus E_0^{a,b}\}$. After simplification, we see that the expression in (\ref{eq}) is equal to $\frac{(-1)^{s}}{2}p^{\frac{(p_1-1-2c)s}{4}}$ times \[ (N_0+N_1+N_2-p_1N_2)a_s+(N_1-N_0)p_1b_s. \] Since $e=2$, there are ten possibilities for the values of the tuple $(|E_0^{a,b}|,N_0,N_1,N_2)$. We shall compute the frequency of each possible tuple $(|E_0^{a,b}|,N_0,N_1,N_2)$, which we denote by $N_{|E_0^{a,b}|,N_0,N_1,N_2}$. As a consequence, we obtain the values and multiplicities of $Y(q^k,a,b)$ in Table~\ref{Tab1}. It is clear that $N_{2,0,0,0}=1$, $N_{1,1,0,0}=N_{1,0,1,0}=\frac{(p_1-1)(q^k-1)}{p_1}$, and $N_{1,0,0,1}=\frac{2(q^k-1)}{p_1}$. For instance, $N_{1,1,0,0}$ is the number of pairs $(a,b)$ such that either $a+b=0$ and $\log_\alpha(a-b)=\log_\alpha(2a)\pmod{p_1}$ is a nonzero square, or $a-b=0$ and $\log_\alpha(a+b)=\log_\alpha(2a)\pmod{p_1}$ is a nonzero square; this is easily seen to be $2\cdot\frac{p_1-1}{2}\cdot\frac{q^k-1}{p_1}=\frac{(p_1-1)(q^k-1)}{p_1}$. The other three numbers are obtained similarly.
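Both evaluations of $\sum_{j\in\langle p\rangle}\psi_{p_1}(jx)$ can be confirmed numerically; the sketch below does so for the illustrative index $2$ pair $p_1=11$, $p=3$ (for which $\langle p\rangle$ is exactly the set of nonzero squares modulo $p_1$):

```python
import cmath

p1, p = 11, 3

H, x = {1}, p % p1            # the subgroup <p> of Z_{p1}^*
while x not in H:
    H.add(x)
    x = (x * p) % p1

squares = {(t * t) % p1 for t in range(1, p1)}

def period(a):
    """sum_{j in <p>} psi_{p1}(j*a) for the canonical additive character psi_{p1}."""
    return sum(cmath.exp(2j * cmath.pi * (j * a % p1) / p1) for j in H)

def closed_form(a):
    """(p1-1)/2 if a = 0 mod p1, and (-1 + eta(a)*sqrt(-p1))/2 otherwise."""
    if a % p1 == 0:
        return (p1 - 1) / 2
    eta = 1 if a % p1 in squares else -1
    return (-1 + eta * 1j * p1 ** 0.5) / 2
```

For example, period(1) agrees with $(-1+\sqrt{-11})/2$ and period(2) with $(-1-\sqrt{-11})/2$, up to floating-point error.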
Furthermore, by considering the case where at least one of $a+b,\,a-b$ is in $ \bigcup_{i\in \langle p\rangle}C_i^{(p_1,q^k)}$, we have \begin{eqnarray*} & &N_{1,1,0,0}+2N_{0,2,0,0}+N_{0,1,1,0}+N_{0,1,0,1}\\ &=&|\{(a,b)\in \F_{q^k}^2\,|\,a+b\in \bigcup_{i\in \langle p\rangle}C_i^{(p_1,q^k)}\}| +|\{(a,b)\in \F_{q^k}^2\,|\,a-b\in \bigcup_{i\in \langle p\rangle}C_i^{(p_1,q^k)}\}|\\ &=&q^k\frac{(q^k-1)(p_1-1)}{p_1}. \end{eqnarray*} Similarly, we have \begin{eqnarray*} N_{1,0,1,0}+N_{0,1,1,0}+2N_{0,0,2,0}+N_{0,0,1,1}&=&q^k\frac{(q^k-1)(p_1-1)}{p_1},\\ N_{1,0,0,1}+N_{0,1,0,1}+N_{0,0,1,1}+2N_{0,0,0,2}&=&2q^k\frac{q^k-1}{p_1}. \end{eqnarray*} It is therefore enough to compute the values of $N_{0,2,0,0}$, $N_{0,0,2,0}$, and $N_{0,0,0,2}$ only. Now, we introduce the following notation: for $b\not=0$, \begin{eqnarray*} u&:=&ab^{-1}\in \F_{q^k}\setminus\{\pm 1\},\\ t&:=&\log_{\alpha}(u+1)\pmod{p_1},\\ s&:=&\log_{\alpha}(u-1)\pmod{p_1},\\ x&:=&\log_{\alpha}(b)\pmod{p_1}, \end{eqnarray*} and \[ M:=\{u\in \F_{q^k}\setminus \{\pm 1\}\,|\,(u+1)/(u-1)\in C_{0}^{(p_1,q^k)}\}. \] Note that $|M|=(q^k-1)/p_1-1$. Moreover, we will use the well-known fact (\cite[p.~81]{BEW97}) that \begin{eqnarray*} & &|(\langle p\rangle +u)\,(\mod{p_1}) \cap \langle p\rangle \,(\mod{p_1})|\\ &=&|(-\langle p\rangle +u) \,(\mod{p_1}) \cap -\langle p\rangle\,(\mod{p_1}) | =\frac{p_1-3}{4}. \end{eqnarray*} Now we are ready to compute the values of $N_{0,2,0,0}$, $N_{0,0,2,0}$, and $N_{0,0,0,2}$, from which all the remaining numbers $N_{0,1,1,0}$, $N_{0,1,0,1}$, $N_{0,0,1,1}$ will follow. (1) {\bf $N_{0,0,0,2}$: } Recall that \[ N_{0,0,0,2}=|\{(a,b)\in \F_{q^k}^2\,|\,a+b,a-b\in C_0^{(p_1,q^k)}\}|. \] There are $(q^k-1)/p_1$ such pairs with $b=0$. Assume $b\ne 0$. Then $a+b,a-b\in C_0^{(p_1,q^k)}$ amounts to $t+x=0,s+x=0$ in $\Z_{p_1}$, so $t=s$, i.e., $\frac{u+1}{u-1}\in C_0^{(p_1,q^k)}$, which amounts to saying that $u\in M$. 
For each such $u$, there is a unique $x\in \Z_{p_1}$, and hence $(q^k-1)/p_1$ choices of $b\in C_x^{(p_1,q^k)}$. Now we have \[ N_{0,0,0,2}=\frac{q^k-1}{p_1}+\frac{q^k-1}{p_1}\cdot |M|=\left(\frac{q^k-1}{p_1}\right)^2. \] (2) {\bf $N_{0,2,0,0}$: } Recall that \[ N_{0,2,0,0}=|\{(a,b)\in \F_{q^k}^2|a+b,a-b\in \bigcup_{i\in \langle p\rangle}C_i^{(p_1,q^k)}\}|. \] There are $\frac{p_1-1}{2}\cdot \frac{q^k-1}{p_1}$ such pairs with $b=0$. Assume $b\ne 0$. Then $a+b,a-b\in \bigcup_{i\in \langle p\rangle}C_i^{(p_1,q^k)}$ amounts to $t+x,s+x\in \langle p\rangle$, i.e., $x\in (\langle p\rangle-t+s)\cap \langle p\rangle$. There are two cases: (1) $s=t$, in which case there are $\frac{p_1-1}{2}$ such $x$'s; (2) $s\ne t$, in which case there are $\frac{p_1-3}{4}$ such $x$'s. Note that $s=t$ if and only if $u\in M$. An argument similar to that in the determination of $N_{0,0,0,2}$ gives \begin{eqnarray*} & &N_{0,2,0,0}\\ &=&\left(\frac{p_1-1}{2}\right)\left(\frac{q^k-1}{p_1}\right) +|M|\cdot\left(\frac{p_1-1}{2}\right)\left(\frac{q^k-1}{p_1}\right) +(p_1-1)\left(\frac{p_1-3}{4}\right)\left(\frac{q^k-1}{p_1}\right)^2\\ &=&\left(\frac{p_1-1}{2}\right)^2\left(\frac{q^k-1}{p_1}\right)^2. \end{eqnarray*} (3) {\bf $N_{0,0,2,0}$: } Proceeding exactly as above, we obtain $N_{0,0,2,0}=\left(\frac{p_1-1}{2}\right)^2\left(\frac{q^k-1}{p_1}\right)^2$. To sum up, we get the result listed in Table~\ref{Tab1}. \qed \begin{example} Consider the case where \[ (p,f,k,e,h,m)=(3,5,55,2,2,11). \] In this case, $p=3$ is of index $2$ modulo $11$ and the class number of $\Q(\sqrt{-11})$ is $1$. The Gauss sum $G_{3^{5}}(\chi)$ with $\chi$ a character of order $11$ of $\F_{3^5}$ is given as $ (1\pm \sqrt{-11})/2$, where the sign ambiguity $\pm$ will not matter. Then, the Gauss sum $G_{3^{5\cdot 11}}(\chi')$ for the lifted character $\chi'$ of $\chi$ is given as \[ \left(\frac{1\pm \sqrt{-11}}{2}\right)^{11}=\frac{67\pm 253\sqrt{-11}}{2}, \] i.e., $a_{11}=67$, $b_{11}=\pm 253$. 
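This lifting computation can be checked by exact integer arithmetic in the ring of integers of $\Q(\sqrt{-11})$, representing elements $(a+b\sqrt{-11})/2$ with $a\equiv b\pmod 2$ as integer pairs. The following Python sketch (an independent check, not part of the proof) confirms $a_{11}=67$ and $b_{11}=253$ up to sign.

```python
# Elements (a + b*sqrt(-11))/2 of the ring of integers of Q(sqrt(-11)),
# stored as integer pairs (a, b) with a = b (mod 2).
def mul(u, v, d=11):
    a1, b1 = u
    a2, b2 = v
    # ((a1+b1*r)/2) * ((a2+b2*r)/2) with r^2 = -d
    return ((a1 * a2 - d * b1 * b2) // 2, (a1 * b2 + a2 * b1) // 2)

x = (1, 1)            # (1 + sqrt(-11)) / 2
y = x
for _ in range(10):   # raise to the 11th power
    y = mul(y, x)

a11, b11 = y          # yields (67, 253)
# sanity check: the norm must be 3^11, since (1 + sqrt(-11))/2 has norm 3
assert a11 * a11 + 11 * b11 * b11 == 4 * 3**11
```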
By Table~\ref{Tab1}, the code ${\mathcal C}_{q,k,h,e}$ is a $[2(3^{55} - 1)/(3^5 - 1),22, 3^{17}(-1358+3^{33})]$-linear code over $\F_{3^5}$ with the following weight distribution: \begin{eqnarray*} & &1+25\ell^2 x^{2A(B-1358)}+25\ell^2 x^{2A(B+1425)}+ \ell^2x^{2A(B-335)} +10\ell x^{A(B-1358)}+10\ell x^{A(B+1425)}\\ & &+2\ell x^{A(B-335)}+50\ell^2 x^{A(2B+67)}+10\ell^2 x^{A(2B-1693)}+10\ell^2 x^{2A(B+545)}, \end{eqnarray*} where $A=3^{17},B=3^{33},\ell=(3^{55}-1)/11$. \end{example} \section{Conclusion} In this short note, we explicitly determined the weight distribution of a class of cyclic codes ${\mathcal C}_{(q,k,h,e)}$ with $e=2$, under the index $2$ condition specified at the beginning of Section 3. Under the assumptions (ii)--(iv), if we allow $e>1$ to be arbitrary, then there will be $\binom{e+3}{3}$ possible weights, and at least $\binom{e+2}{2}$ of them have roughly the same (nonzero) frequency when $q^k$ is large compared with $p_1^e$, according to the estimate by Xiong \cite{X12}. For instance, we should in principle be able to determine the weight enumerator under the assumptions (ii)--(iv) when $e=3$ by the same technique, at the cost of more involved computations, but in general there will be $20$ weights. It would therefore be of interest to determine the cases in which there are only a few nonzero weights, say, fewer than ten. We leave this for future work. If we have a multiplicative character $\chi$ of prime order $p_1$ over a finite field $\F_{q_1}$, and $G_{q_1}(\chi)$ lies in the quadratic subfield of $\Q(\zeta_{p_1})$, then our method also applies and yields similar results. In our construction, the index $2$ condition is used precisely to guarantee this property, so it would be interesting to determine all such Gauss sums. We leave this for future work as well. As has happened in applications of Gauss sums to the construction of combinatorial objects, one first succeeds in the index $2$ case and then extends to the index $4$ case and to even more complicated settings. 
We wonder whether the same will hold for the application discussed in this note. We leave this for future work. \section*{Acknowledgment}
\section{Norwegian homicide law and the documentary evidence}\label{sec2} This paper studies the number of killings in Norway in the period 1300--1569, that is, the last fifty years of Norway's High Middle Age, through the Late Middle Ages, and a generation or so into the Early Modern Age. The extant written evidence of such killings is, of course, only a fraction of the documents issued. Certain homicides (and some other crimes) were ``noncompensation crimes'' (\textit{ubotemal}), which means that they, unless the king decided otherwise, were atoned for by capital punishment or outlawry and confiscation of the criminal's property. Noncompensation homicides would, for instance, be the killing of a man in his own house, the killing of a kinsman, or a killing on a holy day. A study of the documents issued in such cases shows that King Magnus the Lawmender's National Law of 1274 was systematically set aside in such cases, for good economic reasons. There would be no compensation to the victim's next of kin, and it might even be a loss to the king's district officer (\textit{sysselmann}, the equivalent of an English sheriff) if he had to pay an executioner the equivalent of a craftsman's monthly pay for decapitating a penniless youngster. With, however, an economic atonement for the killing (\textit{botemal}), the victim's heirs would get their compensation, and the king's district officer would get the fine [strictly speaking, two fines, a recently introduced one for depriving the king of a subject (\textit{tegngilde}) and an older one for the king's pardon (\textit{fredkjop}), similar to the continental Germanic \textit{fredus}] nominally due to the king, which was about fifty percent of the normal compensation. In case of noncompensation killings the fine would be relatively higher, one regular fine for a killing, to which would be added another one for the killing of a brother, a second if it took place in his own house, and a third if it took place on a holy day. 
As we can see from some documents, family members would help to pay even though their legal obligation to do so had been abolished in 1260. The loss of a family member, cherished or not, would weaken the family. Some may have contributed in money or in kind; others may have stood as sureties, as some documents show. Furthermore, there was some opportunity for haggling, and the period before the compensation or fine was fully paid might on occasion be considerably longer than the year specified in the letter of pardon. This process had five documents as its outcome. The killer, who was left at large and indeed might be said to be the prosecutor, had first to go to the King's Chancellor in Oslo to get a protection letter (\textit{gridsbrev}), which both gave him temporary protection against avengers and also ordered the king's district officer to hear the case, so as to find whether the killer had fulfilled the obligation of taking public responsibility for the killing and also whether he had sureties for the payment of compensation and fine. In accordance with this, the district officer held a hearing with witnesses and the parties present and issued an evidence letter (\textit{provsbrev}) summing up the relevant facts, including what might make this one or several \textit{ubotemal}. With this \textit{provsbrev} the killer had once more to travel to the King's Chancellor, who then issued a permanent pardon (\textit{landsvist}, the right to stay in the country), which also stated the amount to be paid in fine, and the condition that compensation and fine were to be paid within a year. As we can see, practice did at times give the killer several years' respite before these sums were paid, but when paid they resulted in one receipt from the king's district officer and one from the victim's heirs. These five letters were all preserved in the killer's archive as part of a farm archive together with deeds, inheritance divisions etc. 
until fire, wetness or some overly tidy daughter-in-law put an end to the existence of the large majority. Supplementary material [\citet{suppA}] is an index of the documents that did survive, showing evidence of 337 killings in this time period. Of these, 194 are documented from the killer's archive, 143 are only from other sources and 4 are mentioned both in the killer's archive and in other sources. The other sources are quite varied, but include local officials, the King's Chancellor, regional potentates, church officials, and private letters and diaries. The data used in this paper are summarized in Table \ref{tab1}. \begin{table} \caption{Two-way classification of records of killings}\label{tab1} \begin{tabular*}{\tablewidth}{@{\extracolsep{4in minus 4in}}lccd{3.0}d{2.0}cccc@{}} \hline & & \multicolumn{6}{c}{\textbf{Number of letters from killer's archive}} &\\[-4pt] & & \multicolumn{6}{l}{\hrulefill} &\\ & & \multicolumn{1}{c}{\textbf{0}} & \multicolumn{1}{c}{\textbf{1}} & \multicolumn{1}{c}{\textbf{2}} & \multicolumn{1}{c}{\textbf{3}} & \multicolumn{1}{c}{\textbf{4}} & \multicolumn{1}{c}{\textbf{5}} & \textbf{Total}\\ \hline \multirow{2}{*}{Mentioned in other sources?} & No & $n$ & 162 & 20 & 5 & 3 & 0 & $190 + n$\\ & Yes & 143 & 3 & 0 & 1 & 0 & 0 & 147\\ [6pt] Total & & $143+n$ & 165 & 20 & 6 & 3 & 0 & $337+n$\\ \hline \end{tabular*} \end{table} The purpose is to find a distribution for $n$ and hence for $337 + n$, the total number of killings in the period. \section{Demographic evidence about the number of killings}\label{secdemo} During this period Norway (like other European countries) underwent dramatic demographic changes. There is, furthermore, some disagreement about absolute numbers in given years during this period, but the most recent text book authors agree that when the plague first hit Norway in 1349 its population may have been 500,000 and perhaps slightly lower in the preceding half century. 
The recurrent plague epidemics reduced the population to its lowest point ca. 1450 to 1500, at ca. 200,000 or perhaps less [\citet{moseng2007}, pages 233--236, 294 and 295]. After this, the population started growing again and, in spite of recurrent epidemics, grew to 440,000 in the 1660s, the first really reliable assessment. These estimates concern Norway as it was then, before the country had lost almost ten percent of its territory and population due to Danish military misadventures. The data used here are, for the sake of comparison, only taken from present-day Norwegian territory, so about ten percent should be deducted from population estimates. With two exceptions there is no conspicuous geographic bias in the data. Telemark, which both in the Middle Ages and later had a reputation for violence, is very well represented in these data. Because in many cases the scene of the homicide, or the residence of the person paying or receiving compensation or fine, is geographically localized, or the provenience of the documents (their having come to an archive from a rural district) is known, and because family archives are preserved in rural districts as farm archives while similar urban archives are unknown, we can be fairly sure that scarcely any of these documents had an urban origin---which means that they reflect the situation in the countryside, not in the much more violent cities and towns. This may account for the discrepancy between the homicide estimates for the mid-sixteenth century (10--15 per 100,000) made from another type of data (accounts of fines and confiscations) by \citet{naeshagen2005}, and the somewhat lower estimates this study yields. Only about 3 percent of the population lived in the three larger cities, Bergen, Trondheim and Oslo, but their population showed an extreme inclination to homicide. Thus, Bergen, Norway's largest and most heterogeneous city, with a population of 6,000, had from 1562 to 1571 a homicide rate of 83 per 100,000 [\citet{sandnes1990}, pages 72--74]. 
Thus, with these rural data one should expect a somewhat lower estimate than N{\ae}shagen's (\citeyear{naeshagen2005}) 10 to 15 per 100,000 from the mid-sixteenth century, which includes cities. Central Norway (Tr$\o$ndelag) and Northern Norway with, respectively, 13 and 11 percent of the population [\citet{dyrvik1979}, page 18] seem not to be represented among these documents. Judging from the mid-sixteenth-century lists of fines and confiscations, homicides may have been rarer in Central Norway than in the rest of the country, while Northern Norway does not distinguish itself in any way [\citet{naeshagen2005}, page 416], and later data support the conclusion about Central Norway [\citet{sandnes1990}, page 79]. So supposing that the population of Norway as it was then was 500,000 in the period from 1300 to 1350, and roughly 200,000 in the period from 1350 to 1569, we must deduct 10\% to account for the territory lost. This yields 450,000 in 1300 to 1350, and 180,000 for the later period. Additionally, we deduct 24\% (13\% in Central Norway, 11\% in Northern Norway) for rural areas not covered, and another 3\% for the cities, yielding a deduction of 27\%. Thus, we estimate rural southern Norway to have had a population of 330,000 in the period from 1300 to 1350, and 130,000 from 1350 to 1569. It should be emphasized that these are rough estimates only. The next set of estimates concerns the rate of killings. Accepting the estimates from somewhat later of 10 to 15 per hundred thousand per year overall, but a much higher rate (83 per hundred thousand) for the 3\% of the urban population, suggests a rate of 8 to 13 per hundred thousand per year in rural southern Norway. Applied to the 50-year period before the plague and the 219 years after the plague, this yields a range of 3600 to 5850 for the number of killings in rural southern Norway during the period in question. 
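The arithmetic behind this range can be laid out explicitly. The following sketch (Python, using the rounded population and rate figures above) reproduces the quoted bounds:

```python
# Rough bounds on the number of killings in rural southern Norway,
# using the rounded population and rate figures from the text.
periods = [
    (330_000, 50),    # 1300-1350: estimated population, number of years
    (130_000, 219),   # 1350-1569
]
person_years = sum(pop * yrs for pop, yrs in periods)

low = person_years * 8 / 100_000     # 8 per 100,000 per year
high = person_years * 13 / 100_000   # 13 per 100,000 per year

# rounding to the nearest 50 gives the quoted range
print(round(low / 50) * 50, round(high / 50) * 50)   # 3600 5850
```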
\section{Models of the data}\label{secmodels} Problems of missing data are ubiquitous; indeed, every parameter not known with certainty can be regarded as ``missing data'' in some sense. In biostatistics, survival analysis can be regarded as a method for dealing with missing time-of-death data for patients still alive. But these problems are especially acute in history, geology, the interpretation of fossils, astronomy and archeology. In one instance, \citet{kadanehastorf1988} assumed known preservation probabilities for different kinds of burnt seeds in an archeological site in Peru. While the methods used here bear a relationship to problems of estimating the number of species [see \citet{bungefitzpatrick1993} for a review], the more closely related literature is that of dual systems estimators, growing out of the early work of \citet{petersen1896} and \citet{lincoln1930}, and applied to the problem of census coverage by \citet{wolter1986}. \subsection*{A. Simple dual systems} The simplest treatment of data of this kind is to amalgamate all mentions in the killer's archive together, resulting in the following $2 \times 2$ table. \begin{table} \caption{Reduced data}\label{tab2} \begin{tabular*}{295pt}{@{\extracolsep{4in minus 4in}}llcrc@{}} \hline & & \multicolumn{2}{c}{\textbf{Killer's archive?}}\\[-4pt] & & \multicolumn{2}{c}{\hrulefill}\\ & & \textbf{No} & \multicolumn{1}{c}{\textbf{Yes}} & \textbf{Total}\\ \hline \multirow{2}{*}{Mentioned in other sources?} & No & $n$ & 190 & $190+n$\\ & Yes & 143 & 4 & 147\\ [6pt] Total & & $143+n$ & 194 & $337+n$\\ \hline \end{tabular*} \end{table} To establish notation for this case, let the numbers in Table \ref{tab2} be represented as shown in Table \ref{tab3}. 
\begin{table}[b] \tablewidth=295pt \caption{General notation for Table \protect\ref{tab2}} \label{tab3} \begin{tabular*}{\tablewidth}{@{\extracolsep{4in minus 4in}}llccc@{}} \hline & & \multicolumn{2}{c}{\textbf{Killer's archive?}}\\[-4pt] & & \multicolumn{2}{c}{\hrulefill}\\ & & \textbf{No} & \multicolumn{1}{c}{\textbf{Yes}} & \textbf{Total}\\ \hline \multirow{2}{*}{Mentioned in other sources?} & No & $n_{00}$ & $n_{01}$ & $n_{0+}$\\ & Yes & $n_{10}$ & $n_{11}$ & $n_{1+}$\\ [6pt] Total & & $n_{+0}$ & $n_{+1}$ & $n_{++}$\\ \hline \end{tabular*} \legend{Note: $n_{00} = n$.} \end{table} The data can be taken to be multinomial, with probabilities $p_{ij}$, and hence likelihood \begin{equation} \label{eq1} L = \pmatrix{n_{++} \cr n_{00}, n_{01}, n_{10}, n_{11}} \mathop{\prod _{i=0,1}}_{j=0,1}p^{n_{ij}}_{ij}. \end{equation} A key assumption is that of independence, which would mean that whether a killing is known from the preservation of a letter from the killer's archive has no bearing on whether it is known from the other sources. In this application, such an assumption seems entirely reasonable. So if $p$ is the probability a killing is mentioned in other sources and $q$ is the probability a killing is known from at least one letter from the killer's archive, the assumption of independence can be written as \begin{equation} \label{eq2} p_{ij} = p^{i}\overline{p}{}^{\overline{i}}q^{j} \overline {q}{}^{\overline{j}},\qquad i=0,1; j=0,1, \end{equation} where $\overline{x} = 1-x$. Substituting (\ref{eq2}) into (\ref{eq1}) yields \begin{equation} \label{eq3} L = \pmatrix{n_{++} \cr n_{00}, n_{01},n_{10},n_{11}} p^{n_{1+}} \overline{p}{}^{n_{0+}}q^{n_{+1}} \overline{q}{}^{n_{+0}}. \end{equation} The parameters $p, q$ and $n$ are all that matter here, and $n$ is the parameter of interest. 
Any reasonable prior distribution (i.e., one that is not strongly opinionated) for $p$ and $q$ will lead to the same inference, given the values of $n_{0+},n_{1+},n_{+0}$ and $n_{+1}$ in this data set. Hence, we accept independent uniform priors for $p$ and~$q$. In view of the material in Section \ref{secdemo}, the prior of interest on the total number of killings, $n+337$, is uniform $(337, 5850)$. However, for the first computation reported here we use a much broader uniform prior on $n$ in order to show the uncertainty inherent in the likelihood. Using the well-known integration result, \begin{eqnarray} \label{eq4} \int^{1}_{0}x^{n}(1-x)^{m} \,dx &=& B(n+1, m+1) = \frac{\Gamma (n+1)\Gamma(m+1)}{\Gamma(n+m+2)} \nonumber\\[-8pt]\\[-8pt] &=& \frac{n!m!}{(n+m+1)!},\nonumber \end{eqnarray} the integrated likelihood is \begin{equation} \label{eq5} \pmatrix{n_{++} \cr n_{00},n_{01},n_{10},n_{11}} \frac{n_{1+}! n_{0+}!n_{+1}!n_{+0}!}{[(n_{++}+1)!]^{2}}. \end{equation} Now $n_{01}, n_{10},n_{11},n_{+1}$ and $n_{1+}$ do not depend on $n$. Hence, these factors do not matter for the integrated likelihood, yielding an integrated likelihood proportional to \begin{equation} \label{eq6} \frac{(n_{0+})!(n_{+0})!}{n_{00}!(n_{++}+1)(n_{++}+1)!} = \frac{(n+190)! (n+143)!}{n!(n+338)(n+338)!}. \end{equation} Figure \ref{fig1} plots, as a probability distribution, the quantity $n+337$, the total number of killings. Implicitly the prior on $n$ used in this calculation is uniform with an upper bound of at least 25,000, which is much higher than we find credible. Nonetheless, for display purposes, we show it. \begin{figure} \includegraphics{612f01.eps} \caption{Simple dual systems integrated likelihood.}\label{fig1} \end{figure} The quantiles of the data in Figure \ref{fig1} are reported in Table \ref{tab4}. 
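The integrated likelihood (\ref{eq6}) is straightforward to evaluate numerically with log-gamma functions. The sketch below (Python, not the authors' original code; the support is truncated at $n=200{,}000$, far out in the $n^{-6}$ tail) normalizes it over a grid of $n$ and reads off the posterior median of $n+337$:

```python
import math

# log of the integrated likelihood (6):
#   (n+190)! (n+143)! / ( n! (n+338) (n+338)! )
def loglik(n):
    return (math.lgamma(n + 191) + math.lgamma(n + 144)
            - math.lgamma(n + 1) - math.log(n + 338)
            - math.lgamma(n + 339))

N = 200_000                      # truncation; the tail decays like n^(-6)
logL = [loglik(n) for n in range(N + 1)]
m = max(logL)
w = [math.exp(v - m) for v in logL]
total = sum(w)
probs = [v / total for v in w]   # posterior of n under a (truncated) flat prior

# posterior median of the total number of killings, n + 337
cum = 0.0
for n, pr in enumerate(probs):
    cum += pr
    if cum >= 0.5:
        median = n + 337
        break
```

The likelihood peaks at $n=4528$ (a total of 4865 killings), and the long right tail pushes the median of $n+337$ up toward the value reported in Table~\ref{tab4}.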
Together Figure~\ref{fig1} and Table \ref{tab4} suggest substantial uncertainty about the total number of killings; the middle 80\% of the distribution lies between 3337 and 10,837, a gap of 7500 killings; the median of the distribution is 5837. \begin{table}[b] \caption{Quantiles for Figure \protect\ref{fig1}}\label{tab4} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lccccccccc@{}} \hline Quantile & 3337 & 3837 & 4337 & 4837 & 5837 & 6337 & 7337 & 8337 & 10,837\\ Probability & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9\\ \hline \end{tabular*} \end{table} This suggests the desirability of making more use of the data in Table \ref{tab1}, and in particular the data on the number of letters found in each killer's archive. \subsection*{B. Dual systems binomial model} To do so, we now establish general notation for Table \ref{tab1}, in Table \ref{tab5}. Let $\boldn= (n_{00}, n_{01}, n_{02}, \ldots, n_{05}, n_{10}, n_{11},\ldots, n_{15})$ and $\boldn! = \prod^{5}_{i=0} \prod^{1}_{j=0} n_{ij}!$. Then the multinomial likelihood can be written as \begin{equation} \label{eq7} L = \frac{n_{++}!}{\boldn!} \mathop{\prod_{i=0,1}}_{j=0,\ldots,5}p^{n_{ij}}_{ij}. \end{equation} Again imposing independence, we have \begin{equation} \label{eq8} p_{ij} = r_{j}s^{i} \overline{s}{}^{\overline{i}},\qquad j=0,\ldots, 5; i=0,1, \end{equation} where $r_{j}$ is the probability of $j$ surviving letters in the archive and $s$ is the probability of being mentioned in other sources. 
\begin{table} \caption{Notation for Table \protect\ref{tab1}} \label{tab5} \begin{tabular*}{\tablewidth}{@{\extracolsep{4in minus 4in}}lcccccccc@{}} \hline & & \multicolumn{6}{c}{\textbf{Number of letters in killer's archive}} &\\[-4pt] & & \multicolumn{6}{l}{\hrulefill} &\\ & & \multicolumn{1}{c}{\textbf{0}} & \multicolumn{1}{c}{\textbf{1}} & \multicolumn{1}{c}{\textbf{2}} & \multicolumn{1}{c}{\textbf{3}} & \multicolumn{1}{c}{\textbf{4}} & \multicolumn{1}{c}{\textbf{5}} & \textbf{Total}\\ \hline \multirow{2}{*}{Mentioned in other sources?} & No & $n_{00}$ & $n_{01}$ & $n_{02}$ & $n_{03}$ & $n_{04}$ & $n_{05}$ & $n_{0+}$\\ & Yes & $n_{10}$ & $n_{11}$ & $n_{12}$ & $n_{13}$ & $n_{14}$ & $n_{15}$ & $n_{1+}$\\ [6pt] Total & & $n_{+0}$ & $n_{+1}$ & $n_{+2}$ & $n_{+3}$ & $n_{+4}$ & $n_{+5}$ & $n_{++}$\\ \hline \end{tabular*} \end{table} Substituting (\ref{eq8}) into (\ref{eq7}), we obtain \begin{equation} \label{eq9} L= \frac{n_{++}!}{\boldn!} \prod^{5}_{j=0} r^{n_{+j}}_{j} s^{n_{1+}}\overline{s}{}^{n_{0+}}. \end{equation} A simple model to impose on $\boldr= (r_{0}, r_{1},\ldots, r_{5})$ is binomial $(5,p)$, where $p$ is here the probability that each letter in a killer's archive survives (this assumption is revisited in subsection~C, ahead). With the binomial assumption, \begin{equation} \label{eq10} r_{j} = \pmatrix{5 \cr j} p^{j} \overline{p}{}^{5-j},\qquad j=0,\ldots,5. \end{equation} Then \begin{equation} \label{eq11} \prod^{5}_{j=0} r^{n_{+j}}_{j} = \prod^{5}_{j=0} \pmatrix{5 \cr j,5-j}^{n_{+j}} p^{\sum^{5}_{j=0}jn_{+j}}\overline{p}{}^{\sum ^{5}_{j=0}(5-j)n_{+j}}. \end{equation} Let $S_{1} = \sum^{5}_{j=0}j n_{+j}$. Then $\sum^{5}_{j=0}(5-j)n_{+j} = 5n_{++}-S_{1}$. Hence, \begin{equation} \label{eq12} \prod^{5}_{j=0}r_{j}^{n_{+j}}= \prod^{5}_{j=0} \pmatrix{5 \cr j,5-j}^{n_{+j}}p^{S_{1}}\overline{p}{}^{5n_{++}-S_{1}}. 
\end{equation} The first term on the right can be written for our data as \begin{eqnarray} \label{eq13}\quad &&\prod^{5}_{j=0} \pmatrix{5 \cr j,5-j}^{n_{+j}}\nonumber\\[-8pt]\\[-8pt] &&\qquad= \biggl(\frac {5!}{0!5!} \biggr)^{n+143} \biggl( \frac{5!}{1!4!} \biggr)^{165} \biggl(\frac{5!}{2!3!} \biggr)^{20} \biggl(\frac{5!}{3!2!} \biggr)^{6} \biggl( \frac{5!}{4!1!} \biggr)^{3} \biggl(\frac{5!}{0!5!} \biggr)^{0}.\nonumber \end{eqnarray} Only the first term has an exponent that depends on a parameter, and that term is 1 raised to a power, so the entire product is constant with respect to the parameters, and can be dropped. Similarly, in the terms for $\boldn!$ only the first, $n !$, depends on the parameters, and the others can be dropped: \begin{equation} \label{eq14} L \propto\frac{(n_{++})!}{n!} p^{S_{1}} \overline{p}{}^{5n_{++}-S_{1}} s^{n_{1+}}\overline{s}{}^{n_{0+}}. \end{equation} Again, using (\ref{eq4}) and independent uniform distributions on $p$ and $s$, the integrated likelihood for $n$ is \begin{eqnarray} \label{eq15} && \frac{(n_{++})!}{n!} \frac{(S_{1})! (5n_{++}-S_{1})!}{(5n_{++}+1)!} \frac{(n_{1+})!(n_{0+})!}{(n_{++}+1)!} \nonumber\\[-8pt]\\[-8pt] &&\qquad= \frac{S_{1}!(5n_{++}-S_{1})! (n_{1+})! (n_{0+})!}{n!(5n_{++}+1)!(n_{++}+1)}.\nonumber \end{eqnarray} Finally, $S_{1}$ and $n_{1+}$ also do not depend on $n$, so those terms can be dropped as well, yielding the integrated likelihood proportional to \begin{equation} \label{eq16} \frac{(5n_{++}-S_{1})! (n_{0+})!}{n!(5n_{++}+1)! (n_{++}+1)}. \end{equation} Figure \ref{fig2} plots the posterior distribution for $n+337$ whose quantiles are given in Table \ref{tab6}. Here the median is 1155. \begin{figure} \includegraphics{612f02.eps} \caption{Binomial dual systems posterior distribution. 
\textup{Note that Figure \protect\ref{fig1} has a wider scale of the number of killings.}}\label{fig2} \end{figure} \begin{table}[b] \caption{Quantiles for dual systems posterior distribution under the binomial model}\label{tab6} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lccccccccc@{}} \hline Quantile & 978 & 1037 & 1076 & 1116 & 1155 & 1195 & 1234 & 1293 & 1372\\ Probability & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9\\ \hline \end{tabular*} \end{table} Thus, this model suggests markedly fewer killings than the simple dual systems estimate reported in Figure \ref{fig1} and Table \ref{tab4}. \subsection*{C. Com-binomial model} The binomial model implies that the survival of a document from a killer's archive is an event independent of the survival of other documents from the same killer's archive. Since all five letters are addressed to the same person (the killer), it is likely that they would tend to be stored together. Hence, it seems prudent to expand the model to allow for positive correlation among the events of survival of letters addressed to the same killer. [A referee suggests that an overly tidy daughter-in-law may have kept only one letter, leading to negative correlation. While that may have happened in a few instances, we think that joint physical destruction (fire and water) is far more likely, and hence expect positive correlation in the survival event of documents from a killer's archive.] One model that allows for such correlation is the com-binomial distribution [\citet{shmueli-etal2005}]. The pdf for this distribution is given by \begin{equation} \label{eq17}\quad P\{X=j|p,\nu\} = \frac{p^{j}(1-p)^{m-j} {m\choose j,m-j}^{\nu}}{\sum^{m}_{k=0} p^{k}(1-p)^{m-k} {m\choose k, m-k}^{\nu}},\qquad j = 0, 1,\ldots, m. \end{equation} When $\nu=1$, this distribution reduces to the binomial distribution, and hence to independence of survival of the documents sent to a given killer. 
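A direct implementation of (\ref{eq17}) makes this reduction easy to check. The sketch below (Python, not the authors' code; the value $p=0.3$ is an arbitrary illustration) computes the com-binomial pmf and confirms that $\nu=1$ recovers the binomial model of subsection B:

```python
from math import comb

def com_binomial_pmf(m, p, nu):
    # unnormalized weights p^j (1-p)^(m-j) C(m,j)^nu for j = 0..m, as in (17)
    w = [p**j * (1 - p)**(m - j) * comb(m, j)**nu for j in range(m + 1)]
    z = sum(w)                   # the normalizing constant
    return [v / z for v in w]

m, p = 5, 0.3                    # illustrative values; m = 5 as in the data
pmf = com_binomial_pmf(m, p, nu=1.0)
binom = [comb(m, j) * p**j * (1 - p)**(m - j) for j in range(m + 1)]
assert all(abs(a - b) < 1e-12 for a, b in zip(pmf, binom))

# nu < 1 moves mass toward the endpoints 0 and m relative to the binomial
# (positively correlated survival); nu > 1 concentrates the mass instead.
spread = com_binomial_pmf(m, p, nu=0.2)
assert spread[0] + spread[m] > pmf[0] + pmf[m]
```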
For $\nu> 1$, the survival would be negatively correlated. For $\nu< 1$, the survival would be positively correlated. In this application, the latter is expected. As $\nu\rightarrow\infty$, the probability would become concentrated on a single point. As $\nu \rightarrow- \infty$, it would become concentrated on 0 and $m$. Because this distribution is unfamiliar, it is perhaps useful to look at some examples, displayed in Figure \ref{fig3} for the case $m=5$, which is the value of $m$ in this application. In this figure, looking across rows, as $\nu$ increases, the probability tends to concentrate on a single point (except at $p= 1/2$, where symmetry leads to two dominant points, 2 and 3). \begin{figure} \includegraphics{612f03.eps} \caption{Com-binomial distribution for various values of $p$ and $\nu$.}\label{fig3} \end{figure} As alluded to above, values of $\nu$ above 1 do not make sense in this application. Therefore, the analysis to be presented imposes the condition $\nu\leq1$ as a hard constraint, by using a prior that puts zero probability on the region $\nu> 1$. To incorporate the com-binomial distribution into the model, $r_{j}$ in (\ref{eq10}) is replaced by the expression in (\ref{eq17}). This yields the likelihood \begin{eqnarray} \label{eq18} L & = & \frac{n_{++}!}{\boldn!} s^{n_{1+}} \overline{s}{}^{n_{0+}} \prod^{5}_{j=0} r_{j}^{n_{+j}} \nonumber\\[-8pt]\\[-8pt] & = & \frac{n_{++}!}{\boldn!} s^{n_{1+}}\overline{s}{}^{n_{0+}}\prod ^{5}_{j=0} \biggl[ \frac{p^{j}(1-p)^{m-j}{m\choose j,m-j}^{\nu}} { \sum^{m}_{k=0} p^{k}(1-p)^{m-k} {m\choose k,m-k}^{\nu}} \biggr]^{n_{+j}}. 
\nonumber \end{eqnarray} It is convenient to divide the numerator and denominator in the product term by the factor $(1-p)^{m}(m !)^{\nu}$, yielding \begin{equation} \label{eq19} \frac{p^{j}(1-p)^{m-j}{m\choose j,m-j}^{\nu}}{\sum^{m}_{k=0} p^{k}(1-p)^{m-k}{m\choose k,m-k}^{\nu}} = \frac{\theta^{j}/[j!(m-j)!]^{\nu}}{\sum^{5}_{k=0} \theta^{k}/[k !(m-k)!]^{\nu}}, \end{equation} where $\theta= p/(1-p)$. It is further convenient to rewrite (\ref{eq19}) as follows: \begin{eqnarray} \label{eq20} && \theta^{j}\bigg/ \Biggl\{ \bigl[j!(m-j)! \bigr]^{\nu} \Biggl(\sum^{5}_{k=0} \theta ^{k}/ \bigl[k!(m-k)!\bigr]^{\nu} \Biggr) \Biggr\} \nonumber \\ &&\qquad= e^{j \log\theta-\nu\log[j!(m-j)!]}/Z(\theta,\nu) \\ &&\eqntext{\mbox{where }\displaystyle Z(\theta,\nu) = \sum^{5}_{k=0} \theta^{k}/\bigl[k!(m-k)!\bigr]^{\nu}.} \end{eqnarray} Substituting (\ref{eq20}) into (\ref{eq18}) yields \begin{equation} \label{eq21} L = \frac{n_{++}!}{\boldn!} s^{n_{1+}}\overline{s}{}^{n_{0+}} e^{s_{1}\log\theta- s_{2}\nu}/\bigl(Z(\theta, \nu)\bigr)^{n_{++}}, \end{equation} where $s_{1}= \sum^{5}_{j=1}j n_{+j}$ and $s_{2}= \sum^{5}_{j=0} n_{+j} \log(j!(5-j)!)$.\eject Once again $s$ can be integrated with respect to a uniform prior, yielding the integrated likelihood \begin{equation} \label{eq22} \frac{n_{++}!}{\boldn!} \frac{(n_{1+})!(n_{0+})!}{(n_{++}+1)!} e^{s_{1}\log\theta- s_{2}\nu}/Z(\theta, \nu)^{n_{++}}. \end{equation} Finally, factors not involving $\theta, \nu$ and $n$ can be eliminated, yielding \begin{equation} \label{eq23} \frac{(n_{0+})!}{n!(n_{++}+1)} e^{s_{1}\log\theta- s_{2}\nu} Z(\theta, \nu)^{-n_{++}}. \end{equation} In order to have results comparable to those in Figure \ref{fig2}, proper account must be taken of the transformation from $p$ to $\theta $. The differentials are related by \begin{equation} \label{eq24} dp = \frac{d\theta}{(1+\theta)^{2}}, \end{equation} so $p$ uniform on $(0,1)$ is equivalent to $\theta$ having the density $1/(1+\theta)^{2}$ on $(0,\infty)$. 
Thus, the form of likelihood used here is (\ref{eq23}) multiplied by the prior density $1/(1+\theta)^{2}$ derived from (\ref{eq24}), that is, \begin{equation} \label{eq25} \frac{(n_{0+})!}{(n_{++}+1) n!} e^{s_{1}\log\theta- s_{2}\nu} \frac {Z(\theta,\nu)^{-n_{++}}}{(1+\theta)^{2}}. \end{equation} Using a grid method to integrate (\ref{eq25}) with respect to $\theta $ and $\nu$ yields the posterior distribution in Figure \ref{fig4}, with quantiles given in Table \ref{tab7}. The median for this model is 1143, about the same as for the binomial model. The results of the com-binomial in Figure \ref{fig4} are very similar to those of the binomial in Figure \ref{fig2}. The reason for this is that the likelihood for $\nu$ strongly indicates a preference for $\nu =1$. Glancing back at Table \ref{tab1}, the data are strongly piled up at 0 and $1$ letters from a killer's archive; there are no killings at all for which all five letters have survived. Therefore, the data look much more like they would at $\nu= \infty$, which makes no substantive sense in this problem. Given that the hard constraint $\nu\leq1$ has been imposed, the integrated posterior puts most weight on the largest $\nu$ permitted, that is, $\nu= 1$; the results therefore resemble those of the binomial model reported in Figure \ref{fig2}. While the generalization afforded by the com-binomial did not lead to a substantially different integrated likelihood, it was important to see whether positive correlation in the survival of letters sent to the killer was a dominant feature of the data. This turned out not to be the case. \section{Conclusion} An assumption underlying our model is that every killing resulted in the five letters being sent to the killer. It is possible that this is not true, and possible that the propensity to send the requisite letters varied by geography. It is also possible that some geographical areas were more prone to document destruction by fire, flood, etc., and such areas might be those less carefully administered. 
We leave these possibilities for further exploration. This paper presents three analyses of the number of killings in rural Norway during the period in question. The first (Table \ref{tab4} and Figure \ref{fig1}) used only the presence or absence of a mention in the killer's archive, and found huge uncertainty in the number of killings. The latter two, reported, respectively, in Table \ref{tab6} and Figure~\ref{fig2}, and in Table \ref{tab7} and Figure \ref {fig4}, are so similar that substantively they are the same. The distribution reported indicates that perhaps rural Norway was more peaceful in this period than had previously been thought. \begin{figure} \includegraphics{612f04.eps} \caption{Com-binomial posterior distribution. \textup{Note that Figure \protect\ref{fig1} has a wider scale for the number of killings.}}\label{fig4} \end{figure} \begin{table}[b] \caption{Quantiles for dual systems integrated likelihood under the com-binomial model}\label{tab7} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lccccccccc@{}} \hline Quantile & 959 & 1021 & 1051 & 1113 & 1143 & 1174 & 1235 & 1265 & 1357\\ Probability & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9\\ \hline \end{tabular*} \end{table} \section*{Acknowledgments} The authors thank their good friend Baruch Fischoff for introducing them and suggesting that this problem might interest us both. Sarah Brockwell did much to clean the data, and Anthony Brockwell helped with the data structure. Jong Soo Lee also contributed to the data handling. Conversations with Rebecca Nugent, Howard Seltman and Andrew Thomas about \texttt{R} were also very helpful. A referee was very helpful in correcting our rough demographic estimates of the numbers of killings. 
\begin{supplement} \stitle{Criminal homicides in Norwegian letters 1300 to 1569} \slink[doi]{10.1214/12-AOAS612SUPP} \sdatatype{.pdf} \sfilename{aoas612\_supp.pdf} \sdescription{A list of letters found in Norway concerning killings during the period of 1300 to 1569.} \end{supplement}
\section{Introduction}\label{s:intro} Cosmic inflation is the standard paradigm that provides the initial conditions for structure formation and the anisotropies in the cosmic microwave background (CMB) as well as the global properties of the spacetime \cite{Starobinsky:1980te, Sato:1980yn, Guth:1980zm}. In the inflationary scenario, the accelerated expansion stretches quantum fluctuations on microscopic scales to cosmological scales, providing the seed for macroscopic observables such as the anisotropies in the CMB. The wavelengths of the fluctuations are extremely short in the earliest epoch. Thus cosmological observations provide a window into short-distance physics which is beyond the reach of terrestrial experiments. The cosmological fluctuations, which have been generated quantum mechanically, are statistical in nature. In the simplest single-field slow-roll inflation models, they are approximately Gaussian-distributed and their power spectrum is nearly scale-invariant. We therefore usually characterize them over the observable range of scales in terms of a power-law-type power spectrum, which is parametrized simply by two or three parameters: the amplitude, the spectral index, and the running. Though the two- (or three-) parameter spectrum consistently explains the observed CMB anisotropies \cite{Komatsu:2010fb}, it could miss valuable information on the physics behind inflation. Recent high-resolution CMB data already imply the presence of fine features in the primordial power spectrum. Indeed, several groups, including one of the present authors (JY), reported a statistically significant discrepancy between the prediction from a power-law primordial power spectrum and the CMB data \cite{TocchiniValentini:2005ja, Nagata:2008tk}. Using non-parametric reconstruction methods, they have found large anomalies in the reconstructed power spectrum, which are localized around wavenumbers $k \simeq 0.003~\mathrm{Mpc}^{-1}$ and $k \simeq 0.009~\mathrm{Mpc}^{-1}$.
On the other hand, a number of effects that cause deviations from a power-law power spectrum have been investigated in the literature, including trans-Planckian effects \cite{Martin:2000xs, Danielsson:2002kx,Schalm:2004qk}, a burst of particle production \cite{Romano:2008rr, Barnaby:2009dd}, temporal violation of the slow-roll approximation \cite{Leach:2000yw, Saito:2008em, Starobinsky:1992ts, Adams:2001vc, Kaloper:2003nv, Battefeld:2010rf}, turns in the inflationary trajectory \cite{Burgess:2002ub, Achucarro:2010da, Shiu:2011qw}, a sharp waterfall field transition \cite{Abolhasani:2012px}, and a sudden change of sound velocity \cite{Nakashima:2010sa}. They modify the evolution of the inflaton fluctuations in their own way and leave their characteristic signatures on the primordial power spectrum. Thus fine features in the power spectrum could contain rich information such as detailed structures of the inflaton Lagrangian or the existence of other degrees of freedom. In this paper, we consider the effect of coherent oscillations of a heavy scalar field whose mass exceeds the Hubble scale, $m \gg H$. Heavy scalar fields are ubiquitous in models of inflation embedded in supergravity and string theory. They appear as moduli fields, Kaluza-Klein modes, the scalar supersymmetric partner of the inflaton, or others. In the usual treatment, the dynamics of a heavy scalar field is neglected by assuming that it is stuck at its potential minimum during inflation because its excitations decay quickly \cite{Yamaguchi:2005qm}. The instantaneously-excited oscillations, however, can be important when we measure the primordial power spectrum with high resolution. An impact of the excitation has already been discussed in Ref. \cite{Burgess:2002ub} for a hybrid-inflation model. More recently, Refs. \cite{Shiu:2011qw, Cespedes:2012hu, Gao:2012} have discussed that oscillations excited by a sharp turn in a multi-dimensional potential leave a ringing signature in the primordial power spectrum.
In both cases, the signatures arise because the evolution of the fluctuations becomes non-adiabatic \cite{Martin:2000xs} through a sudden energy transfer between the inflaton and a heavy scalar field through their couplings in the potential. In addition, it has been discussed in Ref. \cite{Chen:2011zf} that a ringing feature in the power spectrum is induced through the gravitational couplings without considering direct couplings between the inflaton and a heavy scalar field. In this paper, instead, we point out that a resonant enhancement of the fluctuations efficiently occurs deep inside the horizon, $k/a \sim m \gg H$, through derivative couplings with a heavy scalar field. The derivative couplings are allowed even if a shift symmetry is imposed to ensure the flatness of the inflaton potential. There is then no reason for them to be absent from the action, from the effective-field-theory point of view \cite{Weinberg:2008hq, Khosravi:2012qg}. Though the derivative couplings are usually irrelevant at low energy scale, they can play an important role in the evolution of the inflaton fluctuations in the resonance epoch. This is because the inflaton fluctuations and the heavy scalar field rapidly oscillate in the resonance epoch. Hence, effects of the derivative couplings on the evolution of the inflaton fluctuations are large there, while their effects on the background evolution are relatively small because the derivative of the background inflaton field is slow-roll suppressed. In the following sections, we estimate the enhancement of the primordial power spectrum by the resonance assuming that non-derivative couplings are sufficiently suppressed. In contrast to other effects such as slow-roll violation \cite{Kumazaki:2011eb}, the feature induced by the resonance can be sharp and large even when adiabaticity is only mildly violated because the resonance coherently accumulates small effects.
The instantaneously excited oscillations do not significantly affect the slow-roll background evolution in this case, unless the heavy scalar field dominates the energy density, because the flatness of the inflaton potential is ensured even during the oscillations. The organization of this paper is as follows. In \S \ref{s:model}, we present our model to realize an efficient enhancement of the fluctuations and discuss conditions on the model required for slow-roll inflation. In \S \ref{s:amp}, we estimate the enhancement of the fluctuations by the parametric resonance and discuss the consistency of our model with the anomalies observed in the CMB spectrum. Finally, we summarize this paper in \S \ref{s:summary}. \section{The model}\label{s:model} In this section, we introduce our model and discuss conditions on model parameters required for slow-roll inflation. \subsection{An inflationary model with a heavy scalar field} We consider a model with a heavy scalar field with mass $m \gg H$ which couples to the inflaton field in the following manner, \begin{align} S_{m} &\equiv -\int \mathrm{d}^4x \sqrt{-g}\left[ \frac{1}{2}(\partial \phi)^2 + V(\phi) + \frac{1}{2}(\partial \chi)^2 + \frac{m^2}{2}\chi^2 + K_n + K_d \right], \label{eq:action} \end{align} with \begin{align} K_n &\equiv \frac{\lambda_n}{2\Lambda_n}\chi(\partial \phi)^2, \label{eq:ncouple} \end{align} and \begin{align} K_d &\equiv \frac{\lambda_{d1}}{4\Lambda_d^4}(\partial \chi)^2(\partial \phi)^2 + \frac{\lambda_{d2}}{4\Lambda_d^4}(\partial \chi \cdot \partial \phi)^2, \label{eq:dcouple} \end{align} where $\phi$ is the inflaton field and $\chi$ is the heavy scalar field. The potential $V(\phi)$ is assumed to be sufficiently flat; \begin{align}\label{eq:slowroll} \epsilon_V \ll 1,~|\eta_V| \ll 1, \end{align} where \begin{equation} \epsilon_V \equiv \frac{M_p^2}{2}\left(\frac{V'}{V}\right)^2, \quad \eta_V \equiv M_p^2\frac{V''}{V}, \end{equation} are the slow-roll parameters.
Here, $M_p=2.4 \times 10^{18}\mathrm{GeV}$ is the reduced Planck mass. The heavy scalar field $\chi$ is assumed to be subdominant; \begin{align}\label{eq:sub} f_{\chi} \ll 1, \end{align} where \begin{equation} f_{\chi} \equiv \frac{\rho_{\chi}}{\rho} \simeq \frac{\dot{\chi}^2+m^2\chi^2}{6M_p^2H^2}, \end{equation} is the fraction of its energy density relative to the total one. We assume that $\chi$ decays with a rate $\Gamma$, which satisfies $H \ll \Gamma \ll m$. The derivative couplings $K_n$ and $K_d$ are expected to appear in the action if we consider the action (\ref{eq:action}) as the leading terms in a generic effective field theory \cite{Weinberg:2008hq, Khosravi:2012qg}. Though other terms are also allowed in general, the derivative couplings $K_n$ and $K_d$ provide the most general couplings between the inflaton and the heavy scalar field at the leading order in $1/\Lambda_n$ and $1/\Lambda_d$ in a model with a parity symmetry, $\phi \to -\phi$, and a shift symmetry, $\phi \to \phi+c$. Moreover, the couplings $K_d$ cannot be forbidden while the couplings $K_n$ can be suppressed by imposing a shift symmetry on $\chi$. Higher-order terms in $1/\Lambda_n$ and $1/\Lambda_d$ are also expected to appear in general, though we have suppressed them in Eq. (\ref{eq:action}). To ensure that contributions from these terms can be safely neglected, we assume hereafter that the background fields satisfy the following conditions, \begin{equation}\label{eq:irrelevant} \chi \ll \Lambda_n, \quad \dot{\phi},~\dot{\chi} \ll \Lambda_d^2. \end{equation} We have also suppressed terms $(\partial \phi)^4$ and $(\partial \chi)^4$, because they have little effect on our analysis under the conditions above. In general, non-derivative couplings such as $\chi V(\phi)$ could appear since the potential breaks the shift symmetry. However, we assume that the non-derivative couplings are sufficiently suppressed in the following analysis for simplicity.
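As a small numerical aid (a sketch; the quadratic potential and the field value below are illustrative placeholders, not choices made in this paper), the flatness conditions on $V(\phi)$ can be checked for any candidate potential:

```python
def slow_roll_params(V, dV, ddV, phi, Mp=1.0):
    """Potential slow-roll parameters:
    eps_V = (Mp^2 / 2) (V'/V)^2,  eta_V = Mp^2 V''/V."""
    eps_V = 0.5 * Mp**2 * (dV(phi) / V(phi)) ** 2
    eta_V = Mp**2 * ddV(phi) / V(phi)
    return eps_V, eta_V

# Illustrative quadratic potential V = (1/2) Msq phi^2 (units Mp = 1):
# both parameters reduce to 2/phi^2, small for phi >> Mp.
Msq = 1.0e-10
V = lambda phi: 0.5 * Msq * phi**2
dV = lambda phi: Msq * phi
ddV = lambda phi: Msq
eps_V, eta_V = slow_roll_params(V, dV, ddV, phi=15.0)
```

For the super-Planckian field value chosen here, both parameters come out well below unity, as the conditions require.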
Even if we include non-derivative couplings, they modify only the background evolution and do not directly affect the evolution of the fluctuations much. We can also find the couplings $K_n$ and $K_d$ in specific models of inflation. For example, the couplings $K_n$ naturally appear if the inflaton is a pseudo-Nambu--Goldstone boson, where the heavy scalar field is provided by the symmetry breaking field. Moreover, they can be found in the Einstein-frame action if a model contains additional scalar degrees of freedom with a non-minimal coupling to curvature, and also in the supergravity action with higher-order terms in the K\"{a}hler potential. In the latter case, the scalar supersymmetric partner of the inflaton can be a candidate for the heavy scalar field, which usually has large mass corrections from the Planck-suppressed terms in the F-term potential since it is not protected by a shift symmetry. On the other hand, the couplings $K_d$ can be found in models such as brane inflation. In brane inflation, for example, we can see that they appear in the DBI action by expanding its square root, with the heavy scalar field provided by KK modes or brane coordinates other than the inflaton. In the case of brane inflation models, where all the higher-order terms neglected in the action (\ref{eq:action}) are known, we may not need to impose the conditions (\ref{eq:irrelevant}), though the evolution equations should then be solved including their effects. In addition, such models have non-derivative couplings, and hence the analysis should include them. These cases will be discussed in another paper.
\subsection{The background evolution} The evolution equations for the background fields can be written as \begin{align} \dot{\pi}_{\phi} + 3H\pi_{\phi} + V' = 0, \label{eq:ibgeom} \\ \dot{\pi}_{\chi} + (3H+\Gamma)\pi_{\chi} + m^2\chi + \frac{\lambda_n}{2\Lambda_n}\dot{\phi}^2= 0, \label{eq:hbgeom} \end{align} where $\pi_{\phi}$ and $\pi_{\chi}$ are conjugate momenta of the scalar fields, \begin{align} \pi_{\phi} &\equiv \left[ 1 + \lambda_n\frac{\chi}{\Lambda_n} + \left(\lambda_{d1}+\lambda_{d2}\right)\frac{\dot{\chi}^2}{2\Lambda_d^4} \right]\dot{\phi} \label{eq:icm} \\ &\equiv K_{1I}\dot{\boldsymbol{\phi}}^{I},\\ \pi_{\chi} &\equiv \left[ 1 + \left(\lambda_{d1}+\lambda_{d2}\right)\frac{\dot{\phi}^2}{2\Lambda_d^4} \right]\dot{\chi} \label{eq:hcm}\\ &\equiv K_{2I}\dot{\boldsymbol{\phi}}^{I}. \end{align} Here we have introduced the notation \begin{equation} K \equiv \frac{1}{2}(\partial \phi)^2 + \frac{1}{2}(\partial \chi)^2 + K_n + K_d, \end{equation} and $\boldsymbol{\phi}^{(1)} \equiv \phi,~\boldsymbol{\phi}^{(2)} \equiv \chi$ for brevity.\footnote{We use the summation convention for repeated indices throughout the paper and parentheses for components in the field space.} $K_{IJ}$ represents $K$ differentiated by $X^{IJ} \equiv -\partial \boldsymbol{\phi}^I \cdot \partial \boldsymbol{\phi}^J/2$. A dissipation term, $\Gamma \pi_{\chi}$, has been added in Eq. (\ref{eq:hbgeom}) to incorporate the effect of the decay of $\chi$. We can obtain the solutions for the background evolution as usual, \begin{align} \pi_{\phi}(t) &\simeq -\frac{V'}{3H} \qquad (\text{slow-roll solution}), \label{eq:ibgsol} \\ \chi(t) &\simeq \chi_0 e^{-\Gamma t}\cos(mt), \label{eq:hbgsol} \end{align} if the conditions (\ref{eq:irrelevant}) are satisfied at the onset of the oscillations, $t=0$. We discuss the consistency of these solutions in some detail here.
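As a numerical sanity check on the oscillatory behaviour of $\chi$ (a sketch with illustrative parameter values obeying $H \ll \Gamma \ll m$, not values fit to data), the $\chi$ equation with the slow-roll source dropped is just a damped oscillator; an RK4 integration can be compared against the exact underdamped solution, a rapid cosine of frequency $\approx m$ under a slowly decaying envelope:

```python
import math

# Illustrative values with H << Gamma << m (not fit to data).
M_CHI, HUBBLE, GAMMA, CHI0 = 50.0, 1.0, 5.0, 1.0

def evolve_chi(dt=5e-4, T=1.0):
    """RK4 integration of chi'' + (3H + Gamma) chi' + m^2 chi = 0,
    i.e. the chi background equation with the slow-roll source dropped."""
    g = 3.0 * HUBBLE + GAMMA
    def acc(chi, v):
        return -g * v - M_CHI * M_CHI * chi
    chi, v, t = CHI0, 0.0, 0.0
    traj = [(t, chi)]
    for _ in range(int(T / dt)):
        k1c, k1v = v, acc(chi, v)
        k2c, k2v = v + 0.5 * dt * k1v, acc(chi + 0.5 * dt * k1c, v + 0.5 * dt * k1v)
        k3c, k3v = v + 0.5 * dt * k2v, acc(chi + 0.5 * dt * k2c, v + 0.5 * dt * k2v)
        k4c, k4v = v + dt * k3v, acc(chi + dt * k3c, v + dt * k3v)
        chi += dt * (k1c + 2.0 * k2c + 2.0 * k3c + k4c) / 6.0
        v += dt * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
        t += dt
        traj.append((t, chi))
    return traj

def exact_chi(t):
    """Exact underdamped solution with chi(0) = chi0, chi'(0) = 0."""
    g = 3.0 * HUBBLE + GAMMA
    w = math.sqrt(M_CHI * M_CHI - 0.25 * g * g)
    return CHI0 * math.exp(-0.5 * g * t) * (
        math.cos(w * t) + (0.5 * g / w) * math.sin(w * t))
```

The oscillation frequency is $\sqrt{m^2 - (3H+\Gamma)^2/4} \approx m$ for $\Gamma, H \ll m$, consistent with the rapidly oscillating form used in the text.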
Before proceeding to a discussion on the consistency of the solutions (\ref{eq:ibgsol}) and (\ref{eq:hbgsol}), we discuss whether inflation occurs or not during the oscillations. Inflation is realized if the slow-variation parameter is small; \begin{align}\label{eq:eps} \epsilon_H \equiv -\frac{\dot{H}}{H^2} \ll 1. \end{align} Using the Friedmann equation, $\dot{H}=-K_{IJ}X^{IJ}/M_p^2$, Eq. (\ref{eq:eps}) can be rewritten as \begin{align} \epsilon_H &= \frac{K_{IJ}X^{IJ}}{M_p^2H^2} \nonumber \\ &\simeq \frac{1}{2}\left(\frac{\dot{\phi}}{M_p H}\right)^2 + \frac{1}{2}\left(\frac{\dot{\chi}}{M_p H}\right)^2, \label{eq:eps2} \end{align} where we have neglected the higher-order terms in $1/\Lambda_n$ and $1/\Lambda_d$. Again, using the Friedmann equation, $3M_p^2 H^2 = \rho \simeq V$, and the solution (\ref{eq:hbgsol}), we can further rewrite Eq. (\ref{eq:eps2}) as \begin{align} \epsilon_H &\simeq \epsilon_V + 3f_{\chi}\sin^2(mt), \label{eq:eps3} \end{align} where we have neglected the higher-order terms in $\epsilon_V$ and $f_{\chi}$. Hence, the conditions (\ref{eq:slowroll}), (\ref{eq:sub}), and (\ref{eq:irrelevant}) are sufficient for realizing inflation. Note that another slow-variation parameter $\eta_H \equiv \dot{\epsilon}_H/H\epsilon_H$ is not so small during the oscillations due to the second term. In order that the slow-roll solution (\ref{eq:ibgsol}) be consistent, the first term should be smaller than the others in Eq. (\ref{eq:ibgeom}); $\dot{\pi}_{\phi}/3H\pi_{\phi} \ll 1$. Substituting the solution (\ref{eq:ibgsol}), we obtain \begin{align} \frac{\dot{\pi}_{\phi}}{3H\pi_{\phi}} &\simeq \frac{V''}{V'}\frac{\dot{\phi}}{H} - \frac{\dot{H}}{H^2} \nonumber \\ &\simeq -\eta_V +\epsilon_H, \label{eq:iconsistency} \end{align} which shows that the solution (\ref{eq:ibgsol}) can be used consistently under the conditions (\ref{eq:slowroll}), (\ref{eq:sub}), and (\ref{eq:irrelevant}).
It is noteworthy that we cannot replace $\dot{\pi}_{\phi}$ by $\ddot{\phi}$ in deriving Eq. (\ref{eq:iconsistency}). The differentiation of $\chi$ in Eq. (\ref{eq:icm}) induces a large factor $m$, hence $\ddot{\phi}/3H\dot{\phi}$ can be large, \begin{align} \frac{\ddot{\phi}}{3H\dot{\phi}} &\sim O\left(\frac{m}{H}\frac{\chi}{\Lambda_n}\right) + O\left(\frac{m}{H}\frac{\dot{\chi}^2}{\Lambda_d^4}\right). \end{align} Finally, $\dot{\pi}_{\chi}$ can be replaced by $\ddot{\chi}$ in solving Eq. (\ref{eq:hbgeom}) because both $\dot{\phi}$ and $\dot{\chi}$ induce the factor $m$. Hence the couplings $K_d$ do not affect the solution (\ref{eq:hbgsol}). On the other hand, the last term in Eq. (\ref{eq:hbgeom}) induces an approximately constant term in $\chi$ as \begin{align}\label{eq:chishift} \chi(t) \simeq -\frac{\lambda_n \dot{\phi}^2}{2m^2\Lambda_n} + \left( \chi_0+\frac{\lambda_n \dot{\phi}^2}{2m^2\Lambda_n} \right) e^{-\Gamma t}\cos(mt). \end{align} Though the constant term does not spoil the resonance, it should satisfy the condition (\ref{eq:irrelevant}). This condition can be satisfied if $m$ is sufficiently large, \begin{equation}\label{eq:nccond} \frac{m}{H} \gg \sqrt{\lambda_n\epsilon_{H,\phi}} \left(\frac{\Lambda_n}{M_p}\right)^{-1}, \end{equation} where $\epsilon_{H,\phi}$ is the contribution of the inflaton to $\epsilon_H$. The constant term also contributes to the potential energy. However, the induced potential energy is always subdominant if the condition (\ref{eq:nccond}) is satisfied. In the case that the constant term is comparable to the initial amplitude $\chi_0$, the amplitude of the oscillations is given by Eq. (\ref{eq:chishift}) rather than by $\chi_0$. In summary, the oscillations of the heavy scalar field $\chi$ do not spoil the slow-roll inflation provided that the conditions (\ref{eq:slowroll}), (\ref{eq:sub}), and (\ref{eq:irrelevant}) are satisfied.
In the next section, we investigate the evolution of the inflaton fluctuations in this background and show that a parametric amplification of these fluctuations takes place through the resonance with the oscillations of $\chi$. \section{Parametric resonance with the oscillations of the heavy scalar field}\label{s:amp} In the previous section, we saw that the oscillations of the heavy scalar field do not significantly affect the background evolution provided that the conditions (\ref{eq:slowroll}), (\ref{eq:sub}), and (\ref{eq:irrelevant}) are satisfied. In this section, we show that the fluctuations in the inflaton field can be enhanced through the parametric resonance with the oscillations of $\chi$ even in this case. \subsection{Evolution equation for the inflaton fluctuations} First, we derive an evolution equation for the inflaton fluctuations. The heavy scalar field $\chi$ oscillates with frequency $m \gg H$, so the resonance occurs on scales much smaller than the horizon scale. Hence, we can neglect contributions from the metric fluctuations during the resonance because they are much smaller than those in the scalar fields on subhorizon scales, as explicitly shown in Appendix \ref{a:assumptions}. Furthermore, the fluctuations in the heavy scalar field can be neglected because they do not contribute to the resonance. To eliminate the gauge degrees of freedom in the metric fluctuations, we employ the flat gauge, where the spatial metric becomes $a^2\delta_{ij}$.
Neglecting the contributions from the fluctuations in the metric and the heavy scalar field, the second-order action in the inflaton fluctuations, $\varphi$, can be written as, \begin{align} S_2 &\simeq \int \mathrm{d}t\mathrm{d}^3x~ \frac{a^3}{2}\left[ (2K_{1K,L1}X^{KL}+K_{11})\dot{\varphi}^2 - K_{11}(\nabla \varphi)^2/a^2 - K_{1,1}\varphi^2 \right] \\ &\simeq \int \mathrm{d}t\mathrm{d}^3x~ \frac{z_{\phi}^2}{2}\left[ \dot{\varphi}^2 - c_s^2(\nabla \varphi)^2/a^2 \right], \label{eq:2action} \end{align} where \begin{align} z_{\phi}^2 &\equiv a^3(2K_{1K,L1}X^{KL}+K_{11})\\ &= a^3\left[1 + \lambda_n\frac{\chi}{\Lambda_n} + \left(\lambda_{d1}+2\lambda_{d2}\right)\frac{\dot{\chi}^2}{2\Lambda_d^4}\right], \\ c_s^2 &\equiv \frac{K_{11}}{2K_{1K,L1}X^{KL}+K_{11}} \\ &\simeq 1 + \left(\lambda_{d1}-2\lambda_{d2}\right)\frac{\dot{\chi}^2}{2\Lambda_d^4} + O\left(\frac{\dot{\chi}^4}{\Lambda_d^8}\right). \end{align} Here, we have neglected the potential term, $K_{1,1}\varphi^2 \simeq -3\eta_V H^2 \varphi^2$, which is much smaller than the term $(\nabla \varphi)^2/a^2$ during the resonance. Note that non-derivative couplings, if any, can be similarly neglected unless the slow-roll conditions (\ref{eq:slowroll}) are violated. The action (\ref{eq:2action}) leads to the following evolution equation in Fourier space for the inflaton fluctuations, \begin{equation}\label{eq:emfl} \ddot{v}_{k} + \left[c_s^2\left(\frac{k}{a}\right)^2 - \frac{\ddot{z}_{\phi}}{z_{\phi}}\right]v_{k}=0, \end{equation} where $v \equiv z_{\phi}\varphi$. 
Neglecting the higher-order terms in $\chi/\Lambda_n$ and $\dot{\chi}/\Lambda_d^2$, the quantities $c_s^2$ and $\ddot{z}_{\phi}/z_{\phi}$ are estimated to be \begin{align} c_s^2 &\simeq 1 + (\lambda_{d1}-2\lambda_{d2})\frac{m^2\hat{\chi}_0^2}{2\Lambda_d^4}\sin^2(mt), \label{eq:sonic}\\ \begin{split} \frac{\ddot{z}_{\phi}}{z_{\phi}} &\simeq \frac{3}{4}(3-2\epsilon_H)H^2 + 3mH\left[\lambda_n\frac{\hat{\chi}_0}{\Lambda_n}\sin(mt) + (\lambda_{d1}+2\lambda_{d2})\frac{m^2\hat{\chi}_0^2}{2\Lambda_d^4}\sin(2mt)\right] \\ & \hspace*{.25\linewidth} + m^2\left[\lambda_n\frac{\hat{\chi}_0}{2\Lambda_n}\cos(mt) + (\lambda_{d1}+2\lambda_{d2})\frac{m^2\hat{\chi}_0^2}{2\Lambda_d^4}\cos(2mt)\right] \nonumber \end{split}\\ &\simeq m^2\left[\lambda_n\frac{\hat{\chi}_0}{2\Lambda_n}\cos(mt) + (\lambda_{d1}+2\lambda_{d2})\frac{m^2\hat{\chi}_0^2}{2\Lambda_d^4}\cos(2mt) + O\left(\frac{H}{m}\right)\right], \label{eq:efmass} \end{align} where $\hat{\chi}_0 \equiv \chi_0e^{-\Gamma t}$. Substituting Eq. (\ref{eq:sonic}) and Eq. (\ref{eq:efmass}) into Eq. (\ref{eq:emfl}), the evolution equation can be rewritten in the form, \begin{equation}\label{eq:hill} \ddot{v}_{k} + m^2\left[\left(\frac{k}{am}\right)^2 - \frac{q_n}{2}\cos(mt) - 2q_d\cos(2mt) \right]v_{k}=0, \end{equation} where \begin{align} q_n &\equiv \lambda_n\frac{\hat{\chi}_0}{\Lambda_n}, \label{eq:qn} \\ q_d &\equiv -(\lambda_{d1}-2\lambda_{d2})\frac{m^2\hat{\chi}_0^2}{8\Lambda_d^4}\left(\frac{k}{am}\right)^2 + (\lambda_{d1}+2\lambda_{d2})\frac{m^2\hat{\chi}_0^2}{4\Lambda_d^4}. \label{eq:qd} \end{align} Hereafter, we consider the limit where either $q_n$ or $q_d$ is much larger than the other for simplicity. 
In this case, the evolution equation (\ref{eq:hill}) can be written in the form of the Mathieu equation, \begin{equation}\label{eq:mathieueq} \dif{2}{{v}_{k}}{z} + \left[A_k - 2q\cos(2z) \right]v_{k}=0, \end{equation} where $(z,A_k,q) \equiv (mt/2,(2k/am)^2,q_n)$ for the $K_n$-dominated case and $(z,A_k,q) \equiv (mt,(k/am)^2,q_d)$ for the $K_d$-dominated case. Here, the value of $q$ is smaller than unity because $\chi/\Lambda_n \ll 1$ and $m\chi/\Lambda_d^2 \sim \dot{\chi}/\Lambda_{d}^2 \ll 1$ as assumed in the previous section, so adiabaticity is only mildly violated. Hence the resonance is narrow: it occurs in a narrow instability band, $|A_k-1|<q$. The parametric resonance occurs when the modes are redshifted into this instability band; \begin{align} \left(1-\frac{q_n}{2}\right)\frac{m}{2} < \frac{k}{a} < \left(1+\frac{q_n}{2}\right)\frac{m}{2}, \end{align} for the $K_n$-dominated case and \begin{align} \left(1-\frac{\tilde{q}_d}{2}\right)m < \frac{k}{a} < \left(1+\frac{\tilde{q}_d}{2}\right)m, \end{align} for the $K_d$-dominated case, where \begin{equation}\label{eq:qdtilde} \tilde{q}_d \equiv (\lambda_{d1}+6\lambda_{d2})\frac{m^2\hat{\chi}_0^2}{8\Lambda_d^4}. \end{equation} In both cases, the resonance occurs in a deep subhorizon regime since the mass scale $m$ is much larger than the Hubble scale, $m \gg H$. Though we consider only the case in which one coupling dominates the other, the resonance is expected to be efficient even when both of the couplings are non-negligible since the values of their oscillation frequencies are rationally related \cite{Braden:2010wd}.
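The narrow-resonance behaviour of Eq. (\ref{eq:mathieueq}) is easy to see numerically. The sketch below (illustrative value $q=0.1$; the scale factor is frozen, so a mode sits at fixed $A_k$ instead of redshifting through the band) integrates the Mathieu equation for one mode inside the band and one outside:

```python
import math

def mathieu_amplitude(A, q, z_max=100.0, dz=2e-3):
    """Integrate v'' + [A - 2 q cos(2z)] v = 0 by RK4 from v(0)=1, v'(0)=0
    and return the largest |v| reached up to z_max."""
    def acc(z, v):
        return -(A - 2.0 * q * math.cos(2.0 * z)) * v
    v, w, z = 1.0, 0.0, 0.0
    vmax = 1.0
    for _ in range(int(z_max / dz)):
        k1v, k1w = w, acc(z, v)
        k2v, k2w = w + 0.5 * dz * k1w, acc(z + 0.5 * dz, v + 0.5 * dz * k1v)
        k3v, k3w = w + 0.5 * dz * k2w, acc(z + 0.5 * dz, v + 0.5 * dz * k2v)
        k4v, k4w = w + dz * k3w, acc(z + dz, v + dz * k3v)
        v += dz * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
        w += dz * (k1w + 2.0 * k2w + 2.0 * k3w + k4w) / 6.0
        z += dz
        vmax = max(vmax, abs(v))
    return vmax

# A mode inside the band (A = 1) grows exponentially, with Floquet exponent
# ~ q/2 per unit z at the band centre; a mode outside stays bounded.
grown = mathieu_amplitude(1.0, 0.1)
bounded = mathieu_amplitude(2.0, 0.1)
```

Over $z_{\rm max}=100$ the in-band mode grows by roughly $e^{q z_{\rm max}/2} \sim e^{5}$ (up to the projection of the initial data onto the growing solution), while the out-of-band mode merely oscillates.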
After the oscillations damp out, we can use the standard evolution equation for the inflaton field in a single-field inflation model, which can be written as \begin{equation}\label{eq:stemlf} \ddot{v}_{k} + \left[\left(\frac{k}{a}\right)^2 - \hat{\mathcal{M}}H^2 \right]v_{k}=0, \end{equation} where \begin{align} \hat{\mathcal{M}} &\equiv \frac{3}{4}(3-2\epsilon_H) + \frac{3}{2}\eta_H - \frac{1}{2}\epsilon_H\eta_H + \frac{1}{4}\eta_H^2 + \frac{\dot{\eta}_H}{2H} \\ &= \frac{3}{4}(3-2\epsilon_H) + \frac{1}{a^3 M_p^2 H^2}\dif{}{}{t}\left(\frac{a^3\pi_{\phi}^2}{H}\right) - \frac{V''}{H^2}, \end{align} in our notation. Here, we have added the contributions from the metric fluctuations and the potential term, which induce the global tilt of the spectrum. \begin{figure}[h] \centering \includegraphics[width=.6\linewidth]{evolution.pdf} \caption{Time evolution of the inflaton fluctuations. The fluctuations on observable scales can be enhanced through the parametric resonance when they are deep in the horizon. The modes with comoving scales represented by the thick line are enhanced by the resonance, while those represented by the thin lines are not because they never cross the resonance band during the oscillations of the heavy scalar field. Hence, the resonance induces a peak in the power spectrum. Here, the variables with subscript $0$ indicate those evaluated at the onset of the oscillations.} \label{fig:te} \end{figure} \subsection{Parametric amplification of the curvature perturbations} We estimate the amplification of the power spectrum for the comoving curvature perturbations relative to the standard one. The comoving curvature perturbations, $\zeta$, are proportional to the inflaton fluctuations in the flat gauge, $\varphi$ at linear order as $\zeta=H\varphi/\dot{\phi}$ after the oscillations damp out. 
Hence, the ratio of the modulated power spectrum to the unmodulated one, $\mathcal{A}$, can be written in terms of the mode function for the inflaton fluctuations as \begin{equation} \mathcal{A} = \lim_{t \to \infty} \left|\frac{\widetilde{\varphi}_k}{\varphi_k}\right|^2=\lim_{t \to \infty} \left|\frac{\widetilde{v}_k}{v_k}\right|^2, \end{equation} where we have denoted the modulated quantity by a symbol with a tilde. We present the results for the $K_d$-dominated case here. The analysis for the $K_n$-dominated case can be made in a similar way. The spectral shape of the modulated power spectrum can be roughly understood from Fig. \ref{fig:te}. The spectrum has a peak around the modes whose physical wavenumbers were $\sim m$ at the onset of the oscillations, and the peak widths for smaller and larger wavenumbers, $\Delta_S$ and $\Delta_L$, are determined, respectively, by the width of the resonance band $q_0$ and the duration of the oscillations $H/\Gamma$. From Fig. \ref{fig:te}, we can see that the modes whose wavenumbers are around $k_p/a_0 \equiv m(1+q_0/2)$ are most amplified while the modes whose wavenumbers are less than $k_S/a_0 \equiv m(1-q_0/2)$ are hardly amplified since they were already outside of the resonance band at the onset of the oscillations. Hence, the peak width for smaller wavenumbers is roughly given by, \begin{align} \Delta_S \simeq \left| \frac{k_p}{a_0 m} - \frac{k_S}{a_0 m} \right| \sim q_0. \end{align} On the other hand, the modes whose wavenumbers are larger than $k_L/a_0 \equiv e^{H/\Gamma}m$ are also hardly amplified because they cross the resonance band after the oscillations have damped out. Hence, the peak width for larger wavenumbers is roughly given by, \begin{align} \Delta_L \simeq \left| \frac{k_p}{a_0 m} - \frac{k_L}{a_0 m} \right| \sim \frac{H}{\Gamma}.
\end{align} Moreover, the peak amplitude can be roughly estimated by using the Floquet exponent \cite{Allahverdi:2010xz}, $\mu=q/2$, as, \begin{align} v_{k} \propto \exp\left(\int \mu {\rm d}z \right), \end{align} where the exponent is approximately given by, \begin{align} \int \mu {\rm d}z &\simeq \begin{cases} {\displaystyle \frac{mq_0}{4\Gamma}} & \text{for} \quad q_0 \gg H/\Gamma, \\ {\displaystyle \frac{mq_0^2}{4H}} & \text{for} \quad q_0 \ll H/\Gamma, \end{cases}\\ &\sim {\displaystyle \frac{mq_0}{4H}\min\left(q_0, \frac{H}{\Gamma}\right)}. \label{eq:amp} \end{align} Here, the integration range is determined by the condition $|A_k-1|<q$. In Fig. \ref{fig:primordial}, we have shown the amplification of the power spectrum, $\mathcal{A}$, estimated numerically assuming that the inflaton fluctuations were in the Bunch--Davies vacuum before the oscillations start. Each panel shows the spectra for different values of a parameter while the others are fixed. The resultant spectra exhibit the expected dependence on the parameters and rapidly oscillate with frequencies of the order of $m/H$. In the $K_n$-dominated case, the spectrum behaves similarly. \begin{figure}[phtb] \centering \begin{minipage}{.45\linewidth} \includegraphics[width=.95\linewidth]{mass.jpg} \end{minipage} \begin{minipage}{.45\linewidth} \includegraphics[width=.95\linewidth]{gamma.jpg} \end{minipage} \begin{minipage}{.45\linewidth} \includegraphics[width=.95\linewidth]{q.jpg} \end{minipage} \caption{The amplification of the power spectrum for different values of the parameters, $m$ (top left panel), $\Gamma$ (top right panel), and $q_0$ (bottom panel) for the $K_d$-dominated case. The spectra behave similarly in the $K_n$-dominated case.} \label{fig:primordial} \end{figure} Since the width and the amplitude depend on the model parameters in different ways, we can extract information on the heavy scalar field from the features in the primordial power spectrum.
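Equation (\ref{eq:amp}) gives a quick order-of-magnitude estimate of the peak amplification. As a sketch (the sample values below are the ones used later for the CMB comparison, and the relation $\mathcal{A}_p \sim e^{2\int\mu\,{\rm d}z}$ simply reflects that the power spectrum scales as $|v_k|^2$):

```python
import math

def accumulated_exponent(m_over_H, q0, Gamma_over_H):
    """int mu dz ~ (m q0 / 4H) min(q0, H/Gamma), the rough estimate of Eq. (amp)."""
    return 0.25 * m_over_H * q0 * min(q0, 1.0 / Gamma_over_H)

def peak_amplification(m_over_H, q0, Gamma_over_H):
    """Peak power-spectrum enhancement A_p ~ exp(2 int mu dz)."""
    return math.exp(2.0 * accumulated_exponent(m_over_H, q0, Gamma_over_H))

# Sample point: m/H = 1e4, Gamma/H = 5e2, q0 = 0.1 gives int mu dz = 0.5,
# i.e. an order-unity feature rather than an explosive one.
A_p = peak_amplification(1.0e4, 0.1, 5.0e2)
```

The two branches of Eq. (\ref{eq:amp}) correspond to whether the mode exits the band by redshifting (width $q_0$) or because the oscillations decay first (duration $H/\Gamma$).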
For example, we can read off values of the model parameters in the $K_d$-dominated case from the data as, \begin{subequations}\label{eq:modelparam} \begin{align} \Gamma &\sim 10^{13}~{\rm GeV}\left(\frac{H}{10^{12}~{\rm GeV}}\right)\left(\frac{\Delta_L}{0.1}\right)^{-1}, \\ m &\sim 10^{14}~{\rm GeV}\left(\frac{H}{10^{12}~{\rm GeV}}\right)\left(\frac{\ln \mathcal{A}_p}{1.0}\right)\left(\frac{\Delta_S}{0.1}\right)^{-1}\left(\frac{\Delta_{\rm min}}{0.1}\right)^{-1}, \\ \Lambda_d &\sim 10^{15}~{\rm GeV}\left(\frac{H}{10^{12}~{\rm GeV}}\right)^{1/2}\left(\frac{f_{\chi}}{0.1}\right)^{1/4}\left(\frac{\Delta_S}{0.1}\right)^{-1/4}, \end{align} \end{subequations} assuming that $\lambda_{d1}$ and $\lambda_{d2}$ are ${\cal O}(1)$. Here, we have made rough estimates of the widths as $\Delta_S \sim q_0$ and $\Delta_L \sim H/\Gamma$ and the amplification at the peak scale ${\cal A}_p$ has been estimated as Eq. (\ref{eq:amp}). We have also introduced $\Delta_{\rm min} \equiv \min(\Delta_S, \Delta_L)$ for brevity. \subsection{Comparison with observational CMB data} Having clarified the dependence of the modulated power spectrum on our model parameters, we can in principle fix or constrain their values by confronting them with observational data. In practice, however, it is difficult to find best-fit parameters for this kind of highly oscillatory spectrum because there are many local minima in the likelihood surface, so the Markov chain Monte Carlo chains do not converge \cite{Flauger:2009ab, Meerburg:2011gd}. Moreover, the parameter search is time-consuming because we do not have the exact analytic expression of the modulated power spectrum. Therefore, instead of searching for the best-fit parameters, we just show one possible set of parameter values that provides a better fit to the observational CMB data than the power-law primordial power spectrum. In Fig.
\ref{fig:cmb}, we have shown the angular power spectrum of the CMB for the modulated power spectrum with a feature at $k \simeq 0.003~{\rm Mpc}^{-1}$, which corresponds to multipole $l \simeq 40$.
As possible values of the parameters, we have used $m/H=10^4$, $\Gamma/H=5\times10^2$, and $q_0=0.1$, while the other cosmological parameters have been kept fixed to the WMAP 7 best-fit values \cite{Komatsu:2010fb}.
Using these values of the parameters, we have found that the $\chi^2$-value improves by $2.5$ in comparison to the simple power-law spectrum.
Hence, the modulated spectrum can provide a better fit of the data, though the improvement is not enough to confirm that the CMB anomalies originate from the resonant effect of the heavy scalar field, since we have introduced three additional parameters.
On the other hand, for the feature at $k \simeq 0.009~{\rm Mpc}^{-1}$, it is difficult to obtain a significant improvement.
The angular power spectrum of the CMB is given by a convolution of the primordial power spectrum with a radiative transfer function, which is written in terms of the spherical Bessel function.
Since the radiative transfer function for multipole $l$ has an oscillatory tail for larger wavenumbers, $k\cdot(1.4\times10^4~{\rm Mpc}) \agt l$, the peak induced in the angular power spectrum has a relatively large width even if the primordial power spectrum has a sharp peak.
Though the observed sharp and high peak could be realized if the modulated power spectrum had an oscillatory component coherent with that in the radiative transfer function, this is not the case for the spectra obtained here.
Though the evidence found here is not statistically significant, it gives us some insight into the possibility of finding signatures of scalar fields much heavier than the Hubble scale.
If such signatures were found, they would provide a hint of the physics behind inflation.
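The broadening effect of the oscillatory Bessel tail described above can be illustrated with a toy Sachs--Wolfe-like projection, $C_l \propto \int \mathrm{d}\ln k\, P(k)\, j_l^2(kr)$, shown below for $l=2$. The peaked spectrum and all numerical values are illustrative assumptions, not the actual modulated spectrum or transfer function:

```python
import math

def j2(x):
    """Spherical Bessel function j_2 in closed form."""
    return (3.0 / x**3 - 1.0 / x) * math.sin(x) - 3.0 / x**2 * math.cos(x)

def C_l2(P, r, kmin=1e-4, kmax=0.05, n=4000):
    """Toy projection C_l ~ int dlnk P(k) j_2(k r)^2 (log-spaced midpoint rule)."""
    dlnk = math.log(kmax / kmin) / n
    total = 0.0
    for i in range(n):
        k = kmin * math.exp((i + 0.5) * dlnk)
        total += P(k) * j2(k * r) ** 2 * dlnk
    return total

r_ls = 1.4e4  # comoving distance to last scattering in Mpc (order of magnitude)
flat = C_l2(lambda k: 1.0, r_ls)
# A sharp bump at k = 0.003 / Mpc still contributes through the long
# oscillatory tail of j_l(k r), which is what smears a sharp primordial
# peak into a broad feature in l-space.
peaked = C_l2(lambda k: 1.0 + 5.0 * math.exp(-((k - 3e-3) / 1e-4) ** 2), r_ls)
```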
\begin{figure}[h] \centering \includegraphics[width=.7\linewidth]{cmb.pdf} \caption{The angular power spectrum of the CMB for the modulated power spectrum.} \label{fig:cmb} \end{figure}
\section{Summary and discussion}\label{s:summary}
In this paper, we have discussed the possibility that a heavy scalar field, whose mass exceeds the Hubble scale, $m \gg H$, could leave non-negligible signatures in the CMB spectrum through parametric resonance between its background oscillations and the inflaton fluctuations.
The resonance could be efficient without spoiling slow-roll inflation if the heavy scalar field couples with the inflaton derivatively; the feature induced by the resonance can be sharp and large even in the case that adiabaticity is only mildly violated, because the resonance coherently accumulates small effects.
In the analysis, we have assumed that the oscillations of the heavy scalar field are instantaneously excited at e-folds $N_p \simeq \ln(m/H)+N_{\ast}$, where $N_{\ast}$ is the number of e-folds from the end of inflation when the peak modes crossed the horizon.
This assumption will be appropriate if inflation begins at the e-folds $N_p$ and the heavy scalar field is initially displaced from its minimum, which is natural in cases where the minima differ before and during inflation, or inflation occurs after a tunneling from a neighboring minimum \cite{Bucher:1994gb,Sasaki:1994yt,Freivogel:2005vv,Yamauchi:2011qq,Sugimura:2011tk}, for example.
As another possibility, the oscillations could be dynamically excited when the heavy scalar field becomes momentarily light/tachyonic or the slow-roll condition is temporarily violated.
In the latter case, effects of the slow-roll violation on the resonance should be taken into account to consider large excitations of the heavy scalar field \cite{Avgoustidis:2012yc}.
We will address this issue using a specific model in another paper.
We have also estimated the goodness-of-fit of our model to the anomalies observed in the CMB spectrum.
The resultant improvement of the fit has not been large enough to confirm that the CMB anomalies originate from the resonance, though a systematic analysis has not been performed owing to technical difficulties.
To test our model further against observations, non-Gaussianity in the CMB anisotropies will be helpful.
Since oscillatory components are induced in the interactions, the higher-point correlation functions are also enhanced at specific scales by the resonance \cite{Chen:2008wn, Flauger:2010ja, Chen:2010bka, Behbahani:2011it}.
We will estimate the amplitude and the shape of this non-Gaussianity in an upcoming paper.
\section*{Acknowledgement}
The work is supported by a Grant-in-Aid through JSPS (RS, MN, YT) and was partially supported by JSPS Grant-in-Aid for Scientific Research No. 23340058 (JY) and the Grant-in-Aid for Scientific Research on Innovative Areas No. 21111006 (JY).
\section{Introduction} \label{sec:intro}
To simulate the propagation of ocean waves and wave interaction with marine or coastal structures, various modeling approaches can be invoked, depending on the assumptions that can be made.
In the most general case, the surface waves have to be considered as nonlinear and dispersive, and the flow possesses a rotational part with viscous and turbulent effects present, at least in the vicinity of bodies and domain boundaries (\emph{e.g.}\xspace seabed).
Furthermore, in case of wave breaking or in the process of wave interaction with a floating structure, air entrainment can occur, so that a mixed air-water flow has to be considered, with varying fluid properties (\emph{e.g.}\xspace density, viscosity).
Accurate modeling of all these features at the spatial scale of kilometers in practical applications remains, however, challenging.
An extensive literature review on modeling wave-structure interaction (WSI) was recently given by \textcite{davidson_efficient_2020}, where many existing approaches are discussed from, but not limited to, the wave energy converter modeling point of view.
Computational Fluid Dynamics (CFD) approaches, aiming at solving the Navier-Stokes (NS), or more frequently the Reynolds Averaged Navier-Stokes (RANS), equations for two-phase flows are able to capture most of the above mentioned effects and have made impressive progress in recent years, both in terms of accuracy and efficiency \parencite[see, \emph{e.g.}\xspace][]{jacobsenFuhrmanFredsoe2012,kim2016numerical,oggiano2017reproduction}.
Those CFD approaches are particularly well suited for simulating WSI processes, in particular viscous, turbulent and rotational effects, and possible air entrainment.
Note that, in the context of WSI, these effects are mostly contained in the close vicinity of the body.
At a larger scale, when considering wave propagation over a domain covering several dozen wavelengths, it is often possible to neglect viscous effects and to consider the flow as irrotational, which permits the use of a potential flow approach.
Many models have been developed for decades under this assumption, leading to the so-called Fully Nonlinear Potential Flow (FNPF) theory if full nonlinearity of the free surface boundary conditions (BC) is kept \parencite[\emph{e.g.}\xspace][]{tavassoli2001interactions}, partially or weakly nonlinear models \parencite{pinkster1980low,philippe:hal-01198807}, and simplified linear potential flow models \parencite[\emph{e.g.}\xspace][]{lee1995wamit,ansys2013aqwa,babarit2015theoretical}.
These models, in particular the FNPF ones, have demonstrated both higher accuracy and lower computational resource requirements in comparison with CFD codes to simulate the large-scale propagation phase, because of their intrinsically lower numerical dissipation rate.
As a consequence, an intuitive idea is to apply each model over a domain and at a scale where it best performs, namely employing the CFD approach where needed in the vicinity of the body, and the FNPF approach far from the body to benefit from a more accurate and cheaper approach to simulate long distance wave propagation.
This type of method is referred to as a ``coupled'' or ``hybrid'' method.
The idea of using different models to best capture different physical effects is not new: seminal studies that tried to include the boundary layer viscous effects, within an otherwise inviscid simulation, can be seen as hybrid models.
For example, the very first boundary layer theory \parencite{prandtl_uber_1904} falls into that category, as well as the further work by \textcite{lighthill_displacement_1958}.
This idea was introduced in the context of wave flows by \textcite{dommermuth_numerical_1997}, who applied a decomposition of the flow into irrotational and rotational parts to solve the contact line problem in bow waves.
Over the last two decades, several authors have successfully developed coupling schemes that use each model in the area where it is most adequate.
Those coupling schemes can be separated into two main categories: Domain Decomposition (DD) methods and Functional Decomposition (FD) methods.
The first one (DD) uses two different mathematical models and solution methods applied to distinct domains.
In most cases, the domains do not overlap, and information is exchanged between the models only at common boundaries.
A variant of this approach is to introduce overlapping zones at the interface between the two domains, with a progressive matching of BCs over the extent of this zone.
A comprehensive review on DD model coupling in the context of WSI was recently given by \textcite{di_paolo_wave_2021}.
This approach is further discussed in \cref{sec:domainCoupling:literature}.
The second type of decomposition (FD) leads to a modification of the equations themselves.
A flow that is a solution of a simplified set of equations, denoted ``model A'', already verifies a significant portion of the more generic equations, denoted ``model B'' (in our case, models A and B will be the Euler and NS equations respectively).
Thus, if model A is solved first over the whole domain of interest, solving model B from scratch can be seen as largely redundant.
Modifying the equations of model B to take into account the part already computed by model A results in a modified model B$^*$ whose solution is hoped to be easier and significantly cheaper in terms of computational cost.
In this framework, the domain where model B$^*$ is solved needs to be a sub-domain of the domain where model A is solved, and we state that $\tI{u}_B = \tI{u}_A + \tI{u}_{B^*}$, where $\tI{u}_B$ is the total velocity solution of model B, and $\tI{u}_A$, $\tI{u}_{B^*}$ the velocities computed by models A and B$^*$ respectively (a similar decomposition is applied to other variables, \emph{e.g.}\xspace pressure).
This approach is further discussed in \cref{sec:velocityCoupling:literature}.
Another distinction can be made between one- and two-way coupling methods (also referred to as weak and strong couplings).
Within a one-way framework, the first model is used to solve a given wave field.
The second model imports the (boundary or volume) values in order to solve the more complex set of equations in a more restricted zone.
No feedback from the second model to the first one occurs, which implies that the first one is independent and can be run \emph{a priori}.
In a two-way coupling method, both models receive information from the other (at their common BCs in a DD approach), each one having an effect on the other.
Thus, they have to evolve simultaneously and ``wait'' for each other over the duration of the simulation.
In the present study, both DD and FD approaches are tackled and compared to simulate WSI for ocean engineering applications.
However, as a first step, only one-way coupling is considered.
Furthermore, the coupling methodologies are evaluated here with a fixed submerged body.
The remainder of this article is organized as follows: in \cref{sec:review}, a literature review on DD and FD methods is proposed.
The potential and viscous models used are briefly presented in \cref{sec:models}.
The DD and FD coupling methods developed and implemented during this work are presented in \cref{sec:domainCoupling} and \cref{sec:velocityDecomposition} respectively.
In \cref{sec:resultsModelsComp} we report and discuss a series of tests and sensitivity studies to assess and compare the requirements and performances of the two coupling methods.
Then, in \cref{sec:resultsVsLiterature}, we present the results of four modeling approaches (namely a FNPF one, a fully viscous RANS one, and the DD and FD coupling schemes) applied to simulate the interaction of waves with a submerged horizontal cylinder of rectangular cross-section.
The results are compared with experimental data from several sources covering a range of incident wave conditions.
Finally, conclusions are summarized in \cref{sec:conclusion}.
\section{Literature review on coupling methods} \label{sec:review}
A bibliographic review of DD and FD coupling approaches is given in the following sub-sections.
As these approaches have been extensively developed over the last 25 years, only some of the notable works are listed below.
This list of references is not exhaustive due to the variety of possible approaches and variants.
\subsection{Overview of DD coupling methods} \label{sec:domainCoupling:literature}
DD methods were first applied in the aerodynamics field \parencite{lock_viscous-inviscid_1987}, and then introduced in the context of ship sea-keeping studies more than two decades ago by \textcite{campana1994domain,campana1995viscous}.
An early attempt to use a DD method in the context of a moored ship was presented in \textcite{bingham_hybrid_2000}.
\textcite{quemere_new_2001} suggested a coupling approach for highly different block discretizations and applied it to a RANS/LES (Large Eddy Simulation) coupling in \textcite{quemere_zonal_2002}.
As summarized in the review of \textcite{di_paolo_wave_2021} (see in particular their Table 1 which offers a synthetic view of a large set of approaches), many types of models can be interfaced.
For example, wave-theory-based models can be used in conjunction with potential or RANS models, see \emph{e.g.}\xspace \textcite{wei_cost-effective_2017} or \textcite{christensen_transfer_2009} for applications of DD to the coupling of a Boussinesq wave model and a RANS solver, while \textcite{kassiotis_coupling_2011} used this approach to couple a Boussinesq wave model with a Smoothed Particle Hydrodynamics (SPH) solver.
Another widely used combination is the matching of a potential flow solver with NS or RANS equations.
The potential equations can for example be solved with a High-Order Spectral (HOS) approach \parencite{choi_generation_2018} or with a Boundary Element Method (BEM) \parencite{colicchio_bem-level_2006}.
Recently, \textcite{HPC:hanssen2019non,siddiqui_validation_2018} coupled NS equations with a potential solver based on the Harmonic Polynomial Cell (HPC) method.
\textcite{siddiqui_validation_2018} studied the wave interaction and hydrodynamics of a damaged ship, on which they later published two experimental studies \parencite{siddiqui_experimental_2019,siddiqui_experimental_2020}.
In \textcite{sriram_hybrid_2014}, the FNPF model, solved with a BEM, is strongly coupled with a NS solver based on a Finite Element Method (FEM), and applied to a wave breaking problem.
This numerical model is later applied in \textcite{kumar_hybrid_2020} to the estimation of long wave run-up.
A solver denoted ``qaleFOAM'' was also developed, coupling a FNPF solver using a Quasi Lagrangian Eulerian FEM with OpenFOAM\textregistered{}\xspace.
The method is presented and applied in \textcite{li_zonal_2018,li_numerical_2018,yan_numerical_2019,wang_numerical_2020}.
Another innovative DD method, presented in \textcite{kristiansen_gap_2012,kristiansen_validation_2013}, employs a potential viscous model as the external model.
The coupling is then made with a NS-based model and both are solved with the Finite Volume Method (FVM).
Once again, more literature references can be found in \textcite[section 4]{davidson_efficient_2020} and \textcite{di_paolo_wave_2021}.
\subsection{Overview of FD coupling methods} \label{sec:velocityCoupling:literature}
The acoustic and aerodynamic fields of research were the first to consider this type of variable or field decomposition \parencite{morino_helmholtz_1986,morino_toward_1994,morino_new_1999,hafez2006numerical,hafez2007improved}, mostly applied to solving the boundary layer problem where the viscosity plays a major role.
An actively studied and applied methodology in the last two decades was proposed by \textcite{SWENSE:ferrant2003potential,SWENSE:gentaz2004numerical,SWENSE:luquet2004viscous}.
The velocity, pressure and surface elevation are decomposed into their incident and diffracted/radiated components.
A potential-theory-based wave model is used to explicitly obtain the incident field (ignoring the presence of bodies in the domain), and then a modified version of the RANS equations, denoted the Spectral Wave Explicit Navier-Stokes Equations (SWENSE), is derived.
Those equations require the explicit values of the incident fields (such as the potential wave elevation, potential velocities): given a grid at a particular time instant, the potential fields and kinematics are calculated.
Afterwards, the newly derived SWENSE equations are solved to yield the diffracted component.
Notice that the potential-viscous coupling scheme is one-way in the sense that no feedback effect on the potential model is at play.
This, however, does not come with any hypothesis but instead with the drawback of having to mesh the fluid domain up to relatively far from the body of interest: the diffracted fields do not vanish with the distance to the object (in 2D inviscid cases).
Nevertheless, the main advantage of the method is to reduce the complexity of the BCs for the SWENSE equations, as no incident wave theory has to be imposed at the SWENSE BC.
Thus, only a damping of the diffracted/radiated wave field has to be set.
The problem often encountered in RANS simulations of propagating incoming waves without significant damping is also avoided.
\textcite{luquet_applications_2007} described the methodology and the numerical implementation in great detail.
They also introduced an irregular incident wave potential simulation, belonging to the family of HOS schemes.
Over the years, numerous test cases were investigated with successful results.
For example, \textcite{luquest2007simulation} focused on a tension leg platform, \textcite{alessandrini_numerical_2008,monroy_rans_2009} applied the method to ship sea-keeping, and \textcite{li_calculation_2017} solved the SWENSE equations on a vertical cylinder piercing the free surface.
A development of the SWENSE equations within the OpenFOAM package employing a Volume of Fluid (VoF) method and thus a two-phase flow was recently done by \textcite{vukcevic_decomposition_2016-1,vukcevic_decomposition_2016,vukcevic_numerical_2016}.
In a similar manner, while this method was originally developed within a single-phase RANS solver, recent developments were conducted on the use of a fixed grid solver, first with the use of a level-set tracking method \parencite{reliquet_simulation_2013,reliquet_simulations_2019} and more recently with the use of the VoF method within a two-phase solver \parencite{li_challenges_2018,li_progress_2018,li_spectral_2021}.
Another approach was developed by \textcite{Beck:kim2005complementary} in which the potential model is solved in the presence of the body.
The complementary velocity is defined as $\tI{u}^*=\tI{u}_t - \tI{u}_p$, where $\tI{u}_t$ is the total velocity field satisfying the NS equations and $\tI{u}_p$ is the potential one satisfying the Laplace equation.
The same decomposition is performed on the pressure field.
Turbulent and rotational effects are thus contained in these newly defined fields.
In the potential solver developed by \textcite{Beck:kim2005complementary}, a Rankine-source-type method is used.
The studied cases mostly lie in the aerodynamic field of research.
With another approach, \textcite{edmund_improved_2011,Beck:edmund2012velocity,Beck:edmund2013velocity} also decompose the velocity field into a potential and a rotational component.
With the objective of allowing for a truncation of the NS domain, the potential part is sought as a solution of the Laplace equation (without invoking the non-viscous assumption).
The RANS sub-problem does not differ from the original one except at its outer boundary, on which the potential velocity is imposed.
Reciprocally, the potential model solves the Laplace equation with a special coupled boundary on the body surface and is thus impacted by the NS sub-problem.
An iterative method is set up to achieve convergence and consistency of the two sub-problems on the coupling boundaries.
An important domain reduction is achieved on a steady flow over a NACA profile airfoil.
This method was extended to steady free surface flows in \textcite{rosemurgy_velocity_2012}.
Recently, work has been conducted to develop the unsteady version of this approach by \textcite{chen_velocity_2015}, further extended to 3D and applied to the Wigley hull by \textcite{chen_velocity_2017}.
\textcite{grilli_modeling_2009,harris_coupling_2010,harris_perturbation_2012} developed a 2D one-way coupling scheme with a LES model in order to study the wave-induced transport of sediments near the sea bottom.
A FNPF wave tank is used to compute the overall wave field while perturbed NS-LES equations are solved in order to capture the fine scale viscous effects in the boundary layer zone.
Later, a coupling between a FNPF model (solved with a BEM) and a NS Lattice-Boltzmann solver was developed based on the same perturbation approach \parencite{janssen_modeling_2010,oreilly_hybrid_2015}.
In the context of vortex induced vibrations, \textcite{li_hybrid_2017} recently adopted a FD approach to couple a simplified NS solver (denoted Quasi-turbulent model) with another more complicated RANS model.
In their work, modifications are made to the equations of the second model, in a similar manner as in \textcite{Beck:kim2005complementary}.
Note that in their method, to the authors' understanding, only the turbulent viscosity needs iterations to match between both domains and the retroactive coupling (from model 2 to model 1) only occurs on this variable.
In a similar manner as \textcite{Beck:kim2005complementary}, \textcite{zhang_multi-model_2018} decompose the total fields into a potential part, solved with a two-phase Euler solver, and complementary fields for which complementary RANS equations are derived.
The coupling is done in a one-way manner but achieves a domain size reduction and stability by making use of transition zones (also called relaxation zones) to match the solution at the interface of the two domains.
\section{Presentation of the CFD and potential models} \label{sec:models}
In this section we briefly present the RANS CFD model (\cref{sec:CFD}) and the FNPF model (\cref{sec:FNPF}) that will be used either as standalone tools to model WSI, or in a coupled mode using the DD and FD coupling methods presented in \cref{sec:domainCoupling} and \cref{sec:velocityDecomposition} respectively.
\subsection{The RANS CFD model} \label{sec:CFD}
The CFD model employed in this study is based on the RANS-VoF solvers available in the OpenFOAM\textregistered{}\xspace toolbox.
The associated mathematical models and numerical techniques are briefly discussed hereafter.
Further details can be found in \emph{e.g.}\xspace \textcite{jasak_error_1996}.
\subsubsection{RANS equations} \label{sec:RANS} For an incompressible Newtonian fluid, considered hereafter, the Cauchy equations can be expressed under the form of the classical Navier-Stokes equations: \begin{subequations} \begin{empheq}[left=\empheqlbrace]{align} & \divs \rho \tI{u} = 0\label{eq:NSmaindivT} \\ & \dt{\rho \tI{u}} + \divs(\rho \tI{u}\otimes\tI{u}) = - \grads p + \divs \tII{T} + \tI{f} \label{eq:NSmainMomT} \end{empheq} \label{eq:NSmainSystemT} \end{subequations} where $\tI{u}=(u_x, u_y, u_z)^T$ is the velocity vector, $\rho$ the fluid density -- which can vary in a two-phase flow --, $p$ is the pressure, $\otimes$ is the outer product and $\tII{T}$ the shear stress tensor: $\tII{T} = \mu \left[ \grads \tI{u} + \left(\grads \tI{u}\right)^T \right] - \dfrac{2}{3} \mu \divs \tI{u} \ \tII{I} $, with $\tII{I}$ the identity matrix and $\mu$ the dynamic viscosity of the fluid. In order to reduce the computational cost, the RANS method is employed. First, the equations are statistically averaged, by splitting the velocity into its mean and fluctuating components $\tI{u} = \mean{\tI{u}} + \tI{u}'$. The Reynolds stress tensor ($\rho \mean{\tI{u}'\otimes \tI{u}'} $) is modeled by assuming a linearity of the latter with respect to the shear stress tensor of the mean flow field, using the so-called Boussinesq assumption: \begin{equation} \begin{aligned} -\rho \mean{\tI{u}'\otimes\tI{u}'} & = \dfrac{\mu_t}{\mu} \mean{\tII{T}} -\dfrac{2}{3}\rho k \tII{I}\\ & = \mu_t \left[ \grads{\mean{\tI{u}}} + \left(\grads\mean{\tI{u}}\right)^T \right] -\dfrac{2}{3}\mu_t \divs \mean{\tI{u}} \ \tII{I} -\dfrac{2}{3}\rho k \tII{I} \end{aligned} \label{eq:boussinesqAssumption} \end{equation} where $k=\dfrac{1}{2} \mean{\tI{u}' \cdot \tI{u}'}$ is the turbulent kinetic energy (TKE), and $\mu_t$ the turbulent viscosity. 
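For reference, the Boussinesq closure of \cref{eq:boussinesqAssumption} can be written as a short routine; the sketch below uses NumPy with arbitrary illustrative values, and is not part of the actual solver implementation:

```python
import numpy as np

def reynolds_stress(grad_u, mu_t, rho, k):
    """Modeled Reynolds stress, Eq. (boussinesqAssumption):
    -rho <u' x u'> = mu_t (grad u + grad u^T)
                     - (2/3) mu_t (div u) I - (2/3) rho k I."""
    I = np.eye(3)
    div_u = np.trace(grad_u)  # div u = tr(grad u)
    return (mu_t * (grad_u + grad_u.T)
            - 2.0 / 3.0 * mu_t * div_u * I
            - 2.0 / 3.0 * rho * k * I)

# Pure shear of an incompressible mean flow: d<u_x>/dz = 1
G = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
tau = reynolds_stress(G, mu_t=2.0, rho=1.0, k=0.5)
# The shear component equals mu_t times the shear rate, and the trace
# of the modeled stress is -2 rho k, consistent with the definition of k.
```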
Eventually, the RANS equations, governing the dynamics of mean flow variables, are given by: \begin{subequations} \begin{empheq}[left=\hspace{-0.3cm}\empheqlbrace]{align} & \divs \rho \mean{\tI{u}} = 0 \label{eq:RANSequations:div}\\
%
& \dt{\rho \mean{\tI{u}} } + \divs{ \rho\mean{\tI{u}}\otimes\mean{\tI{u}} } = - \grads{\mean{p}} +\divs{ \underbrace{ \left( \mu_{\text{eff}}\left[ \grads{\mean{\tI{u}}} +\left(\grads\mean{\tI{u}}\right)^T \right] -\dfrac{2}{3} \mu_{\text{eff}} \divs \mean{\tI{u}} \tII{I} -\dfrac{2}{3}\rho k \tII{I} \right)} _{\tII{T} _{\text{eff}}(\mean{\tI{u}})} } +\tI{\mean{f}} \label{eq:RANSequations:Mom} \end{empheq} \end{subequations}
where the effective viscosity $\mu_{\text{eff}}$ is defined as $\mu_{\text{eff}}=\mu+\mu_t$.
In that framework, two new variables have been incorporated into the problem ($k$ and $\mu_t$).
Thus, closure equations are required for this system; they are referred to as turbulence models.
For example, $k-\epsilon$ or $k-\omega$ models are two-equation closure models while the Spalart-Allmaras model \parencite{spalart_one-equation_1992} is a one-equation closure model.
In this study, the $k-\omega$ SST model of \textcite{menter_two-equation_1994} will be used, more specifically the modified version provided by \textcite{devolder_application_2017}.
Note that in the following, the overbar symbol on mean flow variables will be omitted for simplicity, and because only the mean velocity and mean pressure will be used in the CFD context.
\subsubsection{Volume of Fluid (VoF) method for free surface tracking} \label{sec:VoF}
In order to capture and simulate the evolution of the interface between the two non-miscible fluids, the VoF approach is selected \parencite{hirt1981volume}.
With this method, the equations are represented in a continuous way across the interface: a scalar function that represents the volume fraction of a given fluid in a given computational cell is defined.
Thus, the two fluids are represented and solved as one with varying physical properties.
In OpenFOAM\textregistered{}\xspace, the volume fraction, denoted $\alpha$, is a function of $t$ and $\tI{x}$.
Thus, when this scalar equals unity in a given cell, this cell is full of fluid 1, \emph{i.e.}\xspace water in our case.
If it is zero, this cell is full of fluid 2, \emph{i.e.}\xspace air in our case.
Any value of $\alpha$ in between means that the free surface crosses this cell and gives the relative fraction of water inside this cell.
The transport equation of this volume fraction, taking into account the incompressibility of the fluids, is given by: \begin{equation} \dt{\alpha} + \tI{u} \cdot \grads{\alpha } = 0 \label{eq:advectionAlpha} \end{equation}
\subsubsection{Finite Volume Method (FVM)} \label{sec:FVM}
The discretization of the equations in both time and space is done employing the FVM.
Given a quantity $q$, in every control volume $V_P$, whose centroid is denoted $P$, and at every time $t$ with time step $\Delta t$, the ``semi-discretized'' equation should be satisfied: \begin{equation} \int_t^{t+\Delta t}{} \left[ \underbrace{ \left. \dt{\rho q}\right|_P V_P }_\text{Time derivative} +\underbrace{\sum_{faces}F_f q_f }_\text{Advection} +\underbrace{\sum_{faces} \tI{S} \rho_f \Gamma_f (\grads q)_f }_\text{Diffusion} \right] dt = \int_t^{t+\Delta t}{} \underbrace{(S_0V_p+S_1V_p q_p)}_\text{Source} dt \label{eq:semiDiscretizedQ} \end{equation}
where the source term $S$ has been linearized, $F_f$ is the face flux, $\tI{S}$ the face normal vector, and $\Gamma$ the diffusion parameter.
Quantities are indexed with $\cdot_f$ when evaluated at a face and $\cdot_P$ when evaluated at the cell centroid.
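As a minimal illustration of \cref{eq:advectionAlpha} discretized with an upwind advective flux in the spirit of \cref{eq:semiDiscretizedQ}, a first-order explicit update of the volume fraction on a 1D periodic grid can be sketched as follows. This is illustrative only; the actual interface-capturing schemes used in OpenFOAM\textregistered{}\xspace (flux limiting, interface compression) are considerably more elaborate:

```python
def advect_alpha(alpha, u, dx, dt):
    """One explicit first-order upwind step of
    d(alpha)/dt + u d(alpha)/dx = 0
    for a uniform velocity u > 0 on a periodic 1D grid."""
    c = u * dt / dx              # Courant number, must satisfy c <= 1
    assert 0.0 < c <= 1.0
    n = len(alpha)
    # alpha[i - 1] wraps around at i = 0 (periodic domain)
    return [alpha[i] - c * (alpha[i] - alpha[i - 1]) for i in range(n)]

alpha = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
alpha1 = advect_alpha(alpha, u=1.0, dx=1.0, dt=0.5)
# The total amount of fluid 1 is conserved and alpha stays bounded in [0, 1],
# the two essential properties of a VoF transport step.
```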
\subsubsection{The coupled velocity-pressure system} \label{sec:coupledVelocityPressure}
Applying the discretization presented in \cref{eq:semiDiscretizedQ} to the RANS momentum \cref{eq:RANSequations:Mom}, assuming that the volume force reduces to only the gravitational force, yields the following form: \begin{equation} \tI{u} = \dfrac{1}{\tII{A}} ( \tII{H}(\tI{u}) + \tI{R} \underbrace{- \grads p + \rho \tI{g}} _{-\grads p_d} ) \label{eq:HandAonU} \end{equation}
where $\tII{A}$ and $\tII{H}$ are respectively the diagonal and off-diagonal part of the obtained matrix, $\tI{R}$ is the source term, containing all explicit contributions (\emph{e.g.}\xspace values at the beginning of the time-step), and $p_d = p -\rho g z$ denotes the dynamic pressure.
From \cref{eq:HandAonU}, one can derive the Poisson equation for the dynamic pressure, in the form: \begin{equation} \divs{\left(\dfrac{1}{\tII{A}} \grads p_d \right)} = \divs { \dfrac{\tII{H}(\tI{u}) +\tI{R} } {\tII{A}}} \label{eq:pEqn} \end{equation}
Together, \cref{eq:HandAonU,eq:pEqn} form the velocity-pressure coupled system that is to be solved.
From \cref{eq:pEqn}, $p_d$ is computed from a given velocity field $\tI{u}$; then, from the newly computed pressure, a new velocity field is computed explicitly from \cref{eq:HandAonU}.
Repeating this procedure until convergence allows one to obtain a solution $(\tI{u}, p_d)$ of the coupled system.
Because $\tII{H}(\tI{u})$ is nonlinear with respect to $\tI{u}$, one could update the coefficients of the off-diagonal part of the matrix whenever $\tI{u}$ is updated.
This is classically not done, so the nonlinear coupling is resolved less tightly than the velocity-pressure coupling.
However, encapsulation algorithms such as the \ofkw{PIMPLE} method can be used to counteract this shortcoming.
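The segregated procedure of \cref{eq:HandAonU,eq:pEqn} can be illustrated by a single pressure-correction step on a 1D staggered grid, taking $\tII{A}$ as a constant scalar and lumping $\tII{H}(\tI{u})+\tI{R}$ into a predicted face velocity $u^*$. This is a deliberately simplified sketch of the idea, not the actual FVM algorithm:

```python
import numpy as np

def pressure_correction(u_star, A, dx):
    """Solve div((1/A) grad p) = div(u*) (cf. Eq. (pEqn)) on a 1D
    staggered grid, then correct u = u* - (1/A) grad p on interior
    faces (cf. Eq. (HandAonU)). Boundary fluxes are kept fixed."""
    n = len(u_star) - 1                # number of cells; u* lives on faces
    rhs = np.diff(u_star) / dx         # div(u*), one value per cell
    M = np.zeros((n, n))
    for i in range(n):                 # Neumann Laplacian, built face by face
        if i > 0:
            M[i, i - 1] += 1.0
            M[i, i] -= 1.0
        if i < n - 1:
            M[i, i + 1] += 1.0
            M[i, i] -= 1.0
    M /= A * dx**2
    M[0, :] = 0.0                      # pin p_0 = 0: the pressure is only
    M[0, 0] = 1.0                      # defined up to a constant
    rhs[0] = 0.0
    p = np.linalg.solve(M, rhs)
    u = u_star.astype(float).copy()
    u[1:-1] -= np.diff(p) / (A * dx)   # correct interior faces only
    return u, p

u_star = np.array([0.0, 0.3, -0.2, 0.5, 0.0])   # zero net boundary flux
u, p = pressure_correction(u_star, A=2.0, dx=1.0)
# After the correction, the discrete divergence vanishes in every cell.
```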
\subsection{The fully nonlinear potential flow (FNPF) model} \label{sec:FNPF}
The FNPF model used in this study to generate and propagate the water waves, as well as to compute the diffraction/reflection effects due to the presence of a structure, is based on the Harmonic Polynomial Cell (HPC) method introduced by \textcite{HPC:shao2012towards,HPC:shao2014fully,HPC:shao2014harmonic}.
The current model was developed and implemented in two dimensions (2D) $(x,z)$ by \textcite{robaux2018modeling,robaux_numerical_2020,robaux_development_2021}.
In this approach, a numerical cell is defined by assembling, in a 2D framework, four adjacent quadrilateral cells of a quadrangular mesh.
A cell comprises one center node (denoted with local number 9) and 8 exterior nodes lying on its boundary (with local numbers from 1 to 8).
The potential is then approximated in each cell as a weighted sum of the potentials at those 8 exterior nodes: \begin{equation} \phi(\tI x)= \sum_{i=1}^{8} \left(\sum_{j=1}^{8} C^{-1}_{ji} f_j(\tI{\bar{x}}) \right) \phi_i \label{eq:mainwithcij} \end{equation}
where $C_{ij}$ are the coefficients of an $8\times8$ matrix, only dependent on the geometry, and $f_j(\tI{\bar{x}})$ are the first eight harmonic polynomials ($f_1(x,z) =1$, $f_2(x,z) =x$, $f_3(x,z) =z$, $f_4(x,z) =xz$, \emph{etc}.) applied at the location of interest, relative to the cell center $\tI x_9$ (\emph{i.e.}\xspace $\tI{\bar{x}} = (x,z) =\tI x - \tI x_9$).
As each $f_j$ polynomial is a fundamental solution of the Laplace equation, so is the approximation \cref{eq:mainwithcij}.
For every cell, \cref{eq:mainwithcij} is applied at its center $\tI x_9$, yielding an equation for every node located in the bulk of the water domain, which contributes to the global linear system of unknowns $\phi_i$.
This system is then closed by invoking the different BCs of Neumann and Dirichlet type \parencite{HPC:shao2014harmonic,robaux_development_2021}.
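The key property underlying \cref{eq:mainwithcij} -- every basis function is harmonic -- is easy to verify numerically. The snippet below uses a standard choice of the eight 2D harmonic polynomials; since the text lists only the first four explicitly, the higher-degree ones shown here should be taken as an assumption about the basis, not a quotation of it:

```python
# A standard set of eight 2D harmonic polynomials (first four as in the
# text; the remaining ones are the usual degree-2 to degree-4 harmonics).
harmonics = [
    lambda x, z: 1.0,
    lambda x, z: x,
    lambda x, z: z,
    lambda x, z: x * z,
    lambda x, z: x**2 - z**2,
    lambda x, z: x**3 - 3.0 * x * z**2,
    lambda x, z: 3.0 * x**2 * z - z**3,
    lambda x, z: x**4 - 6.0 * x**2 * z**2 + z**4,
]

def laplacian_fd(f, x, z, h=1e-3):
    """Second-order central finite-difference Laplacian of f at (x, z)."""
    return (f(x + h, z) + f(x - h, z) + f(x, z + h) + f(x, z - h)
            - 4.0 * f(x, z)) / h**2

# Each basis function satisfies the Laplace equation (up to the O(h^2)
# truncation error of the stencil), hence so does any weighted sum such
# as the cell approximation of Eq. (mainwithcij).
residuals = [abs(laplacian_fd(f, 0.3, 0.7)) for f in harmonics]
```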
It is then solved with a preconditioned GMRES iterative solver from \textcite{gmres:saad2003iterative} modified as suggested by \textcite{baker_simple_2009} to yield the values of the potential at all nodes of the grid. In our approach, all FNPF-HPC runs are performed taking into account the presence of the body. The corresponding flow velocities and pressure are thus available in the whole actual fluid domain, even though they neglect both viscous and turbulent effects. The present implementation of the HPC method yields accurate pressure fields, using the Bernoulli equation. The accuracy is obtained thanks to the formulation and solution of a second Laplace problem on the time derivative of the velocity potential $\phi_t=\dt{\phi}$. For further details regarding the method, the reader is referred to \textcite{robaux_development_2021, robaux_numerical_2020}. Potential loads can thus be obtained by integrating the pressure along the body wetted boundaries from the FNPF-HPC simulation prior to running the RANS solver. Results from the FNPF-HPC method will be discussed and compared to the coupling approaches results in \cref{sec:resultsModelsComp,sec:resultsVsLiterature}. In this work, only one-way coupling is studied. Thus, all computations with the FNPF-HPC model are done \emph{a priori}, and the current implementation of the coupling algorithms builds a solution on top of the HPC solution without influencing it. Although we only present hereafter results obtained using this FNPF-HPC model as external solver, the present coupling schemes have been implemented so as to be able to use any external results available in an OpenFOAM\textregistered{}\xspace formatted case. \section{The Domain Decomposition (DD) approach} \label{sec:domainCoupling} The DD method consists in splitting the spatial domain into two separate regions and attributing to each region a given mathematical model (see \cref{sch:coupling}). 
For the problem of interest, as discussed in \cref{sec:intro}, it is well known that FNPF models are well suited to simulate wave propagation, and the FNPF-HPC model presented in \cref{sec:FNPF} is used for that purpose. On the other hand, as this model neglects viscous and rotational effects, turbulent effects are not captured and the physics of the flow in the vicinity of a body is oversimplified. Thus, in a local area around the body, the turbulent RANS VoF model presented in \cref{sec:CFD} is set up and developed within the OpenFOAM\textregistered{}\xspace framework. We restrict the study here to a fully submerged body, such that no coupling in terms of volume fraction is needed. \begin{figure}[htbp!] \begin{center} \includegraphics{coupling_schematic_ch4} \caption{Schematic representation of the DD coupling method applied here. Note that only one ``external'' mesh and one ``internal'' mesh are represented here. In our application, two external meshes will be used, corresponding to the different HPC grids, namely background and immersed.} \label{sch:coupling} \end{center} \end{figure} \subsection{Spatial and temporal interpolations} In the DD approach, the equations within the bulk of the local (viscous) domain are not modified. Thus, we aim to solve directly the system of \cref{eq:RANSequations:div,eq:RANSequations:Mom}. In order to close the system, BCs need to be defined for the various variables, \emph{i.e.}\xspace along $\Gamma_{i,o,t,b,B}$ presented in \cref{sch:coupling}. The knowledge of the potential field at those BCs is thus required. Because the time step and spatial discretizations of the HPC method are much larger than the discretizations required by the RANS approach, interpolations are needed, both in time and space.
\paragraph{Spatial interpolation}~~\\ In practice, and because the same interpolation methods will be used for the FD approach presented in \cref{sec:velocityDecomposition}, the spatial interpolation maps the external fields onto the internal mesh using OpenFOAM\textregistered{}\xspace routines, based on a direct interpolation method, modified to efficiently handle large discretization differences. Assuming the knowledge of a field $q$ given on the external mesh at the cell centers, the procedure is as follows: \begin{itemize} \item A projection onto the external mesh nodes (cell vertices) is first performed. \item For a spatial location of interest $\tI{x}$ (\emph{i.e.}\xspace a cell center/face center of the CFD mesh/boundary), we seek the cell belonging to the external mesh in which $\tI{x}$ is located. \item Afterwards, this cell is split into tetrahedra (note that the RANS model is applied on a 3D mesh of one cell width) and we identify which of them contains $\tI{x}$. Once done, we linearly reconstruct $q(\tI{x})$ using the values at the vertices of the tetrahedron. \end{itemize} For all time steps at which the potential is available, the above method is applied, yielding the potential field projected onto the CFD mesh at each of these time steps. \paragraph{Temporal interpolation}~~\\ Concerning the temporal interpolation, the potential time instants framing the current time $t$ ($t_{n+1}<t\le t_n$) are identified and the corresponding spatial interpolations are used to approximate the field of interest at $t$. In practice, a linear temporal interpolation is used: $q(\tI{x},t)= [(t_n-t)q(\tI{x},t_{n+1}) + (t-t_{n+1})q(\tI{x}, t_n)]/(t_n-t_{n+1})$. At this stage, approximations of the potential fields (velocity and pressure) are available on the CFD mesh at any given time, in particular at the boundaries of the CFD mesh. In the following these fields will be denoted $\tI{u}_p$ and $p_p$.
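The two interpolation steps above can be sketched as follows; the helper names are hypothetical and the time levels are written generically as $t_a < t \le t_b$:

```python
import numpy as np

# Sketch of the two interpolation steps (helper names are hypothetical).
def interp_in_tet(verts, vals, x):
    """Linear reconstruction of q(x) inside a tetrahedron: solve for the
    barycentric weights w with sum(w) = 1 and sum(w_i * verts_i) = x."""
    A = np.vstack([np.ones(4), np.asarray(verts, dtype=float).T])  # 4x4
    w = np.linalg.solve(A, np.concatenate([[1.0], x]))
    return np.dot(w, vals)

def interp_in_time(t, t_a, t_b, q_a, q_b):
    """Linear interpolation between the two stored time levels t_a < t <= t_b."""
    return ((t_b - t)*q_a + (t - t_a)*q_b) / (t_b - t_a)

# exact for a field that is linear in space: q = 2x + 3y - z + 5
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
vals = [5.0, 7.0, 8.0, 4.0]
print(interp_in_tet(verts, vals, np.array([0.25, 0.25, 0.25])))  # 6.0
print(interp_in_time(0.25, 0.0, 1.0, 10.0, 20.0))                # 12.5
```

Both reconstructions are exact for fields that are linear in space (respectively in time), which is the accuracy level retained in the present coupling.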
Note that the procedure is also applied at the faces of the CFD mesh boundaries, meaning that those fields can be used to enforce BCs. \subsection{Boundary conditions} \label{sec:DD:boundaryConditions} With the knowledge of the potential fields (\emph{i.e.}\xspace velocity, pressure, \emph{etc}.), it is possible to implement different BCs. For example, the classical Dirichlet BC: \begin{equation} q(\tI{x}_f)=q_p(\tI{x}_f) \hspace{1cm} \forall f\in \Gamma \label{eq:Dirichlet} \end{equation} is implemented and available for both the pressure and the velocity. The use of this condition for both variables simultaneously is however too restrictive. Furthermore, enforcing \cref{eq:Dirichlet} on the velocity for a closed domain might lead to an unbalanced mass flux, as the HPC model, the time and space interpolations and the different numerical treatments may be prone to numerical errors leading to slightly non-conservative fluxes across the boundaries. For this reason, a variation of the OpenFOAM\textregistered{}\xspace native \ofkw{inletOutlet} BC is employed for the velocity, denoted \ofkw{coupledVelocityInletOutlet}: \begin{subequations} \begin{empheq}[left = \text{$\forall f \in \Gamma$}\empheqlbrace]{align} \text{if } \psi_f\le0 :\hspace{1cm}& q(\tI{x}_f) = q_p(\tI{x}_f) \\ \text{if } \psi_f>0 :\hspace{1cm}& \grads{q} \cdot \tI{n} = 0 \end{empheq} \end{subequations} where $\psi_f$ is the phase flux at the given face $f$, positive for an outward flow. Thus, if the fluid flows out of the CFD domain, a null Neumann condition is applied. Conversely, if the fluid flows inwards, the condition reduces to a Dirichlet BC, where values from the potential model are enforced. Using this condition for the velocity with a Dirichlet BC for the pressure proved to give consistently accurate results with increased stability.
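The flux-dependent switch above amounts to the following, sketched here with a hypothetical helper name (the zero-gradient branch is represented, as in a finite-volume code, by copying the adjacent internal value onto the face):

```python
import numpy as np

# Sketch of the flux-switched condition (helper name hypothetical):
# Dirichlet (potential value enforced) on inflow faces, zero-gradient
# (internal value copied onto the face) on outflow faces.
def coupled_inlet_outlet(psi_f, q_potential_f, q_internal_f):
    return np.where(psi_f <= 0.0, q_potential_f, q_internal_f)

psi = np.array([-0.2, 0.0, 0.3])   # face fluxes, positive for outward flow
qp = np.array([1.0, 2.0, 3.0])     # interpolated potential values
qi = np.array([9.0, 9.0, 9.0])     # values in the cells adjacent to the faces
print(coupled_inlet_outlet(psi, qp, qi))   # [1. 2. 9.]
```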
\section{The Functional Decomposition (FD) approach} \label{sec:velocityDecomposition} The current implementation of the FD approach elaborates on the method introduced by \textcite{Beck:kim2005complementary}, in a fashion similar to the one presented in \textcite{zhang_multi-model_2018}. However, no transition zones are used here and thus the BCs are applied in a direct way at the outer perimeter of the internal domain. As we wish to retain the domain reduction gain described in \cref{sec:domainCoupling}, and keep a one-way coupling scheme in a first step, results are not expected to be in perfect agreement with the RANS method applied in an independent manner. Thus, the method presented below is only applicable to cases where the viscous and turbulent effects do not perturb the far-field flow in a significant way. Otherwise, stability issues should be expected at the BCs, with a difficulty to drive the complementary values (and the turbulent eddy viscosity) to zero. We suspect that issues of this type led \textcite{zhang_multi-model_2018} to consider transition zones at the boundaries of the inner domain. \subsection{Complementary RANS equations} Following \citet{Beck:kim2005complementary}, the complementary counterpart $q^*$ of a given variable whose potential component is $q_p$ is defined so that: \begin{equation} q_t=q_p + q^* \label{eq:basicdecompositionAnyVariable} \end{equation} Hereafter, for the sake of clarity, we index a total variable with a $t$: $\tI{u}_t, p_t$ are thus the total velocity and pressure variables respectively, which are sought as solutions of the original NS or RANS equations. Note that $q_p$ can be obtained from any solver; in the following, the only requirement is for $(\tI{u}_p, p_p)$ to be a solution of the Euler equations.
Applied to the velocity, with $\tI{u}_p$ deriving from a potential, this decomposition of $\tI{u}_t$ is a Helmholtz decomposition ($\tI{u}_t=\rots \tI{a} + \grads \psi $ where $\tI{a}$ is a vector field and $\psi$ a scalar field). Thus, $\tI{u}^*$ contains the rotational part of the total velocity. The Helmholtz decomposition is not unique and we focus on solving for $(\tI{u}^*$, $p^*)$ that complement the potential velocity and pressure ($\tI{u}_p,p_p$) obtained from the FNPF-HPC model. The velocity and pressure are replaced by their decomposition in \cref{eq:RANSequations:div,eq:RANSequations:Mom}. Simplifications are made using the fact that $(\tI{u}_p, p_p)$ is a solution of the Euler equations. Afterwards, the RANS method presented in \cref{sec:RANS} is applied to the complementary velocity to yield the following equations, denoted the complementary RANS equations: \begin{subequations} \begin{empheq}[left=\empheqlbrace]{align} & \divs \tI{u}^*=0 \label{eq:cRANSequations:div}\\ & \begin{aligned} \dt{\rho \tI{u}^* } + \divs{ \rho\tI{u}^*\otimes \tI{u}^* } & + \boxed{\divs{ \rho\tI{u}_p\otimes \tI{u}^* } }+ \boxed{\divs{ \rho\tI{u}^*\otimes {\tI{u}}_p } } \\ & = - \grads{{p}^*} - \divs \tII{T}_{\text{eff}}(\tI{u}^*) -\boxed{ \divs \tII{T}_t (\tI{u}_p) } \end{aligned} \label{eq:cRANSequations:Mom} \end{empheq} \end{subequations} The expression of $\tII{T}_{\text{eff}}(\tI{u}^*)$ is given in \cref{eq:RANSequations:Mom}, using $\tI{u}^*$ in place of $\tI{u}$. The turbulent shear stress tensor applied on $\tI{u}_p$ is given by: \begin{equation} \tII{T}_t(\tI{u}_p) = \mu_t ( [ \grads \tI{u}_p + (\grads \tI{u}_p)^{T} ] - \dfrac{2}{3} (\divs{\tI{u}}_p)\tII{I} ) \end{equation} Note that, because $\mu_t$ is not constant, the divergence of $\tII{T}_t$ does not vanish; only the second part is null, owing to the divergence-free property of $\tI{u}_p$. Given the previous equations, the form of the semi-discretized \cref{eq:HandAonU} remains unchanged.
The differences lie, of course, in how the diagonal and off-diagonal matrices of the momentum equation are computed. More precisely, the different tensors in \cref{eq:HandAonU} are computed according to \cref{eq:cRANSequations:div,eq:cRANSequations:Mom} in place of \cref{eq:RANSequations:div,eq:RANSequations:Mom}. However, for any given equation, OpenFOAM\textregistered{}\xspace offers numerical schemes to compute those terms, which implies that the treatment of the pressure equation does not need to be modified. Thus, the methodology described in \cref{sec:coupledVelocityPressure} is directly applied, and will not be repeated here. \subsection{Boundary conditions} \label{sec:VD:boundaryConditions} In order to close the velocity-pressure problem, a set of boundary conditions has to be defined. While a DD coupling model has to enforce its outer boundaries according to the external solver values, this is not the case in the FD coupling framework. Deriving a Dirichlet BC - imposing a total value $\tI{u}_D$ - in this framework yields: \begin{equation} \tI{u}^* = \tI{u}_D-\tI{u}_p \end{equation} In particular, in the case of a no-slip condition (sea bottom or non-moving body BC), we get: \begin{equation} \tI{u}^* = -\tI{u}_p \text{\ \ at \ $\Gamma_B$} \label{eq:velocityCouplingNoSlip} \end{equation} At the outer boundary, the same Dirichlet condition can be applied.
It is assumed that the effects described by the complementary equations are restricted to the body vicinity and thus, the condition is reduced to: \begin{equation} \tI{u}^*=0 \label{eq:velocityCoupledDirichletOuterBnd} \end{equation} A Neumann BC imposing the spatial derivative of the total velocity as $\left.\dn{\tI{u}}\right|_N$ would reduce to: \begin{equation} \dn{\tI{u}^*} = \left.\dn{\tI{u}}\right|_N - \dn{\tI{u}_p} \label{eq:velocityCouplingNeumann} \end{equation} Because of the one-way nature of the coupling scheme, stability problems are expected when imposing \cref{eq:velocityCoupledDirichletOuterBnd} at the outer boundaries of the RANS domain: the feedback of the turbulent and viscous part to the far-field flow cannot be exactly zero. Thus, in order to counteract this, a mixed Neumann-Dirichlet BC will be used, in the same manner as in \cref{sec:domainCoupling}, derived here for a complementary variable: \begin{subequations} \begin{empheq}[left = \text{$\forall f \in \Gamma$}\empheqlbrace]{align} \text{if } \psi_f\le0 :\hspace{1cm}& q^*(\tI{x}_f) = 0 \\ \text{if } \psi_f>0 :\hspace{1cm}& \grads{q^*} \cdot \tI{n} = 0 \end{empheq} \end{subequations} Note that multiple face fluxes are available in this context, namely the potential one $\psi_p$, the complementary one $\psi^*$ and the total one $\psi_t$. In the present implementation, the user is free to select any one of them. Selecting $\psi_t$ or $\psi_p$ at the outer boundaries of the domain ($\Gamma_{i,o,t,b}$ on \cref{sch:coupling}) should yield the same results, given that we expect the complementary effects to vanish at those boundaries. In practice, no difference was indeed found when selecting either the potential face flux or the total one. Selecting $\psi^*$, however, would imply that a null Dirichlet BC is enforced most of the time, allowing for a flow only when non-null complementary velocities are found in the boundary vicinity.
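The complementary-variable BCs of this section can be sketched as follows, with hypothetical helper names: a total Dirichlet value $\tI{u}_D$ becomes $\tI{u}^* = \tI{u}_D - \tI{u}_p$, and the mixed condition enforces $q^* = 0$ on inflow and zero-gradient on outflow, with a user-selected face flux:

```python
import numpy as np

# Complementary-variable BCs, sketched with hypothetical helper names.
def complementary_dirichlet(u_total_bc, u_potential_f):
    return u_total_bc - u_potential_f   # no-slip wall: u_D = 0 -> u* = -u_p

def complementary_inlet_outlet(psi_f, q_star_internal):
    # zero Dirichlet on inflow faces, zero-gradient (copy) on outflow faces
    return np.where(psi_f <= 0.0, 0.0, q_star_internal)

u_p_wall = np.array([0.4, -0.1])                 # potential velocity at the wall
print(complementary_dirichlet(0.0, u_p_wall))    # [-0.4  0.1]

psi_t = np.array([-0.5, 0.2])                    # selected face flux
q_star = np.array([0.03, 0.07])                  # internal complementary values
print(complementary_inlet_outlet(psi_t, q_star)) # [0.   0.07]
```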
\subsection{Expected sources of discrepancies between DD and FD coupling methods} While in theory the FD and DD approaches are mathematically equivalent, some sources of discrepancies are expected due to numerical treatments. \paragraph{Residual threshold}~~\\ When inverting the matrix of the Poisson equation \cref{eq:pEqn} with any of the available matrix solvers, a target residual (tolerance) needs to be specified. The residuals are scaled with the field values. However, the mean value of the complementary pressure is orders of magnitude lower than the mean value of the total dynamic pressure. A systematic study, further detailed in \cref{sec:sensitivityResultTolerance}, showed that increasing the residual tolerance by a large factor had only a limited effect on the total dynamic pressure field. \paragraph{Nonlinear discretization schemes}~~\\ In order to derive \cref{eq:cRANSequations:div,eq:cRANSequations:Mom} from \cref{eq:RANSequations:div,eq:RANSequations:Mom}, we largely took advantage of the linearity of the divergence and gradient operators. However, some of the available discretization schemes can be nonlinear in $\tI{u}$. For example, the upwind divergence scheme, classically applied for the discretization of the velocity advection, is not fully linear in $\tI{u}$, \emph{i.e.}\xspace, numerically: \begin{equation} \divs \rho \tI{u}_t \otimes \tI{u}^* = \divs \rho (\tI{u}_p+\tI{u}^*) \otimes \tI{u}^* \ne \divs \rho \tI{u}_p \otimes \tI{u}^*+ \divs \rho \tI{u}^* \otimes \tI{u}^* \label{eq:aggregatedVsSeparated} \end{equation} even though the equality should be mathematically verified.
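This inequality is easily reproduced numerically. The following 1D donor-cell sketch is our own toy construction, not the OpenFOAM\textregistered{}\xspace implementation: the upwind choice depends on the sign of the advecting velocity, so splitting the advecting velocity changes the selected donor values.

```python
import numpy as np

# 1D toy illustration of the inequality: the donor-cell upwind divergence
# is nonlinear in the advecting velocity.
def upwind_div(u_face, q, dx):
    """Finite-volume upwind divergence of the flux u*q on a periodic grid;
    face i sits between cells i-1 and i."""
    q_up = np.where(u_face > 0, np.roll(q, 1), q)    # donor-cell value
    flux = u_face * q_up
    return (np.roll(flux, -1) - flux) / dx

n = 16
dx = 1.0 / n
x = np.arange(n) * dx
u_p = np.cos(2*np.pi*x)           # "potential" advecting velocity at faces
u_s = -1.2*np.cos(2*np.pi*x)      # "complementary" velocity at faces
q = np.sin(2*np.pi*x)             # advected quantity at cell centers

aggregated = upwind_div(u_p + u_s, q, dx)
separated = upwind_div(u_p, q, dx) + upwind_div(u_s, q, dx)
print(np.max(np.abs(aggregated - separated)))   # clearly non-zero
```

Here $u_p + u_s$ has the opposite sign of $u_p$ everywhere, so the aggregated and separated discretizations pick different donor cells and the two divergences differ at leading order of the cell size.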
A study of the influence on the solution of discretizing this term in a ``separated'' manner (right-hand side of \cref{eq:aggregatedVsSeparated}), through two different upwind schemes, compared with an ``aggregated'' manner (left-hand side of \cref{eq:aggregatedVsSeparated}) was conducted, showing that the aggregation has a stabilizing effect as long as multiple \ofkw{PIMPLE} iterations are performed. Note however that some terms were simplified by invoking the Euler equations for the potential variables. Thus, $\divs \rho \tI{u}_p \otimes \tI{u}_p$ is not available anymore in \cref{eq:cRANSequations:Mom}: only $\divs \rho \tI{u}^* \otimes \tI{u}_p$ remains, and its discretization might lead to discrepancies when comparing with the DD approach. \section{Models comparison: coupling validation and analysis} \label{sec:resultsModelsComp} In order to validate and compare the results obtained \emph{via} the FNPF-HPC model and the two decomposition approaches, an experimental wave-structure interaction case, detailed in \cref{sec:casedescription}, was selected. Computed loads on the structure, as well as vorticity fields, will then be compared with a standalone CFD simulation of the same case done with OpenFOAM\textregistered{}\xspace and denoted \ofkw{waveFoam}, referring to the name of the solver used \parencite{jacobsenFuhrmanFredsoe2012}. \subsection{Case description} \label{sec:casedescription} \subsubsection{Geometry} A fully immersed horizontal cylinder of rectangular cross-section is selected, reproducing the experimental studies of \textcite{arai1995forces,venugopal_hydrodynamic_2002}. An aspect ratio (height over length) of the rectangle $H_c/L_c=1/2$ is selected here. The relative submergence depth of the center of the rectangle with respect to the Still Water Level (SWL) is set as $d_c/H_c=4.1$ and the still water depth is set to $h/d_c=2.68$. A sketch of the case with the main dimensions is given in \cref{sch:cylinderVenugSch3}.
Note that spatial profiles of flow variables will be sampled in the following along the blue line {\color{Blue} $l_{v1}$}, from the body upper wall to the CFD mesh top boundary ($\Gamma_t$ in \cref{sch:coupling}). \begin{figure}[htb!] \centering \includegraphics{venugopalCylinder_schematic} \caption{Schematic representation of the selected case. Note that the sea bottom is not represented, and everything is drawn to scale, considering a wavelength $\lambda=\SI{6.15}{\meter}$ (half of it represented here) and a wave steepness $H/\lambda=7.0\%$.} \label{sch:cylinderVenugSch3} \end{figure} \subsubsection{Turbulence modeling}\label{sec:turbulenceModels} The selected model is the \texorpdfstring{$k-\omega$}{k-w} SST\xspace \parencite{menter_two-equation_1994}, for its capability to capture both near-wall and larger-scale flows. Because no version of this model suited to multiphase flows is available in OpenFOAM\textregistered{}\xspace (v1712), the correction made available by \textcite{devolder_application_2017,devolder_performance_2018} is used instead. Turbulence models were however shown to be formally unstable under potential waves by \textcite{larsen2018over}, who suggested a limitation based on the magnitude of the rotation rate tensor. This limiter however proved to be intrusive in the vortex-shedding zones of the selected case, and is therefore not used here. Additional studies are currently being conducted in order to further analyse and correct this shortcoming. \subsubsection{Incoming waves} In this study, regular incoming waves with a fixed period $T=\SI{2}{\second}$ are selected. Different wave heights will be simulated in \cref{sec:resultsVsLiterature}, though most of the simulations of this section consider a fixed wave steepness $H/\lambda = 3.5\%$. Because of the choice of the turbulence model, see \cref{sec:turbulenceModels}, the turbulence growth is damped at the air-water interface but still arises in the bulk of the fluid domain.
Energy loss is thus expected when OpenFOAM\textregistered{}\xspace is used to propagate water waves over distances of a few wavelengths, contrary to what is observed with the FNPF-HPC solver. In summary, within either the DD or FD coupling approach, the cylinder might not be subjected to the same wave and local flow field as in the standalone CFD approach, and the coupled approaches are expected to be more accurate regarding the quality of the propagated wave field at the body location. \subsubsection{Boundary conditions} \renewcommand{\arraystretch}{1.3} \begin{table}[tbp!] \newcommand{\mr}[2]{\multirow{#1}{*}{#2}} \centering \scriptsize{ \begin{tabular}{|c|c|c|c|c|c|} \hline boundary & model & $\tI{u}$; $\tI{u}^*$(\si{\meter\per\second}) & $p-\rho g h$; $p^*$ & $k$ (\si{\kunit}) & $\omega$ (\si{\per\second}) \\ \hline \mr{2}{$\Gamma_{i,t,o,b}$} & DD & cIO & cfV & \mr{2}{fV:$10^{-10}$} & \mr{2}{fV: 100} \\ \cline{2-4} & FD & IO:$\tI{0}$ & fV:$0$ & & \\ \cline{1-1} \cline{2-6} \mr{2}{$\Gamma_{B}$ } & DD & fV:$\tI{0}$ & \mr{2}{\texttt{fixedFluxPressure}:0} & \mr{2}{\texttt{ kqRWallFunction }} & \mr{2}{\texttt{omegaWallFunction}} \\ \cline{2-3} & FD & -cfV & & & \\ \hline \end{tabular} } \caption{Table of the tested sets of boundary conditions. IO stands for \ofkw{inletOutlet}, fV for \ofkw{fixedValue}. A prefix ``c'' is added when their coupled counterparts are used. For details about these boundary conditions, see \cref{sec:DD:boundaryConditions,sec:VD:boundaryConditions}.} \label{tab:testedBcSetsDomainCoupling} \end{table} \Cref{tab:testedBcSetsDomainCoupling} lists the types of the boundary conditions used in the coupled models. A wall model is used for the body boundary condition of the turbulence variables, allowing a $y^+$ located either in the low-Reynolds zone or in the log-profile region.
In the case with $H/\lambda=3.5\%$, the maximum (over time and space) encountered value of $y^+$ is about 6, giving confidence in the proper resolution of the boundary layer flow. Values of potential variables at the coupled boundaries are interpolated from the fields obtained with the FNPF-HPC model that was run \emph{a priori}. Note that results obtained independently with this model (thus corresponding to a fully potential flow simulation) will also be shown in the following comparisons. \subsubsection{Numerical parameters} With the aim of accurately comparing the coupling methods with the standalone \ofkw{waveFoam} simulation, most of the numerical parameters were kept identical. Among them, the gradient operator is discretized with a least-square limited method, and the advection of the velocity with a limited linear scheme. An unlimited linear scheme is also used for all other divergence terms, except for the convection of turbulent variables, for which an upwind scheme is used instead. Finally, the Laplacian operator is discretized with a linear method, with a correction for the non-orthogonality of the mesh. Note that the FD method requires the \ofkw{PIMPLE} loop to be active, with an exit residual target specified, in order to yield stable results. The mesh itself also inherits most of the features of the \ofkw{waveFoam} mesh: the wall vicinity is discretized such that the boundary layer can be accurately resolved, and a refined discretization is used close to the body corners. The overall body vicinity discretization is also respected, with the use of square cells of dimension $dx=\SI{0.056}{\meter}$. A mesh independence study (not shown here for brevity) was performed to select this value. The time step is fixed at a value of $dt=T/4000=\SI{5e-4}{\second}$. For the HPC method, a dual mesh method (boundary-fitted overlapping grid) is used, as presented in \textcite{robaux_development_2021}.
The background mesh, used to propagate the incident and reflected waves, is discretized with $dx=\lambda/60$, and the time step is also fixed, at a value controlled by the Courant-Friedrichs-Lewy (CFL) number based on the phase velocity of the waves. This CFL is fixed at $2$, leading to a time step of $dt=T/30=\SI{0.067}{\second}$. \subsection{Temporal loads series} \label{sec:temporalloads} At any time instant, the force applied on the cylinder (per unit width in the transverse direction) is calculated by integrating the stress along the body boundary, and decomposed into a horizontal component $(f_x)$ and a vertical component $(f_z)$. The temporal load series obtained with the four presented models are depicted in \cref{fig:plottemporalLoadsHPCOFvCdCwF} for the case $T=2$~s and $H/\lambda = 3.5\%$. While a relative agreement can be found between all four models, it is noticeable that the coupled methods, making use of the HPC results, are able to recover the loads obtained with waveFoam, especially the horizontal one. A good agreement of the loads is indeed observed after a few periods of evolution. Note that the coupled models are started at a time when the HPC method already yields stable and periodic results. Finally, the right panels focus on a one-wave-period time range after 6 periods have been simulated with the coupled methods, while a duration of 18 periods was necessary with HPC and waveFoam used as standalone codes. \begin{figure}[htb!] \centering \includegraphics{plottemporalLoadsHPCOFvCdCwFLegend} \sfig{plottemporalLoadsHPCOFvCdCwF}[0.59] \sfig{plottemporalLoadsHPCOFvCdCwFZoomed}[0.39] \caption{Temporal series of the loads obtained in the case $T=2$~s and $H/\lambda = 3.5\%$ with the waveFoam solver, the DD method, the FD method and the standalone HPC code.} \label{fig:plottemporalLoadsHPCOFvCdCwF} \end{figure} We note that the two coupled models are in very good agreement concerning both load components.
This fact tends to validate the correct implementation of both models, as they both work with equivalent boundary conditions. However, the vertical loads from the coupled models compare better with HPC than with waveFoam, indicating a lower impact of the turbulent and rotational effects on this force component. It will be shown in \cref{sec:localFieldsDescriptions} that the pressure imposed at the top boundary differs from the waveFoam pressure and that this discrepancy propagates down to the cylinder wall. As stated earlier, the incoming waves predicted by HPC and waveFoam slightly differ in height at the body location, which might also contribute to the obtained difference in terms of vertical load. \subsection{Vorticity fields} The FNPF-HPC model, used to impose the boundary conditions of the coupled approaches, neglects flow vorticity. The underlying hypothesis is that the extent of the CFD mesh covers the entire region where the vorticity cannot be neglected. In \cref{fig:vorticityFields}, the vorticity fields from the coupled approaches are shown together with the vorticity field predicted by the \ofkw{waveFoam} standalone computation. \begin{figure}[htb!] \centering \sfigScaled[waveFoam]{WFvorticity} \sfigScaled[domain coupling]{DDvorticity} \sfigScaled[functional decomposition]{VDvorticity} \caption{Comparison of the vorticity fields predicted by (a) waveFoam and the coupled models ((b) DD method; (c) FD method) at $t/T=18$, \emph{i.e.}\xspace once a periodic behavior is established (case $T=2$~s and $H/\lambda = 3.5\%$).} \label{fig:vorticityFields} \end{figure} A good visual agreement can be observed, emphasizing the capability of both coupled approaches to recover vorticity effects, despite the enforcement of BCs with null vorticity. Note also that this figure tends to validate the underlying assumption that the CFD mesh covers the entire region where vorticity remains significant.
\subsection{Local field descriptions}\label{sec:localFieldsDescriptions} In order to compare model results in a more quantitative manner, flow variables are sampled over a vertical line starting at the center of the upper wall of the body section and ending at the upper boundary of the local mesh around the body (\textcolor{Blue}{$l_{v1}$} on \cref{sch:cylinderVenugSch3}) at 40 different times during a given wave period. \paragraph{Comparisons at a given time}~~\\ An example of the obtained fields is given in \cref{fig:plotOverLineMultiCasevCvsdCU} at a particular time for the velocity components and pressure. Note that the potential fields are also included; they are used both to impose the boundary conditions in the two coupled approaches and as the potential components in the FD approach. For the FD method, the three available fields are represented, namely the potential components, the complementary components and their sum, \emph{i.e.}\xspace the total fields. \begin{figure}[htb!] \centering \includegraphics{plotOverLineMultiCasevCvsdCULegend} \sfig{plotOverLineMultiCasevCvsdCU} \sfig{plotOverLineMultiCasevCvsdCU2} \sfig{plotOverLineMultiCasevCvsdCP} \caption[velocity Coupling local results]{Comparison of the complementary fields, the potential ones and the resulting total ones with waveFoam and the DD solver at a particular time $t/T=20$ sampled along a vertical segment on top of the body upper face, denoted \textcolor{Blue}{$l_{v1}$} on \cref{sch:cylinderVenugSch3} (case $T=2$~s and $H/\lambda = 3.5\%$).} \label{fig:plotOverLineMultiCasevCvsdCU} \end{figure} It can be noted that the pressure computed from HPC is slightly different from the one predicted by waveFoam, from the body upper wall ($z=\SI{-0.72}{\meter}$) to the top boundary ($z=\SI{-0.3}{\meter}$). This discrepancy, enforced in the coupled approaches at the top boundary, propagates down to the body walls.
It is thought to be the reason for the vertical load discrepancies observed in \cref{fig:plottemporalLoadsHPCOFvCdCwF}. It is however possible to note that the near-wall evolution of the pressure from the two coupling methods is similar to the one obtained with waveFoam (and thus distinct from the HPC pressure), while the shift due to the HPC pressure is maintained further away. This is noticeable, for example, by noting that the complementary pressure is almost null (\emph{i.e.}\xspace $p=p_p$) away from the body, but exhibits a small value in the wall vicinity. Finally, we observe that while the complementary velocity, which ``corrects'' the potential values, is of large magnitude compared to the potential one, the complementary pressure is very small. \paragraph{Comparisons over a wave period}~~\\ Hereafter, and for the rest of this study, only the total fields obtained with the FD method will be presented, for clarity.
\begin{figure}[htbp!] \centering
\vspace{-0.12cm}
\includegraphics{plotBoundaryLayer2Meshes1PeriodvCvsWF0Legend}
\vspace{-0.12cm}
\begin{subfigure}[c]{0.5\textwidth} \includegraphics{%
plotBoundaryLayer2Meshes1PeriodvCvsWF0} \caption{ $\bar { t } <0.5$ } \end{subfigure}%
\begin{subfigure}[c]{0.5\textwidth} \includegraphics{%
plotBoundaryLayer2Meshes1PeriodvCvsWF1} \caption{ $\bar { t } \ge 0.5$ } \end{subfigure}%
\vspace{-0.12cm}
\begin{subfigure}[c]{0.5\textwidth} \includegraphics{%
plotBoundaryLayer2Meshes1PeriodvCvsWFU20} \caption{ $\bar { t } <0.5$ } \end{subfigure}%
\begin{subfigure}[c]{0.5\textwidth} \includegraphics{%
plotBoundaryLayer2Meshes1PeriodvCvsWFU21} \caption{ $\bar { t } \ge 0.5$ } \end{subfigure}%
\vspace{-0.12cm}
\begin{subfigure}[c]{0.5\textwidth} \includegraphics{%
plotBoundaryLayer2Meshes1PeriodvCvsWFprgh0} \caption{ $\bar { t } <0.5$ } \end{subfigure}%
\begin{subfigure}[c]{0.5\textwidth} \includegraphics{%
plotBoundaryLayer2Meshes1PeriodvCvsWFprgh1} \caption{ $\bar { t } \ge 0.5$ } \end{subfigure}%
\vspace{-0.12cm}
\begin{subfigure}[c]{0.5\textwidth} \includegraphics{%
plotBoundaryLayer2Meshes1PeriodvCvsWFnut0} \caption{ $\bar { t } <0.5$ } \end{subfigure}%
\begin{subfigure}[c]{0.5\textwidth} \includegraphics{%
plotBoundaryLayer2Meshes1PeriodvCvsWFnut1} \caption{ $\bar { t } \ge 0.5$ } \end{subfigure}%
\vspace{-0.12cm}
\caption{waveFoam, domain coupling and velocity coupling total fields sampled over a vertical line ($x_l=\SI{0}{\meter}$, $z=\ $\SIrange{-0.72}{-0.3}{\meter}, \textcolor{Blue}{$l_{v1}$} on \cref{sch:cylinderVenugSch3}) at $40$ time instants separated into two half wave periods (left and right panels).} \label{fig:plotBoundaryLayer2Meshes1PeriodvCvsWF0} \end{figure}
All 40 selected time instants, as well as the obtained envelopes, are shown in \cref{fig:plotBoundaryLayer2Meshes1PeriodvCvsWF0}. Left panels correspond to the first half wave period, and right panels to the second half period. We note that both coupling approaches recover the expected horizontal velocity profile, as well as its envelopes, in an accurate manner. Discrepancies between individual time-instant curves can be attributed to a small phase shift, which is known to occur during the wave propagation process, mainly with the waveFoam approach. The pressure profiles are also recovered, even though the pressure added by the coupled approaches was shown to be of small magnitude compared to the HPC results. The obtained magnitudes are thus not very different from the HPC potential pressure profiles (not shown here). The DD approach seems to accurately capture all profiles and envelopes, at least in the body vicinity. Note that the turbulence is neglected at the top boundary of the local mesh ($z=-0.3$~m) in the coupled approaches, but is not found to be null at that location by waveFoam.
We enforced a high value of the dissipation rate (\emph{c.f.}\xspace \cref{tab:testedBcSetsDomainCoupling}) at the outer boundaries because, otherwise, the expected rise of turbulent viscosity in their vicinity yields nonphysical peaks due to the difficulty of respecting a null TKE value. This high dissipation rate value, though not physically consistent, is numerically beneficial in terms of stability and does not significantly modify the flow in the wall vicinity. Furthermore, it is consistent with the underlying hypothesis that turbulence fades away when approaching the outer boundary of the local domain. Results of the FD approach for the vertical velocity, as well as for the turbulent viscosity, exhibit more discrepancies than those of the DD approach. This is thought to arise from the possible sources of discrepancy discussed previously. \subsection{Parameter investigations} \subsubsection{Assessment of hot start capabilities of the DD approach} \Cref{fig:plottemporalLoadsHPCFOFCompareStartTime} shows the time series of the loads obtained with waveFoam and the DD approach with two different starting times. The first one is started at $t_0/T=0$, meaning that for a significant duration ($t/T\in\sim[0,5]$), the CFD solver receives mostly null values at its boundary conditions: the wave has not yet propagated up to the body location. If the maximum local CFL number were used to control the time step, this first phase would be relatively cheap in terms of computational cost. Afterwards, a second phase occurs during which the CFD code solves the problem with boundary conditions that have not yet reached a periodic behavior (a CFL control would not decrease the CPU cost of this phase): if one is not interested in the transient behavior, this phase could also be skipped. Only later, at $t/T\approx 10$, is a periodic and converged wave field expected to be imposed at the boundaries of the local domain. \begin{figure}[htb!]
\centering \includegraphics{plottemporalLoadsHPCOFCompareStartTimeLegend} \sfig{plottemporalLoadsHPCOFCompareStartTime}[0.59] \sfig[Zoom]{plottemporalLoadsHPCOFCompareStartTimeZoomed}[0.39] \caption{Temporal series of the loads when starting the DD coupling method at $t_0/T=0$ and with a ``hot start'' at time $t_0/T=12$, compared with a waveFoam standalone simulation. See text for details.} \label{fig:plottemporalLoadsHPCFOFCompareStartTime} \end{figure} For this reason, a second run of the DD coupled model is performed, starting at $t_0/T=12$ (denoted ``hot start''), with the potential fields mapped on the CFD local mesh as starting conditions. With this approach, the expected number of simulated wave periods required to reach a periodic state is lowered. In practice, we found that a periodic state was reached after approximately $6T$ using a later starting time (\emph{e.g.}\xspace $t_0/T=12$ here), while a duration of at least $15T$ was mandatory when starting the DD case at $t_0/T=0$. The comparison of loads after those $6T$, presented in the right panels of \cref{fig:plottemporalLoadsHPCFOFCompareStartTime}, shows a good agreement, supporting the assumption that running the DD simulation from $t_0/T=0$ increases the CPU time without any added value. \Cref{fig:plotBoundaryLayer2Meshes1PeriodU0hotstart0} further confirms this conclusion by showing good agreement between the local fields obtained with the DD approach using two different starting times.
\begin{figure}[!htb] \centering \includegraphics{plotBoundaryLayer2Meshes1PeriodU0hotstart0Legend} \sfig[] {plotBoundaryLayer2Meshes1PeriodU0hotstart0} \sfig[] {plotBoundaryLayer2Meshes1PeriodNuthotstart0} \caption{Horizontal velocity (\cref{fig:sub:plotBoundaryLayer2Meshes1PeriodU0hotstart0}) and turbulent kinematic viscosity (\cref{fig:sub:plotBoundaryLayer2Meshes1PeriodNuthotstart0}) profile envelopes along the transect {\color{Blue} $l_{v1}$} (see \cref{sch:cylinderVenugSch3}), computed from $40$ time-step values per wave period. Different starting times ($t_0$) are used; fields are therefore compared after $nT$ simulated periods. } \label{fig:plotBoundaryLayer2Meshes1PeriodU0hotstart0} \end{figure} \subsubsection{Sensitivity to the mesh breadth of the DD approach} \label{sec:mesh_breatdh} All the previously presented computations were done with a local CFD mesh breadth of $B_m = 7.5 L_c = 3\si{\meter}$. Because a reduced extent of the local mesh would be computationally faster, a convergence study is performed on this parameter, varying $B_m$ from a very low value of $1.25 L_c = 0.5\si{\meter}$ to the above value $7.5 L_c$ (see \cref{fig:schematicBm}). Hence, the goals here are i) to validate that the current breadth yields converged results, and ii) to investigate its lower limit in order to evaluate how much computational power could be saved. \begin{figure}[htb!] \centering \includegraphics{Bmschematics} \caption{Different CFD mesh horizontal spans parameterized by $B_m$. Sketch to scale.} \label{fig:schematicBm} \end{figure} The maximum values of the loads computed with the DD method are shown in \cref{fig:plottemporalAmplitudeErrorbreadthErrors}. We note that, while some differences can be seen, they are of small amplitude as soon as $B_m \ge \SI{0.75}{\meter}$, which already yields valuable results. Further reducing the mesh width (\emph{i.e.}\xspace $B_m=\SI{0.5}{\meter}$) leads to large discrepancies.
Note that the horizontal width of the body is $L_c=\SI{0.4}{\meter}$; thus, a $\SI{0.5}{\meter}$ domain breadth means that the extent of the CFD domain, past the body vertical sides, is only $\SI{0.05}{\meter} = L_c/8$. \begin{figure}[htb!] \centering \includegraphics{plottemporalAmplitudeErrorbreadthErrors} \caption{Maximum values of obtained loads for different CFD mesh horizontal breadths predicted with the DD approach.} \label{fig:plottemporalAmplitudeErrorbreadthErrors} \end{figure} Let us define the horizontally meshed length on each side of the cylinder as $\delta l_m = (B_m - L_c)/2$. Thus, relative to the incoming wavelength, the meshed part on each side is $\delta l_m / \lambda = 1/123$ with the shortest domain. With a mesh breadth of $\SI{0.75}{\meter}$, the same parameter is $1/35$, and it reaches $1/20$ for $B_m=\SI{1}{\meter}$. This means that meshing 1/35\textsuperscript{th}$\sim$1/20\textsuperscript{th} of the wavelength on each side of the cylinder is sufficient to capture most of the turbulent and vorticity effects in the vicinity of the body. To remain conservative, and because the computational cost is not much higher (most cells are located in the vicinity of the body walls), we retain the $B_m=\SI{3}{\meter}$ CFD mesh for future computations (\emph{i.e.}\xspace meshing a length of 1/5\textsuperscript{th} of the wavelength on each side of the body). \subsubsection{Sensitivity to tolerance targets of the FD method} \label{sec:sensitivityResultTolerance} A study was conducted on the tolerance targets and their effects on the obtained results within the FD approach. It was shown that using the original tolerances ($10^{-9}$, $10^{-8}$ and $10^{-7}$ for the velocity, pressure and \ofkw{PIMPLE} loop, respectively) leads to a large computational cost, mainly because these tolerances correspond to dimensionless numbers scaled by the field values.
In the FD approach, the fields of interest, especially the pressure $p^*$, are an order of magnitude smaller than the fields resolved in the DD approach. In practice, it was found that the targets could be multiplied by $10^4$ (\emph{i.e.}\xspace $10^{-5}$, $10^{-4}$ and $10^{-3}$ for the velocity, pressure and the \ofkw{PIMPLE} loop, respectively), with a maximal difference of only $3.6\%$ on the obtained complementary pressure field after the simulation of 6 wave periods. We would like to underline that this difference is relative to the \emph{complementary} pressure field amplitude, which is an order of magnitude lower than the total pressure field ($\approx12\si{Pa}$ and $700\si{Pa}$ respectively, see \emph{e.g.}\xspace \cref{fig:sub:plotOverLineMultiCasevCvsdCP}). \section{Validation against experimental measurements} \label{sec:resultsVsLiterature} \subsection{Introduction} In this section, we apply the developed coupling schemes to a range of incident regular wave conditions for the submerged horizontal cylinder described in \cref{sec:casedescription} and \cref{sch:cylinderVenugSch3}, keeping the same geometry: $H_c/L_c=1/2$, $d_c/H_c=4.1$ and $h/d_c=2.68$. This set-up was considered experimentally by \textcite{arai1995forces,venugopal_hydrodynamic_2002,venugopal_drag_2009}, and simulated numerically by \textcite{li_hydrodynamic_2010} using a CFD VoF approach. While in the current work the wave period remains fixed at $T=\SI{2}{s}$, several increasing wave heights were simulated, leading to a wave steepness varying from $H/\lambda =0.5\%$ to $8.6\%$. The Keulegan-Carpenter number, defined as \begin{equation} \text{KC}= \pi \dfrac{H}{L_c} \dfrac{ \cosh(k(h-d_c))}{\sinh(kh)}, \label{eq:KC} \end{equation} where $k=2\pi/\lambda$ is the wave number, represents the ratio of the amplitude of the horizontal excursion of a fluid particle to the horizontal length of the body, assuming linear waves.
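As a hedged cross-check (not code from the paper), the quoted KC range can be approximated from \cref{eq:KC} together with the linear dispersion relation. The fixed-point dispersion solver and the use of the linear wavelength to convert the quoted steepness range into wave heights are illustration assumptions; the paper itself uses stream function waves, so some difference at the steep end is expected.

```python
import math

# Geometry and wave parameters taken from the text:
# L_c = 0.4 m, H_c = L_c/2, d_c = 4.1*H_c, h = 2.68*d_c, T = 2 s.
g = 9.81
T = 2.0                    # wave period [s]
L_c = 0.4                  # body horizontal length [m]
H_c = 0.5 * L_c            # body height [m]
d_c = 4.1 * H_c            # submergence of the body [m]
h = 2.68 * d_c             # water depth [m]

def wave_number(T, h, g=9.81, n_iter=50):
    """Solve the linear dispersion relation omega^2 = g k tanh(k h)
    by fixed-point iteration, starting from the deep-water guess."""
    omega = 2.0 * math.pi / T
    k = omega**2 / g
    for _ in range(n_iter):
        k = omega**2 / (g * math.tanh(k * h))
    return k

def keulegan_carpenter(H, T, h, d_c, L_c):
    """KC = pi*(H/L_c)*cosh(k(h-d_c))/sinh(k h), as in Eq. (KC)."""
    k = wave_number(T, h)
    return math.pi * (H / L_c) * math.cosh(k * (h - d_c)) / math.sinh(k * h)

k = wave_number(T, h)
lam = 2.0 * math.pi / k
# Wave heights from the quoted steepness range H/lambda in [0.5%, 8.6%]
KC_low = keulegan_carpenter(0.005 * lam, T, h, d_c, L_c)
KC_high = keulegan_carpenter(0.086 * lam, T, h, d_c, L_c)
print(f"lambda = {lam:.2f} m, KC in [{KC_low:.2f}, {KC_high:.2f}]")
```

Under these linear-wave assumptions, the lower bound agrees with the quoted $\text{KC} \approx 0.11$, and the upper bound comes out close to (though somewhat below) the quoted $2.07$.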
KC will thus be used to parameterize the wave height, and varies in the range $\text{KC}\in[0.11, 2.07]$. Note that both the wave steepness and KC vary by a factor of $\sim 18$ from the lowest wave height to the most nonlinear condition. To get a more synthetic appraisal of the models' results and compare them with the above-mentioned reference data sets, we restrict our attention to the values of the hydrodynamic coefficients (HC) extracted from the time series of computed loads on the body, as explained in the next sub-section. \subsection{Computation of the hydrodynamic coefficients (HC)} Literature comparisons are usually done on the hydrodynamic coefficients (namely the inertia coefficients $C_{Mx,z}$ and drag coefficients $C_{Dx,z}$) that model the forces applied on the body following the so-called Morison equation (\textcite{morison_force_1950}): \begin{subequations} \begin{align} F_{xm} & = \frac{1}{2} \rho C_{Dx} H_c u_x \sqrt{u_x^2 + u_z^2} + \rho A C_{Mx} \dot{u}_{x} \label{eq:HydrodynamicCoefficients:fx}\\ F_{zm} & = \frac{1}{2} \rho C_{Dz} L_c u_z \sqrt{u_x^2 + u_z^2} + \rho A C_{Mz} \dot{u}_{z} \label{eq:HydrodynamicCoefficients:fz} \end{align} \label{eq:HydrodynamicCoefficients} \end{subequations} where $F_{xm}$ and $F_{zm}$ are the horizontal and vertical loads applied on the body, as modeled by the Morison formula (expressed in $\si{\newton\per\meter}$ in the present 2D framework), and $A=H_cL_c$ is the area of the rectangular cylinder cross-section. For simplicity, these forces will be called the Morison loads hereafter. $u_x$ and $u_z$ are respectively the horizontal and vertical velocities of the flow, and $\dot{u}_{x}$, $\dot{u}_{z}$ are the accelerations of the flow in the corresponding directions. Note that the selection of these kinematic values represents a first significant decision. The Morison equations were originally developed to model the force applied by a uniform oscillatory flow on an object of small dimensions (relative to the wavelength).
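A minimal numerical sketch of the Morison model of \cref{eq:HydrodynamicCoefficients}, together with a least-squares estimation of its coefficients (the general route followed in this sub-section), can be written as follows. This is not the implementation used in this work: the 1D setting, the sinusoidal kinematics, the velocity amplitude and the coefficient values are hypothetical illustration choices, and the fit is done with a generic linear least-squares solver rather than the closed-form expressions quoted below.

```python
import numpy as np

# Morison model in 1D: F = 0.5*rho*C_D*D*u*|u| + rho*A*C_M*du/dt
rho = 1000.0               # water density [kg/m^3]
L_c, H_c = 0.4, 0.2        # body cross-section dimensions [m] (from the text)
A = L_c * H_c              # cross-section area [m^2]
D = H_c                    # reference length for the horizontal drag term

C_D_true, C_M_true = 2.0, 1.5          # hypothetical coefficients
omega = 2.0 * np.pi / 2.0              # T = 2 s
t = np.linspace(0.0, 2.0, 400)
u = 0.3 * np.cos(omega * t)            # horizontal velocity [m/s] (assumed)
du = -0.3 * omega * np.sin(omega * t)  # horizontal acceleration [m/s^2]

# Synthetic "measured" load generated from the Morison formula itself
F = 0.5 * rho * C_D_true * D * u * np.abs(u) + rho * A * C_M_true * du

# Linear least squares: F ~ a*u|u| + b*du, with a = 0.5*rho*C_D*D, b = rho*A*C_M
G = np.column_stack([u * np.abs(u), du])
(a, b), *_ = np.linalg.lstsq(G, F, rcond=None)
C_D_fit = 2.0 * a / (rho * D)
C_M_fit = b / (rho * A)
print(C_D_fit, C_M_fit)   # recovers the input coefficients on this noise-free data
```

On real (or simulated) load series, the same design matrix is used; the residual of this fit is what the normalized $L_2$ error below quantifies.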
The associated assumption is that the body is subjected to a flow on which it has a limited impact. In this case, the choice of the associated kinematics is straightforward: one should select the kinematics of the unperturbed flow (\emph{i.e.}\xspace in the absence of the body). By extension, the wave model imposed at the inlet of the computational grid (here a stream function theory) is evaluated at the location of the center of the cylinder. Given a temporal load series, many methods exist to estimate the ``corresponding'' hydrodynamic coefficients, \emph{i.e.}\xspace the hydrodynamic coefficients that yield Morison loads as close as possible to the measured or simulated loads. One of the most common methods is to minimize the root-mean-square error between the modeled loads and the actual ones. At a given time step $i$, the error is given by $e_{x,z}(i) = F_{x,z}(i) - F_{xm,zm}(i) $. Indices $x,z$ represent the errors for the horizontal and vertical components of the force, respectively. Thus, one can minimize the resulting time-averaged, range-normalized root-mean-square error: \begin{equation} L_2(F_{x,z})=\frac{\sqrt{\frac{1}{N}\sum_i^N e_{x,z}^2(i)}}{\max_i F_{x,z}(i) - \min_i F_{x,z}(i) }.
\label{eq:L2_error} \end{equation} Following \textcite{venugopal_hydrodynamic_2002}, the coefficients that minimize these $L_2$ errors over a certain number of time steps $N$ can be analytically derived as: \begin{subequations} \begin{align} C_{Dx,z} & = \dfrac{2}{\rho D} \dfrac{f_1f_2 -f_3f_4}{f_2f_5- f_4^2} \label{eq:HydrodynamicCoefficientsWithfi:fx}\\ C_{Mx,z} & = \dfrac{2}{\rho A}\dfrac{f_3f_5 -f_1f_4}{f_2f_5- f_4^2} \label{eq:HydrodynamicCoefficientsWithfi:fz} \end{align} \label{eq:HydrodynamicCoefficientsWithfi} \end{subequations} where $D$ is either $H_c$ or $L_c$ for the horizontal and vertical drag coefficients, respectively, and the $f_i$ terms are given by: \begin{equation} \begin{aligned} f_1 & = \sum_i^N F_{x,z}(i) u_{x,z}(i) \norm{\tI{u}(i)} \\ f_2 & = \sum_i^N \dot{u}_{x,z}^2(i) \\ f_3 & = \sum_i^N F_{x,z}(i) \dot{u}_{x,z}(i) \\ f_4 & = \sum_i^N u_{x,z}(i) \norm{\tI{u}(i)} \dot{u}_{x,z}(i) \\ f_5 & = \sum_i^N u_{x,z}^4(i) \\ \end{aligned} \label{eq:fiforhydrocoeffs} \end{equation} \begin{rmk} Note that another method is presented in \textcite{arai1995forces}: it consists in expanding the kinematic variables in Fourier series, based on their analytic expressions, to obtain the Fourier decomposition of the Morison loads. Then, a numerical harmonic decomposition is also performed on the obtained loads. Finally, both decompositions are equated, and the corresponding terms are identified. With this method, the coefficients are obtained in terms of the Fourier harmonics that constitute the load series. This method was also implemented in this work, with no major differences in terms of the obtained HC. The results will thus not be presented here.
\end{rmk} \subsection{Results and discussion} All results in terms of HC are depicted in \cref{fig:plotCdxCdxVenugopalArticlevCdC}: \cref{fig:sub:plotCdxCdxVenugopalArticlevCdCCmx,fig:sub:plotCdxCdxVenugopalArticlevCdCCmz} show respectively the obtained horizontal and vertical inertia coefficients ($C_{Mx}$ and $C_{Mz}$ respectively), \cref{fig:sub:plotCdxCdxVenugopalArticlevCdCCdx,fig:sub:plotCdxCdxVenugopalArticlevCdCCdz} the obtained drag coefficients ($C_{Dx}$ and $C_{Dz}$ respectively), and \cref{fig:sub:plotCdxCdxVenugopalArticlevCdCErrx,fig:sub:plotCdxCdxVenugopalArticlevCdCErrz} the $L_2$ norm of the error (given by \cref{eq:L2_error}) in the reconstruction of the obtained loads \emph{via} the Morison formula \cref{eq:HydrodynamicCoefficients:fx,eq:HydrodynamicCoefficients:fz}. We would like to stress that this error does not represent a discrepancy between the obtained values and the experimental ones, but only the consistency in representing the loads with the Morison model. Note that, in order to make the comparison easier, the axis ranges of all panels of this figure are chosen so as to fit the range of the presented data. \begin{figure}[htbp!]
\centering \includegraphics{plotCdxCdxVenugopalArticlevCdCLegend} \begin{subfigure}[c]{0.5\textwidth} \includegraphics{plotCdxCdxVenugopalArticlevCdCCmx} \caption{} \label{fig:sub:plotCdxCdxVenugopalArticlevCdCCmx} \end{subfigure}% \begin{subfigure}[c]{0.5\textwidth} \includegraphics{plotCdxCdxVenugopalArticlevCdCCmz} \caption{} \label{fig:sub:plotCdxCdxVenugopalArticlevCdCCmz} \end{subfigure}% \begin{subfigure}[c]{0.5\textwidth} \includegraphics{plotCdxCdxVenugopalArticlevCdCCdx} \caption{} \label{fig:sub:plotCdxCdxVenugopalArticlevCdCCdx} \end{subfigure}% \begin{subfigure}[c]{0.5\textwidth} \includegraphics{plotCdxCdxVenugopalArticlevCdCCdz} \caption{} \label{fig:sub:plotCdxCdxVenugopalArticlevCdCCdz} \end{subfigure}% \begin{subfigure}[c]{0.5\textwidth} \includegraphics{plotCdxCdxVenugopalArticlevCdCErrx} \caption{} \label{fig:sub:plotCdxCdxVenugopalArticlevCdCErrx} \end{subfigure}% \begin{subfigure}[c]{0.5\textwidth} \includegraphics{plotCdxCdxVenugopalArticlevCdCErrz} \caption{} \label{fig:sub:plotCdxCdxVenugopalArticlevCdCErrz} \end{subfigure}% \caption{Inertia and drag coefficients in the horizontal ($x$) and vertical ($z$) directions obtained with the DD coupling method, the FD coupling method, the VoF-FVM method (waveFoam) and the HPC method for different KC numbers. The $L_2$ error of the Morison fitting to the temporal loads series is also shown in the lower panels. 
Large errors reveal that the Morison model is not well adapted to describe the temporal series of loads.} \label{fig:plotCdxCdxVenugopalArticlevCdC} \end{figure} \paragraph{\textbf{HC from HPC standalone simulations}}~~\\ The HC that model the potential loads (from the FNPF-HPC model), represented in \cref{fig:plotCdxCdxVenugopalArticlevCdC}, are consistent with the potential assumptions: 1) the drag coefficients are of very small amplitude compared to the experimental ones, 2) at small KC, \emph{i.e.}\xspace when the viscous and vorticity effects are expected to be of lower magnitude, the inertia coefficients exhibit good agreement with the experimental ones, and 3) the magnitude of variation of the inertia coefficients with KC is relatively small, indicating that the first harmonic of the load (linear with respect to the wave height, and thus with KC) remains dominant. \paragraph{\textbf{HC from \ofkw{waveFoam} standalone simulations}}~~\\ The HC obtained with the FVM-VoF method alone show good agreement with the experimental results, except for the vertical inertia coefficient (\cref{fig:sub:plotCdxCdxVenugopalArticlevCdCCmz}). This discrepancy is the direct consequence of the difference in vertical load amplitude observed in \cref{sec:temporalloads}, \cref{fig:plottemporalLoadsHPCOFvCdCwF}, and is thought to be caused by an artificial numerical damping of the incoming wave during the propagation phase. Note that, because a phase shift of the wave kinematics was also observed during the propagation, an \emph{ad hoc} correction was applied before computing the HC. This further supports the conclusion that the coupled approaches, as well as HPC, correctly capture the vertical loads despite exhibiting some differences with the \ofkw{waveFoam} simulation.
\paragraph{\textbf{HC from the DD and FD coupled simulations}}~~\\ Despite the fact that the BC are enforced from the FNPF-HPC results, the drag force is correctly recovered with both the DD and FD approaches over the whole studied range of KC. In the same manner, the evolution of the inertia coefficients is recovered: at small KC values, the potential (HPC) inertia coefficients are recovered, while for larger values the curves separate and a good agreement is maintained with the inertia coefficients from \textcite{venugopal_drag_2009}. \subsection{Computational efficiency} In this section, a few CPU costs are compared for the main studied case ($T=2$~s, $H/\lambda = 3.5\%$). Almost all computations were run on a 40-core Intel\textregistered\ Xeon\textregistered\ CPU E5-2650 v3 @ 2.30GHz. We took advantage of parallelism only in the waveFoam standalone computations. \Cref{tab:CPUcosts} shows some of the lowest achieved CPU times needed to obtain the presented results with the 3 RANS methods (waveFoam, DD, FD). The mesh is the same in terms of discretization ($dx=\SI{0.056}{\meter}$ in the body vicinity), but its span, particularly in the stream-wise direction, is different: \SI{24}{\meter} for waveFoam (\emph{i.e.}\xspace the full extent of the NWT) and $B_m=\SI{3}{\meter}$ for the coupled methods. In \cref{sec:mesh_breatdh}, it was shown that a reduction of this CFD breadth below \SI{3}{\meter} is possible with the DD method but, as a reduction of $B_m$ was not applied in the simulations of this section, CPU times for this larger breadth are shown instead, for a fair comparison between models. \begin{table}[htb!]
\newcommand{\mrf}[1]{\multirow{1}{*}{#1}} \centering \scriptsize{ \begin{tabular}{|l|c|c|c|c|} & waveFoam & DD coupling & FD coupling & HPC \\ periodic behavior reached after $\dfrac{t-t_0}{T}=$ & 15 & 6 & 6 & 15\\ number of cells & \SI{138e3}{} & $<$\SI{31e3}{} & $<$\SI{31e3}{} & \SI{14e3}{}\\ CPU time to reach periodic behavior & $\sim$\ti{24}{04} & \ti{3}{11} & \ti{3}{00} & $\sim$\ti{1}{25} \\ CPU time for one additional wave period & \ti{1}{46} & \ti{0}{36} & \ti{0}{30} & $\sim$\ti{0}{05}\\ \end{tabular} } \caption{CPU costs (the VoF models use the same spatial discretization in the body vicinity, $dx=\SI{0.056}{\meter}$) using one computational core. Note that those CPU time values might not be perfectly accurate (see text for details). } \label{tab:CPUcosts} \end{table} The presented CPU values for waveFoam are estimated (see ``$\sim$'') by comparing a parallel run on 4 cores and a sequential one over 1 wave period, and thus might not be perfectly accurate. Moreover, these values also proved to be prone to differences even when the same computation is performed (for example, the frequency of writing results to files can be a source of variation), and only their orders of magnitude can be trusted, say to within $\pm10\%$. Lastly, no investigation of the requested tolerances was conducted for \ofkw{waveFoam} or the DD method, and even the one concerning the FD approach could have been pushed further (see \cref{sec:sensitivityResultTolerance}). Nevertheless, \cref{tab:CPUcosts} shows that a significant gain is achieved with the coupling approaches: the computational time per wave period is divided by $\sim 3$ when compared with \ofkw{waveFoam}. If one is interested in the periodic behavior, the required CPU time is divided by $\sim 8$. The coupling methods thus provide valuable improvements over the potential model for a relatively contained CPU cost.
Moreover, performing mesh/time-step sensitivity studies with these coupled methods would be greatly facilitated by the separation of the wave propagation and body-vicinity physics, as well as by the reduced CPU cost. \section{Conclusions and outlook} \label{sec:conclusion} Two coupling strategies have been developed and validated within the OpenFOAM\textregistered{}\xspace framework, namely a domain decomposition (DD) and a functional decomposition (FD) approach, using an FNPF model as a driver for the large-scale flow. One specificity of the present work is that the potential flow solution, obtained with an accurate and efficient Harmonic Polynomial Cell (HPC) method, is computed in the presence of the body. Both coupling strategies are currently one-way schemes, meaning that information is only exchanged from the potential model to the CFD one. While the unidirectional nature of the coupling restricts its use to cases where vorticity, viscous and turbulent effects are confined to an area of limited extent around the body, purely potential diffraction effects are taken into account at the early stage of the solution with the potential flow solver. The approaches were applied to a 2D wave-structure interaction case representative of ocean engineering applications, which was shown to fulfill the above-mentioned assumptions: a rectangular-shaped fixed cylinder immersed below the MSL in regular nonlinear waves. Applied loads, as well as local field descriptions such as flow vorticity and turbulent viscosity, were found to be correctly improved from the potential prediction toward the results obtained \emph{via} an independent standalone CFD simulation. Furthermore, the coupled simulations showed a strong reduction of the CPU burden.
For example, the flow vorticity field in turbulent conditions is well recovered in the body vicinity, even though the total domain span is divided by more than 8 when compared to the requirements of a standalone OpenFOAM\textregistered{}\xspace computation. The effect of varying the incident wave height was then studied by covering a range of values of the KC number; the hydrodynamic coefficients (HC) of the considered body, used to model the obtained loads in a synthetic manner, were computed and compared with HC derived from experimental time series of measured loads. While a relatively accurate capture of the (horizontal and vertical) inertia coefficients can be obtained from the potential model for small KC, the drag coefficients cannot be recovered with such a method. In the same manner, larger KC numbers trigger effects lying outside of the potential assumptions, significantly affecting the load values. The coupled approaches, making use of the potential wave propagation, yield a significant improvement: all inertia and drag HC are correctly estimated for the complete range of KC values considered here. Moreover, the potential model is shown to accurately capture the wave propagation phase, while setting up an FVM-VoF approach able to accurately simulate both the propagation and the small-scale effects near the body's walls proved to be challenging. Finally, a great improvement in terms of CPU cost is achieved with both coupling schemes. Note that these conclusions are drawn from a particular configuration, for which the assumption of one-way coupling seems to hold: the feedback of viscous and turbulent effects on the large-scale flow is limited. Let us recall that, on the contrary, no assumption on the smallness of purely potential diffraction effects needs to be made, as the potential solution is computed taking into account the presence of the body.
Overall, it is thought that using such methods at a local scale around the body (or around each individual body in a multi-body case), as a further step after a larger-scale potential flow computation, is of great interest. In addition to the significant reduction of the total CPU time, some of the other benefits are: (i) a first estimation of the loads on the body is already available after the potential flow simulation, (ii) these approaches make it possible to identify and quantify the non-potential effects on both the flow field and the loads on the body, (iii) the quality and accuracy of the propagated wave field are high, without artificial numerical damping effects, which may become significant for CFD simulations over long distances, (iv) all steps of work dealing with the calibration of the CFD model and sensitivity studies of various parameters are much easier and faster, as the CFD computational domain is drastically reduced, (v) a larger number of wave conditions can be simulated or explored for a given computational burden. Several continuations of this work can be envisioned, most of them being common to the DD and FD coupling approaches. The FNPF-HPC model is currently developed in 2D. However, respecting the OpenFOAM\textregistered{}\xspace philosophy, none of the presented implementations uses a 2D hypothesis. Thus, provided a 3D external model, the extension of the current coupling methods to 3D cases should be straightforward. While not shown here, first tests of the coupling methods have been conducted with a free-surface-piercing body. However, due to the one-way nature of the couplings, the current implementations proved to be prone to instabilities in some cases. More sophisticated matching strategies could be set up to overcome this difficulty. For example, \textcite{zhang_multi-model_2018} applied relaxation techniques close to the outer boundaries of the viscous domain to allow for a smoother enforcement.
More generally, this issue could certainly be better resolved by extending the schemes to a two-way coupling, in which the turbulent flow also acts on the computation of the potential flow itself. Such a method would also ease the matching with the potential free surface, probably removing the need to invoke any smoothing.
\section*{Acknowledgements} \label{sec:acknowledgements} We wish to thank Tancredi Carli, Brian Cole, Peter Jacobs, Christian Klein-B\"osing, Michelangelo Mangano, Mateusz Ploskon, Sevil Salur, Peter Steinberg and Urs Wiedemann, for helpful conversations, comments and additional information. This work was supported in part by grants ANR-09-BLAN-0060 and PITN-GA-2010-264564.
\section{Introduction} Recently, there has been a growing interest in the adiabatic theorem \cite{Messiah} in the context of quantum information, in particular for fault tolerant holonomic quantum computation \cite{Zanardi}, and for the design of quantum adiabatic algorithms \cite{FarhiGold,Farhi}. In this paper, we put forward a type of adiabatic approximation with focus on how an ideal (unitary) adiabatic evolution governed by a time-dependent Hamiltonian $H$ is perturbed by open-system effects. We consider the effects of a disturbance $D_{t/T}$ on a desired ideal evolution $\dot{\varrho} = -i[H(t/T),\varrho]$, as a master equation $\dot{\varrho} = L_{t/T}\varrho = -i[H(t/T),\varrho] + \Gamma D_{t/T} ( \varrho )$, where $\Gamma$ gives the ``strength'' of the disturbance, and $T$ is the run-time. In the ideal case $\Gamma=0$, the adiabatic approximation decouples the evolution of the instantaneous eigenspaces of $H$. In the present approximation, the eigenspace structure of $H(t/T)$ still plays a role in that it determines what should decouple in the adiabatic limit. It turns out that this in general limits the applicability to systems that are weakly open, i.e., to small $\Gamma$. The present study generalizes Refs.~\cite{rompinto,pintorom}, as it allows degeneracy of the Hamiltonian, which is an essential feature to obtain non-Abelian holonomy effects in general \cite{Wilzee} and holonomic quantum computation \cite{Zanardi} in particular. The concept of adiabaticity in open systems has been addressed recently by Sarandy and Lidar in Refs.~\cite{sarlidar,sali} for master equations of the above type. In their approach, the adiabatic approximation is characterized by a decoupling in terms of instantaneous Jordan blocks of the superoperator $L_{t/T}$. In other words, the decoupling is determined, in Ref.~\cite{sarlidar}, by the total superoperator $L_{t/T}$, while this is determined by $H(t/T)$ in the present approach. 
The approach in Ref.~\cite{sarlidar} may be difficult to use in certain applications. One example is the analysis of holonomic quantum computation in the presence of open system effects. First of all it should be noted that, although the Jordan decomposition always exists, it could be challenging to determine it in practice for more than a limited class of disturbances and systems. Furthermore, for the approximation in Ref.~\cite{sarlidar} a new Jordan decomposition has to be calculated for each choice of disturbance of the ideal gate. In the present approach, where the eigenspaces of the Hamiltonian are primary, the approximate equation can be obtained irrespective of the form of the disturbance $\Gamma D_{t/T}( \varrho )$. This is due to the fact that the spectral decomposition of $H(t/T)$ is in general known for holonomic implementations of quantum gates. For other applications, however, the preferred method of approximation has to be decided from the specific problem at hand. Another generalization of adiabaticity to open systems has been considered in \cite{Aab}, but for a specific type of open systems in the context of quantum adiabatic search. The structure of the paper is as follows. The approximation scheme is stated in the next section. In Sec.~\ref{sec:weak} we show that the approximation can be obtained as an adiabatic weak open-system limit of master equations. Section \ref{sec:compo} demonstrates that the approximation leads to a completely positive evolution under wide conditions. In Sec.~\ref{sec:appl} we apply the present approximation scheme to a decoherence model of a non-Abelian implementation of the Hadamard gate. Moreover, we compare with numerical solutions of the exact equation. The range of applicability of the approximation is discussed in Sec. \ref{sec:range}. The paper ends with the conclusions. 
\section{\label{sec:approx}The approximation} We consider master equations of the following type \begin{equation} \label{start} \frac{d}{dt}\varrho(t) = -i[H(t/T),\varrho(t)] + \Gamma D_{t/T}\bm{(}\varrho(t)\bm{)}, \end{equation} where $H(t/T)$ is a family of Hermitian operators, $D_{t/T}$ is a superoperator, $T$ is the run-time of the evolution, and $\Gamma$ is a strength parameter of the open-system effect. With the change of variables $s = t/T$ one obtains \begin{equation} \label{basekv} \frac{d}{ds}\rho(s) = -iT[H(s),\rho(s)] + \Gamma T D_{s}\bm{(}\rho(s)\bm{)}, \end{equation} where $\rho(s) = \varrho(sT)$. The superoperator $D_{s}$ is assumed to be linear. In addition to purely technical assumptions on $D_{s}$, such as sufficient smoothness with respect to $s$, we assume that the solution $\varrho(t)$ does not grow without bound with respect to some operator norm, as $t$ grows. This is necessary if $\varrho(t)$ is to be a density operator, and is achieved under suitable conditions if $D_{s}$ is of Lindblad form. We assume that the dimension of each eigenspace of $H(s)$ is fixed, so that we may write \begin{equation} H(s) = \sum_{k=1}^{K} E_{k}(s)P_{k}(s). \end{equation} Furthermore, for each $s$ we assume $E_{k}(s) \neq E_{l}(s)$ for all $k,l$ such that $k\neq l$, and $P_{k}(s)$ are projection operators such that $P_{k}(s)P_{l}(s) = \delta_{kl}P_{k}(s)$ and $\sum_{k}P_{k}(s) = \hat{1}$. 
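As a concrete illustration of this setting, Eq.~(\ref{basekv}) can be integrated numerically. The following sketch uses a hypothetical two-level example: a slowly rotating Hamiltonian and a fixed dephasing disturbance of Lindblad form; the operators, $T$, $\Gamma$, and the step count are illustrative choices, not taken from the text.

```python
import numpy as np

# Toy instance of d rho/ds = -iT [H(s), rho] + Gamma*T*D_s(rho), Eq. (basekv).
# H(s) is a slowly rotating field; D is dephasing in the fixed z-basis (L = sz).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(s):
    theta = 0.3 * np.pi * s
    return np.cos(theta) * sz + np.sin(theta) * sx

def D(rho):
    # Lindblad dephasing: sz rho sz - rho (traceless, Hermiticity preserving)
    return sz @ rho @ sz - rho

def evolve(rho0, T, Gamma, steps=4000):
    rho, ds = rho0.copy(), 1.0 / steps
    def rhs(r, s):
        return -1j * T * (H(s) @ r - r @ H(s)) + Gamma * T * D(r)
    for n in range(steps):
        s = n * ds
        # classical RK4 step in the rescaled time s
        k1 = rhs(rho, s)
        k2 = rhs(rho + 0.5 * ds * k1, s + 0.5 * ds)
        k3 = rhs(rho + 0.5 * ds * k2, s + 0.5 * ds)
        k4 = rhs(rho + ds * k3, s + ds)
        rho = rho + (ds / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

rho0 = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)
rho1 = evolve(rho0, T=20.0, Gamma=0.02)
```

The right-hand side is traceless and preserves Hermiticity, so the numerical solution remains a valid density operator up to integration error.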
Under conditions that are elucidated in Sec.~\ref{sec:weak}, the adiabatic approximation of Eq.~(\ref{basekv}) takes the form \begin{eqnarray} \label{totalaapp} \dot{\rho} & = &-i[TH(s)+Q(s),\rho] \\ & & +\Gamma T\sum_{klk'l'}g_{klk'l'}P_{k}(s)D_{s}\bm{(}P_{k'}(s)\rho P_{l'}(s)\bm{)}P_{l}(s),\nonumber \end{eqnarray} where \begin{equation} Q(s) = i\sum_{k}\dot{P}_{k}(s)P_{k}(s) \end{equation} is Hermitian (see Eq.~(\ref{sumPdotP})) and $g_{klk'l'}$ are $0$ or $1$ depending on the pairwise eigenvalue differences \begin{eqnarray} \Delta_{kk'}(s) = E_{k}(s)-E_{k'}(s), \end{eqnarray} as is described in Sec.~\ref{sec:offd}. In the case of closed evolution $\Gamma = 0$, we retain the standard adiabatic approximation \cite{remark1}. An alternative form of Eq.~(\ref{totalaapp}) may be obtained by making the change of variables \begin{equation} \label{change} \widetilde{\rho}(s) = U(s)\rho(s)U^{\dagger}(s) , \end{equation} where $U(s)$ is any sufficiently smooth family of unitary operators such that \begin{equation} \label{Pvillkor} U(s)P_{k}(s)U^{\dagger}(s) = P_{k}(0), \quad \forall k . \end{equation} In terms of $\widetilde{\rho}(s)$, Eq.~(\ref{basekv}) takes the form \begin{equation} \label{huvudekv} \dot{\widetilde{\rho}} = -iT[\widetilde{H}(s),\widetilde{\rho}(s)] -i[Z(s),\widetilde{\rho}(s)] + \Gamma T \widetilde{D}_{s}(\widetilde{\rho}), \end{equation} where \begin{eqnarray} \label{wHZdef} \widetilde{H}(s) & = & U(s)H(s)U^{\dagger}(s) = \sum_{k}E_{k}(s)P_{k}(0),\nonumber\\ Z(s) & = & i\dot{U}(s)U^{\dagger}(s),\nonumber\\ \widetilde{D}_{s}(\widetilde{\rho}) & = & U(s)D_{s}\bm{(}U^{\dagger}(s)\widetilde{\rho}(s)U(s)\bm{)}U^{\dagger}(s), \end{eqnarray} and we have used that $U(s) \dot{U}^{\dagger}(s) = - \dot{U}(s) U^{\dagger}(s)$. We decompose the density operator as $\widetilde{\rho} = \sum_{kl}\widetilde{\rho}^{(kl)}$, where $\widetilde{\rho}^{(kl)} = P_{k}(0)\widetilde{\rho}P_{l}(0)$. 
We refer to $\widetilde{\rho}^{(ll)}$ as the ``diagonal'' terms, while we refer to $\widetilde{\rho}^{(kl)}$, with $k\neq l$, as the ``off-diagonal'' terms. The approximate Eq.~(\ref{totalaapp}) can be written as \begin{eqnarray} \label{nlan} \frac{d}{ds}\widetilde{\rho}^{(kl)} & = & - iT\Delta_{kl}(s)\widetilde{\rho}^{(kl)}(s) \nonumber\\ & &-iZ_{k}(s)\widetilde{\rho}^{(kl)}(s)+ i\widetilde{\rho}^{(kl)}(s)Z_{l}(s) \nonumber\\ & & +\Gamma T \sum_{k'l'}g_{klk'l'}P_{k}(0) \widetilde{D}_{s}(\widetilde{\rho}^{(k'l')})P_{l}(0), \end{eqnarray} where $Z_{l}(s) = P_{l}(0)Z(s)P_{l}(0)$. The properties of $g_{klk'l'}$ imply that the diagonal terms $\widetilde{\rho}^{(ll)}$ always evolve according to the following equation \begin{eqnarray} \label{diagonal} \frac{d}{ds}\widetilde{\rho}^{(ll)} & = & -i[Z_{l}(s),\widetilde{\rho}^{(ll)}(s)] \nonumber \\ & & + \Gamma T\sum_{k}P_{l}(0) \widetilde{D}_{s}(\widetilde{\rho}^{(kk)})P_{l}(0). \end{eqnarray} The first term on the right-hand side of Eq.~(\ref{diagonal}) yields the non-Abelian holonomy \cite{Wilzee} of the standard adiabatic approximation, while the second term introduces a coupling between the diagonal terms of the density operator. Equation (\ref{diagonal}) implies that for the approximate evolution the diagonal terms always evolve independently of the off-diagonal terms. In the simplest case where $g_{klk'l'} = \delta_{kk'}\delta_{ll'}$, for $k\neq l$, the off-diagonal terms evolve independently of each other and of the diagonal terms. We note that if $\widetilde{\rho}^{(kl)} (s)$ are the solutions of Eq.~(\ref{nlan}), then $U^{\dagger}(s)\widetilde{\rho}^{(kl)}(s)U(s) = P_{k}(s)\rho(s)P_{l}(s)$, where $\rho(s)$ is the solution of Eq.~(\ref{totalaapp}). This follows from the fact that Eq.~(\ref{nlan}) is equivalent to Eq.~(\ref{totalaapp}), as is demonstrated in Sec.~\ref{sec:equiv}. 
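The Hermiticity of $Q(s)$, used above, rests on the identity $\sum_{k}\dot{P}_{k}(s)P_{k}(s) = -\sum_{k}P_{k}(s)\dot{P}_{k}(s)$ of Eq.~(\ref{sumPdotP}). A minimal numerical check, using hypothetical rotating rank-one projectors of a two-level system and central-difference derivatives:

```python
import numpy as np

# Check of sum_k Pdot_k P_k = -sum_k P_k Pdot_k and of Q = Q^dagger for
# Q = i sum_k Pdot_k P_k; the rotating projectors below are illustrative.
def projectors(s):
    theta = 0.4 * np.pi * s
    v1 = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
    v2 = np.array([-np.sin(theta), np.cos(theta)], dtype=complex)
    return [np.outer(v1, v1.conj()), np.outer(v2, v2.conj())]

s0, eps = 0.3, 1e-6
P = projectors(s0)
# central-difference derivatives of the projectors at s0
Pdot = [(a - b) / (2 * eps)
        for a, b in zip(projectors(s0 + eps), projectors(s0 - eps))]
lhs = sum(pd @ p for p, pd in zip(P, Pdot))
rhs = -sum(p @ pd for p, pd in zip(P, Pdot))
Q = 1j * lhs
```

Both the identity and the Hermiticity of $Q$ hold to the accuracy of the finite-difference derivative.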
\section{\label{sec:weak}The approximation as an adiabatic weak open-system limit} Here, we put forward one possible way to justify the above approximation scheme. First, we note that Eq.~(\ref{huvudekv}) may be written as \begin{equation} \label{sftjf} \frac{d}{ds}\widetilde{\rho}(s) = L^{(1)}_{s}\widetilde{\rho}(s) + L^{(2)}_{s}\widetilde{\rho}(s), \end{equation} where \begin{eqnarray} \label{L1L2} L_{s}^{(1)} & = & -iT[\widetilde{H}(s),\cdot] \nonumber,\\ L_{s}^{(2)} & = & -i[Z(s),\cdot] + \Gamma T \widetilde{D}_{s} . \end{eqnarray} Since $[\widetilde{H}(s),\widetilde{H}(s')]=0$, it follows that $[L_{s}^{(1)},L_{s'}^{(1)}] = 0$. This implies that Eq.~(\ref{sftjf}) can be rewritten as the following integral equation \begin{eqnarray} \label{gene} e^{\Lambda(s)}\overline{\rho}(s) & = & e^{\Lambda(s)}\overline{\rho}(0) \nonumber \\ & & + e^{\Lambda(s)}\int_{0}^{s} e^{-\Lambda(s')} L_{s'}^{(2)} \left( e^{\Lambda(s')}\overline{\rho}(s') \right)ds' , \nonumber\\ \Lambda(s) & = & \int_{0}^{s}L_{s''}^{(1)}ds'' , \end{eqnarray} where we have made the change of variables \begin{equation} \label{basbyte} \widetilde{\rho}(s) = e^{\Lambda(s)}\overline{\rho}(s) . \end{equation} The superoperator $L_{s}^{(1)}$ is anti-Hermitian with respect to the Hilbert-Schmidt inner product $(A,B) = {\textrm{Tr}}(A^{\dagger}B)$. Thus, $\exp[\Lambda(s)]$ is unitary, and we can rewrite Eq.~(\ref{gene}) as \begin{eqnarray} \label{erhn} \overline{\rho}(s) & = & \overline{\rho}(0) \nonumber \\ & &+\int_{0}^{s}e^{-\Lambda(s')}L_{s'}^{(2)} \left( e^{\Lambda(s')} \overline{\rho}(s') \right)ds' . \end{eqnarray} Note that $\sigma(s) = \exp[\Lambda(s)]\sigma(0)$ is the solution of the equation $\dot{\sigma} = -iT[\widetilde{H}(s),\sigma]$. Since $\widetilde{H}(s)$ possesses a time-independent eigenbasis it follows that the corresponding evolution operator can be written as \begin{eqnarray} V(s) & = & \sum_{k}\exp[-iTI_{k}(s)]P_{k}(0), \nonumber \\ I_{k}(s) & = & \int_{0}^{s}E_{k}(s')ds'. 
\end{eqnarray} Thus, \begin{eqnarray} e^{\Lambda(s)}\sigma & = & V(s)\sigma V^{\dagger}(s) \nonumber \\ & = & \sum_{kl}e^{-iT\{I_{k}(s)-I_{l}(s)\}}P_{k}(0)\sigma P_{l}(0), \end{eqnarray} for every linear operator $\sigma$. We obtain \begin{eqnarray} \label{lautv} \overline{\rho}(s) & = & e^{-\Lambda(s)}\widetilde{\rho}(s) \nonumber \\ & = & \sum_{kl}e^{iT\{I_{k}(s)-I_{l}(s)\}} P_{k}(0) \widetilde{\rho}(s)P_{l}(0). \end{eqnarray} If Eq.~(\ref{lautv}) is combined with Eq.~(\ref{erhn}) the result is \begin{eqnarray} \label{ekvationen} \overline{\rho}(s) &= & \overline{\rho}(0)\\ & &+\sum_{klk'l'}\int_{0}^{s}e^{iTI_{klk'l'}(s')}\nonumber \\ & & \quad \times P_{k}(0) L_{s'}^{(2)}\bm{(}P_{k'}(0)\overline{\rho}(s')P_{l'}(0)\bm{)} P_{l}(0)ds', \nonumber \end{eqnarray} where \begin{equation} I_{klk'l'}(s) = I_{k}(s)-I_{l}(s) -I_{k'}(s)+ I_{l'}(s). \end{equation} Inserting Eq.~(\ref{L1L2}) into Eq.~(\ref{ekvationen}) yields \begin{eqnarray} \label{ekvationensl} \overline{\rho}(s) & = & \overline{\rho}(0)\\ & & -i\sum_{kk'}\int_{0}^{s}e^{iTI_{kk'}(s')}[P_{k}(0) Z(s')P_{k'}(0),\overline{\rho}(s')]ds'\nonumber\\ & &+\Gamma T\sum_{klk'l'}\int_{0}^{s}e^{iTI_{klk'l'}(s')}\nonumber\\ & &\quad\times P_{k}(0) D_{s'}\bm{(}P_{k'}(0)\overline{\rho}(s')P_{l'}(0)\bm{)} P_{l}(0)ds',\nonumber \end{eqnarray} where we have introduced \begin{equation} I_{kl}(s) = I_{k}(s)-I_{l}(s). \end{equation} \subsection{\label{sec:diag}The diagonal terms} The diagonal terms of Eq.~(\ref{ekvationensl}) read \begin{eqnarray} \label{aoirae} \overline{\rho}^{(ll)}(s) & = & P_l (0) \overline{\rho} (s) P_l (0) \nonumber \\ & = & \overline{\rho}^{(ll)}(0) -i\int_{0}^{s}[Z_{l}(s'),\overline{\rho}^{(ll)}(s')]ds' \nonumber\\ & & + \Gamma T\sum_{k}\int_{0}^{s} P_{l}(0) D_{s'}\bm{(}\overline{\rho}^{(kk)}(s')\bm{)} P_{l}(0)ds' \nonumber\\ & & + X_{d}(s) . 
\end{eqnarray} Here, \begin{eqnarray} \label{Fdef} &&X_{d}(s)\nonumber\\ & &=\sum_{k:k\neq l}\int_{0}^{s}e^{iTI_{lk}(s')}P_{l}(0)Z(s') P_{k}(0)\overline{\rho}^{(kl)}(s')ds' \nonumber\\ & & -\sum_{k:k\neq l} \int_{0}^{s} e^{iTI_{kl}(s')} \overline{\rho}^{(lk)} (s') P_{k}(0) Z(s') P_{l}(0) ds' \\ & & +\Gamma T\!\!\!\!\sum_{k'l':k'\neq l'} \int_{0}^{s} e^{-iTI_{k'l'}(s')}P_{l}(0) D_{s'} \bm{(} \overline{\rho}^{(k'l')}(s') \bm{)} P_{l}(0) ds' , \nonumber \end{eqnarray} where we have used that $I_{llk'l'}(s) = -I_{k'l'}(s)$. We now show that the operator $X_{d}(s)$ vanishes in suitable limits of $T$ and $\Gamma$. First, we cite Lemma 7.2.17 from \cite{complex}. \begin{Lemma} \label{lemmaett} Suppose the function $h(s)$ is real valued, has a continuous second derivative on the closed bounded interval $[0,1]$, and $\frac{d}{ds}h(s)\neq 0$ for all $s\in[0,1]$. Let the function $f(s)$ have a continuous derivative on $[0,1]$. Then, for sufficiently large $T$, there exists a constant $C$ such that \begin{equation} \label{asdfn} \left|\int_0^1 e^{iTh(s)}f(s)ds\right| \leq CT^{-1}. \end{equation} \end{Lemma} Since the integrands in Eq.~(\ref{Fdef}) all take the form $\exp[iTh(s)]F(s)$ it may be tempting to use Lemma \ref{lemmaett}, or similar results like the Riemann-Lebesgue Lemma \cite{RimLeb}, directly on these integrals. However, since $F$ depends on the solution $\overline{\rho}$, one should keep in mind that the solution $\rho$ (and hence $\overline{\rho}$) depends on $T$, and may contain fluctuations growing with $T$, which potentially may cancel the averaging effect of the phase factors $\exp[iTh(s)]$. This makes a direct use of Lemma \ref{lemmaett} dangerous when applied to terms containing $\overline{\rho}$. In other words, we cannot allow the function $f$ in Eq.~(\ref{asdfn}) to depend on $T$, either directly or indirectly. It is, however, quite straightforward to avoid this problem in the present case. 
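Lemma \ref{lemmaett} can be illustrated numerically: for a phase with $\frac{d}{ds}h(s)$ bounded away from zero and an amplitude $f$ independent of $T$, the quantity $T\,|\int_{0}^{1}e^{iTh(s)}f(s)ds|$ stays bounded as $T$ grows. The particular $h$ and $f$ below are arbitrary smooth choices satisfying the hypotheses of the lemma.

```python
import numpy as np

# Oscillatory integral with h(s) = s + s^2/2 (so h'(s) = 1 + s >= 1, never zero)
# and smooth amplitude f(s) = 1/(1 + s); evaluated by a composite trapezoid rule.
def osc_integral(T, n=200001):
    s, ds = np.linspace(0.0, 1.0, n, retstep=True)
    y = np.exp(1j * T * (s + 0.5 * s**2)) / (1.0 + s)
    return np.sum((y[:-1] + y[1:]) / 2) * ds

# T * |I(T)| should remain bounded, consistent with the C T^{-1} estimate
bounds = [T * abs(osc_integral(T)) for T in (50.0, 100.0, 200.0, 400.0)]
```

Here $T\,|I(T)|$ remains of order one while $|I(T)|$ itself decays, consistent with the $CT^{-1}$ bound.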
Let $\{ |n\rangle \}_n$ be some fixed orthonormal basis, independent of $s$, $T$, and $\Gamma$. With respect to this basis the first integral in Eq.~(\ref{Fdef}) can be written as \begin{equation} \sum_{mnp}|m\rangle\langle p|\int_{0}^{s} e^{iTI_{lk}(s')}\bm{(}P_{l}(0)Z(s')P_{k}(0)\bm{)}_{mn} \overline{\rho}^{(kl)}_{np}(s')ds' , \end{equation} which is a sum containing integrals of the form \begin{equation} \label{lkkbbms} \int_{0}^{s}e^{iTh(s')}f(s')K\bm{(}\overline{\rho}^{(kl)}(s')\bm{)}ds'. \end{equation} Here, $h(s) = \pm[I_{k}(s)-I_{l}(s)]$ for some $k,l$ and $K$ denotes a linear map taking an operator to one of its matrix elements, i.e., $K(\cdot) = \langle n|\cdot|m\rangle$ for some $n,m$. Note that the function $f$ only depends on $s$, not on $\overline{\rho}$ or $T$. Similarly, the second term on the right-hand side of Eq.~(\ref{Fdef}) can be written as a sum involving integrals of the form (\ref{lkkbbms}). Now, by integrating Eq.~(\ref{lkkbbms}) by parts one obtains \begin{eqnarray} \label{hbszkj} R_{Z}(s) & = & K\bm{(}\overline{\rho}^{(kl)}(s)\bm{)}\int_{0}^{s}e^{iTh(s')}f(s')ds'\\ & & - \int_{0}^{s}\!\!K\!\!\left(\frac{d}{ds}\overline{\rho}^{(kl)}(s')\right) \int_{0}^{s'}\!e^{iTh(s'')}f(s'')ds'' ds'.\nonumber \end{eqnarray} By differentiation of Eq.~(\ref{ekvationensl}), and by use of the standard operator norm $||\sigma|| = \sup_{||\psi||=1}||\sigma|\psi\rangle||$, one finds \begin{eqnarray} \left|\left|\frac{d}{ds}{\overline{\rho}}^{(kl)}(s')\right|\right| &\leq & A^{(d)}_{1} + B^{(d)}_{1}\Gamma T, \end{eqnarray} for some constants $A^{(d)}_{1}$ and $B^{(d)}_{1}$, where the index $d$ signifies the diagonal terms. 
From Eq.~(\ref{hbszkj}) it follows that \begin{equation} \label{kvvdk} |R_{Z}(s)| \leq (1+ A^{(d)}_{1}+ B^{(d)}_{1}\Gamma T)\bigg|\int_{0}^{s}e^{iTh(s')}f(s')ds'\bigg|, \end{equation} where we have used that $|K(\overline{\rho}^{(kl)})|\leq ||\overline{\rho}^{(kl)}||\leq 1$, as a consequence of the fact that $\overline{\rho}(s)$ is a density operator. We note that $\frac{d}{ds} h(s) = E_{k}(s)-E_{l}(s)$ for all $s\in[0,1]$, which is nonzero by assumption if $k\neq l$. Furthermore, we assume that the family of Hermitian operators $H(s)$ has a Hermitian, continuous first derivative, which implies that the eigenvalues $E_{k}(s)$ can be ordered, for each $s$, in such a way that they have a continuous first derivative (see Ref.~\cite{Rell}, pp.~44-45). Thus, the second derivative of $h(s)$ is continuous if $H(s)$ has a continuous first derivative. Moreover, the function $f$ does not depend on $T$, and has a continuous first derivative if $Z(s)$ and $D_{s}$ have. We may thus apply Lemma \ref{lemmaett} to the right-hand side of Eq.~(\ref{kvvdk}), from which it follows that there exists some constant $C^{(d)}_{1}$ such that \begin{equation} |R_{Z}(s)|\leq C_{1}^{(d)}(1+ A_{1}^{(d)})T^{-1} + C_{1}^{(d)}B_{1}^{(d)}\Gamma. \end{equation} The third integral in Eq.~(\ref{Fdef}) may be treated in the same way, but including the extra factor $\Gamma T$, which results in terms $R_{D}(s)$ bounded as \begin{equation} |R_{D}(s)|\leq C^{(d)}_{2}(1+ A^{(d)}_{2})\Gamma + C_{2}^{(d)}B_{2}^{(d)}\Gamma^{2}T. \end{equation} In total, we find that the norm (or, alternatively, the elements in some matrix representation) of $X_{d}(s)$ is bounded as \begin{equation} ||X_{d}(s)|| \leq A_{3}^{(d)}T^{-1} + B_{3}^{(d)}\Gamma + C_{3}^{(d)}\Gamma^{2} T, \end{equation} for some constants $A_{3}^{(d)}$, $B_{3}^{(d)}$, and $C_{3}^{(d)}$. 
Next, we prove that the diagonal terms of the solution of the exact Eq.~(\ref{ekvationensl}) converge to the solution of the approximate equation for the diagonal terms, under certain conditions. The set of operators $\sigma$ such that $\sum_{k}P_{k}(0)\sigma P_{k}(0) = \sigma$ forms a linear subspace $\mathcal{L}$ of the space of all linear operators on $\mathcal{H}$. Define \begin{eqnarray*} f_{d}(s,\sigma) & = & -i\sum_{l}P_{l}(0)[Z_{l}(s),P_{l}(0)\sigma P_{l}(0)]P_{l}(0)\\ & & + \Gamma T\sum_{kl} P_{l}(0)D_{s}\bm{(}P_{k}(0)\sigma P_{k}(0)\bm{)}P_{l}(0). \end{eqnarray*} For $\sigma,\sigma'\in\mathcal{L}$ and $s\in [0,1]$, we have \begin{eqnarray} \label{dialip} & &||f_{d}(s,\sigma)-f_{d}(s,\sigma')|| \nonumber \\ & &\leq \sum_{l}||P_{l}(0)[Z_{l}(s),P_{l}(0)(\sigma-\sigma') P_{l}(0)]P_{l}(0)|| \nonumber \\ & & \quad + \Gamma T \sum_{kl}||P_{l}(0)D_{s} \! \bm{(}P_{k}(0)(\sigma-\sigma')P_{k}(0)\bm{)}P_{l}(0)|| \nonumber\\ & &\leq (F^{(d)} + G^{(d)}\Gamma T)||\sigma-\sigma'||, \end{eqnarray} for some constants $F^{(d)}$ and $G^{(d)}$. In the last inequality we have used that $Z_{l}(s)$ and $D_{s}$ are continuous functions of $s$ and that there exist maxima of $||Z_{l}(s)||$ and $|||D_{s}||| = \sup_{||\sigma||=1}||D_{s}(\sigma)||$, the latter following from $[0,1]$ being a compact set. Note that the constants $F^{(d)}$ and $G^{(d)}$ can be chosen independently of $\Gamma$ and $T$. Equation (\ref{dialip}) means that $F^{(d)}+G^{(d)}\Gamma T$ is a Lipschitz constant for $f_d$ on the set $[0,1]\times\mathcal{L}$. Suppose that $\overline{\rho}^{a}_{d}(s)$ is the solution of the approximate equation for the diagonal terms, i.e., Eq.~(\ref{aoirae}) with $X_{d}(s)\equiv 0$. Moreover, let \begin{equation} \overline{\rho}_{d}(s) = \sum_{l}P_{l}(0)\overline{\rho}(s)P_{l}(0) =\sum_{l}\overline{\rho}^{(ll)}(s), \end{equation} where $\overline{\rho}(s)$ is the exact solution of Eq.~(\ref{ekvationensl}). 
We now intend to prove that $||\overline{\rho}^{a}_{d}(s)-\overline{\rho}_{d}(s)||$ vanishes for all $s$, in a suitable limit. The error $\mathcal{E}$, with respect to the standard operator norm, can be estimated as \begin{eqnarray} \mathcal{E}(s) & = & ||\overline{\rho}^{a}_{d}(s)-\overline{\rho}_{d}(s)||\nonumber\\ & = & \left|\left|\int_{0}^{s}\Big(f_{d}(s',\overline{\rho}^{a}_{d}(s'))- f_{d}(s',\overline{\rho}_{d}(s'))\Big)ds' - X_{d}(s)\right|\right|\nonumber\\ & \leq & ||X_{d}(s)|| + \int_{0}^{s}||f_{d}(s',\overline{\rho}^{a}_{d}(s'))- f_{d}(s',\overline{\rho}_{d}(s'))||ds' \nonumber\\ & \leq & A_{3}^{(d)}T^{-1} + B_{3}^{(d)}\Gamma + C_{3}^{(d)}\Gamma^{2} T\nonumber\\ & & + (F^{(d)} + G^{(d)}\Gamma T) \int_{0}^{s}\mathcal{E}(s')ds'. \end{eqnarray} From the above inequalities one obtains an integral inequality for the error $\mathcal{E}(s)$. This integral inequality can be shown \cite{Amann} to imply the bound \begin{eqnarray} \label{diagsc} & &||\overline{\rho}^{a}_{d}(s)-\overline{\rho}_{d}(s)|| \\ & & \leq ( A_{3}^{(d)}T^{-1} + B_{3}^{(d)}\Gamma + C_{3}^{(d)}\Gamma^{2} T )e^{s(F^{(d)} + G^{(d)}\Gamma T)}.\nonumber \end{eqnarray} One can conclude that a sufficient condition for convergence of the approximate solution to the exact one is to take the simultaneous limits $T\rightarrow \infty$ and $\Gamma\rightarrow 0$ while keeping $\Gamma T$ bounded. \subsection{\label{sec:offd}The off-diagonal terms} The off-diagonal terms contain two types of phase factors, viz., $\exp[iTI_{kl}(s)]$ and $\exp[iTI_{klk'l'}(s)]$. While $\frac{d}{ds} I_{kl}(s) = \Delta_{kl}(s)$ is always nonzero due to the assumption of distinct eigenvalues, the functions $\frac{d}{ds}I_{klk'l'}(s) = \Delta_{kl}(s)-\Delta_{k'l'}(s)$ may vanish at isolated points, or even identically, although $\Delta_{kl}(s)\neq 0$. Thus, the averaging effect leading to the adiabatic decoupling of the exact equation depends upon whether the graphs of the functions $\Delta_{kl}(s)$ avoid each other, cross, or coincide. 
We now study the following two physically reasonable special cases. \begin{itemize} \item[(i)] For each pair $(k,l)$ and $(k',l')$ it holds that $\Delta_{kl}(s) = \Delta_{k'l'}(s),\quad \forall s\in[0,1]$, or $\Delta_{kl}(s)\neq \Delta_{k'l'}(s),\quad \forall s\in[0,1]$. \item[(ii)] For each pair $(k,l)$ and $(k',l')$ it holds that $\Delta_{kl}(s) = \Delta_{k'l'}(s),\quad \forall s\in[0,1]$, or $\Delta_{kl}(s)\neq \Delta_{k'l'}(s)$ for all $s$, except possibly at isolated points. At each such point $\widetilde{s}$ it holds that $\frac{d}{ds} (\Delta_{kl}(\widetilde{s})- \Delta_{k'l'} (\widetilde{s}))\neq 0$. \end{itemize} In the first case the condition says that the functions $\Delta_{kl}(s)$ and $\Delta_{k'l'}(s)$ either coincide at all points, or never cross. In other words, the difference $\Delta_{kl}(s)-\Delta_{k'l'}(s)$ is either zero or nonzero on the whole interval $[0,1]$. In the second case we allow the above mentioned graphs to cross at isolated points. At those points where they do cross we put a restriction on how they cross, in the form of a condition on the first derivative of $\Delta_{kl}-\Delta_{k'l'}$. As the approximation depends on the behavior of the functions $\Delta_{kl}$ we need to keep track of those which coincide systematically. To do this in the above two special cases, we define \begin{equation} \label{gdef} g_{klk'l'} = \left\{ \begin{array}{lcr} 1 & \text{if} & \Delta_{kl}(s) = \Delta_{k'l'}(s),\quad\forall s\in[0,1], \\ 0 & \textrm{else}. & \end{array} \right. \end{equation} Hence, $g_{klk'l'}=1$ only if the two functions $\Delta_{kl}(s)$ and $\Delta_{k'l'}(s)$ coincide systematically. Note that this definition holds for all combinations of $k,l,k',l'$, including those involving diagonal elements. 
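In a numerical treatment, the coefficients $g_{klk'l'}$ can be obtained directly from sampled eigenvalue curves. The following sketch uses illustrative linear curves (zero-indexed) and tests the equality $\Delta_{kl} = \Delta_{k'l'}$ on a grid, up to a numerical tolerance:

```python
import numpy as np

# g_{klk'l'} = 1 iff the gap functions Delta_{kl} and Delta_{k'l'} coincide
# on all of [0,1]; the eigenvalue curves E_0, E_1, E_2 below are illustrative,
# chosen so that Delta_{10} and Delta_{21} coincide systematically.
s = np.linspace(0.0, 1.0, 201)
E = [0.0 * s, 1.0 + 0.2 * s, 2.0 + 0.4 * s]

def g(k, l, kp, lp, tol=1e-12):
    return 1 if np.max(np.abs((E[k] - E[l]) - (E[kp] - E[lp]))) < tol else 0
```

With these curves $\Delta_{10} = \Delta_{21} = 1 + 0.2s$, so $g_{1021} = 1$, and the symmetries $g_{klk'l'} = g_{k'l'kl}$ and $g_{klk'l} = \delta_{kk'}$ can be verified directly.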
It follows that \begin{equation} \label{symm1} g_{klk'l'} = g_{k'l'kl}, \quad g_{klk'l'} = g_{lkl'k'}, \end{equation} and \begin{eqnarray} \label{vilk1} g_{klk'l} = \delta_{kk'}, & & g_{klkl'} = \delta_{ll'}, \nonumber\\ g_{kkk'l'} = \delta_{k'l'}, & & g_{klk'k'} = \delta_{kl} , \end{eqnarray} where the latter conditions hold under the assumption $E_{k}(s)\neq E_{l}(s)$, $k\neq l$. The equation for the off-diagonal term $\overline{\rho}^{(kl)}(s)$ can be written \begin{widetext} \begin{eqnarray} \label{totekv} \overline{\rho}^{(kl)}(s) & = & \overline{\rho}^{(kl)}(0) -i\int_{0}^{s}Z_{k}(s')\overline{\rho}^{(kl)}(s')ds' +i\int_{0}^{s}\overline{\rho}^{(kl)}(s')Z_{l}(s')ds' \nonumber\\ & &+ \Gamma T \sum_{k'l'}g_{klk'l'}\int_{0}^{s}P_{k}(0) D_{s'}\Big(\overline{\rho}^{(k'l')}(s')\Big) P_{l}(0)ds' + X_{o}(s),\\ \label{WTGdef} X_{o}(s) & = & -i\sum_{k':k'\neq k}\int_{0}^{s}e^{iTI_{kk'}(s')}P_{k}(0) Z(s')P_{k'}(0)\overline{\rho}^{(k'l)}(s')ds'\nonumber\\ & & -i\sum_{k':k'\neq l}\int_{0}^{s}e^{iTI_{k'l}(s')}\overline{\rho}^{(kk')}(s')P_{k'}(0) Z(s')P_{l}(0)ds' \nonumber \\ & & +\Gamma T\sum_{k'l':g_{klk'l'} = 0}\int_{0}^{s}e^{iTI_{klk'l'}(s')}P_{k}(0) D_{s'}\Big(P_{k'}(0)\overline{\rho}(s')P_{l'}(0)\Big) P_{l}(0)ds'. \end{eqnarray} \end{widetext} As in the treatment of the diagonal terms, we need a Lipschitz constant. Consider the linear operators $\sigma$ which fulfill $\sum_{k,l:k\neq l}P_{k}(0)\sigma P_{l}(0) = \sigma$. This operator subspace is denoted by $\mathcal{L}^{\perp}$. Define \begin{eqnarray} & & f_{o}(s,\sigma) = \nonumber \\ & & -i \sum_{kl:k\neq l} Z_{k}(s) P_{k}(0) \sigma P_{l}(0) \nonumber \\ & & +i\sum_{kl:k\neq l} P_{k}(0) \sigma P_{l}(0) Z_{l}(s) \\ & & + \Gamma T \sum_{kl:k\neq l}\sum_{k'l'}g_{klk'l'}P_{k}(0) D_{s}\bm{(}P_{k'}(0)\sigma P_{l'}(0)\bm{)} P_{l}(0). 
\nonumber \end{eqnarray} By reasoning similar to that for the diagonal terms we obtain \begin{equation} \label{lipso} ||f_{o}(s,\sigma)-f_{o}(s,\sigma')|| \leq (F^{(o)}+G^{(o)}\Gamma T)||\sigma-\sigma'||, \end{equation} for all $s\in[0,1]$ and $\sigma,\sigma'\in\mathcal{L}^{\perp}$. Here, $F^{(o)}$ and $G^{(o)}$ are constants. We further define \begin{eqnarray} \overline{\rho}_{o}(s) = \sum_{kl:k\neq l}P_{k}(0)\overline{\rho}(s) P_{l}(0) =\sum_{kl:k\neq l}\overline{\rho}^{(kl)}(s), \end{eqnarray} where $\overline{\rho}(s)$ is the solution of Eq.~(\ref{ekvationensl}). Similarly we let \begin{eqnarray} \overline{\rho}_{o}^{a}(s) = \sum_{kl:k\neq l} \overline{\rho}^{(kl)}_{a}(s), \end{eqnarray} where $\overline{\rho}^{(kl)}_{a}(s)$ are the solutions of Eq.~(\ref{totekv}), with $X_{o}(s) \equiv 0$. Clearly, both $\overline{\rho}_{o}(s)$ and $\overline{\rho}_{o}^{a}(s)$ belong to $\mathcal{L}^{\perp}$. \subsubsection{Case (i)} Here, the functions $\Delta_{kl}(s)$ and $\Delta_{k'l'}(s)$ either coincide at all points or never cross. Following the reasoning of the diagonal case, one finds that \begin{eqnarray} ||X_{o}(s)|| \leq A^{(o1)}_{3}T^{-1} + B^{(o1)}_{3} \Gamma + C^{(o1)}_{3} \Gamma^{2} T \end{eqnarray} and we may use the Lipschitz condition in Eq.~(\ref{lipso}) to obtain \begin{eqnarray} & & ||\overline{\rho}_{o}(s)-\overline{\rho}_{o}^{a}(s)||\\ & & \leq (A^{(o1)}_{3}T^{-1} + B^{(o1)}_{3} \Gamma + C^{(o1)}_{3} \Gamma^{2} T)e^{s(F^{(o)}+ G^{(o)}\Gamma T)},\nonumber \end{eqnarray} for all $s\in[0,1]$. Thus, as for the diagonal terms, we find the conditions $T\rightarrow \infty$, $\Gamma\rightarrow 0$, and $\Gamma T$ bounded, for convergence of the approximate solution to the exact one. \subsubsection{Case (ii)} In this case we allow the graphs of the functions $\Delta_{kl}(s)$ to cross, but only at isolated points, and with a nonzero angle. Consider two distinct pairs $(k,l)$ and $(k',l')$. 
It follows that the function $\Delta_{kl}(s) - \Delta_{k'l'}(s)$ has only isolated zeros. Since the interval $[0,1]$ is compact and since $\Delta_{kl}(s) - \Delta_{k'l'}(s)$ is continuous, there can only be a finite number of isolated zeros. We may partition the interval $[0,1]$ into subintervals where each subinterval has at most one zero of $\Delta_{kl}(s) - \Delta_{k'l'}(s)$ in its interior. Due to the zeros of $\Delta_{kl}(s) - \Delta_{k'l'}(s)$, Lemma \ref{lemmaett} is no longer applicable, but we may instead use the stationary phase theorem, which we cite from \cite{complex} (Theorem 7.2.10). Note that we present here a weakened form of the theorem, which precisely covers the aspects we need. \begin{Theorem} \label{stationary} Let $h(s)$ be analytic in a neighborhood of the closed bounded interval $[a,b]$ and be real on $[a,b]$. Let $f(s)$ have a continuous first derivative on $[a,b]$. If $\frac{d}{ds} h(s) = 0$ at exactly one point $s_{0}\in (a,b)$ and if the second derivative of $h$ at $s_{0}$ is nonzero, then for sufficiently large $T$ there exists a constant $D$ such that \begin{equation} \label{stat} \left|\int_a^b e^{iTh(s)}f(s)ds\right|\leq DT^{-1/2}. \end{equation} \end{Theorem} We write $X_{o}(s)$ as a sum of integrals of the form (\ref{lkkbbms}), each of which is decomposed into integrals on subintervals. In the present case, we may reason in the same way as in the steps from Eq.~(\ref{hbszkj}) to Eq.~(\ref{kvvdk}), with the exception that each integral spans only a subinterval. Note that we only need to use Theorem \ref{stationary} on neighborhoods of the points where the functions $\Delta_{kl}$ cross. On the rest of the interval we may use Lemma \ref{lemmaett}. Thus, in order to use Theorem \ref{stationary}, the eigenvalues $E_{k}(s)$ only have to be analytic functions of $s$ in a neighborhood of each point $s_{0}$ where $\Delta_{kl}(s_{0})-\Delta_{k'l'}(s_{0}) = 0$. 
Since the Hilbert space is finite-dimensional and since $H(s)$ is Hermitian, this is the case if $H(s)$ is analytic in a neighborhood of each $s_{0}$ (see Ref.~\cite{Rell}, pp.~33-34, or Ref.~\cite{reed}, Theorem XII.3). We also require that $Z(s)$ and $D_{s}$ have continuous first derivatives in $s$. The value of the integral \begin{equation} \left|\int_{a}^{b}e^{iTh(s)}f(s)ds\right| \end{equation} is $O(T^{-1})$ if the subinterval $[a,b]$ does not contain a zero of $\Delta_{kl}(s) - \Delta_{k'l'}(s)$, and $O(T^{-1/2})$ if it does. When summing up the contributions from the subintervals, it follows that the value of the integral in Eq.~(\ref{kvvdk}) can, for sufficiently large $T$, be bounded by $D^{(o2)}_{1}T^{-1/2}$, for some constant $D^{(o2)}_{1}$. Thus, the first two terms on the right-hand side of Eq.~(\ref{WTGdef}) are bounded by a finite sum of expressions of the form \begin{equation} |R_{Z}(s)|\leq (1+A^{(o2)}_{1} + B^{(o2)}_{1}\Gamma T)D^{(o2)}_{1}T^{-1/2}, \end{equation} for sufficiently large $T$. Similarly, the third term on the right-hand side of Eq.~(\ref{WTGdef}) gives a finite sum of bounds of the form \begin{equation} |R_{D}(s)|\leq (1+A^{(o2)}_{2} + B^{(o2)}_{2}\Gamma T)D^{(o2)}_{2} \Gamma T^{1/2}. \end{equation} Thus, for sufficiently large $T$ we obtain \begin{displaymath} ||X_{o}(s)|| \leq A^{(o2)}_{3}T^{-1/2} + B^{(o2)}_{3}\Gamma T^{1/2} + C^{(o2)}_{3}\Gamma^{2}T^{3/2}. \end{displaymath} By combining this with the Lipschitz condition (\ref{lipso}), one obtains \begin{eqnarray} \label{o2sca} & ||\overline{\rho}_{o}(s)-\overline{\rho}_{o}^{a}(s)||&\leq (A^{(o2)}_{3}T^{-1/2} + B^{(o2)}_{3}\Gamma T^{1/2} \nonumber\\ & & + C^{(o2)}_{3}\Gamma^{2}T^{3/2} ) e^{s(F^{(o)}+ G^{(o)}\Gamma T)}, \end{eqnarray} for all $s\in[0,1]$. 
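The $T^{-1/2}$ scaling of Theorem \ref{stationary} can also be checked numerically. With the illustrative phase $h(s) = (s-1/2)^{2}$, which has a single interior stationary point with $h''\neq 0$, and a smooth amplitude $f(s) = e^{-s}$, the quantity $\sqrt{T}\,|\int_{0}^{1}e^{iTh(s)}f(s)ds|$ is roughly constant:

```python
import numpy as np

# Stationary-phase scaling: h(s) = (s - 1/2)^2 has h'(1/2) = 0 and h''(1/2) = 2,
# so the integral decays like T^{-1/2} instead of the T^{-1} of the nonstationary case.
def stat_integral(T, n=400001):
    s, ds = np.linspace(0.0, 1.0, n, retstep=True)
    y = np.exp(1j * T * (s - 0.5)**2) * np.exp(-s)
    return np.sum((y[:-1] + y[1:]) / 2) * ds   # composite trapezoid rule

scaled = [np.sqrt(T) * abs(stat_integral(T)) for T in (100.0, 400.0, 1600.0)]
```

Here $\sqrt{T}\,|I(T)|$ stays close to the stationary-phase prediction $f(1/2)\sqrt{\pi}$, so the decay is indeed $T^{-1/2}$, slower than the $T^{-1}$ of Lemma \ref{lemmaett}.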
Since $\Gamma T^{1/2} = (\Gamma T)T^{-1/2}$ and $\Gamma^{2}T^{3/2} = (\Gamma T)^{2}T^{-1/2}$, the simultaneous conditions $T\rightarrow \infty$, $\Gamma \rightarrow 0$, and $\Gamma T$ bounded are again sufficient for the error to vanish. Although we obtain the same conditions as in case (i), Eq.~(\ref{o2sca}) nevertheless indicates worse scaling properties of the error than Eq.~(\ref{diagsc}) does. This point will be discussed further in Sec.~\ref{sec:scales}. \subsection{\label{sec:equiv}The approximate equations} In Secs.~\ref{sec:diag} and \ref{sec:offd} we have motivated the approximate equations for diagonal as well as off-diagonal terms $\overline{\rho}^{(kl)}(s)$. For the diagonal terms the approximate equation is Eq.~(\ref{aoirae}) with $X_{d}(s)\equiv 0$. For the off-diagonal terms the approximate equation is Eq.~(\ref{totekv}) with $X_{o}(s)\equiv 0$. One may transform the integral equations into differential equations, followed by a change of variables back to $\widetilde{\rho}^{(kl)}(s)$. Using the definition of $g_{klk'l'}$, this results in Eqs.~(\ref{diagonal}) and (\ref{nlan}), for the diagonal and the off-diagonal terms, respectively. Note that Eq.~(\ref{nlan}) holds, not only for the off-diagonal terms, but for the diagonal terms as well. This is the case since the expression in Eq.~(\ref{nlan}) reduces to Eq.~(\ref{diagonal}), due to Eq.~(\ref{vilk1}), if we consider the diagonal terms. The transformation to Eq.~(\ref{totalaapp}) from Eq.~(\ref{nlan}) is straightforward for the dissipator. The only part which may need comment is the operator $Q(s)$ in Eq.~(\ref{totalaapp}). 
If one transforms from the variable $\widetilde{\rho}(s)$, back to the variable $\rho(s)$, combining all the terms, one obtains \begin{eqnarray} \label{intermediate} \dot{\rho} & = & -iT[H(s),\rho] \nonumber\\ & & + \Gamma T\sum_{klk'l'}g_{klk'l'}P_{k}(s)D_{s}\bm{(}P_{k'}(s)\rho P_{l'}(s)\bm{)}P_{l}(s) \nonumber \\ & & - \sum_{k}P_{k}(s)\dot{P}_{k}(s)\rho - \rho\sum_{k} \dot{P}_{k}(s) P_{k}(s). \end{eqnarray} By differentiating $P_{k}^{2}(s) = P_{k}(s)$ one obtains $\dot{P}_{k}(s)P_{k}(s) + P_{k}(s)\dot{P}_{k}(s) = \dot{P}_{k}(s)$. If this expression is summed over $k$ and combined with the fact that $\sum_{k}\dot{P}_{k}(s) = 0$, the result is \begin{equation} \label{sumPdotP} \sum_{k} P_{k} (s) \dot{P}_{k}(s) = -\sum_{k} \dot{P}_{k}(s)P_{k}(s). \end{equation} By combining this expression with Eq.~(\ref{intermediate}) one obtains Eq.~(\ref{totalaapp}). \subsection{\label{sec:scales}Time scales} For the diagonal terms, as well as for the off-diagonal terms in case (i), we have found that the error between the solution of the exact equation and the solution of the approximate equation satisfies a bound of the form \begin{equation} \label{cond} \mathcal{E}\leq (AT^{-1}+ B\Gamma + C\Gamma^{2}T)e^{F+G\Gamma T}. \end{equation} In view of the limiting processes considered in the previous sections, a physical interpretation of this condition might be to assume that the strength parameter $\Gamma$ depends on the run-time $T$. If $\Gamma = \alpha/T$, with $\alpha\geq 0$ a constant independent of $T$, then the error would go to zero when $T\rightarrow\infty$. This, however, paints too rosy a picture of our ability to control open-system effects. In practice, the open-system effects are often residual uncontrollable errors and the strength $\Gamma$ is given by the situation at hand, and we have no means to decrease $\Gamma$ as $T$ increases. 
On the other hand, from Eq.~(\ref{cond}) it is quite clear that the approximation is good if the run-time $T$ is sufficiently large, the characteristic time scale of the open-system effects $\Gamma^{-1}$ is sufficiently large, and the run-time $T$ is of the order of, or smaller than, $\Gamma^{-1}$. Unlike the standard adiabatic approximation where the error can be made arbitrarily small by increasing the run-time, the present approximation appears to be limited, since for a given open-system strength $\Gamma$, the error cannot be made arbitrarily small as the run-time has to be of the same order as, or smaller than, the characteristic time scale of the open-system effects \cite{remark2}. One should keep in mind, though, that we only have obtained sufficient conditions, not necessary conditions, for the accuracy of the approximation. As pointed out in Sec.~\ref{sec:range}, these sufficient conditions may in some cases be unnecessarily pessimistic. For the off-diagonal terms in case (ii), we similarly obtained the condition $T\rightarrow\infty$, $\Gamma\rightarrow 0$, and $\Gamma T$ bounded. However, this does not tell us at what rate the error decreases. One may compare Eqs.~(\ref{o2sca}) and (\ref{cond}). As an example, consider those terms that solely depend on $T$, and not on $\Gamma$. This part scales like $T^{-1/2}$ and $T^{-1}$ in Eqs.~(\ref{o2sca}) and (\ref{cond}), respectively. Thus, while both these parts go to zero as the run-time $T$ increases, the rate is slower for case (ii) than for the diagonal terms and case (i). Similarly one may compare the other terms, in Eqs.~(\ref{o2sca}) and (\ref{cond}), containing combinations of $T$ and $\Gamma$. Again one finds that the scaling of the error with increasing $T$ and decreasing $\Gamma$ is worse in case (ii) than for the diagonal terms and case (i). This suggests that the range of applicability of the approximation is tighter in case (ii). 
One may note that the constants $A$, $B$, $C$, $G$, and $F$ in Eq.~(\ref{cond}) play an important role as they ``set the scales'', in the sense that they determine what ``large $T$'' and ``small $\Gamma$'' mean. We have avoided giving explicit estimates of these constants. It would be possible to perform the derivations in the previous sections in such a way that estimates of these constants are obtained. However, it seems a better strategy to derive such constants more specifically for the system and the initial conditions at hand. In essence, we have shown that there exists a region of large $T$ and small $\Gamma$ where the approximation is good, but we have not determined how large $T$ and how small $\Gamma$ must be. This is analogous to perturbation theory where one knows the approximation to be good if the perturbation parameter is sufficiently small, but usually one does not know how small ``sufficiently small'' is. \section{\label{sec:compo}Complete positivity} So far we have assumed very little about the exact nature of the disturbance $D_{s}$. Apart from requiring that $D_{s}$ be linear as a superoperator and sufficiently smooth as a function of $s$, we have only required that it lead to an evolution which keeps the solution $\rho(s)$ bounded. In this section we investigate in more detail what evolution the approximation gives rise to, and we do so for a restricted class of superoperators $D_{s}$. For an important class of master equations $\dot{\varrho} = L\varrho$, the superoperator $L$ can be written on the (time-independent) Lindblad form \cite{Lindblad}. If $L$ is bounded, then the Lindblad form guarantees that the resulting evolution is trace preserving and completely positive \cite{Lindblad,Vitt}. To be more precise, the master equation induces a one-parameter family of linear maps $\Lambda_{x}$ such that $\rho(s_2) = \Lambda_{s_2-s_1}\rho(s_1)$, for $s_2\geq s_1$.
Each $\Lambda_{x}$ is trace preserving and completely positive if $L$ can be written on the Lindblad form. The complete positivity guarantees that the evolution maps density operators to density operators, even if the evolution acts on one member of an entangled pair of systems \cite{Kraus}. If the superoperator $L$ is time-dependent we instead obtain a two-parameter family of linear maps $\Lambda_{s_2,s_1}$ such that $\rho(s_2) = \Lambda_{s_2,s_1}\rho(s_1)$, for $s_2\geq s_1$. In the finite-dimensional case it can be shown that a sufficient condition for $\Lambda_{s_2,s_1}$ to be completely positive is that the time-dependent superoperator $L_{s}$ can be written on a time-dependent Lindblad form, and that $L_{s}$ has a continuous first derivative on the interval $[0,1]$. For discussions on the complete positivity of the dynamical maps generated by time-dependent Lindbladians, see Refs.~\cite{Alle,Lendi}. Note that the Lindblad form of the master equation is not necessary in order to obtain complete positivity, or positivity of the dynamical maps. More general master equations which are non-local in time (integro-differential equations) have been considered in the literature (see, e.g., \cite{Wilkie}). Moreover, time-local equations $\dot{\rho}(s) = L_{s}\rho(s)$, where $L_{s}$ is not of the Lindblad form, have also been considered (see, e.g., \cite{Breuer}). In this section we assume that the disturbance $D_{s}$ can be written on the time-dependent Lindblad form. It may be possible to generalize the reasoning in this section to the type of master equations considered in \cite{Breuer}. This question is, however, not treated here. Here it is shown that if the superoperator $D_{s}$ can be written on the time-dependent Lindblad form, then the approximate equation can also be written on the time-dependent Lindblad form.
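The connection between the Lindblad form and complete positivity can be illustrated numerically in the simplest setting: exponentiating a bounded, time-independent Lindblad generator and checking that the Choi matrix of the resulting map is positive semidefinite. The sketch below uses a randomly drawn Hamiltonian and a single jump operator (both purely illustrative); the row-major vectorization convention is an implementation detail:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 3
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (X + X.conj().T) / 2                                    # illustrative Hamiltonian
V = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # illustrative jump operator
I = np.eye(d)

# Vectorized Lindblad generator; with row-major vec(rho) = rho.flatten(),
# vec(A @ rho @ B) = kron(A, B.T) @ vec(rho).
VdV = V.conj().T @ V
L = (-1j * (np.kron(H, I) - np.kron(I, H.T))
     + np.kron(V, V.conj())
     - 0.5 * np.kron(VdV, I)
     - 0.5 * np.kron(I, VdV.T))

S = expm(0.7 * L)                               # the dynamical map Lambda_t

# Choi matrix of the map; positive semidefiniteness <=> complete positivity.
C = S.reshape(d, d, d, d).transpose(2, 0, 3, 1).reshape(d * d, d * d)
choi_eigs = np.linalg.eigvalsh((C + C.conj().T) / 2)
```

All eigenvalues of the Choi matrix come out nonnegative, and the generator annihilates the vectorized identity from the left, which is the vectorized statement of trace preservation.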
Thus, under suitable conditions it follows that the approximate evolution is ``physically reasonable'' in the sense that it is trace preserving and completely positive. Suppose $D_{s}(\rho)$ can be written on the time-dependent Lindblad form \begin{eqnarray} \label{Ds} D_{s}(\rho) &=& -i[F(s),\rho] + \sum_{n}V_{n}(s)\rho V_{n}^{\dagger}(s)\\ & &-\frac{1}{2}\sum_{n} V_{n}^{\dagger}(s)V_{n}(s)\rho -\frac{1}{2}\rho\sum_{n} V_{n}^{\dagger}(s)V_{n}(s),\nonumber \end{eqnarray} where $F(s)$ is Hermitian. To calculate the term of Eq.~(\ref{totalaapp}) involving $D_s$, we use Eq.~(\ref{Ds}) together with the conditions in Eq.~(\ref{vilk1}), to obtain the approximate equation $\dot{\rho} = -i[TH(s)+ Q(s),\rho] + \Gamma TR_{s}(\rho)$, where \begin{eqnarray} \label{njknd} R_{s}(\rho) & = & \sum_{klk'l'}g_{klk'l'} P_{k}(s)D_{s}\bm{(} P_{k'}(s) \rho P_{l'}(s) \bm{)} P_{l}(s) \nonumber \\ & = & -i\sum_{k} \big[ P_{k}(s) F(s)P_{k}(s), \rho \big] \nonumber \\ & & + \sum_{klk'l'n}g_{klk'l'}P_{k}(s)V_{n}(s)P_{k'}(s)\rho P_{l'}(s) V_{n}^{\dagger}(s)P_{l}(s) \nonumber \\ & & -\frac{1}{2} \sum_{kn} P_{k}(s) V_{n}^{\dagger}(s) V_{n}(s) P_{k}(s) \rho \nonumber \\ & & -\frac{1}{2}\sum_{ln}\rho P_{l}(s)V_{n}^{\dagger}(s) V_{n}(s)P_{l}(s). \end{eqnarray} One may note the following \begin{eqnarray} \label{ghsns} & & \sum_{k}P_{k}(s)V_{n}^{\dagger}(s)V_{n}(s)P_{k}(s) \\ & & =\sum_{klk'l'}g_{klk'l'}P_{l'}(s)V_{n}^{\dagger}(s) P_{l}(s)P_{k}(s)V_{n}(s)P_{k'}(s) \nonumber, \end{eqnarray} which follows from Eq.~(\ref{vilk1}). 
By combining Eqs.~(\ref{njknd}) and (\ref{ghsns}) the result is \begin{eqnarray} \label{dfddfb} & & R_{s}(\rho) = -i\sum_{k} \big[ P_{k}(s) F(s)P_{k}(s),\rho \big] \\ & & + \sum_{klk'l'n}g_{klk'l'}P_{k}(s)V_{n}(s)P_{k'}(s)\rho P_{l'}(s) V_{n}^{\dagger}(s)P_{l}(s) \nonumber \\ & & -\frac{1}{2} \sum_{klk'l'n}g_{klk'l'}P_{l'}(s)V_{n}^{\dagger}(s) P_{l}(s)P_{k}(s)V_{n}(s)P_{k'}(s)\rho \nonumber\\ & & -\frac{1}{2}\sum_{klk'l'n}g_{klk'l'}\rho P_{l'}(s)V_{n}^{\dagger}(s)P_{l}(s)P_{k}(s)V_{n}(s)P_{k'}(s). \nonumber \end{eqnarray} Define a matrix $G$ with elements $G_{kk',ll'} = g_{klk'l'}$. $G$ is symmetric due to Eq.~(\ref{symm1}), which implies that $G$ is diagonalizable such that $G_{kk',ll'} = \sum_{m} \lambda_{m} c_{kk'}^{(m)} c_{ll'}^{(m)\ast}$. This can be used to show that Eq.~(\ref{dfddfb}) can be rewritten as \begin{eqnarray} R_{s}(\rho) & = & -i\sum_{k} \big[ P_{k}(s) F(s)P_{k}(s),\rho \big] \nonumber \\ & & + \sum_{n}\sum_{m}M_{n}^{(m)}(s)\rho M_{n}^{(m)\dagger}(s) \nonumber \\ & & -\frac{1}{2}\sum_{n}\sum_{m} M_{n}^{(m)\dagger}(s)M_{n}^{(m)}(s)\rho \nonumber \\ & & -\frac{1}{2}\sum_{n}\sum_{m}\rho M_{n}^{(m)\dagger}(s)M_{n}^{(m)}(s), \end{eqnarray} where \begin{equation} M_{n}^{(m)}(s) = \sum_{kk'}\sqrt{\lambda_{m}}c_{kk'}^{(m)}P_{k}(s)V_{n}(s)P_{k'}(s). \end{equation} Hence, we have shown that the approximate equation can be written on the time-dependent Lindblad form. \section{\label{sec:appl}Example: Non-Abelian Holonomy} Holonomic quantum computation \cite{Zanardi} is a recently proposed approach to quantum circuits using the idea of adiabatic evolution. Here, we wish to apply the present approximation scheme for weak open-system effects in holonomic single-qubit rotation gates. 
We consider a four-level system consisting of three ground states $0$, $1$, and $a$ whose coupling to an excited state $e$ is modeled by the Hamiltonian \cite{Unan} \begin{equation} \label{Hamiltonian} H(s) =|e\rangle \big( \langle 0| \omega_0(s) + \langle 1| \omega_1(s) + \langle a| \omega_a(s) \big) + \mathrm{H.c.} \end{equation} Here, $s = t/T$, with $T$ being the run-time of the process, and $\omega_{0}$, $\omega_{1}$, and $\omega_{a}$ are tunable, possibly complex-valued, coupling parameters. For each $s$, $H$ possesses a doubly degenerate zero-energy (dark) eigensubspace spanned by $|\chi_1 \rangle$ and $|\chi_2 \rangle$ and two bright eigenvectors $|\chi_3 \rangle$ and $|\chi_4 \rangle$, the latter with energies $\pm \omega$, where \begin{equation} \label{smomedef} \omega = \sqrt{|\omega_0|^2+|\omega_1|^2+|\omega_a|^2}. \end{equation} This type of system is found in various implementations of holonomic gates, including ion-traps \cite{Duan}, Josephson junctions \cite{Faoro}, semiconductor quantum dots \cite{solzanzanros}, and neutral atoms in cavities \cite{RecCal}. In the present investigation $\omega(s)$ is chosen to be constant. It follows that we may measure energy in units of $\omega$, and thus take the vector $[\omega_{0}(s),\omega_{1}(s),\omega_{a}(s)]$ to be of unit length. Since $\hbar = 1$, it follows that we measure time, in particular the run-time, in units of $\omega^{-1}$. We use this convention in the rest of this section. Holonomic single-qubit rotations acting on the computational space spanned by $|0\rangle$ and $|1\rangle$ may be obtained by adiabatic transport of the doubly degenerate dark states along paths restricted by the parametrization \begin{eqnarray} \label{parameters} \omega_0(s) & = & \sin \theta(s) \sin \varphi(s),\nonumber\\ \omega_1(s) & = & \sin \theta(s) \cos\varphi(s),\nonumber\\ \omega_a(s) & = & \cos\theta(s), \end{eqnarray} where the angles $\theta$ and $\varphi$ parametrize a 2-sphere.
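For the Hamiltonian of Eq.~(\ref{Hamiltonian}) with the parametrization of Eq.~(\ref{parameters}), the spectral structure described above is easy to verify numerically. A small sketch (basis ordering $|0\rangle$, $|1\rangle$, $|a\rangle$, $|e\rangle$; the parameter values are arbitrary):

```python
import numpy as np

def hamiltonian(theta, phi):
    # Couplings of Eq. (parameters); basis ordering |0>, |1>, |a>, |e>.
    w0 = np.sin(theta) * np.sin(phi)
    w1 = np.sin(theta) * np.cos(phi)
    wa = np.cos(theta)
    H = np.zeros((4, 4), dtype=complex)
    H[3, 0], H[3, 1], H[3, 2] = w0, w1, wa   # the |e><0|, |e><1|, |e><a| parts
    return H + H.conj().T                    # add the Hermitian conjugate

theta, phi = 0.7, 1.3
H = hamiltonian(theta, phi)
evals = np.sort(np.linalg.eigvalsh(H))
# Expect a doubly degenerate zero-energy (dark) eigenvalue and bright
# energies +/- omega = +/- 1, since the coupling vector has unit length.
```

One can also check directly that the dark vector $|\chi_1\rangle$ of Eq.~(\ref{enkel}) below is annihilated by $H$.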
Explicitly, a loop $\mathcal{C}$ in parameter space starting and ending at $(\omega_0,\omega_1,\omega_a) = (0,0,1)$ yields the holonomic rotation gate \begin{eqnarray} u[\mathcal{C}] = e^{-\Omega (|0\rangle \langle 1| - |1\rangle \langle 0|)} , \label{holonomy} \end{eqnarray} $\Omega$ being the solid angle swept by $\mathcal{C}$. We assume that the system is influenced by an environment which is sensitive to whether the system is in the state $a$ or not. This may be modeled by adding the Lindbladian \begin{equation} \label{lindbladian} V = |a\rangle \langle a| \end{equation} and its concomitant strength $\Gamma$. \subsection{Application of the approximation} First, we notice that there is an arbitrariness in the choice of eigenbasis of $H(s)$, which can be formulated as a choice of gauge. This arbitrariness in the choice of gauge is related to the arbitrariness in the choice of $U(s)$ in Eq.~(\ref{Pvillkor}). Let $\{|\chi_{k}(s)\rangle\}_{k}$ be an instantaneous orthonormal eigenbasis of $H(s)$. Given such a basis one may construct a family $U(s)$ by \begin{equation} \label{gaugeochu} U(s) = U_{0}\sum_{k} |\chi_{k}(0)\rangle\langle\chi_{k}(s)|, \end{equation} where $U_{0}$ is a fixed unitary operator such that $[U_{0},P_{n}(0)] = 0$ for all $n$. Every family $U(s)$ constructed via Eq.~(\ref{gaugeochu}) is unitary and satisfies Eq.~(\ref{Pvillkor}). Moreover, every family $U(s)$ that satisfies Eq.~(\ref{Pvillkor}) can be reached via Eq.~(\ref{gaugeochu}) for some choice of instantaneous orthonormal eigenbasis $\{|\chi_{k}(s)\rangle\}_{k}$ and $U_{0}$. As the present approximation is independent of the choice of allowed $U(s)$, it follows that the approximation is also independent of the choice of gauge. Here we briefly describe a procedure to put the approximate master equation into matrix form. As the present application only concerns the computational subspace, we disregard the off-diagonal terms of the approximate solution, and only consider Eq.~(\ref{diagonal}).
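The construction of Eq.~(\ref{gaugeochu}) is easily illustrated numerically: taking the instantaneous eigenbasis given in Eq.~(\ref{enkel}) below and $U_{0} = \hat{1}$, the resulting $U(s)$ is unitary and maps the instantaneous dark projector onto its initial value, as required for a valid choice in Eq.~(\ref{Pvillkor}). A sketch (arbitrary parameter values):

```python
import numpy as np

def eigenbasis(theta, phi):
    # The instantaneous eigenbasis of Eq. (enkel); ordering |0>, |1>, |a>, |e>.
    s, c = np.sin, np.cos
    chi1 = np.array([c(phi), -s(phi), 0, 0], dtype=complex)
    chi2 = np.array([s(phi) * c(theta), c(phi) * c(theta), -s(theta), 0], dtype=complex)
    chi3 = np.array([s(phi) * s(theta), c(phi) * s(theta), c(theta), 1], dtype=complex) / np.sqrt(2)
    chi4 = np.array([s(phi) * s(theta), c(phi) * s(theta), c(theta), -1], dtype=complex) / np.sqrt(2)
    return [chi1, chi2, chi3, chi4]

b0 = eigenbasis(0.0, 0.0)     # basis at s = 0
bs = eigenbasis(0.9, 0.4)     # basis at some later s
# U(s) = sum_k |chi_k(0)><chi_k(s)|  (with U_0 the identity)
U = sum(np.outer(v0, vs.conj()) for v0, vs in zip(b0, bs))

P_dark_s = sum(np.outer(v, v.conj()) for v in bs[:2])   # dark projector at s
P_dark_0 = sum(np.outer(v, v.conj()) for v in b0[:2])   # dark projector at 0
```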
In the present case there are three diagonal terms, corresponding to the dark subspace and the two bright states. In order to write Eq.~(\ref{diagonal}) in matrix form we first choose an instantaneous orthonormal eigenbasis $\{|\chi_{k}(s)\rangle\}_{k}$ of $H(s)$, from which one can construct $U(s)$ via Eq.~(\ref{gaugeochu}), with $U_{0} = \hat{1}$. Define \begin{equation} \label{repr} \boldsymbol{\rho}^{a} = \left(\begin{array}{c c c c c c}\rho^{a}_{11}&\rho^{a}_{12}& \rho^{a}_{21}&\rho^{a}_{22}&\rho^{a}_{33}&\rho^{a}_{44}\end{array}\right)^t, \end{equation} where $\rho_{kl}^{a}(s) = \langle\chi_{k}(0)|\widetilde{\rho}^{a}(s)|\chi_{l}(0)\rangle$, and where $\widetilde{\rho}^{a}(s)$ is the solution of Eq.~(\ref{diagonal}). Note that here we use the initial eigenbasis $\{|\chi_{k}(0)\rangle\}_{k}$. This is related to the fact that Eq.~(\ref{diagonal}) is written in the ``rotated frame'', as described by Eqs.~(\ref{change}) and (\ref{Pvillkor}). When inserting $U(s)$ into Eq.~(\ref{diagonal}) the result can be written as \begin{equation} \label{eqn:matrixapprox} \dot{\boldsymbol{\rho}}^{a} = \boldsymbol{M}^{a}(s)\boldsymbol{\rho}^{a}.
\end{equation} If the instantaneous eigenbasis is chosen to be \begin{widetext} \begin{eqnarray} \label{enkel} |\chi_1(s)\rangle & = & \cos \varphi(s) |0\rangle - \sin \varphi(s) |1\rangle, \nonumber \\ |\chi_2(s)\rangle & = & \sin \varphi(s) \cos \theta(s) |0\rangle + \cos \varphi(s) \cos \theta(s) |1\rangle - \sin \theta(s) |a\rangle, \nonumber \\ |\chi_3(s)\rangle & = & \frac{1}{\sqrt{2}} \Big( \sin \varphi(s) \sin \theta(s) |0\rangle + \cos \varphi(s) \sin \theta(s) |1\rangle + \cos \theta(s) |a\rangle + |e\rangle \Big) , \nonumber \\ |\chi_4(s)\rangle & = & \frac{1}{\sqrt{2}} \Big( \sin \varphi(s) \sin \theta(s) |0\rangle + \cos \varphi(s) \sin \theta(s) |1\rangle + \cos \theta(s) |a\rangle - |e\rangle \Big) , \end{eqnarray} then, with the Hamiltonian in Eq.~(\ref{Hamiltonian}) and the Lindbladian in Eq.~(\ref{lindbladian}), one obtains \begin{equation} \label{Mapprox} \boldsymbol{M}^{a} = \!\left(\!\begin {array}{cccccc} 0 & -\frac{d\varphi}{ds}\cos\theta(s) & -\frac{d\varphi}{ds}\cos\theta(s) & 0 & 0 & 0\\ \noalign{\medskip}\frac{d\varphi}{ds}\cos\theta(s) & -\frac{\Gamma T}{2}\sin^2\theta(s) & 0 & -\frac{d\varphi}{ds}\cos\theta(s) & 0 & 0\\ \noalign{\medskip}\frac{d\varphi}{ds}\cos\theta(s) & 0 & -\frac{\Gamma T}{2}\sin^2\theta(s) & -\frac{d\varphi}{ds}\cos\theta(s) & 0 & 0\\ 0 & \frac{d\varphi}{ds}\cos\theta(s) & \frac{d\varphi}{ds}\cos\theta(s) & -f(s) & \frac{1}{2}f(s) & \frac{1}{2}f(s)\\ 0 & 0 & 0 & \frac{1}{2}f(s) & -\frac{1}{4}g(s) & \frac{\Gamma T}{4}\cos^{4}\theta(s)\\ 0 & 0 & 0 & \frac{1}{2}f(s) & \frac{\Gamma T}{4}\cos^{4}\theta(s) & -\frac{1}{4}g(s) \end {array}\!\right). \end{equation} \end{widetext} Here, $f(s) = \Gamma T \sin^2\!\theta(s)\cos^{2}\!\theta(s)$ and $g(s)=\Gamma T \left[1+\sin^2\!\theta(s)\right]\cos^{2}\!\theta(s)$. 
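A quick consistency check on Eq.~(\ref{Mapprox}) is trace preservation: the rows carrying the populations $\rho^{a}_{11}$, $\rho^{a}_{22}$, $\rho^{a}_{33}$, and $\rho^{a}_{44}$ must sum to zero. A sketch (arbitrary parameter values; \texttt{GT} stands for $\Gamma T$ and \texttt{dphi} for $d\varphi/ds$):

```python
import numpy as np

def M_a(theta, dphi, GT):
    # The 6x6 matrix of Eq. (Mapprox); GT = Gamma*T, dphi = dvarphi/ds.
    a = dphi * np.cos(theta)
    s2, c2 = np.sin(theta) ** 2, np.cos(theta) ** 2
    f = GT * s2 * c2
    g = GT * (1 + s2) * c2
    return np.array([
        [0, -a, -a, 0, 0, 0],
        [a, -GT * s2 / 2, 0, -a, 0, 0],
        [a, 0, -GT * s2 / 2, -a, 0, 0],
        [0, a, a, -f, f / 2, f / 2],
        [0, 0, 0, f / 2, -g / 4, GT * c2 ** 2 / 4],
        [0, 0, 0, f / 2, GT * c2 ** 2 / 4, -g / 4],
    ])

M = M_a(theta=0.8, dphi=1.7, GT=2.5)
# Rows 0, 3, 4, 5 propagate the populations rho_11, rho_22, rho_33, rho_44;
# their sum must vanish for the trace of the density operator to be conserved.
trace_rows = M[[0, 3, 4, 5]].sum(axis=0)
```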
From the above analysis it follows that the solutions of Eq.~(\ref{diagonal}) can be written $\widetilde{\rho}^{(nn)}(s) = \sum_{kl}\rho_{kl}^{a}(s)|\chi_{k}(0)\rangle\langle \chi_{l}(0)|$, where the sum over $k,l$ spans the appropriate elements for each $n$. Since these operators are written in the ``rotated frame'', it is appropriate to invert this transformation to more easily analyze the gate operation. By inverting the transformation in Eq.~(\ref{change}) one finds that these operators can be written \begin{eqnarray} U^\dagger(s)\widetilde{\rho}^{(nn)}(s)U(s) & = & \sum_{kl}\rho_{kl}^{a}(s)|\chi_{k}(s)\rangle\langle \chi_{l}(s)|\nonumber\\ & = & P_{n}(s)\rho(s)P_{n}(s), \end{eqnarray} where $\rho(s)$ is the solution of Eq.~(\ref{totalaapp}). There are some subtleties associated with the choice of basis and the usual difficulty with spherical coordinates, viz., that $\varphi$ is not defined at the north and south poles of parameter space. In fact, with the choice of basis in Eq.~(\ref{enkel}), the gauge potential $Z(s)$ has singularities at both poles. Nevertheless, if we avoid loops around the poles and take appropriate limits if we wish to approach the poles, then this gauge is unproblematic. Another possibility is to rotate the dark instantaneous eigenvectors as \begin{eqnarray} |\chi_k\rangle \rightarrow |\chi'_k\rangle & = & \sum_{j=1}^2 |\chi_j\rangle \langle \chi_j|W|\chi_k \rangle , \ k=1,2, \nonumber \\ W & = & e^{\varphi \big( |\chi_1 \rangle \langle \chi_2| - |\chi_2 \rangle \langle \chi_1| \big)} . \label{nordbra} \end{eqnarray} With this basis one obtains a gauge where the vector potential is well defined except at the south pole of parameter space. The rotated dark states do, however, give a system of differential equations too extensive to be presented explicitly here.
We restrict the parametrization to \begin{equation} \varphi(s) = a\, s + b, \quad \theta(s) = c\, s + d, \quad a,b,c,d \in \mathbb{R},\label{eqn:resres} \end{equation} and the paths in parameter space to half ``orange slices'' \begin{eqnarray} \label{eqn:orangepath} (\varphi=0,\theta=0,t=0) & \rightarrow & (0,\pi/2,T_1) \nonumber \\ & \rightarrow & (\delta \varphi,\pi/2,T_2+T_1) \nonumber \\ & \rightarrow &(\delta \varphi,0,T_3+T_2+T_1) \nonumber \\ & \rightarrow & (0,0,T_4+T_3+T_2+T_1) , \nonumber \\ \end{eqnarray} where $T_1,\ldots,T_4$ are the run-times for the path segments, see Fig.~\ref{fig:path}. Note that the fourth path originates from the deformation of a well defined square to the orange slice on the parameter sphere. In the limit $\theta \rightarrow 0$ the fourth path is reduced to a single point at the north pole. This implies that $T_{4}$ can be set to zero without loss of adiabaticity. Note that in the present decoherence model a nonzero $T_{4}$ only affects the two bright states. \begin{figure}[ht] \includegraphics[width = 6cm]{path.eps} \caption{\label{fig:path} Path in parameter space starting and ending at P. The duration of the path segments P$\rightarrow$Q, Q$\rightarrow$R, and R$\rightarrow$P is $T_1$, $T_2$, and $T_3$, respectively, as described in Eq.~(\ref{eqn:orangepath}). $\delta \varphi$ is the opening angle in the equatorial plane. 
Note that for this path the enclosed solid angle equals $\delta\varphi$.} \end{figure} For the initial state vector $|\Psi\rangle = \cos(x/2)|0\rangle + e^{-iy}\sin(x/2)|1\rangle$, the output state of the approximation projected onto the computational space may be written as \begin{eqnarray} \label{approximerade} \boldsymbol{\rho}^a_{\mathrm{out}} & = & \boldsymbol{u} [\mathcal{C}] \boldsymbol{\rho}' \boldsymbol{u}^{\dagger} [\mathcal{C}] , \nonumber\\ \boldsymbol{\rho}' & = & \left( \begin {array}{cc} \frac{1}{2}+\frac{1}{2}\,\cos x & f_1 \\ f_1^* & \left(\frac{1}{2}-\frac{1}{2}\cos x \right) f_2 \end {array}\right) , \end{eqnarray} where \begin{eqnarray} f_1 & = & \frac{1}{2} e^{-\frac{1}{4}\Gamma \left( T_3+2T_2+T_1 \right)} e^{-iy}\sin x,\\ f_2 & = & \frac{1}{3} + \frac{2}{3}e^{-\frac{3}{16}\Gamma(T_3 +T_1)} , \end{eqnarray} and $\boldsymbol{u} [\mathcal{C}]$ is the holonomy, Eq.~(\ref{holonomy}), in the $\{ |0\rangle , |1\rangle \}$ basis. Thus, the output is determined by the holonomy transformation of the $\delta\varphi$-independent $\boldsymbol{\rho}'$. It is worth pointing out that this feature is due to the particular choice of parametrization and the chosen loop in parameter space, and not some intrinsic property of the decoherence. Furthermore, in addition to destroying the superpositions between the computational states, the decoherence also gives an intensity loss from the computational space. This intensity loss arises as the states corresponding to $\chi_2(s)$, $\chi_3(s)$, and $\chi_4(s)$ decohere into mixed states, as all three of them contain the $a$ state. As a final observation we note that Eq.~(\ref{eqn:matrixapprox}) can be obtained more or less directly from Eq.~(\ref{basekv}).
We may represent Eq.~(\ref{basekv}) using an instantaneous orthonormal eigenbasis $\{|\chi_{k}(s)\rangle\}_{k}$ as \begin{equation} \label{totalaekv} \dot{\boldsymbol{\rho}} = \boldsymbol{M}(s)\boldsymbol{\rho}, \end{equation} where $\boldsymbol{M}(s)$ is a $16\times 16$ matrix, and where \begin{equation} \boldsymbol{\rho} \equiv \left(\begin{array}{c c c c c c c c}\rho_{11} &\rho_{12}&\cdots&\rho_{14}&\rho_{21}& \cdots&\rho_{44}\end{array}\right)^t \end{equation} with $\rho_{kl}(s) = \langle \chi_k (s)|\rho(s)|\chi_l (s) \rangle , \ k,l=1,\ldots,4$. Due to Eq.~(\ref{change}) it follows that Eq.~(\ref{totalaekv}) is also obtained if we instead represent Eq.~(\ref{huvudekv}) using the $\{|\chi_{k}(0)\rangle\}_{k}$ basis, where we again assume that $U(s)$ is constructed via Eq.~(\ref{gaugeochu}) with $U_{0}=\hat{1}$. Equation (\ref{nlan}) is obtained by removing couplings from Eq.~(\ref{huvudekv}). For the chosen basis, this corresponds to a removal of off-diagonal elements in $\boldsymbol{M}(s)$, such that the new approximate matrix can be arranged in a block-diagonal form. Each of these diagonal blocks corresponds to a collection of coupled terms. One block corresponds to the diagonal terms, and there is one block for each collection of off-diagonal terms that couple among themselves, as determined by $g_{klk'l'}$. If one is interested in the evolution of a particular collection of coupled terms, then the approximate equation is obtained if one removes those rows and columns from $\boldsymbol{M}(s)$ that correspond to terms not included in the collection. In the present example, the matrix $\boldsymbol{M}^{a}(s)$ in Eq.~(\ref{Mapprox}) is obtained if we use the basis in Eq.~(\ref{enkel}) to represent the exact master equation, and remove those rows and columns from $\boldsymbol{M}(s)$ that correspond to the off-diagonal terms.
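In an implementation, the removal of rows and columns described above amounts to a simple index selection. A sketch (the matrix below is only a stand-in for $\boldsymbol{M}(s)$; the index map follows the row-major ordering in the definition of $\boldsymbol{\rho}$):

```python
import numpy as np

def flat(k, l):
    # Position of rho_kl (k, l = 1,...,4) in the 16-component vector rho.
    return 4 * (k - 1) + (l - 1)

# Elements kept for the diagonal-term block: the dark-subspace block
# rho_11, rho_12, rho_21, rho_22 together with the bright populations.
keep = [flat(k, l) for k, l in [(1, 1), (1, 2), (2, 1), (2, 2), (3, 3), (4, 4)]]

M = np.arange(256, dtype=float).reshape(16, 16)   # stand-in for M(s)
M_block = M[np.ix_(keep, keep)]                   # the reduced 6x6 generator
```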
\subsection{\label{sec.numeric}Numerical analysis} We compare the approximate solution in Eq.~(\ref{approximerade}) with a numerical solution of Eq.~(\ref{totalaekv}) in the Hadamard case $\Omega = \pi/4$ by putting $\delta \varphi = \pi/4$. For the calculation we have used the gauge where the vector potential is well defined at the north pole. We further put $T_{4} = 0$. We distribute the run-time $T$ in proportion to the lengths of the three circle segments, i.e., $T_{1} = T_{3} = 2T/5$, $T_{2} = T/5$. In the numerical treatment of the evolution we decompose the interval $[0,T]$ into subintervals with step size $\Delta t$, on which $\boldsymbol{M}(t)$ is taken to be constant. The resulting approximate evolution is of the form $\boldsymbol{\rho}_{K} = \big[ \Pi_{k=0}^{K} \exp \bm{(} \Delta t \boldsymbol{M}(t_{k}) \bm{)} \big] \boldsymbol{\rho}_{0}$. The step size $\Delta t = 0.01$ is used; the step size $\Delta t = 0.005$ has also been tested, without any significant change in the result. For quantum gates, the relevant error is that at the end-point. This error may contain a contribution in the form of an intensity loss out of the computational subspace. To detect this intensity loss, we use the quantities \begin{eqnarray} I(T) & = & 1-{\textrm{Tr}} \big( P\varrho (T) \big) , \nonumber \\ I^a(T) & = & 1-{\textrm{Tr}} \big( P\varrho^a (T) \big) \label{intensityloss} \end{eqnarray} with $P$ the projector onto the computational subspace spanned by $|0\rangle$ and $|1\rangle$.
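The stepwise integration scheme described above may be sketched as follows (the two-dimensional generator at the end is only a stand-in, used to check that for a constant generator the product of step propagators collapses to a single matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

def propagate(M_of_t, rho0, T, dt):
    """Piecewise-constant scheme: rho_K = [prod_k exp(dt*M(t_k))] rho_0."""
    rho = np.array(rho0, dtype=float)
    t = 0.0
    while t < T - 1e-12:
        step = min(dt, T - t)                 # the last step may be shorter
        rho = expm(step * M_of_t(t)) @ rho
        t += step
    return rho

# Consistency check with a time-independent generator (a rotation).
M0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
out = propagate(lambda t: M0, rho0=[1.0, 0.0], T=1.0, dt=0.01)
```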
The error within this subspace is analyzed in terms of the fidelity \cite{Uhlmann,Jozsa} \begin{eqnarray} & & D(\varrho_{\mathrm{norm}} (T), \varrho^a_{\mathrm{norm}} (T)) \nonumber \\ & & \equiv {\textrm{Tr}} \sqrt{\sqrt{\varrho_{\mathrm{norm}} (T)} \varrho^a_{\mathrm{norm}} (T)\sqrt{\varrho_{\mathrm{norm}} (T)}} , \label{fidelity} \end{eqnarray} where \begin{eqnarray} \varrho_{\mathrm{norm}} (T) & = & \rho_{\mathrm{norm}} (1) = \frac{P \varrho(T) P}{{\textrm{Tr}} \big( P\varrho(T) \big)} , \nonumber \\ \varrho^a_{\mathrm{norm}} (T) & = & \rho^a_{\mathrm{norm}} (1) = \frac{P\varrho^{a}(T)P}{{\textrm{Tr}} \big( P\varrho^{a}(T) \big)} , \label{normstates} \end{eqnarray} are the normalized outputs of the exact and the approximate evolution, respectively. This normalization may correspond to a post-selection procedure. In Fig.~\ref{fig:element}, we show $\langle 1|\varrho(T)|1\rangle$ and $\langle 1|\varrho^{a}(T)|1\rangle$, for $\Gamma = 0,0.01,0.1$. We have chosen the initial state vector $|\Psi\rangle = \cos(x/2)|0\rangle + e^{-iy}\sin(x/2)|1\rangle$ with $x = \pi/5$ and $y = 3\pi/4$. The corresponding normalized fidelity $D$ is shown in Fig.~\ref{fig:fidelity} and the intensity losses $I(T)$ and $I^a(T)$ are shown in Fig.~\ref{fig:loss}. These simulations indicate that for this model system the error seems to decrease with increasing run-time $T$ at a rate more or less equal to that of the ordinary adiabatic approximation in the closed case, independent of the strength $\Gamma$ of the decoherence process. We have confirmed this finding for other input states. \begin{figure}[ht] \includegraphics[width = 8cm]{element.eps} \caption{\label{fig:element} The solid lines show the value of the matrix element $\langle 1|\varrho(T)|1\rangle$ of the output density operator of the exact equation, as a function of $T$. The run-time $T$ is measured in units of $\omega^{-1}$, defined in Eq.~(\ref{smomedef}).
The dashed lines show the corresponding value $\langle 1|\varrho^{a}(T)|1\rangle$. Counted from the top and down, the solid-dashed line pairs correspond to $\Gamma =0$, $\Gamma = 0.01$, and $\Gamma = 0.1$, respectively. The initial state is pure, with polar angle $x = \pi/5$ and azimuthal angle $y =3\pi/4$ on the Bloch sphere. The horizontal dashed line corresponds to the ordinary adiabatic approximation for the closed evolution case. One may note that the rate at which the exact solution approaches the approximate solution appears to be rather independent of the strength $\Gamma$ of the decoherence.} \end{figure} \begin{figure}[ht] \includegraphics[width = 8cm]{fidelity.eps} \caption{\label{fig:fidelity} These graphs highlight another aspect of the same series of calculations as in Fig.~\ref{fig:element}. They show the error between the exact evolution and the approximate evolution, in the form of the normalized fidelity $D$ defined in Eqs.~(\ref{fidelity}) and (\ref{normstates}), as a function of $T$. The run-time $T$ is given in units of $\omega^{-1}$, defined in Eq.~(\ref{smomedef}). Note the different range of the run-time compared to the other plots. Here the dotted line corresponds to $\Gamma = 0$, the dashed line to $\Gamma = 0.01$, and the solid line to $\Gamma = 0.1$. Note that the dotted and the dashed lines almost coincide. These graphs indicate that the distance between the approximate and exact evolution, within the computational subspace, decreases with the run-time $T$ at a rate more or less independent of $\Gamma$.} \end{figure} \begin{figure} \includegraphics[width = 8cm]{loss.eps} \caption{\label{fig:loss} The intensity losses, as defined in Eq.~(\ref{intensityloss}), out of the computational subspace for the same series of calculations as in Figs.~\ref{fig:element} and \ref{fig:fidelity}.
The solid lines show the intensity losses $I$ of the exact evolution, and the dashed lines the intensity losses $I^{a}$ of the approximate evolution, as a function of $T$. The run-time $T$ is given in units of $\omega^{-1}$, defined in Eq.~(\ref{smomedef}). Here the lowermost pair of curves corresponds to $\Gamma = 0$. Note that for $\Gamma = 0$ the loss is identically zero for the approximate evolution. The uppermost pair of curves corresponds to $\Gamma = 0.1$, and the pair in the middle corresponds to $\Gamma = 0.01$.} \end{figure} \section{\label{sec:range}Range of applicability} The analysis in Sec.~\ref{sec:weak} suggests that, for a given $\Gamma$, the error bound in Eq.~(\ref{cond}) has a minimum for some value of the run-time $T$. It is quite straightforward to obtain an example of a system where this appears to be the case. One may consider a time-dependent Hamiltonian of the form \begin{equation} H(s) = e^{-isZ}H_{0}e^{isZ}, \end{equation} where $H_{0}$ and $Z$ are fixed Hermitian operators. The spectrum of this Hamiltonian is fixed, but the eigenbasis rotates. One may consider the master equation \begin{equation} \label{dekoh} \dot{\rho} = -iT[H(s),\rho] -\Gamma T [A,[A,\rho]], \end{equation} where $A$ is a fixed Hermitian operator. The double commutator in the above equation causes decoherence with respect to the eigenbasis of $A$. We have chosen a four-dimensional Hilbert space and have generated $H_{0}$, $Z$, and $A$, as well as the pure initial state, randomly. Figure \ref{fig:exemp} shows the maximum error in the Hilbert-Schmidt norm $\max_{s\in[0,1]}||\rho(s)-\rho^{a}(s)||$ for various choices of $T$ and $\Gamma$. As seen in Fig.~\ref{fig:exemp}, we indeed seem to have the expected behavior of the approximation. In the ideal case, $\Gamma = 0$, the error appears to go to zero as $T$ increases, while for non-vanishing $\Gamma$ there seems to be a minimum error.
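The model of Eq.~(\ref{dekoh}) is straightforward to integrate with the same piecewise-constant exponential scheme as in Sec.~\ref{sec.numeric}. The sketch below (randomly drawn $H_{0}$, $Z$, $A$, and initial state, and purely illustrative values of $T$ and $\Gamma$) propagates the exact equation and confirms two structural properties: the trace is conserved and the purity decreases, as expected for the double-commutator dephasing term:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 4

def rand_herm():
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

H0, Z, A = rand_herm(), rand_herm(), rand_herm()
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())                 # random pure initial state

I = np.eye(d)
ad = lambda X: np.kron(X, I) - np.kron(I, X.T)  # vec([X, rho]) = ad(X) @ vec(rho)
adA2 = ad(A) @ ad(A)                            # the double commutator [A, [A, .]]

T, Gamma, ds = 20.0, 0.05, 0.01
v = rho.flatten()
for s in np.arange(0.0, 1.0, ds):
    Hs = expm(-1j * s * Z) @ H0 @ expm(1j * s * Z)
    v = expm(ds * (-1j * T * ad(Hs) - Gamma * T * adA2)) @ v
rho_out = v.reshape(d, d)
```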
\begin{figure} \includegraphics[width = 8cm]{exemp.eps} \caption{\label{fig:exemp} The maximum error in the Hilbert-Schmidt norm $\max_{s\in[0,1]}||\rho(s)-\rho^{a}(s)||$ between the solution $\rho(s)$ of Eq.~(\ref{dekoh}) and the solution $\rho^{a}(s)$ of the approximate equation as a function of the run-time $T$, the latter measured in arbitrary units. The plots are generated for one random instance of $H_{0}$, $Z$, $A$, and initial state, for a four-dimensional Hilbert space. Each curve corresponds to a value of $\Gamma$, and shows the maximum error as a function of $T$. To the right of the figure the curves correspond, counted from the bottom and up, to $\Gamma = 0, 0.002, 0.004, 0.006, 0.008, 0.01$. As seen, the error for the adiabatic approximation in the closed case ($\Gamma =0$) seems to tend to zero as $T$ increases, while the other cases appear to have a minimum error for a certain $T$.} \end{figure} However, the error does not always seem to behave in this manner. In the example presented in Sec.~\ref{sec.numeric} there is no trace of this minimum error. Rather, the error seems to vanish for large $T$ for any value of $\Gamma$. In other words, the error bounds derived in Sec.~\ref{sec:weak} appear to be unnecessarily pessimistic in some cases. We here put forward some reasons why this may be the case. One aspect is the question of which error to consider. In Sec.~\ref{sec:weak} we considered the maximum deviation between the exact and approximate solution during the whole evolution, while in Sec.~\ref{sec.numeric}, the relevant error was taken at the end of the evolution. In some cases the maximum deviation need not occur at the end of the evolution, which may cause the ``end point error'' to be smaller than the maximum deviation. One may also note that Sec.~\ref{sec.numeric} focused on one single diagonal term of the total density operator and that the error for this part may be smaller than the total error.
Another reason for the approximation to be accurate under wider conditions is if the dissipator $D_{s}$ is such that it does not couple off-diagonal terms to diagonal terms, or off-diagonal terms to other off-diagonal terms. Under such conditions the dissipator is unaffected by the approximation and it seems reasonable that the approximation should have a wider range of applicability. However, this cannot be the sole reason, as is indicated by the results in Sec.~\ref{sec.numeric}, since the dissipator used (i.e., decoherence with pointer state $a$) does belong to the class of dissipators that do couple the diagonal and off-diagonal terms. Suppose, however, that the evolution is such that the magnitude of the off-diagonal terms tends to decrease with increasing run-times. For example, this may occur if a decoherence or relaxation process acts suitably in relation to the instantaneous eigenspaces. Consider the diagonal terms: even if there were a coupling to the off-diagonal terms, the importance of this coupling naturally diminishes if the off-diagonal terms tend to decrease in magnitude. If the open system process is such that it tends to suppress the off-diagonal terms, it thus seems reasonable to expect that the approximate equation should be accurate at large run-times. This reduction of off-diagonal terms should reasonably be more relevant for the end point error than for the maximum deviation. Another reason for the end-point error to vanish is if the approximate and exact equations have a common asymptotic state. There is clearly room for further investigations of when and why the present approximation is accurate beyond the joint limit of slow change and weak open system effects. \section{\label{sec:sum}Conclusions} We present an adiabatic approximation scheme for weakly open systems.
Contrary to the adiabatic approximation for closed systems, the presence of open system effects introduces a coupling between the instantaneous eigenspaces of the time-dependent, possibly degenerate, Hamiltonian. We show that the present approximation can be obtained as a slow-change weak open-system limit, in the sense that the time scale inversely proportional to the strength of the open system effect puts an upper limit on the run-time. In the ideal case of closed systems this limiting time scale becomes infinite, and the ordinary adiabatic approximation \cite{Messiah} is retained. We demonstrate the approximation scheme for a non-Abelian holonomic implementation of a Hadamard gate, exposed to a decoherence process. We compare the approximation with numerically obtained solutions of the exact master equation. These calculations indicate that the error between the approximate and the exact evolution decreases with increasing run-time at a rate more or less independent of the strength of the decoherence process. This result suggests that the approximation scheme may have a wider range of applicability than the weak open-system limit.
\section{Introduction} The Muskat equation is an important model in the analysis of free surface flows, which describes the dynamics of the interface separating two fluids whose velocities obey Darcy's law (\cite{darcy1856fontaines,Muskat}). Its two main features are that it is a fractional parabolic equation and a highly nonlinear equation. These two features are shared by several equations which have attracted a lot of attention in recent years, like the surface quasi-geostrophic equation, the Hele-Shaw equation or the fractional porous media equation, to name a few. Among these equations, a specificity of the Muskat equation is that it admits a beautiful compact formulation in terms of finite differences, as observed by C\'ordoba and Gancedo~\cite{CG-CMP}. The latter formulation makes it possible to study the Cauchy problem by means of tools at the interface of harmonic analysis and nonlinear partial differential equations. In this direction, we are very much influenced by the recent works by Constantin, C{\'o}rdoba, Gancedo, Rodr{\'\i}guez-Piazza and Strain~\cite{CCGRPS-JEMS2013,CCGRPS-AJM2016}, C\'ordoba and Lazar~\cite{Cordoba-Lazar-H3/2} and Gancedo and Lazar~\cite{Gancedo-Lazar-H2}. Our goal is to introduce for the Muskat problem an approach based on a logarithmic correction to the usual Gagliardo semi-norms, adapted to both the fractional and nonlinear features of the equation, following earlier works in~\cite{Alazard-Lazar,BN18a,BN18b,BN18c,BN18d,Ng}. Our main result is stated after we introduce some notations, but its main corollary can be expressed as follows: one can study the Cauchy problem in an almost critical Sobolev space, allowing initial data which are not Lipschitz. \clearpage \subsection{The Muskat equation} Consider the dynamics of a time-dependent curve $\Sigma(t)$ separating two $2D$-domains $\Omega_1(t)$ and $\Omega_2(t)$.
Assuming that $\Sigma(t)$ is the graph of some function, we introduce the following notations \begin{align*} \Omega_1(t)&=\left\{ (x,y)\in \mathbb{R}\times \mathbb{R}\,;\, y>f(t,x)\right\},\\ \Omega_2(t)&=\left\{ (x,y)\in \mathbb{R}\times \mathbb{R}\,;\, y<f(t,x)\right\},\\ \Sigma(t)&=\left\{ (x,y)\in \mathbb{R}\times \mathbb{R}\,;\, y=f(t,x)\right\}. \end{align*} Assume that each domain~$\Omega_j$, $j=1,2$, is occupied by an incompressible fluid with constant density~$\rho_j$ and denote by $\rho=\rho(t,x)$ the function with value $\rho_j$ for $x\in\Omega_j(t)$. We assume that $\rho_2>\rho_1$, so that the heavier fluid is underneath the lighter one. Then the motion is determined by the incompressible porous media equations, where the velocity field $v$ is given by Darcy's law: \begin{equation}\label{MuskatrhovP} \left\{ \begin{aligned} &\partial_t\rho+\cnx(\rho v)=0,\\ &\cnx v=0,\\ &v+\nabla (P+\rho gy)=0, \end{aligned} \right. \end{equation} where $g$ is the acceleration of gravity. Changes of unknowns, reducing the problem~\eqref{MuskatrhovP} to an evolution equation for the free surface parametrization, have been known for quite some time (see~\cite{CaOrSi-SIAM90,EsSi-ADE97,PrSi-book,SCH2004}). This approach was further developed by C\'ordoba and Gancedo~\cite{CG-CMP}, who obtained a beautiful compact formulation of the Muskat equation. Indeed, they showed that the Muskat problem is equivalent to the following equation for the free surface elevation: \begin{align}\label{eq2.1} \partial_tf=\frac{\rho}{2\pi}\pv \int_\mathbb{R}\frac{\partial_x\Delta_\alpha f}{1+\left(\Delta_\alpha f\right)^2}\diff \!
\alpha, \end{align} where the integral is understood in the sense of principal values, $\rho=\rho_2-\rho_1$ is the difference of the densities of the two fluids and $\Delta_\alpha f$ is the slope \begin{align}\label{eq2.2} \Delta_\alpha f(t,x)=\frac{f(t,x)-f(t,x-\alpha)}{\alpha}\cdot \end{align} Since $\rho_2>\rho_1$ by assumption, we may set $\rho=2$ without loss of generality. A key feature of this problem is that~\eqref{eq2.1} is preserved by the change of unknowns: $$ f(t,x)\mapsto \frac{1}{\lambda}f\left(\lambda t,\lambda x\right). $$ Hence, the two natural critical spaces for the initial data are the homogeneous spaces $$ \dot{H}^{\frac{3}{2}}(\mathbb{R}),\quad \dot{W}^{1,\infty}(\mathbb{R}). $$ The analysis of the Cauchy problem for the Muskat equation is now well developed, including global existence results under mild smallness assumptions and blow-up results for some large enough initial data. Local well-posedness results go back to the works of Yi~\cite{Yi2003}, Ambrose~\cite{Ambrose-2004,Ambrose-2007}, C\'ordoba and Gancedo~\cite{CG-CMP}, C\'ordoba, C\'ordoba and Gancedo~\cite{CCG-Annals}, Cheng, Granero-Belinch\'on, Shkoller~\cite{Cheng-Belinchon-Shkoller-AdvMath}. Then local well-posedness results were obtained in the sub-critical spaces by Constantin, Gancedo, Shvydkoy and Vicol~\cite{CGSV-AIHP2017} for initial data in the Sobolev space $W^{2,p}(\mathbb{R})$ for some $p>1$, and Matioc~\cite{Matioc1,Matioc2} for initial data in $H^s(\mathbb{R})$ with $s > 3/2$ (see also~\cite{Alazard-Lazar,Nguyen-Pausader}). Since the Muskat equation is parabolic, the proof of the local well-posedness results also gives global well-posedness results under a smallness assumption, see Yi~\cite{Yi2003}. 
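For the reader's convenience, let us verify this invariance directly on the equation (with the normalization $\rho=2$); the computation below is a routine check which also explains why the two spaces above are critical.

```latex
% Set f_\lambda(t,x)=\lambda^{-1}f(\lambda t,\lambda x). From \eqref{eq2.2},
%   \Delta_\alpha f_\lambda(t,x)=\Delta_{\lambda\alpha}f(\lambda t,\lambda x),
% hence \partial_x\Delta_\alpha f_\lambda(t,x)
%        =\lambda\,(\partial_x\Delta_{\lambda\alpha}f)(\lambda t,\lambda x).
% The substitution \beta=\lambda\alpha (so \diff\!\alpha=\lambda^{-1}\diff\!\beta) gives
\frac{1}{\pi}\pv\int_\mathbb{R}
   \frac{\partial_x\Delta_\alpha f_\lambda}{1+\left(\Delta_\alpha f_\lambda\right)^2}\diff \! \alpha
 = \frac{1}{\pi}\pv\int_\mathbb{R}
   \frac{(\partial_x\Delta_\beta f)(\lambda t,\lambda x)}
        {1+\big((\Delta_\beta f)(\lambda t,\lambda x)\big)^2}\diff \! \beta
 = (\partial_t f)(\lambda t,\lambda x)
 = \partial_t f_\lambda(t,x).
% Moreover, the scaling leaves the two semi-norms unchanged:
%   \| f_\lambda(t,\cdot)\|_{\dot H^{3/2}} = \| f(\lambda t,\cdot)\|_{\dot H^{3/2}},
%   \| \partial_x f_\lambda(t,\cdot)\|_{L^\infty}
%      = \| \partial_x f(\lambda t,\cdot)\|_{L^\infty}.
```

This is why $\dot{H}^{3/2}(\mathbb{R})$ and $\dot{W}^{1,\infty}(\mathbb{R})$ are exactly the spaces whose norms are scaling-invariant.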
The first global well-posedness results under mild smallness assumptions, namely assuming that the Lipschitz semi-norm is smaller than $1$, were obtained by Constantin, C{\'o}rdoba, Gancedo, Rodr{\'\i}guez-Piazza and Strain~\cite{CCGRPS-AJM2016} (see also \cite{CGSV-AIHP2017,PSt}). On the other hand, there are blow-up results for some large enough data by Castro, C\'{o}rdoba, Fefferman, Gancedo and L\'opez-Fern\'andez~(\cite{CCFG-ARMA-2013,CCFG-ARMA-2016,CCFGLF-Annals-2012}). They prove the existence of solutions such that at time $t=0$ the interface is a graph, at a later time $t_1>0$ the interface is no longer a graph, and then at a subsequent time $t_2>t_1$ the interface is $C^3$ but not $C^4$. The previous discussion raises the question of the possible existence of a criterion on the slopes of the solutions which would force them to enter, or prevent them from entering, the unstable regime where the slope is infinite. Surprisingly, it is possible to solve the Cauchy problem for initial data whose slope can be arbitrarily large. Deng, Lei and Lin in~\cite{DLL} obtained the first result in this direction, under the assumption that the initial data are monotone. Cameron \cite{Cameron} proved the existence of a modulus of continuity for the derivative, and hence a global existence result assuming only that the product of the maximal and minimal slopes is bounded by $1$; thereby allowing arbitrarily large slopes too (recently, Abedin and Schwab also obtained the existence of a modulus of continuity in~\cite{Abedin-Schwab-2020} via Krylov-Safonov estimates). Then, by using a new formulation of the Muskat equation involving oscillatory integrals, C\'ordoba and Lazar established in \cite{Cordoba-Lazar-H3/2} that the Muskat equation is globally well-posed in time, assuming only that the initial data are sufficiently smooth and that the $\dot H^{3/2}(\mathbb{R})$-norm is small enough. This result was extended to the 3D case by Gancedo and Lazar~\cite{Gancedo-Lazar-H2}.
Let us also quote papers by Vazquez~\cite{Vazquez-DCDS}, Granero-Belinch{\'o}n and Scrobogna~\cite{Granero-Scrobogna} for related global existence results for different equations. The existence and possible non-uniqueness of weak solutions has also been thoroughly studied (we refer the reader to~\cite{Brenier2009,cordoba2011lack,szekelyhidi2012relaxation,castro2016mixing,forster2018piecewise,noisette2020mixing}). \subsection{Fractional logarithmic spaces} Based on the discussion above, one of the main questions left open is to solve the Cauchy problem for the Muskat equation for initial data which are not Lipschitz. Indeed, for such data, the slope is not only arbitrarily large but can be infinite. To prove the existence of such solutions, the main difficulties one has to cope with are the following. Firstly, there is a degeneracy in the parabolic behavior when $f_x$ is not controlled (this is easily seen by looking at the energy estimate~\eqref{i5}~below: when $f_x$ is not controlled, one does not control the $L^2_{t,x}$-norm of the derivatives). Secondly, in addition to this degeneracy, one cannot apply classical nonlinear estimates. Indeed, the latter require controlling the $L^\infty$-norm of some factors, which here amounts to controlling the $L^\infty$-norm of the slopes~$\Delta_\alpha f$, that is, the Lipschitz norm of~$f$. To overcome these difficulties, we will use two different kinds of arguments, following earlier works in~\cite{Alazard-Lazar,BN18a,BN18b}. Firstly, we will prove estimates valid in critical spaces, by exploiting various cancellations as well as specific inequalities. Secondly, we will perform energy estimates in some variants of the classical Sobolev spaces, allowing one to control a {\em fraction} of a logarithmic derivative. More precisely, the idea followed in this paper is to estimate the following norms.
\begin{definition}\label{defi:1}Given $a\ge 0$ and $s\ge 0$, the fractional logarithmic space $\mathcal{H}^{s,a}(\mathbb{R})$ consists of those functions $g\in L^2(\mathbb{R})$ such that the following norm is finite: $$ \left\Vert g\right\Vert_{\mathcal{H}^{s,a}}^2=\int_{\mathbb{R}} \left(1+|\xi|^2\right)^s\left( \log(4+|\xi|)\right)^{2a}\left\vert \hat{g}(\xi)\right\vert^2\diff \! \xi. $$ \end{definition} \begin{remark} $(i)$ Since the formulation of the Muskat equation involves the finite differences of $f$, it is important to notice that these semi-norms can be defined in terms of finite differences. We will see that, if $s\in (0,2)$, $$ \left\Vert g\right\Vert_{\mathcal{H}^{s,a}}^2\sim \left\Vert g\right\Vert_{L^2(\mathbb{R})}^2+ \iint_{\mathbb{R}^2} \frac{\left\vert 2g(x)-g(x+h)-g(x-h)\right\vert^2}{|h|^{2s}} \left[\log\left(4+\frac{1}{|h|^2}\right)\right]^{2a}\frac{\diff \! x\diff \! h}{|h|}, $$ and if $s=0$ \begin{align*} \left\Vert g\right\Vert_{\mathcal{H}^{0,a}}^2&\sim \left\Vert g\right\Vert_{L^2(\mathbb{R})}^2\\&+ \iint_{\mathbb{R}^2} \mathbf{1}_{|h|<\frac{1}{2}} \left\vert 2g(x)-g(x+h)-g(x-h)\right\vert^2 \left[\log\left(4+\frac{1}{|h|^2}\right)\right]^{-1+2a}\frac{\diff \! x\diff \! h}{|h|}\cdot \end{align*} The latter norms were introduced in~\cite{BN18a} for $s\in [0,1)$ (with the symmetric difference replaced by $g(x+h)-g(x)$). $(ii)$ Here the word `fractional' is used to emphasize that $a$ belongs to $(0,1]$. This is important in view of~\eqref{n40} below. \end{remark} We consider initial data in $\mathcal{H}^{3/2,a}(\mathbb{R})$ for some $a\ge 0$. Notice that the latter spaces lie between the Sobolev spaces $H^{3/2}(\mathbb{R})$ and $H^{3/2+\varepsilon}(\mathbb{R})$: $$ \forall \varepsilon>0,~\forall a\ge 0,\quad H^{\frac{3}{2}+\varepsilon}(\mathbb{R})\subset \mathcal{H}^{\frac{3}{2},a}(\mathbb{R})\subset\mathcal{H}^{\frac{3}{2},0}(\mathbb{R})=H^{\frac{3}{2}}(\mathbb{R}).
$$ The definition of the Sobolev spaces is recalled below in \eqref{defi:Sobolev}. For our purposes, the most important thing to note is that \begin{equation}\label{n40} \mathcal{H}^{\frac{3}{2},a}(\mathbb{R})\subset W^{1,\infty}(\mathbb{R}) \quad \text{if and only if}\quad a>\frac{1}{2}\cdot \end{equation} It follows from~\eqref{n40} that there is a key dichotomy between the cases $a\leq 1/2$ and $a>1/2$. Loosely speaking, for $a>1/2$, the analysis of the Cauchy problem in $\mathcal{H}^{3/2,a}(\mathbb{R})$ is expected to be similar to the one in sub-critical spaces $H^s(\mathbb{R})$ with $s>3/2$. By contrast, for $a\leq 1/2$, the same problem is expected to be much more involved, since one cannot control the $W^{1,\infty}$-norm of $f$ (which is ubiquitous in the estimates of nonlinear quantities involving gradients or the slopes $\Delta_\alpha f$). \subsection{Main results} Once the fractional logarithmic spaces have been introduced, the question of solving the Cauchy problem for non-Lipschitz initial data can be made precise. Namely, our goal is to prove that the Cauchy problem is well-posed on $\mathcal{H}^{3/2,a}(\mathbb{R})$ for some $a\leq \frac{1}{2}$. Our main results assert that in fact one can solve the Cauchy problem down to $a= \frac{1}{3}$. Recall that we set $\rho_2-\rho_1=2$, so that the Muskat equation~\eqref{eq2.1} reads \begin{align}\label{eq2.1b} \partial_tf=\frac{1}{\pi}\pv \int_\mathbb{R}\frac{\partial_x\Delta_\alpha f}{1+\left(\Delta_\alpha f\right)^2}\diff \! \alpha. \end{align} \begin{theorem}[local well-posedness]\label{theo:main} For any initial data $f_0$ in $\mathcal{H}^{\frac{3}{2},\frac{1}{3}}(\mathbb{R})$, there exists a positive time $T$ such that the Cauchy problem for the Muskat equation~\eqref{eq2.1b} has a unique solution $$ f\in C^0\big([0,T];\mathcal{H}^{\frac{3}{2},\frac{1}{3}}(\mathbb{R})\big) \cap L^2\big(0,T;H^{2}(\mathbb{R})\big). $$ \end{theorem} \begin{remark}\label{R:1.4} The case $a=\frac{1}{3}$ corresponds to a limiting case.
In particular, the time of existence does not depend only on the norm of $f_0$ but also on $f_0$ itself. More precisely, we will estimate the solution for a norm whose definition depends on~$f_0$ (this can be understood by looking at Lemma~$\ref{L:critical}$ and Remark~$\ref{R:3.8}$). \end{remark} We now give a global in time well-posedness result under a smallness condition on the following quantity: \begin{equation}\label{n141} \begin{aligned} \left\Vert f_0\right\Vert_{\frac{3}{2},\frac{1}{3}}^2&\mathrel{:=} \int_{\mathbb{R}} |\xi|^{3} \log(4+|\xi|)^{\frac{2}{3}}\big\vert \hat{f_0}(\xi)\big\vert^2\diff \! \xi\\ &\sim\iint_{\mathbb{R}^2} \frac{\left\vert 2f_0(x)-f_0(x+h)-f_0(x-h)\right\vert^2}{|h|^{3}} \left(\log\left(4+\frac{1}{|h|^2}\right)\right)^{\frac{2}{3}}\frac{\diff \! x\diff \! h}{|h|}\cdot \end{aligned} \end{equation} \begin{theorem}[global well-posedness]\label{theo:main2} There exists a positive constant $c_0$ such that, for all initial data $f_0$ in $\mathcal{H}^{3/2,1/3}(\mathbb{R})$ satisfying \begin{equation}\label{i10} \left\Vert f_0\right\Vert_{\frac{3}{2},\frac13}\left( \left\Vert f_0\right\Vert_{L^2}^2+1\right) \leq c_0, \end{equation} the Cauchy problem for the Muskat equation~\eqref{eq2.1b} has a unique solution $$ f\in C^0\big([0,+\infty);\mathcal{H}^{\frac{3}{2},\frac{1}{3}}(\mathbb{R})\big) \cap L^2\big(0,+\infty;H^{2}(\mathbb{R})\big). $$ \end{theorem} \subsection{Strategy of the proof and plan of the paper} To prove Theorem~\ref{theo:main}, the key point is to work with critical-type norms. In this direction, we will prove some technical estimates which we think are of independent interest. To state the main consequence of the latter, let us introduce a bit of notation. We denote by $\la D\ra^{s,\phi}$ the Fourier multiplier $(-\Delta)^{s/2}\phi(D_x)$. 
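To fix ideas before stating the estimate, note that with the particular choice $\phi(\xi)=(\log(4+\vert\xi\vert))^{1/3}$ this notation recovers, up to a multiplicative constant, the quantity~\eqref{n141}:

```latex
% In Fourier variables the multiplier acts as
\widehat{\la D\ra^{\frac{3}{2},\phi}f}(\xi)
  = \vert\xi\vert^{\frac{3}{2}}\big(\log(4+\vert\xi\vert)\big)^{\frac{1}{3}}\hat f(\xi),
% so that, by Plancherel's theorem,
\big\Vert \la D\ra^{\frac{3}{2},\phi}f_0\big\Vert_{L^2}^2
  \sim \int_{\mathbb{R}} \vert\xi\vert^{3}\big(\log(4+\vert\xi\vert)\big)^{\frac{2}{3}}
       \big\vert\hat{f_0}(\xi)\big\vert^2 \diff \! \xi
  = \left\Vert f_0\right\Vert_{\frac{3}{2},\frac{1}{3}}^2 .
```

This is the weight for which the energy estimates below will be applied.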
Then, our main technical estimate asserts that, for any $\phi$, there holds \begin{equation}\label{i5} \frac{\diff}{\dt} \big\Vert \la D\ra^{\frac{3}{2},\phi}f\big\Vert_{L^2}^2 + \int_\mathbb{R} \frac{\big\vert\la D\ra^{2,\phi}f\big\vert^2}{1+(\partial_x f)^2}\diff \! x\leq C Q(f) \big\Vert\la D\ra^{2,\phi}f\big\Vert_{L^2}, \end{equation} where \begin{align*} Q(f)&= \left(\left\Vert f\right\Vert_{\dot H^2}+\left\Vert f\right\Vert_{\dot H^{\frac{7}{4}}}^2\right) \big\Vert\la D\ra^{\frac{3}{2},\phi}f\big\Vert_{L^2} +\big\Vert\la D\ra^{\frac74,\phi}f\big\Vert_{L^2} \left\Vert f\right\Vert_{H^{\frac74}}\\ &\quad+\left(\left\Vert f\right\Vert_{H^{\frac{19}{12}}}^{3/2}+\left\Vert f\right\Vert_{\dot H^{\frac74}}^{1/2}\right) \big\Vert\la D\ra^{\frac{7}{4},\phi^{2}}f\big\Vert^{1/2}_{L^2} \left\Vert f\right\Vert_{\dot H^{\frac74}}. \end{align*} The crucial point is that the quantity $Q(f)$ does not involve the $W^{1,\infty}$-norm of~$f$. To prove~\eqref{i5}, notable technical aspects include the proof of new commutator estimates and the systematic use of Triebel-Lizorkin norms. We develop these tools in~$\S\ref{S:2}$. With these results in hand, we begin in~$\S\ref{S:3}$ by introducing a sequence of approximate equations by a Galerkin-type decomposition, which admit approximate solutions $(f_n)_{n\in \mathbb{N}}$. Then we prove that the estimate \eqref{i5} holds for these approximate systems, uniformly in~$n$. We then conclude the proof of Theorem~\ref{theo:main} in two steps, by applying~\eqref{i5} with some special choice for $\phi$, satisfying $\phi(\xi)\sim (\log(4+\left\vert\xi\right\vert))^{a}$. As already mentioned, one of the main difficulties is that the factor $1+(\partial_x f)^2$ which appears on the left-hand side of~\eqref{i5} is not controlled in $L^\infty_{t,x}$.
To overcome this difficulty, we prove some new interpolation inequalities to estimate the factor $1+(\partial_x f)^2$, using the fractional logarithmic norms, by some quantity which is not bounded in time. To be more specific, assume that $\phi(\xi)\sim (\log(4+\left\vert\xi\right\vert))^{a}$, and introduce the quantities \begin{align*} A(t)&=\big\Vert \la D\ra^{\frac{3}{2},\phi}f(t)\big\Vert_{L^2}^2,\\ B(t)&=\big\Vert \la D\ra^{2,\phi}f(t)\big\Vert_{L^2}^2. \end{align*} In \S\ref{S:3.2}, we will prove, after a fair amount of bookkeeping, an estimate of the form $$ \frac{\diff}{\dt} A(t)+C_1\delta(t)B(t)\leq C_2 \left(\log\left(\frac{B(t)}{A(t)}\right)\right)^{-a} \left( \sqrt{A(t)}+A(t) \right)B(t), $$ where $$ \delta(t)\sim\left(1+ \log\left(4+\frac{B(t)}{A(t)+ \Vert f_0\Vert_{L^2}^2}\right)^{1-2a}\left( A(t)+ \Vert f_0\Vert_{L^2}^2\right)\right)^{-1}. $$ Notice that $\delta(t)$ is not bounded from below, so that the left-hand side is insufficient to control $B(t)$. However, to apply a Gronwall type inequality it will suffice to have $a> 1-2a$, that is $ a> \frac{1}{3}$, see~\S\ref{S:3.3}. The limiting case $a=\frac{1}{3}$ will be studied in~\S\ref{S:critical} by introducing a more general weight $\phi$ which is not a fraction of a logarithm, and whose definition depends on the initial data itself. This gives uniform bounds in $L^\infty_t(\mathcal{H}^{s,a}(\mathbb{R}))$ for any $a\ge \frac{1}{3}$, for the approximate solutions $f_n$, from which we deduce the existence of a solution to the Muskat equation by extracting a subsequence. The uniqueness is proved in~\S\ref{S:3.5} by similar arguments, using again some delicate interpolation inequalities to handle the lack of Lipschitz control. \subsection*{Notations} Most notations are introduced in the next section. In particular, the definitions of Sobolev, Besov and Triebel-Lizorkin spaces are recalled in~\S\ref{S:2.1}.
To avoid possible confusions in the notations, we mention that, throughout the paper: \begin{itemize} \item We will sometimes write $\log (4+|\xi|)^a$ as a short notation for $(\log(4+|\xi|))^a$. \item All functions are assumed to be real-valued in this paper. Nevertheless, we will often use the complex-modulus notation in writing $\left\vert \Delta_\alpha f\right\vert^2$ or $\left\vert \alpha\right\vert^2$ in many identities, since we think it makes the latter easier to read. \item Given~$0\leq t\leq T$, a normed space~$X$ and a function~$\varphi=\varphi(t,x)$ defined on~$[0,T]\times \mathbb{R}$ with values in~$X$, we denote by~$\varphi(t)$ the function~$x\mapsto \varphi(t,x)$, and $\left\Vert \varphi\right\Vert_X$ is a short notation for the time-dependent function $t\mapsto \left\Vert \varphi(t)\right\Vert_X$. \end{itemize} \section{Nonlinearity and fractional derivatives in the Muskat problem}\label{S:2} We now develop the linear and nonlinear tools needed to study the Muskat problem in the spaces $\mathcal{H}^{3/2,a}(\mathbb{R})$. The first paragraph reviews various notations and standard results about Besov and Triebel-Lizorkin spaces, which serve as the required background for what follows. Then we study in $\S\ref{S:2.2}$ Fourier multipliers of the form $\la D\ra^s \phi(\left\vert D_x\right\vert)$ for some symbols $\phi(|\xi|)$ which generalize the fractional logarithm $(\log(4+\left\vert \xi\right\vert))^a$ introduced in the introduction. In particular we give a characterization of the space $$ \mathcal{H}^{s,\phi}(\mathbb{R})=\{ f\in L^2(\mathbb{R})\, :\, \la D\ra^s \phi(\left\vert D_x\right\vert)f\in L^2(\mathbb{R})\}, $$ in terms of modified Gagliardo semi-norms. Then in \S\ref{S:2.3} we recall the paralinearization formula for the Muskat equation from~\cite{Alazard-Lazar}.
The core of this section is \S\ref{S:2.4}, in which we prove technical ingredients needed to estimate the coefficients of the latter paralinearization formula in terms of the $\mathcal{H}^{3/2,\phi}(\mathbb{R})$-norms. \subsection{Triebel-Lizorkin norms}\label{S:2.1} This work builds on the analysis of the Muskat equation by C{\'o}rdoba and Lazar~\cite{Cordoba-Lazar-H3/2} and Alazard and Lazar~\cite{Alazard-Lazar}, which introduced the use of techniques related to Besov spaces in this problem (see also Gancedo and Lazar~\cite{Gancedo-Lazar-H2}). Here we will also use Triebel-Lizorkin spaces. For ease of reading, we recall various notations and results about these spaces, which will be used continually in the rest of the paper. Given a function $f\colon\mathbb{R}\to\mathbb{R}$, an integer $m\in\mathbb{N}\setminus\{0\}$ and a real number $h\in\mathbb{R}$, we define the finite differences $\delta_h^mf$ as follows: $$ \delta_hf(x)=f(x)-f(x-h),\qquad \delta_h^{m+1}f=\delta_h(\delta_h^mf). $$ \begin{definition} Consider an integer $m\in\mathbb{N}\setminus\{0\}$, a real number $s\in [m-1,m)$ and two real numbers $(p,q)$ in $[1,\infty)^2$. The homogeneous Triebel-Lizorkin space $\dot F^{s}_{p,q}(\mathbb{R})$ consists of those tempered distributions $f$ whose Fourier transform is integrable near the origin and such that \begin{equation}\label{n5} \left\Vert f\right\Vert_{\dot F^s_{p,q}} =\left(\int_{\mathbb{R}}\left(\int_{\mathbb{R}} \left\vert\delta_h^mf(x)\right\vert^q \frac{\diff \! h}{|h|^{1+qs}}\right)^{\frac{p}{q}}\diff \! x\right)^{\frac{1}{p}}<+\infty. \end{equation} \end{definition} \begin{remark}$i)$ We refer to Triebel~\cite[$\S 2.3.5$]{Triebel-TFS} for historical comments about these spaces, and Triebel~\cite[section~$3$]{Triebel1988} for the equivalence between this definition and other ones including the Littlewood-Paley decomposition. 
$ii)$ For the sake of comparison, recall that the Besov space $\dot B^{s}_{p,q}(\mathbb{R})$ consists of those tempered distributions $f$ whose Fourier transform is integrable near the origin and such that \begin{equation}\label{defi:B-spaces} \left\Vert f \right\Vert_{\dot B^{s}_{p,q}} = \left(\int_{\mathbb{R}}\left( \int_{\mathbb{R}}\left\vert\delta_h^mf(x)\right\vert^p\diff \! x\right)^{\frac{q}{p}} \frac{\diff \! h}{|h|^{1+qs}}\right)^{\frac{1}{q}}<+\infty. \end{equation} Notice that Besov defined his spaces in this way, that is, with finite differences, see~\cite{Besov}. \end{remark} For easy reference, we recall two results allowing one to compare the Triebel-Lizorkin semi-norms to the homogeneous and non-homogeneous Sobolev norms, which are defined by \begin{equation}\label{defi:Sobolev} \left\lVert u \right\rVert_{\dot{H}^{\sigma}}^{2} \mathrel{:=} (2\pi)^{-1} \int_{\mathbb{R}} \left\vert\xi\right\vert^{2\sigma} |\hat{u}(\xi)|^2\diff \! \xi,\quad \left\lVert u \right\rVert_{H^\sigma}^{2} \mathrel{:=} (2\pi)^{-1} \int_{\mathbb{R}} (1+|\xi|^2)^{\sigma} |\hat{u}(\xi)|^2\diff \! \xi, \end{equation} where~$\widehat{u}$ is the Fourier transform of~$u$. Recall that $\left\Vert \cdot\right\Vert_{\dot H^{s}}$ and $\left\Vert \cdot\right\Vert_{\dot{F}^s_{2,2}}$ are equivalent. Moreover, for $s\in (0,1)$, \begin{equation}\label{SE0} \left\Vert u\right\Vert_{\dot H^{s}}^2=\frac{1}{4\pi c(s)}\left\Vert u\right\Vert_{\dot{F}^s_{2,2}}^2 \quad\text{with}\quad c(s)=\int_\mathbb{R} \frac{1-\cos(h)}{\left\vert h\right\vert^{1+2s}}\diff \! h. \end{equation} We will also extensively use the following Sobolev embeddings: for any $2<p_1< \infty$ and any $q\leq\infty$, if $s-\frac{1}{2}=s_1-\frac{1}{p_1}$ then \begin{equation}\label{FB} \left\Vert f\right\Vert_{\dot F^{s_1}_{p_1,q}(\mathbb{R})}\leq C \left\Vert f\right\Vert_{\dot H^{s}(\mathbb{R})}.
\end{equation} \subsection{Some special Fourier multipliers}\label{S:2.2} Let us introduce a bit of notation which will be used continually in the rest of the paper. \begin{definition}\label{defi:D} Consider a real number $s\ge 0$ and a function $\phi\colon [0,\infty)\to (0,\infty)$ satisfying the doubling condition $\phi(2r)\leq c_0\phi(r)$ for any $r\geq 0$ and some constant $c_0>0$. Then we may define $|D|^{s,\phi}$ as the Fourier multiplier with symbol $|\xi|^s\phi(|\xi|)$. More precisely, \begin{equation*} \mathcal{F}( |D|^{s,\phi}f)(\xi)=|\xi|^s\phi(|\xi|) \mathcal{F}(f)(\xi). \end{equation*} \end{definition} We shall consider operators $\la D\ra^{s,\phi}$ for some special functions $\phi$ depending on some function $\kappa\colon [0,\infty)\to (0,\infty)$, of the form \begin{equation}\label{n10} \phi(\lambda)=\int_{0}^{\infty}\frac{1-\cos(h)}{h^2} \kappa\left(\frac{\lambda}{h}\right) \diff \! h, \quad \text{for }\lambda\ge 0. \end{equation} Before we explain the reason for introducing these functions, let us clarify the assumptions on $\kappa$ which will be needed later on. \begin{assumption}\label{A:kappa} Throughout this paper, we always require that $\kappa\colon[0,\infty) \to [1,\infty)$ satisfies the following three assumptions: \begin{enumerate}[$({{\rm H}}1)$] \item\label{H1} $\kappa$ is increasing and $\kappa(r)\to\infty$ as $r\to+\infty$; \item\label{H2} there is a positive constant $c_0$ such that $\kappa(2r)\leq c_0\kappa(r)$ for any $r\geq 0$; \item\label{H3} the function $r\mapsto \kappa(r)/\log(4+r)$ is decreasing on $[0,\infty)$. \end{enumerate} \end{assumption} \begin{remark} $i)$ The main example is the function $\kappa_a(r)=(\log(4+r))^a$ with $a\in [0,1]$. However, we shall see that, to include the critical case $a=\frac{1}{3}$ in Theorem~$\ref{theo:main}$, we must consider more general functions (see Section~$\ref{S:critical}$).
$ii)$ To clarify notations, we mention that we choose the natural logarithm so that $\log(e)=1$. In assumption~$({\rm H}\ref{H3})$, the choice of the constant $4$ is purely technical (it is used only to prove~\eqref{n97} below). \end{remark} There is one observation that will be useful below. We will prove that $\phi\sim \kappa$, which means that \begin{equation}\label{n60} c\kappa(\lambda)\leq \phi(\lambda)\leq C \kappa(\lambda), \end{equation} for some positive constants $c,C$. In particular, $\phi$ satisfies the doubling condition $\phi(2r)\leq c_0\phi(r)$ and we may define the Fourier multiplier $\la D\ra^{s,\phi}=\la D\ra^{s}\phi(\left\vert D_x\right\vert)$, as in Definition~\ref{defi:D}. Although $\kappa$ and $\phi$ are equivalent, we will use them for different purposes. We will use $\phi$ when we prefer to work with the frequency variable (Fourier analysis), while we use $\kappa$ when the physical variable is more convenient. The next two results will be useful later on to freely switch computations between the frequency and physical settings. \begin{lemma}\label{Z12} Assume that $\phi$ is as defined in~\eqref{n10} for some function $\kappa$ satisfying Assumption~$\ref{A:kappa}$. Then, for all $g\in \mathcal{S}(\mathbb{R})$, there holds \begin{equation*} \la D\ra^{1,\phi}g(x)=\frac{1}{4}\int_{\mathbb{R}}\frac{2g(x)-g(x+h)-g(x-h)}{h^2} \kappa\left(\frac{1}{|h|}\right) \diff \! h. \end{equation*} \end{lemma} \begin{proof} Notice that the Fourier transform of the function \begin{align*} \int_\mathbb{R} \frac{2g(x)-g(x+h)-g(x-h)}{h^2}\kappa\left(\frac{1}{|h|}\right)\diff \! h, \end{align*} is given by $$ \left(\int_\mathbb{R} \frac{2-2\cos(h\xi)}{h^2}\kappa\left(\frac{1}{|h|}\right) \diff \! h\right) \hat g(\xi). $$ Therefore $$ \left(4|\xi| \int_0^\infty \frac{1-\cos(h)}{h^2} \kappa\left(\frac{|\xi|}{|h|}\right) \diff \!
h\right) \hat{g}(\xi)=4\phi(|\xi|) \left\vert\xi\right\vert\hat{g}(\xi) =4\widehat{\la D\ra^{1,\phi}g}(\xi), $$ which is the desired result. \end{proof} The following result states that $\phi$ and $\kappa$ are equivalent and also gives the equivalence of some semi-norms. \begin{proposition} \label{Z9} Assume that $\phi$ is as defined in~\eqref{n10} for some function $\kappa$ satisfying Assumption~$\ref{A:kappa}$. $i)$ There exist two constants $c,C>0$ such that, for all $\lambda\ge 0$, \begin{equation}\label{n60b} c\kappa(\lambda)\leq \phi(\lambda)\leq C \kappa(\lambda). \end{equation} $ii)$ Given $g\in \mathcal{S}(\mathbb{R})$, define the semi-norm $$ \left\Vert g\right\Vert_{s,\kappa}=\left(\iint_{\mathbb{R}^2} \left\vert 2g(x)-g(x+h)-g(x-h)\right\vert^2 \left(\frac{1}{|h|^{s}}\kappa\left(\frac{1}{|h|}\right)\right)^2\frac{\diff \! x\diff \! h}{|h|}\right)^\frac{1}{2}. $$ Then, for all $1<s<2$, there exist two constants $c,C>0$ such that, for all $g\in \mathcal{S}(\mathbb{R})$, \begin{equation*} c\int_{\mathbb{R}} \left\vert \la D\ra^{s,\phi}g(x)\right\vert^2\diff \! x\leq \left\Vert g\right\Vert_{s,\kappa}^2 \leq C\int_{\mathbb{R}}\left\vert \la D\ra^{s,\phi}g(x)\right\vert^2\diff \! x. \end{equation*} \end{proposition} \begin{proof}We begin by proving statement $ii)$. Let us introduce \begin{equation*} {K}(r)=\kappa^2\left(\frac{1}{r}\right)\frac{1}{r^{1+2s}}\qquad (r>0). \end{equation*} For any $h\in\mathbb{R}$, the Fourier transform of $x\mapsto 2g(x)-g(x+h)-g(x-h)$ is given by $(2-2\cos(\xi h))\hat{g}(\xi)$. Consequently, $$ \left\Vert g\right\Vert_{s,\kappa}^2=\iint_{\mathbb{R}^2}\left| 2g(x)-g(x+h)-g(x-h)\right|^2{K}(|h|)\diff \! x\diff \! h =\int_\mathbb{R} I(\xi)\big\vert \hat{g}(\xi)\big\vert^2 \diff \! \xi, $$ where $$ I(\xi)=\frac{2}{\pi}\int_\mathbb{R} ( 1-\cos(\xi h))^2 {K}\left(\left\vert h\right\vert\right) \diff \! h.
$$ We must prove that the integral $I(\xi)$ satisfies \begin{equation} \label{Z30} c|\xi|^{2s}\phi(|\xi|)^2\leq I(\xi)\leq C|\xi|^{2s}\phi(|\xi|)^2, \end{equation} for some constants $c,C$ independent of $\xi\in\mathbb{R}$. Let us prove the bound from above. To do so, we use the inequality $\left\vert 1-\cos(\theta)\right\vert\leq \min \{ 2,\theta^2\}$ for all $\theta\in \mathbb{R}$ to obtain $$ I(\xi)\leq \frac{8}{\pi}\int_{|h \xi|\ge 1}{K}\left(\left\vert h\right\vert\right) \diff \! h +\frac{2}{\pi}\int_{|h \xi|\leq 1}\xi^4h^4{K}\left(\left\vert h\right\vert\right) \diff \! h. $$ Now, since $\kappa$ is increasing by assumption, directly from the definition of ${K}$, we have $$ \int_{|h \xi|\ge 1}{K}\left(\left\vert h\right\vert\right) \diff \! h\leq \left(\kappa(|\xi|)\right)^2 \int_{|h \xi|\ge 1}\frac{\diff \! h}{|h|^{1+2s}}\lesssim \kappa^2(|\xi|)\left\vert \xi\right\vert^{2s}. $$ The estimate of the contribution of the integral over $\{|h \xi|\leq 1\}$ is more involved. To do so, we introduce the following decomposition of the integrand \begin{equation}\label{n105} \xi^4h^4{K}\left(\left\vert h\right\vert\right)=\xi^4h^4\kappa^2\left(\frac{1}{\left\vert h\right\vert}\right)\frac{1}{\left\vert h\right\vert^{1+2s}} =\pi_1\left(\left\vert h\right\vert\right)\pi_2\left(\left\vert h\right\vert\right)\pi_3\left(\left\vert h\right\vert\right)\frac{\xi^4}{|h|^{s-1}} \end{equation} where \begin{align*} \pi_1(r)&\mathrel{:=} \frac{\kappa^2\big(\frac{1}{r}\big)}{\log\big(4+\frac{1}{r}\big)^2},\quad &&\pi_2(r)\mathrel{:=}\frac{\log\big(4+\frac{1}{r}\big)^2}{\log\big(\lambda_0+\frac{1}{r^2}\big)^2},\\ \pi_3(r)&\mathrel{:=} r^{2-s}\left(\log\left(\lambda_0+\frac{1}{r^2}\right)\right)^{2}, &&\lambda_0=\exp\left(\frac{8}{2-s}\right).
\end{align*} By assumption on $\kappa$ (see~$({\rm H}\ref{H3})$ in Assumption~\ref{A:kappa}), the function $\pi_1$ is increasing and hence $$ \pi_1(|h|)\leq \frac{\kappa^2(|\xi|)}{\log(4+|\xi|)^2}\quad\text{for}\quad |h|\leq \frac{1}{|\xi|}\cdot $$ The function $\pi_2$ is bounded on $[0,+\infty)$ by some harmless constant depending only on $s$. Finally, we claim that the function $\pi_3$ is increasing. Indeed, \begin{align*} \frac{\diff}{\diff \! r}\pi_3(r)&=r^{1-s}\log\left(\lambda_0+\frac{1}{r^2}\right)^{2}\left[2-s-\frac{4}{(\lambda_0 r^2+1)\log\left(\lambda_0+\frac{1}{r^2}\right)}\right]\\&\geq r^{1-s}\log\left(\lambda_0+\frac{1}{r^2}\right)^{2}\left(2-s-\frac{4}{\log\left(\lambda_0\right)}\right)\\ &=\frac{2-s}{2}r^{1-s}\log\left(\lambda_0+\frac{1}{r^2}\right)^{2}>0. \end{align*} It follows that $$ \pi_3(|h|)\leq |\xi|^{s-2} \log\left(\lambda_0+|\xi|^2\right)^{2}\quad\text{for}\quad |h|\leq \frac{1}{|\xi|}\cdot $$ By combining these bounds on the factors $\pi_j$, we deduce from~\eqref{n105} that $$ \int_{|h \xi|\leq 1}\xi^4h^4{K}\left(\left\vert h\right\vert\right) \diff \! h \lesssim \left( \frac{\kappa(|\xi|)}{\log(4+|\xi|)} \right)^2|\xi|^{2+s} \log\left(\lambda_0+\left\vert\xi\right\vert^2\right)^{2} \int_{|h|\leq 1/|\xi|}\frac{\diff \! h}{|h|^{s-1}}. $$ Since $\log(4+|\xi|)\sim \log(\lambda_0+|\xi|^2)$ and since $0<s-1<1$, we deduce that $$ \int_{|h \xi|\leq 1}\xi^4h^4{K}\left(\left\vert h\right\vert\right) \diff \! h\lesssim (\kappa(|\xi|))^2 |\xi|^{2s}. $$ So, we get $I(\xi)\lesssim|\xi|^{2s}\phi(|\xi|)^2$. On the other hand, $$ I(\xi)\gtrsim \int_{\frac{2\pi}{5}\leq \xi h\leq \frac{3\pi}{5}} {K}\left(\left\vert h\right\vert\right) \diff \! h \gtrsim \left(\int_{\frac{2\pi}{5}\leq \xi h\leq \frac{3\pi}{5}} \diff \! h\right) \kappa^2(|\xi|)|\xi|^{1+2s} \gtrsim (\kappa(|\xi|))^2|\xi|^{2s}. $$ Therefore, we have proved \eqref{Z30}, which concludes the proof of statement $ii)$. It remains to prove statement $i)$.
The lower bound $\phi(\lambda)\ge c\kappa(\lambda)$ follows directly from the definition of $\phi$, by writing $$ \phi(\lambda)\ge \int_0^1 \frac{1-\cos(h)}{h^2}\kappa\left(\frac{\lambda}{h}\right) \diff \! h \ge \left(\int_0^1\frac{1-\cos(h)}{h^2}\diff \! h\right)\kappa(\lambda), $$ since $\kappa$ is increasing. To prove the upper bound, we split the integral into $\{h\leq 1\}$ and $\{h>1\}$ and then use similar arguments to those used above. \end{proof} \subsection{Paralinearization of the nonlinearity}\label{S:2.3} For a nonlinear evolution equation, considerable insight comes from being able to decompose the nonlinearity into several pieces having different roles. For a parabolic free boundary problem in fluid dynamics, one expects to extract from the nonlinearity at least two terms: \begin{enumerate}[$i)$] \item a convective term of the form $V\partial_x f$, \item an elliptic component of the form $\gamma \la D\ra^\alpha f$, \end{enumerate} for some coefficients $V$ and $\gamma$ and some index $\alpha\ge 0$, where as above $\la D\ra=(-\Delta)^{1/2}$ (see Definition~\ref{defi:D} with $s=1$ and $\phi=1$, or~\eqref{n3} below). To reach this goal, a standard strategy is to use paradifferential analysis, which consists in using a Littlewood-Paley decomposition to determine the relative significance of competing terms. For the Muskat equation, this idea was implemented independently in~\cite{Alazard-Lazar,Nguyen-Pausader}. In this paragraph, we recall the approach in~\cite{Alazard-Lazar} where the formulation of the Muskat equation in terms of finite differences is exploited to give such a paradifferential decomposition in a direct manner. Recall that the Muskat equation reads $$ \partial_tf=\frac{1}{\pi}\int_\mathbb{R}\frac{\partial_x\Delta_\alpha f}{1+\left(\Delta_\alpha f\right)^2}\diff \! \alpha. $$ Therefore, it can be written in the form $$ \partial_tf= \frac{1}{\pi}\int_\mathbb{R}\partial_x\Delta_\alpha f\diff \!
\alpha - \frac{1}{\pi}\int_\mathbb{R}\partial_x\Delta_\alpha f\frac{\left(\Delta_\alpha f\right)^2}{1+\left(\Delta_\alpha f\right)^2}\diff \! \alpha. $$ Let us now introduce some notation that will be used repeatedly in the rest of the paper. We define the singular integral operators \begin{equation}\label{n3} \mathcal{H}u=-\frac{1}{\pi}\mathrm{pv}\int_\mathbb{R}\Delta_\alpha u\diff \! \alpha\quad\text{and}\quad \la D\ra=\mathcal{H}\partial_x. \end{equation} Then the Muskat equation can be written in the form \begin{align}\label{main} \partial_tf+\la D\ra f = \mathcal{T}(f)f, \end{align} where $\mathcal{T}(f)$ is the operator defined by \begin{equation}\label{def:T(f)f} \mathcal{T}(f)g = -\frac{1}{\pi}\int_\mathbb{R}\left(\partial_x\Delta_\alpha g\right) \frac{\left(\Delta_\alpha f\right)^2}{1+\left(\Delta_\alpha f\right)^2}\diff \! \alpha. \end{equation} The desired decomposition of the nonlinearity alluded to above will be achieved by splitting the coefficient $$ F_\alpha\mathrel{:=}\frac{\left(\Delta_\alpha f\right)^2}{1+\left(\Delta_\alpha f\right)^2} $$ into its odd and even components. Set \begin{align} &\mathcal{O}\left(\alpha,\cdot\right) = \frac{1}{2}\frac{\left(\Delta_\alpha f\right)^2}{1+\left(\Delta_\alpha f\right)^2} -\frac{1}{2}\frac{\left(\Delta_{-\alpha} f\right)^2}{1+\left(\Delta_{-\alpha} f\right)^2},\label{Oss}\\& \mathcal{E}\left(\alpha,\cdot\right) = \frac{1}{2}\frac{\left(\Delta_\alpha f\right)^2}{1+\left(\Delta_\alpha f\right)^2} +\frac{1}{2}\frac{\left(\Delta_{-\alpha} f\right)^2}{1+\left(\Delta_{-\alpha} f\right)^2}\cdot\label{Ess} \end{align} It follows that \begin{align*} \mathcal{T}(f)g = -\frac{1}{\pi}\int_\mathbb{R}\left(\partial_x\Delta_\alpha g\right)\mathcal{E}\left(\alpha,\cdot\right) \diff \! \alpha-\frac{1}{\pi}\int_\mathbb{R}\left(\partial_x\Delta_\alpha g\right)\mathcal{O}\left(\alpha,\cdot\right) \diff \! \alpha.
\end{align*} Since $\Delta_\alpha f(x)$ converges to $f_x(x)$ when $\alpha$ goes to $0$, we further decompose $\mathcal{E}\left(\alpha,\cdot\right)$ as $$ \mathcal{E}\left(\alpha,\cdot\right) =\frac{(\partial_xf)^2}{1+(\partial_xf)^2}+\left(\mathcal{E}\left(\alpha,\cdot\right) -\frac{(\partial_xf)^2}{1+(\partial_xf)^2}\right). $$ Remembering that \begin{equation*} \la D\ra g(x)= -\frac{1}{\pi}\int_\mathbb{R}\left(\partial_x\Delta_\alpha g\right)\diff \! \alpha, \end{equation*} we obtain the following decomposition of the nonlinearity: \begin{equation}\label{n1} \mathcal{T}(f)g=\frac{(\partial_xf)^2}{1+(\partial_xf)^2}\la D\ra g+V(f)\partial_x g+R(f,g), \end{equation} where \begin{equation}\label{defi:V} V=-\frac{1}{\pi}\int_\mathbb{R}\frac{\mathcal{O}\left(\alpha,.\right)}{\alpha}\diff \! \alpha, \end{equation} and \begin{align*} R(f,g)=-\frac{1}{\pi}\int_\mathbb{R}\left(\partial_x\Delta_\alpha g\right)\left(\mathcal{E}\left(\alpha,\cdot\right) -\frac{(\partial_xf)^2}{1+(\partial_xf)^2}\right)\diff \! \alpha+\frac{1}{\pi}\int_\mathbb{R}\frac{\partial_x g(.-\alpha)}{\alpha}\mathcal{O}\left(\alpha,\cdot\right) \diff \! \alpha. \end{align*} \subsection{Nonlinear estimates}\label{S:2.4} With these preliminaries established, we start the analysis of the nonlinearity in the Muskat equation. We begin by estimating the coefficient $V(f)$ (see~\eqref{defi:V}). \begin{proposition} \label{Z3'} There exists a positive constant $C$ such that, for all $f$ in $\mathcal{S}(\mathbb{R})$, \begin{equation}\label{Z3} \left\Vert V(f)\right\Vert_{\dot H^{1}}\leq C\left\Vert f\right\Vert_{\dot{H}^2}+C\left\Vert f\right\Vert_{\dot{H}^{\frac{7}{4}}}^2. \end{equation} \end{proposition} \begin{proof} Recall that $$ V=-\frac{1}{\pi}\int_\mathbb{R}\frac{\mathcal{O}\left(\alpha,.\right)}{\alpha}\diff \!
\alpha \quad\text{where}\quad \mathcal{O}\left(\alpha,\cdot\right) = \frac{1}{2}\frac{\left(\Delta_\alpha f\right)^2}{1+\left(\Delta_\alpha f\right)^2} -\frac{1}{2}\frac{\left(\Delta_{-\alpha} f\right)^2}{1+\left(\Delta_{-\alpha} f\right)^2}. $$ Now, write \begin{equation}\label{Z100} \mathcal{O}\left(\alpha,\cdot\right) = A_\alpha(x)\left(\Delta_\alpha f(x)-\Delta_{-\alpha} f(x)\right), \end{equation} where \begin{equation*} A_\alpha(x)= \frac{1}{2} \frac{\Delta_\alpha f+\Delta_{-\alpha} f}{(1+\left(\Delta_\alpha f\right)^2)(1+\left(\Delta_{-\alpha} f\right)^2)}. \end{equation*} Hence $$ \partial_x\mathcal{O}= A_\alpha \big(\Delta_\alpha f_x-\Delta_{-\alpha} f_x\big) +\partial_xA_\alpha\big(\Delta_\alpha f-\Delta_{-\alpha} f\big). $$ Now, we replace in the first product the factor $A_\alpha$ by $$ \left(A_\alpha(x)-\frac{f_x(x)}{(1+f_x(x)^2)^2}\right)+\frac{f_x(x)}{(1+f_x(x)^2)^2}, $$ and observe that the last term is bounded by $1$. Therefore, by using the triangle inequality, it follows that \begin{align*} &\left\vert\partial_x V(x)\right\vert\leq I_1(x)+I_2(x)+I_3(x)\quad\text{where}\\[1ex] &I_1(x)=\left\vert\int_\mathbb{R}\left(\Delta_\alpha f_x(x)-\Delta_{-\alpha} f_x(x)\right)\frac{\diff \! \alpha}{\alpha}\right\vert,\\ &I_2(x)= \int_\mathbb{R} \left|A_\alpha(x)-\frac{f_x(x)}{(1+f_x(x)^2)^2}\right|\left|\Delta_\alpha f_x(x)-\Delta_{-\alpha} f_x(x)\right|\frac{\diff \! \alpha}{|\alpha|},\\ &I_3(x)= \int_\mathbb{R} \left\vert\partial_xA_\alpha(x)\right\vert \left\vert\Delta_\alpha f(x)-\Delta_{-\alpha} f(x)\right\vert\frac{\diff \! \alpha}{|\alpha|}. \end{align*} We now must estimate the $L^2$-norm of $I_j$ for $1\leq j\leq 3$. We begin by estimating the $L^2$-norm of $I_1$. Observe that $$ \Delta_\alpha f_x(x)-\Delta_{-\alpha} f_x(x)= \frac{2f_x(x)-f_x(x-\alpha)-f_x(x+\alpha)}{\alpha}. 
$$ Now, as above, we use the fact that, for any $\alpha\in\mathbb{R}$, the Fourier transform of $x\mapsto 2g(x)-g(x+\alpha)-g(x-\alpha)$ is given by $(2-2\cos(\xi \alpha))\hat{g}(\xi)$. Consequently, it follows from Plancherel's theorem that \begin{align*} \left\Vert I_1\right\Vert_{L^2}^2&=\frac{1}{2\pi}\int_\mathbb{R} |\xi|^2 \left|\int\left( 2 -e^{-i\alpha \xi}-e^{i\alpha \xi}\right)\frac{\diff \! \alpha}{\alpha^2}\right|^2|\hat{f}(\xi)|^2 \diff \! \xi \\&= \frac{2}{\pi}\int_\mathbb{R} |\xi|^2 \left|\int\left( 1 -\cos(\alpha \xi)\right)\frac{\diff \! \alpha}{\alpha^2}\right|^2|\hat{f}(\xi)|^2 \diff \! \xi \\&= \frac{2}{\pi}\left|\int\left( 1 -\cos(\alpha)\right)\frac{\diff \! \alpha}{\alpha^2}\right|^2\int_\mathbb{R} |\xi|^4|\hat{f}(\xi)|^2 \diff \! \xi. \end{align*} Now, since $\left\vert 1 -\cos(\alpha )\right\vert \leq \min\{2,\alpha^2\}$, \begin{align*} \left|\int_\mathbb{R}\left( 1 -\cos(\alpha )\right)\frac{\diff \! \alpha}{\alpha^2}\right|\leq 2\int_{|\alpha|\geq 1}\frac{\diff \! \alpha}{\alpha^2} +\int_{|\alpha|\leq 1}|\alpha|^2\frac{\diff \! \alpha}{\alpha^2}\leq 6. \end{align*} So, \begin{align*} \left\Vert I_1\right\Vert_{L^2}^2\lesssim \int_\mathbb{R} |\xi|^4|\hat{f}(\xi)|^2 \diff \! \xi=\left\Vert f\right\Vert_{\dot{H}^2(\mathbb{R})}^2. \end{align*} This yields the desired inequality $\left\Vert I_1\right\Vert_{L^2}\lesssim \left\Vert f\right\Vert_{\dot{H}^2}$. We now move to the estimate of $\left\Vert I_2\right\Vert_{L^2}$. Introduce \begin{equation}\label{defi:F} F(x,y)=\frac{1}{2} \frac{x+y}{(1+x^2)(1+y^2)}, \end{equation} so that \begin{equation}\label{n20} A_\alpha=F(\Delta_\alpha f,\Delta_{-\alpha}f)\quad\text{and}\quad \frac{f_x(x)}{(1+f_x(x)^2)^2}=F(f_x,f_x).
\end{equation} Since $\left\Vert\nabla_{x,y}F\right\Vert_{L^\infty(\mathbb{R}^2)}\leq 4$, we deduce that \begin{align*} \left\vert A_\alpha(x)-\frac{f_x(x)}{(1+f_x(x)^2)^2}\right\vert\leq 4 \left\vert \Delta_{\alpha}f-f_x\right\vert+ 4\left\vert \Delta_{-\alpha}f-f_x\right\vert, \end{align*} which in turn implies that, for all $x$ in $\mathbb{R}$, $$ \left\vert I_2(x)\right\vert \leq 8 \int_\mathbb{R} \left|\Delta_\alpha f(x)-f_x(x)\right|\left|\Delta_\alpha f_x(x)-\Delta_{-\alpha} f_x(x)\right|\frac{\diff \! \alpha}{|\alpha|}, $$ where we used the change of variables $\alpha\mapsto -\alpha$ to handle the contribution of the term $\Delta_{-\alpha}f-f_x$. Now, using the obvious estimate $$ \left|\Delta_\alpha f_x(x)-\Delta_{-\alpha} f_x(x)\right|\leq \left|\Delta_\alpha f_x(x)\right|+\left|\Delta_{-\alpha} f_x(x)\right|, $$ and the Cauchy-Schwarz inequality, we conclude that \begin{equation}\label{n16} \left\Vert I_2\right\Vert_{L^2}^2 \lesssim\left( \int_\mathbb{R}\left(\int _\mathbb{R}(\Delta_\alpha f-f_x(x))^2\frac{\diff \! \alpha}{\alpha^2}\right)^2 \diff \! x\right)^{\frac{1}{2}} \left( \int_\mathbb{R}\left(\int_\mathbb{R} \left(\Delta_\alpha f_x(x)\right)^2\diff \! \alpha\right)^2 \diff \! x\right)^{\frac{1}{2}}. \end{equation} We next claim that the first factor in the right-hand side above can be estimated by the second one. To see this, we start with the identity $$ \Delta_\alpha f-f_x(x)=\frac{f(x)-f(x-\alpha)}{\alpha}-f_x(x)=\frac{1}{\alpha}\int_0^{\alpha}(f_x(x-y)-f_x(x))\diff \! y. $$ Then, by the Cauchy-Schwarz inequality, for all $x$ and all $\alpha$ in $\mathbb{R}$, we deduce that $$ (\Delta_\alpha f-f_x(x))^2\leq \frac{1}{|\alpha|}\left\vert \int_0^{\alpha}(f_x(x-y)-f_x(x))^2\diff \! y\right\vert. $$ (The absolute value is needed since the integral $\int_0^\alpha$ is non-positive when $\alpha<0$.) Then, integrating in $\alpha$ and splitting the integral into $\alpha\ge 0$ and $\alpha<0$, we obtain \begin{align*} \int_\mathbb{R} \left(\Delta_\alpha f-f_x(x)\right)^2\frac{\diff \!
\alpha}{\alpha^2} &\leq \int_{\mathbb{R}}\left\vert\int_0^\alpha \left( f_x(x-y)-f_x(x)\right)^2\diff \! y\right\vert\frac{\diff \! \alpha}{|\alpha|^3}\\ &\leq2\int_{0}^\infty\int_0^\alpha \left( f_x(x-y)-f_x(x)\right)^2\diff \! y\frac{\diff \! \alpha}{\alpha^3}. \end{align*} Since $2\int_y^{+\infty}\alpha^{-3}\diff \! \alpha\leq y^{-2}$, using Fubini's theorem, it follows that \begin{equation}\label{Z1} \begin{aligned} \int_\mathbb{R} \left(\Delta_\alpha f-f_x(x)\right)^2\frac{\diff \! \alpha}{\alpha^2} &\leq \int_0^\infty (f_x(x-y)-f_x(x))^2\frac{\diff \! y}{y^2}\\ &\leq \int_0^\infty (\Delta_y f_x(x))^2 \diff \! y\leq \int_\mathbb{R} \left(\Delta_\alpha f_x(x)\right)^2\diff \! \alpha. \end{aligned} \end{equation} Hence, it follows from~\eqref{n16} that $$ \left\Vert I_2\right\Vert_{L^2}^2\lesssim \int_\mathbb{R}\left(\int_\mathbb{R} \left(\Delta_\alpha f_x(x)\right)^2\diff \! \alpha\right)^2 \diff \! x. $$ Now, using the Triebel-Lizorkin semi-norms~\eqref{n5}, observe that \begin{equation}\label{n120} \begin{aligned} \int_\mathbb{R} \left( \int_\mathbb{R} \left\vert\Delta_\alpha f_x\right\vert^2\diff \! \alpha\right)^2\diff \! x &=\bigg(\int_\mathbb{R}\bigg( \int_\mathbb{R} \left\vert\delta_\alpha f_x\right\vert^2\frac{\diff \! \alpha}{|\alpha|^{1+2\frac{1}{2}}}\bigg)^\frac{4}{2}\diff \! x\bigg)^{\frac{4}{4}}\\ &=\left\Vert f_x\right\Vert^4_{\dot{F}^{\frac{1}{2}}_{4,2}}\lesssim \left\Vert f_x\right\Vert_{\dot H^{\frac{3}{4}}}^4, \end{aligned} \end{equation} where we used the Sobolev embedding~\eqref{FB} in the last inequality. This proves that $\left\Vert I_2\right\Vert_{L^2}$ is estimated by $C\left\Vert f\right\Vert_{\dot H^{7/4}}^2$, which completes the analysis of $I_2$. It remains to estimate $\left\Vert I_3\right\Vert_{L^2}$. 
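Before doing so, we note in passing that the $L^2$-in-$x$ analogue of the quantity in~\eqref{n120} can be computed exactly by Plancherel's theorem, with the normalization $\Vert f\Vert_{\dot H^s}^2=\int_\mathbb{R}|\xi|^{2s}|\hat f(\xi)|^2\diff \! \xi$ used above:

```latex
% L^2-in-x analogue of \eqref{n120}, computed by Plancherel's theorem:
\iint_{\mathbb{R}^2}\left\vert \Delta_\alpha f_x(x)\right\vert^2\diff \! x\,\diff \! \alpha
 =\frac{1}{2\pi}\int_\mathbb{R}\left(\int_\mathbb{R}\frac{2-2\cos(\alpha\xi)}{\alpha^2}\,\diff \! \alpha\right)
   |\xi|^2\,|\hat f(\xi)|^2\,\diff \! \xi
 =\int_\mathbb{R}|\xi|^{3}\,|\hat f(\xi)|^2\,\diff \! \xi
 =\Vert f\Vert_{\dot H^{3/2}}^2,
```

since $\int_\mathbb{R}(1-\cos u)u^{-2}\diff \! u=\pi$. The passage from this $L^2_x$ identity to the $L^4_x$ bound~\eqref{n120} is precisely what requires the Triebel--Lizorkin machinery.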
Remembering that the function $F$ in~\eqref{defi:F} has partial derivatives bounded on $\mathbb{R}^2$, we find that $$ \left\vert \partial_xA_\alpha\right\vert\lesssim \left\vert \Delta_\alpha f_x\right\vert+\left\vert \Delta_{-\alpha}f_x\right\vert. $$ Now, making the change of variable $\alpha\mapsto-\alpha$ and applying the Cauchy-Schwarz inequality, we end up with \begin{align*} I_3(x)&\lesssim \int_\mathbb{R}\left\vert \Delta_\alpha f_x(x)\right\vert \left\vert\Delta_\alpha f(x)-\Delta_{-\alpha} f(x)\right\vert \frac{\diff \! \alpha}{|\alpha|}\\ &\lesssim \left(\int_\mathbb{R} (\Delta_\alpha f_x(x))^2\diff \! \alpha\right)^{\frac{1}{2}} \left(\int_\mathbb{R} \left(\Delta_\alpha f(x)-\Delta_{-\alpha} f(x)\right)^2\frac{\diff \! \alpha}{\alpha^2}\right)^{\frac{1}{2}}. \end{align*} Now, using the inequality $$ \left(\Delta_\alpha f(x)-\Delta_{-\alpha} f(x)\right)^2\leq 2\left(\Delta_\alpha f(x)-f_x(x)\right)^2+2\left(\Delta_{-\alpha}f(x)-f_x(x)\right)^2, $$ and then applying~\eqref{Z1}, we infer that \begin{equation*} \int_\mathbb{R} \left(\Delta_\alpha f(x)-\Delta_{-\alpha} f(x)\right)^2\frac{\diff \! \alpha}{\alpha^2}\lesssim \int_\mathbb{R} (\Delta_\alpha f_x(x))^2\diff \! \alpha. \end{equation*} Consequently, $$ \left\Vert I_3\right\Vert_{L^2}^2\lesssim \int_\mathbb{R}\left( \int_\mathbb{R}\left(\Delta_\alpha f_x(x)\right)^2\diff \! \alpha\right)^2\diff \! x. $$ Now, it follows from~\eqref{n120} that \begin{equation*} \left\Vert I_3\right\Vert_{L^2}^2\lesssim \left\Vert f\right\Vert_{\dot H^{\frac{7}{4}}}^4, \end{equation*} which concludes the proof of the proposition. \end{proof} \begin{remark} It follows from \eqref{Z100} that for all $f$ in $\mathcal{S}(\mathbb{R})$, \begin{align*} |V(f)(x)|&\leq \frac{1}{2\pi}\int |\Delta_{\alpha}f(x)-\Delta_{-\alpha}f(x)|\frac{\diff \! \alpha}{|\alpha|}\\ &\lesssim \iint_{\mathbb{R}^2} |1-\cos(\alpha\xi)|\,|\hat f(\xi)|\frac{\diff \! \xi \diff \! \alpha}{|\alpha|^2}.
\end{align*} Thus, since $\int_{\mathbb{R}} |1-\cos(\alpha\xi)|\frac{ \diff \! \alpha}{|\alpha|^2}=|\xi|\int_{\mathbb{R}} |1-\cos(\alpha)|\frac{ \diff \! \alpha}{|\alpha|^2}\sim |\xi|$, we find that \begin{equation}\label{Z101} \left\Vert V(f)\right\Vert_{L^\infty}\lesssim \int|\xi| |\hat f(\xi)|\diff \! \xi. \end{equation} We will use this estimate to bound the $H^1(\mathbb{R}_x)$-norm of $\mathcal{T}(f)g$ in Corollary~\ref{bounTfg}. \end{remark} The next result contains a key estimate which will allow us to commute the operator $\la D\ra^{1,\phi}$ with~$\mathcal{T}(f)$. \begin{proposition}\label{Z18} Assume that $\phi$ is as defined in~\eqref{n10} for some function $\kappa$ satisfying Assumption~$\ref{A:kappa}$. Then, there exists a positive constant $C$ such that, for all $f,g$ in $\mathcal{S}(\mathbb{R})$, \begin{align}\label{n18} \big\Vert\big[\la D\ra^{1,\phi},\mathcal{T}(f)\big](g)\big\Vert_{L^2}&\lesssim \left\Vert g\right\Vert_{\dot H^{\frac74}}\big\Vert \la D\ra^{\frac74,\phi}f\big\Vert_{L^2} +\left\Vert g\right\Vert_{\dot H^{\frac74}}\big\Vert \la D\ra^{\frac{7}{4},\phi^2}f\big\Vert_{L^2}^{\frac{1}{2}}\left\Vert f\right\Vert_{\dot{H}^{\frac{19}{12}}}^{\frac{3}{2}}\\&\quad+\big\Vert\la D\ra^{\frac{7}{4},\phi^{2}}g\big\Vert_{L^2}^{\frac{1}{2}} \left\Vert g\right\Vert_{\dot H^{\frac74}}^{\frac{1}{2}} \left\Vert f\right\Vert_{\dot H^{\frac74}}.\nonumber \end{align} \end{proposition} \begin{proof} Recall that the operator $\mathcal{T}(f)$ is defined by $$ \mathcal{T}(f)g = -\frac{1}{\pi}\int_\mathbb{R}\left(\Delta_\alpha g_x\right)F_\alpha\diff \! \alpha\quad\text{where}\quad F_\alpha=\frac{\left(\Delta_\alpha f\right)^2}{1+\left(\Delta_\alpha f\right)^2}. $$ Let us introduce $$ \Gamma_\alpha\mathrel{:=}\la D\ra^{1,\phi}\left[F_\alpha\Delta_\alpha g_x \right]-F_\alpha\la D\ra^{1,\phi} \left[\Delta_\alpha g_x\right]-\Delta_\alpha g_x\la D\ra^{1,\phi}\left[F_\alpha\right].
$$ Loosely speaking, the term $\Gamma_\alpha$ is a remainder term for a fractional Leibniz rule with the operator $\la D\ra^{1,\phi}$. With this notation, one can write the commutator of the operators $\la D\ra^{1,\phi}$ and $\mathcal{T}(f)$ as $$ \left[\la D\ra^{1,\phi},\mathcal{T}(f)\right](g)= -\frac{1}{\pi} \int_\mathbb{R} \Delta_\alpha g_x\la D\ra^{1,\phi}F_\alpha\diff \! \alpha -\frac{1}{\pi} \int_\mathbb{R} \Gamma_\alpha\diff \! \alpha. $$ Consequently, to estimate the $L^2$-norm of $\left[\la D\ra^{1,\phi},\mathcal{T}(f)\right](g)$, we have to bound the following integrals: $$ (I)\mathrel{:=} \int_\mathbb{R}\left( \int_\mathbb{R} \Delta_\alpha g_x(x)\la D\ra^{1,\phi}F_\alpha(x)\diff \! \alpha\right)^2\diff \! x,\quad (II)\mathrel{:=} \int_\mathbb{R}\left( \int_\mathbb{R} \Gamma_\alpha\diff \! \alpha\right)^2\diff \! x. $$ More precisely, to prove the desired estimate~\eqref{n18}, it is sufficient to show that \begin{align} (I)&\lesssim \left\Vert g\right\Vert^2_{\dot H^{\frac74}}\big\Vert \la D\ra^{\frac74,\phi}f\big\Vert_{L^2}^2 +\left\Vert g\right\Vert^2_{\dot H^{\frac74}}\big\Vert \la D\ra^{\frac{7}{4},\phi^2}f\big\Vert_{L^2}\big\Vert \la D\ra^{\frac{19}{12}}f\big\Vert_{L^2}^3,\label{Z11}\\ (II)&\lesssim \big\Vert\la D\ra^{\frac{7}{4},\phi^{2}}g\big\Vert_{L^2}\big\Vert\la D\ra^{\frac{7}{4}}g\big\Vert_{L^2} \big\Vert\la D\ra^{\frac{7}{4}}f\big\Vert_{L^2}^2.\label{Z13} \end{align} \textit{Step 1:} We prove \eqref{Z11}. By H\"older's and Minkowski's inequalities, one has $$ (I) \leq \left(\int_\mathbb{R}\left( \int_\mathbb{R} \big\vert\Delta_\alpha g_x(x)\big\vert^2 \diff \! \alpha\right)^2\diff \! x\right)^{\frac{1}{2}} \left(\int_\mathbb{R}\left(\int_\mathbb{R} \big\vert\la D\ra^{1,\phi}F_\alpha(x)\big\vert^2 \diff \! \alpha\right)^2\diff \! x\right)^{\frac{1}{2}}. $$ The first factor is estimated by means of~\eqref{n120}, namely $$ \left(\int_\mathbb{R}\left(\int_\mathbb{R} \big\vert\Delta_\alpha g_x(x)\big\vert^2 \diff \! \alpha\right)^2\diff \!
x\right)^{\frac{1}{2}}\lesssim \left\Vert g\right\Vert^2_{\dot H^{\frac{7}{4}}}. $$ The analysis of the second term is more difficult. We begin by applying Minkowski's inequality together with the Sobolev embedding $\dot{H}^{1/4}(\mathbb{R})\hookrightarrow L^4(\mathbb{R})$, to obtain \begin{align*} \left(\int_\mathbb{R}\left(\int _\mathbb{R}\big\vert\la D\ra^{1,\phi}F_\alpha(x)\big\vert^2 \diff \! \alpha\right)^2\diff \! x\right)^{\frac{1}{2}} &\lesssim \int_\mathbb{R}\left( \int_\mathbb{R} \big\vert\la D\ra^{1,\phi}F_\alpha(x)\big\vert^4\diff \! x\right)^{\frac{1}{2}}\diff \! \alpha\\ &\lesssim\iint_{\mathbb{R}^2} \big\vert\la D\ra^{\frac54,\phi}F_\alpha(x)\big\vert^2\diff \! x\diff \! \alpha. \end{align*} Now, to evaluate the latter integral, we use Lemma \ref{Z9}, which implies that \begin{multline*} \iint_{\mathbb{R}^2} \big\vert\la D\ra^{\frac54,\phi}F_\alpha(x)\big\vert^2\diff \! x\diff \! \alpha\\ \sim \iint_{\mathbb{R}^2} \left| 2F_\alpha( x)-F_\alpha( x+h)-F_\alpha( x-h)\right|^2 \left(\kappa\left(\frac{1}{|h|}\right)\right)^2\frac{\diff \! x\diff \! h}{|h|^{1+5/2}}\cdot \end{multline*} We must estimate the integrand $\left| 2F_\alpha( x)-F_\alpha( x+h)-F_\alpha( x-h)\right|^2$ in terms of similar quantities involving $\Delta_\alpha f$. To do so, write $F_\alpha= \mathcal{F}(\Delta_\alpha f)$ with $\mathcal{F}(x)=x^2/(1+x^2)$. Then we use the following sharp contraction estimate for $\mathcal{F}$. \begin{lemma}\label{Z10} For any triple of real numbers $(x_1,x_2,x_3)$, there holds \begin{equation}\label{Z4} \left|\frac{2x_1^2}{1+x_1^2}-\frac{x_2^2}{1+x_2^2}-\frac{x_3^2}{1+x_3^2}\right|\leq |x_2+x_3-2x_1| +|x_2-x_1|^2+|x_3-x_1|^2. \end{equation} \end{lemma} \begin{remark} It is worth remarking that such a clean inequality does not hold for a general function.
\end{remark} \begin{proof} Write \begin{align*} &\frac{2x_1^2}{1+x_1^2}-\frac{x_2^2}{1+x_2^2}-\frac{x_3^2}{1+x_3^2} =\frac{1}{1+x_2^2}+\frac{1}{1+x_3^2}-\frac{2}{1+x_1^2}\\[1ex] &=-\frac{(x_3^2+1)(x_2+x_1)(x_2-x_1)+(x_2^2+1)(x_3+x_1)(x_3-x_1)}{(1+x_1^2)(1+x_2^2)(1+x_3^2)}\\[1ex] &=-\frac{(x_3^2+1)(x_2+x_1)(x_2+x_3-2x_1)+(x_1x_2+x_2x_3+x_3x_1-1)(x_2-x_3)(x_3-x_1)}{(1+x_1^2)(1+x_2^2)(1+x_3^2)}, \end{align*} to obtain \begin{align*} \left|\frac{2x_1^2}{1+x_1^2}-\frac{x_2^2}{1+x_2^2}-\frac{x_3^2}{1+x_3^2}\right| \leq \vert x_2+x_3-2x_1\vert +\vert x_2-x_3\vert \vert x_3-x_1\vert. \end{align*} Since the left-hand side is symmetric in $x_2$ and $x_3$, the same argument also gives the bound with $\vert x_2-x_3\vert \vert x_2-x_1\vert$ in place of $\vert x_2-x_3\vert \vert x_3-x_1\vert$. Taking the half-sum of the two bounds and using $\vert x_2-x_3\vert\leq \vert x_2-x_1\vert+\vert x_3-x_1\vert$ together with $\frac{1}{2}(a+b)^2\leq a^2+b^2$, we obtain the desired result. \end{proof} Applying the previous lemma with $x_1=\Delta_\alpha f(x)$, $x_2=\Delta_\alpha f(x+h)$ and $x_3=\Delta_\alpha f(x-h)$, it follows directly from the definition of $F_\alpha$ that \begin{align*} \left| 2F_\alpha( x)-F_\alpha( x+h)-F_\alpha( x-h)\right|^2 &\lesssim\left| 2\Delta_\alpha f( x)-\Delta_\alpha f( x+h)-\Delta_\alpha f( x-h)\right|^2 \\ &\quad+\left| \Delta_\alpha f( x)-\Delta_\alpha f( x+h)\right|^4\\ &\quad+\left| \Delta_\alpha f( x)-\Delta_\alpha f( x-h)\right|^4. \end{align*} Integrating in $h$ and making use of the change of variable $h\mapsto -h$ to handle the contribution of the last term, we conclude that \begin{align*} &\int_{\mathbb{R}} \left| \la D\ra^{5/4,\phi}\left[F_\alpha\right](x)\right|^2\diff \! x\\ &\qquad\qquad\lesssim \iint_{\mathbb{R}^2} \left| 2\Delta_\alpha f( x)-\Delta_\alpha f( x+h)-\Delta_\alpha f( x-h)\right|^2 \left(\kappa\left(\frac{1}{|h|}\right)\right)^2\frac{\diff \! x\diff \! h}{|h|^{1+5/2}} \\&\qquad\qquad\quad+ \iint_{\mathbb{R}^2} \left| \Delta_\alpha f( x)-\Delta_\alpha f( x+h)\right|^4 \left(\kappa\left(\frac{1}{|h|}\right)\right)^2\frac{ \diff \! x\diff \! h}{|h|^{1+5/2}}. \end{align*} The first term is estimated by applying Lemma \ref{Z9} again, which implies that \begin{multline*} \iint_{\mathbb{R}^2} \left| 2\Delta_\alpha f( x)-\Delta_\alpha f( x+h)-\Delta_\alpha f( x-h)\right|^2 \left(\kappa\left(\frac{1}{|h|}\right)\right)^2\frac{\diff \! x\diff \!
h}{|h|^{1+5/2}}\\ \lesssim \int_{\mathbb{R}} \left| \la D\ra^{5/4,\phi}\left[\Delta_\alpha f\right](x)\right|^2\diff \! x. \end{multline*} On the other hand, using the same arguments together with H\"older's inequality, we have \begin{align*} &\iint_{\mathbb{R}^2} \left| \Delta_\alpha f( x)-\Delta_\alpha f( x+h)\right|^4 \left(\kappa\left(\frac{1}{|h|}\right)\right)^2\frac{\diff \! x\diff \! h}{|h|^{1+5/2}}\\ &\qquad\qquad\leq \left(\iint_{\mathbb{R}^2} \left| \Delta_\alpha f( x)-\Delta_\alpha f( x+h)\right|^2\left(\kappa\left(\frac{1}{|h|}\right)\right)^4\frac{\diff \! x\diff \! h}{|h|^{1+5/2}}\right)^{1/2} \\ &\qquad\qquad\quad \times\left(\iint_{\mathbb{R}^2} \left| \Delta_\alpha f( x)-\Delta_\alpha f( x+h)\right|^6\frac{\diff \! x\diff \! h}{|h|^{1+5/2}}\right)^{1/2}\\ &\qquad\qquad\sim \big\Vert\la D\ra^{\frac{5}{4},\phi^2}\Delta_\alpha f\big\Vert_{L^2} \left\Vert\Delta_\alpha f\right\Vert_{\dot{F}^{\frac{5}{12}}_{6,6}}^3 \\&\qquad\qquad \lesssim \big\Vert\la D\ra^{\frac{5}{4},\phi^2}\Delta_\alpha f\big\Vert_{L^2} \left\Vert\Delta_\alpha f\right\Vert_{\dot{H}^{\frac{3}{4}}}^3, \end{align*} where we used the Sobolev embedding $\dot{H}^{\frac{3}{4}}(\mathbb{R})\hookrightarrow \dot{F}^{\frac{5}{12}}_{6,6}(\mathbb{R})$, see~\eqref{FB}. Therefore, by gathering the previous results, we get $$ (I)\lesssim \left\Vert g\right\Vert^2_{\dot H^{\frac74}}\left(\iint_{\mathbb{R}^2} \big\vert\Delta_\alpha( \la D\ra^{\frac54,\phi}f)(x)\big\vert^2 \diff \! x \diff \! \alpha +\int_{\mathbb{R}} \big\Vert\la D\ra^{\frac{5}{4},\phi^2}\Delta_\alpha f\big\Vert_{L^2} \left\Vert \Delta_\alpha f\right\Vert_{\dot{H}^{\frac{3}{4}}}^3\diff \! \alpha\right). $$ Now we claim that, for any function $\tilde{f}$, \begin{equation}\label{n15} \iint_{\mathbb{R}^2} \big\vert \Delta_\alpha \tilde{f}\big\vert^2\diff \! \alpha\diff \! x\sim \big\Vert \tilde{f}\big\Vert_{\dot{H}^{\frac{1}{2}}}^2.
\end{equation} To see this, write $$ \iint_{\mathbb{R}^2} \big\vert \Delta_\alpha \tilde{f}\big\vert^2\diff \! \alpha\diff \! x= \iint_{\mathbb{R}^2} \big\vert \delta_\alpha \tilde{f}\big\vert^2\frac{\diff \! \alpha}{\alpha^{1+2\frac{1}{2}}}\diff \! x =\big\Vert \tilde{f}\big\Vert_{\dot{F}_{2,2}^{\frac{1}{2}}}^2 \sim \big\Vert \tilde{f}\big\Vert_{\dot{H}^{\frac{1}{2}}}^2, $$ where we used~\eqref{SE0}. Now the estimate~\eqref{n15} implies that \begin{align*} & \iint_{\mathbb{R}^2}\big\vert\Delta_\alpha( \la D\ra^{\frac54,\phi}f)(x)\big\vert^2 \diff \! x \diff \! \alpha\lesssim \big\Vert \la D\ra^{\frac74,\phi}f\big\Vert_{L^2}^2 ,\\ &\int_{\mathbb{R}} \big\Vert\Delta_\alpha (\la D\ra^{\frac{5}{4},\phi^2}f)\big\Vert_{L^2}^2\diff \! \alpha \lesssim \big\Vert \la D\ra^{\frac{7}{4},\phi^2}f\big\Vert_{L^2}^2. \end{align*} It follows that $$ (I)\lesssim \left\Vert g\right\Vert^2_{\dot H^{\frac74}}\big\Vert \la D\ra^{\frac74,\phi}f\big\Vert_{L^2}^2 +\left\Vert g\right\Vert^2_{\dot H^{\frac74}}\big\Vert \la D\ra^{\frac{7}{4},\phi^2}f\big\Vert_{L^2}\left(\int_{\mathbb{R}}\big\Vert\Delta_\alpha \big( \la D\ra^{\frac{3}{4}}f\big)\big\Vert_{L^2}^6 \diff \! \alpha\right)^{\frac{1}{2}}. $$ Using the Besov norm~\eqref{defi:B-spaces}, we have $$ \left(\int_{\mathbb{R}}\big\Vert\Delta_\alpha \big( \la D\ra^{\frac{3}{4}}f\big)\big\Vert_{L^2}^6 \diff \! \alpha\right)^{\frac{1}{2}} =\big\Vert \la D\ra^{\frac34}f\big\Vert_{\dot{B}^{\frac56}_{2,6}}^3\lesssim \big\Vert \la D\ra^{\frac34}f\big\Vert_{\dot{F}^{\frac56}_{2,6}}^3 \lesssim \big\Vert \la D\ra^{\frac{19}{12}}f\big\Vert_{L^2}^3, $$ where we have used the embedding~\eqref{FB} in the last inequality, while the inner inequality follows at once from the definitions of the Besov and Triebel-Lizorkin spaces (see~\eqref{defi:B-spaces} and \eqref{n5}) by Minkowski's inequality. This proves the desired estimate~\eqref{Z11}. \textit{Step 2:} We prove \eqref{Z13}.
Recall that Lemma~\ref{Z12} implies that for any function $g$, one can compute $\la D\ra^{1,\phi}g$ as follows: \begin{equation*} \la D\ra^{1,\phi}g(x)=\frac{1}{4}\int\frac{2g(x)-g(x+h)-g(x-h)}{h^2}\kappa\left(\frac{1}{|h|}\right)\diff \! h. \end{equation*} Then using the elementary identity \begin{align*} &\left(2xy-x_{1}y_1-x_{-1}y_{-1}\right)-x \left(2y-y_1-y_{-1}\right)-y \left(2x-x_{1}-x_{-1}\right)\\&=-(x-x_1)(y-y_1)-(x-x_{-1})(y-y_{-1}), \end{align*} together with the change of variable $h\mapsto -h$, we deduce that \begin{align*} \left\vert \Gamma_\alpha\right\vert=\frac{1}{2}\left|\int_\mathbb{R} \left(F_\alpha(x)-F_\alpha(x-h)\right) \left(\Delta_\alpha g_x(x)-\Delta_\alpha g_x(x-h)\right) \kappa\left(\frac{1}{|h|}\right)\frac{\diff \! h}{h^2}\right|. \end{align*} Since \begin{equation*} |F_\alpha(x)-F_\alpha(x-h)|\leq |\Delta_\alpha f(x)-\Delta_\alpha f(x-h)|, \end{equation*} it follows that \begin{align*} \left\vert\Gamma_\alpha\right\vert&\leq 2\int_{\mathbb{R}} \left|\Delta_\alpha f(x)-\Delta_\alpha f(x-h)\right|\left|\Delta_\alpha g_x(x)-\Delta_\alpha g_x(x-h)\right| \kappa\left(\frac{1}{|h|}\right)\frac{\diff \! h}{h^2} \\ &\leq 2 \left(\int_{\mathbb{R}} \left|\Delta_\alpha g_x(x)-\Delta_\alpha g_x(x-h)\right|^2 \kappa^4\left(\frac{1}{|h|}\right)\frac{\diff \! h}{h^2}\right)^{\frac14}\\ &\times \left(\int_{\mathbb{R}} \left\vert\Delta_\alpha g_x(x)-\Delta_\alpha g_x(x-h)\right\vert^3 \frac{\diff \! h}{h^2}\right)^{\frac 1 6} \left(\int_{\mathbb{R}}\left|\Delta_\alpha f(x)-\Delta_\alpha f(x-h)\right|^{\frac{12}{7}} \frac{\diff \! h}{h^2}\right)^{\frac{7}{12}}. \end{align*} So, by H\"older's inequality in $x$, \begin{align*} \left\Vert \Gamma_\alpha\right\Vert_{L^2(\mathbb{R};\diff \! x)}^2&\leq 4 \left(\iint_{\mathbb{R}^2}\left(\Delta_\alpha g_x(x)-\Delta_\alpha g_x(x-h)\right)^2 \kappa^4\left(\frac{1}{|h|}\right) \frac{\diff \! h}{h^2}\diff \! x\right)^{\frac{1}{2}}\\ &\quad\times \left(\iint_{\mathbb{R}^2}\left|\Delta_\alpha g_x(x)-\Delta_\alpha g_x(x-h)\right|^3 \frac{\diff \! h\diff \!
x}{h^2}\right)^{\frac13}\\ &\quad\times \left(\int_{\mathbb{R}}\left(\int_{\mathbb{R}}\left|\Delta_\alpha f(x)-\Delta_\alpha f(x-h)\right|^{\frac{12}{7}} \frac{\diff \! h}{h^2}\right)^{7}\diff \! x\right)^{\frac16}\\ &\sim \big\Vert \Delta_\alpha (\la D\ra^{\frac{3}{2},\phi^{2}}g)\big\Vert_{L^2} \left\Vert\Delta_\alpha g_x\right\Vert_{\dot{F}^{\frac13}_{3,3}} \left\Vert\Delta_\alpha f\right\Vert_{\dot{F}^{\frac{7}{12}}_{12,\frac{12}{7}}}^2\\&\overset{\eqref{FB}}\lesssim \big\Vert \Delta_\alpha (\la D\ra^{\frac{3}{2},\phi^{2}}g)\big\Vert_{L^2} \big\Vert\Delta_\alpha (\la D\ra^{\frac{3}{2}} g)\big\Vert_{L^2} \big\Vert\Delta_\alpha (\la D\ra f)\big\Vert_{L^2}^2. \end{align*} Therefore, \begin{align*} (II)&\leq \left(\int_{\mathbb{R}} \big\Vert \Delta_\alpha (\la D\ra^{\frac{3}{2},\phi^{2}}g)\big\Vert_{L^2}^{1/2} \big\Vert\Delta_\alpha (\la D\ra^{3/2} g)\big\Vert_{L^2}^{1/2} \big\Vert\Delta_\alpha (\la D\ra f)\big\Vert_{L^2}\diff \! \alpha\right)^2 \\ &\leq \left(\int_{\mathbb{R}}\big\Vert\Delta_\alpha (\la D\ra^{\frac{3}{2},\phi^{2}}g)\big\Vert_{L^2}^2 |\alpha|^{1/2}\diff \! \alpha\right)^{1/2} \left(\int_{\mathbb{R}}\big\Vert\Delta_\alpha (\la D\ra^{\frac{3}{2}}g)\big\Vert_{L^2}^2|\alpha|^{1/2}\diff \! \alpha\right)^{1/2}\\ &\quad\times \int_{\mathbb{R}}\big\Vert\Delta_\alpha (\la D\ra f)\big\Vert_{L^2}^2|\alpha|^{-1/2}\diff \! \alpha\\ &\sim \big\Vert\la D\ra^{\frac{7}{4},\phi^{2}}g\big\Vert_{L^2}\big\Vert\la D\ra^{\frac{7}{4}}g\big\Vert_{L^2}\big\Vert\la D\ra^{\frac{7}{4}}f\big\Vert_{L^2}^2, \end{align*} where we used~\eqref{SE0}. This gives \eqref{Z13}, which completes the proof. \end{proof} Finally, we study the remainder term $R(f,g)$ in the paralinearization of $\mathcal{T}(f)g$ (see~\eqref{n1}). \begin{proposition}\label{Z19} Assume that $\phi$ is as defined in~\eqref{n10} for some function $\kappa$ satisfying Assumption~$\ref{A:kappa}$.
Then, there exists a positive constant $C$ such that, for all functions~$f,g$ in $\mathcal{S}(\mathbb{R})$, \begin{equation}\label{Z14} \Vert R(f,g)\Vert _{L^2}\leq C\Vert g\Vert _{\dot{H}^{\frac{3}{4}}} \left\Vert f\right\Vert_{\dot{H}^{\frac{7}{4}}}. \end{equation} In particular, \begin{equation}\label{Z15} \Vert R(f,\la D\ra^{1,\phi}f)\Vert _{L^2}\leq C\big\Vert \la D\ra^{\frac{7}{4},\phi}f\big\Vert_{L^2} \left\Vert f\right\Vert_{\dot{H}^{\frac{7}{4}}}. \end{equation} \end{proposition} \begin{proof} Recall that \begin{align*} R(f,g)&=-\frac{1}{2\pi}\int_\mathbb{R}\left(\partial_x\Delta_\alpha g+\partial_x\Delta_{-\alpha} g\right)\left(\mathcal{E}\left(\alpha,\cdot\right) -\frac{(\partial_xf)^2}{1+(\partial_xf)^2}\right)\diff \! \alpha\\ &\quad+\frac{1}{\pi}\int_\mathbb{R}\frac{\partial_x g(.-\alpha)}{\alpha}\mathcal{O}\left(\alpha,\cdot\right) \diff \! \alpha. \end{align*} The first half of the proof is based on~\cite{Alazard-Lazar}. Namely, write \begin{align*} &\partial_x\Delta_\alpha g+\partial_x\Delta_{-\alpha} g=-\frac{1}{\alpha}\partial_\alpha \big(\delta_\alpha g+\delta_{-\alpha}g\big),\\ &\partial_x g(.-\alpha)=\partial_\alpha(\delta_\alpha g), \end{align*} as can be verified by elementary calculations. Then use these identities to integrate by parts in $\alpha$. By so doing, we find \begin{align*} |R(f,g)(x)|\lesssim\int_\mathbb{R} \frac{|\delta_\alpha g(x)|}{|\alpha|}&\bigg(\left|\mathcal{E}\left(\alpha,x\right) -\frac{(\partial_xf(x))^2}{1+(\partial_xf(x))^2}\right|+|\alpha| |\partial_\alpha\mathcal{E}\left(\alpha,x\right) |\\ &~~~+|\mathcal{O}(\alpha,x)|+|\alpha||\partial_\alpha\mathcal{O}\left(\alpha,x\right) | \bigg)\frac{\diff \! \alpha}{|\alpha|}\cdot \end{align*} Then it follows from \cite[Lemma 4.5]{Alazard-Lazar} that $$ |R(f,g)(x)|\lesssim \int_\mathbb{R} |\Delta_\alpha g(x)|\gamma(\alpha,x)\diff \! 
\alpha $$ where $$ \gamma(\alpha,x)=\frac{1}{\left\vert\alpha\right\vert}\left( |\delta_\alpha f_x(x)|+ |\delta_{-\alpha} f_x(x)|+\frac{|s_\alpha f(x)|}{|\alpha|}+\left|\frac{1}{\alpha}\int_{0}^{\alpha}s_\eta f_x(x)\diff \! \eta\right|\right), $$ and $s_\alpha f(x)=\delta_\alpha f(x)+\delta_{-\alpha} f(x)$. We now use different arguments than those used in~\cite{Alazard-Lazar}. The main new ingredient here is given by the following inequality: $$ \int_\mathbb{R} \gamma(\alpha,x)^2\diff \! \alpha\lesssim \int_\mathbb{R} |\delta_\alpha f_x(x)|^2\frac{\diff \! \alpha}{\alpha^2}\cdot $$ To prove the latter, it will suffice to show that \begin{align} &\int_\mathbb{R} \left|\int_{0}^{\alpha}s_\eta f_x(x)\diff \! \eta\right|^2\frac{\diff \! \alpha}{\alpha^4} \lesssim \int_\mathbb{R} |\delta_\alpha f_x(x)|^2\frac{\diff \! \alpha}{\alpha^2},\label{n100}\\ &\int_\mathbb{R} |s_\alpha f(x)|^2\frac{\diff \! \alpha}{\alpha^4}\lesssim \int_\mathbb{R} |\delta_\alpha f_x(x)|^2\frac{\diff \! \alpha}{\alpha^2}\cdot\label{n101} \end{align} To prove \eqref{n100}, we apply Hardy's inequality $$ \int_0^\infty\left(\frac{1}{\alpha}\int_0^\alpha u(\eta)\diff \! \eta\right)^2\diff \! \alpha\leq 4 \int_0^\infty u(\alpha)^2\diff \! \alpha, $$ with $u(\eta)=(s_\eta f_x(x))/\eta$. It follows that \begin{align*} \int_\mathbb{R} \left|\int_{0}^{\alpha}s_\eta f_x(x)\diff \! \eta\right|^2\frac{\diff \! \alpha}{\alpha^4} &=\int_\mathbb{R} \left|\frac{1}{\alpha^2}\int_{0}^{\alpha}s_\eta f_x(x)\diff \! \eta\right|^2\diff \! \alpha \leq \int_\mathbb{R} \left|\frac{1}{\alpha}\int_{0}^{\alpha}\frac{s_\eta f_x(x)}{\eta}\diff \! \eta\right|^2\diff \! \alpha\\ &\leq 4 \int_\mathbb{R} \left( \frac{s_\alpha f_x(x)}{\alpha}\right)^2\diff \! \alpha. \end{align*} Then~\eqref{n100} follows at once from the elementary inequality $\left\vert s_\alpha f\right\vert^2\leq 2\left\vert \delta_\alpha f\right\vert^2+2\left\vert \delta_{-\alpha} f\right\vert^2$. Let us prove \eqref{n101}.
Write $$ \frac{s_\alpha f}{\alpha}=\frac{\delta_\alpha f+\delta_{-\alpha}f}{\alpha}=\Delta_\alpha f-\Delta_{-\alpha}f =\big(\Delta_\alpha f-f_x\big)-\big(\Delta_{-\alpha}f-f_x\big), $$ so $$ \frac{(s_\alpha f(x))^2}{\alpha^4}\leq 2\frac{(\Delta_\alpha f-f_x)^2}{\alpha^2}+2\frac{(\Delta_{-\alpha}f-f_x)^2}{\alpha^2}. $$ Now, remember from~\eqref{Z1} that $$ \int_\mathbb{R} \left(\Delta_\alpha f-f_x(x)\right)^2\frac{\diff \! \alpha}{\alpha^2}\leq \int_\mathbb{R} \left(\Delta_\alpha f_x(x)\right)^2\diff \! \alpha, $$ together with a similar result for $\Delta_{-\alpha}f-f_x$ (interchanging $\alpha$ and $-\alpha$). This proves~\eqref{n101}. Then, remembering that $\Delta_\alpha u=\delta_\alpha u/\alpha$ and using the Cauchy-Schwarz inequality, we get \begin{equation*} |R(f,g)(x)|\lesssim \left(\int_{\mathbb{R}} (\Delta_{\alpha} g(x))^2 \diff \! \alpha\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}} (\Delta_\alpha f_x(x))^2\diff \! \alpha\right)^{\frac{1}{2}}. \end{equation*} By using the Cauchy-Schwarz inequality again, we conclude that $$ \Vert R(f,g)\Vert _{L^2}^2 \lesssim \left(\int_{\mathbb{R}}\left(\int_\mathbb{R} (\Delta_\alpha g(x))^2 \diff \! \alpha\right)^2\diff \! x\right)^{\frac{1}{2}} \left( \int_{\mathbb{R}}\left(\int_{\mathbb{R}}(\Delta_\alpha f_x(x))^2\diff \! \alpha\right)^2\diff \! x\right)^{\frac{1}{2}}. $$ Therefore, the estimate~\eqref{n120} implies that $$ \Vert R(f,g)\Vert _{L^2}^2 \lesssim \Vert g\Vert _{\dot{H}^{\frac{3}{4}}}^2 \left\Vert f\right\Vert_{\dot{H}^{\frac{7}{4}}}^2, $$ which is equivalent to the desired result~\eqref{Z14}. \end{proof} \begin{corollary}\label{bounTfg} There exists a positive constant $C$ such that, for all~$f\in \mathcal{S}(\mathbb{R})$, \begin{equation}\label{Z102} \left\Vert \mathcal{T}(f)f\right\Vert_{\dot H^1}\leq C\left(\left\Vert f\right\Vert_{\dot H^{\frac32}}+\left\Vert f\right\Vert_{\dot H^{\frac32}}^2+1+\left\Vert V(f)\right\Vert_{L^\infty}\right) \left\Vert f\right\Vert_{\dot H^{2}}.
\end{equation} \end{corollary} \begin{proof} By \eqref{n1}, \eqref{n18} and \eqref{Z15} with $\phi\equiv 1$, one has \begin{align*} \Vert \mathcal{T}(f)f\Vert_{\dot H^1}&\leq \big\Vert\big[\la D\ra,\mathcal{T}(f)\big]f\big\Vert_{L^2}+ \Vert\mathcal{T}(f)(|D|f)\Vert_{L^2}\\ &\lesssim \left\Vert f\right\Vert_{\dot H^{\frac74}}^2 +\left\Vert f\right\Vert_{\dot H^{\frac74}}^{\frac{3}{2}}\left\Vert f\right\Vert_{\dot{H}^{\frac{19}{12}}}^{\frac{3}{2}}+\left(\left\Vert V(f)\right\Vert_{L^\infty}+1\right)\left\Vert f\right\Vert_{\dot H^2}. \end{align*} Then \eqref{Z102} follows from the classical interpolation inequalities in Sobolev spaces. \end{proof} \section{Proof of the main results}\label{S:3} In this section, we prove Theorem~\ref{theo:main} and Theorem~\ref{theo:main2}. Following a classical strategy, we shall construct solutions of the Muskat equation in three steps: \begin{enumerate} \item We begin by defining approximate systems and proving that the Cauchy problem for each of them is well-posed by means of an ODE argument. \item Secondly, we prove uniform estimates for the solutions of the approximate systems on a uniform time interval. The heart of the entire argument is contained in an {\em a priori} estimate given by Proposition~\ref{P:3.3} below. \item Finally, we prove that the sequence of approximate solutions converges to a solution of the Muskat equation and conclude the proof by proving a uniqueness result. \end{enumerate} \textbf{Notations.} We denote by $\langle\cdot,\cdot\rangle$ the scalar product in $L^2(\mathbb{R})$ and set $\left\Vert \cdot\right\Vert=\left\Vert \cdot\right\Vert_{L^2}$. \subsection{Approximate systems} We define the approximate systems by using a version of Galerkin's method based on Friedrichs mollifiers. To do so, it is convenient to use smoothing operators which are also projections.
Consider, for any integer $n$ in $\mathbb{N}\setminus\{0\}$, the operators $J_n$ defined by \begin{equation}\label{defi:Jn} \begin{aligned} \widehat{J_n u}(\xi)&=\widehat{u}(\xi) \quad &&\text{for} &&\left\vert \xi\right\vert\leq n,\\ \widehat{J_n u}(\xi)&=0 \quad &&\text{for} &&\left\vert \xi\right\vert> n. \end{aligned} \end{equation} Notice that $J_n$ is a projection since $J_n^2=J_n$. Recall that the Muskat equation reads $$ \partial_tf=\frac{1}{\pi}\int_\mathbb{R}\frac{\partial_x\Delta_\alpha f}{1+\left(\Delta_\alpha f\right)^2}\diff \! \alpha. $$ Remember also from~\S\ref{S:2.3} that the latter is equivalent to \begin{equation}\label{n80} \partial_tf+\la D\ra f = \mathcal{T}(f)f, \end{equation} where $\mathcal{T}(f)$ is the operator defined by~\eqref{def:T(f)f}. Let us introduce the following approximate Cauchy problems: \begin{equation}\label{A3} \left\{ \begin{aligned} &\partial_t f_n+\la D\ra f_n=J_n\big(\mathcal{T}(f_n)f_n\big),\\ & f_n\arrowvert_{t=0}=J_n f_{0}. \end{aligned} \right. \end{equation} The next lemma states that this system has smooth global in time solutions. \begin{lemma}\label{L:3.1} For all $f_0\in L^{2}(\mathbb{R})$, and any $n\in \mathbb{N}\setminus\{0\}$, the initial value problem~\eqref{A3} has a unique global solution $f_n\in C^{1}([0,+\infty);H^{\infty}(\mathbb{R}))$. Moreover \begin{equation}\label{n52} f_n=J_n f_n, \end{equation} and, for all time $t\ge 0$, \begin{equation}\label{n51} \left\Vert f_n(t)\right\Vert_{L^2}\leq \left\Vert f_0\right\Vert_{L^2}. \end{equation} \end{lemma} \begin{proof}This proof is not new: it follows from the analysis in~\cite[Section 5]{Alazard-Lazar} together with the $L^2$-maximum principle in \cite[Section 2]{CCGRPS-JEMS2013}. However, since slight modifications are needed, we include a detailed proof. 
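Before entering the details, let us record the elementary properties of the cut-offs $J_n$ that will be used repeatedly below; the following list is only a convenient summary, and each item follows at once from the definition~\eqref{defi:Jn} and Plancherel's theorem (with the standard normalization of the Fourier transform, which we assume here):
\begin{align*}
&J_n^2=J_n,\qquad J_n^{*}=J_n,\qquad \left\Vert J_n u\right\Vert_{L^2}\leq \left\Vert u\right\Vert_{L^2},\\
&\left\Vert J_n u\right\Vert_{H^{\mu}}\leq (1+n^2)^{\frac{\mu}{2}}\left\Vert u\right\Vert_{L^2}\quad\text{for all } \mu\ge 0,\\
&\langle (I-J_n)\la D\ra u,u\rangle\ge 0.
\end{align*}
The last property holds because, on the Fourier side, this pairing is a constant multiple of $\int_{|\xi|>n}\left\vert \xi\right\vert |\widehat{u}(\xi)|^2\diff \! \xi$. In particular, $J_n$ is a self-adjoint projection which is bounded from $L^2(\mathbb{R})$ into $H^\mu(\mathbb{R})$ for every $\mu\ge 0$, and the term $(I-J_n)\la D\ra$ contributes a non-negative quantity to energy estimates.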
$i)$ We begin by studying the following auxiliary Cauchy problem \begin{equation}\label{A3-twisted} \left\{ \begin{aligned} &\partial_t f_n+J_n\la D\ra f_n=J_n\big(\mathcal{T}(J_n f_n)J_n f_n\big),\\ &f_n\arrowvert_{t=0}=J_n f_{0}. \end{aligned} \right. \end{equation} The Cauchy problem \eqref{A3-twisted} has the form \begin{equation}\label{edo} \partial_t f_n= F_n(f_n),\quad f_n\arrowvert_{t=0}=J_nf_0, \end{equation} where $$ F_n(f)=-\la D\ra J_n f+J_n\big(\mathcal{T}(J_nf)J_nf\big). $$ Recall from Proposition~$2.3$ in~\cite{Alazard-Lazar} that the map $f\mapsto \mathcal{T}(f)f$ is locally Lipschitz from $\dot{H}^1(\mathbb{R})\cap \dot{H}^{\frac{3}{2}}(\mathbb{R})$ to $L^2(\mathbb{R})$. Therefore, since $J_n$ is a linear smoothing operator (which means that it is bounded from $L^2(\mathbb{R})$ into $H^\mu(\mathbb{R})$ for any $\mu\ge 0$), the map $f\mapsto J_n\big(\mathcal{T}(J_nf)J_nf\big)$ is locally Lipschitz from $L^2(\mathbb{R})$ to $L^2(\mathbb{R})$. This implies that $F_n$ satisfies the same property, and hence we are in a position to apply the Cauchy-Lipschitz theorem. This gives the existence of a unique maximal solution~$f_n$ in~$C^{1}([0,T_n);L^{2}(\mathbb{R}))$. Moreover, the continuation principle for ordinary differential equations implies that either \begin{equation}\label{A4} T_n=+\infty\qquad\text{or}\qquad \limsup_{t\rightarrow T_n} \left\Vert f_n(t)\right\Vert_{L^2}=+\infty. \end{equation} We shall prove in the next step that $T_n=+\infty$. Finally, remembering that $J_n^2=J_n$, we check that the function $(I-J_n)f_n$ solves $$ \partial_t (I-J_n)f_n=0,\quad (I-J_n)f_n\arrowvert_{t=0}=0. $$ Therefore $(I-J_n)f_n=0$, which proves that $J_nf_n=f_n$. Now, we deduce from $J_nf_n=f_n$ and the equation~\eqref{A3-twisted} that $f_n$ is also a solution to the original equation~\eqref{A3}. In addition, the identity $J_nf_n=f_n$ also implies that $f_n$ is smooth; in particular, $f_n$ belongs to~$C^{1}([0,T_n);H^{\infty}(\mathbb{R}))$.
$ii)$ To conclude the proof of the lemma, it remains to show that the solution is defined globally in time and that it satisfies the $L^2$-bound \begin{equation}\label{n50} \left\Vert f_n(t)\right\Vert_{L^2}\leq \left\Vert f_0\right\Vert_{L^2}. \end{equation} In fact, in light of the alternative~\eqref{A4}, it is sufficient to prove the latter inequality: by combining~\eqref{A4} with~\eqref{n50}, we will obtain that $T_n=+\infty$. It remains to prove \eqref{n50}. This estimate is proved in \cite[Section 2]{CCGRPS-JEMS2013} for the full equation (that is, with $J_n$ replaced by the identity $I$), and we recall the main argument to verify that the estimate is uniform in $n$. By definition of $\mathcal{T}(f)f$, one has $$ \la D\ra f-\mathcal{T}(f)f=\frac{1}{\pi}\int_\mathbb{R}\frac{\partial_x\Delta_\alpha f}{1+\left(\Delta_\alpha f\right)^2}\diff \! \alpha. $$ Therefore the equation~\eqref{A3} is equivalent to $$ \partial_tf_n+(I-J_n)\la D\ra f_n =J_n\left(\frac{1}{\pi}\int_\mathbb{R}\frac{\partial_x\Delta_\alpha f_n}{1+\left(\Delta_\alpha f_n\right)^2}\diff \! \alpha\right). $$ Using $f_n$ as a test function, one has \begin{equation*} \frac{1}{2}\frac{\diff}{\dt} \left\Vert f_n(t)\right\Vert_{L^2}^2+\langle (I-J_n)\la D\ra f_n, f_n\rangle = \frac{1}{\pi}\bigg\langle J_n\int_\mathbb{R}\frac{\partial_x\Delta_\alpha f_n}{1+\left(\Delta_\alpha f_n\right)^2}\diff \! \alpha , f_n\bigg\rangle. \end{equation*} Now we use three elementary ingredients: firstly, $\langle (I-J_n)\la D\ra f_n, f_n\rangle\ge 0$; secondly, $J_n^*=J_n$, as can be verified by applying Plancherel's theorem; and thirdly, $J_nf_n=f_n$. It follows that $$ \frac{1}{2}\frac{\diff}{\dt} \left\Vert f_n(t)\right\Vert_{L^2}^2\leq \frac{1}{\pi}\bigg\langle \int_\mathbb{R}\frac{\partial_x\Delta_\alpha f_n}{1+\left(\Delta_\alpha f_n\right)^2}\diff \! \alpha , f_n\bigg\rangle.
$$ Now, by \cite[Section 2]{CCGRPS-JEMS2013}, the right-hand side is non-positive since, for any smooth function $f=f(t,x)$, \begin{multline*} \int_\mathbb{R}\left[\int_\mathbb{R}\frac{\partial_x\Delta_\alpha f}{1+\left(\Delta_\alpha f\right)^2}\diff \! \alpha \right]f(x)\diff \! x \\ =-\iint_{\mathbb{R}^2}\log\left[\sqrt{1+\frac{(f(t,x)-f(t,x-\alpha))^2}{\alpha^2}}\right]\diff \! x\diff \! \alpha. \end{multline*} The proof is complete. \end{proof} \subsection{Uniform estimates}\label{S:3.2} We have seen that the solutions~$f_n$ to the approximate systems~\eqref{A3} satisfy a uniform $L^2$-estimate (see~\eqref{n51}). We now have to prove uniform $L^2$-estimates for the derivatives $\la D\ra^{s,\phi}f_n$. Let us fix some notation. \begin{assumption}\label{A:kappa2} We consider a function $\kappa\colon[0,\infty) \to [1,\infty)$ satisfying Assumption~\ref{A:kappa} together with the following property: there exists $a\in [0,1/2)$ such that $$ \kappa(r)\ge \log (4+r)^a\quad \text{for all}\quad r\ge 0. $$ \end{assumption} Recall that, by definition, \begin{equation*} \phi(r)=\int_{0}^{\infty}\frac{1-\cos(h)}{h^2} \kappa\left(\frac{r}{|h|}\right) \diff \! h, \quad \text{for }r\ge 0. \end{equation*} Recall also that $\phi$ and $\kappa$ are equivalent: there are $c,C>0$ such that $$ \forall r\ge 0,\qquad c\kappa(r)\leq \phi(r)\leq C \kappa(r). $$ We denote by $\la D\ra^{s,\phi}$ the Fourier multiplier $\la D\ra^{s}\phi(\left\vert D_x\right\vert)$. With this notation, our goal in this paragraph is to obtain uniform estimates for the functions \begin{equation}\label{n67} A_n(t)=\big\Vert \la D\ra^{\frac{3}{2},\phi}f_n(t)\big\Vert_{L^2}^2, \qquad B_n(t)=\big\Vert \la D\ra^{2,\phi}f_n(t)\big\Vert_{L^2}^2. \end{equation} The following result is the key technical point in this paper. \begin{proposition}\label{P:3.3} Assume that $\kappa$ satisfies Assumptions~$\ref{A:kappa}$ and~$\ref{A:kappa2}$.
Then there exist two positive constants $C_1$ and $C_2$ such that, for every integer $n\in \mathbb{N}\setminus\{0\}$, \begin{equation}\label{Z21'} \frac{\diff}{\dt} A_n(t)+C_1\delta_n(t)B_n(t)\leq C_2 \left( \sqrt{A_n(t)}+A_n(t) \right)\mu_n(t) B_n(t), \end{equation} where \begin{align*} \delta_n(t)&=\left(1+ \log\left(4+\frac{B_n(t)}{A_n(t)+ \Vert f_0\Vert_{L^2}^2}\right)^{1-2a}\left( A_n(t)+ \Vert f_0\Vert_{L^2}^2\right)\right)^{-1},\\ \mu_n(t)&=\left(\kappa\left(\frac{B_n(t)}{A_n(t)}\right)\right)^{-1}. \end{align*} \end{proposition} \begin{proof} We split the analysis into two parts: \begin{enumerate} \item We begin by applying the nonlinear estimates proved in Section~\ref{S:2} to deduce a key inequality (see~\eqref{Z21}) of the form \begin{equation}\label{Z21ter} \frac{\diff}{\dt} \big\Vert \la D\ra^{\frac{3}{2},\phi}f_n\big\Vert_{L^2}^2 + \int_\mathbb{R} \frac{\big\vert\la D\ra^{2,\phi}f_n\big\vert^2}{1+(\partial_x f_n)^2}\diff \! x\leq C Q(f_n) \big\Vert\la D\ra^{2,\phi}f_n\big\Vert_{L^2}, \end{equation} where $Q(f_n)$ is bounded in a (strict) subspace of $L^2_{t,x}$ by $\big\Vert \la D\ra^{\frac{3}{2},\phi}f_n\big\Vert_{L^\infty_t(L^2_x)}$ and $\big\Vert \la D\ra^{2,\phi}f_n\big\Vert_{L^2_t(L^2_x)}$. \item Then we apply interpolation-type arguments to show that one can absorb the right-hand side of \eqref{Z21ter} by the left-hand side. \end{enumerate} We now proceed to the details and begin with the following result. \begin{lemma}\label{L:3.4} There exists a positive constant $C$ such that, for any $n\in\mathbb{N}\setminus\{0\}$, the approximate solution $f_n\in C^{1}([0,+\infty);H^{\infty}(\mathbb{R}))$ to~\eqref{A3} satisfies \begin{equation}\label{Z21} \frac{\diff}{\dt} \big\Vert \la D\ra^{\frac{3}{2},\phi}f_n\big\Vert_{L^2}^2 + \int_\mathbb{R} \frac{\big\vert\la D\ra^{2,\phi}f_n\big\vert^2}{1+(\partial_x f_n)^2}\diff \!
x\leq C Q(f_n) \big\Vert\la D\ra^{2,\phi}f_n\big\Vert_{L^2}, \end{equation} where \begin{align*} Q(f_n)&= \left(\left\Vert f_n\right\Vert_{\dot H^2}+\left\Vert f_n\right\Vert_{\dot H^{\frac{7}{4}}}^2\right) \big\Vert\la D\ra^{\frac{3}{2},\phi}f_n\big\Vert_{L^2} +\big\Vert\la D\ra^{\frac74,\phi}f_n\big\Vert_{L^2} \left\Vert f_n\right\Vert_{{H}^{\frac74}}\\ &\quad+\left(\left\Vert f_n\right\Vert_{H^{\frac{19}{12}}}^{3/2}+\left\Vert f_n\right\Vert_{\dot H^{\frac74}}^{1/2}\right) \big\Vert\la D\ra^{\frac{7}{4},\phi^{2}}f_n\big\Vert^{1/2}_{L^2} \left\Vert f_n\right\Vert_{\dot H^{\frac74}}. \end{align*} \end{lemma} \begin{proof} As we have seen in the proof of Lemma~$\ref{L:3.1}$, $f_n$ satisfies $J_nf_n=f_n$ and hence $f_n\in C^1([0,+\infty);H^\infty(\mathbb{R}))$. In particular, all computations below are easily justified. The proof is based on the nonlinear estimates established in the previous section, together with parabolic energy estimates for the Muskat equation, and a commutator estimate with the Hilbert transform. We multiply the equation $$ \partial_t f_n+\la D\ra f_n=J_n\mathcal{T}(f_n)f_n, $$ by $\la D\ra^{3,\phi^2}f_n$ and use the following consequences of Plancherel's identity: \begin{align*} &\big\langle\partial_t f_n,\la D\ra^{3,\phi^2} f_n\big\rangle=\frac{1}{2}\frac{\diff}{\dt} \big\Vert \la D\ra^{3/2,\phi}f_n\big\Vert_{L^2}^2,\\ &\big\langle \la D\ra f_n,\la D\ra^{3,\phi^2} f_n\big\rangle=\big\Vert \la D\ra^{2,\phi}f_n\big\Vert_{L^2}^2. \end{align*} Now, we need four elementary ingredients: $$ J_n^*=J_n,\quad J_nf_n=f_n \quad (\text{see}~\eqref{n52}),\quad \la D\ra^{3,\phi^2}=\la D\ra^{2,\phi}\la D\ra^{1,\phi},\quad \big(\la D\ra^{1,\phi}\big)^*=\la D\ra^{1,\phi}.
$$ Then we easily verify that \begin{align*} \big\langle J_n\mathcal{T}(f_n)f_n,\la D\ra^{3,\phi^2} f_n\big\rangle&=\big\langle\mathcal{T}(f_n)f_n,J_n\la D\ra^{3,\phi^2} f_n\big\rangle =\big\langle\mathcal{T}(f_n)f_n,\la D\ra^{3,\phi^2} J_nf_n\big\rangle\\ &=\big\langle\la D\ra^{1,\phi} \mathcal{T}(f_n)f_n,\la D\ra^{2,\phi}f_n\big\rangle. \end{align*} It follows that \begin{align*} \frac{1}{2}\frac{\diff}{\dt} \big\Vert \la D\ra^{3/2,\phi}f_n\big\Vert_{L^2}^2+ \big\Vert \la D\ra^{2,\phi}f_n\big\Vert_{L^2}^2 =\big\langle\la D\ra^{1,\phi} \mathcal{T}(f_n)f_n,\la D\ra^{2,\phi}f_n\big\rangle. \end{align*} Notice that this identity no longer involves the operator $J_n$, which explains why the subsequent estimates are independent of $n$. Now we commute the operators $\la D\ra^{1,\phi}$ and $\mathcal{T}(f_n)$ in the last term, and then expand the term $\mathcal{T}(f_n)(\la D\ra^{1,\phi}f_n)$ using~\eqref{n1}. This gives \begin{align*} &\frac{1}{2}\frac{\diff}{\dt} \big\Vert \la D\ra^{3/2,\phi}f_n\big\Vert_{L^2}^2 + \int_\mathbb{R} \frac{\big\vert \la D\ra^{2,\phi}f_n\big\vert^2}{1+(\partial_x f_n)^2} \diff \! x= (I)+(II)+(III)\quad \text{where}\\[1ex] &(I)\mathrel{:=}\big\langle V(f_n)\partial_x \la D\ra^{1,\phi}f_n, \la D\ra^{2,\phi}f_n\big\rangle,\\[1ex] &(II)\mathrel{:=}\big\langle R(f_n,\la D\ra^{1,\phi}f_n), \la D\ra^{2,\phi}f_n\big\rangle,\\[1ex] &(III)\mathrel{:=}\Big\langle\left[\la D\ra^{1,\phi},\mathcal{T}(f_n)\right]f_n, \la D\ra^{2,\phi}f_n\Big\rangle. \end{align*} It follows from Propositions \ref{Z18} and~\ref{Z19} that the terms $(II)$ and $(III)$ are estimated by the right-hand side of~\eqref{Z21}. So it remains only to estimate the term $(I)$. To do so, we claim that \begin{equation}\label{com1} (I)\leq C\left\Vert V(f_n)\right\Vert_{\dot{H}^1}\big\Vert \la D\ra^{2,\phi}f_n\big\Vert_{L^2}\big\Vert \la D\ra^{\frac{3}{2},\phi}f_n\big\Vert_{L^2}. \end{equation} Assume that this claim is true.
Then it will follow from~\eqref{com1} and Proposition~\ref{Z3'} that $(I)$ is bounded by the right-hand side of~\eqref{Z21}, which will in turn complete the proof. Now we must prove~\eqref{com1}. We begin by bringing out a commutator structure. To do so, we notice that, since $\partial_x=-\mathcal{H}\la D\ra$, one can rewrite the term $(I)$ in the form $$ \langle V(f_n)\partial_x \la D\ra^{1,\phi}f_n, \la D\ra^{2,\phi}f_n\rangle=-\langle V(f_n)\mathcal{H} \la D\ra^{2,\phi}f_n, \la D\ra^{2,\phi}f_n\rangle. $$ We then use $\mathcal{H}^*=-\mathcal{H}$ to infer that \begin{align}\nonumber (I)&= -\frac{1}{2}\Big\langle V(f_n)\mathcal{H} \la D\ra^{2,\phi}f_n, \la D\ra^{2,\phi}f_n\Big\rangle +\frac{1}{2} \Big\langle \mathcal{H} \big(V(f_n)\la D\ra^{2,\phi}f_n\big),\la D\ra^{2,\phi}f_n\Big\rangle\\ &= \frac{1}{2} \Big\langle \left[\mathcal{H}, V(f_n)\right ] \la D\ra^{2,\phi}f_n,\la D\ra^{2,\phi}f_n\Big\rangle.\label{Z106} \end{align} Consequently, to prove \eqref{com1}, it will be sufficient to establish that \begin{equation}\label{n2} \left\Vert \left[\mathcal{H}, V(f_n)\right ] \la D\ra^{2,\phi}(f_n)\right\Vert_{L^2}\lesssim\left\Vert V(f_n)\right\Vert_{\dot{H}^1}\big\Vert \la D\ra^{3/2,\phi}f_n\big\Vert_{L^2}. \end{equation} The latter inequality will be deduced from a commutator estimate of independent interest. We claim that \begin{equation}\label{Z2} \left\Vert \left[\mathcal{H}, g_1\right ](\partial_x g_2)\right\Vert_{L^2} \leq C\left\Vert g_1\right\Vert_{\dot H^{1}}\left\Vert g_2\right\Vert_{\dot H^{\frac{1}{2}}}. \end{equation} Notice that the wanted estimate~\eqref{n2} follows from~\eqref{Z2} applied with $g_1=V(f_n)$ and $g_2=\mathcal{H}\la D\ra^{1,\phi}f_n$ (since $\la D\ra^{2,\phi}=\partial_x \mathcal{H}\la D\ra^{1,\phi}$). It remains to prove the commutator estimate~\eqref{Z2}.
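Before doing so, let us check the factorization $\la D\ra^{2,\phi}=\partial_x\mathcal{H}\la D\ra^{1,\phi}$ on the Fourier side; we assume here the usual convention that $\mathcal{H}$ has symbol $-i\,\mathrm{sgn}(\xi)$, which is consistent with the identity $\partial_x=-\mathcal{H}\la D\ra$ used above. For $u\in\mathcal{S}(\mathbb{R})$,
$$
\widehat{\partial_x\mathcal{H}u}(\xi)=(i\xi)\big(-i\,\mathrm{sgn}(\xi)\big)\widehat{u}(\xi)=\left\vert \xi\right\vert\widehat{u}(\xi),
$$
so that $\partial_x\mathcal{H}=\la D\ra$ and hence $\partial_x\mathcal{H}\la D\ra^{1,\phi}=\la D\ra\la D\ra^{1,\phi}=\la D\ra^{2,\phi}$. Moreover, since $\mathcal{H}$ is an isometry of $L^2(\mathbb{R})$, this choice of $g_2$ gives
$$
\left\Vert g_2\right\Vert_{\dot H^{\frac{1}{2}}}=\big\Vert \la D\ra^{\frac{1}{2}}\mathcal{H}\la D\ra^{1,\phi}f_n\big\Vert_{L^2}=\big\Vert \la D\ra^{\frac{3}{2},\phi}f_n\big\Vert_{L^2},
$$
which is exactly the norm appearing in the right-hand side of~\eqref{n2}.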
Start from the definition of the Hilbert transform (see \eqref{n3}) and observe that $$ \left\Vert\left[\mathcal{H}, g_1\right ](\partial_x g_2)\right\Vert_{L^2}^2 =\frac{1}{\pi^2}\int_\mathbb{R}\left(\int_\mathbb{R} \frac{g_1(x)-g_1(y)}{x-y}\partial_y(g_2(x)-g_2(y)) \diff \! y\right)^2 \diff \! x. $$ Integrating by parts in $y$, this gives \begin{equation}\label{n4} \begin{aligned} \left\Vert\left[\mathcal{H}, g_1\right ](\partial_x g_2)\right\Vert_{L^2}^2 &\lesssim \int_\mathbb{R} \left(\int _\mathbb{R}\frac{|g_1(x)-g_1(y)|\,|g_2(x)-g_2(y)|}{|x-y|^2}\diff \! y\right)^2\diff \! x\\ &\quad+\int_\mathbb{R} \left(\int_\mathbb{R} \frac{\partial_yg_1(y)}{x-y}(g_2(x)-g_2(y)) \diff \! y\right)^2\diff \! x. \end{aligned} \end{equation} Using the Cauchy-Schwarz inequality, we estimate the first term in the right-hand side of \eqref{n4} by $$ \left( \int_\mathbb{R} \left(\int_\mathbb{R} \frac{|g_1(x)-g_1(y)|^2}{|x-y|^{1+3/2}}\diff \! y\right)^2\diff \! x\right)^{\frac{1}{2}} \left( \int_\mathbb{R} \left(\int_\mathbb{R} \frac{|g_2(x)-g_2(y)|^2}{|x-y|^{1+1/2}}\diff \! y\right)^2\diff \! x\right)^{\frac{1}{2}}. $$ Using the Lizorkin-Triebel norms introduced in~\eqref{n5}, the above product is in turn estimated from above by $$ \left\Vert g_1\right\Vert_{\dot{F}^{\frac34}_{4,2}}^2 \left\Vert g_2\right\Vert_{\dot{F}^{\frac14}_{4,2}}^2. $$ Now, the Sobolev embedding \eqref{FB} implies that the right-hand side above is bounded by the right-hand side of \eqref{Z2}, namely $$ \left\Vert g_1\right\Vert_{\dot{F}^{\frac34}_{4,2}} \left\Vert g_2\right\Vert_{\dot{F}^{\frac14}_{4,2}}\lesssim \left\Vert g_1\right\Vert_{\dot H^{1}}\left\Vert g_2\right\Vert_{\dot H^{\frac{1}{2}}}. $$ It remains to estimate the second term in the right-hand side of \eqref{n4}. We use H\"older's inequality again to estimate the latter term by $$ \left(\int_\mathbb{R}\left(\int_\mathbb{R}\frac{|\partial_yg_1(y)|^{3/2}}{|x-y|^{1-1/4}} \diff \! y\right)^2 \diff \!
x\right)^{\frac{2}{3}} \left( \int_\mathbb{R}\left(\int_\mathbb{R}\frac{|g_2(x)-g_2(y)|^3}{|x-y|^{1+1/2}}\diff \! y\right)^2 \diff \! x\right)^{\frac{1}{3}}. $$ Then, again, we use~\eqref{n5} and \eqref{FB} to estimate the above quantity from above by $$ \big\Vert \la D\ra^{-\frac{1}{4}}\big(|\partial_yg_1|^{\frac{3}{2}}\big)\big\Vert_{L^2}^{\frac{4}{3}} \Vert g_2\Vert _{\dot{F}^{\frac{1}{6}}_{6,3}}^2 \lesssim \Vert g_1\Vert _{\dot H^{1}}^2\Vert g_2\Vert _{\dot H^{\frac{1}{2}}}^2. $$ Here we have used the fact that \begin{equation*} \big\Vert \la D\ra^{-\frac{1}{4}}h\big\Vert_{L^2}\leq C \left\Vert h\right\Vert_{L^{\frac{4}{3}}},~~\forall h\in L^{\frac{4}{3}}(\mathbb{R}). \end{equation*} This completes the proof of~\eqref{Z2} and hence the proof of the lemma. \end{proof} We now continue with the interpolation arguments alluded to previously. We want to estimate the various norms which appear in $Q(f_n)$ in terms of $A_n$ and $B_n$. Indeed, given Lemma~\ref{L:3.4}, the proof of Proposition~\ref{P:3.3} reduces to establishing the following result. \begin{lemma}\label{L:3.5} Consider a real number $7/4\leq s\leq 2$. Then there exists a positive constant $C$ such that, for all $n\in \mathbb{N}\setminus\{0\}$ and for all $t\ge 0$, \begin{align} &\left\Vert f_n(t)\right\Vert_{\dot H^s}\leq C\mu_n(t)A_n(t)^{2-s} B_n(t)^{s-\frac{3}{2}},\label{Z20'}\\ &\big\Vert \la D\ra^{\frac{7}{4},\phi^{2}}f_n\big\Vert_{L^2}\leq \mu_n(t)^{-1} A_n(t)^{\frac{1}{4}}B_n(t)^{\frac{1}{4}},\label{n110} \end{align} and moreover, \begin{equation} \left\Vert\partial_x f_n(t)\right\Vert_{L^\infty}\leq C \log\left(4+\frac{B_n(t)}{A_n(t)+ \left\Vert f_0\right\Vert_{L^2}^2}\right)^{\frac{1-2a}{2}}\left( A_n(t)^{\frac{1}{2}} + \left\Vert f_0\right\Vert_{L^2}\right).\label{linf'} \end{equation} \end{lemma} \begin{proof} For ease of reading, we omit the index $n$. $i)$ Let $\lambda >0$.
By cutting the frequency space into low and high frequencies, at the frequency threshold $\left\vert \xi\right\vert=\lambda$, we obtain \begin{align*} \left\Vert f\right\Vert_{\dot H^s}^2&\lesssim \int_\mathbb{R} |\xi|^{2s}|\hat{f}|^2 \diff \! \xi=\int_{|\xi|\leq \lambda}|\xi|^{2s}|\hat{f}|^2 \diff \! \xi +\int_{|\xi|> \lambda} |\xi|^{2s}|\hat{f}|^2 \diff \! \xi\\ &\lesssim\int_{|\xi|\leq \lambda} \frac{\left\vert \xi\right\vert^{2s-3}}{\kappa(|\xi|)^2}|\xi|^{3} \phi(|\xi|)^2|\hat{f}|^2 \diff \! \xi +\int_{|\xi|> \lambda} \frac{\left\vert \xi\right\vert^{2s-4}}{\kappa(|\xi|)^2}|\xi|^{4}\phi(|\xi|)^2|\hat{f}|^2 \diff \! \xi, \end{align*} where we have used the equivalence $\phi\sim\kappa$. Now Plancherel's theorem implies that \begin{equation}\label{n92} \begin{aligned} &\int_{|\xi|\leq \lambda} |\xi|^{3}\phi(|\xi|)^2|\hat{f}|^2 \diff \! \xi\lesssim \big\Vert \la D\ra^{\frac{3}{2},\phi}f\big\Vert_{L^2}^2,\\ &\int_{|\xi|> \lambda} |\xi|^{4}\phi(|\xi|)^2|\hat{f}|^2 \diff \! \xi\lesssim \big\Vert \la D\ra^{2,\phi}f\big\Vert_{L^2}^2. \end{aligned} \end{equation} On the other hand, we claim that \begin{alignat}{4} &(i)\quad &&\frac{\left\vert \xi\right\vert^{2s-4}}{\kappa(|\xi|)^2}\leq \frac{\lambda^{2s-4}}{\kappa(\lambda)^2}\qquad &&\text{for}\quad&&\left\vert\xi\right\vert\ge \lambda,\\ &(ii)&& \frac{\left\vert \xi\right\vert^{2s-3}}{\kappa(|\xi|)^2}\leq \frac{\lambda^{2s-3}}{\kappa(\lambda)^2} &&\text{for}\quad &&\left\vert\xi\right\vert\leq \lambda.\label{n97} \end{alignat} The first claim follows directly from the facts that $\kappa$ is increasing and the assumption $s\leq 2$ (which implies that $2s-4\leq 0$). To prove the second claim, write $$ \frac{r^{2s-3}}{\kappa(r)^2}=\frac{ r^{2s-3}}{\log(4+r)^{2}}\times\frac{\log(4+r)^{2}}{\kappa(r)^2}\cdot $$ By assumption, $\log(4+r)/\kappa(r)$ is increasing. On the other hand, by computing the derivative, we verify that the other factor is also an increasing function (since we assume that $s\ge 7/4$). 
It follows that $r\mapsto r^{2s-3}/\kappa(r)^2$ is also increasing, which implies the second claim. It follows that $$ \left\Vert f\right\Vert_{\dot H^s}^2\lesssim \lambda^{2s-3} (\kappa(\lambda))^{-2} \Vert \la D\ra^{3/2,\phi}f\Vert_{L^2}^2+\lambda^{2s-4} (\kappa(\lambda))^{-2} \Vert\la D\ra^{2,\phi}f\Vert_{L^2}^2. $$ Choose $\lambda=\Vert \la D\ra^{2,\phi}(f)\Vert _{L^2}^2/\Vert \la D\ra^{3/2,\phi}(f)\Vert _{L^2}^2$ to obtain \begin{equation*} \left\Vert f\right\Vert_{\dot H^s}^2\lesssim \left(\kappa\left(B_n/A_n\right)\right)^{-2} A_n^{4-2s}B_n^{2s-3}, \end{equation*} which is equivalent to the desired result \eqref{Z20'}. $ii)$ As above, for $\lambda>0$ one has \begin{align*} \Vert\la D\ra^{\frac{7}{4},\phi^{2}}f_n\Vert_{L^2}^2&=\int |\xi|^{\frac{7}{2}}\phi(|\xi|)^4 |\hat f|^2\diff \! \xi\\ &\lesssim \lambda^{\frac{1}{2}}\phi(\lambda)^2 \int_{|\xi|\leq \lambda} |\xi|^{3}\phi(|\xi|)^2 |\hat f|^2\diff \! \xi\\ &\quad+\lambda^{-\frac{1}{2}}\phi(\lambda)^2 \int_{|\xi|> \lambda} |\xi|^{4}\phi(|\xi|)^2 |\hat f|^2\diff \! \xi. \end{align*} Since $\phi\sim\kappa$, we deduce that $$ \big\Vert \la D\ra^{\frac{7}{4},\phi^{2}}f_n\big\Vert_{L^2}^2 \lesssim \lambda^{1/2}\kappa(\lambda)^2 \big\Vert \la D\ra^{3/2,\phi}f\big\Vert _{L^2}^2 +\lambda^{-1/2}\kappa(\lambda)^2 \big\Vert \la D\ra^{2,\phi}f\big\Vert_{L^2}^2. $$ Now take $$ \lambda=\frac{\big\Vert\la D\ra^{2,\phi}f\big\Vert_{L^2}^2}{\big\Vert \la D\ra^{\frac{3}{2},\phi}f\big\Vert_{L^2}^2}, $$ to get the desired result~\eqref{n110}.
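For the reader's convenience, let us record the optimization behind this choice of $\lambda$. Writing $A=\big\Vert \la D\ra^{\frac{3}{2},\phi}f\big\Vert_{L^2}^2$ and $B=\big\Vert \la D\ra^{2,\phi}f\big\Vert_{L^2}^2$, the choice $\lambda=B/A$ balances the two terms of the previous bound, since by direct substitution
$$
\lambda^{1/2}\kappa(\lambda)^2 A=\lambda^{-1/2}\kappa(\lambda)^2 B=\kappa\left(\frac{B}{A}\right)^{2} A^{1/2}B^{1/2},
$$
whence $\big\Vert \la D\ra^{\frac{7}{4},\phi^{2}}f\big\Vert_{L^2}^2\lesssim \kappa(B/A)^{2} A^{1/2}B^{1/2}$, which is, up to the implicit constant, the square of~\eqref{n110} with $\mu_n=\kappa(B_n/A_n)^{-1}$.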
$iii)$ Starting from the inverse Fourier transform, using the Cauchy-Schwarz inequality together with estimates similar to~\eqref{n92}, we obtain \begin{align*} \Vert \partial_x f\Vert_{L^\infty}&\leq \int_\mathbb{R} |\xi| |\hat f| \diff \! \xi \\ &=\int_{|\xi|>\lambda} \kappa(|\xi|)^{-1} |\xi|^{-1} \kappa(|\xi|) |\xi|^{2}|\hat f| \diff \! \xi \\ &\quad +\int_{|\xi|\leq \lambda} \kappa(|\xi|)^{-1} (|\xi|+1)^{-\frac{1}{2}} \kappa(|\xi|) |\xi|(1+|\xi|)^{\frac{1}{2}}|\hat f| \diff \! \xi \\&\lesssim\left(\int_{|\xi|>\lambda} \frac{1}{|\xi|^{2}\kappa^2(|\xi|)} \diff \! \xi \right)^{\frac{1}{2}} \Vert \la D\ra^{2,\phi}f\Vert_{L^2}\\ &\quad+ \left(\int_{|\xi|\leq \lambda} \frac{1}{(|\xi|+1)\kappa^2(|\xi|)} \diff \! \xi \right)^{\frac{1}{2}} \left( \Vert \la D\ra^{3/2,\phi}f\Vert_{L^2}+ \Vert f\Vert_{L^2}\right). \end{align*} Now observe that $$ \int_{|\xi|>\lambda} \frac{1}{|\xi|^{2}\kappa^2(|\xi|)} \diff \! \xi \leq \frac{1}{\kappa^2(\lambda)} \int_{|\xi|>\lambda} \frac{1}{|\xi|^{2}} \diff \! \xi \leq \frac{2}{\kappa^2(\lambda)\lambda}\cdot $$ It remains to estimate the second integral. Remembering that $\kappa(r)\ge \log(4+r)^a$ by assumption, we begin by writing that $$ \frac{1}{(1+r)\kappa^2(r)}\leq \frac{4}{(4+r)\log(4+r)^{2a}}. $$ On the other hand, when $a>0$, with $\beta=(1-2a)/a$ we have $\beta> 0$ (since $a< 1/2$) and moreover $$ \frac{1}{(4+r)\log(4+r)^{2a}}=\frac{1}{a\beta}\frac{\diff}{\dr} \log(4+r)^{a\beta} $$ (the case $a=0$ is immediate, since then the integrand is bounded by $4/(4+r)$). Therefore, $$ \left(\int_{|\xi|\leq \lambda} \frac{1}{(|\xi|+1)\kappa^2(|\xi|)} \diff \! \xi\right)^{\frac{1}{2}} \lesssim \log(4+\lambda)^{\frac{1-2a}{2}}.
$$ We conclude that \begin{align*} \Vert \partial_x f\Vert_{L^\infty} &\lesssim \kappa(\lambda)^{-1}\lambda^{-1/2}\Vert \la D\ra^{2,\phi}f\Vert_{L^2}+ \log(4+\lambda)^{\frac{1-2a}{2}}\left( \Vert \la D\ra^{3/2,\phi}f\Vert_{L^2}+ \Vert f\Vert_{L^2}\right) \\ &\lesssim \log(4+\lambda)^{\frac{1-2a}{2}}\left(\lambda^{-1/2}\Vert \la D\ra^{2,\phi}f\Vert_{L^2}+\Vert \la D\ra^{3/2,\phi}f\Vert_{L^2}+ \Vert f\Vert_{L^2}\right). \end{align*} Remembering that $\Vert f\Vert_{L^2}\leq \Vert f_0\Vert_{L^2}$ (see~\eqref{n51}) and then choosing $\lambda$ such that $$ \lambda^{1/2}=\frac{\Vert \la D\ra^{2,\phi}f\Vert_{L^2}}{\Vert \la D\ra^{3/2,\phi}f\Vert_{L^2}+ \Vert f_0\Vert_{L^2}}, $$ we obtain~\eqref{linf'}. This completes the proof. \end{proof} Now the energy estimate~\eqref{Z21'} follows directly from Lemma~\ref{L:3.4} and Lemma~\ref{L:3.5}. \end{proof} For later purposes, we conclude this paragraph by recording a corollary of the inequalities used to prove Lemma~\ref{L:3.5}. \begin{corollary} Consider a function $f\in \mathcal{S}(\mathbb{R})$ and set $$ M=\left\Vert f\right\Vert_{\frac{3}{2},\frac{1}{3}}+\left\Vert f\right\Vert_{L^2}. $$ Then there holds \begin{equation}\label{Z103} \left\Vert f\right\Vert_{\dot H^2} +\left\Vert \mathcal{T}(f)f\right\Vert_{\dot H^1} \leq C(M+1)^2\log\left(4+\frac{\left\Vert f\right\Vert_{2,\frac{1}{3}}}{M}\right)^{-\frac{1}{6}} \left\Vert f\right\Vert_{2,\frac{1}{3}}, \end{equation} for some absolute constant $C$ independent of $M$. \end{corollary} \begin{proof} It follows from the proof of \eqref{linf'} with $\phi(r)=\log(4+r)^{\frac{1}{3}}$ that we have the following two estimates: \begin{align*} & \int|\xi| |\hat f(\xi)|\diff \!
\xi\lesssim \log\left(4+\frac{||f||_{2,\frac{1}{3}}}{M}\right)^{\frac{1}{6}}M,\\& ||f||_{\dot H^2}\lesssim \log\left(4+\frac{||f||_{2,\frac{1}{3}}}{M}\right)^{-1/3} ||f||_{2,\frac{1}{3}}. \end{align*} Therefore, by combining these with \eqref{Z101} and \eqref{Z102}, we get that \begin{align*} &||f||_{\dot H^2} +||\mathcal{T}(f)f||_{\dot H^1}\\&\lesssim \left(\left\Vert f\right\Vert_{\dot H^{\frac32}}+\left\Vert f\right\Vert_{\dot H^{\frac32}}^2+1+\log\left(4+\frac{||f||_{2,\frac{1}{3}}}{M}\right)^{\frac{1}{6}}M\right) \log\left(4+\frac{||f||_{2,\frac{1}{3}}}{M}\right)^{-\frac{1}{3}} ||f||_{2,\frac{1}{3}}\\&\lesssim (M+1)^2\log\left(4+\frac{||f||_{2,\frac{1}{3}}}{M}\right)^{-\frac{1}{6}}||f||_{2,\frac{1}{3}}, \end{align*} which is the desired result~\eqref{Z103}. \end{proof} \subsection{Uniform estimates for small initial data globally in time}\label{S:3.3} In this paragraph, we apply Proposition~\ref{P:3.3} to obtain uniform estimates globally in time, under a smallness assumption. \begin{proposition}\label{pro1} There exist two positive constants $c$ and $C$ such that the following property holds.
For all initial data $f_0$ in $\mathcal{H}^{\frac{3}{2},\frac{1}{3}}(\mathbb{R})$ satisfying \begin{equation}\label{nUG0} \left\Vert f_0\right\Vert_{\frac{3}{2},\frac13}\left(\left\Vert f_0\right\Vert_{L^2}^2+1\right) \leq c, \end{equation} where the semi-norm $\left\Vert \cdot\right\Vert_{\frac{3}{2},\frac{1}{3}}$ is as defined in~\eqref{n141}, and for every integer $n$ in $\mathbb{N}\setminus\{0\}$, the solution $f_n$ to the approximate Cauchy problem~\eqref{A3} satisfies \begin{equation}\label{nUG1} \sup_{t\in [0,+\infty)}\left\Vert f_n(t)\right\Vert_{\frac{3}{2},\frac13}\leq \left\Vert f_0\right\Vert_{\frac{3}{2},\frac13}, \end{equation} together with \begin{equation}\label{nUG2} \int_0^{+\infty}\bigg[\left\Vert \partial_tf_n\right\Vert_{\dot{H}^1}^2+\left\Vert f_n\right\Vert_{\dot{H}^2}^2 +\left\Vert\mathcal{T}(f_n)f_n\right\Vert_{\dot H^1}^2+\frac{ \left\Vert f_n\right\Vert_{2,\frac{1}{3}}^2}{\log(4+\left\Vert f_n\right\Vert_{2,\frac{1}{3}})^{\frac{1}{3}}}\bigg]\diff \! t \leq C\left\Vert f_0\right\Vert_{\frac{3}{2},\frac13}^2. \end{equation} Furthermore, there exists a subsequence of $(f_n)$ converging to a solution $f$ of the Muskat equation. Moreover, $f$ satisfies \eqref{nUG1} and \eqref{nUG2} with $f_n$ replaced by $f$. \end{proposition} \begin{proof} Fix $\kappa(r)=\big(\log(4+r)\big)^{\frac{1}{3}}$, define $\phi$ by~\eqref{n10} and then consider $A_n$ and $B_n$ as given by~\eqref{n67}. Notice that, since $\phi\sim \kappa$, we have $$ \big\Vert \la D\ra^{\frac{3}{2},\phi}g\big\Vert_{L^2}\sim\left\Vert g\right\Vert_{\frac{3}{2},\frac{1}{3}}. 
$$ The estimate~\eqref{Z21'} implies that \begin{equation}\label{Zglobal} \begin{aligned} \frac{\diff}{\dt} A_n(t)&+C_1\frac{B_n(t)}{1+ \log\left(4+\frac{B_n(t)}{A_n(t)+ \Vert f_0\Vert_{L^2}^2}\right)^{\frac{1}{3}} \left( A_n(t)+ \Vert f_0\Vert_{L^2}^2\right)}\\ &\leq C_2 \left( \sqrt{A_n(t)}+A_n(t) \right)\log\left(4+\frac{B_n(t)}{A_n(t)}\right)^{-\frac{1}{3}}B_n(t) \\ &\leq C_2 \left( \sqrt{A_n(t)}+A_n(t) \right)\log\left(4+\frac{B_n(t)}{A_n(t)+ \Vert f_0\Vert_{L^2}^2}\right)^{-\frac{1}{3}}B_n(t). \end{aligned} \end{equation} We want to absorb the right-hand side by the left-hand side. To do so, we shall prove that \begin{multline}\label{Z5} C_2 \left( \sqrt{A_n(t)}+A_n(t) \right)\log\left(4+\frac{B_n(t)}{A_n(t)+\Vert f_0\Vert_{L^2}^2}\right)^{-\frac{1}{3}} \\ \leq \frac{1}{2}\frac{C_1}{1+ \log\left(4+\frac{B_n(t)}{A_n(t)+ \Vert f_0\Vert_{L^2}^2}\right)^{\frac{1}{3}} \left( A_n(t)+ \Vert f_0\Vert_{L^2}^2\right)}\cdot \end{multline} Set $$ X=\sqrt{A_n(t)}+A_n(t),\quad Y=A_n(t)+\Vert f_0\Vert_{L^2}^2,\quad \lambda =\log\left(4+\frac{B_n(t)}{A_n(t)+ \Vert f_0\Vert_{L^2}^2}\right)^{\frac{1}{3}}. $$ Then \eqref{Z5} is equivalent to $$ C_2 X \leq \frac{C_1}{2}\frac{\lambda}{1+\lambda Y}. $$ The latter inequality will be satisfied provided that $2C_2X(Y+1)\leq C_1$. This means that \eqref{Z5} will be satisfied provided that \begin{align}\label{Z6} C_2 \left( \sqrt{A_n(t)}+A_n(t) \right)\left(A_n(t)+ \Vert f_0\Vert_{L^2}^2+1\right) \leq \frac{C_1}{2}\cdot \end{align} We have thus proved that if \eqref{Z6} is true for all time $t$, then \eqref{Z5} is also true for all time. On the other hand, let us assume that \eqref{Z5} is true for all time. Then \eqref{Zglobal} implies that \begin{equation}\label{nUG4} \frac{\diff}{\dt} A_n(t)+\frac{C_1}{2}\delta_n(t)B_n(t)\leq 0. \end{equation} This immediately implies that $A_n$ is decreasing, so that \eqref{Z6} remains true for all time provided that it holds at the initial time. 
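Let us spell out, for the reader's convenience, the elementary inequality behind this absorption step (this one-line verification is not needed in the sequel). Since $\lambda\geq \big(\log 4\big)^{\frac{1}{3}}\geq 1$, we have $$ \frac{\lambda}{1+\lambda Y}\geq \frac{\lambda}{\lambda+\lambda Y}=\frac{1}{1+Y}, $$ so the condition $2C_2X(Y+1)\leq C_1$ indeed implies $C_2 X \leq \frac{C_1}{2}\frac{\lambda}{1+\lambda Y}$.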
By an elementary continuity argument, one can make the previous reasoning rigorous. This proves that~\eqref{nUG1} holds provided that the assumption~\eqref{nUG0} is satisfied with $$ c=\frac{C_1}{8(C_1+C_2)}\cdot $$ Integrating~\eqref{nUG4} in time and noticing that $\delta_n(t)\gtrsim \log(4+B_n(t))^{-\frac{1}{3}}$, we also get that the function $t\mapsto B_n(t)\log(4+B_n(t))^{-\frac{1}{3}}$ is integrable on $[0,+\infty)$. By virtue of~\eqref{Z103} and using the equation $\partial_t f_n=-\la D\ra f_n+J_n\big(\mathcal{T}(f_n)f_n\big)$, we end up with~\eqref{nUG2}. Finally, by a standard compactness argument, there exists a subsequence of $(f_n)$ converging to a solution $f$ of the Muskat equation. \end{proof} \subsection{Uniform estimates for arbitrary initial data}\label{S:critical} We now prove uniform estimates for arbitrary initial data in $\mathcal{H}^{\frac{3}{2},\frac{1}{3}}(\mathbb{R})$, without any smallness assumption. This is the most delicate step. Indeed, as explained in Remark~\ref{R:1.4}, one important feature of this problem is that the estimates will not only depend on the norm of the initial data: they depend on the initial data themselves. As a consequence, we are forced to estimate the approximate solutions $f_n$ for a norm whose definition depends on the initial data. More precisely, we will estimate the norm $\big\Vert \la D\ra^{\frac{3}{2},\phi}f_n\big\Vert_{L^2}$ for some function $\phi$ depending on $f_0$. To define this function $\phi$, we begin with the following general lemma. \begin{lemma}\label{L:critical} For any nonnegative integrable function $\omega\in L^1(\mathbb{R})$, there exists a function $\eta\colon[0,\infty) \to [1,\infty)$ satisfying the following properties: \begin{enumerate} \item $\eta$ is increasing and $\lim\limits_{r\to \infty}\eta(r)=\infty$, \item $\eta(2r)\leq 2\eta(r)$ for any $r\geq 0$, \item $\omega$ satisfies the enhanced integrability condition: \begin{equation} \int_\mathbb{R} \eta(|r|) \omega(r) \diff \! 
r<\infty, \end{equation} \item moreover, the function $r\mapsto \eta(r)/\log(4+r)$ is decreasing on $[0,\infty)$. \end{enumerate} \end{lemma} \begin{proof} Consider a sequence of real numbers $(\alpha_k)_{k\ge 1}$ such that $\alpha_{1}\geq e^{5}$, $\alpha_{k}\geq \alpha_{k-1}^{10}$ for $k\geq 2$, and in addition \begin{equation} \forall k\ge 1,\qquad \int_{|r|\geq \alpha_k}\omega (r)\diff \! r\leq 2^{-k}. \end{equation} We set \begin{equation}\label{ODE1} \eta(r)=\left\{ \begin{aligned} &2 ~~~ &\text{if }&~~0\leq r< \alpha_1,\\ &k+1+\frac{\log(\frac{4+r}{4+\alpha_{k}})}{\log(\frac{4+\alpha_{k+1}}{4+\alpha_{k}})} \qquad&\text{if }&~~\alpha_{k}\leq r< \alpha_{k+1}. \end{aligned} \right. \end{equation} It is easy to check that $\eta\colon [0,\infty) \to [1,\infty)$ is an increasing function converging to $+\infty$ when $r$ goes to $+\infty$. Moreover, $\eta$ satisfies $\eta(2r)\leq 2\eta(r)$ for any $r\geq 0$. In addition, \begin{align*} \int \eta(|r|) \omega(r)\diff \! r&\leq \int_{|r|\leq \alpha_{1}} 2 \omega(r)\diff \! r+\sum_{k=1}^{\infty}(k+2)\int_{\alpha_{k}\leq |r|\leq \alpha_{k+1}}\omega(r)\diff \! r\\ &\leq 2 ||\omega||_{L^1}+\sum_{k=1}^{\infty}(k+2)2^{-k}\\&\leq 2 ||\omega||_{L^1}+C. \end{align*} It remains to prove that $r\mapsto \eta(r)/\log(4+r)$ is decreasing. To do so, write \begin{equation} \frac{\diff}{\dr}\left(\frac{\eta(r)}{\log(4+r)} \right)=\frac{1}{\log(4+r)} \left(\eta'(r)-\frac{1}{4+r}\frac{\eta(r)}{\log(4+r)}\right). 
\end{equation} So, for $0\leq r< \alpha_1$, \begin{equation} \frac{\diff}{\dr}\left(\frac{\eta(r)}{\log(4+r)} \right)<0, \end{equation} while for $\alpha_k\leq r<\alpha_{k+1}$ with $k\ge 1$, we have \begin{align*} \frac{\diff}{\dr}\left(\frac{\eta(r)}{\log(4+r)} \right)&\leq \frac{1}{(4+r)\log(4+r)^2} \left(\frac{\log(4+r)}{\log(\frac{4+\alpha_{k+1}}{4+\alpha_{k}})}-k-1\right)\\&\leq \frac{1}{(4+r)\log(4+r)^2} \left(\frac{\log(4+\alpha_{k+1})}{\log(\frac{4+\alpha_{k+1}}{4+\alpha_{k+1}^{1/10}})}-2\right)<0, \end{align*} where we have used $\alpha_{k+1}\geq e^{5\times 10^k}$. This proves that $r\mapsto \eta(r)/\log(4+r)$ is decreasing on $[0,\infty)$. The proof is complete. \end{proof} After this short d\'etour, we return to the main line of our development. Consider a function $f_0$ in $\mathcal{H}^{\frac{3}{2},\frac{1}{3}}(\mathbb{R})$. It immediately follows from the previous lemma and Plancherel's theorem that there exists a function $\tilde{k}\colon[0,\infty) \to [1,\infty)$ such that \begin{equation}\label{n131} \int_\mathbb{R} |\xi|^3 \log\big(4+|\xi|^2\big)^{\frac{2}{3}}(\tilde{k}(\xi))^2 \big\vert \hat f_0(\xi)\big\vert^2 \diff \! \xi <+\infty, \end{equation} and such that $\tilde{k}$ is increasing, $r\mapsto \tilde{k}(r)/\log(4+r)$ is decreasing, $\tilde{k}(2r)\leq 2\tilde{k}(r)$ and $\lim\limits_{r\to \infty}\tilde{k}(r)=\infty$. We now define a function $\kappa_0\colon [0,+\infty)\to[1,+\infty)$ by $$ \kappa_0(r)=\big(\log(4+r)\big)^{\frac{1}{3}} \, \tilde{k}(r), $$ together with the companion function $\phi_0$ defined by~\eqref{n10}, that is \begin{equation}\label{defi:phi0} \phi_0(\lambda)=\int_{0}^{\infty}\frac{1-\cos(h)}{h^2} \kappa_0\left(\frac{\lambda}{h}\right) \diff \! h, \quad \text{for }\lambda\ge 0. 
\end{equation} \begin{proposition} Consider initial data $f_0$ in $\mathcal{H}^{\frac{3}{2},\frac{1}{3}}(\mathbb{R})$ and denote by $\phi_0$ the function defined above in~\eqref{defi:phi0}. Set \begin{equation*} M_0=\big\Vert \la D\ra^{\frac{3}{2},\phi_0}f_0\big\Vert_{L^2}^2. \end{equation*} Then, there exists $T_0>0$ depending on $M_0$ and $\Vert f_0\Vert_{L^2}$ such that, for any integer $n$ in $\mathbb{N}\setminus\{0\}$, the solution $f_n$ to the approximate Cauchy problem~\eqref{A3} satisfies \begin{equation} \sup_{t\in [0,T_0]}\big\Vert \la D\ra^{\frac{3}{2},\phi_0}f_n(t)\big\Vert_{L^2}^2\leq 2M_0 \end{equation} and \begin{equation}\label{Z104} \int_0^{T_0}\left(\left\Vert \partial_tf_n\right\Vert_{\dot{H}^1}^2+\left\Vert f_n\right\Vert_{\dot{H}^2}^2+\left\Vert\mathcal{T}(f_n)f_n\right\Vert_{\dot H^1}^2+\frac{ ||f_n||_{2,\frac{1}{3}}^2}{\log(4+||f_n||_{2,\frac{1}{3}})^{\frac{1}{3}}}\right)\diff \! t \leq CM_0, \end{equation} for some absolute constant $C>0$ independent of $f_0$. Furthermore, there exists a subsequence of $(f_n)$ converging to a solution $f$ of the Muskat equation which satisfies~\eqref{nUG1} and \eqref{nUG2} with $f_n$ replaced by~$f$. \end{proposition} \begin{remark}\label{R:3.8} Notice that the time $T_0$ depends on $f_0$ and not only on~$\big\Vert f_0\big\Vert_{\mathcal{H}^{\frac{3}{2},\frac{1}{3}}}$. \end{remark} \begin{proof} We apply \eqref{Z21'} to the quantities \begin{align*} A_n(t)=\big\Vert \la D\ra^{\frac{3}{2},\phi_0}f_n(t)\big\Vert_{L^2}^2, \qquad B_n(t)=\big\Vert \la D\ra^{2,\phi_0}f_n(t)\big\Vert_{L^2}^2. 
\end{align*} This gives that \begin{equation}\label{Z21''} \frac{\diff}{\dt} A_n(t)+C_1\delta_n(t)B_n(t)\leq C_2 \left( \sqrt{A_n(t)}+A_n(t) \right)\mu_n(t) B_n(t), \end{equation} where \begin{align*} \delta_n(t)&=\left(1+ \left[\log\left(4+\frac{B_n(t)}{A_n(t)+ \Vert f_0\Vert_{L^2}^2}\right)\right]^{1-2a}\left( A_n(t)+ \Vert f_0\Vert_{L^2}^2\right)\right)^{-1},\\ \mu_n(t)&=\left(\log\left(4+\frac{B_n(t)}{A_n(t)}\right)\right)^{-\frac{1}{3}}\times \left(\tilde{k} \left(\frac{B_n(t)}{A_n(t)}\right)\right)^{-1}. \end{align*} Given $\varrho\ge 0$, define the function \begin{align*} \mathcal{E}\left(\varrho,\Vert f_0\Vert_{L^2}^2\right) =\sup_{r\ge 0} \Bigg\{&C_2 \frac{\left(\sqrt{\varrho}+\varrho\right)r}{\tilde{k}\left(\frac{r}{\varrho}\right)\left[\log\left(4+\frac{r}{\varrho}\right)\right]^{1/3}} \\ &-\frac{C_1}{2}\frac{r}{1+ \Big[\log\Big(4+\frac{r}{\varrho}\Big)\Big]^{1/3} \left(\varrho+ \Vert f_0\Vert_{L^2}^2\right)}\Bigg\}. \end{align*} Since $\tilde{k}$ is increasing, directly from the definition of $\mathcal{E}\left(\varrho,\Vert f_0\Vert_{L^2}^2\right)$, we verify that the function $\varrho\mapsto \mathcal{E}(\varrho,\Vert f_0\Vert_{L^2}^2)$ is increasing. On the other hand, since $\tilde{k}(\varrho)$ tends to $+\infty$ as $\varrho$ goes to $+\infty$, we verify that $$ \forall \varrho\ge 0,\qquad \mathcal{E}(\varrho,\Vert f_0\Vert_{L^2}^2)<\infty. $$ Thus, \begin{equation} \frac{\diff}{\dt} A_n(t)+\frac{C_1}{2}\delta_n(t)B_n(t)\leq \mathcal{E}\left(A_n(t),\Vert f_0\Vert_{L^2}^2\right), \end{equation} and {\em a fortiori} \begin{equation*} \frac{\diff}{\dt} A_n(t)\leq \mathcal{E}\left(A_n(t),\Vert f_0\Vert_{L^2}^2\right). \end{equation*} Then by standard arguments, one obtains \begin{equation} \sup_{t\in [0,T_0]}\big\Vert \la D\ra^{\frac{3}{2},\phi_0}f_n(t)\big\Vert_{L^2}^2\leq 2M_0 \end{equation} with \begin{equation*} T_0=\frac{M_0}{\mathcal{E}\left(2M_0,\Vert f_0\Vert_{L^2}^2\right)},\qquad M_0=\big\Vert \la D\ra^{\frac{3}{2},\phi_0}f_0\big\Vert_{L^2}^2. 
\end{equation*} Moreover, as in the proof of Proposition~\ref{pro1}, we also obtain \eqref{Z104} and a subsequence of $(f_n)$ converging to a solution $f$ of the Muskat equation. In addition, $f$ satisfies \eqref{nUG1} and \eqref{nUG2} with $f_n$ replaced by $f$. The proof is complete. \end{proof} \subsection{Uniqueness}\label{S:3.5} The following proposition implies that the solution of the Muskat equation is unique. \begin{proposition} Consider two solutions $f_1,f_2$ of the Muskat equation in $[0,T]\times\mathbb{R}$ (for some $T<\infty$), with initial data $f_{1,0},f_{2,0}$ respectively, satisfying \begin{equation}\label{Z105} \sup_{t\in [0,T]}\left\Vert f_k(t)\right\Vert_{\frac{3}{2},\frac13}^2+ \int_0^{T} \log\Big(4+\left\Vert f_k\right\Vert_{2,\frac{1}{3}}\Big)^{-\frac{1}{3}} ||f_k||_{2,\frac{1}{3}}^2\diff \! t \leq M<\infty,\quad k=1,2. \end{equation} Then the difference $g=f_1-f_2$ is estimated by \begin{equation} \label{Z107} \sup_{t\in [0,T]} \Vert g(t) \Vert_{\dot{H}^\frac{1}{2}}\leq \Vert g(0) \Vert_{\dot{H}^\frac{1}{2}}\exp\left(C(M)\sum_{k=1}^2\int_0^T\log\left(4+||f_k||_{2,\frac{1}{3}}\right)^{-\frac{1}{3}} ||f_k||_{2,\frac{1}{3}}^2 \diff \! t \right). \end{equation} \end{proposition} \begin{proof} Since $\partial_tf_k+\la D\ra f_k = \mathcal{T}(f_k)f_k$, it follows from the decomposition~\eqref{n1} of $\mathcal{T}(f_k)f_k$ that the difference $g=f_1-f_2$ satisfies \begin{align*} \partial_tg+\frac{\la D\ra g}{1+(\partial_xf_1)^2}&= V(f_1)\partial_x g+R(f_1,g)+\left(\mathcal{T}(f_2+g)-\mathcal{T}(f_2)\right)f_2. \end{align*} Take the $L^2$-scalar product of this equation with $\la D\ra g$ to get \begin{align*} \frac{1}{2} \frac{\diff}{\dt}\Vert g \Vert^{2}_{\dot{H}^\frac{1}{2}}+\int \frac{( \la D\ra g)^2}{1+(\partial_x f_{1})^2} \diff \! x &\leq \left|\big(V(f_1)\partial_x g,\la D\ra g\big)\right|+\left\Vert R(f_1,g)\right\Vert_{L^2}\left\Vert g\right\Vert_{\dot H^1}\\ &\quad+\left\Vert \left(\mathcal{T}(f_2+g)-\mathcal{T}(f_2)\right)f_2\right\Vert_{L^2}\left\Vert g\right\Vert_{\dot H^1}. 
\end{align*} The arguments used to establish the identity \eqref{Z106} also imply that $$ \left\vert \big(V(f_1)\partial_x g,\la D\ra g\big)\right\vert =\frac{1}{2}\left\vert \big( \big[ \mathcal{H},V(f_1)\big]\la D\ra g,\la D\ra g\big)\right\vert. $$ Thus, by combining the estimate \eqref{Z3} for $\left\Vert V(f)\right\Vert_{\dot{H}^1}$ together with the commutator estimate~\eqref{Z2} about the Hilbert transform, the bound~\eqref{Z14} for the remainder term and finally the following interpolation inequality (see \eqref{linf'}), $$ \left\Vert \partial_xf_{1}\right\Vert_{L^\infty}\leq C(M)\log\left(4+\left\Vert f_1\right\Vert_{2,\frac{1}{3}}\right)^{\frac{1}{6}}, $$ we end up with \begin{align*} \frac{\diff}{\dt}\Vert g \Vert^{2}_{\dot{H}^\frac{1}{2}}+&C(M)\log\Big(4+||f_1||_{2,\frac{1}{3}}\Big)^{-\frac{1}{3}} ||g||_{\dot H^1}^2\\&~~~\lesssim \left(\left\Vert f_1\right\Vert_{\dot{H}^2}+\left\Vert f_1\right\Vert_{\dot{H}^{\frac{7}{4}}}^2\right)\left\Vert g\right\Vert_{\dot H^{\frac{1}{2}}}\left\Vert g\right\Vert_{\dot H^{1}}+\left\Vert f_1\right\Vert_{\dot{H}^{\frac{7}{4}}}\Vert g\Vert _{\dot{H}^{\frac{3}{4}}}|| g||_{\dot H^1}\\ &\quad+\left\Vert \left(\mathcal{T}(f_2+g)-\mathcal{T}(f_2)\right)f_2\right\Vert_{L^2}\left\Vert g\right\Vert_{\dot H^1}. \end{align*} Now, directly from the definition of $\mathcal{T}(f)g$, we have $$ |\left(\mathcal{T}(f_2+g)-\mathcal{T}(f_2)\right)f_2(x)|\lesssim \int |\Delta_{\alpha} (\partial_xf_{2})(x)||\Delta_{\alpha} g(x)| \diff \! \alpha, $$ so, by H\"older's inequality, the $L^2$-norm of $(\mathcal{T}(f_2+g)-\mathcal{T}(f_2))f_2$ is estimated by \begin{align*} \left(\int\left(\int |\Delta_{\alpha}(\partial_xf_{2})(x)||\Delta_{\alpha} g(x)| \diff \! \alpha\right)^{2}\diff \! x\right)^{\frac{1}{2}} \leq \left\Vert (\partial_xf_{2})\right\Vert_{\dot{F}^{\frac{1}{2}}_{4,2}}\left\Vert g\right\Vert_{\dot{F}^{\frac{1}{2}}_{4,2}}\lesssim \left\Vert f_2\right\Vert_{\dot{H}^{\frac{7}{4}}}\Vert g\Vert _{\dot{H}^{\frac{3}{4}}}. 
\end{align*} Hence, we derive \begin{align*} \frac{\diff}{\dt}\Vert g \Vert^{2}_{\dot{H}^\frac{1}{2}}+&C(M)\log\Big(4+||f_1||_{2,\frac{1}{3}}\Big)^{-\frac{1}{3}} ||g||_{\dot H^1}^2\\&~~~\lesssim \left(\left\Vert f_1\right\Vert_{\dot{H}^2}+\left\Vert f_1\right\Vert_{\dot{H}^{\frac{7}{4}}}^2\right)\left\Vert g\right\Vert_{\dot H^{\frac{1}{2}}}\left\Vert g\right\Vert_{\dot H^{1}}+\left\Vert f_2\right\Vert_{\dot{H}^{\frac{7}{4}}}\Vert g\Vert _{\dot{H}^{\frac{3}{4}}}|| g||_{\dot H^1} \\&~~~\lesssim \left(\left\Vert f_1\right\Vert_{\dot{H}^2}+\left\Vert f_1\right\Vert_{\dot{H}^{\frac{7}{4}}}^2\right)\left\Vert g\right\Vert_{\dot H^{\frac{1}{2}}}\left\Vert g\right\Vert_{\dot H^{1}}+\left\Vert f_2\right\Vert_{\dot{H}^{\frac{7}{4}}}\Vert g\Vert _{\dot{H}^{\frac{1}{2}}}^{\frac{1}{2}}|| g||_{\dot H^1}^{\frac{3}{2}}. \end{align*} Interchanging the roles of $f_1$ and $f_2$, we also get a symmetric estimate. Then, by combining these two estimates, we get \begin{align*} \frac{\diff}{\dt}\Vert g \Vert^{2}_{\dot{H}^\frac{1}{2}}+&C(M)\bigg[ \log\Big(4+||f_1||_{2,\frac{1}{3}}\Big)^{-\frac{1}{3}}+\log\Big(4+||f_2||_{2,\frac{1}{3}}\Big)^{-\frac{1}{3}}\bigg] ||g||_{\dot H^1}^2\\ &~~~\lesssim \left(\left\Vert f_1\right\Vert_{\dot{H}^2}+\left\Vert f_1\right\Vert_{\dot{H}^{\frac{7}{4}}}^2\right)\left\Vert g\right\Vert_{\dot H^{\frac{1}{2}}}\left\Vert g\right\Vert_{\dot H^{1}}+\left\Vert f_2\right\Vert_{\dot{H}^{\frac{7}{4}}}\Vert g\Vert _{\dot{H}^{\frac{1}{2}}}^{\frac{1}{2}}|| g||_{\dot H^1}^{\frac{3}{2}}\\ &~~~\quad +\left(\left\Vert f_2\right\Vert_{\dot{H}^2}+\left\Vert f_2\right\Vert_{\dot{H}^{\frac{7}{4}}}^2\right)\left\Vert g\right\Vert_{\dot H^{\frac{1}{2}}}\left\Vert g\right\Vert_{\dot H^{1}}+\left\Vert f_1\right\Vert_{\dot{H}^{\frac{7}{4}}}\Vert g\Vert _{\dot{H}^{\frac{1}{2}}}^{\frac{1}{2}}|| g||_{\dot H^1}^{\frac{3}{2}}. 
\end{align*} By the interpolation inequality~\eqref{Z20'}, \begin{align*} &\left\Vert f_k\right\Vert_{\dot H^{\frac{7}{4}}}\lesssim_M \log\left(4+\left\Vert f_k\right\Vert_{2,\frac{1}{3}}\right)^{-\frac{1}{3}} \left\Vert f_k\right\Vert_{2,\frac{1}{3}}^{\frac{1}{2}},\\ &\left\Vert f_k\right\Vert_{\dot H^2}\lesssim_M \log\left(4+\left\Vert f_k\right\Vert_{2,\frac{1}{3}}\right)^{-\frac{1}{3}} \left\Vert f_k\right\Vert_{2,\frac{1}{3}}, \end{align*} hence \begin{align*} &\frac{\diff}{\dt}\Vert g \Vert^{2}_{\dot{H}^\frac{1}{2}}+C(M)\bigg[ \log\Big(4+||f_1||_{2,\frac{1}{3}}\Big)^{-\frac{1}{3}}+\log\Big(4+||f_2||_{2,\frac{1}{3}}\Big)^{-\frac{1}{3}}\bigg] \left\Vert g\right\Vert_{\dot H^1}^2\\ &\qquad\qquad\lesssim_M \log\left(4+\left\Vert f_1\right\Vert_{2,\frac{1}{3}}\right)^{-\frac{1}{3}} \left\Vert f_1\right\Vert_{2,\frac{1}{3}}\left\Vert g\right\Vert_{\dot H^{\frac{1}{2}}}\left\Vert g\right\Vert_{\dot H^{1}}\\ &\qquad\qquad\quad+\log\left(4+\left\Vert f_2\right\Vert_{2,\frac{1}{3}}\right)^{-\frac{1}{3}} \left\Vert f_2\right\Vert_{2,\frac{1}{3}}^{\frac{1}{2}}\left\Vert g\right\Vert _{\dot{H}^{\frac{1}{2}}}^{\frac{1}{2}}\left\Vert g\right\Vert_{\dot H^1}^{\frac{3}{2}}\\ &\qquad\qquad\quad+\log\left(4+\left\Vert f_2\right\Vert_{2,\frac{1}{3}}\right)^{-\frac{1}{3}} \left\Vert f_2\right\Vert_{2,\frac{1}{3}}\left\Vert g\right\Vert_{\dot H^{\frac{1}{2}}}\left\Vert g\right\Vert_{\dot H^{1}}\\ &\qquad\qquad\quad+\log\left(4+\left\Vert f_1\right\Vert_{2,\frac{1}{3}}\right)^{-\frac{1}{3}} \left\Vert f_1\right\Vert_{2,\frac{1}{3}}^{\frac{1}{2}}\left\Vert g\right\Vert _{\dot{H}^{\frac{1}{2}}}^{\frac{1}{2}}\left\Vert g\right\Vert_{\dot H^1}^{\frac{3}{2}}. 
\end{align*} Finally, by H\"older's inequality, \begin{align*} \frac{\diff}{\dt}\Vert g \Vert^{2}_{\dot{H}^\frac{1}{2}}\lesssim_M \bigg(\sum_{k=1}^2 \log\left(4+||f_k||_{2,\frac{1}{3}}\right)^{-\frac{1}{3}} \left\Vert f_k\right\Vert_{2,\frac{1}{3}}^2\bigg)\left\Vert g\right\Vert_{\dot H^{\frac{1}{2}}}^2, \end{align*} which in turn implies \eqref{Z107}. The proof is complete. \end{proof} \section*{Acknowledgments} \noindent Thomas Alazard acknowledges the support of the SingFlows project, grant ANR-18-CE40-0027 of the French National Research Agency (ANR). Quoc-Hung Nguyen is supported by the ShanghaiTech University startup fund.
\section{Introduction} This paper considers the long-standing problem of Bayesian variable selection in a linear regression model. Variable selection is a complicated task in high dimensional settings where the number of regression parameters $P$ is much larger than the number of observations $N$. In this context, it is crucial to introduce sparsity assumptions based on the prior knowledge that only a small number of regression parameters are significant. Using a sequence of observations from a linear regression model, the aims are \textit{(i)} to determine which components of the regression vector are active and explain the observations and \textit{(ii)} to estimate the regression vector. Many methods have been proposed to perform variable selection, see~\cite{ohara:sillanpaa:2009} for a review of Bayesian methods. Among the most popular are the penalized least squares estimators with an $\mathrm{L}_1$-norm penalization introduced by \cite{tibshirani:1996}, also known as the Least Absolute Shrinkage and Selection Operator (LASSO); see also {\em e.g.} \cite{bickel:ritov:tsybakov:2009,vandegeer:2008,bunea:tsybakov:wegkamp:2007} and the references therein. In a Bayesian framework, many approaches introduce prior distributions both on the regression vector and on some hyper-parameters. These methods then use a Gibbs sampler to alternately draw the regression vector and each hyper-parameter in order to explore the joint posterior distribution and to find regression vectors with high posterior probability, see \cite{west:2003,tan:li:stoica:2010,griffin:brown:2011}. These algorithms yield approximately sparse estimates of the regression vector, which means that many components are close to, but not equal to, zero. For example, \cite{casella:park:2008} proposed a Bayesian LASSO by interpreting the $\mathrm{L}_1$ penalization as a double exponential prior which shrinks regression parameters toward zero. 
Bayesian spike and slab models were first proposed by \cite{beauchamp:mitchell:1988}. The model used in \cite{beauchamp:mitchell:1988} introduces independent priors for the regression parameters which are mixtures between a uniform flat distribution (the slab) and a Dirac distribution at zero (the spike), yielding exactly sparse estimators. This model is extended in \cite{george:mcculloch:1993}, which uses a binary latent variable to locate the regression parameters that explain the observations. A normal distribution with high variance (the slab) is then associated with these parameters, and a normal distribution with very small variance (the spike) is associated with the other regression parameters. In \cite{ishwaran:rao:2005}, the scale of the mixture components is set through a prior distribution on the hyper-variance. See~\cite{malsinerwalli:wagner:2011} for a comparison of the different spike and slab priors which can be used, and particularly for a study of the differences between the priors with a Dirac spike and those with a Gaussian spike. These spike and slab models based on a (non-degenerate) Gaussian spike provide better results for high dimensional regression settings. However, they do not allow one to actually set some regression parameters to zero. In other Bayesian variable selection approaches, the prior distribution of the regression vector makes it difficult to sample from the corresponding posterior distribution. Therefore, Markov Chain Monte Carlo (MCMC) methods such as random walk Hastings Metropolis algorithms, block-Gibbs samplers which update a subset of regression parameters at each iteration, or Metropolis adjusted Langevin algorithm (MALA), have been widely used. In some cases, the prior distribution of the regression vector uses a penalization function yielding approximately sparse samples (see \cite{wipf:etal:2011} for a theoretical and empirical comparison of different penalization functions). 
\cite{lucka:2012} uses an $\mathrm{L}_1$-penalization and compares results obtained by different MCMC samplers. \cite{dalalyan:tsybakov:2012} introduces a smooth penalization function to obtain a differentiable posterior distribution which makes it possible to use MALA. In the case of a non-differentiable penalized posterior distribution, \cite{pereyra:2013} combines MALA with a proximal operator. Other MCMC approaches for Bayesian variable selection define a posterior distribution on the model space, where a model is a binary vector locating the active (non-null) components of the regression vector. The objective is then to explore this posterior distribution, which is equivalent to estimating probabilities of activation for each regression parameter. In~\cite{brown:fearn:vannucci:2001} for example, this exploration is performed with a Gibbs sampler. Variants and adaptive versions of the Gibbs sampler for this problem have been proposed in \cite{nott:kohn:2005,lamnisos:etal:2013}. Samples from the posterior distribution of the models are obtained in~\cite{shi:dunson:2011} and in \cite{schafer:chopin:2013} with particle filters. These methods are extended in~\cite{rigollet:tsybakov:2012} to obtain estimators of the regression vector using the mean square estimators associated with each model. The last class of MCMC methods for Bayesian variable selection is designed to obtain exactly sparse samples (with some components actually set to zero). These methods jointly sample a model and the regression parameters active in this model, see {\em e.g.} \cite{dellaportas:forster:ntzoufras:2002} and the references therein. The Reversible Jump MCMC (RJMCMC) is a popular algorithm introduced in \cite{green:1995} which produces a Markov chain evolving between spaces of different dimensions. The dimension of the sample varies at each iteration as active parameters are added to or discarded from the model. 
Each new sample is accepted or rejected using a Metropolis-Hastings step where the acceptance probability is adjusted to the transdimensional moves. See also~\cite{brooks:etal:2003,karagiannis:andrieu:2013} for efficient ways to implement RJMCMC. \cite{carlin:chib:1995} consider another setting that encompasses all the models jointly: at each iteration, pseudo-prior distributions are used to jointly sample regression parameters associated with all models. For high dimensional statistical problems, such a joint sampling is computationally too expensive. A more efficient algorithm is proposed in \cite{dellaportas:forster:ntzoufras:2002} which only involves the simulation of a new model and of the regression parameters corresponding to this newly sampled model. This method, called Metropolized Carlin and Chib (MCC), avoids the need to sample from all the pseudo-priors and can be implemented in practice, see also~\cite{petralias:dellaportas:2013}. \cite{ji:schmidler:2013} use an adaptive Metropolized algorithm to sample from a posterior distribution which is a mixture of a Dirac at zero and slab distributions. This algorithm samples independently each regression parameter according to an adaptive mixture of a Dirac at zero and a Gaussian distribution. It is therefore not appropriate for high dimensional settings as the proposal strategy does not take into account the target distribution. In this paper, we introduce a new MCMC algorithm, called Shrinkage-Thresholding MALA (STMALA), designed to sample sparse regression vectors by jointly sampling a model and a regression vector in this model. This algorithm, which is a transdimensional MCMC method, relies on MALA (see \cite{roberts:tweedie:1996b}). The proposal distribution of MALA is based on the computation of the gradient of the logarithm of the target distribution. 
In order to both deal with a non-differentiable target posterior distribution and to actually set some components to zero, we propose to combine MALA with a shrinkage-thresholding operator by: \begin{enumerate}[-] \item computing a noisy gradient step involving the continuously differentiable term of the logarithm of the target distribution; \item then applying a shrinkage-thresholding operator to ensure sparsity and shrink small values of the regression parameters toward zero. \end{enumerate} Such an algorithm is motivated by Bayesian variable selection with non-smooth priors. This algorithm can perform global moves from one model to a rather distant one, which makes it possible to explore high dimensional spaces efficiently (in comparison to local move algorithms). The geometric ergodicity of this new algorithm is proved for a large class of target distributions. To our knowledge, it is the first result providing a rate of convergence for a transdimensional MCMC algorithm (like RJMCMC and MCC); usually, only Harris recurrence is proved, see \cite{roberts:rosenthal:2006}. This paper is organized as follows. STMALA and its application to Bayesian variable selection is described in Section~\ref{sec:PMALA}. Different implementations are proposed in Section~\ref{sec:PMALA:variants}. The geometric ergodicity of this new sampler is addressed in Section~\ref{sec:ergodicity}. Numerical experiments on simulated and real data sets to assess the performance of STMALA are given in Section~\ref{sec:exp}. Finally, all the proofs are postponed to Section~\ref{PMALA:sec:proofs}. \section{The Shrinkage-Thresholding MALA algorithm} \label{sec:PMALA} This section introduces the Shrinkage-Thresholding MALA (STMALA) algorithm which is designed to sample from a target distribution defined on $\mathbb{R}^{P\times T}$, where $P, T \in \mathbb{N}^*$, under the sparsity assumption that a large number of rows of each sample should be null. 
Let $\mathcal{M} \ensuremath{\stackrel{\mathrm{def}}{=}}\{0,1\}^P$ be the set of binary vectors locating the non-zero rows of elements of $\mathbb{R}^{P\times T}$. For any $m = (m_1, \dots, m_P) \in \mathcal{M}$, set \begin{equation} \label{eq:def:Im} I_m \ensuremath{\stackrel{\mathrm{def}}{=}} \{i \in \{1, \cdots, P\};\; m_i = 1 \} \;. \end{equation} We consider target distributions on $\mathbb{R}^{P \times T}$ absolutely continuous with respect to the positive measure $\mathrm{d} \nu(x)$ given by \begin{align} \label{eq:def:nu} \mathrm{d} \nu(x) \ensuremath{\stackrel{\mathrm{def}}{=}} \sum_{m \in \mathcal{M}} \left(\prod_{i \notin I_m} \delta_{0}( \mathrm{d} x_{i\cdot}) \right) \ \left( \prod_{i \in I_m} \mathrm{d} x_{i \cdot}\right) \;, \end{align} where, for $x \in \mathbb{R}^{P \times T}$ and $1 \leq i \leq P$, $x_{i \cdot}$ is the $i$-th row of $x$. The STMALA algorithm is based on the MALA algorithm which proposes local moves using information about the gradient of the logarithm of the target density (when it is differentiable). Nevertheless, MALA is not designed to produce sparse samples. Therefore, we propose to combine a gradient step as in MALA with a shrinkage-thresholding step, which produces sparse matrices of $\mathbb{R}^{P \times T}$. This mechanism is followed by an accept-reject step to guarantee the convergence to the right target distribution. Before describing the algorithm, we introduce some notation. \paragraph{Notations} For any matrix $A \in \mathbb{R}^{\ell \times \ell'}$, $A_{ij}$ denotes the entry $(i,j)$ of the matrix $A$ and $A_{i \cdot}$ is the $i$-th row of $A$. For any $m=(m_1, \cdots, m_\ell) \in \{0,1\}^{\ell}$, let $|m| \ensuremath{\stackrel{\mathrm{def}}{=}} \sum_{i=1}^{\ell} m_i$ denote the number of positive entries. $A_{m \cdot}$ denotes the $|m| \times \ell'$ matrix obtained by extracting from $A$ the rows indexed by the active components of $m$. 
Similarly, $A_{\cdot m}$ for $m \in \{0,1\}^{\ell'}$ collects the columns of $A$ indexed by the active components in $m$. By convention, if $m$ in $\{0,1\}^{\ell}$ (resp. $\{0,1\}^{\ell'}$) is such that $|m|=0$, then $A_{m \cdot} = 0$ (resp. $A_{\cdot m}=0$). $A_{-m \cdot}$ denotes the $(\ell - |m|) \times \ell'$ matrix obtained by extracting from $A$ the rows indexed by $i \notin I_m$. Define the Frobenius norm $\|\cdot\|_2$, the $\mathrm{L}_{2,1}$-norm $\|\cdot\|_{2,1}$ and the $1$-norm $\|\cdot\|_1$ of an $\ell \times \ell'$ matrix $A$ as \[ \| A\|_{2} \ensuremath{\stackrel{\mathrm{def}}{=}} \left(\sum_{i=1}^{\ell} \sum_{j=1}^{\ell'} A_{i,j}^2\right)^{1/2} \;,\;\; \| A\|_{2,1} \ensuremath{\stackrel{\mathrm{def}}{=}} \sum_{i=1}^{\ell} \left(\sum_{j=1}^{\ell'} A_{i,j}^2\right)^{1/2}\quad\mbox{and}\;\; \| A\|_{1} \ensuremath{\stackrel{\mathrm{def}}{=}} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell'} | A_{i,j}|\;. \] \subsection{The STMALA algorithm} It is assumed that \begin{hypA}\label{hyp:definition:pi} $\pi \ \mathrm{d} \nu$ is the target distribution: $\sup_{\mathbb{R}^{P \times T}} \pi < \infty$ and there exists a continuously differentiable function $g: \mathbb{R}^{P \times T} \to \mathbb{R}$ and a measurable function $\bar g: \mathbb{R}^{P \times T} \to \mathbb{R}$ such that \[ \pi(x) \propto \exp\left\{- g(x) - \bar g(x)\right\} \;. \] \end{hypA} Let $\sigma >0$ be a fixed stepsize and $\Psi : \mathbb{R}^{P \times T} \to \mathbb{R}^{P \times T}$ be a shrinkage-thresholding operator. 
In this paper, three different operators are considered to sample sparse matrices (see Section~\ref{sec:PMALA:variants} for further comments on these operators): for any $\gamma>0$, any $1\leq i\leq P$ and any $1\leq j\leq T$, \begin{align*} \left( \Psi_1(u) \right)_{i,j} &= u_{i,j} \left(1-\frac{ \gamma}{ \|u_{i \cdot}\|_2} \right)_+\;, \\ \left( \Psi_2(u) \right)_{i,j} & = u_{i,j} \mathbf{1}_{ \|u_{i \cdot}\|_2 > \gamma}\;, \\ \left( \Psi_3(u) \right)_{i,j} &= u_{i,j} \left( 1 - \frac{\gamma^2}{\|u_{i \cdot}\|_2^2} \right)_+ \;, \end{align*} where for $a \in \mathbb{R}$, $a_+$ denotes the positive part of $a$: $a_+ \ensuremath{\stackrel{\mathrm{def}}{=}} \max(a,0)$. From a current state $X^{n}$, the algorithm proposes a new point $Z$ according to a proposal distribution $q_{\Psi}(X^n, \cdot)$ which can be seen as a noisy proximal gradient step: given the current value of the chain $X^n$, the candidate $Z$ is defined by \begin{equation} \label{eq:proposal:algo} Z = \Psi \left( X^n - \frac{\sigma^2}{2} \nabla g( X^n) + \sigma \Xi^{n+1} \right)\;, \end{equation} where $\Xi^{n+1}$ is an $\mathbb{R}^{P \times T}$-valued random matrix with independent and identically distributed (i.i.d.) standard Gaussian entries. This candidate is then accepted or rejected in an accept-reject step. If $\Psi$ is the identity operator, then the candidate becomes \begin{align*} Z = X^n - \frac{\sigma^2}{2} \nabla g(X^n) + \sigma \Xi^{n+1} \;, \end{align*} which is the proposal mechanism of MALA. STMALA is outlined in Algorithm~\ref{PMALA:alg:pmala}. It produces a sequence $(X^n)_{n \in \mathbb{N}}$, which is a Hastings-Metropolis Markov chain with proposal distribution $q_{\Psi}$ and target distribution $\pi \, \mathrm{d} \nu$. The expression of the transition density $q_{\Psi}$ is established in Section~\ref{sec:PMALA:variants} for different shrinkage-thresholding operators $\Psi$.
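For concreteness, the three operators and the proposal step~(\ref{eq:proposal:algo}) can be mimicked numerically as follows. This is a purely illustrative sketch, not part of the original presentation; the function names are ours.

```python
import numpy as np

def psi1(u, gamma):
    """L_{2,1} proximal operator (Prox): row-wise soft thresholding."""
    norms = np.linalg.norm(u, axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        factor = np.where(norms > 0, np.maximum(1.0 - gamma / norms, 0.0), 0.0)
    return u * factor

def psi2(u, gamma):
    """Hard thresholding (HT): keep only rows with L2 norm above gamma."""
    norms = np.linalg.norm(u, axis=1, keepdims=True)
    return u * (norms > gamma)

def psi3(u, gamma):
    """Soft thresholding with vanishing shrinkage (STVS, empirical Wiener)."""
    norms = np.linalg.norm(u, axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        factor = np.where(norms > 0, np.maximum(1.0 - gamma**2 / norms**2, 0.0), 0.0)
    return u * factor

def stmala_proposal(x, grad_g, sigma, gamma, psi, rng):
    """Candidate Z = Psi(x - sigma^2/2 * grad g(x) + sigma * Xi)."""
    xi = rng.standard_normal(x.shape)
    return psi(x - 0.5 * sigma**2 * grad_g(x) + sigma * xi, gamma)
```

Note that on active rows $\Psi_3$ always shrinks less than $\Psi_1$, since $1-\gamma^2/r^2 = (1-\gamma/r)(1+\gamma/r) \geq 1-\gamma/r$ for any row norm $r > \gamma$.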
We will also interpret these proposal distributions as mechanisms to sample both a binary vector $m \in \{0,1 \}^P$ and a $|m|\times T$ matrix. \begin{algorithm}[htbp] \caption{One iteration of the STMALA algorithm given $X^n$}\label{PMALA:alg:pmala} \begin{algorithmic}[1] \State \label{algo:step1} Draw a $P \times T$ matrix $\Xi^{n+1}$ with i.i.d. entries sampled from $\mathcal{N}(0,1)$. \State \label{algo:step2} Set $Z = \Psi \left( X^n - \frac{\sigma^2}{2} \nabla g( X^n) + \sigma \Xi^{n+1} \right)$. \State \label{algo:step3} Set \[ \alpha(X^n, Z) = 1 \wedge \frac{ \pi({Z}) \, q_{\Psi}(Z, X^n)}{\pi({X^n}) \, q_{\Psi}(X^n, Z)} \;. \] \State Draw $U \sim U(0,1)$. \If {$U\leq \alpha(X^n, Z)$} \State $X^{n+1} = Z$. \Else \State $X^{n+1} = X^{n}$. \EndIf \end{algorithmic} \end{algorithm} To motivate this framework, let us consider the Bayesian variable selection problem. \subsection{Application to Bayesian variable selection} \label{sec:BayesianSetup} Let $Y \in \mathbb{R}^{N \times T}$ be the observations modeled as \begin{align} \label{eq:model} Y = GX + \sqrt{\tau} E \;, \end{align} where $G \in \mathbb{R}^{N \times P}$ is a known gain matrix, $X \in \mathbb{R}^{P \times T}$ is the unknown regression matrix, and $E \in \mathbb{R}^{N \times T}$ is a noise matrix. It is assumed that the entries $E_{i,j}$ for $1\leq i\leq N$ and $1 \leq j \leq T$ are i.i.d. according to $\mathcal{N}(0,1)$. This Gaussian linear regression model appears in many different situations in modern applied statistics such as genomics or Magnetic Resonance Imaging (MRI) studies. For example, in the case of MRI, a stimulus is delivered to a patient and his brain activity is measured by magnetoencephalography and electroencephalography. The objective is then to retrieve the original signal $X$ (source amplitudes) using the measured signal $Y$. By Maxwell's equations, the signal measured by the sensors is a linear combination of the electromagnetic fields produced by all the sources. 
In this case, $N$ is the number of sensors, $T$ is the number of measurement times, $P$ is the number of sources and $G$ is the gain matrix modeling the electromagnetic properties of the brain. In high dimensional variable selection problems, the regression matrix $X$ has to be recovered under sparsity constraints. A sparse signal $X$ can equivalently be defined by \textit{(i)} a binary vector $m = (m_1, \cdots, m_P) \in \{0,1\}^P$ with the convention that $m_k =1$ if and only if $X_{k\cdot}$ is active, i.e., non-null; and \textit{(ii)} the matrix $X_{m\cdot}$, which collects the $|m|$ active rows of $X$. Hence, $m \in \mathcal{M}$ is a model, $|m|$ is the number of active rows, and $I_m$ given by (\ref{eq:def:Im}) is the set of indices corresponding to active rows. An algorithm to sample sparse matrices can thus be seen as a sampler exploring a posterior distribution absolutely continuous with respect to $\rmd \nu$ defined in~\eqref{eq:def:nu}. For any model $m \in \mathcal{M}$, denote by $S_m$ the subset of $\mathbb{R}^{P \times T}$ associated with $m$, defined by \begin{equation} \label{eq:def:sm} S_m \ensuremath{\stackrel{\mathrm{def}}{=}} \{z \in \mathbb{R}^{P \times T}, z_{i\cdot} \neq 0 \ \forall i \in I_m \ \text{and} \ z_{-m \cdot} = 0 \} \;. \end{equation} Then $(S_m)_{m \in \mathcal{M}}$ is a partition of $\mathbb{R}^{P \times T}$ into sets of positive $\nu$-measure. Sampling a distribution absolutely continuous with respect to $\rmd \nu$ on $\mathbb{R}^{P \times T}$ is equivalent to sampling a pair $(m, X_{m\cdot})$ where $m$ encodes the set of active rows and, given $m$, $X_{m\cdot} \in \mathbb{R}^{|m| \times T}$ collects the values of these rows.
Under the statistical model (\ref{eq:model}), for $m \in \mathcal{M}$ and $X \in S_m$, the likelihood of the observation $Y$ given $X$ is \begin{align*} \pi(Y \vert X) \ensuremath{\stackrel{\mathrm{def}}{=}} (2 \pi \tau)^{-NT/2} \exp \left(- \frac{1}{2\tau} \|Y - G_{\cdot m} X_{m\cdot}\|_2^2 \right) = (2 \pi \tau)^{-NT/2} \exp \left(- \frac{1}{2\tau} \|Y - GX\|_2^2 \right) \;. \end{align*} The sparsity constraint is expressed by a joint prior distribution: $\pi(X_{m\cdot} \vert m)$ is a prior distribution on $\mathbb{R}^{|m| \times T}$ conditionally to the model $m$, and $(\omega_m)_{m \in \mathcal{M}}$ is a prior distribution on $\mathcal{M}$, that is, $(\omega_m)_{m \in \mathcal{M}}$ is a non-negative sequence satisfying $ \sum_{m \in \mathcal{M}} \omega_m = 1$. An example of prior distribution is an $L_{2,1}$-penalty on the regression matrix: \[ \pi(X_{m \cdot} \vert m) \ensuremath{\stackrel{\mathrm{def}}{=}} \exp(-\lambda \|X_{m \cdot}\|_{2,1} - |m| \ln c_\lambda) \;, \] with $\lambda \geq 0$ and (see Lemma~\ref{PMALA:lem:calculnormalisation}) \begin{equation} \label{eq:definition:clambda} c_\lambda \ensuremath{\stackrel{\mathrm{def}}{=}} \left\{ \begin{array}{ll} 2 \pi^{T/2} (T-1)! \lambda^{-T} \left(\Gamma\left(T/2\right) \right)^{-1} & \text{if $\lambda >0$ \;,} \\ 1 & \text{if $\lambda =0$} \;, \end{array} \right. \end{equation} where $\Gamma$ is the standard Gamma function defined on $(0,+\infty)$ by $\Gamma:x\mapsto\int_0^{+\infty}t^{x-1}\mathrm{e}^{-t}\mathrm{d}t$. Therefore, the posterior density $\pi(x \vert Y)$ on $\mathbb{R}^{P \times T}$ is given by, for $m \in \mathcal{M}$ and $x \in S_m$, \begin{align} \label{eq:def:pi:rp} \pi(x \vert Y) \propto \omega_{m} \, c_\lambda^{-|m|} \, \exp\left(-\frac{1}{2 \tau} \|Y-G x \|_2^2 - \lambda \|x \|_{2,1} \right) \;.
\end{align} In this application, the target density is $x \mapsto \pi(x \vert Y)$ and it is proportional to $\exp\left\{- g(x) - \bar g(x) \right\}$, with, for any $m \in \mathcal{M}$ and $x \in S_m$, \begin{align*} g(x) = \frac{1}{2 \tau} \|Y-G x \|_2^2 \quad \textrm{and} \quad \bar g(x) = \lambda \|x \|_{2,1} - \log \left(\omega_{m} c_{\lambda}^{-|m|} \right)\;. \end{align*} \subsection{Partial updating} \label{sec:bPMALA} In high dimensional settings, STMALA may have difficulties in getting its proposed moves accepted. Following the idea introduced in~\cite{neal:roberts:2006}, we introduce in this section a variant of the algorithm in which only a fixed proportion of components of $X^n$ are updated at each iteration $n$. This is achieved by combining STMALA and a Gibbs sampler in a STMALA-within-Gibbs algorithm, called block-STMALA. This algorithm depends on a new parameter $\eta \in \{1, \cdots, P \}$, which specifies the number of rows to be updated at each iteration of the algorithm. Let $\eta$ be fixed. Denote by $\mathcal{B}_{\eta}$ the set of subsets of $\{1,\dots,P\}$ with exactly $\eta$ elements. The first step consists in choosing a subset $b \in \mathcal{B}_\eta$ uniformly at random. Then, given $b$, a STMALA algorithm is run with the conditional distribution of $x_{b\cdot}$ given the other components $x_{-b \cdot}$ under $\pi$, denoted by $\pi( x_{b\cdot} \vert x_{-b\cdot})$, as target distribution. For $b \in \mathcal{B}_\eta$, denote by $q_b$ the proposal transition density of this block-STMALA step, and by $\nabla_{b}g( x)$ the gradient of the function $x \mapsto g( x)$ with respect to $x_{b \cdot}$, \textit{i.e.} $\nabla_{b}g( x)=(\nabla g(x))_{b \cdot}$. The block-STMALA algorithm is summarized in Algorithm~\ref{bPMALA:alg:bpmala} with a block size set to $\eta$.
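The proposal step of this partial-updating scheme can be sketched as follows (an illustrative Python sketch of ours, using a hard-thresholding operator as an example $\Psi$; all function names are ours):

```python
import numpy as np

def hard_threshold(u, gamma):
    """Row-wise hard thresholding, used here as an example Psi."""
    return u * (np.linalg.norm(u, axis=1, keepdims=True) > gamma)

def block_proposal(x, grad_g, sigma, gamma, eta, rng, psi=hard_threshold):
    """One block-STMALA proposal: draw a block b of eta rows uniformly at
    random (uniform over B_eta), move only those rows with a
    shrinkage-thresholded gradient step, and copy the remaining rows
    of the current state."""
    b = rng.choice(x.shape[0], size=eta, replace=False)
    z = x.copy()
    xi = rng.standard_normal((eta, x.shape[1]))
    z[b] = psi(x[b] - 0.5 * sigma**2 * grad_g(x)[b] + sigma * xi, gamma)
    return z, b
```

The candidate is then accepted or rejected with the block acceptance ratio $\alpha_b$ of Algorithm~\ref{bPMALA:alg:bpmala}.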
\begin{algorithm}[htbp] \caption{One iteration of the block-STMALA algorithm given $X^n$}\label{bPMALA:alg:bpmala} \begin{algorithmic}[1] \State Select uniformly $b \in \mathcal{B}_{\eta}$. \State Draw an $\eta \times T$ matrix $\Xi^{n+1}$ with i.i.d. entries sampled from $\mathcal{N}(0,1)$. \State \label{proposal:bPMALA} Define $Z$: set $Z_{-b\cdot} = X^n_{-b\cdot}$ and $Z_{b\cdot} = \Psi \left( X^n_{b\cdot} - \frac{\sigma^2}{2} \nabla_b g(X^n) + \sigma \Xi^{n+1} \right)$. \State Set \[ \alpha_b(X^n, Z) = 1 \wedge \frac{ \pi({Z}) \, q_{b}(Z_{b\cdot}, X^n_{b\cdot})}{\pi({X^n}) \, q_{b}(X^n_{b\cdot}, Z_{b\cdot})} \;. \] \State Draw $U \sim U(0,1)$. \If {$U\leq \alpha_b(X^n, Z)$} \State $ X^{n+1} = Z$. \Else \State $X^{n+1} = X^{n}$. \EndIf \end{algorithmic} \end{algorithm} The transition kernel associated with the block-STMALA algorithm is given by \begin{align*} P_{\textrm{block}} \ensuremath{\stackrel{\mathrm{def}}{=}} {P\choose \eta}^{-1} \sum_{b \in \mathcal{B}_{\eta}} P_b \;, \end{align*} where, for any $b \in \mathcal{B}_{\eta}$, $P_{b}$ is given by \begin{align*} P_b(x, \mathrm{d} z) \ensuremath{\stackrel{\mathrm{def}}{=}} \left( \prod_{i \notin b} \delta_{x_{i \cdot}} (\mathrm{d} z_{i \cdot}) \right) \ \left( \alpha_b(x, z) q_b(x_{b \cdot}, \mathrm{d} z_{b \cdot}) + \delta_{x_{b \cdot}}(\mathrm{d} z_{b \cdot}) \int (1-\alpha_b(x, \tilde{z})) q_b(x_{b \cdot}, \mathrm{d} \tilde{z})\right)\;. \end{align*} Note that for any $b \in \mathcal{B}_{\eta}$, the target density $\pi$ is invariant with respect to $P_b$, so that $\pi$ is also invariant with respect to $P_{\textrm{block}}$. \section{Shrinkage-Thresholding operators for STMALA} \label{sec:PMALA:variants} We consider in turn three different shrinkage-thresholding operators $\Psi$.
For each of them, we provide an explicit expression of the proposal distribution $q_\Psi$ and show that this distribution is equivalent to \textit{(i)} first sampling the indices of the active rows of the candidate matrix by sampling a binary vector $m \in \mathcal{M}$; and \textit{(ii)} sampling a matrix in $\mathbb{R}^{|m| \times T}$. Figure~\ref{fig:toyex:fct_seuil:d1} displays these operators in the case $P=T=1$. \begin{figure}[ht!] \begin{center} \includegraphics[height=6cm,width=.8\linewidth]{fct_seuil_d1_bis} \end{center} \caption{Shrinkage-Thresholding functions associated with the $L_{2,1}$ proximal operator (Prox - left), the hard thresholding operator (HT - center) and the soft thresholding operator with vanishing shrinkage (STVS - right) in one dimension.} \label{fig:toyex:fct_seuil:d1} \end{figure} \subsection{The $L_{2,1}$ proximal operator} A first idea is to consider the $L_{2,1}$ proximal operator $\Psi_1: \mathbb{R}^{P \times T} \to \mathbb{R}^{P \times T}$ defined componentwise by \begin{align} \label{PMALA:eq:prox:21} \left(\Psi_1(u)\right)_{i,j} &= u_{i,j} \left(1-\frac{ \gamma}{ \|u_{i \cdot}\|_2} \right)_+\;, \end{align} for some (fixed) positive parameter $\gamma$. The function $u \mapsto \Psi_1(u)$ is displayed in Figure~\ref{fig:toyex:fct_seuil:d1}[left] in the case $P=T=1$. When $g$ is a continuously differentiable convex function such that $\nabla g$ is $L_g$-Lipschitz, it is known (see e.g. \cite[Theorem 3.1]{beck:teboulle:2009}, or~\cite{parikh:boyd:2013}) that the deterministic sequence $(x^n)_{n}$ defined by $x^{n+1} = \Psi_1(x^n - \sigma^2 \nabla g(x^n)/2)$ for some fixed $\sigma$ such that $\sigma^2/2 \in (0,L_g^{-1}]$ converges to a minimum of $x \mapsto g(x) + (2\gamma / \sigma^2) \|x\|_{2,1}$.
In the case when $\pi(x) \propto \exp(-(g(x)+ (2\gamma / \sigma^2) \|x\|_{2,1}))$, this remark gives insight into the proposal mechanism (\ref{eq:proposal:algo}) of STMALA and shows that it can be read as an extension of MALA to non-differentiable target densities: the proposed sample $Z=\Psi_1(X^n - \sigma^2 \nabla g(X^n)/2+\sigma \Xi^{n+1})$ is obtained by moving from the current sample $X^n$ to a point which is a sparse perturbation of the point $\Psi_1(X^n - \sigma^2 \nabla g(X^n)/2)$, which has higher probability under $\pi$ than $X^n$ (as soon as $\sigma^2/2 \leq L_g^{-1}$). Therefore, our proposal mechanism can be seen as one iteration of a stochastic $L_{2,1}$-gradient proximal algorithm designed to converge to the minima of $x \mapsto g(x) + (2 \gamma/ \sigma^2)\|x\|_{2,1}$. We now provide an explicit expression of the proposal distribution $q_{\Psi_1}$, which is required in order to compute the acceptance probability in Algorithm~\ref{PMALA:alg:pmala}. Lemma~\ref{PMALA:lem:propnorm21} applied with $\mu=X^n - \frac{\sigma^2}{2} \nabla g(X^n)$ answers the question. \begin{lemma} \label{PMALA:lem:propnorm21} Let $\mu \in\mathbb{R}^{P\times T}$ and positive constants $\gamma, \sigma >0$. Set $z \ensuremath{\stackrel{\mathrm{def}}{=}} \Psi_1( \mu + \sigma \xi )$ where $\xi \in\mathbb{R}^{P\times T}$ is a matrix of independent standard Gaussian random variables.
The distribution of $z \in\mathbb{R}^{P\times T}$ is given by \begin{align} \label{eq:proposal:L21gradientproximal} \sum_{m \in \mathcal{M}} \left( \prod_{i \notin I_m} p( \mu_{i\cdot}) \, \delta_0( \mathrm{d} z_{i\cdot}) \right) \ \left( \prod_{i \in I_m} f( \mu_{i\cdot}, z_{i\cdot}) \mathrm{d} z_{i\cdot} \right)\;, \end{align} where for any $c\in \mathbb{R}^T$ and $z \in \mathbb{R}^T \setminus \{0 \}$ \begin{align*} p(c) &\ensuremath{\stackrel{\mathrm{def}}{=}} \mathbb{P}\left\{\|c+\xi\|_2 \leq \gamma\right\}\;,\; \mathrm{with}\; \xi\sim\mathcal{N}(0, \sigma^2 I_T)\;,\\ f(c, z) &\ensuremath{\stackrel{\mathrm{def}}{=}}\left( 2 \pi \sigma^2 \right)^{-T/2}\exp \left(-\frac{1 }{2\sigma^2}\left\| \left( 1 + \frac{\gamma}{\|{z}\|_2} \right){z}-c\right\|^2_2 \right)\left(1+\frac{\gamma}{\|{z}\|_2}\right)^{T-1}\;. \end{align*} \end{lemma} Lemma~\ref{PMALA:lem:propnorm21} is proved in Section~\ref{PMALA:sec:proofs}. It implies that the proposal distribution $q_{\Psi_1}(x,z)$ is the mixture (\ref{eq:proposal:L21gradientproximal}) when $\mu = x - \frac{\sigma^2}{2}\nabla g ({x})$. 
This proposal distribution is equivalent to sampling a new binary vector $m'=(m'_1, \cdots, m'_P) \in \mathcal{M}$ conditionally to $x$; and then sampling a new matrix with non-null rows in $\mathbb{R}^{|m'| \times T}$ conditionally to $(m',x)$ as follows: \begin{enumerate}[(i)] \item sample independently the components $(m'_i, i \in \{1, \cdots, P \})$ such that $m'_i$ is a Bernoulli random variable with success parameter \begin{align*} 1- \mathbb{P}\left( \Big\| \left( x - \frac{\sigma^2}{2} \nabla g( x) \right)_{i\cdot} + \xi \Big\|_2 \leq \gamma \right) \quad\mbox{where}\;\; \xi \sim \mathcal{N}(0, \sigma^2 I_T) \;; \end{align*} \item for $i \notin I_{m'}$, set $z_{i\cdot} = 0$; conditionally to $(m', x)$, sample independent random rows such that for any $i \in I_{m'}$, the distribution of $z_{i \cdot}$ is proportional to \begin{align*} \exp \left(-\frac{1 }{2\sigma^2}\left\| \left( 1 + \frac{\gamma }{ \|z_{i\cdot}\|_2} \right){z_{i\cdot}}-\left(x - \frac{\sigma^2}{2} \nabla g(x) \right)_{i\cdot}\right\|^2_2 \right)\left(1+\frac{\gamma}{ \|z_{i\cdot}\|_2}\right)^{T-1}\;. \end{align*} \end{enumerate} \bigskip Other gradient-proximal operators could be considered to define the shrinkage-thresholding operator: for any $\gamma >0$ and any convex function $h:\mathbb{R}^{P\times T} \to \mathbb{R}$ set \begin{equation} \label{eq:def:proximal} \Psi(u) \ensuremath{\stackrel{\mathrm{def}}{=}} \mathrm{argmin}_{x\in\mathbb{R}^{P \times T}} \left( h(x) + \frac{1}{\sigma^2} \|x - u\|^2_2 \right) \;. \end{equation} This operator is such that the deterministic sequence given by $x^{n+1} = \Psi(x^n - \sigma^2 \nabla g(x^n)/2)$ converges to a minimum of $x \mapsto g(x) + h(x)$ (see e.g. \cite{beck:teboulle:2009}, or~\cite{parikh:boyd:2013}). Note that the definition~(\ref{PMALA:eq:prox:21}) corresponds to the case $h(x) = (2 \gamma/ \sigma^2) \|x\|_{2,1}$.
When $\pi(x) \propto \exp(-(g(x) + \bar g(x)))$ with $\bar g$ convex, it is natural to choose $h =\bar g$ as soon as the proposal distribution $q_\Psi$ has an explicit expression. \subsection{The hard thresholding operator} \label{sec:HTMALA} Another choice of operator for Algorithm~\ref{PMALA:alg:pmala} is the hard thresholding operator $\Psi_2: \mathbb{R}^{P \times T} \to \mathbb{R}^{P \times T}$ defined componentwise by \begin{equation} \label{eq:HardThreshold:DefOperator} \left(\Psi_2(u) \right)_{i,j} \ensuremath{\stackrel{\mathrm{def}}{=}} u_{i,j} \mathbf{1}_{ \|u_{i \cdot}\|_2 > \gamma} \;. \end{equation} The function $u \mapsto \Psi_2(u)$ is displayed in Figure~\ref{fig:toyex:fct_seuil:d1}[center] in the case $P=T=1$. Compared to the $L_{2,1}$ proximal operator (\ref{PMALA:eq:prox:21}), this operator avoids shrinkage of the active rows caused by the proximal operator. Lemma~\ref{HTMALA:lem:prop} applied with $\mu = X^n - \frac{\sigma^2}{2} \nabla g(X^n)$ gives the expression of the transition density $q_{\Psi_2}$ in this case. \begin{lemma} \label{HTMALA:lem:prop} Let $\mu \in\mathbb{R}^{P\times T}$ and positive constants $\gamma, \sigma >0$. Set $z \ensuremath{\stackrel{\mathrm{def}}{=}} \Psi_2(\mu + \sigma \xi )$ where $\xi \in\mathbb{R}^{P\times T}$ is a matrix of independent standard Gaussian random variables. The distribution of $z \in\mathbb{R}^{P\times T}$ is given by \begin{align*} \sum_{m \in \mathcal{M}} \left( \prod_{i \notin I_m} p(\mu_{i\cdot}) \, \delta_0( \mathrm{d} z_{i\cdot}) \right) \ \left( \prod_{i \in I_m} f_{ht}( \mu_{i\cdot}, z_{i\cdot}) \mathrm{d} z_{i\cdot} \right)\;, \end{align*} where for any $c\in \mathbb{R}^T$ and $z \in \mathbb{R}^T$ \begin{align*} f_{ht}(c, z) &\ensuremath{\stackrel{\mathrm{def}}{=}}\left( 2 \pi \sigma^2 \right)^{-T/2}\exp \left(-\frac{1 }{2\sigma^2}\left\| z-c\right\|^2_2 \right) \mathbf{1}_{\|z\|_2 > \gamma}\;, \end{align*} and $c \mapsto p(c)$ is defined in Lemma~\ref{PMALA:lem:propnorm21}.
\end{lemma} The proof of Lemma~\ref{HTMALA:lem:prop} follows the same lines as the proof of Lemma~\ref{PMALA:lem:propnorm21} and is omitted. Here again, the proposal distribution can be read as sampling first the indices of the active rows in the candidate matrix by sampling a binary vector $m' \in \mathcal{M}$ (with the same sampling mechanism as with the $L_{2,1}$ gradient-proximal operator (\ref{PMALA:eq:prox:21}) - see Lemma~\ref{PMALA:lem:propnorm21}); and then, conditionally to $(m',x)$, sampling independently the active rows of the candidate matrix with a truncated Gaussian distribution. \subsection{A soft thresholding function with vanishing shrinkage} \label{sec:STMALA} The hard thresholding operator $\Psi_2$ proposed in Section~\ref{sec:HTMALA} avoids the shrinkage of the proposed active rows but also prevents these rows from having a $\mathrm{L}_2$-norm lower than a given threshold. The efficiency of STMALA with $\Psi_2$ as shrinkage-thresholding operator highly depends on the choice of the threshold, as illustrated in Section~\ref{sec:exp}. To overcome this difficulty, a soft thresholding operator with a vanishing shrinkage can be used. An example of such an operator, known as the empirical Wiener operator (see~\cite{siedenburg:2012}), is defined componentwise as follows: for some $\gamma>0$, $\Psi_3: \mathbb{R}^{P \times T} \to \mathbb{R}^{P \times T}$ is given by \begin{align} \left( \Psi_3(u) \right)_{i,j} &\ensuremath{\stackrel{\mathrm{def}}{=}} u_{i,j} \left( 1 - \frac{\gamma^2}{\|u_{i \cdot}\|_2^2} \right)_+ \;. \end{align} Figure~\ref{fig:toyex:fct_seuil:d1}[right] displays $u \mapsto \Psi_3(u)$ when $P=T=1$. Lemma~\ref{STMALA:lem:prop}, applied with $\mu = X^n - \frac{\sigma^2}{2} \nabla g(X^n)$, gives the expression of the transition density $q_{\Psi_3}$. \begin{lemma} \label{STMALA:lem:prop} Let $\mu \in\mathbb{R}^{P\times T}$ and positive constants $\gamma, \sigma >0$. 
Set $z \ensuremath{\stackrel{\mathrm{def}}{=}} \Psi_3(\mu + \sigma \xi )$ where $\xi \in\mathbb{R}^{P\times T}$ is a matrix of independent standard Gaussian random variables. The distribution of $z \in\mathbb{R}^{P\times T}$ is given by \begin{align*} \sum_{m \in \mathcal{M}} \left( \prod_{i \notin I_m} p(\mu_{i\cdot}) \, \delta_0( \mathrm{d} z_{i\cdot}) \right) \ \left( \prod_{i \in I_m} f_{\textrm{st}}( \mu_{i\cdot}, z_{i\cdot}) \mathrm{d} z_{i\cdot} \right)\;, \end{align*} where for any $c\in \mathbb{R}^T$, $z \in \mathbb{R}^T \setminus \{0 \}$, $u>0$ \begin{align*} f_{st}(c, z) &\ensuremath{\stackrel{\mathrm{def}}{=}}\left( 2 \pi \sigma^2 \right)^{-T/2} \ \left( g\left(\frac{\gamma^2}{\|z\|^2_2}\right) \right)^T \ \tilde g\left(\frac{\gamma^2}{\|z\|^2_2}\right) \ \exp \left(-\frac{1 }{2\sigma^2}\left\| g\left(\frac{\gamma^2}{\|z\|^2_2}\right) \ {z}-c\right\|^2_2 \right) \ \;,\\ g(u)&\ensuremath{\stackrel{\mathrm{def}}{=}} 1+\frac{2 u}{1 + \sqrt{1+ 4 u }}\;, \qquad \tilde g(u) \ensuremath{\stackrel{\mathrm{def}}{=}} \frac{1}{\sqrt{1+4u}} \;, \end{align*} and $c \mapsto p(c)$ is given by Lemma~\ref{PMALA:lem:propnorm21}. \end{lemma} Lemma~\ref{STMALA:lem:prop} is proved in section~\ref{PMALA:sec:proofs}. Here again, the proposal distribution can be read as sampling first the indices of the active rows in the candidate matrix by sampling a binary vector $m' \in \mathcal{M}$ (with the same sampling mechanism as with the $L_{2,1}$ gradient-proximal operator (\ref{PMALA:eq:prox:21}) - see Lemma~\ref{PMALA:lem:propnorm21}); and then, conditionally to $(m',x)$, sampling independently the active rows of the candidate matrix under the distribution $f_{st}$. Lemma~\ref{STMALA:prox} shows that $\Psi_3$ compromises between minimizing a (non-convex) function $h$ and being near to $u$. 
\begin{lemma} \label{STMALA:prox} For any $\gamma>0$ and $u \in \mathbb{R}^\ell$, \[ \Psi_3(u) = \mathrm{argmin}_{x \in \mathbb{R}^\ell} \left( h(x) + \frac{1}{2} \|x -u\|^2 \right)\;, \] where the function $h : \mathbb{R}^\ell \to \mathbb{R}$ is given by \begin{align*} h(x) = \gamma^2 \left[ \textrm{asinh} \left( \frac{\|x\|}{2 \gamma} \right) - \frac{1}{2} \exp \left(-2 \textrm{asinh} \left( \frac{\|x\|}{2 \gamma} \right) \right) \right] \;. \end{align*} \end{lemma} \begin{proof} The proof of Lemma~\ref{STMALA:prox} is in Section~\ref{sec:proofs:pmala:etc}. \end{proof} \section{$V$-Geometric ergodicity of the $L_{2,1}$ proximal STMALA} \label{sec:ergodicity} In this section, we address the $V$-geometric ergodicity of the STMALA chain $(X^n)_{n\geq 0}$ where, at iteration $n$, the candidate $Z$ is given by \begin{equation} \label{eq:STMALA:truncated} Z = \Psi_1\left(X^n - \frac{\sigma^2}{2} \frac{D \nabla g(X^n)}{\max(D, \| \nabla g(X^n) \|_2 )} + \sigma \Xi^{n+1}\right) \;, \end{equation} where $(\Xi^n, n\geq 1)$ is a sequence of $P \times T$ random matrices with i.i.d. $\mathcal{N}(0,1)$ entries and $\Psi_1$ is given by (\ref{PMALA:eq:prox:21}). Hereafter, $\gamma$ (in the definition of $\Psi_1$) and $D$ are fixed positive constants. This update differs from the update proposed in~(\ref{eq:proposal:algo}) by truncating the gradient, as already suggested in~\cite{roberts:tweedie:1996b} and used in~\cite{atchade:2006}. This truncation makes the algorithm more stable in practice. In most of the examples presented in Section~\ref{sec:exp}, we observed that truncating the gradient has only a minor impact on the results, which are therefore presented with no truncation. For the real data set presented in Section~\ref{sec:exp:realdata}, the truncation prevents the algorithm from moving too far from the current state and therefore avoids excessively low acceptance rates.
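The truncated drift in~(\ref{eq:STMALA:truncated}) simply rescales the gradient so that its norm never exceeds $D$; a minimal illustrative sketch (ours) is:

```python
import numpy as np

def truncated_drift(grad, D):
    """Rescale the gradient so its (Frobenius) norm never exceeds D:
    returns D * grad / max(D, ||grad||_2).  Gradients with norm below D
    are left unchanged; larger ones are projected onto the ball of radius D."""
    return D * grad / max(D, np.linalg.norm(grad))
```

This leaves small gradients untouched while capping large ones, which is what stabilizes the chain far from the modes.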
To make the remainder of the paper less technical, the proof is given in the case $T=1$ and $\Psi = \Psi_1$. Extensions to $T>1$ and other shrinkage-thresholding operators are not addressed in this paper. The sets $\{S_m, m \in \mathcal{M}\}$, where $S_m$ is given by (\ref{eq:def:sm}), are a partition of $\mathbb{R}^P$ and we denote by $\pi_m$ the restriction of $\pi$ to $S_m$: \begin{align*} \pi(x) = \sum_{m \in \mathcal{M}} \pi_m(x) \mathbf{1}_{S_m}(x) \;. \end{align*} The convergence of this STMALA is addressed under the following assumptions on the target density $\pi$. \begin{hypA} \label{hyp:reg:pi} \begin{enumerate}[(i)] \item \label{hyp:reg:pi:cont} For any $m \in \mathcal{M}$, $\pi_m$ is continuous on $S_m$. \item \label{hyp:reg:pi:lim} $\pi(x) \to 0$ when $\|x\|_2 \to \infty$. \end{enumerate} \end{hypA} Assumption A\ref{hyp:superexp:pi} below ensures that $\pi$ is super-exponential, {\em i.e.} that the target density $\pi$ decreases fast enough when $\|x\|_2$ is large. \begin{hypA} \label{hyp:superexp:pi} For any $s>0$, $m \in \mathcal{M}$, \begin{align*} \lim_{r \to \infty} \sup \limits_{x \in S_m, \|x\| \geq r} \frac{\pi_m(x + s \, n(x))}{\pi_m(x)} = 0 \;, \qquad \text{where} \qquad n(x) \ensuremath{\stackrel{\mathrm{def}}{=}} \frac{x}{\|x\|} \;. \end{align*} \end{hypA} When for any $m \in \mathcal{M}$, the restriction $\pi_m$ of $\pi$ to the subset $S_m$ is differentiable, A\ref{hyp:superexp:pi} is satisfied if (see~\cite[Section 4]{jarner:hansen:2000} for details) \begin{align} \label{hyp:superexp:diff} \forall m \in \mathcal{M}, \quad \lim \limits_{x \in S_m, \|x\|_2 \to \infty} \pscal{\frac{x}{\|x\|_2}}{\nabla \log (\pi_m (x))} = - \infty \;. \end{align} Let $u,b,\epsilon>0$ such that $u \in (0, b)$. For any $m \in \mathcal{M}$ and $x \in S_m$, define \begin{align} \label{eq:definition:cone} W_m(x) \ensuremath{\stackrel{\mathrm{def}}{=}} \{x_m - u \ n(x_m) - s \xi: s \in (0,b-u), \|\xi\|_2=1, \|\xi-n(x_m)\|_2 \leq \epsilon \} \;. 
\end{align} $W_m(x)$ is the cone in $\mathbb{R}^{|m|}$ with apex $x_m - u \ n(x_m)$ and aperture $2 \epsilon$. We will prove (see Lemma~\ref{lemme:cone}) that A\ref{hyp:cone:pi} guarantees that, for $\|x\|_2$ large enough, the probability of accepting a move from $x$ to any point of $W_m(x)$ equals one. \begin{hypA} \label{hyp:cone:pi} There exist $b,R,\epsilon>0$ and $u \in (0,b)$ such that for any $m \in \mathcal{M}$, for any \\$x \in S_m \cap \{\|x\|_2 \geq R\}$, \begin{align*} \forall \ y \in S_m \ \text{such that} \ y_m \in W_{m}(x), \quad \pi_m(x - u \ n(x)) \leq \pi_m(y) \;. \end{align*} \end{hypA} Note that, when $\pi_m$ is differentiable for any $m \in \mathcal{M}$, A\ref{hyp:cone:pi} is implied by \begin{align} \label{hyp:cone:diff} \forall m \in \mathcal{M}, \quad \limsup \limits_{x \in S_m, \|x\|_2 \to \infty} \pscal{\frac{x}{\|x\|_2}}{\frac{\nabla \pi_m(x)}{\| \nabla \pi_m (x)\|_2}} <0 \;, \end{align} which is similar to the conditions used in~\cite[condition (32)]{jarner:hansen:2000} for instance (see~\cite[proof of Theorem 4.3]{jarner:hansen:2000} for details). An example of target density satisfying A\ref{hyp:definition:pi} to A\ref{hyp:cone:pi} and related to the density defined in Section~\ref{sec:BayesianSetup} is presented in Appendix~\ref{sec:ex:pi}. \bigskip Theorem~\ref{th:erggeo} establishes the $V$-geometric ergodicity of the STMALA algorithm with truncated gradient; denote by $P_{trunc}$ the transition kernel associated with the Hastings-Metropolis chain $(X^n)_n$ with proposal distribution given by (\ref{eq:STMALA:truncated}). The dependence upon the constants $D,\gamma,\sigma$ (which are fixed by the user prior to any run of the algorithm) is omitted to simplify the notation. \begin{theorem} \label{th:erggeo} Assume A\ref{hyp:definition:pi} to A\ref{hyp:cone:pi} hold. Then, for any $\beta \in (0,1)$, there exist $C >0$ and $\rho \in (0,1)$ such that for any $ n \geq 0$ and any $x \in \mathbb{R}^P$, \begin{align} \|P_{trunc}^n(x,.)
- \pi \|_V \leq C \, V(x) \, \rho^n \;, \end{align} where $V(x) \propto \pi(x)^{-\beta}$ and for any signed measure $\eta$, $\|\eta\|_V = \sup \limits_{f, |f| \leq V}|\int f \mathrm{d} \eta|$. \end{theorem} By definition of the acceptance probability in Algorithm~\ref{PMALA:alg:pmala}, $\pi$ is invariant with respect to $P_{trunc}$. The rate of convergence is then a consequence of Proposition~\ref{prop:small} and Proposition~\ref{prop:drift} given in Section~\ref{sec:proofs:erg}: Proposition~\ref{prop:small} establishes that the chain is $\psi$-irreducible and aperiodic and shows that any Borel set $C \subset \mathbb{R}^P$ such that $C \cap S_m$ is a compact subset of $S_m$ is a small set for $P_{trunc}$; Proposition~\ref{prop:drift} shows that there exists a small set $C \subset \mathbb{R}^P$ and constants $c_1 \in (0,1)$ and $c_2<\infty$ such that for any $x \in \mathbb{R}^P$, \begin{align*} P_{trunc} V(x) \leq c_1 V(x) + c_2 \mathbf{1}_{C}(x) \;. \end{align*} The proof is then concluded by \cite[Theorem 15.0.2]{meyn:tweedie:1993}. \section{Numerical illustrations} \label{sec:exp} In this section, STMALA\footnote{MATLAB codes for STMALA are available at the address http://perso.telecom-paristech.fr/$\sim$schreck/recherche.html} is compared to the reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We detail in Appendix~\ref{sec:rjmcmc} how RJMCMC is implemented. We also discuss in Appendix~\ref{sec:calcul:p} how to approximate the probability $p(c)$ given by Lemma~\ref{PMALA:lem:propnorm21}, a quantity which appears in the implementation of STMALA for every shrinkage-thresholding operator $\Psi \in \{\Psi_1, \Psi_2, \Psi_3 \}$. \subsection{STMALA on a toy example} \subsubsection{Presentation of the data set}\label{toyex:dataset} The data $Y$ are sampled from the model~\eqref{eq:model}. The design matrix $G$ is obtained by sampling i.i.d. $\mathcal{N}(0,1)$ entries.
We consider the target distribution described in Section~\ref{sec:BayesianSetup}, with the prior distribution on $\mathcal{M}$ defined by $\omega_m = 0.1^{|m|} 0.9^{P-|m|}$ and $\lambda =1$. Moreover, $N=100$, $P=16$, $T=1$, $\tau=1$ and $X_j= \mathbf{1}_{j \leq S}$ with $S=8$. Since $T=1$, note that $\|.\|_{2,1}=\|.\|_{1}$. In this section, $P$ is chosen small enough ($P=16$), so that the posterior distribution of the models $\pi(m \vert Y)$ can be explicitly computed (see below). This allows us to compare the algorithms using the error when estimating the activation probabilities, defined by \begin{align} \label{eq:error:activationproba} \mathcal{E} \ensuremath{\stackrel{\mathrm{def}}{=}} \sum_{i=1}^P \left| \mathbb{P}(X_i \neq 0) - \frac{1}{N_{it}} \sum_{n=B+1}^{B+N_{it}} \mathbf{1}_{X^n_i \neq 0} \right| \;, \end{align} where $N_{it}$ is the number of iterations used to compute the approximations, $B$ denotes the number of iterations discarded as a burn-in period, and $\mathbb{P}(X_i \neq 0)$ is the posterior probability of activation of the $i$th component of $X$, defined for any $1 \leq i \leq P$, by \begin{align} \label{toyex:pact} \mathbb{P}(X_i \neq 0) = \sum_{m \in \mathcal{M}} \pi(m \vert Y) \ m_i \;. \end{align} \noindent Let us derive the expression of $\pi(m \vert Y)$. By (\ref{eq:def:pi:rp}), for any $m \in\mathcal{M}$, \begin{align*} \pi(m \vert Y) \propto \omega_m \ c_\lambda^{-|m|} \int_{\mathbb{R}^{|m|}} \exp \left( - \frac{1}{2 \tau} \|Y - G_{\cdot m} x \|_2^2 - \lambda \|x\|_{1} \right) \mathrm{d} x \;.
\end{align*} Then, $\pi(m \vert Y) = 0$ when the matrix $G_{\cdot m}' G_{\cdot m}$ is not invertible; otherwise, \begin{align*} \pi(m \vert Y) & \propto w_m \ c_\lambda^{-|m|} \exp \left(\frac{1}{2 \tau} Y' A_m Y \right) (2 \pi \tau)^{|m|/2} \left( \det \left( G_{\cdot m}' G_{\cdot m}\right) \right)^{-1/2} \cdots \\ & \hspace{8cm} \times \int_{\mathbb{R}^{|m|}} \phi_m(x) \exp \left(- \lambda \|x\|_{1} \right) \mathrm{d} x \;, \end{align*} where $\overline{X}(m) = (G_{\cdot m}' G_{\cdot m})^{-1} G_{\cdot m}' Y$, $A_m \ensuremath{\stackrel{\mathrm{def}}{=}} G_{\cdot m} (G_{\cdot m}' G_{\cdot m})^{-1} (G_{\cdot m})'$ and $\phi_m$ denotes the probability density function of a Gaussian vector with mean $\overline{X}(m)$ and covariance matrix $\tau \ (G_{\cdot m}' G_{\cdot m})^{-1}$. The last integral is not explicit when $\lambda \neq 0$, but it can be estimated by a standard Monte Carlo method. \subsubsection{Discussion on the implementation parameters} \paragraph{Comparison of the shrinkage-thresholding operators} Figure~\ref{fig:toyex:comp:fct_seuil:err} provides a comparison of the three shrinkage-thresholding operators proposed in Section~\ref{sec:PMALA:variants} for the model described in Section~\ref{toyex:dataset}. Figure~\ref{fig:toyex:comp:fct_seuil:err} (left) displays the evolution of the mean error given by block-STMALA, with the $L_{2,1}$ proximal operator (Prox), the hard thresholding operator (HT) and the soft thresholding with vanishing shrinkage operator (STVS), over $100$ independent trajectories as a function of the number of iterations. Let $L_g$ be the Lipschitz constant of the gradient of $g$, $L_g \ensuremath{\stackrel{\mathrm{def}}{=}} \|G G^t \|/\tau$. All the algorithms are run with $\sigma = \sqrt{2/L_g}$. The number of components to be updated at each iteration is $\eta = 4$ (the role of the block size is discussed below) and the threshold is set to $\gamma = 0.1$ (see details on the choice of the threshold below).
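To make this parameter choice concrete, the step size $\sigma = \sqrt{2/L_g}$ can be sketched as follows. This is an illustrative sketch only (the experiments use the authors' MATLAB code), and it assumes that $g$ is the smooth data-fit term $g(x) = \|Y - Gx\|_2^2/(2\tau)$ of the negative log-target.

```python
import numpy as np

# Illustrative sketch (not the authors' MATLAB code): step size
# sigma = sqrt(2 / L_g) with L_g = ||G G^t|| / tau, assuming the smooth part
# of the negative log-target is g(x) = ||Y - G x||_2^2 / (2 tau).
rng = np.random.default_rng(0)
N, P, tau, S = 100, 16, 1.0, 8
G = rng.standard_normal((N, P))                    # i.i.d. N(0,1) design
X_true = (np.arange(1, P + 1) <= S).astype(float)  # X_j = 1_{j <= S}
Y = G @ X_true + np.sqrt(tau) * rng.standard_normal(N)

L_g = np.linalg.norm(G @ G.T, 2) / tau             # spectral norm of G G^t
sigma = np.sqrt(2.0 / L_g)

def grad_g(x):
    """Gradient of the assumed smooth part g."""
    return G.T @ (G @ x - Y) / tau
```

With this choice, `grad_g` is $L_g$-Lipschitz, which is what motivates the step size.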
All the algorithms start from the null regressor. \begin{figure}[ht!] \begin{center} \includegraphics[width=.45\linewidth]{toyex_comp_fctseuil_err_bis} \includegraphics[width=.45\linewidth]{toyex_comp_fctseuil_acc_bis} \end{center} \caption{(left) Evolution of the mean estimation error (\ref{eq:error:activationproba}) of the activation probabilities for block-STMALA as a function of the number of iterations, when the shrinkage-thresholding operator is $\Psi=\Psi_1$ (Prox), $\Psi=\Psi_2$ (HT) or $\Psi=\Psi_3$ (STVS). (right) Evolution of the mean acceptance rate.} \label{fig:toyex:comp:fct_seuil:err} \end{figure} As HT (solid blue line) does not shrink the nonzero regression parameters (see Figure~\ref{fig:toyex:fct_seuil:d1}), it cannot produce any values lower than $\gamma$. Therefore, STMALA with hard thresholding is not robust to the choice of the threshold and did not reach convergence in the situation presented in Figure~\ref{fig:toyex:comp:fct_seuil:err} (left). On the other hand, the estimation errors of block-STMALA with STVS (dash-dot red line) and with Prox (dashed green line) decrease at similar rates and with similar variability in this case. Figure~\ref{fig:toyex:comp:fct_seuil:err} (right) displays the mean acceptance rate as a function of the number of samples $N_{it}$ for the three algorithms. At each iteration, if the proposed point is sampled in a region of high probability under the target distribution, the shrinking step of block-STMALA with proximal shrinkage may drive this sample toward regions of lower probability, and therefore decrease the mean acceptance-rejection ratio. Figure~\ref{fig:toyex:comp:fct_seuil:err} (right) shows that block-STMALA with no shrinkage (HT) and block-STMALA with a vanishing shrinkage (STVS) accept twice as many proposed samples as block-STMALA with Prox. For more complex models, the mean acceptance-rejection ratio of block-STMALA with Prox can even decrease to zero, so that the algorithm quickly becomes trapped at one point.
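For reference, the three operators compared above can be sketched as follows for $T=1$ (so each row reduces to a scalar). This is a hedged illustration consistent with the definitions used later in the proofs: Prox is the row-wise proximal operator of $\gamma\|\cdot\|_{2,1}$, HT leaves surviving entries unshrunk, and STVS applies the shrinkage $u \mapsto u(1-\gamma^2/u^2)$, which vanishes as $|u|$ grows.

```python
import numpy as np

# Sketches of the three shrinkage-thresholding operators for T = 1.
def prox(y, gamma):
    # Psi_1: soft thresholding, proximal operator of gamma * |.|
    a = np.abs(y)
    return np.where(a <= gamma, 0.0, np.sign(y) * (a - gamma))

def hard_threshold(y, gamma):
    # Psi_2: surviving entries are kept unshrunk, so no output can have
    # magnitude below gamma (hence the sensitivity to gamma noted above)
    return np.where(np.abs(y) <= gamma, 0.0, y)

def stvs(y, gamma):
    # Psi_3: the shrinkage gamma^2 / |y| vanishes for large |y|
    a = np.maximum(np.abs(y), gamma)  # safe denominator
    return np.where(np.abs(y) <= gamma, 0.0, y * (1.0 - gamma**2 / a**2))
```

All three set entries below the threshold exactly to zero; they differ only in how they treat the surviving entries.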
As block-STMALA with STVS combines the best behavior both in terms of acceptance rate and of mean error, this variant is used for the rest of the numerical experiments. Therefore, from now on, the shrinkage-thresholding operator is chosen as $\Psi = \Psi_3$. \paragraph{The block size $\eta$} When all the components of a point are updated at each iteration ({\em i.e.} if $\eta=P$), the distance between the current point and the proposed one is such that it leads to a low acceptance-rejection ratio. Figure~\ref{fig:toyex:comp:bloc:acc} (left) displays the mean acceptance-rejection ratio over $100$ independent trajectories after $N_{it}=10^5$ iterations of block-STMALA with no burn-in period ($B=0$) as a function of $\eta$ (expressed as a percentage of the total number of regressors) in the model described in Section~\ref{toyex:dataset}. \begin{figure}[ht!] \begin{center} \includegraphics[width=.45\linewidth]{toyex_comp_bloc_acc} \includegraphics[width=.45\linewidth]{toyex_comp_seuil_err} \end{center} \caption{(left) Mean acceptance-rejection ratio as a function of the block size (expressed as a percentage of $P$) and (right) mean error as a function of $\ln(\gamma)$.} \label{fig:toyex:comp:bloc:acc} \end{figure} Note that if the acceptance-rejection ratio is too low, the convergence is very slow as few proposed samples are accepted. However, a slow convergence can also be the consequence of small block sizes, since two consecutive samples have at least $P-\eta$ equal coefficients. We observed that, in general, choosing rather small block sizes yields good results. \paragraph{The threshold $\gamma$} The choice of the threshold is crucial. If $\gamma$ is too large, few nonzero samples are proposed and the algorithm will converge slowly. If $\gamma$ is too small, the algorithm proposes non-sparse solutions that are not likely to be accepted. 
This is illustrated in Figure~\ref{fig:toyex:comp:bloc:acc} (right), which displays the mean error made by block-STMALA when estimating the activation probabilities, computed over $100$ independent trajectories of $N_{it}=10^5$ iterations with no burn-in ($B=0$), as a function of $\ln(\gamma)$. Here $\eta = 4$ and $\sigma = \sqrt{2/L_g}$, and the computations are made for the model described in Section~\ref{toyex:dataset}. \paragraph{The standard deviation $\sigma$} Figure~\ref{fig:toyex:comp:sigma:acc} displays the mean acceptance-rejection ratio and the mean error (\ref{eq:error:activationproba}), over $100$ independent trajectories, of block-STMALA as a function of $\sigma$ (the scale is $\sigma \sqrt{L_g/2}$ on the $x$-axis) after $10^5$ iterations. The block size is set to $\eta = 4$ and the threshold $\gamma$ is chosen so that the mean number of thresholded coefficients in one iteration of STMALA starting from the empty model is about 55\%. If $\sigma$ is too large, the distance between the current point and the proposed point is high, which leads to a low acceptance-rejection ratio and a slow convergence. If $\sigma$ is too small, $q_{\Psi}(z,\cdot)$, where $z$ is the proposed point, is a spike function centered at $\Psi_3 \left(z - \frac{\sigma^2}{2} \nabla g (z) \right)$, with $\gamma = \sigma^2 \lambda/2$, which can be quite far from the current point $X^n$, thus producing too small values of $q_{\Psi}(z, X^n)/q_{\Psi}(X^n,z)$ and therefore leading to a low acceptance-rejection ratio and a slow convergence. \begin{figure}[ht!] \begin{center} \includegraphics[width=.45\linewidth]{toyex_comp_sigma_acc} \includegraphics[width=.45\linewidth]{toyex_comp_sigma_err} \end{center} \caption{(left) Mean acceptance rate. (right) Mean error as a function of $\sigma \sqrt{L_g/2}$.} \label{fig:toyex:comp:sigma:acc} \end{figure} \subsubsection{Further illustrations} This section provides additional experiments using block-STMALA for the model described in Section~\ref{toyex:dataset}.
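For concreteness, a single STMALA-type proposal for this model can be sketched as a MALA drift step followed by the shrinkage-thresholding operator. This is a rough sketch only: the Metropolis--Hastings accept/reject step using the density $q_\Psi$ is omitted, and $g(x) = \|Y - Gx\|^2/(2\tau)$ is an assumption on the smooth term.

```python
import numpy as np

# One STMALA-type proposal (sketch; the accept/reject step with q_Psi is
# omitted, and g(x) = ||Y - G x||^2 / (2 tau) is an assumption).
rng = np.random.default_rng(1)
N, P, tau, gamma = 100, 16, 1.0, 0.1
G = rng.standard_normal((N, P))
Y = G @ (np.arange(1, P + 1) <= 8).astype(float) + rng.standard_normal(N)
sigma = np.sqrt(2.0 * tau / np.linalg.norm(G @ G.T, 2))  # sqrt(2 / L_g)

def stvs(y, gamma):
    a = np.maximum(np.abs(y), gamma)
    return np.where(np.abs(y) <= gamma, 0.0, y * (1.0 - gamma**2 / a**2))

def propose(x):
    drift = x - 0.5 * sigma**2 * G.T @ (G @ x - Y) / tau  # MALA mean
    # entries whose drifted, perturbed value falls below gamma become exactly 0
    return stvs(drift + sigma * rng.standard_normal(P), gamma)

z = propose(np.zeros(P))
```

The thresholding step is what lets the chain place mass on exactly sparse points, which a plain MALA proposal never does.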
We set $\sigma = \sqrt{2/L_g}$, $\eta=4$ and $\gamma=0.28$. The algorithm is initialized to the null regression vector. Figure~\ref{fig:toyex:comp:rj:err} compares the mean error over 100 independent trajectories made by block-STMALA with the STVS operator (solid blue) and RJMCMC (dash-dot red) when estimating the activation probabilities as a function of the number of iterations (left) and displays the associated boxplots (right). For block-STMALA, we set $\gamma=0.07$, $\eta=4$, $\sigma = \sqrt{2/L_g}$ and $B=0$. RJMCMC is implemented as described in Section~\ref{sec:rjmcmc} with $\sigma_{\textrm{RJ}}=0.02$. The parameters of block-STMALA and RJMCMC are chosen so that the two algorithms have similar acceptance rates (about 23 \%). Note that both algorithms are compared for the same number of evaluations of the target density $\pi$, {\em i.e.} for the same number of iterations, as the computation time depends on the code efficiency. In Figure~\ref{fig:toyex:comp:rj:err}, block-STMALA clearly outperforms RJMCMC. For example, after $300{,}000$ iterations, the error made by RJMCMC is twice as large as the error made by block-STMALA. This is due to the fact that RJMCMC only modifies one component at each iteration. Therefore, block-STMALA, which modifies 4 components at each iteration, moves faster. \begin{figure}[ht!] \begin{center} \includegraphics[width=.49\linewidth]{toyex_comp_rj_err} \includegraphics[width=.5\linewidth]{toyex_comp_rj_errbp} \end{center} \caption{Evolution of the mean error for block-STMALA and RJMCMC as a function of the number of iterations (left) and the associated boxplots (right).} \label{fig:toyex:comp:rj:err} \end{figure} Figure~\ref{fig:toyex:comp:rj:acf} (left) shows the empirical autocorrelation function of $X_1$ and $X_8$ for the two algorithms, with $\eta = 4$, $\gamma=0.07$ and $\sigma = \sqrt{2/L_g}$ for block-STMALA and the standard deviation of the random walk in RJMCMC set to $\sigma_{\textrm{RJ}}=0.02$.
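The empirical autocorrelation function used in this comparison can be computed with the standard estimator below (a sketch, not tied to the authors' code).

```python
import numpy as np

# Empirical autocorrelation of one chain component, after discarding a
# burn-in period (standard estimator; illustration only).
def empirical_acf(chain, max_lag, burn_in=0):
    x = np.asarray(chain, dtype=float)[burn_in:]
    x = x - x.mean()
    denom = np.dot(x, x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / denom
                     for k in range(max_lag + 1)])
```

By construction the lag-0 value is 1, and a slowly decaying curve signals strong dependence between successive samples, which is the behavior compared for the two algorithms.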
The autocorrelation is computed along a single trajectory of $300{,}000$ iterations (with $30\%$ of these iterations discarded as a burn-in period). The mean regression vectors obtained by block-STMALA and RJMCMC are displayed in Figure~\ref{fig:toyex:comp:rj:acf} (right). \begin{figure}[ht!] \begin{center} \includegraphics[width=.45\linewidth]{toyex_comp_rj_acf_x} \includegraphics[width=.45\linewidth]{toyex_comp_rj_est} \end{center} \caption{(left) Empirical autocorrelation function of $X_1$ and $X_8$ of block-STMALA and RJMCMC. (right) Regression vectors estimated by block-STMALA and RJMCMC.} \label{fig:toyex:comp:rj:acf} \end{figure} Finally, Figure~\ref{fig:toyex:comp:rj:estbp} displays the evolution of the mean estimators of $\int x_i \ \pi(x \vert Y) \mathrm{d} \nu(x)$ for $i=1$ and $i=8$ computed by block-STMALA and RJMCMC as a function of the number of iterations, together with the associated boxplots. Once again the computations are made for the model described in Section~\ref{toyex:dataset} over $100$ independent trajectories of $300{,}000$ iterations. In Figure~\ref{fig:toyex:comp:rj:estbp} (left), block-STMALA converges faster than RJMCMC. This is confirmed by the boxplots shown in Figure~\ref{fig:toyex:comp:rj:estbp} (right). \begin{figure}[ht!] \begin{center} \includegraphics[width=.49\linewidth]{toyex_comp_rj_estev} \includegraphics[width=.5\linewidth]{toyex_comp_rj_estbp} \end{center} \caption{(left) Evolution of the mean estimators (over $100$ independent runs) of $\int x_i \ \pi(x \vert Y) \mathrm{d} \nu(x)$ for $i=1$ and $i=8$ computed by block-STMALA and RJMCMC as a function of the number of iterations.
(right) Associated boxplots.} \label{fig:toyex:comp:rj:estbp} \end{figure} \subsection{A sparse spike and slab model} \subsubsection{The model} \label{sec:breiman} The model for the observations $Y \in \mathbb{R}^N$ is assumed to be \[ Y = G X + \vartheta^{-1/2} E \;, \] where $G$ is a $N \times P$ (known) design matrix, $E$ is a Gaussian random vector with i.i.d. standard entries and $\vartheta$ is the (known) precision. We want to find the subset of nonzero components of the covariate vector $X \in \mathbb{R}^P$. We consider the following sparse spike and slab hierarchical model. \begin{enumerate}[-] \item Given $m = (m_{\ell})_{1\leq \ell\leq P} \in \mathcal{M}$ and positive precisions $(\vartheta_\ell)_{1\leq \ell\leq P}$, the entries of the covariate vector $X = (X_\ell)_{1\leq \ell\leq P}$ are independent with distribution \[ (X_k \vert m, \vartheta_1, \cdots, \vartheta_P) \sim \left\{ \begin{array}{ll} \delta_0(X_k) & \ \text{if $m_k = 0,$} \\ \mathcal{N}(0, 1/\vartheta_k) & \ \text{if $m_k =1$.} \end{array} \right. \] \item The precision parameters $(\vartheta_\ell)_{1\leq \ell\leq P}$ are i.i.d. with Gamma distribution $\mathrm{Ga}\left( a, aK\right)$, where $a,K$ are fixed. \item The components of $m \in \mathcal{M}$ are i.i.d. Bernoulli random variables with parameter $\omega_\star$. \end{enumerate} Under this model the posterior density $\pi(X, m, \vartheta_1, \cdots, \vartheta_P\vert Y)$ can be derived analytically: \begin{multline*} \pi(X, m, \vartheta_1, \cdots, \vartheta_P\vert Y) \propto \exp\left( - \frac{\vartheta}{2 } \|Y - G X \|^2\right) \, \omega^{|m|}_\star (1-\omega_\star)^{P-|m|} \\ \times \prod_{\ell=1}^P \vartheta_\ell^{a-1} \exp(-a K \vartheta_\ell) \mathbf{1}_{\mathbb{R}^+}(\vartheta_\ell) \left\{ \exp\left( - \frac{\vartheta_\ell}{2} X_\ell^2\right) \sqrt{\vartheta_\ell} \mathbf{1}_{m_\ell =1} + \delta_0(X_\ell) \ \mathbf{1}_{m_\ell =0} \right\}\;.
\end{multline*} By integrating out the precisions, we obtain the posterior density of $(X, m)$: \begin{align*} \pi\left( X, m \vert Y \right) & \propto \exp\left( - \frac{\vartheta}{2 } \|Y - G X \|^2\right) \, \omega_\star^{|m|} (1-\omega_\star)^{P-|m|} \\ &\hspace{1cm}\times \prod_{\ell=1}^P \left\{ \left( 1 + \frac{X_\ell^2}{2aK} \right)^{ -( a+1/2)} \mathbf{1}_{m_\ell =1} + \delta_0(X_\ell) \ \mathbf{1}_{m_\ell = 0} \right\}\;. \end{align*} \subsubsection{Numerical illustrations} The performance of block-STMALA with the STVS operator is illustrated with the model introduced in \cite{breiman:1992} and presented in \cite[Section~$8$]{ishwaran:rao:2005}. It is assumed that $ N =100$, $P=200$, $\omega_{\star}=0.1$ and that $\vartheta$ is known and fixed to $\vartheta = 1$. The covariates $(G_{\cdot i})_{1\leq i\leq P}$ are sampled from a Gaussian distribution with $\mathbb{E}[G_{\cdot i}] = 0$ and $\mathbb{E}[G_{ji}G_{ki}] = \rho^{|j-k|}$. In the example below, $\rho = 0.3$. The nonzero coefficients of $X$ are in $4$ clusters of $5$ adjacent variables such that, for all $k\in\{1,2,3,4\}$ and all $j\in\{1,2,3,4,5\}$, $ X_{50(k-1)+j} = (-1)^{k+1}\,j^{1/k}$. The values $a=2$ and $K = 0.08$ are chosen so that the Gamma distribution with parameters $a$ and $a K$ has a mode at $\vartheta_{\star}$ such that $\vartheta_{\star}^{-1/2} = \max(|X|)/2$. The other parameters are given by $\sigma= \sqrt{2/L_g}$, $\eta = 20$ and $\gamma=0.35$. The standard deviation of the RJMCMC proposal is $\sigma_{\textrm{RJ}}=0.01$, chosen so that block-STMALA and RJMCMC have similar acceptance rates (between 15\% and 20\%). The computations are made over $50$ independent trajectories of $10^6$ iterations. As the dimension is high and RJMCMC modifies only one component, uniformly chosen, at each iteration, its autocorrelation function is expected to decay more slowly than the autocorrelation function of block-STMALA.
This is illustrated in Figure~\ref{fig:bre:acc:acf} (left), which displays the two mean autocorrelation functions estimated over the $50$ independent trajectories of length $10^6$ iterations ($10\%$ of these iterations are discarded as a burn-in period). \begin{figure}[ht!] \begin{center} \includegraphics[width=.45\linewidth]{bre_acfx} \includegraphics[width=.45\linewidth]{bre_est} \end{center} \caption{(left) Mean autocorrelation function of $X_1$ for block-STMALA and RJMCMC. (right) Regression vectors estimated by block-STMALA and RJMCMC.} \label{fig:bre:acc:acf} \end{figure} Figure~\ref{fig:bre:acc:acf} (right) shows the true regression vector $X$ and its estimates by block-STMALA and RJMCMC: block-STMALA provides a sparse estimation, while RJMCMC needs many components to explain the observations. This is probably because RJMCMC is more or less equivalent to testing each model in turn, which yields slow convergence in high dimensional settings. This slow convergence is also illustrated in Figure~\ref{fig:bre:nact:bpx}. Figure~\ref{fig:bre:nact:bpx} (left) shows the evolution of the mean number of active components $|m|$. RJMCMC has not converged after $300{,}000$ iterations, while the mean number of active components of block-STMALA is stable after a few iterations. Figure~\ref{fig:bre:nact:bpx} (right) displays the boxplots of the estimation of the first component $X_1$ by block-STMALA and RJMCMC as a function of the number of iterations. \begin{figure}[ht!] \begin{center} \includegraphics[width=.45\linewidth]{bre_nact_v2} \includegraphics[width=.49\linewidth]{bre_bpx} \end{center} \caption{(left) Evolution of the mean number of active components for STMALA and RJMCMC.
(right) Evolution of the estimation of $X_1$ (mean over iterations) for block-STMALA and RJMCMC.} \label{fig:bre:nact:bpx} \end{figure} Figure~\ref{fig:bre:rvsp:et} (left) shows the signal $G \hat{X}$ estimated by block-STMALA and RJMCMC as a function of the actual emitted signal $GX$ (blue circles), where $\hat{X}$ is the mean regression vector over a trajectory. To highlight overfitting effects, a test sample $Y_{\rm test} = G_{\rm test} X + \sqrt{1/\vartheta} E_{\rm test}$, where $G_{\rm test} \in \mathbb{R}^{100 \times 200}$ and $E_{\rm test} \in \mathbb{R}^{100}$ are generated exactly as $G$ and $E$, is also used. Green circles display $G_{\rm test} \hat{X}$ as a function of $G_{\rm test} X$. This test data set is also used to compute a test error, given by \[ \mathcal{E}_{\rm test} \ensuremath{\stackrel{\mathrm{def}}{=}} \frac{\|G_{\rm test} \hat{X}-G_{\rm test} X\|_2^2}{100}\;. \] The evolution of the mean test error $\mathcal{E}_{\rm test}$ over 100 independent runs is displayed in Figure~\ref{fig:bre:rvsp:et} (right). Both figures show that RJMCMC is subject to some overfitting, which is not the case for block-STMALA. \begin{figure}[ht!] \begin{center} \includegraphics[width=.45\linewidth]{bre_rvsp} \includegraphics[width=.45\linewidth]{bre_testerr} \end{center} \caption{(left) Emitted signal $G \hat{X}$ estimated by block-STMALA and RJMCMC versus actual emitted signal $GX$. (right) Evolution of the mean test error for RJMCMC and block-STMALA.} \label{fig:bre:rvsp:et} \end{figure} \subsection{Regression for spectroscopy data} \label{sec:exp:realdata} We use the biscuits data set composed of near infrared absorbance spectra of 70 cookies with different water, fat, flour and sugar contents studied in \cite{brown:fearn:vannucci:2001} and \cite{caron:doucet:2008}. The data are divided into a training data set containing measurements for $N=39$ cookies, and a test data set containing measurements for $31$ cookies.
Each row of the design matrix consists of absorbance measurements for $P=300$ different wavelengths from $1202$ nm to $2400$ nm with gaps of $4$ nm. We compare the results obtained by block-STMALA with those obtained by RJMCMC for the prediction of fat content (\textit{i.e.} $T=1$). To improve the stability of the algorithm, the columns of the matrix $G$ containing the measurements are centered and a column with each entry equal to one is added. The model used here is the one presented in Section~\ref{sec:BayesianSetup}, with the noise parameter set to $\tau = 0.5$. The parameters of the algorithms are given by $\sigma = 2\sqrt{2/L_g}$, $\eta=15$, $\gamma = 0.35$ for block-STMALA, and by $\sigma_{\textrm{RJ}} = 0.9$ for RJMCMC. The gradient in block-STMALA is truncated as suggested in~\cite{roberts:tweedie:1996b}, so that the norm of the truncated gradient does not exceed $0.7$. The computations are made over $100$ independent trajectories of $N_{it}=2 \times 10^6$ iterations, with $B=10^5$. We choose the parameters so that the two algorithms have similar acceptance-rejection ratios (the final ratios are about $45 \%$ for block-STMALA and $42 \%$ for RJMCMC). Figure~\ref{fig:bis:est} shows the regression vectors $\hat{X}$ estimated by block-STMALA and RJMCMC after one trajectory (left) and the mean regression vectors estimated by block-STMALA and RJMCMC over the $100$ independent trajectories (right). The regression vector estimated by block-STMALA with the STVS operator has a spike around $1726$ nm, which is known to be in a fat absorbance region (see~\cite{brown:fearn:vannucci:2001,caron:doucet:2008}), in almost all the trajectories. The regression vector estimated by RJMCMC also has a spike close to this region, but its location is very unstable over the different trajectories.
This instability explains the differences between block-STMALA and RJMCMC, even if the mean regression vectors estimated by the two algorithms over the $100$ independent trajectories are quite similar. \begin{figure}[ht!] \begin{center} \includegraphics[width=.45\linewidth]{biscuit_est_1} \includegraphics[width=.45\linewidth]{biscuit_est_moy} \end{center} \caption{(left) Regression vectors estimated by block-STMALA and RJMCMC after one trajectory. (right) Mean regression vectors estimated by block-STMALA and RJMCMC over $100$ independent trajectories.} \label{fig:bis:est} \end{figure} Figure~\ref{fig:bis:bpest} displays the boxplots of the $100$ independent values of the components of the regression vectors estimated by block-STMALA and RJMCMC associated with $9$ wavelengths close to $1726$ nm. It illustrates that the location of the spike retrieved by RJMCMC, when it is retrieved, is not stable, while block-STMALA retrieves a spike centered at $1726$ nm in almost every trajectory, even if the height of this spike fluctuates. \begin{figure}[ht!] \begin{center} \includegraphics[height=7cm,width=.7\linewidth]{biscuit_bp_est} \end{center} \caption{Boxplots of the $100$ independent values of the components of the regression vectors estimated by block-STMALA and RJMCMC associated with $9$ wavelengths close to $1726$ nm.} \label{fig:bis:bpest} \end{figure} Figure~\ref{fig:bis:test} (left) shows the emitted signal $G \hat{X}$ estimated over one trajectory by block-STMALA and RJMCMC as a function of the observations $Y$. In this numerical experiment, block-STMALA provides better results than RJMCMC for both the training set and the test set.
This is confirmed by Figure~\ref{fig:bis:test} (right), which displays the evolution of the mean square error (MSE) on the test dataset, defined by \begin{align*} \textrm{MSE} = \frac{\|G_{\textrm{test}} \hat{X} - Y_{\textrm{test}} \|_2^2}{31} \;, \end{align*} as a function of the number of iterations (mean over 100 independent trajectories). The mean MSE after $2 \times 10^6$ iterations is about $0.75$ for block-STMALA and about $1.6$ times greater for RJMCMC. \begin{figure}[ht!] \begin{center} \includegraphics[width=.45\linewidth]{biscuit_yy_1} \includegraphics[width=.45\linewidth]{biscuit_ter_moy} \end{center} \caption{(left) Emitted signal $G \hat{X}$ estimated by block-STMALA and RJMCMC versus the observations $Y$. (right) Evolution of the mean MSE (over $100$ independent trajectories) on the test data set for RJMCMC and block-STMALA.} \label{fig:bis:test} \end{figure} \section{Proofs} \label{PMALA:sec:proofs} \subsection{Proof of Eq.~\eqref{eq:definition:clambda}} \begin{lemma} \label{PMALA:lem:calculnormalisation} The function defined on $\mathbb{R}^{|m| \times T}$ by \[ x\mapsto \left(\frac{\lambda^T\Gamma\left(\frac{T}{2}\right)}{2\pi^{T/2}(T-1)!}\right)^{|m|} \mathrm{exp}(-\lambda \|x\|_{2,1}) \;, \] is a probability density function with respect to the Lebesgue measure. Here, $\Gamma$ denotes the standard Gamma function defined on $(0,+\infty)$ by $\Gamma:x\mapsto\int_0^{+\infty}t^{x-1}\mathrm{e}^{-t}\mathrm{d}t$. \end{lemma} \begin{proof} By definition of $\|\cdot\|_{2,1}$, \begin{equation} \label{PMALA:eq:prodintegrales} \int_{\mathbb{R}^{|m|\times T}}\mathrm{exp}(-\lambda \|x\|_{2,1})\mathrm{d}x=\left(\int_{\mathbb{R}^{T}}\mathrm{exp}(-\lambda \|x\|_{2})\mathrm{d} x\right)^{|m|}\;. \end{equation} Let $S_T \ensuremath{\stackrel{\mathrm{def}}{=}} \left\{x\in\mathbb{R}^T;\; \|x\|_2=1\right\}$ and let $\sigma_{T-1}$ be the $(T-1)$-dimensional Hausdorff measure on $S_T$.
On the one hand, we have \begin{align*} \int_{\mathbb{R}^T}\mathrm{e}^{-\lambda\|x\|_2}\mathrm{d}x = \sigma_{T-1}(S_T)\int_{\mathbb{R}_+^{\star}} r^{T-1}\mathrm{e}^{-\lambda r}\mathrm{d}r =\sigma_{T-1}(S_T)\frac{(T-1)!}{\lambda^T}\,. \end{align*} On the other hand, \[ \pi^{T/2} = \int_{\mathbb{R}^T}\mathrm{e}^{-\|x\|_2^2}\mathrm{d}x =\sigma_{T-1}(S_T)\int_{\mathbb{R}_+^{\star}}\mathrm{e}^{-r^2}r^{T-1}\mathrm{d}r = \frac{\sigma_{T-1}(S_T)}{2}\Gamma\left(\frac{T}{2}\right)\,, \] which implies that $\sigma_{T-1}(S_T) = 2 \pi^{T/2} / \Gamma(T/2)$. This concludes the proof. \end{proof} \subsection{Proofs of Section~\ref{sec:PMALA:variants}} \label{sec:proofs:pmala:etc} \subsubsection{Proof of Lemma~\ref{PMALA:lem:propnorm21}} For any $1 \leq i \leq P$, $1 \leq j \leq T$, and any $y \in \mathbb{R}^{P \times T}$, define the proximal operator \[ \left( \textrm{prox}_{\gamma \|\cdot\|_{2,1}}(y) \right)_{ij} = \left\{ \begin{array}{ll} 0 & \text{if $\|y_{i\cdot}\|_2 \leq \gamma$}\;, \\ y_{ij} \left(1-\gamma/\|y_{i\cdot}\|_2 \right) & \text{otherwise}\;. \end{array} \right. \] Let $\varphi$ be a bounded continuous function on $\mathbb{R}^{P \times T}$. \begin{align*} \mathbb{E}[\varphi(Z)] & = \left( 2 \pi \sigma^2 \right)^{-TP/2} \int_{\mathbb{R}^{P \times T}} \varphi \left(\textrm{prox}_{\gamma \|\cdot\|_{2,1}}(y) \right)\prod_{i=1}^P\exp \left(-\frac{\|y_{i\cdot}- \mu_{i\cdot}\|^2}{2 \sigma^2} \right) \mathrm{d} y\;. \end{align*} For $y \in \mathbb{R}^{|m| \times T}$, denote by $\overline{y}$ the $(|m| \times T)$-matrix defined by $\overline{y}_{i\cdot} = y_{i\cdot} \left( 1 - \gamma/\| y_{i\cdot}\|_2 \right)$.
Fubini's theorem yields \begin{align*} \mathbb{E}[\varphi(Z)] &= \left( 2 \pi \sigma^2 \right)^{-TP/2} \sum_{m \in \mathcal{M}} \prod_{i \notin I_m} \int \mathbf{1}_{\|y_{i\cdot}\|_2 \leq \gamma} \exp\left(-\frac{\|y_{i\cdot}-\mu_{i\cdot}\|^2}{2 \sigma^2} \right) \mathrm{d} {y}_{i\cdot} \ \\ & \hspace{3cm}\times \int_{\mathbb{R}^{|m| \times T}} \varphi \left((m,\overline{y}) \right) \ \left( \prod_{k=1}^{|m| } \mathbf{1}_{\| \overline{y}_{k\cdot}\|_2 > \gamma} \right) \exp \left(-\frac{\|\overline{y}-\mu_{m\cdot}\|^2_2}{2 \sigma^2} \right) \mathrm{d} \overline{y}\;, \\ &= \left( 2 \pi \sigma^2 \right)^{-TP/2} \sum_{m \in \mathcal{M}} \prod_{i \notin I_m} p\left(\mu_{i\cdot} \right) \int_{\mathbb{R}^{|m| \times T}} \varphi \left((m,\overline{y}) \right) \ \left( \prod_{k=1}^{|m| } \mathbf{1}_{\| \overline{y}_{k\cdot}\|_2 > \gamma} \right) \\ & \hspace{9cm}\times \exp \left(-\frac{\|\overline{y}-\mu_{m\cdot}\|^2_2}{2 \sigma^2} \right) \mathrm{d} \overline{y}\;. \end{align*} By Fubini's theorem, it is sufficient to compute integrals of the form \begin{align*} \int_{ \mathbb{R}^T} \widetilde{\varphi} \left( v \left( 1 - \frac{\gamma}{\|v\|_2} \right) \right) \, \mathbf{1}_{\|v\|_2 > \gamma} \, \exp \left(-\frac{\|v-\mu_{i\cdot}\|_2^2}{2 \sigma^2} \right) \mathrm{d} v \;, \end{align*} for a generic function $\widetilde{\varphi}$. Consider the change of variable from $\mathbb{R}^T$ to $\mathbb{R}^T$ given by $z = v \left( 1- \frac{\gamma}{\|v\|_2}\right)$. Note that $\|z\|_2 = \|v\|_2-\gamma$ and that $v = \psi(z)$, where for any $z \in \mathbb{R}^T$, $\psi(z) \ensuremath{\stackrel{\mathrm{def}}{=}} \frac{\|z\|_2 + \gamma}{\|z\|_2} z$. We now determine the Jacobian matrix of $\psi$. Hereafter, $z$ and $h$ are elements of $\mathbb{R}^T$. For any $h,z$ such that $z\neq 0$, \[ \|z+h\|_2 = \|z\|_2 + \pscal{\frac{z}{\|z\|_2}}{h} + o\left(\|h\|_2\right)\;.
\] Then, \[ \frac{1}{\|z+h\|_2} = \frac{1}{\|z\|_2}\frac{1}{1+\pscal{\frac{z}{\|z\|_2^2}}{h} + o\left(\|h\|_2\right)} = \frac{1}{\|z\|_2}\left(1 - \pscal{\frac{z}{\|z\|_2^2}}{h} + o\left(\|h\|_2\right)\right)\;. \] Therefore, \[ \psi(z+h) = \left(1+\frac{\gamma}{\|z+h\|_2}\right)(z+h) = \psi(z) + \left\{\left(1+\frac{\gamma}{\|z\|_2}\right)I_T - \frac{\gamma}{\|z\|_2^3}zz^{\star}\right\}h + o(\|h\|_2) \] and the Jacobian matrix of $\psi$ at $z$ is \[ J\psi(z) = \left(1+\frac{\gamma}{\|z\|_2}\right)I_T - \frac{\gamma}{\|z\|_2^3}zz^{\star}\;. \] Define the unit vector $\omega \ensuremath{\stackrel{\mathrm{def}}{=}} z/\|z\|_2$. Then, the determinant of $J\psi(z)$ is given by \begin{align*} \mathrm{Det}\left(J\psi(z)\right) &= \left(1+\frac{\gamma}{\|z\|_2}\right)^T\mathrm{Det}\left(I_T - \frac{\gamma}{\gamma+\|z\|_2}\omega\omega^{\star}\right)\;,\\ &=\left(1+\frac{\gamma}{\|z\|_2}\right)^T\left(1 - \frac{\gamma}{\gamma+\|z\|_2}\right) = \left(1+\frac{\gamma}{\|z\|_2}\right)^{T-1} \;. \end{align*} Finally, \begin{align*} & \int_{\mathbb{R}^T} \widetilde{\varphi} \left( v \left( 1 - \frac{\gamma}{\|v\|_2} \right) \right) \mathbf{1}_{\|v\|_2 > \gamma} \exp \left(-\frac{\|v-\mu_{i\cdot}\|_2^2}{2 \sigma^2} \right) \mathrm{d} v \\ & \hspace{4cm} = \int_{\mathbb{R}^T} \widetilde{\varphi} \left(v \right) \, \exp \left(-\frac{\|\psi(v)-\mu_{i\cdot}\|_2^2}{2 \sigma^2} \right) \left(\frac{\gamma + \|v\|_2}{\|v\|_2} \right)^{T-1}\mathrm{d} v \;. \end{align*} This concludes the proof. \subsubsection{Proof of Lemma~\ref{STMALA:lem:prop}} The proof of Lemma~\ref{STMALA:lem:prop} follows the same lines as the proof of Lemma~\ref{PMALA:lem:propnorm21}, with the function $\psi$ replaced by $ \widetilde{\psi}(z) = g\left(\gamma^2/\|z\|_2^2 \right) \ z$. We detail the computation of the Jacobian.
We have \[ \nabla \widetilde{\psi}(z) = g\left( \frac{\gamma^2}{\|z\|_2^2} \right) \ I + g'\left( \frac{\gamma^2}{\|z\|_2^2} \right) \left( - \frac{\gamma^2}{\|z\|_2^4}\right) \ 2 z z^\star \;, \] and for any $u>0$, $ g'(u) = 1/\sqrt{1+4u}$. The proof follows upon noting that for any $a,b$, $\mathrm{Det}(aI -b z z^\star) = a^T - a^{T-1} b \|z\|_2^2$. \subsubsection{Proof of Lemma~\ref{STMALA:prox}} \textit{Proof in the case $\ell=1$.} We first compute the derivative of $h$ on $]0, \infty[$ (note that $h$ is symmetric). For any $x \in ]0, + \infty[$, \begin{align*} h'(x) &= \gamma^2 \left[\frac{1}{\sqrt{x^2+4\gamma^2}} + \frac{1}{\sqrt{x^2+4\gamma^2}} \exp \left(-2 \textrm{asinh} \left(\frac{x}{2 \gamma} \right) \right) \right] \;. \end{align*} As $\exp(-2t) = 2 \sinh^2(t) + 1 - 2 \sinh(t)\sqrt{1+\sinh^2(t)}$, this yields \begin{align*} h'(x) = \frac{-x + \sqrt{x^2+4 \gamma^2}}{2} \quad \text{for any} \quad x>0 \;. \end{align*} Since $h$ is symmetric, \[ h'(x) = \frac{-x - \sqrt{x^2+4 \gamma^2}}{2} \quad \text{for any} \quad x<0 \;. \] Set $\psi_u(x) \ensuremath{\stackrel{\mathrm{def}}{=}} h(x) + \ (x-u)^2/2$. Since we have $ \psi_{-u}(x) = \psi_u(-x)$, we only have to consider the case when $u \geq 0$. Hereafter, $u \geq 0$. It is easily proved that on $]0, \infty[$, the derivative $\psi_u'$ is strictly increasing to infinity, and a solution to the equation $\psi_u'(x) = 0$ exists on $]0, \infty[$ iff $u > \gamma$. In this case, this solution is $u-\gamma^2/u$, and $\psi_u(u-\gamma^2/u) \leq \psi_u(0)$. When $u \in [0, \gamma)$, $\inf_{x>0} \psi_u(x) = \psi_u(0)$. Moreover, it can be proved that $\psi_u'(x)=0$ has no solution on $]- \infty, 0[$, and therefore that $\inf_{x<0} \psi_u(x) = \psi_u(0)$ for any $u \geq 0$. Hence, the minimum is reached at $0$ if $u \in [0, \gamma[$ and at $u-\gamma^2/u$ if $u >\gamma$. This concludes the proof. \textit{Proof in the case $\ell >1$.} Set $x \in \mathbb{R}^\ell$ of the form $x = r \xi$ where $r>0$ and $\xi$ is on the unit sphere of $\mathbb{R}^\ell$.
Since the function $h$ only depends on the radius $r$, the minimum over $\mathbb{R}^\ell$ of $x \mapsto h(x) + \| x- u \|^2/2$ is reached in the direction $\xi_\star = u/\|u\|$. Then, finding the minimum in this direction is equivalent to finding the minimum of the function $\psi_u$ on $\mathbb{R}^+$, which yields $r_\star =0$ if $ \|u\| \leq \gamma$ and $r_\star = \|u\| \left(1-\gamma^2/\|u\|^2\right)$ otherwise. This concludes the proof. \subsection{Proof of Theorem~\ref{th:erggeo}} \label{sec:proofs:erg} In this section, let $\psi: \bigcup_{m \in \mathcal{M}} \left(\{m\} \times (\mathbb{R}^\star)^{|m|} \right) \to \mathbb{R}^P$ denote the one-to-one map such that for any $m\in\mathcal{M}$ and any $x \in (\mathbb{R}^\star)^{|m|}$, \begin{align} \label{eq:def:psi} \psi(m, x)= y \quad \textrm{with} \quad y_{m \cdot}= x \quad \textrm{and} \quad y_{-m \cdot}=0 \;. \end{align} Set \begin{equation} \label{eq:local:mu} \tilde \mu(x) \ensuremath{\stackrel{\mathrm{def}}{=}} x - \frac{\sigma^2}{2} \frac{D \, \nabla g(x)}{ \max\left( D, \| \nabla g(x) \|_2 \right)} \;. \end{equation} To ease notation, we denote by $q$ the proposal distribution. Since $T=1$, Lemma~\ref{PMALA:lem:propnorm21} shows that for any $m \in \mathcal{M}$ and $y \in S_m$, \begin{equation} \label{eq:local:q} q(x,y)= \prod_{i \notin I_m} p\left( \tilde \mu_i(x) \right) \ \ \prod_{i \in I_m} f\left( \tilde \mu_i(x), y_i\right) \;, \end{equation} where $p$ is given by Lemma~\ref{PMALA:lem:propnorm21} and \begin{equation} \label{eq:local:densityf} f(c, y) = (\sqrt{2 \pi} \sigma)^{-1} \exp \left( - \left|y + \gamma \ \textrm{sign}(y) -c \right|^2/(2\sigma^2) \right) \;. \end{equation} We start with a preliminary lemma which will be fundamental for the proofs, since it allows us to compare the proposal distribution $q$ to Gaussian proposals. \begin{lemma} \label{lemme:bgaus} Denote by $g_\epsilon$ the one-dimensional centered Gaussian density with variance $\epsilon$.
Set $\epsilon_1 \ensuremath{\stackrel{\mathrm{def}}{=}} \sigma^2/2, \epsilon_2 \ensuremath{\stackrel{\mathrm{def}}{=}} 2 \sigma^2$ and \[ k_1 \ensuremath{\stackrel{\mathrm{def}}{=}} \exp \left(-\left(\gamma/\sigma^2 + D/2\right)^2 \right)/\sqrt{2} \;, \quad k_2 \ensuremath{\stackrel{\mathrm{def}}{=}} \exp \left(\left(\gamma/(2\sigma^2) + D/4\right)^2 \right) \sqrt{2} \;. \] \begin{enumerate}[(i)] \item \label{lemme:bgaus:encadrement} For any $x,y \in \mathbb{R}^P$ and any $1 \leq i \leq P$, \begin{align} k_1 \ g_{\epsilon_1}(y_i - x_i) \leq f(\tilde \mu_i(x), y_i) \leq k_2 \ g_{\epsilon_2}(y_i - x_i) \;. \end{align} \item \label{lemme:bgaus:maj:q} For any $x \in \mathbb{R}^P$ and $y \in S_m$, $q(x,y) \leq k_2^{|m|} \prod_{i \in I_{m}} g_{\epsilon_2}(y_i-x_i)$. Therefore, there exists a constant $C>0$ such that for any $x,y \in \mathbb{R}^P$, $q(x,y) \leq C$. \end{enumerate} \end{lemma} \begin{proof} Let $x$ and $y$ be in $\mathbb{R}^P$ and $i \in \{1, \cdots, P\}$. By definition of $\tilde \mu$ (see (\ref{eq:local:mu})), we have $ \left| \tilde \mu_i(x)- x_i \right| \leq \| \tilde \mu(x) - x\|_2 \leq D \sigma^2/2$. Thus, on the one hand, \begin{align*} \left| y_i - x_i \right| & \leq \left| y_i + \gamma \ \textrm{sign}(y_i) - \tilde \mu_i(x) \right| + \gamma + \left|\tilde \mu_i(x) - x_i \right|\;, \\ & \leq \left| y_i + \gamma \ \textrm{sign}(y_i) - \tilde \mu_i(x) \right| + \gamma + \frac{D \sigma^2}{2} \;, \end{align*} which implies $\left| y_i + \gamma \ \textrm{sign}(y_i) - \tilde \mu_i(x) \right|^2 \geq \frac{1}{2} \left| y_i - x_i \right|^2 - \left(\gamma + D \sigma^2/2\right)^2$. On the other hand, it holds similarly that $ \left| y_i + \gamma \ \textrm{sign}(y_i) - \tilde \mu_i(x) \right|^2 \leq 2 \left| y_i-x_i \right|^2 + 2 \left(\gamma + D \sigma^2/2\right)^2$. This concludes the proof of \eqref{lemme:bgaus:encadrement}. The second statement follows trivially from (\ref{eq:local:q}) since $p(\tilde \mu_i(x)) \leq 1$.
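The two elementary inequalities at the heart of this proof can also be checked numerically; a minimal randomized sketch (the values of $\gamma$, $\sigma$, $D$ are arbitrary placeholders, and $c$ plays the role of $\tilde\mu_i(x)$, constrained only by $|c - x| \leq D\sigma^2/2$):

```python
import random

# Randomized check of the two elementary inequalities used in the proof:
# with A = |y + gamma*sign(y) - c| and |c - x| <= D*sigma^2/2,
#   A^2 >= (y - x)^2 / 2 - (gamma + D*sigma^2/2)^2   and
#   A^2 <= 2*(y - x)^2 + 2*(gamma + D*sigma^2/2)^2.
random.seed(0)
gamma, sigma, D = 0.8, 0.5, 3.0   # arbitrary placeholder values
slack = gamma + D * sigma**2 / 2.0

def sign(t):
    return 1.0 if t >= 0 else -1.0

for _ in range(10000):
    x = random.uniform(-10.0, 10.0)
    y = random.uniform(-10.0, 10.0)
    # c plays the role of tilde-mu_i(x); only |c - x| <= D sigma^2/2 is used.
    c = x + random.uniform(-1.0, 1.0) * D * sigma**2 / 2.0
    A = abs(y + gamma * sign(y) - c)
    assert A**2 >= 0.5 * (y - x) ** 2 - slack**2 - 1e-12
    assert A**2 <= 2.0 * (y - x) ** 2 + 2.0 * slack**2 + 1e-12
```

Both bounds follow from the triangle inequality together with $(a+b)^2 \leq 2a^2 + 2b^2$, so the assertions hold deterministically, not only with high probability.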
\end{proof} The proof of Theorem~\ref{th:erggeo} also requires a lower bound on the probability that a component of the proposed point will be set to zero. Such a bound is given in Lemma~\ref{lem:minoration:probap}. \begin{lemma} \label{lem:minoration:probap} Let $p$ and $\tilde \mu$ be given by Lemma~\ref{PMALA:lem:propnorm21} and (\ref{eq:local:mu}). It holds \[ \inf_{z \in \mathbb{R}^P} \min_{i \notin I_{m_z}} p(\tilde \mu_i(z)) > 0 \;. \] \end{lemma} \begin{proof} For any $z \in \mathbb{R}^P$, $\| z - \tilde \mu(z) \|_2 \leq D \sigma^2/2$ by definition of $\tilde \mu(z)$ (see (\ref{eq:local:mu})). Since $z_i = 0$ for any $i \notin I_{m_z}$, this implies $| \tilde \mu_i(z) | \leq D \sigma^2 /2$ for any $i \notin I_{m_z}$. Hence, there exists a constant $C>0$ such that \begin{align} \label{eq:min:p21} \inf_{z \in \mathbb{R}^P} \min_{i \notin I_{m_z}} \mathbb{P}(|\tilde\mu_i(z) + \xi| \leq \gamma) \geq C \;, \end{align} with $\xi \sim \mathcal{N}(0,1)$; since $p(\tilde \mu_i(z)) = \mathbb{P}(|\tilde\mu_i(z) + \xi| \leq \gamma)$, this concludes the proof. \end{proof} \begin{proposition}\label{prop:small} \begin{enumerate}[(i)] \item \label{prop:small:set} Let $C$ be a Borel set of $\mathbb{R}^P$ such that for any $m \in \mathcal{M}$, $C \cap S_m$ is a compact set of $S_m$, where $S_m$ is defined by~\eqref{eq:def:sm}. Then, $C$ is a one-small set for the kernel $P_{trunc}$: there exists a positive measure $\tilde \nu$ on $\mathbb{R}^P$ such that $P_{trunc}(x,A) \geq \tilde \nu(A) \mathbf{1}_{C}(x)$. \item \label{prop:small:irreducibility} The Markov kernel $P_{trunc}$ is psi-irreducible and aperiodic. \end{enumerate} \end{proposition} \begin{proof} {\em \eqref{prop:small:set}}: Let $C$ and $K$ be two Borel sets of $\mathbb{R}^P$ such that $\nu(K) >0$ and for any $m \in \mathcal{M}$, $C \cap S_m$ and $K \cap S_m$ are compact subsets of $S_m$.
Since $\mathbb{R}^P = \bigcup_{m \in \mathcal{M}} S_m$, we have \begin{align*} \inf \limits_{ x \in C} P_{trunc}(x,A) = \inf \limits_{m \in \mathcal{M}} \inf \limits_{ x \in C \cap S_m} P_{trunc}( x,A) \;, \end{align*} so that it is enough to establish a minorization of the kernel, for any $m_\star \in \mathcal{M}$, uniformly in $x \in C \cap S_{m_\star}$. Let $m_\star \in \mathcal{M}$. By definition of $P_{trunc}$, $\psi$ (see (\ref{eq:def:psi})), $q$ (see (\ref{eq:local:q})) and $\nu$ (see~(\ref{eq:def:nu})) \begin{align*} P_{trunc}( x,A) & \geq \int_{A \cap K} \alpha( x, y) q( x, y) \rmd \nu( y) \;,\\ &\geq \sum_{m \in \mathcal{M}} \int_{A \cap K} \alpha( x, y) \prod_{i \notin I_m} p(\tilde \mu_i( x)) \delta_0(\mathrm{d} y_i) \prod_{i \in I_m} f(\tilde \mu_i(x),y_i) \mathrm{d} y_i \;,\\ &\geq \sum_{m \in \mathcal{M}} \int_{(A \cap K) \cap S_m} \alpha( x, y) \prod_{i \notin I_m} p(\tilde \mu_i(x)) \delta_0(\mathrm{d} y_i) \prod_{i \in I_m} f(\tilde \mu_i(x),y_i) \mathrm{d} y_i \;,\\ &\geq \sum_{m \in \mathcal{M}} \prod_{i \notin I_m} p(\tilde \mu_i(x)) \int_{A \cap K \cap S_m} \alpha( x,\psi(m, y_m)) \prod_{i \in I_m} f(\tilde \mu_i(x),y_i) \mathrm{d} y_i \;,\\ & \geq \sum_{m \in \mathcal{M}} k_1^{|m|} \prod_{i \notin I_m} p(\tilde \mu_i(x)) \int_{A \cap K \cap S_m} \alpha( x,\psi(m, y_m)) \prod_{i \in I_m} g_{\epsilon_1}(x_i-y_i) \mathrm{d} y_i \;, \end{align*} where the last inequality is a consequence of Lemma~\ref{lemme:bgaus}\eqref{lemme:bgaus:encadrement}. For any $x \in S_{m_\star}$ and $y \in S_m$, we have \[ \alpha(x,y) = 1 \wedge \frac{\pi_m(y)}{\pi_{m_\star}(x)} \frac{\prod_{i \notin I_{m_\star}} p\left( \tilde \mu_i(y) \right) \ \ \prod_{i \in I_{m_\star}} f\left( \tilde \mu_i(y), x_i\right)}{\prod_{i \notin I_{m}} p\left( \tilde \mu_i(x) \right) \ \ \prod_{i \in I_{m}} f\left( \tilde \mu_i(x), y_i\right)}\;.
\] There exists a compact set of $\mathbb{R}$ such that for any $x \in C \cap S_m$ and $y \in K \cap S_m$, $\tilde \mu_i(x)$ and $\tilde \mu_i(y)$ are in this compact set for any $i$. Hence, A\ref{hyp:reg:pi}\eqref{hyp:reg:pi:cont} and Lemma~\ref{lemme:bgaus}\eqref{lemme:bgaus:encadrement} imply that there exists $\varepsilon_m >0$ such that for any $x \in C \cap S_m$ and $y \in K \cap S_m$, \[ \alpha(x,y) \geq \varepsilon_m \;, \qquad \inf_{i \in I_m} g_{\epsilon_1}(x_i-y_i) \geq \varepsilon_m \;. \] Combined with Lemma~\ref{lem:minoration:probap}, this yields a constant $\varepsilon > 0$ such that for any $x \in C \cap S_{m_\star}$, $P_{trunc}( x,A) \geq \varepsilon \int_{A} \mathbf{1}_{K} ( y) \mathrm{d} \nu(y)$, thus concluding the proof. {\em \eqref{prop:small:irreducibility}}: By~\cite[Lemma 1.1]{mengersen:tweedie:1996}, the Markov chain $\left(X_n\right)_{n\ge 0}$ is psi-irreducible since for any $x,y \in \mathbb{R}^P$, $q(x,y)>0$ and $\pi(x) >0$ as a consequence of Lemma~\ref{lemme:bgaus}\eqref{lemme:bgaus:encadrement} and A\ref{hyp:definition:pi}. The chain is strongly aperiodic since by Proposition~\ref{prop:small}\eqref{prop:small:set} it possesses a one-small set with positive $\nu$-measure, which concludes the proof. \end{proof} For any measurable function $f: \mathbb{R}^P \to \mathbb{R}^+$, $P_{trunc} f: \mathbb{R}^P \to \mathbb{R}^+$ is the measurable function defined by $x\mapsto \int P_{trunc}(x,\mathrm{d} z) f(z)$. $P_{trunc}$ is a Hastings-Metropolis kernel with proposal distribution $q(x,y) \mathrm{d} \nu(y)$ given by (\ref{eq:local:q}) and target distribution $\pi(y) \mathrm{d} \nu(y)$. Fix $\beta \in (0,1)$ and set $V: \mathbb{R}^P \to [1, \infty)$, $x \mapsto c_\beta \pi^{-\beta}(x)$, where $c_\beta$ is chosen so that $V \geq 1$; such a constant exists under A\ref{hyp:definition:pi}. Define the possible rejection region $R(x)$ by \begin{align*} R(x) \ensuremath{\stackrel{\mathrm{def}}{=}} \{y \in \mathbb{R}^P: \alpha(x,y) <1 \} = \{y \in \mathbb{R}^P: \pi(x) q(x,y) > \pi(y) q(y,x) \} \;.
\end{align*} We have \begin{align*} \frac{P_{trunc} V(x)}{V(x)} & \leq \int \alpha(x,y) \frac{\pi^{-\beta}(y)}{\pi^{-\beta}(x)} q(x,y) \rmd \nu(y) + \int_{R(x)} q(x,y) \rmd \nu(y) \;, \\ & \leq \sum_{m \in \mathcal{M}} T_m(x) + \int_{R(x)} q(x,y) \rmd \nu(y) \;, \end{align*} where \begin{align} \label{eq:def:tm} T_m(x) & \ensuremath{\stackrel{\mathrm{def}}{=}} \int_{\mathbb{R}^{|m|}} \alpha(x, \psi(m,z)) \, \frac{\pi^{-\beta}(\psi(m,z))}{\pi^{-\beta}(x)} \, q(x,\psi(m,z)) \, \mathrm{d} z \;. \end{align} \begin{proposition} \label{prop:drift} Assume A\ref{hyp:definition:pi} to A\ref{hyp:cone:pi} hold. For $\beta \in (0,1)$, set $V(x) \propto \pi^{-\beta}(x)$. Then \begin{align*} \limsup \limits_{\|x\| \to \infty} \frac{P_{trunc} V(x)}{V(x)} < 1 \;. \end{align*} \end{proposition} The proof of Proposition~\ref{prop:drift} is detailed below: by Lemma~\ref{lemme:tm}, $\limsup \limits_{\|x\|_2 \to \infty} T_m(x) = 0$ for all $m\in\mathcal{M}$; and by Lemma~\ref{lemme:remaining}, $\limsup \limits_{\|x\| \to \infty} \int_{R(x)} q(x,y) \rmd \nu(y) <1$. \begin{lemma} \label{lemme:tm} Assume A\ref{hyp:definition:pi}, A\ref{hyp:reg:pi} and A\ref{hyp:superexp:pi} hold. Then for any $m \in \mathcal{M}$, $\limsup \limits_{\|x\|_2 \to \infty} T_m(x) = 0$. \end{lemma} \begin{proof} The proof is adapted from~\cite{jarner:hansen:2000} and \cite{atchade:2006} who respectively address the geometric ergodicity of a symmetric Random Walk Hastings-Metropolis algorithm and the geometric ergodicity of MALA. Let $m \in \mathcal{M}$ and $\epsilon >0$ be fixed. 
Define \begin{align*} \mathcal{B}(x_m,a) & \ensuremath{\stackrel{\mathrm{def}}{=}} \{z \in \mathbb{R}^{|m|}, \|z-x_m\|_2 \leq a \} \;, \\ \mathcal{C}_{m}(x) &\ensuremath{\stackrel{\mathrm{def}}{=}} \{ z \in \mathbb{R}^{|m|}, \pi(\psi(m,z))=\pi(x) \} \;, \\ \mathcal{C}_{m}(x,u) & \ensuremath{\stackrel{\mathrm{def}}{=}} \{z + s n(z), |s| \leq u, z \in \mathcal{C}_{m}(x) \} \;, \\ A_m(x) &\ensuremath{\stackrel{\mathrm{def}}{=}} \{z \in \mathbb{R}^{|m|}, \pi(\psi(m,z)) q(\psi(m,z),x) \geq \pi(x) q(x,\psi(m,z)) \}\;, \\ R_m(x) & \ensuremath{\stackrel{\mathrm{def}}{=}} \mathbb{R}^{|m|} \setminus A_m(x) \;. \end{align*} We write $T_m(x) \leq T_{m,1}(x,a) +\sum_{j=2}^4 T_{m,j}(x,a,u)$ with \begin{align*} T_{m,1}(x,a) &\ensuremath{\stackrel{\mathrm{def}}{=}} \int_{\mathcal{B}^c(x_m,a)} \alpha(x,\psi(m,z)) \frac{\pi^{-\beta}(\psi(m,z))}{\pi^{-\beta}(x)} q(x,\psi(m,z)) \mathrm{d} z \;, \\ T_{m,2}(x,a,u) & \ensuremath{\stackrel{\mathrm{def}}{=}} \int_{ \mathcal{B}(x_m,a) \cap \mathcal{C}_{m}(x,u)} \alpha(x,\psi(m,z)) \frac{\pi^{-\beta}(\psi(m,z))}{\pi^{-\beta}(x)} q(x,\psi(m,z)) \mathrm{d} z \;, \\ T_{m,3}(x,a,u) &\ensuremath{\stackrel{\mathrm{def}}{=}} \int_{A_m(x)\cap \mathcal{B}(x_m,a) \cap \mathcal{C}^c_{m}(x,u)} \frac{\pi^{-\beta}(\psi(m,z))}{\pi^{-\beta}(x)} q(x,\psi(m,z)) \mathrm{d} z \;, \\ T_{m,4}(x,a,u) &\ensuremath{\stackrel{\mathrm{def}}{=}} \int_{R_m(x)\cap \mathcal{B}(x_m,a) \cap \mathcal{C}^c_{m}(x,u)} \frac{\pi^{1-\beta}(\psi(m,z))}{\pi^{1-\beta}(x)} q(\psi(m,z),x) \mathrm{d} z\;. \end{align*} We prove that there exist positive constants $C,M$ such that $\sup_{\|x\| \geq M} T_m(x) \leq C \epsilon$. Since $\epsilon$ is arbitrarily small, this yields the lemma. Note that for any $z \in \mathbb{R}^{|m|}$, \begin{equation} \label{eq:bound:partout} \alpha(x,\psi(m,z)) \frac{\pi^{-\beta}(\psi(m,z))}{\pi^{-\beta}(x)} \leq \left( \frac{q(\psi(m,z),x)}{q(x,\psi(m,z))}\right)^\beta \;. 
\end{equation} \paragraph{Control of $T_{m,1}$:} By (\ref{eq:bound:partout}), $ T_{m,1}(x,a)\leq \int_{\mathcal{B}^c(x_m,a)} q(x,\psi(m,z))^{1-\beta} q(\psi(m,z),x)^{\beta} \mathrm{d} z$. By (\ref{eq:local:q}) and Lemma~\ref{lemme:bgaus}, there exists a constant $C>0$ such that \begin{align*} T_{m,1}(x,a) &\leq C k_2^{|m|(1-\beta)} \int_{\mathcal{B}^c(x_m,a)} \prod_{i \in I_m} g_{\epsilon_2}(x_i - y_i)^{1-\beta} \mathrm{d} y_i \\ &\leq C k_2^{|m| (1-\beta)} \int_{\mathcal{B}^c(0,a)} \prod_{i \in I_m} g_{\epsilon_2}(y_i)^{1-\beta} \mathrm{d} y_i \;. \end{align*} Therefore, there exists $a>0$ such that $\sup_{x \in \mathbb{R}^P} T_{m,1}(x,a) \leq \epsilon$. \paragraph{Control of $T_{m,2}$:} By (\ref{eq:bound:partout}), $ T_{m,2}(x,a,u) \leq \int_{\mathcal{B}(x_m,a) \cap \mathcal{C}_{m}(x,u)} q(x,\psi(m,z))^{1-\beta} q(\psi(m,z),x)^{\beta} \mathrm{d} z$. By A\ref{hyp:superexp:pi}, the Lebesgue measure of $\mathcal{B}(x_m,a) \cap \mathcal{C}_{m}(x,u)$ can be made arbitrarily small (independently of $x \in \mathbb{R}^P$) when $u$ is small enough (see~\cite[Proof of Theorem 4.1]{jarner:hansen:2000} for details). Therefore, since $q$ is bounded (see Lemma~\ref{lemme:bgaus}\eqref{lemme:bgaus:maj:q}), there exists $u>0$ such that $ \sup_{x \in \mathbb{R}^P} T_{m,2}(x,a,u) \leq \epsilon$. \paragraph{Control of $T_{m,3}$:} Set $d_r(u) \ensuremath{\stackrel{\mathrm{def}}{=}} \sup_{\|x\|_2 \geq r} \pi(x + u \, n(x))/\pi(x)$. By A\ref{hyp:superexp:pi}, choose $r$ large enough so that $\left(d_{r-u}(u) \right)^{1-\beta} \vee \left(d_r(u)\right)^\beta\leq \epsilon$. By A\ref{hyp:definition:pi} and A\ref{hyp:reg:pi}\eqref{hyp:reg:pi:cont}, $\sup \limits_{z \in \mathcal{B}(0,r)} \pi(\psi(m,z))^{-\beta} < \infty$, so that by Lemma~\ref{lemme:bgaus}\eqref{lemme:bgaus:maj:q} \begin{align*} \sup_{x \in \mathbb{R}^P} \int_{A_m(x)\cap \mathcal{B}(x_m,a) \cap \mathcal{C}^c_{m}(x,u) \cap \mathcal{B}(0,r)} q(x,\psi(m,z)) \pi^{-\beta}(\psi(m,z)) \mathrm{d} z < \infty \;.
\end{align*} A\ref{hyp:reg:pi}\eqref{hyp:reg:pi:lim} implies that \[ \limsup_{\|x \| \to \infty} \int_{A_m(x)\cap \mathcal{B}(x_m,a) \cap \mathcal{C}_{m}^c(x,u) \cap \mathcal{B}(0,r)} \frac{\pi^{-\beta}(\psi(m,z))}{\pi^{-\beta}(x)} q(x,\psi(m,z)) \mathrm{d} z = 0 \;. \] Moreover, by definition of $A_m(x)$, for any $z \in A_m(x)$ it holds \[ \frac{\pi^{-\beta}(\psi(m,z))}{\pi^{-\beta}(x)} q(x,\psi(m,z)) \leq \frac{\pi^{1-\beta}(\psi(m,z))}{\pi^{1-\beta}(x)} q(\psi(m,z),x) \;; \] by Lemma~\ref{lemme:bgaus}\eqref{lemme:bgaus:maj:q}, there exists a constant $C$ such that for any $x \in \mathbb{R}^P$ and $z \in A_m(x)$ \[ \frac{\pi^{-\beta}(\psi(m,z))}{\pi^{-\beta}(x)} q(x,\psi(m,z)) \leq C \left( \frac{\pi^{-\beta}(\psi(m,z))}{\pi^{-\beta}(x)} \wedge \frac{\pi^{1-\beta}(\psi(m,z))}{\pi^{1-\beta}(x)} \right)\;. \] This yields \begin{align*} & \int_{A_m(x)\cap \mathcal{B}(x_m,a) \cap \mathcal{C}^c_{m}(x,u)\cap \mathcal{B}^c(0,r)} \frac{\pi^{-\beta}(\psi(m,z))}{\pi^{-\beta}(x)} q(x,\psi(m,z)) \mathrm{d} z \\ & \leq C \, \left( \sup_{z \in \mathcal{C}^c_m(x,u) \cap \mathcal{B}^c(0,r)} \frac{\pi^{\beta}(x)}{\pi^{\beta}(\psi(m,z))} \wedge \sup_{z \in \mathcal{C}^c_m(x,u) \cap \mathcal{B}^c(0,r)} \frac{\pi^{1-\beta}(\psi(m,z))}{\pi^{1-\beta}(x)} \right) \;. \end{align*} Let $z \in \mathcal{C}_m^c(x,u) \cap \{z: \pi(\psi(m,z)) < \pi(x) \}$ and set $\bar{x} \ensuremath{\stackrel{\mathrm{def}}{=}} \psi(m,z) - u \ n(\psi(m,z))$. By A\ref{hyp:reg:pi}\eqref{hyp:reg:pi:cont}, $\phi: s \mapsto \pi(\psi(m,z) - s \ n(\psi(m,z))) - \pi(x)$ is continuous, and by definition of $\mathcal{C}_m^c(x,u)$, $\phi(s) \neq 0$ for any $0 \leq s \leq u$. Since $\phi(0)<0$ (we assumed that $\pi(\psi(m,z)) < \pi(x)$), this implies that $\phi(u)<0$, i.e., $\pi(\bar{x}) \leq \pi(x)$.
Then, for any such $z$, \begin{align*} \frac{\pi (\psi(m,z))}{\pi(x)} = \frac{\pi(\psi(m,z))}{\pi(\bar{x})} \frac{\pi(\bar{x})}{\pi(x)} \leq \frac{\pi(\psi(m,z))}{\pi(\bar{x})} \leq d_{r-u}(u) \;. \end{align*} If $z\in \mathcal{C}_m^c(x,u) \cap \{z: \pi(\psi(m,z)) \geq \pi(x) \}$, we set $\bar x \ensuremath{\stackrel{\mathrm{def}}{=}} \psi(m,z) + u \ n(\psi(m,z))$ and obtain similarly that $ \pi(x)/\pi(\psi(m,z)) \leq d_{r}(u)$. Hence, we established that for any $z \in \mathcal{C}^c_m(x,u) \cap \mathcal{B}^c(0,r)$, \begin{align*} \frac{\pi(\psi(m,z))}{\pi(x)} \leq d_{r-u}(u) \quad \text{or} \quad \frac{\pi(x)}{\pi(\psi(m,z))} \leq d_{r}(u) \;. \end{align*} As a conclusion, there exist constants $C,M$ such that $\sup_{\|x\| \geq M} T_{m,3}(x,a,u) \leq C \epsilon$. \paragraph{Control of $T_{m,4}$:} Following the same lines as for the control of $T_{m,3}(x,a,u)$, it can be shown that there exist constants $C,M$ such that $\sup_{\|x\| \geq M} T_{m,4}(x,a,u) \leq C \epsilon$. \end{proof} \begin{lemma} \label{lemme:cone} Assume A\ref{hyp:definition:pi}, A\ref{hyp:superexp:pi} and A\ref{hyp:cone:pi} hold. Let $u,b,\epsilon,R$ be given by A\ref{hyp:cone:pi} and $W_m(x)$ be defined by \eqref{eq:definition:cone}. There exists $r >R$ such that for any $m \in \mathcal{M}$ and $x \in S_m \cap \{\|x \|_2 \geq r\}$, $W_m(x) \subset \{y \in \mathbb{R}^{|m|}, \alpha(x, \psi(m,y)) =1 \}$. \end{lemma} \begin{proof} The proof is adapted from~\cite{jarner:hansen:2000}. Let $m \in \mathcal{M}$ and $x \in S_m$ such that $\| x \| \geq r$ for some $r>R$ to be fixed later (the constant $R$ is given by A\ref{hyp:cone:pi}). We first prove that there exists a positive constant $C_b$ such that \begin{equation} \label{eq:constanteCb} \frac{\pi(x)}{\pi(x-u n(x))} \leq C_b \leq \inf_{z \in \mathcal{B}(x_m,b)} \frac{q(\psi(m,z), x)}{q(x, \psi(m,z))} \;.
\end{equation} By (\ref{eq:local:q}), Lemma~\ref{lemme:bgaus}\eqref{lemme:bgaus:encadrement} and Lemma~\ref{lem:minoration:probap}, there exist $C, C_b>0$ (independent of $x \in S_m$) such that \begin{align*} \inf_{z \in \mathcal{B}(x_m,b)} \frac{q(\psi(m,z),x)}{q(x,\psi(m,z))} \geq C^{P-|m|} k_1^{|m|} k_2^{-|m|} \ \inf_{z \in \mathcal{B}(x_m,b)} \prod_{i \in I_m} \frac{g_{\epsilon_1}(x_i - z_i)}{g_{\epsilon_2}(x_i - z_i)} \geq C_b\;. \end{align*} By A\ref{hyp:superexp:pi}, we can choose $r$ large enough so that $\pi(x)/\pi(x - u n(x)) \leq C_b$. This yields (\ref{eq:constanteCb}). Let $z \in W_m(x)$. Then, $\|z-x_m\|_2 \leq b$ so that $z \in \mathcal{B}(x_m,b)$. Hence, by (\ref{eq:constanteCb}), $q(\psi(m,z),x)/q(x,\psi(m,z)) \geq C_b$. In addition, \begin{align*} \frac{\pi(\psi(m,z))}{\pi(x)} = \frac{\pi(\psi(m,z))}{\pi(x-u n(x))} \frac{\pi(x-un(x))}{\pi(x)} \geq \frac{\pi(\psi(m,z))}{\pi(x-u n(x))} \frac{1}{C_b} \geq \frac{1}{C_b} \;, \end{align*} where in the last inequality we used A\ref{hyp:cone:pi}. Hence, \[ \frac{\pi(\psi(m,z))}{\pi(x)} \frac{q(\psi(m,z),x)}{q(x,\psi(m,z))} \geq 1 \;, \] so that $\alpha(x, \psi(m,z)) = 1$, thus showing the lemma. \end{proof} \begin{lemma} \label{lemme:remaining} Assume A\ref{hyp:definition:pi} to A\ref{hyp:cone:pi} hold. Then $ \limsup \limits_{\|x\|_2 \to \infty} \int_{R(x)} q(x,y) \rmd \nu(y) < 1$. \end{lemma} \begin{proof} Set $A_m(x) \ensuremath{\stackrel{\mathrm{def}}{=}} \{z \in \mathbb{R}^{|m|}, \alpha(x,\psi(m,z)) =1\}$.
By definition of $\rmd \nu$, by Lemma~\ref{lemme:bgaus}\eqref{lemme:bgaus:encadrement} and by Lemma~\ref{lem:minoration:probap}, there exists a constant $C>0$ such that \begin{align*} 1- \int_{R(x)} q(x,y) \rmd \nu(y) &= \sum_{m \in \mathcal{M}} \int_{A_m(x)} q(x,\psi(m,z)) \mathrm{d} z\;, \\ & \geq \sum_{m \in \mathcal{M}} k_1^{|m|} \prod_{i \notin I_m} p(\tilde \mu_i(x)) \int_{A_m(x)} \prod_{i \in I_m} g_{\epsilon_1}(x_i-y_i) \mathrm{d} y_i \\ & \geq k_1^{|m_x|} \prod_{i \notin I_{m_x}} p(\tilde \mu_i(x)) \int_{A_{m_x}(x)} \prod_{i \in I_{m_x}} g_{\epsilon_1}(x_i-y_i) \mathrm{d} y_i \\ & \geq C \, k_1^{|m_x|} \int_{A_{m_x}(x)} \prod_{i \in I_{m_x}} g_{\epsilon_1}(x_i-y_i) \mathrm{d} y_i \;. \end{align*} By Lemma~\ref{lemme:cone}, for any $x$ with $\|x\|_2$ large enough, \[ 1- \int_{R(x)} q(x,y) \rmd \nu(y) \geq C \, k_1^{|m_x|} \int_{W_{m_x}(x)} \prod_{i \in I_{m_x}} g_{\epsilon_1}(x_i-y_i) \mathrm{d} y_i \;, \] where $W_m(x)$ is defined in~\eqref{eq:definition:cone}. We have \begin{align} \label{eq:dependanceonx} \int_{W_{m_x}(x)} \prod_{i \in I_{m_x}} g_{\epsilon_1}(x_{i}-y_{i}) \mathrm{d} y_{i} = \int_{W_{m_x}(x)-x_{m_x}} \prod_{i \in I_{m_x}} g_{\epsilon_1}(y_{i}) \mathrm{d} y_{i} \;, \end{align} where $A-x \ensuremath{\stackrel{\mathrm{def}}{=}} \{z, z+x \in A\}$. Observe that \begin{multline*} W_{m_x}(x)-x_{m_x} = \{ - u n(x_{m_x}) - s \xi; 0<s<b-u, \xi \in \mathbb{R}^{|m_x|}, \|\xi\|_2=1, \|\xi-n(x_{m_x})\|_2 \leq \epsilon \} \;, \end{multline*} so that the integrals in (\ref{eq:dependanceonx}) depend on $x$ only through $m_x$. Since $\mathcal{M}$ is finite, there exists a constant $C'>0$ independent of $x$ such that \begin{align*} \int_{W_{m_x}(x)-x_{m_x}} \prod_{i \in I_{m_x}} g_{\epsilon_1}(y_{i}) \mathrm{d} y_{i} \geq C' \;, \end{align*} which concludes the proof. \end{proof}
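As a quick numerical sanity check of the closed-form proximal map obtained in the proof of Lemma~\ref{STMALA:prox} (case $\ell = 1$), one can reconstruct $h$ by integrating $h'(x) = (-x + \sqrt{x^2+4\gamma^2})/2$ and minimize $\psi_u$ on a grid; a minimal sketch (the normalization $h(0) = 0$ is an arbitrary choice that does not affect the minimizer):

```python
import math

def h(x, gamma):
    # Antiderivative of h'(x) = (-x + sqrt(x^2 + 4*gamma^2))/2 on (0, inf),
    # extended evenly and normalized so that h(0) = 0 (an additive constant
    # does not change the proximal point).
    a = abs(x)
    return (a * math.sqrt(a * a + 4.0 * gamma**2) / 4.0
            + gamma**2 * math.asinh(a / (2.0 * gamma)) - a * a / 4.0)

def prox_closed_form(u, gamma):
    # Lemma: argmin_x { h(x) + (x - u)^2 / 2 } is 0 if |u| <= gamma
    # and sign(u) * (|u| - gamma^2/|u|) otherwise.
    if abs(u) <= gamma:
        return 0.0
    return math.copysign(abs(u) - gamma**2 / abs(u), u)

def prox_numeric(u, gamma, lo=-10.0, hi=10.0, n=40001):
    # Brute-force grid minimization of psi_u(x) = h(x) + (x - u)^2 / 2.
    step = (hi - lo) / (n - 1)
    best_x, best_v = lo, float("inf")
    for k in range(n):
        x = lo + k * step
        v = h(x, gamma) + 0.5 * (x - u) ** 2
        if v < best_v:
            best_x, best_v = x, v
    return best_x

gamma = 1.5
for u in (-4.0, -1.0, 0.0, 0.5, 1.5, 2.0, 5.0):
    assert abs(prox_numeric(u, gamma) - prox_closed_form(u, gamma)) < 2e-3
```

Since $\psi_u$ is strictly convex (the kink of $h$ at $0$ is convex and $\psi_u'' \geq 1/2$ elsewhere), the grid minimizer lies within one grid step of the true proximal point, which is what the tolerance reflects.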
\section{Introduction} Despite its proximity and wealth of existing observations, the origin of the radio and $\gamma$-ray emission from the Galactic Center remains controversial. Previous attempts at modeling the observed $\gamma$-ray spectrum have focused their efforts on unresolved point sources or dark matter \citep[e.g.][]{Abazajian11, Gordon13}. Alternatively, \cite{YusefZadeh13} have attempted to explain both the radio and $\gamma$-ray emission as diffuse emission from cosmic-ray electrons. Resolving these problems is especially critical given the similarities of the Galactic Center to starburst galaxies. Intense magnetic and radiation fields and concentrated molecular gas content in the Galactic Center are reminiscent of conditions in starburst systems. Additionally, the local cosmic-ray spectrum is steeper than the spectrum at the sources due to the effects of diffusion and energy losses. In denser environments the cosmic-ray electron spectral steepening is due only to energy losses and not to energy-dependent diffusion. This hypothesis has been tested in the past in starburst nuclei, as both radio and $\gamma$-ray observations are available for a few such objects. To test how far the parallels between the Galactic Center and starburst galaxies extend, we apply a model for cosmic-ray interactions designed for starburst nuclei \citep[][hereafter YEGZ]{YoastHull13} to the Central Molecular Zone (CMZ) of the Galactic Center and include both cosmic-ray protons and electrons. Concentrating on the recent $\gamma$-ray observations by \textit{Fermi} and HESS, we find that the starburst model explains the TeV energy emission quite well but underpredicts the emission at GeV energies. We also explore what kind of cosmic ray population would be necessary to produce the observed GeV energy emission and what types of sources could account for these cosmic rays.
Finally, we model the radio spectrum and investigate the impact of the additional population of cosmic rays introduced to model the GeV energy $\gamma$-rays. The next section provides a brief overview of how the population of energetic particles was computed and describes the observed properties of the Galactic Center. Section 3 contains the results of our model for the Galactic Center and Section 4 provides a discussion of the implications and concluding remarks. \section{Theoretical Model \& Parameters} \subsection{YEGZ Model} \begin{center} \begin{deluxetable*}{lllc} \tablecaption{Input Model Parameters} \tablewidth{0pt} \tablehead{ \colhead{Physical Parameters} & \colhead{Model A} & \colhead{Model B} & \colhead{References} } \startdata Distance & 8.0 kpc & 8.0 kpc & 1 \\ CMZ Radius & 200 pc & 250 pc & 2 \\ CMZ Disk Scale Height & 50 pc & 50 pc & 2 \\ Molecular Gas Mass & $3 \times 10^{7}$ $M_{\odot}$ & $5 \times 10^{7}$ $M_{\odot}$ & 3 \\ Ionized Gas Mass\tablenotemark{a} & $2.7 \times 10^{3}$ $M_{\odot}$ & $7.3 \times 10^{3}$ $M_{\odot}$ & \\ Average ISM Density\tablenotemark{b} & $\sim$80 cm$^{-3}$ & $\sim$85 cm$^{-3}$ & \\ FIR Luminosity & $4\times 10^{8}$ $L_{\odot}$ & $4\times 10^{8}$ $L_{\odot}$ & 3 \\ FIR Radiation Field Energy Density\tablenotemark{b} & 13 eV~cm$^{-3}$ & 8 eV~cm$^{-3}$ & \\ Dust Temperature & 21 K & 21 K & 3 \\ Stellar IR Luminosity & $2.5\times 10^{9}$ $L_{\odot}$ & $2.5\times 10^{9}$ $L_{\odot}$ & 3 \\ Stellar IR Radiation Field Energy Density\tablenotemark{b} & 82 eV~cm$^{-3}$ & 53 eV~cm$^{-3}$ & \\ Effective Stellar Temperature & 4400 K & 4400 K & 3 \\ Star-Formation Rate (SFR) & $0.01$ M$_{\odot}$ yr$^{-1}$ & $0.025$ M$_{\odot}$ yr$^{-1}$ & 4 \\ SN Explosion Rate ($\nu_{SN}$)\tablenotemark{b} & $10^{-4}$ yr$^{-1}$ & $2.75 \times 10^{-4}$ yr$^{-1}$ & \\ SN Explosion Energy\tablenotemark{c} & 10$^{51}$ ergs & 10$^{51}$ ergs & \\ SN Energy in Cosmic-Ray Protons\tablenotemark{c} & 10\% & 10\% & \\ Ratio of Primary Protons to Electrons 
($N_{p}$/$N_{e}$) & 50 & 50 & \\ Slope of Primary Cosmic Ray Source Function & 2.3 & 2.3 & \\ \enddata \tablenotetext{a}{Scales with the star-formation rate and molecular gas mass} \tablenotetext{b}{Derived from above parameters} \tablenotetext{c}{Excludes neutrino energy} \tablerefs{ (1)~\cite{Reid09}; (2)~\cite{Ferriere07}; (3)~\cite{Launhardt02}; (4)~\cite{Longmore13}; } \end{deluxetable*} \end{center} \setcounter{footnote}{3} Previously, we created a model for cosmic-ray interactions in the CMZs of star-forming galaxies, capable of calculating both the resulting radio and $\gamma$-ray emission (YEGZ). Our treatment of cosmic ray interactions is similar to several other works \citep[e.g.][]{Torres04, Lacki10, Paglione12}. As our single zone model is designed for starburst systems in which strong galactic winds are present and environmental conditions are extreme enough that the energy loss timescales are significantly less than diffusion timescales, we include only energy losses and advective escape. Thus, following the approach in YEGZ, the spectrum for cosmic rays depends only on the injection spectrum and the lifetime. We assume a power-law source function, $Q(E) \propto E^{-p}$, for cosmic rays such that \begin{equation} \int_{E_{\text{min}}}^{E_{\text{max}}} Q(E) E dE = \frac{\eta \nu_{\text{SN}} E_{51}}{V} , \end{equation} where $\nu_{\text{SN}}$ is the volume integrated supernova rate, $V$ is the volume of the starburst region, $\eta$ is the fraction of the supernova energy transferred to cosmic rays, and $E_{51}$ is the explosion energy in units of $10^{51}$ ergs; we take $E_{51} = 1$, the typical energy of a supernova explosion. The steady state cosmic-ray proton spectrum is given by \begin{equation} N(E) = \frac{(p-2)}{E_{\text{min}}^{-p+2}} ~ \frac{\eta \nu_{\text{SN}} E_{51}}{V} E^{-p} ~ \tau(E), \end{equation} where $E_{\text{min}}$ is the minimum cosmic ray energy, here taken to be $E_{\text{min}} = 0.1$ GeV, and $p$ is the spectral index.
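In the limit $E_{\text{max}} \to \infty$, the prefactor in the expression for $N(E)$ is simply the normalization of $Q(E)$ implied by the energy-injection constraint above. A minimal numerical check (arbitrary units, with a large but finite $E_{\text{max}}$ standing in for infinity):

```python
import math

# Check that Q(E) = Q0 * E^{-p} with Q0 = (p - 2) * E_min^{p-2} * rate
# satisfies  int_{E_min}^{E_max} Q(E) E dE ~= rate  for large E_max,
# where "rate" stands for eta * nu_SN * E_51 / V.
p, E_min, E_max = 2.3, 0.1, 1.0e8   # energies in GeV
rate = 1.0                          # arbitrary units
Q0 = (p - 2.0) * E_min**(p - 2.0) * rate

# Trapezoid rule in log-energy (one convenient choice of quadrature).
n = 100000
lo, hi = math.log(E_min), math.log(E_max)
step = (hi - lo) / n
total = 0.0
for k in range(n + 1):
    E = math.exp(lo + k * step)
    w = 0.5 if k in (0, n) else 1.0
    total += w * Q0 * E**(2.0 - p) * step   # Q(E) * E * dE with dE = E dlogE
assert abs(total - rate) < 0.01 * rate
```

The residual discrepancy is the truncated tail $\propto (E_{\text{max}}/E_{\text{min}})^{2-p}$, which is well below a percent for $p = 2.3$ and the energy range used here.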
The cosmic ray lifetime is determined by the combined radiative and collisional energy loss and advection (energy-independent) timescales \begin{equation}\label{tau} \tau(E)^{-1} \equiv \tau_{\text{adv}}^{-1} + \tau_{\text{loss}}^{-1} = \left( \frac{H}{v_{\text{adv}}} \right)^{-1} + \left( - \frac{E}{dE/dt} \right)^{-1}, \end{equation} where $H$ is the scale height of the starburst region and $v_{\text{adv}}$ is the speed of the particles in the wind from the starburst region \citep[c.f.][]{Torres04}. Energy losses include ionization, Coulomb losses, and pion production\footnote{Proton-proton collisions produce a variety of secondary mesons including pions and kaons. However, in terms of secondary electron/positron and $\gamma$-ray production, pions dominate over other mesons produced in these interactions \citep{Dermer09}. As such, we only consider pion production and decay in our calculations.} for cosmic-ray protons and ionization, bremsstrahlung, inverse Compton emission, and synchrotron emission for cosmic-ray electrons (see YEGZ for further details). In our models, we assume a three-phase interstellar medium (ISM) composed of a diffuse, hot gas that fills the majority of the volume with clumps of warm, ionized gas and dense molecular gas. Interactions between the cosmic-ray protons and the molecular gas result in the production of a variety of secondary pions which quickly decay. Neutral pion decay into $\gamma$-rays is the main hadronic channel for $\gamma$-ray production, while charged pions decay into secondary electrons and positrons. Note that while pions can also be produced in proton-photon interactions in the presence of sufficiently intense radiation fields \citep{Schlick02}, the pion production rate from this process in the CMZ is negligible compared to that due to proton-proton interactions. In addition to hadronic $\gamma$-ray processes, we include the leptonic production mechanisms of bremsstrahlung and inverse Compton.
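The harmonic combination of timescales in Equation~(\ref{tau}) implies the familiar spectral steepening: where advection dominates, $N(E) \propto E^{-p}$, and where losses with $\tau_{\text{loss}} \propto 1/E$ dominate, $N(E) \propto E^{-p-1}$. A schematic illustration (the loss time below is a toy placeholder mimicking synchrotron/inverse-Compton scaling, not the full YEGZ loss rates; $H$ follows Table~1 and the wind speed is a representative value):

```python
import math

PC_IN_CM = 3.086e18
H = 50.0 * PC_IN_CM          # CMZ disk scale height (Table 1), cm
v_adv = 700.0 * 1.0e5        # representative wind speed, cm/s
tau_adv = H / v_adv          # energy-independent advective escape time, s

def tau(E_gev, tau0=1.0e15):
    # Toy loss time tau_loss = tau0 / E (synchrotron/IC-like scaling);
    # tau0 is an illustrative constant, not a fitted value.
    tau_loss = tau0 / E_gev
    return 1.0 / (1.0 / tau_adv + 1.0 / tau_loss)

p = 2.3
def N(E_gev):
    # Steady-state spectral shape N(E) ~ E^{-p} * tau(E), normalization dropped.
    return E_gev**(-p) * tau(E_gev)

# Local logarithmic slope: -p in the advection-limited regime (low E),
# -p - 1 in the loss-limited regime (high E).
slope_lo = math.log(N(0.2) / N(0.1)) / math.log(2.0)
slope_hi = math.log(N(2.0e5) / N(1.0e5)) / math.log(2.0)
assert abs(slope_lo + p) < 0.05
assert abs(slope_hi + p + 1.0) < 0.05
```

The crossover energy between the two regimes is set by $\tau_{\text{loss}}(E) = \tau_{\text{adv}}$, i.e., it moves to lower energies for slower winds or stronger losses.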
We combine the primary cosmic-ray electron population with the secondary electron/positron population when calculating the emission from leptonic processes. While $\gamma$-rays from bremsstrahlung and $\pi^{0}$ decay are relatively simple to model, $\gamma$-rays from inverse Compton are more difficult to calculate due to the knowledge required about the stellar and thermal infrared radiation fields with which cosmic-ray electrons and positrons interact. We assume a modified, diluted blackbody radiation spectrum \citep[see][]{YoastHull14}, taking the radiation energy density and temperature from the observed far-infrared flux for the CMZ (see Table 1). \begin{figure*}[t!] \subfigure[Model A, $U_{\text{rad, dust}} = 13$ eV~cm$^{-3}$]{ \includegraphics[width=0.5\linewidth]{Figure1A.eps}} \subfigure[Model B, $U_{\text{rad, dust}} = 8$ eV~cm$^{-3}$]{ \includegraphics[width=0.5\linewidth]{Figure1B.eps}} \caption{Best-fit $\gamma$-ray spectra for YEGZ Models A \& B. Model parameters are set at $p = 2.3$, $B = 350$ $\mu$G with (\textit{left}) $v_{\text{adv}} = 700$ km~s$^{-1}$, $U_{\text{rad, dust}} = 13$ eV~cm$^{-3}$ and (\textit{right}) $v_{\text{adv}} = 2000$ km~s$^{-1}$, $U_{\text{rad, dust}} = 8$ eV~cm$^{-3}$. The solid lines represent the total $\gamma$-ray flux; the dashed lines represent the contribution from neutral pion decay. The dotted lines represent the contribution from bremsstrahlung and the dot-dashed lines represent the contribution from inverse Compton. $\gamma$-ray data include: \cite{YusefZadeh13} (\textit{Fermi} - triangles), \cite{Aharonian06} (HESS - squares). Data with downward arrows represent upper limits for both \textit{Fermi} and HESS data.} \end{figure*} Finally, in calculating the radio spectrum, we include synchrotron emission from the diffuse, hot gas from both primary cosmic-ray electrons and secondary cosmic-ray electrons and positrons from charged pion decay. 
The effects of free-free emission and absorption from the warm, ionized gas are also included \citep[for details see][]{YoastHull14}. While free-free emission is responsible for a significant portion of the radio emission at high frequencies, free-free absorption \citep[$\alpha_{\nu}^{ff} \propto \nu^{-2}$;][]{RL79} can flatten or completely turn down the radio spectrum at low frequencies \citep{Condon92}. In the case of M82, the ionized gas acts as a foreground screen such that the radio emission from the starburst core is completely absorbed at low frequencies \citep{Adebahr13}. Here, we leave the covering fraction ($f_{\text{abs}}$) for the ionized gas as a variable: for a low covering fraction ($f_{\text{abs}} \sim 0.1 - 0.2$), only a small portion of the radio emission is absorbed by free electrons and the spectrum flattens \citep{YoastHull14}, whereas for a large covering fraction ($f_{\text{abs}} \sim 1.0$), the radio emission turns over completely at low frequencies (YEGZ). \subsection{Properties of the Galactic Center} In this paper, we consider only the region of the Galactic Center known as the central molecular zone (CMZ), spanning the inner $\sim$500 pc \citep{Launhardt02}. This region is primarily characterized by its 180-pc radius molecular ring and a highly asymmetrical distribution of interstellar dust and gas \citep{Launhardt02, Jones13}. Measurements of the total mass of the molecular gas range from $\sim10^{7}$ to $10^{8}$ M$_{\odot}$, with the majority of the molecular gas being contained in compact clouds occupying only a small percentage of the total volume \citep{Ferriere07}. Here, we adopt a gas mass of $(3 - 5) \times 10^{7}$ M$_{\odot}$, similar to \cite{Ferriere07} (see Table 1), and assume the cosmic rays sample the mean density \citep{Boettcher13}.
Extensive infrared observations show that while the CMZ contains both cold (21 K) and warm (49 K) interstellar dust, the warm dust makes up only a small fraction of the dust by mass \citep{Launhardt02}. Thus, when modeling the interstellar radiation field, we use a modified blackbody spectrum with a dust temperature of 21 K. \begin{figure*}[t!] \subfigure[Model A]{ \includegraphics[width=0.5\linewidth]{Figure2A.eps}} \subfigure[Model B]{ \includegraphics[width=0.5\linewidth]{Figure2B.eps}} \subfigure[Model A, Soft Electron Spectrum]{ \includegraphics[width=0.5\linewidth]{Figure2C.eps}} \subfigure[Model A, Soft Proton Spectrum]{ \includegraphics[width=0.5\linewidth]{Figure2D.eps}} \caption{Contour plots showing $\chi^{2}$ variations for fits to the $\gamma$-ray spectrum for the range of magnetic field strength ($B$) and advection (wind) speed ($v_{adv}$) with parameters for Models A (top left \& bottom) and B (top right) listed in Table 1. For the top plots, the total number of degrees of freedom is 9 and it is 15 for the bottom plots. The top plots show the shape of parameter space for a single cosmic ray spectrum while the bottom plots include an extra soft spectrum of electrons with $p = 2.7$ (left) and protons with $p = 3.1$ (right).} \end{figure*} Other characteristic properties of the Galactic Center include its supermassive black hole \citep[M$_{\text{SMBH}} = 4.4 \times 10^{6}$ M$_{\odot}$;][]{Genzel10} which is currently inactive, as evidence for a radiating accretion disk is lacking \citep{Mezger96}. However, while structures such as the Fermi bubbles may be indicators of past activity \citep{Guo12, Yang12}, the majority of the emission seen from the inner $\sim$30 pc is due to the dense complex of star-formation. Based on measurements of the far-infrared luminosity and ionization rates for the CMZ, we adopt star-formation rates in the range of $0.01 - 0.025$ M$_{\odot}$ yr$^{-1}$ \citep{Longmore13}. 
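The "derived" radiation energy densities in Table~1 can be reproduced, at least approximately, by treating the CMZ as a thin disk of radius $R$ radiating a luminosity $L$ through its two faces, so that $U \approx L / (2 \pi R^2 c)$; this geometric prescription is our reading of how the tabulated values follow from the inputs, not necessarily the exact calculation used:

```python
import math

# Energy density U ~ L / (2 pi R^2 c) for a thin disk radiating through
# both faces (an assumed geometry; it reproduces Table 1 to a few percent).
L_SUN = 3.828e33          # erg/s
PC = 3.086e18             # cm
C_LIGHT = 2.998e10        # cm/s
ERG_TO_EV = 1.0 / 1.602e-12

def u_rad_ev(L_lsun, R_pc):
    L = L_lsun * L_SUN
    R = R_pc * PC
    return L / (2.0 * math.pi * R**2 * C_LIGHT) * ERG_TO_EV

u_fir_A = u_rad_ev(4.0e8, 200.0)      # Table 1, Model A: 13 eV/cm^3
u_fir_B = u_rad_ev(4.0e8, 250.0)      # Table 1, Model B:  8 eV/cm^3
u_star_A = u_rad_ev(2.5e9, 200.0)     # Table 1, Model A: 82 eV/cm^3
assert abs(u_fir_A - 13.0) < 1.0
assert abs(u_fir_B - 8.0) < 1.0
assert abs(u_star_A - 82.0) < 3.0
```

The stellar-to-FIR ratio of energy densities simply tracks the luminosity ratio ($2.5 \times 10^{9} / 4 \times 10^{8} = 6.25$), consistent with the tabulated $82/13$ and $53/8$.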
Radio observations of the CMZ also reveal a highly asymmetrical structure with a mixture of thermal and non-thermal sources \citep{Law08}. Several non-thermal filamentary structures are found throughout the region and likely arise from nearby star clusters \citep{Law08}. Additionally, these filaments are tracers of the organized magnetic field perpendicular to the Galactic disk \citep{YusefZadeh13}. Estimates of the magnetic field strength range from $\sim$10 $\mu$G \citep{LaRosa05, YusefZadeh13} to as high as $\sim$1 mG \citep{YusefZadeh84, Morris96}. We consider magnetic field strengths of a few hundred $\mu$G as in nearby starburst environments \citep[e.g.][]{Crocker11, Thompson07}. $\gamma$-ray observations at both GeV and TeV energies exist for the CMZ. Diffuse $\gamma$-ray observations of the CMZ at TeV energies have been shown to be coincident with a large complex of molecular clouds by the HESS collaboration \citep{Aharonian06}. It is likely that these TeV $\gamma$-rays were produced in interactions between cosmic rays and the interstellar gas; however, it is unclear exactly what is responsible for the diffuse GeV energy emission \citep[see][and references therein]{Gordon13}. 
\begin{center} \begin{deluxetable*}{lccccccccccc} \tablecaption{Best-Fit Parameters} \tablewidth{0pt} \tablehead{ \colhead{Data} \vspace{-0.15cm} & \colhead{Extra} & & & & \colhead{$B$} & \colhead{$v_{\text{adv}}$} & \colhead{$n_{\text{ion}}$} & & \colhead{\# of} & & \colhead{$E_{CR} \times \nu / \nu_{SN}$}\\ \vspace{-0.15cm} & & \colhead{Model} & \colhead{$\chi^{2}$} & \colhead{$p$} & & & & \colhead{$f_{\text{abs}}$} & & \colhead{d.o.f.} & \\ \colhead{Set} & \colhead{Component} & \colhead{} & \colhead{} & \colhead{} & \colhead{($\mu$G)} & \colhead{(km~s$^{-1}$)} & \colhead{(cm$^{-3}$)} & \colhead{} & \colhead{Data Points} & \colhead{} & \colhead{($10^{50}$ ergs)} } \startdata TeV $\gamma$-Rays & -- & A & 5.3 & 2.3 & 350 & 700 & -- & -- & 9 & 7 & 1.02\\ TeV $\gamma$-Rays & -- & B & 52.9 & 2.3 & 350 & 2000 & -- & -- & 9 & 7 & 1.02\\ \\ All $\gamma$-Rays & -- & A & 19.2 & 2.3 & 300 & 700 & -- & -- & 15 & 13 & 1.02\\ All $\gamma$-Rays & -- & B & 64.4 & 2.3 & 350 & 2000 & -- & -- & 15 & 13 & 1.02\\ \\ All $\gamma$-Rays & Electrons & A & 9.2 & 2.7 & 250 & 900 & -- & -- & 15 & 13 & 1.84\\ All $\gamma$-Rays & Electrons & B & 61.5 & 2.9 & 350 & 2000 & -- & -- & 15 & 13 & 3.96\\ \\ All $\gamma$-Rays & Protons & A & 9.2 & 3.1 & 350 & 700 & -- & -- & 15 & 13 & 174\\ All $\gamma$-Rays & Protons & B & 61.0 & 3.3 & 350 & 2000 & -- & -- & 15 & 13 & 254\\ \\ Radio & -- & A & 169 & 2.3 & 200 & 100 & 100 & 1.0 & 4 & 0 & 1.02\\ Radio & -- & B & 70.7 & 2.3 & 250 & 500 & 100 & 1.0 & 4 & 0 & 1.02\\ \\ Radio & Electrons & A & 73.4 & 2.7 & 100 & 300 & 75 & 1.0 & 4 & 0 & 1.38\\ Radio & Electrons & B & 60.0 & 2.3 & 100 & 300 & 75 & 1.0 & 4 & 0 & 1.05\\ \\ Radio \& $\gamma$-rays\tablenotemark{a} & -- & A & 342 & 2.3 & 350 & 300 & 100 & 1.0 & 19 & 15 & 1.02\\ Radio \& $\gamma$-rays\tablenotemark{a} & -- & B & 396 & 2.3 & 350 & 1400 & 100 & 1.0 & 19 & 15 & 1.02\\ \\ Radio \& $\gamma$-rays\tablenotemark{a} & Electrons & A & 572 & 2.7 & 100 & 700 & 75 & 1.0 & 19 & 15 & 1.48\\ Radio \& 
$\gamma$-rays\tablenotemark{a} & Electrons & B & 2980 & 2.7 & 150 & 1900 & 50 & 1.0 & 19 & 15 & 1.40\\ \\ \enddata \tablenotetext{a}{While we list the combined solutions for the $\gamma$-ray and radio spectra, the results are heavily weighted by the radio spectrum and the corresponding $\gamma$-ray $\chi^{2}$ values are $\gtrsim$200. Thus, there is no optimal solution that agrees well with both data sets.} \tablecomments{The final column of the table is the energy input into cosmic rays (per supernovae). In the original starburst model, $10^{50}$ erg is put into cosmic-ray protons and $2 \times 10^{48}$ erg goes into cosmic-ray electrons. Thus, for the original models, the last column reads 1.02. For the rows with an additional soft spectrum, the additional energy is the amount required to match the soft spectrum with the observed GeV energy data.} \end{deluxetable*} \end{center} \section{Results} \subsection{TeV $\gamma$-Ray Spectrum} Previously, we used $\chi^{2}$ tests to optimize the input values for magnetic field strength ($B$), wind speed ($v_{\text{adv}}$), ionized gas density ($n_{\text{ion}}$), and absorption fraction ($f_{\text{abs}}$). While magnetic field strength and wind speed affect both the $\gamma$-ray and radio spectra, ionized gas density and absorption fraction primarily affect the radio spectrum and have little to no effect on the predicted $\gamma$-ray spectrum. We also explore how uncertainties in observational properties of the CMZ (supernova rate, CMZ radius, and molecular gas mass) affect minimization. We test two scenarios for a minimum (Model A) and a maximum (Model B) set of parameters (see Table 1). For Model A, we find that our model accurately predicts the TeV $\gamma$-ray emission but underestimates the majority of the observed GeV energy data (see Figure 1). 
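The $\chi^{2}$ optimization over magnetic field strength and wind speed described above amounts to a grid search; a minimal sketch (the grid bounds and the model function here are placeholders, not the YEGZ code):

```python
import numpy as np

def chi2(model_flux, data_flux, sigma):
    # Standard chi-square statistic for one (B, v_adv) combination.
    return float(np.sum(((model_flux - data_flux) / sigma) ** 2))

def fit_grid(predict, data_flux, sigma,
             B_grid=range(50, 501, 50),       # field strength, microgauss
             v_grid=range(100, 2001, 100)):   # wind speed, km/s
    """predict(B, v) -> model flux array; return the best (B, v, chi2)."""
    best = (None, None, float("inf"))
    for B in B_grid:
        for v in v_grid:
            c = chi2(predict(B, v), data_flux, sigma)
            if c < best[2]:
                best = (B, v, c)
    return best
```

Degenerate valleys in the $(B, v_{\text{adv}})$ plane, like those seen in Figure 2, appear in such a search as bands of near-equal $\chi^{2}$ along which the two parameters trade off.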
At TeV energies, most of the modeled emission is due to $\pi^{0}$ decay with a rather small contribution from inverse Compton while bremsstrahlung makes a significant contribution at GeV energies only. A significant wind with speeds of $\sim$600 -- 900 km~s$^{-1}$ is required to avoid overestimating the TeV spectrum. Models within one sigma of the best-fit model have magnetic field strengths ranging from 150 to 350 $\mu$G. While one might expect the $\gamma$-ray spectrum to be independent of magnetic field strength, inverse Compton and synchrotron are competitive processes. As magnetic field strength increases, electrons emit more synchrotron radiation, leaving less energy available for inverse Compton. Thus, to achieve the same goodness of fit, the wind speed must decrease with increasing magnetic field strength such that the electron spectrum remains the same (see Figure 2). This is consistent with our previous results for the $\gamma$-rays in other galaxies and is opposite to the relationship found for the radio spectrum \citep{YoastHull14}. Additionally, while the observational values assumed for Model A easily lead to a best-fit model, the $\chi^{2}$ results for Model B never minimize. Thus, Model B is not a suitable model (see Figure 2 \& Table 2). \begin{figure*}[t!] \subfigure[Model A, Soft Electron Spectrum]{ \includegraphics[width=0.5\linewidth]{Figure3A.eps}} \subfigure[Model A, Soft Proton Spectrum]{ \includegraphics[width=0.5\linewidth]{Figure3B.eps}} \caption{Best-fit $\gamma$-ray spectra including an additional soft component. Model parameters are set at (\textit{left}) $p = 2.7$ for an additional soft electron spectrum with $B = 250$ $\mu$G (see Table 2) and (\textit{right}) $p = 3.1$ for an additional soft proton spectrum.
The soft proton spectrum includes inverse Compton and bremsstrahlung from secondary electrons/positrons (as both neutral and charged pions are produced in proton-proton collisions) and the soft electron spectrum includes emission from both bremsstrahlung and inverse Compton. Dashed lines show emission from the original starburst model while dot-dashed lines, labeled as ``2,'' show emission from the additional soft component. $\gamma$-ray data include: \cite{YusefZadeh13} (\textit{Fermi} - triangles), \cite{Aharonian06} (HESS - squares). Data with downward arrows represent upper limits for both \textit{Fermi} and HESS data.} \end{figure*} As the average densities for the two models are essentially the same, the difference in results must be attributed to the choice of supernova rate or acceleration efficiency, as only the product of the two can be constrained. The changes caused by the choice of supernova rate are seen, in particular, in the contribution of bremsstrahlung and inverse Compton to the total $\gamma$-ray flux in Figure 1. The $\gamma$-ray flux from leptons increases by a factor of $\sim$3 from Model A to Model B while the flux from hadronic interactions is essentially the same. To ensure that the flux from neutral pion decay continues to agree with the TeV energy data, the best-fit for Model B requires an increase in wind speed to 2000 km~s$^{-1}$ from 700 km~s$^{-1}$ in Model A. However, because the lifetimes of the cosmic-ray electrons are much shorter than those of the cosmic-ray protons, due to larger energy losses, the electron spectrum remains virtually unchanged by the increase in wind speed. Thus, the increase in the flux from inverse Compton is not because of the difference in radiation field energy density or lifetime but because of the higher supernova rate. In testing Models A \& B, we vary supernova rate, CMZ radius, and molecular gas mass.
While the far-infrared luminosity is essentially the same for both models, the change in volume means a change in radiation field energy density. Observations of dust in the CMZ give a far-infrared luminosity of $L = 4 \times 10^{8}$ $L_{\odot}$ with a corresponding dust temperature of $\sim$21 K \citep{Launhardt02}. Based on this luminosity and depending on assumptions about volume, the radiation field energy density ranges from $\sim$8 -- 13 eV~cm$^{-3}$ between Models A \& B. In a typical starburst, the radiation field due to far-infrared emission from dust would be the dominant field. However, in the Galactic Center, the stellar radiation field is actually larger than that from dust. Observations show that infrared emission due to stars has a luminosity of $L = 2.5 \times 10^{9}$ $L_{\odot}$ with a temperature of $\sim$4400 K \citep{Launhardt02}. This corresponds to a radiation field energy density of $\sim$53 -- 82 eV~cm$^{-3}$. Other models for the Galactic Center include \cite{Lacki13} and \cite{Crocker11}, each with different approaches to the radiation field. \cite{Lacki13} use the far-infrared luminosity in a smaller volume ($R \sim 100$ pc) with a background radiation field which gives a total energy density of $\sim$60 eV~cm$^{-3}$. \cite{Crocker11} assume an energy density of 90 eV~cm$^{-3}$ by taking the infrared luminosity in both dust and stars \citep[$L_{\text{TIR}} = 3.6 \times 10^{9}$ L$_{\odot}$;][]{Launhardt02}. Though we treat the stellar and dust components separately in our calculation, it is unclear if \cite{Crocker11} do the same. At lower energies, this distinction makes little difference to the energy losses. However, the energy at which Klein-Nishina losses dominate such that inverse Compton is no longer a competitive energy loss mechanism depends on the radiation field temperature and is thus different for each component. The $\gamma$-ray emissivity for inverse Compton scattering also depends on temperature, not just energy density.
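As a rough consistency check on the energy densities quoted above, one can treat the CMZ as a sphere of radius $R$ and estimate $U \approx L / (4 \pi R^{2} c)$ (the exact geometric factor depends on the assumed source distribution, so this is only a sketch):

```python
import math

L_SUN = 3.828e33        # erg/s
PC = 3.086e18           # cm
C = 2.998e10            # cm/s
ERG_PER_EV = 1.602e-12

def u_rad_ev(L_solar, R_pc):
    """Mean radiation energy density (eV/cm^3) from U ~ L / (4 pi R^2 c)."""
    L = L_solar * L_SUN
    R = R_pc * PC
    return L / (4.0 * math.pi * R**2 * C) / ERG_PER_EV

u_dust = u_rad_ev(4e8, 180)     # ~8 eV/cm^3 (low end of the 8-13 range)
u_star = u_rad_ev(2.5e9, 180)   # ~51 eV/cm^3 (near the 53-82 range)
```

Adopting a smaller radius, as in the Model B volume, pushes both estimates toward the upper ends of the quoted ranges.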
Thus, while interactions with infrared emission from dust result in a small contribution to the total $\gamma$-ray emissivity, scattering via starlight is completely negligible as the resulting inverse Compton emission is several orders of magnitude below that from the thermal infrared emission from dust. \begin{figure*}[t!] \subfigure[Model B, Best-Fit Original Radio Spectrum]{ \includegraphics[width=0.5\linewidth]{Figure4A.eps}} \subfigure[Model A, Original Radio Spectrum ($\gamma$-Ray Parameters)]{ \includegraphics[width=0.5\linewidth]{Figure4B.eps}} \subfigure[Model B, Best-Fit for Radio Spectrum with Extra Comp.]{ \includegraphics[width=0.5\linewidth]{Figure4C.eps}} \subfigure[Model A, Radio Spectrum with Extra Comp. ($\gamma$-Ray Parameters)]{ \includegraphics[width=0.5\linewidth]{Figure4D.eps}} \caption{On the left are the best-fit radio spectra with (bottom) and without (top) an additional soft electron component. Models on the right are the radio spectra for the best-fit $\gamma$-ray parameters with (bottom) and without (top) an additional soft electron component. Parameters are listed in Table 2 in rows 1 (top right), 5 (bottom right), 10 (top left), and 12 (bottom left). For models at the bottom, dashed lines show emission from the original starburst model while dot-dashed lines show emission from the additional soft component. Radio data come from \cite{YusefZadeh13}.} \end{figure*} \subsection{GeV $\gamma$-Ray Spectrum} The most notable feature of Figure 1 is that the starburst model matches up well with the $\gamma$-ray spectrum observed at TeV energies but is insufficient at GeV energies. Not only does the model seriously underestimate the observed GeV emission, but the spectral index of the model is also significantly flatter than observed. The ``excess'' of the observed Galactic Center $\gamma$-ray emission at GeV energies above typical background models is well-documented \citep[e.g.][]{Abazajian11, Crocker11, YusefZadeh13, Gordon13}.
Though there are many different explanations for this excess, ranging from an unresolved population of millisecond pulsars to dark matter annihilation, we focus only on the characteristics of a cosmic ray population necessary to account for the excess emission. Additionally, we suggest what type of accelerator would be necessary to produce such a population. We add a simple extra population of cosmic rays to our existing model for the $\gamma$-ray emission at TeV energies. For the source function, we require the energy put into cosmic rays for the extra population be large enough that the model matches the observed emission at 1.3 GeV while leaving the average density ($n_{\text{ISM}}$), radiation field energy density ($U_{\text{rad}}$), and acceleration efficiency ($\eta$) unchanged. Again, we use $\chi^{2}$ tests to determine the spectral index ($p$) required for a soft spectrum to match the GeV $\gamma$-ray emission while still optimizing magnetic field strength and wind speed for all cosmic rays. As the model changes with wind speed and magnetic field strength, the required energy input also changes. We test both a soft electron and a soft proton spectrum (including secondary electrons/positrons produced in proton-proton interactions but no additional primary electrons, see Figure 3). Complete results for the $\chi^{2}$ tests can be found in Table 2. \begin{figure*}[t!] \subfigure[Model B, Original Starburst Model]{ \includegraphics[width=0.5\linewidth]{Figure5A.eps}} \subfigure[Model B, Additional Soft Electron Component]{ \includegraphics[width=0.5\linewidth]{Figure5B.eps}} \caption{Contour plots showing $\chi^{2}$ values for the radio spectrum for varying magnetic field strength ($B$) and advection (wind) speed ($v_{adv}$) assuming parameters for Model B as listed in Table 1. 
For fits to the CMZ radio spectrum, the left plot shows the variation in $\chi^{2}$ for a single cosmic ray spectrum while the right plot includes an extra soft spectrum of electrons with $p = 2.3$. For both plots, the total number of degrees of freedom is 0, as there are as many data points as free parameters.} \end{figure*} Results for the optimization of the magnetic field strength and wind speed are similar to previous results for only TeV emission (see Figure 2 \& Table 2), with a similar range of values for models within one sigma of the best-fit, though there is a more limited range for magnetic field strength. The best-fit models with the extra cosmic ray populations have spectral indices of $p = 2.7$ for a soft electron spectrum and $p = 3.1$ for a soft proton spectrum. For a standard supernova accelerator, the typical energy input into cosmic rays is $10^{50}$ erg for protons (assuming a 10\% acceleration efficiency) and $2 \times 10^{48}$ erg for electrons (assuming an electron-to-proton ratio of 0.02) \citep{Blandford78}. The energy input per event is effectively the amplitude of the source function by which we scale our additional population (see Equation (1)). In fitting, we require the soft cosmic-ray spectrum to agree with the observed GeV data. To accomplish this, we find that the soft electron spectrum needs 40 -- 150 times more energy per event than a standard model. Thus, the total energy input into cosmic rays per event ranges from $1.84 \times 10^{50}$ erg to $3.96 \times 10^{50}$ erg. The soft proton spectrum needs $\sim$200 times more energy per event, which is equivalent to a total energy input per event of $\sim 2 \times 10^{52}$ erg. As such, the soft proton spectrum is excluded if supernovae are the source.
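The energy bookkeeping above is simple to verify (baseline per-event energies from \citep{Blandford78} as quoted in the text; the boost factors are the fitted multipliers, rounded here):

```python
# Baseline cosmic-ray energy per supernova (erg).
E_P = 1.0e50   # protons (10% acceleration efficiency)
E_E = 2.0e48   # electrons (electron-to-proton ratio of 0.02)

def total_energy(boost, soft="electrons"):
    """Total CR energy per event when the soft component carries
    `boost` times the standard energy of the given species."""
    extra = boost * (E_E if soft == "electrons" else E_P)
    return E_P + E_E + extra

low = total_energy(40)                   # ~1.8e50 erg (cf. 1.84e50 in Table 2)
high = total_energy(148)                 # ~4.0e50 erg (cf. 3.96e50 in Table 2)
protons = total_energy(200, "protons")   # ~2e52 erg, ruling out supernovae
```

The two-order-of-magnitude gap between the soft electron and soft proton budgets is what excludes the proton scenario for a supernova source.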
While cosmic-ray protons are accelerated in larger numbers than cosmic-ray electrons, the energy necessary for a soft proton component is significantly more than for a soft electron component due to differences in particle rigidity and radiative efficiency. Additionally, if the GeV excess is a true bump, as suggested by \cite{Daylan14}, then $\gamma$-ray emission by neutral pion decay could be a more appropriate match to the spectral shape. However, we rule out a soft proton spectrum due to energy requirements and focus our efforts on an extra cosmic-ray electron population, as suggested by others \citep[e.g.][]{YusefZadeh13}. \subsection{Radio Spectrum} In addition to modeling the $\gamma$-ray spectrum for the Galactic Center, we also model the radio spectrum as it is the main diagnostic we have for the cosmic-ray electron population. However, we have as many data points as we have variables for the radio spectrum. Thus, while the results for the radio spectrum will only give us a general idea of the optimal parameter space and not definitively constrain magnetic field strength or wind speed, they will provide a vital check on our additional soft electron spectrum. We model the radio spectrum both with and without the additional soft electron population but discard the soft proton population. Along with varying magnetic field strength ($B$) and wind speed ($v_{\text{adv}}$), we also vary ionized gas density ($n_{\text{ion}}$), from 25 to 100 cm$^{-3}$, and absorption fraction ($f_{\text{abs}}$), from 0.1 to 1.0, as they directly affect the amount of free-free absorption and emission. The limitations of our single-zone model can most easily be seen in the radio spectrum. Though the general level of the observed flux can be reproduced with our models, the specific shape of the spectrum cannot, as seen in Figure 4.
Results for the original starburst model show that, in contrast to the $\gamma$-ray spectrum, the radio spectrum favors Model B and a lower magnetic field strength of $\sim$250 $\mu$G. Again, we find a degeneracy between magnetic field strength and wind speed as seen in Figure 5. To achieve a similarly good fit, the wind speed must increase with the magnetic field strength so as not to overestimate the radio spectrum. With the addition of an extra soft electron spectrum, the goodness of the fit for Model A improves significantly while Model B remains largely the same (see Table 2). However, even with the introduction of the soft component, Model B is still preferred to Model A. Comparing the parameters for the best-fits for the radio spectrum and the $\gamma$-ray spectrum, we find very different wind speeds and magnetic field strengths for each. As we have as many parameters as data points for the radio spectrum, the results are primarily used as a check on the $\gamma$-ray results. In the case of the original starburst model, the minimized parameters for the $\gamma$-ray spectrum result in an underestimation of the radio spectrum (see Figure 4). Conversely, for the extra soft component, not only does the preferred spectral index for the $\gamma$-rays not match the radio data, but the radio spectrum is overestimated for the $\gamma$-ray parameters. \section{Discussion \& Conclusions} Applying our YEGZ model for starburst galaxies to the CMZ of the Galactic Center, we find that the model agrees well with the observed TeV energy $\gamma$-ray emission but that the model underpredicts the GeV energy emission. While the model also agrees with the radio spectrum moderately well, our $\chi^{2}$ minimization shows that the $\gamma$-ray and radio spectra favor two different supernova rates. While we can only constrain the product of supernova rate and acceleration efficiency, the ratio of primary protons to electrons is set by rigidity.
Thus, as the $\gamma$-rays at TeV energies are mainly produced in neutral pion decay and the radio spectrum is mostly from primary electrons with a relatively small contribution from secondaries, it is unlikely that the difference in requirements for the radio and $\gamma$-ray spectra is due to differences in acceleration efficiency. Additionally, the results for the radio spectrum show that our best-fit models have a magnetic field strength of $\sim$100 -- 250 $\mu$G. This is consistent with estimates by \cite{Crocker10}. Of course, as we have as many data points as constraints, this is only a rough estimate of the field. However, $\chi^{2}$ results show that the fits worsen as magnetic field strength decreases, and so it is unlikely that the magnetic field strength is on the order of $\sim$10 $\mu$G as suggested by \citep{YusefZadeh13}. Additional data and a more sophisticated model, however, are necessary to truly determine the magnetic field strength in the Galactic Center. Finally, inclusion of an additional soft cosmic ray spectrum permits a fit to the GeV energy data. The extra energy required rules out a soft proton spectrum in favor of a soft electron spectrum. General possibilities for the source of any population of cosmic-ray electrons include SNRs, pulsar wind nebulae (PWNe), and the massive central black hole. With regard to acceleration of a soft electron spectrum by supernovae, the amount of additional energy needed is achievable within the uncertainty for the supernova rate. However, inclusion of this extra component results in a gross overestimation of the radio spectrum for parameters corresponding to the best-fit $\gamma$-ray models. As such, the best-fit models for the radio spectrum are in very different areas of parameter space for both supernova rate and wind speed from the best-fits to the $\gamma$-ray spectrum (see Figures 2c and 5b).
This discrepancy between our YEGZ single-zone models suggests that the excess in the observed GeV data is unlikely to be diffuse in nature. \acknowledgements This work was supported in part by NSF AST-0907837, NSF PHY-0821899 (to the Center for Magnetic Self-Organization in Laboratory and Astrophysical Plasmas), and NSF PHY-0969061 (to the IceCube Collaboration). We thank the referee for their comments and help in improving our manuscript. We acknowledge discussions with Roland Crocker and thank Francis Halzen for his help and support. We also thank the organizers of IAU Symposium 303 on the Galactic Center for allowing this topic to be discussed.
\section{Introduction} Regression learning is increasingly used in mission-critical systems: in medicine for the development of pharmaceuticals \cite{ekins2019exploiting, Kuchler2019}, in the financial sector for predictive analysis, such as managing hedge funds \cite{Wigglesworth_2019,Porzecanski_2019} and cash forecasting \cite{Handelsblatt_2018}, as well as for predictive maintenance \cite{PdM} and quality control \cite{Juarez_2019}. As we rely more and more on these systems, researchers find that they are vulnerable to malicious attacks. Two strains of attack can be distinguished: attacks at test time (evasion) and attacks at training time (poisoning) \cite{8406613}. In this work, we focus on the latter. Poisoning attacks introduce a small fraction of 'poisoned samples' into the training process, which maximally 'confuses' the learner and either causes a denial of service (i.e., renders the model useless with respect to its original intent) or introduces a 'backdoor', which gives the attacker control over the model at test time. These attacks (and corresponding defenses) have been studied by the scientific community in detail for classification learning \cite{Chen2018, Xiao2015, Biggio2012, Shafahi2018, Steinhardt}. However, there is almost no research on adversarial poisoning attacks (and corresponding defenses) for regression learning. We find only a single contribution \cite{Jagielski2018} for regression learning, even though regression is used in many mission-critical systems as described above. As an example, consider the following medical scenario. The blood thinner Warfarin has a very small therapeutic window; too high a dosage leads to bleeding, while too low a dosage leads to clotting \cite{WikiWarfarin}.
Machine learning can help in estimating the correct dosage, and pharmaceutical companies provide appropriate datasets \cite{PharmGKB_2019} to which regression learning has been successfully applied \cite{Sharabiani_Bress_Douzali_Darabi_2015, Ma_Wang_Gao_Wang_Khalighi_2018}. However, such an estimation is susceptible to data poisoning attacks: A malicious entity may introduce a very small percentage of 'poisoned samples' into the dataset. Motives for such an attack can be manifold: Personal motives (a malicious doctor, underpaid caregiver or simply a psychopathic nurse \cite{NYT2019}), financial motives (one company damaging another company's reputation, or an individual betting on the crash of some company's stock value, similar to \cite{Rogers_2018}), or even political or terrorist motives. Such poisoning attacks are not just theoretical threats: Since even small data poisoning can have a significant effect, such an attack is practicable even by an individual. Related work has shown that data poisoning is feasible and has already been observed in real-life scenarios \cite{Shafahi2018, Chen2018}. To address this issue, we \begin{itemize}[topsep=0pt] \itemsep-0.2em \item show the harmfulness of data poisoning in regression by means of studying the problem of Warfarin dose prediction, \item present a new black-box attack which exceeds the previous state of the art, and for the first time evaluate poisoning attacks on nonlinear regression learners, \item present an improvement to previously suggested defenses which consistently outperforms the baseline, \item thoroughly evaluate our attack and defense on 26 datasets and state-of-the-art regression learners, i.e. Neural Networks, Kernel SVR, and Kernel Regression, \item and publish all source code and experiments to enable full reproducibility.
\end{itemize} \section{Case Study: Warfarin Dose Estimation} \label{s:case_study_warfarin} In order to demonstrate the effect of even small fractions of poison samples, we examine the medical use case of Warfarin dose prediction in this section. The International Warfarin Pharmacogenetics Consortium \cite{PharmGKB_2019}, a group of pharmacogenetic research centers, has created the IWPC dataset (\emph{Warfarin dataset}). It is the joint effort of 59 contributors, resulting in an average contribution of $1.7\%$ of the data per member. Based on this dataset, models have been developed which predict the therapeutic dose of Warfarin for a patient~\cite{Sharabiani_Bress_Douzali_Darabi_2015,Ma_Wang_Gao_Wang_Khalighi_2018}. We use a new black-box poisoning attack (to be detailed in Section~\ref{ss:flip}) and add $2\%$ poison data to the Warfarin dataset, which is about as much as the average IWPC contributor provided. Table \ref{tab:warfarin_mae} shows the Mean Absolute Error (MAE) of different models after training on this dataset. In the absence of poison samples, the MAE of models like Lasso, Elastic Net and Ridge is around $8.50$, which is comparable to the state of the art \cite{Ma_Wang_Gao_Wang_Khalighi_2018}. When adding just $2\%$ of data poisoning, the median error increases to 11.07 (a 29 percent increase). This has a tangible effect on the patients: The rate of acceptable doses\footnote{Following \cite{Ma_Wang_Gao_Wang_Khalighi_2018}, we measure the percentage of patients whose predicted dose of Warfarin is within $20\%$ of the true therapeutic dose. This is referred to as an \emph{acceptable dose}.} decreases by $21\%$. \begin{table}[h] \centering \input{warfarin_results.tex} \caption{Mean absolute error (MAE) of different regression models when poisoning the \emph{Warfarin} dataset. The first column shows the MAE when no data poisoning is present.
The second column shows the MAE when $2\%$ poison samples are introduced, with the relative change indicated by the third column. The poisoning strongly affects the number of patients who receive an \emph{acceptable dosage} of Warfarin (fourth column).} \label{tab:warfarin_mae} \end{table} \section{Related Work} In this section, we give a short overview of the existing literature on data poisoning. \subsection{Poisoning Attacks in Classification} Early work on data poisoning attacks against classifiers includes \cite{Biggio2012,Xiao2015}, which use the Karush-Kuhn-Tucker conditions to find optimal poisoning samples against linear models. \cite{Biggio2012} first develops a poisoning attack against SVMs. \cite{Xiao2015} considers the security of feature selection against poisoning attacks and adapts the approach for LASSO, Ridge Regression and Elastic Net. In both scenarios the attacker attempts to increase the test error and, thus, decrease the overall performance of the classifier. \cite{Munoz-Gonzalez:2017:TPD:3128572.3140451} are the first to extend data poisoning to the multi-class scenario, which allows for targeted attacks. Instead of the Karush-Kuhn-Tucker conditions, they use back-gradient optimization to generate the first poison samples for neural networks in an end-to-end fashion without the requirement of a surrogate model. While these first results indicate a higher resilience of deep neural networks against availability attacks, \cite{Munoz-Gonzalez:2017:TPD:3128572.3140451} also show the effectiveness of this approach against simpler models like MLPs with a single hidden layer. \cite{Shafahi2018} build upon the work of \cite{Munoz-Gonzalez:2017:TPD:3128572.3140451} and demonstrate reliable clean-label attacks, in which the attacker can control the input data $\mathbf{X}$, but not the corresponding labels $\mathbf{y}$. The attacker's objective is to achieve misclassification of a certain instance as another class at test time.
For a transfer learning scenario, they show that a single poison sample is capable of successfully poisoning a classifier. For end-to-end learning settings they develop a watermarking approach to poisoning. \subsection{Poisoning Attacks in Regression} Data poisoning has so far been examined almost exclusively for classification learning. For regression learning, there is work only by Jagielski et al. \cite{Jagielski2018}. They build upon work by Xiao et al. \cite{Xiao2015}, who introduce a gradient-based optimisation attack for linear classifiers such as Lasso, Ridge Regression and Elastic Net for feature selection. Jagielski et al. \cite{Jagielski2018} use the same approach for the same models, but interpret the model's decision surface as a predictor for the continuous target variable, yielding a poisoning attack for linear regression. Additionally, they introduce a non-gradient-based attack as well as a defense called \emph{Trim}, and evaluate both on three datasets. Their approach in evaluating the defense is, however, not applicable in practice, since they use an oracle to determine the defense's hyperparameters. More specifically, they assume they know the fraction $\epsilon$ of poisoned samples in the dataset of size $n$, which is generally unknown. Nonlinear regressors such as Kernel Ridge, Kernel SVM and Neural Networks have, to the best of our knowledge, not yet been examined in the context of adversarial poisoning. This may be because the attack presented in \cite{Xiao2015} is not applicable to nonlinear learners. \section{Poisoning Attacks in Regression} In this section, we present our threat model, previously suggested attacks and our proposed and improved attack. A thorough evaluation on 26 datasets is given in Section~\ref{s:eval}. \subsection{Threat Model} We consider a realistic attack scenario where the attacker has only limited capabilities, such as, for example, a malicious individual might have.
Specifically, we consider black-box attacks where 1) the attacker knows nothing about the model (not even what kind of regressor is used), 2) the attacker does not have access to the training dataset $(\mathbf{X}, \mathbf{y})$, but only to a smaller substitute dataset $(\mathbf{X}^{sub}, \mathbf{y}^{sub})$, and 3) the attacker is capable of fully controlling the $\epsilon n$ data samples he contributes to the dataset. He is not able to manipulate the rest of the data. As indicated in the introduction and Section \ref{s:case_study_warfarin}, the possibility of introducing small amounts of poison data into the dataset is highly realistic. If the data are crawled and collected automatically, malicious instances just need to be placed where the crawler can find them \cite{Shafahi2018, Perdisci2006}. If data are collected manually, the ability to poison a dataset is proportional to an individual's contribution to the dataset. As detailed in Section \ref{s:case_study_warfarin}, the Warfarin dataset was collected by 59 individuals; thus, an average contribution constitutes about $2\%$ of the dataset. We show that this amount of poisoning is sufficient to effectively poison the dataset (cf. Sections~\ref{s:case_study_warfarin}, \ref{ss:eval_flip}, and \ref{s:warfarin_revisited}). \subsection{Related Poisoning Attacks in Regression}\label{ss:related_work_attacks} \cite{Jagielski2018} present both a white-box and a black-box attack on regression learning. In this section, we present these attacks and their limitations. \subsubsection{Related White Box Attacks} Deriving from \cite{Xiao2015}, a white-box attack on linear regressors is presented in \cite{Jagielski2018}. The attacker's objective is formulated as a bilevel optimization problem as follows: \begin{align} \arg \max_{\mathbf{D_p}} & \hspace{10pt} \mathcal{W}(\mathbf{D'}, \theta_p) \label{e:1} \\ s.t.
& \hspace{10pt} \theta_p \in \arg \min_\theta \mathcal{L}(\mathbf{D_{tr}} \cup \mathbf{D_p}, \theta) \label{e:2} \end{align} Equation~\ref{e:2} is the usual minimization of the model loss $\mathcal{L}$ during the fitting of a model on both the clean training dataset $\mathbf{D_{tr}}$ and the poisoned dataset $\mathbf{D_p}$. This yields an optimal set of weights $\theta_p$. This is called the 'inner optimization'. Equation \ref{e:1} refers to maximizing the attacker's objective $\mathcal{W}$ with respect to some test set $\mathbf{D'}$, using the model's weights as determined by Equation \ref{e:2}. Maximizing Equation \ref{e:1} depends on the solution of Equation \ref{e:2}, which is why this is considered a bilevel optimization problem. This is a hard problem: the attacker has to determine how the points they introduce into the dataset will change the model weights during training. \cite{Xiao2015, Jagielski2018} solve this using the Karush-Kuhn-Tucker (KKT) conditions, which they assume remain satisfied when a given poison sample $\mathbf{x_c}$ is introduced. They then solve a linear system and thus derive the gradients. This approach is not feasible for deep neural networks \cite{Munoz-Gonzalez:2017:TPD:3128572.3140451}, since the time required for solving the linear system is in $O(p^3)$, where $p$ is the number of parameters in the model. Since even small, commonly used pretrained models have a few million parameters \cite{BibEntry2019Sep}, the computation is not feasible. Even with simplifying assumptions or a sufficiently small number of parameters, this approach still requires an exact solution to the optimization problem, which, in general, cannot be obtained. For a more detailed analysis we refer to \cite{Munoz-Gonzalez:2017:TPD:3128572.3140451}. \subsubsection{Related Black-Box Attacks} \cite{Jagielski2018} also present a black-box attack called \emph{StatP}.
This attack samples $\epsilon n$ points from a multivariate Gaussian distribution, where the corresponding mean $\mathbf{\mu}$ and covariance matrix $\mathbf{\Sigma}$ are estimated as the mean and covariance of the true dataset $\mathbf{D_{tr}}$. Then, \emph{StatP} rounds the feature variables to the corners, queries the model, and rounds the target variable to the opposite corner. The corners are defined as the minimum and maximum of the feasibility domain $\gamma$ of each variable. Both features and target are scaled to $[0, 1]$, thus the feasibility domain is a hypercube $[0, 1]^{d+1}$, where $d$ is the number of features. In summary, this attack creates a few isolated clusters of adversarial data, where both features and target take only extreme values of either $\gamma_{min}=0$ or $\gamma_{max}=1$. This attack, however, still requires access to the trained black-box model, which may be unrealistic in a real-world scenario. Additionally, we find that while this attack is successful on linear models, it is unsuccessful when applied to nonlinear models. We show this empirically in Section~\ref{s:eval}, but give a brief explanation here: nonlinear learners (such as Neural Networks, Kernel SVR, and Kernel Regression) are able to accommodate both the poison points and the true data simultaneously. This is because the poison data created by \emph{StatP} do not contradict the true data points, since true data points rarely have features in the corners of the feasibility domain. This insight motivates our proposed \emph{Flip} attack on nonlinear learners, which we present in the next section.
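To make the sampling-and-rounding scheme concrete, \emph{StatP} can be sketched in a few lines of Python (a simplified illustration, not the original implementation; the hypothetical \texttt{query\_model} callable stands for black-box access to the trained regressor):

```python
import numpy as np

def statp_attack(X, eps, query_model, rng=None):
    """Simplified sketch of StatP: sample from a Gaussian fitted to the
    data, round features to corners of [0, 1]^d, and set each target to
    the corner opposite the model's (rounded) prediction."""
    rng = np.random.default_rng(rng)
    n_poison = int(np.ceil(eps * len(X)))
    mu = X.mean(axis=0)
    sigma = np.cov(X, rowvar=False)
    Xp = rng.multivariate_normal(mu, sigma, size=n_poison)
    Xp = np.round(np.clip(Xp, 0.0, 1.0))   # round features to corners
    preds = np.clip(query_model(Xp), 0.0, 1.0)
    yp = 1.0 - np.round(preds)             # opposite-corner target
    return Xp, yp
```

The sketch assumes at least two features, so that the empirical covariance matrix is two-dimensional.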
\subsection{Flip: A Black-Box Attack on Nonlinear Regressors} \label{ss:flip} \begin{algorithm}[t] \caption{Flip attack}\label{alg:1} \begin{algorithmic}[1] \REQUIRE \STATE Substitute data $\mathbf{X}^{sub}, \mathbf{y}^{sub}$ of size $m$ \STATE Number of poison points $\lceil \epsilon n \rceil$ to compute \STATE Feasibility domain $[\gamma_{min}, \gamma_{max}]$ of the target values \FUNCTION{Flip}{} \STATE $\mathbf{\Delta} \xleftarrow{} \emptyset$ \FOR{$i \in [1, ..., m]$} \STATE $\Delta_i \xleftarrow{} \max( y^{sub}_i - \gamma_{min} , \gamma_{max} - y^{sub}_i )$ \STATE $\mathbf{\Delta} \xleftarrow{} \mathbf{\Delta} \cup \Delta_i$ \ENDFOR \STATE $T_\epsilon \xleftarrow{} t \in \mathbb{R}$ s.t. $t$ is the $\lceil \epsilon n \rceil$-th highest value of $\mathbf{\Delta}$ \STATE $I_\epsilon \xleftarrow{} \{ i \in [1, ..., m] \text{ s.t. } \Delta_i \geq T_\epsilon \}$ \STATE $\mathbf{X_p} \xleftarrow{} \emptyset$, \hspace{5pt} $\mathbf{y_p} \xleftarrow{} \emptyset$ \FOR{$i \in I_\epsilon$} \IF{$y_i > \frac{1}{2}(\gamma_{min} + \gamma_{max})$} \STATE $ y_{p, i} \xleftarrow{} \gamma_{min}$ \STATE $\mathbf{y_p} \xleftarrow{} \mathbf{y_p} \cup {y_{p, i}}$ \ELSE \STATE $ y_{p, i} \xleftarrow{} \gamma_{max}$ \STATE $\mathbf{y_p} \xleftarrow{} \mathbf{y_p} \cup {y_{p, i}}$ \ENDIF \STATE $\mathbf{X_p} \xleftarrow{} \mathbf{X_p} \cup {X}^{sub}_i$ \ENDFOR \STATE \textbf{return} $\mathbf{X_p}, \mathbf{y_p}$ \ENDFUNCTION \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:1} presents our proposed black-box attack called \emph{Flip}. This algorithm computes a set of adversarial poisoning points for any degree of poisoning $0 < \epsilon < 1$. The attack is completely independent of the regressor model and only requires a substitute dataset $(\mathbf{X}^{sub}, \mathbf{y}^{sub})$ from the same domain as the training dataset $\mathbf{D_{tr}}$, as well as a feasibility domain $[\gamma_{min}, \gamma_{max}]$ of the target variable.
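Algorithm~\ref{alg:1} can be sketched compactly in Python (a vectorized illustration of the pseudo-code, assuming targets scaled to the feasibility domain):

```python
import numpy as np

def flip_attack(X_sub, y_sub, eps, n, gamma_min=0.0, gamma_max=1.0):
    """Sketch of the Flip attack: select the ceil(eps * n) substitute
    points whose targets can be disturbed the most, keep their features,
    and flip each target to the opposite end of [gamma_min, gamma_max]."""
    k = int(np.ceil(eps * n))
    # Disturbance potential: distance to the farther domain boundary.
    delta = np.maximum(y_sub - gamma_min, gamma_max - y_sub)
    idx = np.argsort(delta)[-k:]            # k highest-potential points
    mid = 0.5 * (gamma_min + gamma_max)
    y_p = np.where(y_sub[idx] > mid, gamma_min, gamma_max)
    return X_sub[idx], y_p
```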
The feasibility domain is necessary because we usually assume that only certain target values are valid. Other values are bound to raise suspicion, such as a room temperature of $-400$ degrees Celsius, or medical doses that are extremely high or low. We now describe our attack. After initializing an empty set $\mathbf{\Delta}$ in line 2, we populate it in the following \emph{for} loop (lines 3-6). For each instance in the substitute dataset, we find the maximum of the distances to the lower and upper ends of the feasibility domain, and save the result to $\mathbf{\Delta}$. Then, in line 7, we find the $\lceil \epsilon n \rceil$-th highest value $t \in \mathbb{R}$ in $\mathbf{\Delta}$. This is used in line 8 to compute the indices of those points with the most potential to disturb. Thus, the rationale of lines 7-8 is to find those points whose target value is closest to either $\gamma_{min}$ or $\gamma_{max}$. These values are the ones which can be maximally disturbed by shifting the target variable to the other side of the feasibility domain. This is implemented in lines 10-19, where we compute the poison set by retaining the feature values and 'flipping' the target value to the other side of the feasibility domain for the candidates specified by $I_\epsilon$. Finally, in line 20, we return the poison data. \section{Data Poisoning Defenses}\label{s:defenses} In this section, we present defenses against adversarial data poisoning in regression. First, we propose a set of requirements that make a defense applicable in practice. Second, we evaluate existing defenses with respect to these requirements. Finally, we present our improvement over the baseline. A quantitative evaluation is given in Section~\ref{s:eval}, while a qualitative evaluation is presented in Section~\ref{s:warfarin_revisited}.
\subsection{Requirements for Data Poisoning Defenses} When creating a dataset such as \cite{PharmGKB_2019}, the defender does not know the degree of data poisoning, if any. Put differently, while $\epsilon$ might be known to the attacker, it is unknown to the defender. It is also entirely possible that no poisoning has happened, i.e. that $\epsilon = 0$. Thus, the quality of a defense should not depend too heavily on a correct guess of $\epsilon$, and the defense should not deteriorate the quality of an unpoisoned dataset. With this requirement in mind, we proceed to present existing defenses and evaluate them against it. \subsection{Related Defenses}\label{ss:related_defs} Since there is little work on poisoning in regression, existing defenses are also few. Two defenses from the domain of classification learning are presented in \cite{Steinhardt}. The \emph{Sphere} defense first computes centroids in the poisoned data, and then removes points outside a spherical radius around the centroids. The \emph{Slab} defense 'projects points onto the line between the centroids and then discards points that are too far away' \cite{Steinhardt}. However, the authors of \cite{Steinhardt} themselves note that these defenses may leave datasets vulnerable, and present an example based on the IMDB classification dataset where both \emph{Sphere} and \emph{Slab} fail. For this reason, we do not consider them to be viable.
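For illustration, a minimal sketch of the \emph{Sphere} defense could look as follows (our simplification: the original operates per class centroid, whereas regression has no classes, so a single data centroid is used):

```python
import numpy as np

def sphere_defense(X, y, radius):
    """Keep only points within a fixed radius of the data centroid;
    everything outside the sphere is discarded as potential poison."""
    centroid = X.mean(axis=0)
    dist = np.linalg.norm(X - centroid, axis=1)
    keep = dist <= radius
    return X[keep], y[keep]
```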
\begin{algorithm} \caption{Trim defense, as proposed in \cite{Jagielski2018}}\label{alg:2} \begin{algorithmic}[1] \REQUIRE \STATE Poisoned dataset $\mathbf{D_{tr}} \cup \mathbf{D_p}$ of combined size $n$ \STATE Some model loss $\mathcal{L}$ \STATE Estimated fraction of poison points $\hat{\epsilon}$ \FUNCTION{Trim}{} \STATE $\mathcal{I}^{(0)} \leftarrow $ a random subset of indices with size $n\frac{1}{1+\hat{\epsilon}}$ \STATE $\theta^{(0)} \leftarrow \arg \min_\theta \mathcal{L}(D^{\mathcal{I}^{(0)}}, \theta)$ \STATE $i \leftarrow 0$ \WHILE{True} \STATE $i \leftarrow i + 1$ \STATE $\mathcal{I}^{(i)} \leftarrow $ subset of size $n\frac{1}{1+\hat{\epsilon}}$ minimizing $\mathcal{L}(D^{\mathcal{I}^{(i)}}, \theta^{(i-1)})$ \STATE $\theta^{(i)} \leftarrow \arg \min_\theta \mathcal{L}(D^{\mathcal{I}^{(i)}}, \theta)$ \STATE break if some convergence condition is met \ENDWHILE \STATE \textbf{return} $D^{\mathcal{I}^{(i)}}$ \ENDFUNCTION \end{algorithmic} \end{algorithm} The state-of-the-art defense is \emph{Trim} \cite{Jagielski2018}, for which we give pseudo-code in Algorithm~\ref{alg:2}. It is an iterative algorithm which first fits a regressor to a subset of the poisoned data, and then iteratively calculates the error between the regressor's prediction on the \emph{train} set and the \emph{train} targets. It refits the regressor on those points with the smallest error, and repeats until a convergence criterion is met. Finally, it returns the points with the smallest error as a 'cleaned' dataset. The number of points to fit on, and conversely, the number of points to discard, is determined by a supplied parameter $\hat{\epsilon}$, the assumed degree of poisoning. If $\hat{\epsilon} = \epsilon$, the defense has been shown to work very well \cite{Jagielski2018}. However, this is not a realistic scenario, since the defender does not know $\epsilon$.
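Algorithm~\ref{alg:2} can be sketched as follows (an illustration using a Ridge regressor and squared residuals as the loss; not the original implementation):

```python
import numpy as np
from sklearn.linear_model import Ridge

def trim_defense(X, y, eps_hat, model=None, max_iter=20):
    """Sketch of Trim: iteratively refit the model on the
    n / (1 + eps_hat) points with the smallest training residuals."""
    model = model if model is not None else Ridge()
    n = len(X)
    keep = int(n / (1 + eps_hat))
    idx = np.random.default_rng(0).choice(n, size=keep, replace=False)
    for _ in range(max_iter):
        model.fit(X[idx], y[idx])
        residuals = (model.predict(X) - y) ** 2
        new_idx = np.argsort(residuals)[:keep]   # smallest-error subset
        if set(new_idx) == set(idx):             # convergence: subset stable
            break
        idx = new_idx
    return X[idx], y[idx]
```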
Consider Figure~\ref{fig:defense_and_eps}, where we poison three real-world datasets from \cite{Jagielski2018}, including the \emph{Warfarin} dataset, with a poison data fraction of $\epsilon = 0.04$. Then, for each dataset, we clean it using the \emph{Trim} defense, where we supply $\hat{\epsilon} \in [0.00, 0.02, 0.04, 0.06, 0.08, 0.10]$ (i.e. we clean the poisoned dataset with different estimates $\hat{\epsilon}$ to quantify the effect of $\hat{\epsilon}$ on \emph{Trim}). On the resulting data (which is partially or fully free from poison samples, depending on $\hat{\epsilon}$), we train a regressor and calculate the MSE on a separate test set. Then, for each regressor, we average the MSE over all datasets, and plot the median over the regressors against $\hat{\epsilon}$. \begin{figure}[!ht] \centering \includegraphics[width=0.90\linewidth]{resources/median_effect_trim_eps_estimate.png} \caption{Three real-world datasets (\emph{Warfarin}, \emph{Loan} and \emph{Housing} \cite{Jagielski2018}) are poisoned with the \emph{Flip} attack, where $\epsilon = 0.04$. We then apply the \emph{Trim} defense with different values of $\hat{\epsilon}$, and train a regressor on the resulting dataset. Finally, we plot the test MSE against $\hat{\epsilon}$, averaged over all datasets and regressors. Observe that the \emph{Trim} defense is highly dependent on the correct choice of $\hat{\epsilon}$ (in this case $0.04$): If $\hat{\epsilon}$ is estimated too low, poison points remain in the training data, skewing the regressor and causing high test loss. If $\hat{\epsilon}$ is estimated too high, legitimate points are removed along with the poison points. This loss of training data causes the regressor to learn a distribution different from the test distribution, which also incurs higher test loss.
Thus, it is important to accurately estimate the degree of poisoning $\hat{\epsilon}$.} \label{fig:defense_and_eps} \end{figure} We make one key observation: the effectiveness of \emph{Trim} highly depends on the correct choice of $\hat{\epsilon}$. Selecting $\hat{\epsilon}$ below the actual degree of poisoning results in not all poison samples being removed and, thus, in an increase of the test MSE of a regressor. Selecting $\hat{\epsilon}$ above the actual degree of poisoning results in pristine data being removed, which might remove relevant structure and information contained in the dataset and, as a result, also increase the test MSE. Therefore, a better selection strategy than blind overestimation of $\hat{\epsilon}$ is required. \subsection{The Iterative Trim Defense}\label{ss:itrim} \begin{algorithm} \caption{iTrim defense}\label{alg:3} \begin{algorithmic}[1] \REQUIRE \STATE Poisoned dataset $\mathbf{D_{tr}} \cup \mathbf{D_p}$ \STATE Some model loss $\mathcal{L}$ \STATE Maximum estimated poisoning rate $\epsilon_{max}$ \STATE Number of runs $r$ \STATE Threshold $t$ \FUNCTION{iTrim}{} \STATE $ I \leftarrow \Big \{ \epsilon_{max} \frac{j}{r-1} $ s.t. $j \in \{0, ..., r-1\} \Big \} $ \FOR{$i \in I $} \STATE $\mathbf{D}^{(i)} \leftarrow trim(\mathbf{D_{tr}} \cup \mathbf{D_p}, \mathcal{L}, \hat{\epsilon} = i) $ \STATE $L^{(i)} \leftarrow \min_\theta \mathcal{L}(\mathbf{D}^{(i)}, \theta)$ \ENDFOR \STATE $\epsilon_{opt} \leftarrow \min \{ i \in I $ s.t. $ | L^{(i)} - L^{(i-1)} | < t \} $ \STATE \textbf{return} $trim(\mathbf{D_{tr}} \cup \mathbf{D_p}, \mathcal{L}, \hat{\epsilon} = \epsilon_{opt})$ \ENDFUNCTION \end{algorithmic} \end{algorithm} As shown in the last subsection, the \emph{Trim} defense has the potential to accurately remove poison samples from a given dataset, provided that $\hat{\epsilon}$ is chosen correctly, but over- or underestimating $\hat{\epsilon}$ significantly decreases test performance.
From this result stems the motivation for our proposed \emph{Iterative Trim} defense (\emph{iTrim}). This defense extends \emph{Trim} with an iterative search for the best $\hat{\epsilon}$. In this section, we present this algorithm and our proposal for selecting the ideal value for $\hat{\epsilon}$. In Section~\ref{s:eval}, we show empirically on 26 datasets that \emph{iTrim} can be applied under realistic conditions to poisoned data, and reliably identifies and removes the poisoned data. \subsubsection{Algorithm Description} Algorithm~\ref{alg:3} details the \emph{iTrim} defense. It takes as arguments the poisoned dataset $\mathbf{D_{tr}} \cup \mathbf{D_p}$, a loss $\mathcal{L}$, and three scalar hyperparameters. The first, $\epsilon_{max}$, is an estimate of the maximum possible poisoning rate. This hyperparameter can be chosen arbitrarily large without impacting the defense's result, but choosing it close to the true rate improves run time. The second hyperparameter specifies the number of runs $r$. It has little influence on the algorithm's performance; together with $\epsilon_{max}$, it determines which values of $\hat{\epsilon}$ will be tried. The final hyperparameter, the threshold $t$, does impact the algorithm's performance, and we discuss how to choose it later on. \emph{iTrim} starts by calculating a set $I$ of candidate values for $\hat{\epsilon}$ (line 2). The hyperparameters $\epsilon_{max}$ and $r$ define the right bound and the number of points, respectively. Then, for each candidate, we calculate the cleaned dataset $\mathbf{D}^{(i)}$ using \emph{Trim}, train the regressor, and obtain the corresponding train loss $L^{(i)}$ (lines 3-4). Finally, the optimal value for $\hat{\epsilon}$ is found when the difference between two consecutive train losses, $| L^{(i)} - L^{(i-1)} |$, first falls below some threshold $t$ (line 7). The dataset is cleaned using \emph{Trim} with this estimate, and the result is returned (line 8).
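Given any Trim-style routine as a callable \texttt{trim(X, y, eps\_hat, model)}, Algorithm~\ref{alg:3} can be sketched as follows (an illustration only; the hyperparameter defaults mirror the values used in our evaluation):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def itrim_defense(X, y, trim, eps_max=0.14, runs=6, threshold=1e-3,
                  model=None):
    """Sketch of iTrim: run a Trim-style routine for a grid of candidate
    poisoning rates and return the cleaned dataset for the first
    candidate where the train loss stops improving significantly."""
    model = model if model is not None else LinearRegression()
    candidates = [eps_max * j / (runs - 1) for j in range(runs)]
    prev_loss = None
    for eps_hat in candidates:
        Xc, yc = trim(X, y, eps_hat, model=model)
        model.fit(Xc, yc)
        loss = np.mean((model.predict(Xc) - yc) ** 2)
        # Kink criterion of line 7: |L(i) - L(i-1)| < t.
        if prev_loss is not None and abs(loss - prev_loss) < threshold:
            return trim(X, y, eps_hat, model=model)
        prev_loss = loss
    return trim(X, y, candidates[-1], model=model)
```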
\subsubsection{Poison Rate Selection} Before we give an intuition for our algorithm and the reasoning behind our selection criterion, we briefly address validation approaches to finding $\hat{\epsilon}$. As already mentioned, $\hat{\epsilon}$ is a hyperparameter of \emph{Trim}. In machine learning, a common approach to finding hyperparameters is to use validation schemes, e.g. cross-validation. But for this approach to work, we require a clean validation dataset. Since we only have a single dataset, we have to assume that any validation split will contain poisoned instances, rendering conventional validation approaches unsuitable for finding hyperparameters in this setting. Thus, we now proceed to explain our iterative approach to finding $\hat{\epsilon}$: consider Figure~\ref{fig:iTrim_warfarin}, where we apply \emph{Trim} to the \emph{Warfarin} dataset poisoned with $\epsilon = 0.04$. The orange dashed line shows the train loss for different candidate values $\hat{\epsilon} \in [0.00, 0.02, 0.04, 0.06, 0.08, 0.10]$. Note that for the correct estimation of the poisoning degree, $\hat{\epsilon} = 0.04$, the train loss becomes almost zero, decreasing by several orders of magnitude compared to $\hat{\epsilon} = 0.00$. Further increasing $\hat{\epsilon}$ still decreases the train MSE, but only insignificantly. Thus, the train loss can be approximated by two straight lines, joined at a distinctive kink where $\hat{\epsilon} = \epsilon$. Figure~\ref{fig:iTrim_housing_loan} in the Supplementary Material shows this for other real-world datasets. We can understand this kink as the point where the dataset ceases to contain data which incurs extremely high train loss -- in other words, where all adversarial poison data have been removed. This assumption is supported by the blue line in Figure~\ref{fig:iTrim_warfarin}, which shows the test MSE for the same \emph{Kernel Ridge} regressor trained on the datasets cleaned in this way. For $\hat{\epsilon} = 0.04$, the test loss is minimal.
For $\hat{\epsilon} > 0.04$, \emph{Trim} starts to remove legitimate data (since all poison data have been removed), which is why test performance deteriorates. Section~\ref{s:eval} will verify this empirically. \begin{figure} \centering \includegraphics[width=0.90\linewidth]{resources/warfarin_trim_eps_estimate.png} \caption{Applying \emph{Trim} to the \emph{Warfarin} dataset, poisoned with $\epsilon = 0.04$. The orange dashed line shows the averaged train loss for different estimations of poisoning $\hat{\epsilon}$, using \emph{KernelRidge} regression. There is high train loss for $\hat{\epsilon} = 0$ and $0.02$, because in these cases not all poison points in the dataset can be removed. Once all adversarial poison data are removed ($\hat{\epsilon} = 0.04$), the train loss decreases by several orders of magnitude, approaching zero. The blue line shows the test error for the same setup. Note that the best $\hat{\epsilon}$ is indeed characterized by a sudden change of train loss. Also note that for $\hat{\epsilon} > 0.04$, \emph{Trim} starts to remove legitimate data, which deteriorates test performance.} \label{fig:iTrim_warfarin} \end{figure} Based on the insight that the train loss can be approximated by two straight lines which intersect at $\epsilon$, we develop our selection criterion for $\hat{\epsilon}$. We define $t$ as the maximum absolute gradient of the line where $\hat{\epsilon} > \epsilon$ (i.e. the slope of the orange dashed line on the 'right' side of the graph, where all poison data have been removed). We refer to this line as the \emph{normal line}. Then, $| L^{(i)} - L^{(i-1)} |$ is used to approximate the gradient on each subinterval; the division by the length of the interval is omitted since all intervals are equidistant. We choose $\hat{\epsilon}$ as the first candidate for which the estimated gradient is normal (i.e. $| L^{(i)} - L^{(i-1)} | < t$).
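As a toy numerical illustration of this criterion (loss values invented for illustration; real curves come from running \emph{Trim} as in Figure~\ref{fig:iTrim_warfarin}):

```python
# Train losses returned by Trim for candidates eps_hat = 0.00 ... 0.10
# (values invented for illustration).
losses = [0.80, 0.35, 0.0004, 0.0003, 0.0002, 0.0002]
candidates = [0.00, 0.02, 0.04, 0.06, 0.08, 0.10]
t = 0.001
# First candidate whose loss differs from its predecessor by less than t,
# i.e. the first candidate on the flat ('normal') part of the curve.
eps_opt = next(c for c, prev, cur in
               zip(candidates[1:], losses, losses[1:])
               if abs(cur - prev) < t)
print(eps_opt)  # 0.06
```

With these toy values, the large drop occurs between $0.02$ and $0.04$, so the first sub-threshold difference is observed at the following grid point.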
\subsubsection{Threshold Selection} \emph{iTrim} depends on an appropriate choice of the threshold $t$. If $t$ is far too large, poison points are left in the dataset. If $t$ is too small, \emph{iTrim} deteriorates to \emph{Trim}, and starts removing non-poison points. However, we find that there is a rather large window of appropriate values of $t$. This is because 1) we apply feature/target scaling to $[0, 1]$, and 2) the difference in train loss we observe once all poisoned points are removed is dramatic (cf. Figure~\ref{fig:iTrim_warfarin}). Based on our evaluation on 26 datasets (cf. Section~\ref{ss:data_sets}), we find empirically that values between $0.05$ and $0.0001$ perform comparably, and thus decide on a threshold $t = 0.001$. In summary, we find that: \begin{itemize} \itemsep0em \item \emph{Trim} lacks a mechanism to find a good estimate for the percentage $\hat{\epsilon}$ of poisoned points in the dataset. \item Over- and underestimation of $\hat{\epsilon}$ deteriorate the dataset to be cleaned. \item Appropriate values for $\hat{\epsilon}$ can be found via \emph{iTrim}. \end{itemize} \section{Empirical Evaluation}\label{s:eval} In this section, we evaluate our attack and defense algorithms on 26 datasets. We show that we 1) can reliably poison nonlinear and linear models while assuming a realistic black-box threat model, and 2) can defend against this attack better than previously suggested defenses. \subsection{Experimental Setup} We try to make our experiment as general and realistic as possible. First, we split each of the 26 datasets into a randomly drawn \emph{substitute set} of relative size $0.25$, a \emph{train set} of size $0.75 \cdot 0.8$ and a \emph{test set} of size $0.75 \cdot 0.2$. For each combination of the 26 \emph{substitute} datasets and $\epsilon \in [0.00, 0.02, 0.04, 0.06, 0.08, 0.10]$, we create a poisoned dataset using the respective attack, which we append to the corresponding \emph{train set} and shuffle.
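The split-and-poison procedure can be sketched as follows (a simplified illustration; the \texttt{attack} callable stands for any attack with the Flip interface):

```python
import numpy as np
from sklearn.model_selection import train_test_split

def make_poisoned_splits(X, y, eps, attack, seed=0):
    """Split into a 25% substitute set and a 75% remainder (itself
    split 80/20 into train/test), then append the attack's poison
    points to the train set and shuffle."""
    X_sub, X_rest, y_sub, y_rest = train_test_split(
        X, y, train_size=0.25, random_state=seed)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_rest, y_rest, train_size=0.8, random_state=seed)
    X_p, y_p = attack(X_sub, y_sub, eps, len(X_tr))
    X_poisoned = np.vstack([X_tr, X_p])
    y_poisoned = np.concatenate([y_tr, y_p])
    perm = np.random.default_rng(seed).permutation(len(y_poisoned))
    return X_poisoned[perm], y_poisoned[perm], X_te, y_te
```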
This results in $6 \cdot 26 = 156$ combinations of \emph{train} dataset and poisoning rate. This step does not depend on the regressors. Then, for each regressor and each of the 156 \emph{poisoned train datasets}, we perform cross-validated grid search to find suitable hyperparameters. Finally, we clean each of the 156 \emph{poisoned train datasets} with both defenses (\emph{Trim} and \emph{iTrim}) separately. We then train a regressor, measure the test error on the \emph{test sets}, and report the results below. Thus, in total we run $156 \cdot 7 \cdot 2 = 2184$ experiments ($7$ being the number of different regressors evaluated). The experiments and source code are published to enable reproducibility\footnote{See \url{https://github.com/Fraunhofer-AISEC/regression_data_poisoning}}. \subsection{Datasets and Regressors}\label{ss:data_sets} For our experiments, we use 26 datasets: three datasets introduced in~\cite{Jagielski2018}, eight datasets from the GitHub repository \emph{imbalanced dataset} \cite{branco2019imbalanced}, and 15 datasets from the \emph{KEEL} regression repository~\cite{Alcala-Fdez2011}. Each dataset contains at least $1000$ data points. For datasets where $n > 10000$, we randomly sample a subset of $n = 10000$ points. In keeping with~\cite{Jagielski2018}, we scale features and targets to $[0, 1]$. See Table~\ref{tab:datasets} in the Supplementary Material for a detailed summary. We evaluate four linear models (HuberRegressor, Lasso, Ridge, Elastic Net) and three nonlinear models (Neural Networks, Kernel Ridge with RBF kernel, and Support Vector Regressor with RBF kernel). To the best of our knowledge, we are the first to evaluate poisoning attacks against nonlinear regressors. \subsection{Evaluation of \emph{StatP}} In this section, we briefly report the effectiveness of \emph{StatP} on nonlinear regressors. As detailed in \cite{Jagielski2018}, the attack is effective for linear regressors.
We find, however, that it is not effective when applied to nonlinear learners. For example, a Neural Network's MSE remains nearly unchanged (from $0.051$ to $0.055$) when poisoned with ten percent poison samples created by \emph{StatP}. In the Supplementary Material, we elaborate on this in more detail, and evaluate additional nonlinear learners such as Kernel SVM and Kernel Ridge, which we find to behave similarly. \subsection{Evaluation of \emph{Flip}}\label{ss:eval_flip} In this section, we present the results of evaluating our proposed \emph{Flip} attack on 26 datasets and seven regressors. Figure~\ref{fig:flip_attack_all} shows the performance of the \emph{Flip} attack, averaged over all datasets. Figure~\ref{fig:attack_results_non_averaged} in the Supplementary Material shows results per dataset. The attack is highly effective: when adding only $4\%$ of poison data, the MSE of most regressors doubles compared to the non-poisoned case. We observe that all models seem equally susceptible to our attack, with the exception of the Huber Regressor and Support Vector Regressor, which are designed to be outlier-resistant. \begin{figure} \centering \includegraphics[width=0.90\linewidth]{resources/all_reg_flip_results.png} \caption{Evaluation of our proposed \emph{Flip} attack. This plot shows the MSE for different poison rates per regressor, averaged over all 26 datasets. Most regressors obtain an MSE of around $0.003$ when $\epsilon = 0$, and all but two deteriorate linearly as $\epsilon$ increases. Figure~\ref{fig:attack_results_non_averaged} in the Supplementary Material shows the same results, but for each dataset individually. } \label{fig:flip_attack_all} \end{figure} \subsection{Evaluation of \emph{Trim} and \emph{iTrim}}\label{ss:eval_trim} In this section, we report the results of defending against the \emph{Flip} attack. We report the performance of both the \emph{Trim} defense and our proposed \emph{iTrim} defense, and compare their efficiency.
We set both $\hat{\epsilon} = 0.14$ (for \emph{Trim}) and ${\epsilon}_{max} = 0.14$ (for \emph{iTrim}) to mimic the behavior of a defender in a realistic scenario. A defender would have to guess the percentage of poisoned data $\epsilon$, with a preference for overestimation rather than underestimation (as explained in Section~\ref{ss:itrim}). We proceed as follows: with varying degrees of poisoning, we poison all 26 datasets using the \emph{Flip} attack. Then, for each regressor, we clean (i.e. 'defend') the datasets using the \emph{Trim} as well as the \emph{iTrim} defense (separately). We fit the regressor on the resulting dataset, and compare the test error against a regressor trained on the 'clean' data. Figure~\ref{fig:trim_itrim_def} shows the median over all regressors for \emph{Trim} (blue line) and \emph{iTrim} (orange dashed line)\footnote{ The Supplementary Material provides more details: Figure~\ref{fig:trim_itrim_def_all} presents the results for each regressor individually, while Figure~\ref{fig:defenses_results_non_averaged} depicts the results for all 26 datasets. }. \begin{figure}[] \centering \includegraphics[width=0.90\linewidth]{resources/median_trim_itrim_results.png} \caption{Evaluation of the \emph{Trim} and \emph{iTrim} defenses. First, we poison the 26 datasets using \emph{Flip}. Then we apply the \emph{Trim} or \emph{iTrim} defense, and calculate the test MSE. Finally, we normalize by the \emph{baseline} MSE -- the error obtained by a model trained on unpoisoned data. The resulting quotient represents the degree to which the defenses can negate the \emph{Flip} attack. We observe that \emph{iTrim} consistently outperforms \emph{Trim}. More detailed, non-averaged results are presented in Figure~\ref{fig:trim_itrim_def_all} and Figure~\ref{fig:defenses_results_non_averaged} in the Supplementary Material. } \label{fig:trim_itrim_def} \end{figure} We observe that both defenses are effective.
However, \emph{iTrim} achieves higher performance than \emph{Trim}, especially when there is a large discrepancy between $\epsilon$ and $\hat{\epsilon}$. This is due to \emph{iTrim}'s capability of more accurately estimating the degree of poisoning; especially for $\hat{\epsilon} > \epsilon$, this more accurate estimate yields a considerable improvement. \emph{iTrim} is also computationally feasible, despite its iterative approach: the average runtime for defending a given dataset was 1.6 minutes. A more detailed discussion can be found in the Supplementary Material. \subsection{Runtime} In this section, we detail the runtime of the \emph{iTrim} defense algorithm. \emph{iTrim} calls the \emph{Trim} defense $r$ times, which in turn performs up to $j$ fit operations of the regressor -- until either a convergence criterion is met, or the number of runs $j$ is exhausted. In our experiments, we set $r=6, j = 20$. Running the complete experiment (attacking all 26 datasets, for seven regressors and six poisoning rates $\epsilon$) results in $26 \cdot 6 \cdot 7 = 1092$ calls to the \emph{iTrim} defense. On an Intel(R) Xeon(R) CPU E7-4860 v2 @ 2.60GHz with 96 cores, this takes about $120$ minutes when parallelizing into 15 separate processes. Thus, running a single \emph{iTrim} defense takes, on average, $120 / 1092 \cdot 15 = 1.6$ minutes per $6$-core process. Obviously, this is highly dependent on the regressor's complexity, the size of the dataset, the number of features, and the parallelism capabilities of the program code. Still, this indicates the feasibility of applying the \emph{iTrim} defense in a real-world scenario, where after weeks, months or even years of data gathering, running \emph{iTrim} incurs negligible additional time overhead. \section{Warfarin Revisited} \label{s:warfarin_revisited} \begin{table*} \centering \caption{Mean absolute error (MAE) of different regression models when poisoned using the \emph{Flip} attack.
} \input{warfarin_results_more.tex} \label{tab:warfarin_mae_more} \end{table*} In Section \ref{s:case_study_warfarin} we presented the medical use case of predicting the therapeutic Warfarin dose and showed that data poisoning can significantly impact the performance of regressors on this task. In this section, we illustrate the empirical results of Section \ref{ss:eval_trim} on the use case of Warfarin dose prediction, where we consider three different scenarios: First, the (C)lean case. In this scenario, no data poisoning occurs. This case will be used as a baseline for measuring the effects of data poisoning and defense. Second, the (P)oison case. In this scenario, the attacker introduces $2\%$ poison samples using the \emph{Flip} attack proposed in Section \ref{ss:flip}. No countermeasures are taken. Third, the (D)efended case. In this scenario, the data are poisoned with $2\%$ poison samples as in (P), but \emph{iTrim} is used as a countermeasure. The results for these three scenarios are summarized in Table~\ref{tab:warfarin_mae_more}. To recapitulate: Warfarin is a blood thinner with a narrow therapeutic window, which makes the correct prediction of the therapeutic Warfarin dose highly significant medically. The scenarios (C) and (P) have already been presented in Section~\ref{s:case_study_warfarin}. To summarize: the models used in our evaluation perform comparably to state-of-the-art models, and $2\%$ poison samples are sufficient to noticeably increase metrics like the MAE and to decrease the number of patients receiving an acceptable dose of Warfarin by up to $22\%$. In Table~\ref{tab:warfarin_mae_more}, the column 'MAE D/C' provides the factor by which the MAE of a regressor increases when the dataset is poisoned with $2\%$ poison samples and then defended using \emph{iTrim}. As we can see, the median is $1.00$, indicating that the damage is mitigated.
The individual values range from $0.98$ to $1.03$: where previously \emph{Flip} incurred an increase in MAE of up to $31\%$, the \emph{iTrim} defense reduces this error increase to at most $3\%$. In summary, the MAE of the tested models in the (D) scenario is approximately the same as in the (C) scenario, meaning the defense successfully eliminates most of the negative impact of the poison samples. The column \emph{Acceptable D/C} gives the percentage by which the number of patients receiving an acceptable Warfarin dose decreases in the (D) scenario compared to the (C) scenario. The median decreases from $21.07$ in scenario (P) to close to $0$ in scenario (D). This shows that the number of patients receiving an unacceptable Warfarin dose due to data poisoning is significantly reduced when the \emph{iTrim} defense is employed. In summary, we observe that the \emph{iTrim} defense decreases the influence of poison samples: it results in more patients receiving adequate predictions for their therapeutic Warfarin dose. \section{Conclusion} In this paper, we introduced a novel data poisoning attack on regression learning as well as a matching defense mechanism. We showed the effectiveness of our proposed attack and defense algorithms in a large empirical evaluation over seven regressors and 26 datasets. Both attack and defense assume realistic constraints: the attack is black-box and does not assume access to the true dataset, but only a substitute dataset. The defense, on the other hand, does not assume any knowledge of the poisoning rate $\epsilon$, but estimates it using an iterative approach.
\section{Introduction} Plasma science has an admirable track record as an enabling technology that underpins our modern society, and it has the potential to make wide-ranging contributions to many societal challenges~\cite{roadmap2012,roadmap2017}. Technologies based on low-temperature plasmas (LTPs) are ubiquitous in today's society. These include mature technologies such as fluorescent lamps and gas lasers, as well as more ``modern'' technologies in use but still being developed, such as plasma reactors for semiconductor processing and microelectronics fabrication~\cite{encyclopedia}. Today, there are extensive research activities and rapidly emerging applications of LTPs in medicine and in agriculture~\cite{plasmamedicine,plasmaagriculture}. LTPs are generated most simply by applying a sufficiently high voltage across a gas gap between two electrodes~\cite{ltpreview}. The properties of the plasmas so generated vary considerably with the experimental parameters: gas pressure and composition, geometrical configuration, the means of applying an electromagnetic field (e.g., application of a DC, AC and/or rf, pulsed or steady-state voltage across the electrodes, or injection of microwaves) and the specificities of the external circuit. For purposes of discussion here, we will consider only an applied electric field. LTPs consist of electrons and ions flowing through a neutral background gas in response to the applied electric field, and, for many applications, the number density of the neutral molecules exceeds that of the charged particles by many orders of magnitude. Our knowledge of electron and ion interactions with atomic and molecular species within the plasma, and the evaluation of cross sections and reaction rates for such collisions, has played an important role in the exploitation of plasmas in several applications~\cite{encyclopedia}.
Being much lighter than ions, electrons are more easily accelerated in the electric field that sustains the plasma, and hence the electrons are the vector through which electrical energy is transferred to the neutrals through collisions. For a wide range of conditions in LTPs, electrons collide predominantly with background neutral gas molecules in their ground state. In these conditions, the electron energy distribution function is generally non-Maxwellian and the electron ``temperature'' is much higher than the temperature of the ions or that of the neutrals. Because of the huge range of realizable conditions, optimization of an LTP for a particular application necessarily requires a combination of experiment and modeling. The data required for modeling LTPs depend on the level of description but are in all cases extensive~\cite{plasmasimulation1,alvesltpmodeling}. In fluid models, the electrons and the ions are treated as separate fluids because of their widely disparate temperatures, and these fluids are coupled to Maxwell's equations for the electromagnetic fields. In their simplest form, fluid models require electron and ion mobilities, diffusion coefficients, and electron ionization/attachment rate coefficients. The product of the mobility and neutral density, $\mu N$; the product of the diffusion coefficient and neutral density, $DN$; and the rate coefficients depend on $E/N$, the ratio of the electric field strength to the neutral density, in the limit of a constant (in time and space) electric field. These transport and rate coefficients as functions of $E/N$ are commonly called ``swarm'' parameters, in analogy with a drifting, spreading swarm of bees where the average kinetic energy is much higher than the directed (drift) energy. On the other hand, the more detailed kinetic models (such as Particle-In-Cell simulations with Monte Carlo Collisions) require electron-neutral and ion-neutral cross sections vs.
energy for each possible outcome of the collision, whether it be elastic scattering, excitation, or ionization. Of course, there are many different possible excitation channels, and a cross section as a function of energy is required for each in general. Future developments in LTP areas will be based upon our ability to manipulate plasma properties, which requires a thorough understanding of plasma chemistry and the availability of accurate cross section data and swarm parameters~\cite{leanne-gec,pnascollisions}. Swarm parameters can be measured fairly easily and with very high accuracy (0.5$\%$ for the drift velocity, for example), and since the works of Ramsauer, Mayer, Townsend and Bailey in the early 1920s, researchers have aimed to extract information about microscopic cross sections from measurements of macroscopic swarm parameters. Cross sections, on the other hand, are much more difficult to measure, and highly accurate quantum mechanical calculations for simple atomic target species are only now becoming available. As a result, despite their significance, many processes are not well understood because the required cross sections are not available, and their absence is a major impediment for experimentalists as well as computational investigators. \begin{figure} \centering \includegraphics[width=\linewidth]{draw_images/Inverse_problem.png} \caption{The forward problem of mapping from a set of cross sections to a set of corresponding swarm data is well-posed and can be solved numerically by solving Boltzmann's equation. On the other hand, the inverse problem is ill-posed and this inverse mapping function does not exist.} \label{inverseproblem} \end{figure} Complete sets of cross sections include a momentum transfer cross section for elastic scattering, and cross sections for excitation and ionization processes for a given target species. A partial set includes a subset of the important scattering processes for that species.
Complete sets of cross sections are needed as input to a Boltzmann equation solver to determine the electron or ion energy distribution function, from which swarm parameters can subsequently be computed (the forward mapping in Fig.~\ref{inverseproblem}). Therefore, complete sets of cross section data play an important role in designing new experiments as well as simulations. Obtaining cross sections from swarm parameters is a challenging task, pioneered by Townsend and Ramsauer in the 1920s. This inverse problem is ill-posed and the inverse mapping function does not exist, as depicted in Fig.~\ref{inverseproblem}. The method used in those early analyses involved inverting the integral relating the drift velocity and momentum transfer cross section, using a simplified expression for the electron energy distribution. This approach was significantly improved in the 1960s by Phelps and collaborators, who employed iterative methods to solve the Boltzmann equation and obtain an accurate energy distribution of the electrons~\cite{frost1962rotational, engelhardt1963elastic, engelhardt1964determination, hake1967momentum}. This allowed accurate computation of the momentum transfer and lower-energy inelastic cross sections from the available swarm data. The iterative process of inverting the swarm data to obtain cross sections involves solving the Boltzmann equation, calculating the electron energy distribution and altering the model cross sections until a satisfactory match is found between the original and computed swarm parameters, making it a computationally expensive problem to solve. To address this issue, the authors of~\cite{suzuki1990momentum, morgan1991use} used numerical optimization algorithms to help speed up the process of obtaining cross sections from swarm data. However, the inverse swarm problem is ill-posed, especially when there is a lack of available swarm data.
Therefore, these optimization algorithms would often get stuck in a local minimum due to the non-uniqueness of the inverse swarm mapping. Neural networks have been successfully used to learn non-linear mappings between two sets of data, and once a network has been trained, it can produce outputs in roughly $\mathcal{O}(1)$ time. It is also relatively easy to avoid local minima during optimization with neural networks. Hoping to utilize these advantages, W. L. Morgan investigated the feasibility of using neural networks to solve the inverse swarm problem~\cite{morgan1991feasibility} and concluded that neural networks were indeed useful for determining cross sections from electron swarm data, but could not achieve high accuracy levels due to the lack of quality cross section data available, along with various limitations of the commercially available neural net simulators of those times. Artificial neural networks have also been used to successfully predict proton-impact single ionization double differential cross sections of atoms and molecules~\cite{harris2013applications}. Since Morgan's findings in 1991, there has been an increase in the amount of available cross section and swarm data (LXCat~\cite{lxcat}). Recently, a study carried out by Stokes \textit{et al.}~\cite{stokes2019determining} verified Morgan's claims, and their work~\cite{stokes2020self} demonstrated that an automated solution based on an artificial neural network had an accuracy comparable to that of a human expert in determining cross sections of the biomolecule tetrahydrofuran (THF) from experimentally measured swarm data. In~\cite{stokes2019determining}, they also showed that the use of a large amount of synthetic training data, generated using the real cross sections available from LXCat, indeed gave good results when used to predict the elastic momentum transfer and ionization cross sections of helium and argon.
However, the same needs to be verified for a number of different gas species before the feasibility of this machine learning approach to the inverse swarm problem can be safely concluded. Moreover, their study was limited to the use of artificial neural networks, with only minor improvements over the architecture proposed by Morgan to increase the parameter efficiency and training speed of the model. Additionally, in the last decade there has been a drastic increase in computing power along with vast improvements in machine learning algorithms, allowing the creation of large and powerful neural networks. There are numerous applications in computer vision and image processing where other neural network types, such as convolutional neural networks (CNNs), outperform ANN predictions because of their ability to extract spatial information. In our problem too, the swarm data used as input to the neural network form a continuous series, and it therefore becomes imperative to study the performance of CNN architectures in solving the inverse swarm problem. Additionally, since this inverse problem is ill-posed in nature, it is more reasonable to find the entire distribution from which the plausible solutions can be sampled. Thus, in this study, we explore the suitability of deep neural networks to identify the inverse relationship for a wide range of gas species and assess the efficacy of different neural network architectures in predicting scattering cross sections from simulated swarm data. Furthermore, we perform uncertainty quantification to estimate the distribution of all the plausible solutions of the inverse problem. To the best of our knowledge, no study exploring the use of CNNs and DenseNets for this inverse swarm problem has yet been reported. In section 2, we describe our complete data-driven methodology, starting from data preparation and proceeding to the implementation of two new neural networks (CNN and DenseNet based) for the solution of this inverse swarm problem.
In section 3, we present a detailed comparison of the performance of three neural networks (ANN, CNN and DenseNet) in determining the cross sections of seven gas species. The reliability of the predictions has also been evaluated using an uncertainty quantification method in subsection 3.1. Finally, we conclude the paper with a summary of our results and a brief discussion of how the accuracy of this data-driven approach can be further improved. \section{Methodology} Our data-driven methodology for determining a set of cross sections consistent with swarm parameters involves several steps, such as data collection and profiling, feature engineering, and building suitable machine learning (ML) models, followed by training and evaluation. Figure~\ref{workflow} describes the complete workflow used in this study for solving the inverse swarm problem. Firstly, complete sets of cross sections for different gas species are obtained from the LXCat~\cite{lxcat} database; however, since these data are limited, we generate abundant synthetic cross section data. Secondly, using the cross section data, we compute the corresponding swarm coefficients using the freeware Boltzmann equation solver BOLSIG+~\cite{bolsig+}. Thirdly, we perform feature selection followed by data normalization. Finally, different neural network models are designed and trained using the combination of cross section and swarm data. The predicted results are compared to the cross sections obtained from LXCat. We then also estimate the complete distribution of the plausible cross sections by quantifying the uncertainty in the solution using Monte Carlo Dropout~\cite{gal2016dropout}. In the following subsections, we provide a detailed description of each of the above-mentioned steps.
\begin{figure} \centering \includegraphics[width=\linewidth]{draw_images/Workflow.png} \caption{Complete workflow used in this study for solving the inverse swarm problem.} \label{workflow} \end{figure} \subsection{Dataset} Efficient training of a neural network to identify the inverse non-linear relationship between swarm data and cross sections requires abundant training data. Morgan generated training cross sections using a power-law model of the form $\sigma(\epsilon) = \epsilon_0 / \epsilon^{p}$, where $\epsilon_0$ and $p$ are randomly chosen from $(10^{-17},\; 10^{-14})$ and $(0,\; 1)$ respectively~\cite{morgan1991feasibility}. This parameterized method allows the generation of an infinite number of training examples and can thus be considered ideal for machine learning problems. However, this parameterized equation represents only a small subset of physically plausible cross sections. To expose our deep learning models to more realistic data, we use cross section data for the gas species compiled on the LXCat website, shown in Fig.~\ref{cs}. The cross sections include the energy-dependent momentum transfer cross section for elastic scattering, and total (angle-integrated) cross sections for excitation and ionization processes for a given target species. In general, the probability of a collision of a particular type occurring depends on the relative velocity of the collision partners and the scattering angle. However, it has been shown that the additional detail regarding angular scattering has very little effect on the calculated swarm parameters. Note that there are many different excitation processes with different energy thresholds, and predicting all (or even the most important) of them is a challenging task. In this work, and for the sake of demonstrating the features of different ML algorithms, we consider only one excitation cross section; that is, the training data consider only the lowest excitation process from the compilations available on LXCat.
The Boltzmann equation is solved using only these three input cross sections, and the swarm parameters so calculated are not to be compared with those tabulated from experiments on the LXCat website, which are generally well predicted when a complete set of cross sections is used. This procedure considerably simplifies the computational requirements and is expected to correctly demonstrate the capabilities of each of the ML algorithms studied here. \begin{figure} \centering \includegraphics[width=\linewidth]{images/Crosssections_lxcat_12_nonvertical.png} \caption{Complete cross section data of various gas species obtained from LXCat.} \label{cs} \end{figure} \subsubsection{Extrapolation of inelastic cross sections} In this work, we aim to predict the elastic momentum transfer, ionization and excitation cross sections for energies in the ranges [$10^{-1}$ eV, $10^2$ eV], [$10^{0}$ eV, $10^4$ eV] and [$10^{-1}$ eV, $10^3$ eV] respectively, and, as evident from Fig.~\ref{cs}, the inelastic cross sections of many gas species in the LXCat databases are not available for the entire energy domain under consideration. Thus, we use analytical expressions to extrapolate these cross sections to higher energies. For the ionization cross sections, we use the parameterization (Eq.~\ref{ionization_formula}) proposed by Rost and Pattard~\cite{rost1997analytical} \begin{equation} \sigma(E) = \frac{kE^\alpha}{(E+E_M /\alpha)^{\alpha+1}} \label{ionization_formula} \end{equation} where $E$ is the excess energy of the system measured from the ionization threshold, $E_M$ corresponds to the energy where the cross section attains its maximum value, and $k$ and $\alpha$ are parameters computed to obtain the best fit. Various approximations from quantum mechanics could be used to extrapolate excitation cross sections to higher energies. However, we have chosen to simply use a power-law relationship, Eq.~\ref{excitation_formula}, to extrapolate the data.
\begin{equation} \ln\sigma(E) = k\ln E + C \label{excitation_formula} \end{equation} \subsubsection{Synthetic data generation for training} Deep neural networks require large training datasets for effective performance. The cross section data obtained from LXCat, however, are very limited (complete cross sections of only 46 different gas species) and clearly insufficient to properly train the model. Therefore, we generate synthetic training examples by interpolating the actual cross sections. Firstly, all the 46 gas species for which complete sets of data exist on LXCat are manually classified into three different groups based on the characteristics of their elastic momentum transfer cross sections, as shown in Fig.~\ref{classes}. Group-1, Group-2 and Group-3 consist of 12, 18 and 16 different species, respectively. To avoid the generation of nonphysical cross sections, a new artificial cross section is calculated by taking a weighted geometric average~\cite{stokes2019determining} of two actual cross sections belonging to the same group: \begin{equation} \sigma_{\mathit{new}}(\epsilon) = \sigma_1^{1-r}(\epsilon + \epsilon_1 - \epsilon_1^{1-r}\epsilon_2^r)\;\sigma_2^{r}(\epsilon + \epsilon_2 - \epsilon_1^{1-r}\epsilon_2^r) \label{interpolate} \end{equation} where $\sigma_1(\epsilon)$ and $\sigma_2(\epsilon)$ are the cross sections of gas species belonging to the same group, $\epsilon_1$, $\epsilon_2$ and $\epsilon_1^{1-r}\epsilon_2^r$ are the threshold energies of $\sigma_1(\epsilon)$, $\sigma_2(\epsilon)$ and $\sigma_{\mathit{new}}(\epsilon)$ respectively, and $0\leq r\leq 1$ is a uniformly distributed random variable.
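As a concrete illustration, the two extrapolation formulas (Eqs.~\ref{ionization_formula} and \ref{excitation_formula}) and the geometric-average interpolation of Eq.~\ref{interpolate} can be sketched in Python; the function names are ours, and the two-point fit for the power-law tail is one simple way to pin down $k$ and $C$ from tabulated data.

```python
import numpy as np

def rost_pattard(E, k, alpha, E_M):
    """Ionization cross section of Eq. (ionization_formula); E is the excess
    energy above the ionization threshold, E_M the energy of the maximum."""
    return k * E**alpha / (E + E_M / alpha)**(alpha + 1)

def power_law_tail(E_known, sigma_known, E_new):
    """Power-law extrapolation of Eq. (excitation_formula): fit k and C of
    ln(sigma) = k ln(E) + C to the last two tabulated points."""
    lnE, lns = np.log(E_known[-2:]), np.log(sigma_known[-2:])
    k = (lns[1] - lns[0]) / (lnE[1] - lnE[0])
    C = lns[1] - k * lnE[1]
    return np.exp(k * np.log(E_new) + C)

def geometric_average(sigma1, eps1, sigma2, eps2, r, eps):
    """Weighted geometric average of Eq. (interpolate). sigma1 and sigma2 are
    callables; eps1 and eps2 their threshold energies; 0 <= r <= 1."""
    eps_new = eps1**(1 - r) * eps2**r  # threshold of the synthetic cross section
    return (sigma1(eps + eps1 - eps_new)**(1 - r)
            * sigma2(eps + eps2 - eps_new)**r)
```

For $r = 0$ or $r = 1$ the synthetic cross section reduces to one of the two parents; intermediate values of $r$ blend both the magnitudes and the threshold energies.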
\begin{figure} \centering \includegraphics[width=\linewidth]{images/classes.png} \caption{Gas species separated into three different groups based on the characteristics of their elastic momentum transfer cross sections. Group 1 consists of Ar, Kr, SF\textsubscript{6}, Xe, CO\textsubscript{2}, Si(CH\textsubscript{3})\textsubscript{4}, CF\textsubscript{4}, CH\textsubscript{4}, H\textsubscript{2}O, HCl, SiH\textsubscript{4} and Cu; Group 2 consists of D\textsubscript{2}, H\textsubscript{2}, He, C, Be, C\textsubscript{2}H\textsubscript{2}, Ne, O\textsubscript{2}, N\textsubscript{2}, F, C(2p(2)\_1D), C(2p(2)\_1S), C\textsubscript{2}H\textsubscript{4}, O, N-elec, CO, C\textsubscript{3}H\textsubscript{6} and O\textsubscript{2}(0.98); and Group 3 consists of CHF\textsubscript{3}, Be(2s\_2p\_1P), N, C(2p3s\_1Po), C(2p3s\_3Po), H(1S), H(2P), H(2S), H(3D), H(3P), H(3S), H(4D), H(4F), H(4P), H(4S) and H~\cite{database1, database2, database3, database4, database5, database6, database7, database8, database9, database10, database11, database12, database13, database14, database15, database16, database17, zatsarinny2004b, allan2006near, bray1992convergent, fursa1995calculation, zammit2014electron, zammit2016complete, christophorou2000electron}.} \label{classes} \end{figure} Out of the $46$ gas species available, we set one gas species apart so that we can later use our deep learning model to predict its cross sections and compare them with the actual cross sections from LXCat to determine the accuracy of our model. Then, for our training data, we use Eq.~\ref{interpolate} to generate a total of $10\,000$ different cross sections (Fig.~\ref{cs_interpolated}), including the actual complete cross sections of the 45 remaining gas species. Thus, only the cross sections of the gas species on which the model is to be tested are excluded, while all the other gas species contribute equally to generating these synthetic training cross sections.
Subsequently, these cross sections are sampled at $100$ discrete log-spaced energy values within the energy range considered for prediction. Thus, we have a total of $10^6$ energy--cross section pairs in our training dataset. \begin{figure} \centering \includegraphics[width=\linewidth]{images/Crosssections_interpolated_12_nonvertical.png} \caption{Synthetically generated cross section data.} \label{cs_interpolated} \end{figure} \subsubsection{Swarm data calculation and feature selection} Finally, we complete the input-output training pairs by computing the swarm coefficients corresponding to the cross sections present in our training dataset. Swarm data are computed using the BOLSIG+~\cite{bolsig+} solver for the numerical solution of the Boltzmann equation~\cite{hagelaar2005solving}, with the cross section data as input. Swarm data are calculated at temperature $T = 300$~K for $100$ equally log-spaced reduced electric fields in the range $10^{-3}$ Td to $10^{3}$ Td ($1$ Td = $10^{-21}$ Vm$^2$). Note that BOLSIG+ can extrapolate cross sections to higher energies if needed for very high $E/N$. \begin{figure} \centering \includegraphics[width=\linewidth]{images/correlation.png} \caption{Pearson correlation coefficients of different swarm parameters obtained from BOLSIG+.} \label{corr} \end{figure} Mean energy, mobility, diffusion, energy mobility, energy diffusion, total collision frequency, momentum frequency, total ionization frequency, Townsend ionization coefficient, power, elastic power loss, inelastic power loss, growth power, maximum energy and drift velocity are the 15 quantities included in the output of the BOLSIG+ solver (note that, unlike in BOLSIG+, in most Boltzmann solvers the maximum energy is an input). Using all of these quantities as input to the neural network is not feasible, as it would increase both the training time of the model and its memory requirements.
It might even reduce the overall effectiveness of the model; hence, we use feature selection to reduce the input data by removing redundant variables. We compute the Pearson correlation coefficient between all these possible inputs, as depicted in Fig.~\ref{corr}. The Pearson correlation coefficients shown in Fig.~\ref{corr} are averaged over all the gas species except helium (assuming He is the test species on which the model will be evaluated). Features with a high correlation value ($>0.85$ or $<-0.85$) are more linearly dependent and hence carry almost the same information content; thus, we keep one of each set of highly correlated variables and drop the rest. Using this feature selection method, we are left with mean energy, mobility, diffusion, Townsend ionization, elastic power loss and inelastic power loss. However, mean energy, elastic power loss and inelastic power loss are swarm coefficients whose data are not readily available on LXCat, because these are generally not measured experimentally; since in the long term we would like to apply our methodology to the analysis of experimental swarm data, we drop them from our feature set as well. As a future study, it will nevertheless be interesting to see how the results are affected if these three parameters are included in the data. \subsection{ML methods and model training} \subsubsection{Data normalization} Cross sections, along with swarm data, scale across many orders of magnitude. Directly using these data to train the neural network would severely impede the network's ability to learn meaningful trends. Moreover, large input values would result in large weight values in the neural networks, making them highly unstable. Small input values with zero mean and a standard deviation of one are generally considered ideal for neural networks; thus, we log transform everything (Eq.~\ref{log1}) and subsequently normalize it to the $[-1, 1]$ range (Eq.~\ref{log2}).
\begin{equation} y = \log(x) \label{log1} \end{equation} \begin{equation} z = 2\left(\frac{y-y_{min}}{y_{max}-y_{min}}\right) - 1 \label{log2} \end{equation} If a data value is zero, it is replaced by a sufficiently small positive quantity ($\delta=10^{-50}$) before applying the log transformation. \subsubsection{Neural network architecture} The input to our network consists of different swarm parameters --- mobility ($\mu N$), diffusion coefficient ($ND$) and Townsend ionization coefficient ($\alpha/N$) --- measured at 100 distinct reduced electric fields $E_1/N$, $E_2/N$, $\ldots$, where $N$ (or $n_0$) is the number density of the background neutrals. We use the neural network itself to represent the cross sections as functions of energy. Thus, the energy $\epsilon$ is also added to the input of the neural network, and the output is the single cross section value corresponding to that energy, $\sigma(\epsilon)$: \begin{equation} x = \left[ \begin{array}{cc} \epsilon \\ N\mu(E_1/N) \\ N\mu(E_2/N) \\ \vdots\\ ND(E_1/N) \\ ND(E_2/N) \\ \vdots\\ \alpha/N(E_1/N) \\ \alpha/N(E_2/N) \\ \vdots \end{array} \right] , \;\;\; y = \sigma(\epsilon) \end{equation} Neural networks are composed of several artificial neurons. The structure of these neurons and their connections plays an important role in inferring the function which maps the input to the output. Hence, we test different neural network architectures to study their effect on performance. The artificial neural network (ANN) is the most basic form of neural network, and its use to solve the inverse swarm problem was proposed in~\cite{morgan1991feasibility}. Minor improvements were made to this architecture by~\cite{stokes2019determining}, which made the network simpler and faster to train. This ANN architecture has three hidden layers, each having 128 neurons, with $\mathit{swish}$ as the non-linear activation function, where $\mathit{swish}(x) = x/(1+\exp{(-x)})$. We consider this our benchmark architecture.
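A minimal sketch of the normalization pipeline of Eqs.~\ref{log1} and \ref{log2}, including the $\delta$ replacement of zero values, is shown below; the function names and the choice of log-domain bounds are ours.

```python
import numpy as np

DELTA = 1e-50  # stand-in for exactly-zero values before the log transform

def normalize(x, y_min, y_max):
    """Eq. (log1) then Eq. (log2): log transform, then scale to [-1, 1].
    y_min and y_max are the log-domain bounds of the training data."""
    y = np.log(np.maximum(x, DELTA))
    return 2.0 * (y - y_min) / (y_max - y_min) - 1.0

def denormalize(z, y_min, y_max):
    """Inverse mapping, used to turn network outputs back into cross sections."""
    return np.exp(0.5 * (z + 1.0) * (y_max - y_min) + y_min)
```

The round trip `denormalize(normalize(x, lo, hi), lo, hi)` recovers the original values exactly, which is what allows the network to be trained entirely in the normalized log domain.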
\begin{figure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.95\linewidth]{draw_images/CNN_12.png} \caption{Convolutional Neural Network} \label{cnn_architecture} \end{subfigure} \newline \begin{subfigure}{\textwidth} \centering \includegraphics[width=\linewidth]{draw_images/DenseNet.png} \caption{DenseNet} \label{densenet_architecture} \end{subfigure} \caption{Neural network layouts used in this study. Various CNN and DenseNet architectures with different hyperparameters, such as the number of convolutional filters, kernel size, number of hidden layers and the choice of activation function, were implemented, and the layout with the best performance was finally chosen.} \label{architecture} \end{figure} For our second model, we implement a 1D convolutional neural network~\cite{lecun1995convolutional} because of its ability to extract spatial information from the input data, which is in the form of a continuous series. Various CNN architectures were trained to determine the optimal hyperparameters, and Fig.~\ref{cnn_architecture} shows the one for which the best results were obtained. Features from the different swarm coefficients are extracted by three successive blocks, each consisting of a batch normalization layer, a convolutional layer with 64 filters and a kernel size of $5\times1$, and a \textit{swish} activation layer. This is followed by an average pooling layer, which is then flattened and passed to two fully connected (FC) layers along with the energy input. The FC layers have 256 and 64 neurons with the \textit{swish} activation function. Finally, they are connected to the linearly activated single output neuron. The densely connected convolutional network (DenseNet) is an extension of the CNN which provides substantial performance improvements in comparison to previous CNN architectures~\cite{huang2017densely}, and hence we use a 1D-DenseNet architecture as our third model.
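The CNN layout just described can be sketched as an untrained forward pass in NumPy. This is a shape-level illustration only: the weights are random, batch normalization is replaced by a simple standardization, and the pooling factor of 4 is our assumption (the text does not specify it).

```python
import numpy as np

rng = np.random.default_rng(0)

def swish(x):
    return x / (1.0 + np.exp(-x))

def conv1d_same(x, w):
    """'Same'-padded 1D convolution; x: (length, c_in), w: (k, c_in, c_out)."""
    k = w.shape[0]
    xp = np.pad(x, ((k // 2, k - 1 - k // 2), (0, 0)))
    return np.stack([np.tensordot(xp[i:i + k], w, axes=([0, 1], [0, 1]))
                     for i in range(x.shape[0])])

def conv_block(x, c_out=64, k=5):
    """Batch-norm stand-in, convolution with 64 filters of size 5, swish."""
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-5)
    w = rng.normal(scale=0.1, size=(k, x.shape[1], c_out))
    return swish(conv1d_same(x, w))

swarm = rng.random((100, 3))          # mu*N, N*D, alpha/N at 100 E/N values
h = swarm
for _ in range(3):                    # three feature-extraction blocks
    h = conv_block(h)                 # -> (100, 64)
pooled = h.reshape(25, 4, 64).mean(axis=1).ravel()  # average pool, flatten
features = np.concatenate([pooled, [0.5]])          # append normalized energy
for n in (256, 64):                   # two fully connected swish layers
    features = swish(features @ rng.normal(scale=0.05,
                                           size=(features.size, n)))
sigma_pred = features @ rng.normal(size=64)         # linear output neuron
```

In a real implementation the random weight draws would of course be replaced by trained parameters of a deep learning framework; the sketch only traces how the $100\times3$ swarm input and the scalar energy are combined into a single cross section prediction.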
DenseNet improves the information flow between the layers by introducing direct connections from any layer to all subsequent layers. This also leads to feature reuse throughout the network; hence, it requires fewer parameters than a CNN architecture to achieve similar performance (parameter efficiency). The concatenation of the feature maps of all preceding layers, $x_0, x_1, \ldots, x_{l-1}$, is provided as input to the $l^{th}$ layer: \begin{equation} x_l = H_l([x_0, x_1, \ldots, x_{l-1}]) \end{equation} The composite block $H_l$ consists of three successive layers: batch normalization (BN), followed by \textit{swish} activation and a convolution layer. We trained and tested different DenseNet layouts with 3-6 such composite blocks, where the number of convolution filters in each block was kept constant (32 filters) and zero padding was applied to each end of the input so as to keep the feature map's size fixed. The highest accuracy was achieved for the DenseNet with five composite blocks $H_l$ (Fig.~\ref{densenet_architecture}). We use a longer convolution kernel to begin with and gradually decrease its size in the subsequent layers. Concatenating all these different-length features allows the network to learn short-term as well as long-term trends from the swarm data; on the downside, the accumulated features substantially increase the model size. Hence, before passing the feature map, along with the energy input, to two fully connected layers of size 128 and 64 with \textit{swish} activation, we use a $1\times1$ convolution followed by average pooling to reduce its dimension and avoid overfitting. The output layer of each architecture is a single neuron corresponding to the cross section being predicted. Hence, for each architecture discussed above, we need to train three separate models to predict the elastic momentum transfer, ionization and excitation cross sections.
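The dense connectivity $x_l = H_l([x_0, \ldots, x_{l-1}])$ with five composite blocks can be sketched as below. The exact kernel sizes are illustrative only; the paper states merely that they decrease through the network.

```python
import tensorflow as tf
from tensorflow.keras import layers


def dense_block_stack(x, kernel_sizes=(11, 9, 7, 5, 3)):
    """Five composite blocks H_l (BN -> swish -> Conv1D, 32 filters each).
    Each block receives the concatenation of all preceding feature maps."""
    features = [x]
    for k in kernel_sizes:
        h = layers.Concatenate()(features) if len(features) > 1 else features[0]
        h = layers.BatchNormalization()(h)
        h = layers.Activation("swish")(h)
        # "same" padding mimics the zero padding that keeps the map size fixed
        h = layers.Conv1D(32, k, padding="same")(h)
        features.append(h)
    x = layers.Concatenate()(features)
    x = layers.Conv1D(32, 1)(x)        # 1x1 conv to shrink the channel count
    return layers.AveragePooling1D()(x)
```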
This also allows the network to learn feature maps pertinent to each cross section type. The separate training could be eliminated by enlarging the output layer to 3 neurons, one for each cross section type. However, this would severely inhibit the network's capability, as it would force the network to work with the same feature maps even for different types of cross section. Thus, we avoid simultaneous prediction of different cross section types. Purely from the machine learning perspective, neural networks are trained to improve their predictions by heavily penalizing large errors. Although this seems logical, Stokes~\textit{et al.}~\cite{stokes2019determining} found that the $L_2$ loss actually gave worse results than the $L_1$ loss for the inverse swarm problem: because of the inherent uncertainty in the solution of this inverse problem, persistently trying to fit these uncertain cross sections impeded the model's overall performance. Hence, we also choose the mean absolute error ($L_1$-norm), as it is less sensitive to large errors, but make a slight modification to improve the model's performance. As discussed earlier, zero-valued cross sections are replaced with a small threshold value $\delta=10^{-50}$ before performing data normalization. This is only an approximate value of $\delta$, and it would clearly be wrong to penalize the network if the predicted value lies in the range $[0, \delta]$. Thus, we use a custom $L_1$ loss function \begin{equation} L(y, \,\hat{y}) = \frac{1}{N}\sum_{i=1}^{N}| \mathit{max}(y_i, \,\Delta) - \hat{y_i}| \label{loss} \end{equation} where $N$ is the number of training examples, $y_i$ is the model's prediction, $\hat{y_i}$ is the target value and $\Delta$ is the log-normalized value of $\delta$ calculated using Eqs.~\ref{log1}~\&~\ref{log2}. This loss function clips the predicted output from below at $\Delta$, allowing the network's final prediction to be less than $\delta$ without any penalty.
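Eq.~\ref{loss} translates directly into a custom Keras loss. The sketch below is ours; the numeric value of the normalized floor $\Delta$ depends on the $y_{min}$/$y_{max}$ of the normalization and is a placeholder here.

```python
import tensorflow as tf

DELTA_LOG = -1.0  # log-normalized value of delta=1e-50 (placeholder;
                  # the true value follows from Eqs. (log1)-(log2))


def clipped_l1_loss(y_true, y_pred):
    """Custom L1 loss of Eq. (loss): the prediction is clipped from below
    at the normalized floor, so predicting anything below delta incurs no
    penalty when the target itself is the delta placeholder."""
    clipped = tf.maximum(y_pred, DELTA_LOG)
    return tf.reduce_mean(tf.abs(clipped - y_true))
```

With such a loss, compilation would look like `model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss=clipped_l1_loss)`.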
This slight modification significantly improves the predictions of the ionization and excitation cross sections. The training dataset is divided into batches of $10^3$ samples, and all models are trained by minimizing Eq.~\ref{loss} with the Adam optimizer~\cite{kingma2014adam}, using a learning rate of $10^{-4}$ and exponential decay rates for the first moment ($\beta_1$) and second moment ($\beta_2$) of $0.9$ and $0.999$, respectively. The models were implemented in Keras~(2.3.0)~\cite{chollet2015keras} with the GPU-enabled Tensorflow~(2.2.0) backend~\cite{tensorflow2015-whitepaper}. \subsubsection{Determining training duration} During the iterative training of a neural network, its error on the training set decreases continuously. The same does not hold for its generalization error (the error on unseen data), which actually begins to increase after a point in training (overfitting); hence, ideally we should stop training when the generalization error is smallest. Since the generalization error cannot be calculated explicitly, we estimate it using $k$-fold cross-validation, as simply dividing our data into training and validation sets is not feasible given the limited availability of actual cross sections. Each of the 3 groups of gas species formed earlier (Fig.~\ref{classes}) is randomly subdivided into two separate parts, giving a total of 6 parts. Of these 6 parts, one is kept as validation data and the remaining 5 form the training dataset.
The synthetic cross sections generated from two cross sections $\sigma_1$ and $\sigma_2$ using Eq.~\ref{interpolate} are split according to the following criterion: if both $\sigma_1$ and $\sigma_2$ belong to the newly formed training dataset, the artificial cross section is also added to the training dataset, whereas if both $\sigma_1$ and $\sigma_2$ belong to the newly formed validation dataset, it is added to the validation dataset. We then train the networks on this newly formed training dataset and monitor the validation error at each epoch. This process is repeated 6 times, with each of the 6 parts used exactly once as validation data. This ensures that no data is wasted and that our models get the opportunity to train on multiple train-validate splits. The six validation errors are averaged at each epoch, and this averaged validation error can be considered a close substitute for the generalization error. We thus take the optimal number of epochs to be the one at which the averaged validation error reaches its minimum. Later, while testing our models, we train them again on all $10^6$ examples (with no training-validation split) for this optimal number of epochs. \section{Results} All three architectures -- ANN, CNN and DenseNet -- were trained to separately predict the elastic momentum transfer, ionization and excitation cross sections using a total of $10^6$ examples generated by the process described earlier. These trained models were used to predict the unseen cross sections of nitrogen~(N\textsubscript{2}), argon~(Ar), helium~(He), fluorine~(F), methane~(CH\textsubscript{4}), oxygen~(O\textsubscript{2}) and sulphur hexafluoride~(SF\textsubscript{6}).
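The split rule for the interpolated cross sections can be sketched as a small helper. The handling of mixed parentage (one parent in each fold) is left unspecified in the text; excluding such samples from both sets, as done below, is our assumption.

```python
def assign_split(parent1, parent2, validation_parts):
    """Fold assignment for a synthetic cross section interpolated from
    sigma_1 (parent1) and sigma_2 (parent2): train if both parents are in
    the training fold, validation if both are in the validation fold."""
    in_val1 = parent1 in validation_parts
    in_val2 = parent2 in validation_parts
    if not in_val1 and not in_val2:
        return "train"
    if in_val1 and in_val2:
        return "validation"
    return "discard"  # mixed parentage: used in neither set (assumption)
```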
Here, we explicitly state that even though only one cross section type was predicted at a time, no assumption whatsoever was made about the values of the other cross section types while predicting a particular one, i.e., while estimating the elastic MTCS of a gas species, we do not provide any details about the values of its ionization or excitation cross sections. We test the trained models on such a wide range of gas species, with different physical and chemical properties, to ensure robust performance. Figs.~\ref{results_elastic}, \ref{results_ionization} and \ref{results_excitation} show the comparison of each architecture's estimates of the elastic momentum transfer, ionization and excitation cross sections, respectively, with the cross sections available on LXCat. For the gas species N\textsubscript{2}, Ar, He, O\textsubscript{2} and SF\textsubscript{6}, the cross sections (the actual cross sections, depicted in Figs.~\ref{results_elastic}, \ref{results_ionization} and \ref{results_excitation} by a black line) are sourced from the Biagi database~\cite{database2}, while those of F and CH\textsubscript{4} are taken from the BSR~\cite{database4} and Hayashi~\cite{database9} databases, respectively. These cross sections were used to generate the simulated swarm data of these gas species with BOLSIG+, which were then given as input to our trained models to predict the cross sections. Cross sections sourced from other databases available on LXCat are plotted on the same graphs to give an estimate of the inherent variation in the cross section values available in the literature from past research works. Fig.~\ref{results_total} shows a similar comparison for the total cross section, calculated by summing the elastic momentum transfer, ionization and excitation cross sections.
The predicted cross sections are again used to calculate the corresponding swarm coefficients using the BOLSIG+ solver, and their comparison with the swarm coefficients calculated using the actual cross sections is shown in Fig.~\ref{results_swarmimages}. \begin{figure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/elastic/N2.png} \caption{Nitrogen (N\textsubscript{2})} \label{elastic_N2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/elastic/Ar.png} \caption{Argon (Ar)} \label{elastic_Ar} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/elastic/He.png} \caption{Helium (He)} \label{elastic_He} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/elastic/F.png} \caption{Fluorine (F)} \label{elastic_F} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/elastic/CH4.png} \caption{Methane (CH\textsubscript{4})} \label{elastic_CH4} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/elastic/O2.png} \caption{Oxygen (O\textsubscript{2})} \label{elastic_O2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/elastic/SF6.png} \caption{Sulfur hexafluoride (SF\textsubscript{6})} \label{elastic_SF6} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/legend_vertical_extension.png} \label{elastic_legend} \end{subfigure} \caption{Prediction of elastic momentum transfer cross sections of various gas species. It is to be noted that in some cases ``Other CS data available on LXCat'' (shown in grey) consists of both elastic momentum transfer and total elastic scattering cross sections.
The grey lines simply provide an estimate of the inherent variation in the cross sections already available in the literature.} \label{results_elastic} \end{figure} \begin{figure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.87\linewidth]{images/results/ionization/N2.png} \caption{Nitrogen (N\textsubscript{2})} \label{ionization_N2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.87\linewidth]{images/results/ionization/Ar.png} \caption{Argon (Ar)} \label{ionization_Ar} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.87\linewidth]{images/results/ionization/He.png} \caption{Helium (He)} \label{ionization_He} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.87\linewidth]{images/results/ionization/F.png} \caption{Fluorine (F)} \label{ionization_F} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.87\linewidth]{images/results/ionization/CH4.png} \caption{Methane (CH\textsubscript{4})} \label{ionization_CH4} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.87\linewidth]{images/results/ionization/O2.png} \caption{Oxygen (O\textsubscript{2})} \label{ionization_O2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.87\linewidth]{images/results/ionization/SF6.png} \caption{Sulfur hexafluoride (SF\textsubscript{6})} \label{ionization_SF6} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/legend_vertical_extension.png} \label{ionization_legend} \end{subfigure} \caption{Prediction of ionization cross sections of various gas species. It is to be noted that in some cases ``Other CS data available on LXCat'' (shown in grey) consists of both individual ionization processes as well as sums of all ionization processes.
The grey lines simply provide an estimate of the inherent variation in the cross sections already available in the literature.} \label{results_ionization} \end{figure} \begin{figure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation/N2.png} \caption{Nitrogen (N\textsubscript{2})} \label{excitation_N2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation/Ar.png} \caption{Argon (Ar)} \label{excitation_Ar} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation/He.png} \caption{Helium (He)} \label{excitation_He} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation/F.png} \caption{Fluorine (F)} \label{excitation_F} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation/CH4.png} \caption{Methane (CH\textsubscript{4})} \label{excitation_CH4} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation/O2.png} \caption{Oxygen (O\textsubscript{2})} \label{excitation_O2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation/SF6.png} \caption{Sulfur hexafluoride (SF\textsubscript{6})} \label{excitation_SF6} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/legend_vertical_extension.png} \label{excitation_legend} \end{subfigure} \caption{Prediction of excitation cross sections of various gas species} \label{results_excitation} \end{figure} \begin{figure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/total/N2.png} \caption{Nitrogen (N\textsubscript{2})}
\label{total_N2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/total/Ar.png} \caption{Argon (Ar)} \label{total_Ar} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/total/He.png} \caption{Helium (He)} \label{total_He} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/total/F.png} \caption{Fluorine (F)} \label{total_F} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/total/CH4.png} \caption{Methane (CH\textsubscript{4})} \label{total_CH4} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/total/O2.png} \caption{Oxygen (O\textsubscript{2})} \label{total_O2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/total/SF6.png} \caption{Sulfur hexafluoride (SF\textsubscript{6})} \label{total_SF6} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/legend_vertical_extension.png} \label{total_legend} \end{subfigure} \caption{Predicted total cross sections of various gas species} \label{results_total} \end{figure} \begin{figure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.985\linewidth]{images/results/swarm/N2.png} \caption{Nitrogen (N\textsubscript{2})} \label{swarm_N2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.985\linewidth]{images/results/swarm/Ar.png} \caption{Argon (Ar)} \label{swarm_Ar} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.985\linewidth]{images/results/swarm/He.png} \caption{Helium (He)} \label{swarm_He} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering 
\includegraphics[width=0.985\linewidth]{images/results/swarm/F.png} \caption{Fluorine (F)} \label{swarm_F} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.985\linewidth]{images/results/swarm/CH4.png} \caption{Methane (CH\textsubscript{4})} \label{swarm_CH4} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.985\linewidth]{images/results/swarm/O2.png} \caption{Oxygen (O\textsubscript{2})} \label{swarm_O2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.985\linewidth]{images/results/swarm/SF6.png} \caption{Sulfur hexafluoride (SF\textsubscript{6})} \label{swarm_SF6} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{images/results/swarm/legend.png} \label{swarm_legend} \end{subfigure} \caption{Comparison of swarm parameters reconstructed using the actual cross sections available on LXCat vs. DenseNet's predicted cross sections} \label{results_swarmimages} \end{figure} As evident from Figs.~\ref{results_elastic} and \ref{results_ionization}, the predictions of both the elastic momentum transfer and ionization cross sections by all three neural network architectures agree reasonably well, over the entire energy range, with the experimentally measured cross sections obtained from LXCat. Further, to compare the performance of the different architectures quantitatively, we use three metrics: the mean absolute error (on the log-normalized scale), the coefficient of determination ($R^2$) and the Mean Absolute Relative Percentage Difference (\textit{MARPD}) \begin{equation} \mathit{MARPD} = \frac{1}{N}\sum_{i=1}^{N}\left| 100 \times \frac{y_i - \hat{y_i}}{|y_i|+|\hat{y_i}|}\right| \label{marpd} \end{equation} where $N$ is the number of data points, $y_i$ is the predicted value and $\hat{y_i}$ is the true value. The mean absolute error on the log-normalized scale depicts the error as seen by the model (test loss).
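Eq.~\ref{marpd} is straightforward to compute; a minimal sketch (the helper name is ours):

```python
import numpy as np


def marpd(y_pred, y_true):
    """Mean Absolute Relative Percentage Difference, Eq. (marpd):
    100 * |y - yhat| / (|y| + |yhat|), averaged over all points."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return np.mean(np.abs(100.0 * (y_pred - y_true)
                          / (np.abs(y_pred) + np.abs(y_true))))
```

By construction each term lies in $[0, 100]$, which is what makes the metric comparable across quantities with very different scales.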
Coefficient of determination ($R^2$) quantifies the degree of correlation between the actual and the predicted values. Its value lies between $(-\infty,\,1]$, with 1 representing complete dependency between the quantities being compared. Mean absolute relative percentage difference provides a standardized error value, which is not only comparable but also more interpretable even to those unfamiliar with the measurement scale of electron cross sections. These three metrics collectively provide a better understanding of network's performance compared to what a single metric alone provides. \newcommand\T{\rule{0pt}{3ex}} \newcommand\B{\rule[-1.2ex]{0pt}{0pt}} \renewcommand{\arraystretch}{1.05} \begin{table} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|c c c|c c c|c c c|} \hline \multirow{2}{*}{Species} & \multirow{2}{*}{Cross section} & \multicolumn{3}{c|}{ANN*} & \multicolumn{3}{c|}{CNN} & \multicolumn{3}{c|}{DenseNet}\T\B\\ \cline{3-11} & & MAE & R$^2$ & MARPD & MAE & R$^2$ & MARPD & MAE & R$^2$ & MARPD \B\T\\ \hline \multirow{3}{*}{N\textsubscript{2}} & Elastic & $0.0285$ & $0.578$ & $7.17\%$ & $0.0224$ & $0.615$ & $5.65\%$ & $0.0186$ & $0.637$ & $4.67\%$ \T\\ & Ionization & $0.0164$ & $0.991$ & $5.65\%$ & $0.0083$ & $0.991$ & $3.04\%$ & $0.0080$ & $0.996$ & $2.99\%$ \\ & Total & $0.0302$ & $0.468$ & $7.57\%$ & $0.0239$ & $0.504$ & $5.99\%$ & $0.0205$ & $0.567$ & $5.16\%$ \B\\ \hline \multirow{3}{*}{Ar} & Elastic & $0.0661$ & $0.724$ & $15.95\%$ & $0.0584$ & $0.662$ & $14.23\%$ & $0.0315$ & $0.931$ & $7.90\%$ \T\\ & Ionization & $0.0407$ & $0.867$ & $14.99\%$ & $0.0165$ & $0.968$ & $5.52\%$ & $0.0079$ & $0.994$ & $2.93\%$ \\ & Total & $0.0597$ & $0.722$ & $14.46\%$ & $0.0551$ & $0.659$ & $13.46\%$ & $0.0274$ & $0.935$ & $6.88\%$ \B \\ \hline \multirow{3}{*}{He} & Elastic & $0.0067$ & $0.986$ & $1.70\%$ & $0.0048$ & $0.997$ & $1.21\%$ & $0.0032$ & $0.999$ & $0.81\%$ \T\\ & Ionization & $0.0125$ & $0.970$ & $4.67$\% & $0.0081$ & $0.989$ & $2.62\%$ & 
$0.0085$ & $0.975$ & $3.17\%$ \\ & Total & $0.0062$ & $0.985$ & $1.58\%$ & $0.0043$ & $0.997$ & $1.09\%$ & $0.0045$ & $0.998$ & $1.15\%$ \B\\ \hline \multirow{3}{*}{F} & Elastic & $0.0205$ & $0.803$ & $5.18\%$ & $0.0143$ & $0.931$ & $3.62\%$ & $0.0104$ & $0.986$ & $2.65\%$ \T\\ & Ionization & $8.8971$ & $-964$ & $70.51\%$ & $0.1307$ & $-10.7$ & $35.65\%$ & $0.0411$ & $0.836$ & $12.97\%$ \\ & Total & $0.0189$ & $0.814$ & $4.78\%$ & $0.0148$ & $0.929$ & $3.74\%$ & $0.0102$ & $0.987$ & $2.56\%$ \B\\ \hline \multirow{3}{*}{CH\textsubscript{4}} & Elastic & $0.0293$ & $0.872$ & $7.39\%$ & $0.0198$ & $0.978$ & $5.01\%$ & $0.0165$ & $0.980$ & $4.17\%$ \T\\ & Ionization & $0.0332$ & $0.902$ & $11.29\%$ & $0.0139$ & $0.995$ & $4.87\%$ & $0.0180$ & $0.953$ & $6.56\%$ \\ & Total & $0.0519$ & $0.833$ & $12.85\%$ & $0.0276$ & $0.978$ & $6.95\%$ & $0.0183$ & $0.978$ & $4.64\%$ \B\\ \hline \multirow{3}{*}{O\textsubscript{2}} & Elastic & $0.0129$ & $0.889$ & $3.28\%$ & $0.0104$ & $0.948$ & $2.61\%$ & $0.0079$ & $0.946$ & $2.01\%$ \T\\ & Ionization & $0.0630$ & $0.719$ & $20.04\%$ & $0.4706$ & $0.991$ & $7.46\%$ & $0.0252$ & $0.980$ & $8.11\%$ \\ & Total & $0.0165$ & $-0.281$ & $4.16\%$ & $0.0137$ & $0.619$ & $3.47\%$ & $0.0112$ & $0.773$ & $2.84\%$ \B\\ \hline \multirow{3}{*}{SF\textsubscript{6}} & Elastic & $0.0178$ & $0.973$ & $4.50\%$ & $0.0169$ & $0.980$ & $4.27\%$ & $0.0125$ & $0.986$ & $3.17\%$ \T\\ & Ionization & $0.0385$ & $0.951$ & $13.85\%$ & $0.0173$ & $0.967$ & $6.05\%$ & $0.0156$ & $0.975$ & $5.78\%$ \\ & Total & $0.0154$ & $0.980$ & $3.90\%$ & $0.0169$ & $0.982$ & $4.82\%$ & $0.0122$ & $0.987$ & $3.08\%$ \B\\ \hline \end{tabular}} \caption{Performance metrics of all architectures implemented in this study. 
*ANN architecture adopted from~\cite{stokes2019determining}} \label{table_results} \end{table} \renewcommand{\arraystretch}{1.05} \begin{table} \centering \resizebox{0.85\textwidth}{!}{% \begin{tabular}{|c|c|c c|c c|c c|} \hline \multirow{2}{*}{Species} & \multirow{2}{*}{Swarm Coefficient} & \multicolumn{2}{c|}{ANN*} & \multicolumn{2}{c|}{CNN} & \multicolumn{2}{c|}{DenseNet} \T\B\\ \cline{3-8} & & R$^2$ & MARPD & R$^2$ & MARPD & R$^2$ & MARPD \T\B\\ \hline \multirow{3}{*}{N\textsubscript{2}} & Mobility & $0.592$ & $16.61\%$ & $0.970$ & $6.38\%$ & $0.915$ & $6.41\%$ \T\\ & Diffusion & $0.868$ & $19.15\%$ & $0.955$ & $5.71\%$ & $0.989$ & $6.18\%$ \\ & Townsend Ionization & $0.998$ & $6.47\%$ & $0.998$ & $6.37\%$ & $0.999$ & $5.05\%$ \B\\ \hline \multirow{3}{*}{Ar} & Mobility & $0.877$ & $10.02\%$ & $0.901$ & $10.99\%$ & $0.944$ & $3.44\%$ \T\\ & Diffusion & $0.673$ & $8.54\%$ & $0.726$ & $8.34\%$ & $0.901$ & $4.77\%$ \\ & Townsend Ionization & $0.967$ & $20.93\%$ & $0.989$ & $19.41\%$ & $0.999$ & $14.97\%$ \B\\ \hline \multirow{3}{*}{He} & Mobility & $0.999$ & $0.78\%$ & $0.999$ & $0.74\%$ & $0.999$ & $0.72\%$ \T\\ & Diffusion & $0.990$ & $1.85\%$ & $0.998$ & $1.35\%$ & $0.997$ & $1.49\%$ \\ & Townsend Ionization & $0.988$ & $14.48\%$ & $0.985$ & $13.12\%$ & $0.996$ & $2.51\%$ \B\\ \hline \multirow{3}{*}{F} & Mobility & $0.906$ & $6.54\%$ & $0.996$ & $3.88\%$ & $0.999$ & $6.043\%$ \T\\ & Diffusion & $-0.75$ & $17.96\%$ & $0.356$ & $12.03\%$ & $0.987$ & $7.78\%$ \\ & Townsend Ionization & $0.098$ & $32.24\%$ & $0.658$ & $26.67\%$ & $0.982$ & $17.58\%$ \B\\ \hline \multirow{3}{*}{CH\textsubscript{4}} & Mobility & $0.712$ & $12.08\%$ & $0.716$ & $8.01\%$ & $0.966$ & $7.26\%$ \T\\ & Diffusion & $0.001$ & $21.83\%$ & $0.662$ & $9.97\%$ & $0.933$ & $7.41\%$ \\ & Townsend Ionization & $0.989$ & $12.43\%$ & $0.999$ & $6.25\%$ & $0.995$ & $9.29\%$ \B\\ \hline \multirow{3}{*}{O\textsubscript{2}} & Mobility & $0.907$ & $8.38\%$ & $0.967$ & $8.26\%$ & $0.984$ & $4.39\%$ \T\\ & 
Diffusion & $0.877$ & $7.24\%$ & $0.862$ & $9.58\%$ & $0.948$ & $4.92\%$ \\ & Townsend Ionization & $0.967$ & $12.88\%$ & $0.998$ & $12.78\%$ & $0.995$ & $3.13\%$ \B\\ \hline \multirow{3}{*}{SF\textsubscript{6}} & Mobility & $0.923$ & $5.47\%$ & $0.972$ & $3.09\%$ & $0.990$ & $1.71\%$ \T\\ & Diffusion & $0.978$ & $5.69\%$ & $0.990$ & $3.79\%$ & $0.998$ & $2.60\%$ \\ & Townsend Ionization & $0.996$ & $6.02\%$ & $0.999$ & $5.17\%$ & $0.998$ & $0.98\%$ \B\\ \hline \end{tabular}} \caption{Performance metrics of the swarm coefficients reconstructed from the predictions of all architectures implemented in this study. *ANN architecture adopted from~\cite{stokes2019determining}} \label{results_swarm} \end{table} From the performance metrics (Table~\ref{table_results}), we can safely conclude that the DenseNet architecture performs significantly better than CNN, which in turn yields better results than the ANN architecture, for predicting the elastic momentum transfer cross sections over the entire energy domain for all the gas species considered in our study. A common trend across all gas species in the prediction of the elastic MTCS is that all three architectures predict the cross section with significantly higher accuracy in the $30-100$ eV range. To further comment on the accuracy of the architectures, we analysed the prediction trends of individual gas species in detail. Nitrogen's elastic MTCS has a characteristic peak between $2-2.5$ eV that is not present in any of the other gas species in the training data; thus, both ANN and CNN fail to predict this peak. The peak arises from a quantum mechanical effect specific to N\textsubscript{2} in this energy range, and it may be difficult for the network to learn it. DenseNet, on the other hand, does notably better in predicting the presence of this peak, yet its estimate of the energy at which it occurs is off by $\sim0.5$ eV.
Likewise, argon has a Ramsauer-Townsend minimum whose value is significantly lower than in all other gas species in the training data (another quantum mechanical effect). Still, DenseNet is able to predict the presence of the Ramsauer-Townsend minimum at the correct energy, erring only slightly in its magnitude, whereas both CNN and ANN fail even to detect the presence of this minimum. Similar trends are observed in the prediction of the elastic MTCS of all other gas species, wherein DenseNet determines the characteristic local maxima/minima and their locations with remarkably higher accuracy than the ANN or CNN architectures. We believe our use of convolution kernels of varying sizes allowed the DenseNet architecture to better capture the trends in the swarm data, which in turn led to this enhanced performance. Also, layers in the DenseNet architecture receive additional supervision from the loss function through shorter connections, alleviating the vanishing-gradient problem and improving the flow of information and gradients throughout the network. This deep supervision provided by the DenseNet could also be one reason for the improved accuracy of the predicted cross sections. For predicting the ionization cross sections, DenseNet and CNN give comparably good results, both outperforming ANN over the entire energy domain according to the performance metrics. Moreover, even though no prior information about the threshold energy of the ionization cross sections was provided to any network, both CNN and DenseNet were able to predict the threshold energies of all gas species with an accuracy of up to one decimal place. Specifically for fluorine, all three models struggle somewhat to determine the ionization cross section as accurately as in the other predictions.
This is purely because the ionization cross section of fluorine is unusually low compared to the ionization cross sections of the other gas species in the training data and can thus be considered an outlier. The excitation cross sections predicted by all architectures differ substantially from the actual cross sections. A possible reason is that the swarm data themselves carry less information about the excitation cross sections than about the elastic momentum transfer and ionization cross sections. This assumption is backed up by a comparison of the two sets of swarm parameters depicted in Fig.~\ref{results_swarmimages}: the first set is calculated from the predicted elastic momentum transfer, ionization and excitation cross sections, while the second set is calculated using the actual ones. Another point to note is that only the lowest-threshold excitation process is used in training, and in many cases this is far smaller than the sum of all excitation cross sections. Although the predicted excitation cross sections differ substantially from the actual ones, this discrepancy is not replicated in the comparison of swarm parameters, whose metrics (Table~\ref{results_swarm}) are almost on par with those of the elastic momentum transfer and ionization predictions. Thus, we can attribute the inaccuracy of the predicted excitation cross sections, at least in part, to the lack of information about them in the swarm coefficients. However, this requires a more detailed investigation in the future. \subsection{Uncertainty Quantification} Solutions obtained using deep learning methods have some inherent uncertainty. Quantifying this uncertainty helps us determine the reliability of the predictions.
Moreover, the mapping of swarm coefficients to cross sections is non-unique: multiple cross sections map to the same swarm coefficients, and the probability distribution of cross sections generated by the uncertainty quantification (UQ) allows us to sample all these plausible solutions. \begin{figure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/elastic_uncertainty/N2.png} \caption{Nitrogen (N\textsubscript{2})} \label{elastic_uncertainty_N2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/elastic_uncertainty/Ar.png} \caption{Argon (Ar)} \label{elastic_uncertainty_Ar} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/elastic_uncertainty/He.png} \caption{Helium (He)} \label{elastic_uncertainty_He} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/elastic_uncertainty/F.png} \caption{Fluorine (F)} \label{elastic_uncertainty_F} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/elastic_uncertainty/CH4.png} \caption{Methane (CH\textsubscript{4})} \label{elastic_uncertainty_CH4} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/elastic_uncertainty/O2.png} \caption{Oxygen (O\textsubscript{2})} \label{elastic_uncertainty_O2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/elastic_uncertainty/SF6.png} \caption{Sulfur hexafluoride (SF\textsubscript{6})} \label{elastic_uncertainty_SF6} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/legend_uncertainty.png} \label{elastic_uncertainty_legend} \end{subfigure} \caption{Uncertainty in prediction of elastic
cross sections of various gas species} \label{results_elastic_uncertainty} \end{figure} \begin{figure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/ionization_uncertainty/N2.png} \caption{Nitrogen (N\textsubscript{2})} \label{ionization_uncertainty_N2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/ionization_uncertainty/Ar.png} \caption{Argon (Ar)} \label{ionization_uncertainty_Ar} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/ionization_uncertainty/He.png} \caption{Helium (He)} \label{ionization_uncertainty_He} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/ionization_uncertainty/F.png} \caption{Fluorine (F)} \label{ionization_uncertainty_F} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/ionization_uncertainty/CH4.png} \caption{Methane (CH\textsubscript{4})} \label{ionization_uncertainty_CH4} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/ionization_uncertainty/O2.png} \caption{Oxygen (O\textsubscript{2})} \label{ionization_uncertainty_O2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/ionization_uncertainty/SF6.png} \caption{Sulfur hexafluoride (SF\textsubscript{6})} \label{ionization_uncertainty_SF6} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/legend_uncertainty.png} \label{ionization_uncertainty_legend} \end{subfigure} \caption{Uncertainty in prediction of ionization cross sections of various gas species} \label{results_ionization_uncertainty} \end{figure} \begin{figure} \begin{subfigure}{0.5\textwidth} \centering 
\includegraphics[width=0.927\linewidth]{images/results/excitation_uncertainty/N2.png} \caption{Nitrogen (N\textsubscript{2})} \label{excitation_uncertainty_N2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation_uncertainty/Ar.png} \caption{Argon (Ar)} \label{excitation_uncertainty_Ar} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation_uncertainty/He.png} \caption{Helium (He)} \label{excitation_uncertainty_He} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation_uncertainty/F.png} \caption{Fluorine (F)} \label{excitation_uncertainty_F} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation_uncertainty/CH4.png} \caption{Methane (CH\textsubscript{4})} \label{excitation_uncertainty_CH4} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation_uncertainty/O2.png} \caption{Oxygen (O\textsubscript{2})} \label{excitation_uncertainty_O2} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.927\linewidth]{images/results/excitation_uncertainty/SF6.png} \caption{Sulfur hexafluoride (SF\textsubscript{6})} \label{excitation_uncertainty_SF6} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.875\linewidth]{images/results/legend_uncertainty.png} \label{excitation_uncertainty_legend} \end{subfigure} \caption{Uncertainty in prediction of excitation cross sections of various gas species. 
``Actual'' curves shown here correspond to only a part (the lowest-energy process) of the real excitation.} \label{results_excitation_uncertainty} \end{figure} Bayesian neural networks (BNNs)~\cite{mackay1992practical} predict the complete probability distribution of the output variable and are hence well suited to determining model uncertainty. However, BNNs are rarely used in practice because of their high computational cost. Monte Carlo Dropout~\cite{gal2016dropout} is therefore commonly used as an approximation to Bayesian inference, and we apply it to quantify the uncertainty in our inverse solution. It is implemented by first replicating the DenseNet architecture outlined previously and then introducing a dropout rate of $20\%$ in the dense layers. These neurons are disabled randomly during both the training and the testing phase, so every time an input is passed to the model, a different output is predicted, effectively drawing a sample from a probability distribution. We estimate this distribution by sampling a total of $10^4$ predicted cross sections; the results are shown in Figs.~\ref{results_elastic_uncertainty}, \ref{results_ionization_uncertainty} and \ref{results_excitation_uncertainty}, which depict the confidence intervals in which the cross section values might lie. For all gas species except helium, we observe a general trend that the model has higher uncertainty in determining the elastic MTCS at low energies ($0.1$--$0.8$ eV) than at high energies. Conversely, the model has higher uncertainty in predicting the ionization cross section at higher energies ($>4000$ eV). Additionally, we find that the model is very confident about the predicted ionization threshold energy but less certain in determining the peak value of the ionization cross section, even though it gives nearly accurate results for both of these quantities.
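The Monte Carlo Dropout procedure described above can be sketched in a few lines. The following is a minimal illustration with a toy two-layer network and made-up weights; the actual model in the paper is the DenseNet described earlier, and the layer sizes, weights and input here are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the trained network: two dense layers with
# hypothetical weights (the paper's model is a DenseNet; this is only a sketch).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def forward_with_dropout(x, p_drop=0.2):
    """One stochastic forward pass: dropout stays active at test time."""
    h = np.maximum(x @ W1, 0.0)              # dense layer + ReLU
    mask = rng.random(h.shape) > p_drop      # randomly disable 20% of neurons
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    return (h @ W2).item()

# Repeated stochastic passes sample from an approximate predictive
# distribution; the paper draws 10^4 samples per input.
x = rng.normal(size=8)
samples = np.array([forward_with_dropout(x) for _ in range(10_000)])
mean = samples.mean()
lo, hi = np.percentile(samples, [2.5, 97.5])  # 95% confidence interval
print(f"mean={mean:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

The spread of the sampled predictions plays the role of the confidence bands shown in the uncertainty figures.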
Further, the uncertainty in predicting the excitation cross sections is higher than for both the elastic momentum transfer and the ionization cross sections; as suggested earlier, the limited information content about the excitation cross sections in the swarm data could be one reason for this higher uncertainty. \section{Conclusion} We have presented a data-driven approach to obtain cross sections from the corresponding swarm data using different deep learning models trained on synthetic data generated from cross sections available on LXCat. We have demonstrated the feasibility and robustness of this deep-learning-based approach by testing the trained networks on predicting the elastic momentum transfer, ionization and excitation cross sections of various gas species with diverse physical and chemical properties, and found the predictions to be consistent with the reference cross sections for elastic momentum transfer and ionization. The swarm coefficients calculated using the predicted cross sections also agree reasonably well with those calculated using the cross section sets for each species from LXCat (considering only the lowest-energy excitation process). We have quantitatively analysed the performance of three different neural network architectures (ANN, CNN and DenseNet) in finding the solution to the inverse swarm problem and found that the Dense Convolutional Network (DenseNet), owing to its ability to effectively extract both long- and short-term trends from the swarm data, significantly outperformed the artificial neural networks used in previous works, as indicated by the ensemble of metrics used to assess the accuracy of each architecture. In summary, we have tested our models on a wide range of gas species, used more performance metrics for statistical analysis and determined cross sections over a greater energy range than previous works based on ANNs.
Finally, the uncertainty quantification of the model provides a good estimate of the probability distribution of the cross sections, from which all the physically plausible solutions of this inverse swarm problem can be sampled. Based on our results, we can conclusively say that CNN-based models, particularly DenseNet, are better than ANN models at accurately determining cross sections from swarm data. Interestingly, unlike ANNs, DenseNet could also predict characteristic peaks in specific energy ranges present in some gas species such as nitrogen and argon; these peaks are due to quantum mechanical effects, and analysing them normally requires domain expertise. These significant improvements in prediction accuracy and pattern recognition with DenseNet should give the LTP community the confidence to adopt such data-driven approaches. However, additional work is needed before actual (experimental) swarm measurements can be used as input to such models. Many real gas species have multiple excitation cross sections that all affect the corresponding swarm coefficients, but our proposed model is trained on swarm data computed using only a single excitation cross section. Future work should address this issue. The performance of deep learning models is highly dependent on the training data fed to them. In this work, we have generated synthetic training data by interpolating actual cross sections, which have been categorized based on the characteristics of the elastic momentum transfer cross sections. This approach is sufficient to provide the model with a large amount of training data but clearly limits new trends in the synthetic data. Thus, we believe the performance of these neural networks would further improve with a more sophisticated synthetic data generation scheme that can provide artificial cross sections which are physically plausible yet have unique trends of their own.
One such possible approach is to use Generative Adversarial Networks (GANs), a machine learning framework that extracts complex features from a dataset and, taking random noise as input, generates completely new data based on those features. Work on improving the quality of the synthetic data with the use of GANs is currently underway. \section*{Acknowledgment} The authors would like to thank Dr. Leanne Pitchford, emeritus senior research scientist at LAPLACE Laboratory, CNRS, Toulouse, France, for discussions about the inverse problem in the context of swarm parameters, and for her valuable comments after careful reading of this manuscript. \section*{References} \bibliographystyle{ieeetr}
\section{Notation} \subsection{$k$-th order distribution induced by $P_{X^n}$} \label{subsection kth order distribution induced} For a random $n$-tuple $X^n$, we let $Q_{X}^{\sf{ave}, (n)}$ denote the distribution of the random variable obtained by choosing one of the $n$ components of $X^n$ at random. I.e., for $J \sim \mbox{Unif} \{ 1, 2, \ldots, n \} $ and independent of $X^n$, $Q_{X}^{\sf{ave}, (n)}$ is the law of $X_J$. In other words, $Q_{X}^{\sf{ave}, (n)}$ is the ``average'' of the marginal laws $\{ P_{X_i} \}_{i=1}^n$, and hence the superscript. We write $Q_{X}^{\sf{ave}, (n)} [P_{X^n}]$ when we want to make its dependence on the law of $X^n$ explicit. Similarly, for $k \leq n$, and $J \sim \mbox{Unif} \{ 1, 2, \ldots, n - k + 1 \} $ independent of $X^n$, we let $Q_{X^k}^{\sf{ave}, (n)}$ denote the law of the $k$-tuple $X_{J}^{J+k-1}$. In other words, $Q_{X^k}^{\sf{ave}, (n)}$ is the law obtained by averaging the marginal $k$-tuple laws $\left\{ P_{X_i^{i+k-1}} \right\}_{i=1}^{n - k + 1}$. We write $Q_{X^k}^{\sf{ave}, (n)} [P_{X^n}]$ when we want to make its dependence on $P_{X^n}$ explicit. We extend this notation in the obvious way to $Q_{X,Y}^{\sf{ave}, (n)} = Q_{X,Y}^{\sf{ave}, (n)} [ P_{X^n, Y^n} ] $ and $Q_{X^k,Y^k}^{\sf{ave}, (n)} = Q_{X^k,Y^k}^{\sf{ave}, (n)} [ P_{X^n, Y^n} ] $. \subsection{$k$-th order empirical distribution induced by $x^n$} For a fixed $n$-tuple $x^n$, we let $Q_{X}^{\sf{emp}, (n)}[x^n]$ denote the empirical (first-order) distribution that it induces. I.e., $Q_{X}^{\sf{emp}, (n)} [ x^n]$ is a PMF on the finite alphabet $\mathcal{X}$ in which the components of $x^n$ reside, with $Q_{X}^{\sf{emp}, (n)} [x^n](a)$ denoting the probability it assigns to $a \in \mathcal{X}$, namely the fraction of times the symbol $a$ appears along the $n$-tuple $x^n$.
To simplify the notation, we suppress the dependence on $x^n$, using $Q_{X}^{\sf{emp}, (n)}$ when $x^n$ should be clear from the context. Similarly, for $k \leq n$, $Q_{X^k}^{\sf{emp}, (n)} [x^n]$ will denote the empirical distribution of $k$-tuples along $x^n$. I.e., $Q_{X^k}^{\sf{emp}, (n)} [ x^n]$ is a PMF of a $k$-tuple, with $Q_{X^k}^{\sf{emp}, (n)} [x^n](a^k)$ denoting the probability it assigns to $a^k \in \mathcal{X}^k$, namely the fraction of times the $k$-tuple $a^k$ appears along the $n$-tuple $x^n$. Here too we suppress the dependence on $x^n$ and write $Q_{X^k}^{\sf{emp}, (n)}$ when $x^n$ should be clear from the context. We extend this notation to $Q_{X,Y}^{\sf{emp}, (n)} = Q_{X,Y}^{\sf{emp}, (n)} [ x^n, y^n ]$ and $Q_{X^k,Y^k}^{\sf{emp}, (n)} = Q_{X^k,Y^k}^{\sf{emp}, (n)} [ x^n, y^n ]$ in the obvious ways. \subsection{On the relationship between $Q_{X}^{\sf{ave}, (n)}$ and $Q_{X}^{\sf{emp}, (n)}$} When $X^n$ is stochastic, so is $Q_{X^k}^{\sf{emp}, (n)} = Q_{X^k}^{\sf{emp}, (n)} [X^n]$, and for any $a^k \in \mathcal{X}^k$ we have \begin{equation} \label{eq: exp of emp dist is kth order dist} E \left[ Q_{X^k}^{\sf{emp}, (n)} (a^k) \right] = Q_{X^k}^{\sf{ave}, (n)} (a^k) . \end{equation} Note further that, letting $\stackrel{n \rightarrow \infty}{\Longrightarrow}$ denote convergence in distribution, in any scenario where \[ Q_{X^k}^{\sf{emp}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} \mu_{X^k} \ \ \ a.s. \] for some PMF on $k$-tuples $\mu_{X^k}$, we also have, by (\ref{eq: exp of emp dist is kth order dist}) and the bounded convergence theorem, \[ Q_{X^k}^{\sf{ave}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} \mu_{X^k} . \] Thus, convergence of $Q_{X^k}^{\sf{emp}, (n)}$ is stronger than (implies) convergence of $Q_{X^k}^{\sf{ave}, (n)}$.
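To make the notation concrete, the following small sketch (with a made-up binary $5$-tuple; nothing here comes from the paper) computes the $k$-th order empirical distribution $Q_{X^k}^{\sf{emp}, (n)}[x^n]$ as the fraction of length-$k$ windows of $x^n$ equal to each $k$-tuple:

```python
from collections import Counter
from fractions import Fraction

def emp_dist(x, k=1):
    """Empirical distribution of k-tuples along the n-tuple x:
    the probability of a^k is the fraction of windows x_i^{i+k-1}
    (i = 1, ..., n-k+1) that equal a^k."""
    n = len(x)
    windows = [tuple(x[i:i + k]) for i in range(n - k + 1)]
    counts = Counter(windows)
    return {a: Fraction(c, n - k + 1) for a, c in counts.items()}

x = (0, 1, 1, 0, 1)
q1 = emp_dist(x, k=1)   # fraction of each symbol
q2 = emp_dist(x, k=2)   # fraction of each adjacent pair
print(q1[(1,)])          # 3/5
print(q2[(0, 1)])        # 1/2
```

Averaging these (random) empirical distributions over realizations of $X^n$ recovers $Q_{X^k}^{\sf{ave}, (n)}$, which is the content of the expectation identity above.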
\section{Samples from the posterior via noisy data compression} Consider the canonical setting where the components of the clean, noisy, and reconstructed sources all take values in the same finite $M$-ary alphabet $\mathcal{A} = \{0,1, \ldots, M-1\}$. The noise-free source $\mathbf{X} = (X_1, X_2, \ldots )$ is stationary ergodic and corrupted by additive ``white'' noise. That is, we assume the components of the noisy observation process $\mathbf{Z}$ are given by \begin{equation} \label{eq: noise model} Z_i = X_i + N_i , \end{equation} where the $N_i$s are IID$\sim N$, independent (collectively) of $\mathbf{X}$, and addition in (\ref{eq: noise model}) is in the mod-$M$ sense. We assume the distribution of the noise is ``non-singular'' in the sense that the Toeplitz matrix whose rows are shifted versions of the row vector representing the PMF of $N$ is invertible, a benign condition guaranteeing a one-to-one correspondence between the distributions of the noise-free and noisy sources \cite{dudepaper}. We construct a difference distortion measure $\rho: \mathcal{A} \rightarrow [0, \infty]$ from the distribution of the noise according to \begin{equation} \label{eq: distortion induced by noise} \rho (a) = \log \frac{1}{\Pr (N=a)} . \end{equation} Good lossy compression of the noisy source $\mathbf{Z}$ under this distortion criterion at distortion level equal to the entropy of the noise turns out to result in reconstructions that are ``samples from the posterior'' of the noise-free given the noisy source. In particular, the finite dimensional distributions of these reconstructions converge to those of the underlying noise-free source. We state this phenomenon quantitatively and rigorously in the theorem below, which follows directly from Part 3 of \cite[Theorem 9]{ordentlichweissman} combined with \cite[Theorem 4]{ordentlichweissman}. 
\begin{theorem}[\cite{ordentlichweissman}] \label{theorem: emp dist of good codes for noisy data} Suppose $\mathbf{X}$ is a stationary ergodic process. Let $\{ Y^n \}_{n \geq 1}$ be the reconstructions associated with a good code for the source $\mathbf{Z}$ with respect to the difference distortion function in (\ref{eq: distortion induced by noise}), at distortion level $H(N)$. For any finite $k$ and $n \geq k$ let $Q_{Z^k,Y^k}^{\sf{ave}, (n)} = Q_{Z^k,Y^k}^{\sf{ave}, (n)} [ P_{Z^n, Y^n} ]$ and $Q_{Z^k,Y^k}^{\sf{emp}, (n)} = Q_{Z^k,Y^k}^{\sf{emp}, (n)} [Z^n, Y^n]$ denote, respectively, the $k$-th order joint distribution induced by $P_{Z^n, Y^n}$ and the (random) $k$-th order joint distribution induced by the realized $(Z^n, Y^n)$. Then \begin{equation} Q_{Z^k,Y^k}^{\sf{emp}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} P_{Z^k, X^k} \ \ a.s. \end{equation} and a fortiori \begin{equation} Q_{Z^k,Y^k}^{\sf{ave}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} P_{Z^k, X^k} \end{equation} where $P_{Z^k, X^k} $ is the joint $k$th-order distribution of the noisy and original noise-free source. \end{theorem} \section{Application for Learning from Noisy Examples} Consider the standard framework of learning from $M$ labeled IID examples $\{ ( X^{n , (i)} , L_i ) \}_{i=1}^M$. The $i$th example comprises the data point/signal/image $X^{n , (i)}$, which is an $n$-tuple with $\mathcal{A}$-valued components, and the label $L_i$ takes values in a finite alphabet of labels $\mathcal{L}$. Suppose now that each of the components of each of the $X^{n , (i)}$s is added an IID noise component, as in (\ref{eq: noise model}). 
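The ingredients of the setting above are easy to compute explicitly. The following sketch, under an assumed noise PMF (the values are illustrative, not from the paper), builds the distortion measure $\rho(a) = \log \frac{1}{\Pr(N=a)}$, checks the non-singularity condition via the matrix of cyclic (mod-$M$) shifts of the noise PMF, and computes the target distortion level $H(N)$:

```python
import numpy as np

M = 4
noise_pmf = np.array([0.7, 0.1, 0.1, 0.1])  # assumed PMF of N on {0, ..., M-1}

# Distortion measure rho(a) = log 1/Pr(N = a); infinite where Pr(N = a) = 0.
rho = np.log(1.0 / noise_pmf)

# Non-singularity: the matrix whose rows are the cyclic shifts of the PMF
# (circulant, since addition is mod M) must be invertible, giving a
# one-to-one map between clean and noisy source distributions.
C = np.array([np.roll(noise_pmf, s) for s in range(M)])
nonsingular = abs(np.linalg.det(C)) > 1e-12
print(nonsingular)   # True for this PMF

# Distortion level to target in the lossy compression: the noise entropy H(N),
# in nats to match the natural log used in rho.
H_N = -(noise_pmf * np.log(noise_pmf)).sum()
```

A good code for the noisy source under this $\rho$ at distortion level `H_N` is then, per the theorem, a posterior sampler for the clean source.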
Theorem \ref{theorem: emp dist of good codes for noisy data} tells us that if, for every label value $\ell \in \mathcal{L}$, we jointly compress all the data associated with that label $\{ X^{n , (i)} \}_{ 1 \leq i \leq M : L_i = \ell }$, using a good code for the distortion measure and level specified in that theorem, then the associated reconstructions $\{ Y^{n , (i)} \}_{ 1 \leq i \leq M : L_i = \ell }$ will have an empirical distribution converging to the distribution of the $X^{n , (i)}$s associated with (conditioned on) label $\ell$ in the limit of many examples $M \rightarrow \infty$. Furthermore, when the generic $X^n$ governing the data consists of the first $n$ components of a stationary ergodic process, even if we merely employ good compressors separately on each example, Theorem \ref{theorem: emp dist of good codes for noisy data} guarantees that the reconstructions will tend to be loyal to the original data in the sense of their finite dimensional distributions, when $n$ is large. The assumption of a stationary ergodic process governing the examples may be natural in many applications, such as when the $X^{n , (i)}$s represent audio signals. Things also carry over naturally to multi-dimensionally indexed data, such as when the $X^{m \times n , (i)}$s represent images sampled from the generic $X^{m \times n}$, representing the $m \times n$ grid of samples from a (spatially) stationary ergodic random field. \section{On The Amount and Type of Noise to Add} We consider scenarios where the added noise is a design feature. Adding the right kind of noise could simultaneously boost privacy, robustness, and accuracy of the learning, while substantially reducing the cost (in bits) of storing the (training and testing) data. CAN FILL IN QUANTITATIVELY AFTER WE DISCUSS. \bibliographystyle{IEEEtranN} \section{Introduction} \label{intro} One of the most crucial factors contributing to the recent success of machine learning is the wide availability of user data \cite{jordan2015machine}. However, relying on such data brings several challenges in storage and user privacy.
While privacy-preserving methods for machine learning have been studied extensively, efficient storage of data for learning (a major problem even for synthetic datasets such as ImageNet~\cite{krizhevsky2012imagenet} and CelebA~\cite{celeba}) remains largely unexplored. In this work, we propose a framework to tackle the two problems jointly. We seek to develop a storage-efficient privacy-guaranteeing processing procedure that preserves the utility of the data for learning. \begin{figure}[h] \centering \includegraphics[width=.45\textwidth]{figures/extended_LCON.drawio.png} \caption{Proposed data pre-processing framework. $X$: noise-free data, $N$: added noise, $Z$: noisy data, $\hat{X}$: reconstructions from lossy compression of the noisy data. $\hat{X}$ are then used for the learning, in lieu of $X$. A sample of noise-free, noise-injected, and lossily compressed images from the CelebA dataset is given at the bottom. Here JPEG compression with quality factor 1 is applied to noisy images with $12$ dB PSNR.} \label{fig:framework} \end{figure} To achieve this goal, we first inject noise $\mathbf{N}$ into the (learning data) examples $\mathbf{X}$ and then lossily compress the noisy examples $\mathbf{Z}$ (see Fig.~\ref{fig:framework}). The reconstructions from the lossy compression of noisy (LCoN) examples $\mathbf{\hat{X}}$ are then used for the learning. The lossy compression is done under a distortion criterion and level that are matched to the noise characteristics in a way we prescribe below. For data efficiency, we aim to achieve a compression rate close to the optimum, as characterized by the rate-distortion function associated with the noisy data.
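The noise-injection step of the pipeline can be sketched minimally as follows, using random placeholder pixel data rather than CelebA images and omitting the JPEG stage; the Gaussian noise variance is chosen to hit a target PSNR such as the $12$ dB used in the figure caption:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise_at_psnr(img, target_psnr_db, peak=255.0):
    """Inject IID Gaussian noise whose variance yields (in expectation)
    the requested PSNR relative to the 8-bit peak value:
    PSNR = 20 log10(peak / sigma)  =>  sigma = peak / 10^(PSNR/20)."""
    sigma = peak / 10 ** (target_psnr_db / 20)
    noisy = img + rng.normal(scale=sigma, size=img.shape)
    return np.clip(noisy, 0, peak)           # keep values in the 8-bit range

img = rng.integers(0, 256, size=(64, 64)).astype(float)  # placeholder image
noisy = add_noise_at_psnr(img, target_psnr_db=12.0)

mse = np.mean((noisy - img) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(round(psnr, 1))   # close to 12 dB (clipping raises it slightly)
```

In the full pipeline, the clipped noisy image would then be lossily compressed (e.g., JPEG at quality factor 1, as in the figure) before being handed to the learner.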
As for privacy, following \cite{makhdoumi2014information, sankar2013utility, du2012privacy, makhdoumi2013privacy, chatzikokolakis2010statistical, rebollo2009t, huang2018generative} and references therein, we guarantee an upper bound on the privacy leakage, measured by the mutual information between the original data $\mathbf{X}$ and the retained data $\mathbf{\hat{X}}$. We show that this procedure achieves our goal, which might seem surprising at first glance in light of results from the literature on privacy and robustness showing significant degradation in the performance of the trained model when data are corrupted by noise \cite{mcpherson2016defeating, 43405}. Nevertheless, in a sense we make precise, this problem is alleviated in our framework by the effective denoising that occurs when noisy data are lossily compressed. More concretely, when the distortion criterion and level of the lossy compression are matched to the noise characteristics, the lossily compressed noisy data samples converge, in distribution, to the noise-free data because, in effect, they are samples from the posterior distribution of the noise-free data given its noise-corrupted version. Learning is then performed on data with the ``right'' statistics and so, in principle, entails no performance loss in the downstream inference tasks. Our initial experimentation with gender classification on the CelebA dataset is in agreement with the theory. For example, one working point of our method halves the cost of storing the data (in bits) and provides privacy guarantees by adding Gaussian noise (at variance levels for which the individuals in the noisy images were unrecognizable to the authors), while achieving better accuracy than the benchmark methods. Furthermore, our method yields substantial performance boosts over the benchmark methods when tested on adversarially generated data.
Our main contributions can be summarized as: \begin{enumerate} \item We propose a framework for data-efficient privacy-preserving pre-processing that retains the utility/quality of the data for learning by essentially preserving its distributional properties. We call it LCoN pre-processing since it consists of \textbf{L}ossy \textbf{Co}mpression of \textbf{N}oisy data. \item We present initial experimentation demonstrating the efficacy of our suggested pre-processing pipeline on the CelebA dataset, not only with respect to the criteria that motivated its design, but also in providing robustness to adversarial data. \end{enumerate} \section{Preliminaries} \label{preliminary} \subsection{$k$-th order distribution induced by $P_{X^n}$} \label{subsection kth order distribution induced} For a random $n$-tuple $X^n$, we let $Q_{X}^{\sf{ave}, (n)}$ denote the distribution of the random variable obtained by choosing one of the $n$ components of $X^n$ at random. More precisely, for $J \sim \mbox{Unif} \{ 1, 2, \ldots, n \} $ independent of $X^n$, $Q_{X}^{\sf{ave}, (n)}$ is the law of $X_J$. In other words, $Q_{X}^{\sf{ave}, (n)}$ is the ``average'' of the marginal laws $\{ P_{X_i} \}_{i=1}^n$, hence the superscript. We write $Q_{X}^{\sf{ave}, (n)} [P_{X^n}]$ when we want to make its dependence on the law of $X^n$ explicit. Similarly, for $k \leq n$ and $J \sim \mbox{Unif} \{ 1, 2, \ldots, n - k + 1 \} $ independent of $X^n$, we let $Q_{X^k}^{\sf{ave}, (n)}$ denote the law of the $k$-tuple $X_{J}^{J+k-1}$. In other words, $Q_{X^k}^{\sf{ave}, (n)}$ is the law obtained by averaging the marginal $k$-tuple laws $\left\{ P_{X_i^{i+k-1}} \right\}_{i=1}^{n - k + 1}$. We write $Q_{X^k}^{\sf{ave}, (n)} [P_{X^n}]$ when we want to make its dependence on $P_{X^n}$ explicit. We extend this notation in the obvious way to $Q_{X,Y}^{\sf{ave}, (n)} = Q_{X,Y}^{\sf{ave}, (n)} [ P_{X^n, Y^n} ] $ and $Q_{X^k,Y^k}^{\sf{ave}, (n)} = Q_{X^k,Y^k}^{\sf{ave}, (n)} [ P_{X^n, Y^n} ] $.
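As a quick numerical sanity check of the averaging definition, one may verify by Monte Carlo that the law of $X_J$ matches the average of the marginal laws. The sketch below uses a hypothetical tuple of independent, non-identically distributed bits:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 4, 200_000

# hypothetical independent, non-identically distributed bits: P(X_i = 1) = p[i]
p = np.array([0.1, 0.3, 0.6, 0.9])
q_ave = p.mean()  # Q_X^{ave,(n)}(1): the average of the marginal laws

# sample X^n, then X_J for J ~ Unif{1,...,n} independent of X^n
x = (rng.random((trials, n)) < p).astype(int)
j = rng.integers(0, n, size=trials)
x_j = x[np.arange(trials), j]
estimate = x_j.mean()  # Monte Carlo estimate of P(X_J = 1)
```

With $2 \times 10^5$ trials the estimate agrees with $Q_{X}^{\sf{ave},(n)}(1) = 0.475$ to well within Monte Carlo error.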
\subsection{$k$-th order empirical distribution induced by $x^n$} \label{subsection $k$-th order empirical distribution induced by $x^n$} For a fixed finite-alphabet $n$-tuple $x^n$, we let $Q_{X}^{\sf{emp}, (n)}[x^n]$ denote the empirical (first-order) distribution that it induces. To be precise, $Q_{X}^{\sf{emp}, (n)} [ x^n]$ is a probability mass function (PMF) on the finite alphabet $\mathcal{X}$ in which the components of $x^n$ reside, with $Q_{X}^{\sf{emp}, (n)} [x^n](a)$ denoting the probability it assigns to $a \in \mathcal{X}$, namely the fraction of times the symbol $a$ appears along the $n$-tuple $x^n$. To simplify the notation, we suppress the dependence on $x^n$, using $Q_{X}^{\sf{emp}, (n)}$ when $x^n$ should be clear from the context. Similarly, for $k \leq n$, $Q_{X^k}^{\sf{emp}, (n)} [x^n]$ denotes the empirical distribution of $k$-tuples along $x^n$. In other words, $Q_{X^k}^{\sf{emp}, (n)} [ x^n]$ is a PMF of a $k$-tuple, with $Q_{X^k}^{\sf{emp}, (n)} [x^n](a^k)$ denoting the probability it assigns to $a^k \in \mathcal{X}^k$, the fraction of times the $k$-tuple $a^k$ appears along the $n$-tuple $x^n$. Here too we suppress the dependence on $x^n$ and write $Q_{X^k}^{\sf{emp}, (n)}$ when $x^n$ should be clear from the context. We extend this notation to $Q_{X,Y}^{\sf{emp}, (n)} = Q_{X,Y}^{\sf{emp}, (n)} [ x^n, y^n ]$ and $Q_{X^k,Y^k}^{\sf{emp}, (n)} = Q_{X^k,Y^k}^{\sf{emp}, (n)} [ x^n, y^n ]$ in the obvious ways. \subsection{Relationship between $Q_{X}^{\sf{ave}, (n)}$ and $Q_{X}^{\sf{emp}, (n)}$} When $X^n$ is stochastic, so is $Q_{X^k}^{\sf{emp}, (n)} = Q_{X^k}^{\sf{emp}, (n)} [X^n]$, and for any $a^k \in \mathcal{X}^k$, we have \begin{equation} \label{eq: exp of emp dist is kth order dist} \E \left[ Q_{X^k}^{\sf{emp}, (n)} (a^k) \right] = Q_{X}^{\sf{ave}, (n)} (a^k) . 
\end{equation} Note further that, letting $\stackrel{n \rightarrow \infty}{\Longrightarrow}$ denote convergence in distribution, in any scenario where $ Q_{X^k}^{\sf{emp}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} \mu_{X^k} \ \ \ a.s. $ for some PMF on $k$-tuples $\mu_{X^k}$, we also have, by (\ref{eq: exp of emp dist is kth order dist}) and the bounded convergence theorem, $ Q_{X^k}^{\sf{ave}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} \mu_{X^k} .$ Thus, convergence of $Q_{X^k}^{\sf{emp}, (n)}$ is stronger than (implies) convergence of $Q_{X^k}^{\sf{ave}, (n)}$. \section{Samples from the Posterior via Noisy Lossy Compression} \label{samples_posterior} Consider the canonical setting where the components of the noise-free $\mathbf{X}$, noisy $\mathbf{Z}$, and reconstructed sources $\mathbf{\hat{X}}$ in Fig.~\ref{fig:framework} all take values in the same finite $Q$-ary alphabet $\mathcal{A} = \{0,1, \ldots, Q-1\}$. The noise-free source $\mathbf{X} = (X_1, X_2, \ldots )$ is stationary ergodic and corrupted by additive memoryless noise $\mathbf{N}$. That is, we assume the components of the noisy observation process $\mathbf{Z}$ are given by \begin{equation} \label{eq: noise model} Z_i = X_i + N_i , \end{equation} where the $N_i$s are IID$\sim N$, independent (collectively) of $\mathbf{X}$, and addition in (\ref{eq: noise model}) is in the mod-$Q$ sense\footnote{The framework and results have natural analogues in the continuous setting, where the alphabet can be the real line or any Euclidean space and addition is in the usual sense. We assume the finite-alphabet setting here for concreteness, to avoid unnecessary technicalities, and because it is better connected to practice, where the alphabets are ultimately finite.}.
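The noise model (\ref{eq: noise model}) is straightforward to simulate; the snippet below is a minimal sketch with a hypothetical noise PMF on a $Q=4$ alphabet:

```python
import numpy as np

def add_modq_noise(x, noise_pmf, seed=0):
    """Corrupt an {0,...,Q-1}-valued array with IID additive noise,
    addition taken mod Q, as in the noise model above."""
    rng = np.random.default_rng(seed)
    noise_pmf = np.asarray(noise_pmf, dtype=float)
    Q = len(noise_pmf)
    n = rng.choice(Q, size=x.shape, p=noise_pmf)  # N_i IID ~ N
    return (x + n) % Q

x = np.array([0, 1, 2, 3, 3, 2, 1, 0])  # noise-free symbols
z = add_modq_noise(x, noise_pmf=[0.7, 0.1, 0.1, 0.1])
```

The noisy output stays in the same alphabet $\mathcal{A}$, as required.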
We assume the distribution of the noise to be ``non-singular'' in the sense that the Toeplitz matrix whose rows are shifted versions of the row vector representing the PMF of $N$ is invertible, a benign condition guaranteeing a one-to-one correspondence between the distributions of the noise-free and noisy sources \cite{dudepaper}. We construct a difference distortion measure $\rho_N: \mathcal{A} \rightarrow [0, \infty]$ from the distribution of the noise according to \begin{equation} \label{eq: distortion induced by noise} \rho_N (a) = \log \frac{1}{\Pr (N=a)} . \end{equation} Good lossy compression of the noisy source $\mathbf{Z}$ under this distortion criterion at distortion level equal to the entropy of the noise ($D=H(N)$) turns out to result in reconstructions $\hat{X}$ that are \emph{samples from the posterior} of the noise-free source $\mathbf{X}$ given the noisy source $\mathbf{Z}$. In particular, the finite-dimensional distributions of these reconstructions converge to those of the underlying noise-free source. We state this phenomenon rigorously in the theorem below, which follows from results in \cite{ordentlichweissman} (as will be detailed in the full version of this paper). ``Good code'' refers to a sequence of compressors, indexed by block-lengths, with respective rates and distortions converging to a point on the rate-distortion curve. \begin{theorem} \label{theorem: emp dist of good codes for noisy data} Suppose $\mathbf{X}$ is a stationary ergodic process. Let $\{ \hat{X}^n \}_{n \geq 1}$ be the reconstructions associated with a good code for the source $\mathbf{Z}$ with respect to the difference distortion function in (\ref{eq: distortion induced by noise}), at distortion level $H(N)$. 
For any finite $k$ and $n \geq k$, let $Q_{Z^k,\hat{X}^k}^{\sf{ave}, (n)} = Q_{Z^k,\hat{X}^k}^{\sf{ave}, (n)} [ P_{Z^n, \hat{X}^n} ]$ and $Q_{Z^k,\hat{X}^k}^{\sf{emp}, (n)} = Q_{Z^k,\hat{X}^k}^{\sf{emp}, (n)} [Z^n, \hat{X}^n]$ denote, respectively, the $k$-th order joint distribution induced by $P_{Z^n, \hat{X}^n}$ and the (random) $k$-th order joint distribution induced by the realized $(Z^n, \hat{X}^n)$. Then \begin{equation} Q_{Z^k,\hat{X}^k}^{\sf{emp}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} P_{Z^k, X^k} \ \ a.s. \end{equation} and a fortiori \begin{equation} Q_{Z^k,\hat{X}^k}^{\sf{ave}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} P_{Z^k, X^k}, \end{equation} where $P_{Z^k, X^k} $ is the joint $k$-th order distribution of the noisy and original noise-free source. \end{theorem} In particular, and most relevant for our purposes, the finite-dimensional distributions of lossy reconstructions of the noisy source converge to those of the underlying noise-free source. \section{Application for Learning} \label{application} \subsection{Learning with Lossily Compressed Noisy Examples} \label{LCON_framework} Consider first the standard framework of unsupervised learning from $M$ unlabeled examples $\{ X^{n , (i)} \}_{i=1}^M$, drawn IID $\sim X^n$. The $i$th example comprises the data point/signal/image $X^{n , (i)}$, which is an $n$-tuple with $\mathcal{A}$-valued components. Our data pre-processing method, illustrated in Fig.~\ref{fig:framework}, comprises noise injection and lossy compression to obtain and store the lossily compressed noisy (LCoN) examples, as follows: \begin{enumerate} \item Pick a distribution for the noise $N$ (we discuss the choice of distribution later). Inject IID$\sim N$ noise into each component of each of the $X^{n , (i)}$s. Denote the noisy examples as $Z^{n , (i)}$, which are IID $\sim Z^n$, the noisy version of $X^n$.
\item Pick a good lossy compressor for the distortion function $d(z^n, \hat{x}^n)=\frac{1}{n} \sum_{i=1}^n \rho_N (z_i-\hat{x}_i) $, where $\rho_N(\cdot)$ is the distortion measure in (\ref{eq: distortion induced by noise}), and for distortion level equal to the entropy of the noise, i.e., $D =H(N)$. Jointly compress all the noisy data. Denote the reconstructions from the lossy compression of the $Z^{n , (i)}$s as $ \hat{X}^{n , (i)} $. \item Use the $ \hat{X}^{n , (i)} $s instead of the $ X^{n , (i)} $s for learning. \end{enumerate} Although the above describes jointly compressing all the data, one may also consider a more practical version where each example is compressed separately, as we elaborate below. \subsection{Data Efficiency while Retaining the Right Distribution} \label{LCON_memory} What will be the cost of storing the compressed noisy data? Assuming the compressors employed are ``good'' in the sense of the previous section, it follows by invoking \cite[Theorem 4]{ordentlichweissman} that, in the limit $M \rightarrow \infty$ of a large amount of training data, we will need a rate of \begin{equation} \label{eq: rate needed} \frac{1}{n} H(Z^n) - H(N) \ \ \ \frac{\mbox{bits}}{\mbox{data component}}, \end{equation} namely the rate-distortion function of the IID $\sim Z^n$ source at distortion level $H(N)$. Furthermore, Theorem~\ref{theorem: emp dist of good codes for noisy data} assures us that $\{ \hat{X}^{n , (i)} \}_{ 1 \leq i \leq M }$ will have an empirical distribution converging, as $M \rightarrow \infty$, to the right one, namely the distribution of $X^{n}$. Therefore, in the limit of many training examples, performing the learning on $\{ \hat{X}^{n , (i)} \}_{i=1}^M$ should be as good as performing it on the original noise-free data $\{ X^{n , (i)} \}_{i=1}^M$.
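For intuition, in the simplest case of an IID source the rate in (\ref{eq: rate needed}) and the matched distortion level $D = H(N)$ admit a direct computation; the sketch below uses hypothetical source and noise PMFs on a $Q=4$ alphabet:

```python
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

Q = 4
p_x = np.array([0.7, 0.1, 0.1, 0.1])      # hypothetical IID source PMF
p_n = np.array([0.85, 0.05, 0.05, 0.05])  # hypothetical noise PMF

# for mod-Q additive noise, P_Z is the circular convolution of P_X and P_N
p_z = np.array([sum(p_x[a] * p_n[(z - a) % Q] for a in range(Q))
                for z in range(Q)])

rate = entropy_bits(p_z) - entropy_bits(p_n)  # H(Z) - H(N), bits per component

# the induced distortion measure rho_N and the matched level D = H(N)
rho = -np.log2(p_n)                 # rho_N(a) = log 1/P(N = a), in bits
d_match = float((p_n * rho).sum())  # expected distortion of the noise itself
```

Note that the expected value of $\rho_N$ under the noise PMF is exactly $H(N)$, which is what makes $D = H(N)$ the natural matched distortion level.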
We note that the foregoing discussion was valid for a fixed $n$ and an arbitrarily distributed $X^n$, in the $M \rightarrow \infty$ limit. It is also meaningful to consider a fixed $M$ in the large $n$ limit. Indeed, when it is reasonable to think of the generic $X^n$ governing the data as the first $n$ components of a stationary ergodic process, even if we merely employ good compressors separately on each example, Theorem \ref{theorem: emp dist of good codes for noisy data} guarantees that the reconstructions will tend to be loyal to the original data in the sense of their finite-dimensional distributions, when $n$ is large. The assumption of a stationary ergodic process governing the examples may be natural in a variety of applications, such as when the $X^{n , (i)}$s represent audio signals or text. Also, things carry over naturally to multi-dimensionally indexed data, e.g., when the $X^{m \times n , (i)}$s represent images sampled from the generic $X^{m \times n}$, representing the $m \times n$ grid of samples from a (spatially) stationary ergodic random field. \subsection{Privacy} \label{LCON_privacy} We would also like to guarantee that the database retained for the learning does not leak too much information about any of the individual examples. To this end, we consider the (normalized) mutual information between the two, known as the \emph{privacy leakage}, which comes with a variety of operational justifications on top of its intuitive appeal (cf. \cite{makhdoumi2014information, du2012privacy, makhdoumi2013privacy}, references therein and thereto). 
For each $i$, we have \begin{align} \frac{1}{n} I \left( X^{n, (i)}; \left\{ \hat{X}^{n, (j) } \right\}_{j=1}^M \right) \leq \frac{1}{n} I \left( X^{n, (i)}; \left\{ Z^{n, (j) } \right\}_{j=1}^M \right) \nonumber \\ = \frac{1}{n} I \left( X^{n, (i)}; Z^{n, (i) } \right) = \frac{1}{n} I \left( X^{n}; Z^{n } \right) = \frac{1}{n} H \left( Z^{n } \right) - H(N) , \label{eq: upper bound on the privacy leakage} \end{align} where the inequality is due to data processing, the first two equalities follow from the $\left( X^{n, (i)}, Z^{n, (i)} \right)$s being IID $\sim ( X^{n}, Z^{n } )$, and the last equality holds since the noise is additive and memoryless. \subsection{Choice of the Noise Distribution} \label{noise_details} How should one choose the distribution of the noise? The higher its entropy, the smaller the respective compression rate and upper bound on the privacy leakage in (\ref{eq: rate needed}) and (\ref{eq: upper bound on the privacy leakage}), so, in principle, we get simultaneously better compression \emph{and} more privacy. In fact, one could make both the compression rate and the privacy leakage arbitrarily small with a noise distribution sufficiently close to uniform\footnote{The uniform distribution itself is not allowed, as per the stipulation that the noise distribution be non-singular.}, since both (\ref{eq: rate needed}) and (\ref{eq: upper bound on the privacy leakage}) are upper bounded by \begin{equation} \log{|\mathcal{A}|} - H(N). \end{equation} The choice of noise distribution, however, affects the convergence rate in the large-$n$ and large-$M$ limits. As a result, in practice, when both $n$ and $M$ are finite, there is a tension between achieving a good (low) compression rate and privacy leakage and the quality (proximity to the true distribution) of the reconstructions. One might envision turning a knob, sweeping through noise distributions, to find a good sweet spot. A more principled understanding of this point is left for future work.
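The bound above can be illustrated numerically: sweeping a hypothetical family of near-uniform (but non-singular) noise PMFs drives $\log|\mathcal{A}| - H(N)$ toward zero:

```python
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

Q = 4
bounds = []
for eps in (0.3, 0.2, 0.1, 0.01):
    # mixture of the uniform PMF and a point mass at 0: non-singular,
    # and approaching (without reaching) the uniform PMF as eps -> 0
    p_n = np.full(Q, (1.0 - eps) / Q)
    p_n[0] += eps
    bounds.append(np.log2(Q) - entropy_bits(p_n))
```

The bound shrinks monotonically along this sweep, reflecting the fact that both the storage rate and the leakage bound vanish as the noise approaches uniform; the tension is that the reconstructions then take longer (in $n$ and $M$) to approach the right distribution.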
\subsection{Supervised Learning} The foregoing framework and results carry over straightforwardly to the case when the noise-free data come as $M$ \emph{labeled} examples $\{ ( X^{n , (i)} , L_i ) \}_{i=1}^M$, drawn IID $\sim (X^n, L)$, where the labels $L_i$ take values in a finite alphabet of labels $\mathcal{L}$. In this case, we apply the operations and arguments discussed above separately to the subset of the data pertaining to each label value. The experimental results of Section~\ref{experiments} are in this setting. \subsection{When Compression is Not Matched to the Noise} The following addresses many of the natural scenarios arising in practice where the lossy compression is tailored to a distortion function and/or level not matched to the added noise characteristics. \begin{theorem} Suppose the added noise is decomposable as $N = U + W$, where $U$ and $W$ are independent. If a good code for the source $\mathbf{Z}$ with respect to $\rho_W$ (the distortion measure induced by $W$ as in (\ref{eq: distortion induced by noise})) at distortion level $H(W)$ is utilized, then $Q_{Z^k,\hat{X}^k}^{\sf{emp}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} P_{Z^k, \tilde{X}^k} \ \ a.s. $ and a fortiori $Q_{Z^k,\hat{X}^k}^{\sf{ave}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} P_{Z^k, \tilde{X}^k}$, where $\mathbf{\tilde{X}} = \mathbf{X} + \mathbf{U}$ (with $\mathbf{U}$ IID$\sim U$ and independent of $\mathbf{X}$) is the partially noisy source and $P_{Z^k, \tilde{X}^k}$ is the joint $k$-th order distribution of the noisy and partially noisy sources. \label{theorem: partial noise} \end{theorem} Evidently, in the scenarios covered by the theorem, the lossy compression denoises $\mathbf{Z}$ only partially.
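For mod-$Q$ addition, the decomposition $N = U + W$ in Theorem~\ref{theorem: partial noise} amounts to a circular convolution of the component PMFs; the sketch below, with hypothetical PMFs, also checks the basic fact $H(N) \geq \max(H(U), H(W))$:

```python
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def circ_conv(p, q):
    """PMF of the mod-Q sum of two independent alphabet-valued variables."""
    Q = len(p)
    return np.array([sum(p[a] * q[(z - a) % Q] for a in range(Q))
                     for z in range(Q)])

p_u = np.array([0.9, 0.05, 0.0, 0.05])  # hypothetical PMF of U (residual noise)
p_w = np.array([0.8, 0.1, 0.0, 0.1])    # hypothetical PMF of W (removed by compression)
p_n = circ_conv(p_u, p_w)               # PMF of the total noise N = U + W
```

Compressing to distortion level $H(W)$ under $\rho_W$ removes only the $W$ part of the noise, leaving reconstructions distributed like the partially noisy $\mathbf{X} + \mathbf{U}$.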
For example, when applied to the case of added Gaussian noise and compression under squared error distortion, the theorem suggests that if compression is done under distortion $D$ smaller than the variance $\sigma^2$ of the noise, then the reconstructions are effectively samples from the distribution of the noise-free data corrupted by Gaussian noise of variance $\sigma^2 - D$. The implications of this phenomenon for robustness will be explored in future work (and are briefly touched on experimentally in the next section). \section{Experimental Results} \label{experiments} \begin{figure*}[!t] \centering \subfigure[Tested on Noise-free Examples. ]{\includegraphics[width=.45\textwidth]{figures/Comparison_on_Clean_Data.jpg}} \subfigure[Tested on Noisy Examples. ]{\includegraphics[width=.45\textwidth]{figures/Comparison_on_Noisy_Data.jpg}} \subfigure[Tested on LCoN Examples. ]{\includegraphics[width=.45\textwidth]{figures/Comparison_on_LCON_Data.jpg}} \subfigure[Tested on Adversarial Examples. ]{\includegraphics[width=.45\textwidth]{figures/Comparison_on_adv_Data.jpg}} \caption{Comparison of models trained with LCoN examples, noise-free examples, and noisy examples on (a) noise-free test examples, (b) noisy test examples, (c) LCoN test examples, (d) adversarial test examples. Adversarial images are generated via the Fast Gradient Sign Method (FGSM) \cite{43405}. The compression rates of LCoN for PSNR $[8.1, 10.1, 11.9, 14.2, 16.1, 18.6, 22.1]$ in (a-c) are $[0.131, 0.136, 0.160, 0.170, 0.173, 0.179, 0.200]$, respectively.}\label{fig:comp_plot} \end{figure*} In this section, we test our suggested pipeline in the context of training a gender classifier on the CelebA dataset \cite{celeba}, consisting of 202,599 face images of celebrities (cf.\ the left image in Fig.~\ref{fig:framework} for an example), using the ResNet-34 architecture \cite{he2016deep}.
We chose the CelebA dataset since privacy of face images is an emerging concern; cf., e.g., a recent work \cite{yang2021study} that studied the effect of face obfuscation in the context of the ImageNet challenge. We corrupt the original images in the CelebA dataset with (appropriately discretized) Gaussian noise. The induced distortion function in (\ref{eq: distortion induced by noise}), with distortion level equal to the entropy of the noise, essentially boils down to squared error with distortion level equal to the variance of the added Gaussian noise. The ``good'' lossy compressor we employ in the experiments, guided by our framework, is JPEG \cite{pennebaker1992jpeg}, which was (arguably) designed with squared error in mind. We tune the compression level so that the squared error distortion approximately matches the variance of the injected noise. In Fig.~\ref{fig:comp_plot}, we compare three training schemes: \textbf{1) Our Setting - LCoN-train} (orange in Fig.~\ref{fig:comp_plot}): Training over reconstructions $\hat{X}^{n, (i) }$ from \textbf{L}ossy \textbf{Co}mpression of \textbf{N}oisy examples. We refer to the $\hat{X}^{n, (i) }$s as LCoN-pre-processed examples. This setting comes with guarantees on the privacy leakage and storage cost of the data, as established in the previous section. Fig.~\ref{fig:framework} exhibits $X^{n, (i)}, Z^{n, (i)}$ and $\hat{X}^{n, (i) }$ for a randomly chosen $i$ at the specified noise level and corresponding distortion. \textbf{2) Baseline-1} (blue in Fig.~\ref{fig:comp_plot}): Training over the noise-free examples $X^{n, (i)}$ from the CelebA dataset. This method does not preserve privacy since the noise-free data are retained. \textbf{3) Baseline-2} (red in Fig.~\ref{fig:comp_plot}): Training over noisy examples $Z^{n, (i)}$, injected with the same noise used in LCoN-train. This time, the privacy guarantee is as good as LCoN-train's.
After training, we test the respective three neural networks obtained (three for each noise level) on four different datasets: \textbf{1) Noise-free test images} $X^{n, (i)}$ (Fig.~\ref{fig:comp_plot}(a)). \textbf{2) Noise-injected test images} $Z^{n, (i)}$ -- with the same noise distribution used for the training data (Fig.~\ref{fig:comp_plot}(b)). \textbf{3) LCoN-pre-processed test images} $\hat{X}^{n, (i) }$ -- with the same noise and distortion used for the training data (Fig.~\ref{fig:comp_plot}(c)). \textbf{4) Adversarial test images} -- generated via the Fast Gradient Sign Method (FGSM) \cite{43405} (Fig.~\ref{fig:comp_plot}(d)). In Fig.~\ref{fig:comp_plot}(a-c), PSNR refers to the PSNR of the noisy images (as dictated by the noise variance) after noise injection, prior to lossy compression. For a fair comparison, we calibrate the number of examples used by each scheme so that the overall storage cost (in bits) is approximately the same. In other words, in Fig.~\ref{fig:comp_plot}(a-c), the points on the same vertical line (same PSNR, same privacy) are trained with examples requiring the same total storage cost, achieved by adjusting the number of training examples used. The compression rate for each point is provided in the caption of Fig.~\ref{fig:comp_plot}. In Fig.~\ref{fig:comp_plot}(d), we vary the parameter $\epsilon$ in FGSM. Recall that FGSM corrupts the data as $x_{\text{adv}} = x + \epsilon \cdot \text{sign}(\nabla_x J)$, where $J$ is the loss function of the downstream task, i.e., the higher $\epsilon$, the more corrupted the adversarial data. In Fig.~\ref{fig:comp_plot}(d), in addition to testing directly on the adversarial data, we test LCoN-train and Baseline-1 on LCoN-pre-processed adversarial data as well. We denote the pre-processed adversarial data as LCoN-adv (empty markers). We observe that LCoN-train consistently outperforms Baseline-2 in all settings and noise levels.
The gap is most significant when the models are tested on the noise-free images (Fig.~\ref{fig:comp_plot}(a)). This behavior is expected in light of the theory exposed in the previous sections: LCoN examples $\hat{X}^{n, (i) }$ are close in distribution to the noise-free examples $X^{n, (i)}$, so a model trained on LCoN examples should be expected to outperform one trained on the noisy ones $Z^{n, (i)}$. Perhaps less expected is that LCoN-train outperforms Baseline-2 even on the noisy data on which the latter was trained. The comparison to Baseline-1 is also extremely favorable (on top of the fact that Baseline-1 preserves no privacy), essentially across the board. Even on the noise-free test data, our method yields essentially the same accuracy as Baseline-1 at sufficiently high PSNR. Remarkably, our setting reaches $96.8\%$ accuracy for PSNR higher than $20$ dB, which is even higher than the $96.6 \%$ accuracy of the model trained with the full noise-free CelebA dataset (not a subset complying with the storage constraint, as in Baseline-1). Evidently, even when storage is free and privacy is not an issue, LCoN-train is an accuracy booster. Finally, Fig.~\ref{fig:comp_plot}(d) shows that LCoN-train is a significant performance booster in the face of adversarially corrupted data as well. The gap between LCoN-train and the better of the other two benchmarks becomes as large as $16.4\%$ in accuracy. Furthermore, even if the model is trained on the noise-free data, LCoN pre-processing of the adversarial testing data can result in as much as a $13 \%$ accuracy boost. Overall, it seems, LCoN pre-processing is advisable both at training and testing (and at just one of them if the other is fixed). \begin{figure}[h] \centering \includegraphics[width=.45\textwidth]{figures/Comparison_of_Q.jpg} \caption{LCoN-train with Gaussian noise $N(0, 1600)$ added to the training data and compression at varying distortion levels (mse).
Vertical line corresponds to mse~$=1600$, where the distortion level and the entropy of the noise are matched.}\label{fig:varying_dist_level_plot} \end{figure} Lastly, Fig.~\ref{fig:varying_dist_level_plot} shows that the best accuracy (across all test datasets) is obtained when the distortion level (mse) is closest to the entropy of the noise (which, in squared-error terms, corresponds to the noise variance). For all the points in Fig.~\ref{fig:varying_dist_level_plot}, $N(0, 1600)$ Gaussian noise is added to the training data. The black vertical line corresponds to mse~$=1600$, where the distortion level is matched to the entropy of the noise. As expected from Theorem~\ref{theorem: partial noise}, when the distortion level decreases, the lossily reconstructed examples $\tilde{X}^{n, (i) }$ the model is trained on become noisier. This results in a model trained on examples whose distribution is further from that of the $X^{n, (i) }$s, explaining the significant accuracy drop, especially on the noise-free test data, when $D<H(N)$. A similar effect occurs when $D>H(N)$. \section{Conclusion and Future Work} \label{conclusion} Guided by and combining existing theory on lossy compression of noisy data and on information-theoretic privacy, we proposed a data pre-processing procedure, for both training and testing data, which appears to simultaneously boost data efficiency, privacy, accuracy, and robustness. Our theoretical framework accounts for many of the empirical observations as they pertain to efficiency (compression), privacy (leakage), and accuracy (due to preservation of the right distribution). The robustness is a welcome additional feature we have observed empirically, and perhaps to be intuitively expected given empirical work showing that noise injection \cite{45818} and image compression \cite{robustness_jpeg1, robustness_jpeg3, robustness_jpeg5}, each applied separately to adversarial data, improve robustness.
Future work will be dedicated to quantifying this effect via (an extension of) our theoretical framework. From a high-level perspective, LCoN and dithered quantization \cite{gray1993dithered} have some resemblance in injecting noise prior to compression. However, we employ noise injection, independent of the lossy compression, to preserve privacy while the added noise in dithered quantization is an essential component of the quantization step. We also note that our framework and theoretical insights transfer directly to the case where the data is noise-corrupted to begin with (rather than the noise being deliberately injected). In such a case, the compression would be tuned to the real noise characteristics. Practically, we plan to further the experiments to other noise distributions and image compressors such as PNG \cite{PNG}, JPEG XR \cite{jpegXR}, WebP \cite{WebP} and LFZip \cite{LFZip2020}, which would be equally natural to experiment with, so long as they are appropriately matched (Gaussian noise for compressors designed with squared error in mind, Laplacian noise for compressors optimized for absolute error, Uniform distribution on a sub-interval of length equal to the allowed maximum distortion for compressors designed under a maximal distortion criterion such as LFZip \cite{LFZip2020}, etc.). Better compressors will likely boost the performance under the other criteria as well. \section{Acknowledgement} This work was supported in part by a Sony Stanford Graduate Fellowship. \section{Notation} \subsection{$k$-th order distribution induced by $P_{X^n}$} \label{subsection kth order distribution induced} For a random $n$-tuple $X^n$, we let $Q_{X}^{\sf{ave}, (n)}$ denote the distribution of the random variable obtained by choosing one of the $n$ components of $X^n$ at random. I.e., for $J \sim \mbox{Unif} \{ 1, 2, \ldots, n \} $ and independent of $X^n$, $Q_{X}^{\sf{ave}, (n)}$ is the law of $X_J$. 
In other words, $Q_{X}^{\sf{ave}, (n)}$ is the ``average'' of the marginal laws $\{ P_{X_i} \}_{i=1}^n$, and hence the superscript. We write $Q_{X}^{\sf{ave}, (n)} [P_{X^n}]$ when we want to make its dependence on the law of $X^n$ explicit. Similarly, for $k \leq n$, and $J \sim \mbox{Unif} \{ 1, 2, \ldots, n - k + 1 \} $ independent of $X^n$, we let $Q_{X^k}^{\sf{ave}, (n)}$ denote the law of the $k$-tuple $X_{J}^{J+k-1}$. In other words, $Q_{X^k}^{\sf{ave}, (n)}$ is the law obtained by averaging the marginal $k$-tuple laws $\left\{ P_{X_i^{i+k-1}} \right\}_{i=1}^{n - k + 1}$. We write $Q_{X^k}^{\sf{ave}, (n)} [P_{X^n}]$ when we want to make its dependence on $P_{X^n}$ explicit. We extend this notation in the obvious way to $Q_{X,Y}^{\sf{ave}, (n)} = Q_{X,Y}^{\sf{ave}, (n)} [ P_{X^n, Y^n} ] $ and $Q_{X^k,Y^k}^{\sf{ave}, (n)} = Q_{X^k,Y^k}^{\sf{ave}, (n)} [ P_{X^n, Y^n} ] $. \subsection{$k$-th order empirical distribution induced by $x^n$} For a fixed $n$-tuple $x^n$, we let $Q_{X}^{\sf{emp}, (n)}[x^n]$ denote the empirical (first-order) distribution that it induces. I.e., $Q_{X}^{\sf{emp}, (n)} [ x^n]$ is a PMF on the finite alphabet $\mathcal{X}$ in which the components of $x^n$ reside, with $Q_{X}^{\sf{emp}, (n)} [x^n](a)$ denoting the probability it assigns to $a \in \mathcal{X}$, namely the fraction of times the symbol $a$ appears along the $n$-tuple $x^n$. To simplify the notation, we suppress the dependence on $x^n$, using $Q_{X}^{\sf{emp}, (n)}$ when $x^n$ should be clear from the context. Similarly, for $k \leq n$, $Q_{X^k}^{\sf{emp}, (n)} [x^n]$ will denote the empirical distribution of $k$-tuples along $x^n$. I.e., $Q_{X^k}^{\sf{emp}, (n)} [ x^n]$ is a PMF of a $k$-tuple, with $Q_{X^k}^{\sf{emp}, (n)} [x^n](a^k)$ denoting the probability it assigns to $a^k \in \mathcal{X}^k$, namely the fraction of times the $k$-tuple $a^k$ appears along the $n$-tuple $x^n$. 
Here too we'll suppress the dependence on $x^n$ and write $Q_{X^k}^{\sf{emp}, (n)}$ when $x^n$ should be clear from the context. We extend this notation to $Q_{X,Y}^{\sf{emp}, (n)} = Q_{X,Y}^{\sf{emp}, (n)} [ x^n, y^n ]$ and $Q_{X^k,Y^k}^{\sf{emp}, (n)} = Q_{X^k,Y^k}^{\sf{emp}, (n)} [ x^n, y^n ]$ in the obvious ways. \subsection{On the relationship between $Q_{X}^{\sf{ave}, (n)}$ and $Q_{X}^{\sf{emp}, (n)}$} When $X^n$ is stochastic, so is $Q_{X^k}^{\sf{emp}, (n)} = Q_{X^k}^{\sf{emp}, (n)} [X^n]$, and for any $a^k \in \mathcal{X}^k$ we have \begin{equation} \label{eq: exp of emp dist is kth order dist} E \left[ Q_{X^k}^{\sf{emp}, (n)} (a^k) \right] = Q_{X}^{\sf{ave}, (n)} (a^k) . \end{equation} Note further that, letting $\stackrel{n \rightarrow \infty}{\Longrightarrow}$ denote convergence in distribution, in any scenario where \[ Q_{X^k}^{\sf{emp}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} \mu_{X^k} \ \ \ a.s. \] for some PMF on $k$-tuples $\mu_{X^k}$, we also have, by (\ref{eq: exp of emp dist is kth order dist}) and the bounded convergence theorem, \[ Q_{X^k}^{\sf{ave}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} \mu_{X^k} . \] Thus, convergence of $Q_{X^k}^{\sf{emp}, (n)}$ is stronger than (implies) convergence of $Q_{X^k}^{\sf{ave}, (n)}$. \section{Samples from the posterior via noisy data compression} Consider the canonical setting where the components of the clean, noisy, and reconstructed sources all take values in the same finite $M$-ary alphabet $\mathcal{A} = \{0,1, \ldots, M-1\}$. The noise-free source $\mathbf{X} = (X_1, X_2, \ldots )$ is stationary ergodic and corrupted by additive ``white'' noise. That is, we assume the components of the noisy observation process $\mathbf{Z}$ are given by \begin{equation} \label{eq: noise model} Z_i = X_i + N_i , \end{equation} where the $N_i$s are IID$\sim N$, independent (collectively) of $\mathbf{X}$, and addition in (\ref{eq: noise model}) is in the mod-$M$ sense. 
We assume the distribution of the noise is ``non-singular'' in the sense that the Toeplitz matrix whose rows are shifted versions of the row vector representing the PMF of $N$ is invertible, a benign condition guaranteeing a one-to-one correspondence between the distributions of the noise-free and noisy sources \cite{dudepaper}. We construct a difference distortion measure $\rho: \mathcal{A} \rightarrow [0, \infty]$ from the distribution of the noise according to \begin{equation} \label{eq: distortion induced by noise} \rho (a) = \log \frac{1}{\Pr (N=a)} . \end{equation} Good lossy compression of the noisy source $\mathbf{Z}$ under this distortion criterion at distortion level equal to the entropy of the noise turns out to result in reconstructions that are ``samples from the posterior'' of the noise-free given the noisy source. In particular, the finite dimensional distributions of these reconstructions converge to those of the underlying noise-free source. We state this phenomenon quantitatively and rigorously in the theorem below, which follows directly from Part 3 of \cite[Theorem 9]{ordentlichweissman} combined with \cite[Theorem 4]{ordentlichweissman}. \begin{theorem}[\cite{ordentlichweissman}] \label{theorem: emp dist of good codes for noisy data} Suppose $\mathbf{X}$ is a stationary ergodic process. Let $\{ Y^n \}_{n \geq 1}$ be the reconstructions associated with a good code for the source $\mathbf{Z}$ with respect to the difference distortion function in (\ref{eq: distortion induced by noise}), at distortion level $H(N)$. For any finite $k$ and $n \geq k$ let $Q_{Z^k,Y^k}^{\sf{ave}, (n)} = Q_{Z^k,Y^k}^{\sf{ave}, (n)} [ P_{Z^n, Y^n} ]$ and $Q_{Z^k,Y^k}^{\sf{emp}, (n)} = Q_{Z^k,Y^k}^{\sf{emp}, (n)} [Z^n, Y^n]$ denote, respectively, the $k$-th order joint distribution induced by $P_{Z^n, Y^n}$ and the (random) $k$-th order joint distribution induced by the realized $(Z^n, Y^n)$. 
Then \begin{equation} Q_{Z^k,Y^k}^{\sf{emp}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} P_{Z^k, X^k} \ \ a.s. \end{equation} and a fortiori \begin{equation} Q_{Z^k,Y^k}^{\sf{ave}, (n)} \stackrel{n \rightarrow \infty}{\Longrightarrow} P_{Z^k, X^k} \end{equation} where $P_{Z^k, X^k} $ is the joint $k$th-order distribution of the noisy and original noise-free source. \end{theorem} \section{Application for Learning from Noisy Examples} Consider the standard framework of learning from $M$ labeled IID examples $\{ ( X^{n , (i)} , L_i ) \}_{i=1}^M$. The $i$th example comprises the data point/signal/image $X^{n , (i)}$, which is an $n$-tuple with $\mathcal{A}$-valued components, and the label $L_i$ takes values in a finite alphabet of labels $\mathcal{L}$. Suppose now that each of the components of each of the $X^{n , (i)}$s is added an IID noise component, as in (\ref{eq: noise model}). Theorem \ref{theorem: emp dist of good codes for noisy data} tells us that if, for every label value $\ell \in \mathcal{L}$, we jointly compress all the data associated with that label $\{ X^{n , (i)} \}_{ 1 \leq i \leq M : L_i = \ell }$, using a good code for the distortion measure and level specified in that theorem, then the associated reconstructions $\{ Y^{n , (i)} \}_{ 1 \leq i \leq M : L_i = \ell }$ will have an empirical distribution converging to the distribution of the $X^{n , (i)}$s associated with (conditioned on) label $\ell$ in the limit of many examples $M \rightarrow \infty$. Furthermore, when the generic $X^n$ governing the data are the first $n$ components of a stationary ergodic process, even if we merely employ good compressors separately on each example, Theorem \ref{theorem: emp dist of good codes for noisy data} guarantees that the reconstructions will tend to be loyal to the original data in the sense of their finite dimensional distributions, when $n$ is large. 
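Returning to the distortion measure (\ref{eq: distortion induced by noise}): with $\rho(a) = \log (1/\Pr(N=a))$, the expected distortion incurred by the raw noise is exactly $H(N)$, which is why the theorem operates at distortion level $H(N)$. Under the mod-$M$ model the Toeplitz matrix of shifted PMF rows is circulant, so the ``non-singularity'' condition is equivalent to all DFT coefficients of the noise PMF being nonzero. A sketch with a hypothetical noise PMF (natural logarithms; the base convention used in the paper may differ):

```python
import cmath, math

M = 4
pN = [0.7, 0.1, 0.1, 0.1]                 # hypothetical noise PMF on Z_M

# rho(a) = log 1/Pr(N = a); infinite distortion for zero-probability shifts
rho = [math.log(1 / q) if q > 0 else math.inf for q in pN]

# Expected distortion of the raw noise equals its entropy H(N)
H_N = -sum(q * math.log(q) for q in pN if q > 0)
E_rho = sum(q * r for q, r in zip(pN, rho) if q > 0)

# Non-singularity: the circulant matrix of cyclic shifts of pN is invertible
# iff every DFT coefficient of pN is nonzero (circulant eigenvalues)
dft = [sum(pN[a] * cmath.exp(-2j * math.pi * a * j / M) for a in range(M))
       for j in range(M)]
nonsingular = all(abs(lam) > 1e-12 for lam in dft)
```

For this PMF the DFT coefficients are $1, 0.6, 0.6, 0.6$, so the condition holds; a uniform PMF, by contrast, would be singular.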
The assumption of a stationary ergodic process governing the examples may be natural in many applications, such as when the $X^{n , (i)}$s represent audio signals. Also, things carry over naturally to multi-dimensionally indexed data such as when the $X^{m \times n , (i)}$s represent images sampled from the generic $X^{m \times n}$, representing the $m \times n$ grid of samples from a (spatially) stationary ergodic random field. \section{On The Amount and Type of Noise to Add} We consider scenarios where the added noise is a design feature. Adding the right kind of noise could simultaneously boost privacy, robustness, and accuracy of the learning, while substantially reducing the cost (in bits) of storing the (training and testing) data. \include{reader_biblio} \end{document}
\section*{Acknowledgement} The author would like to thank Shiri Chechic, Quentin Godfroy, Merav Parter, and Jukka Suomela for valuable discussions. This material is based upon work supported by the National Science Foundation under Grant Nos.\ CCF-AF-0937274, CNS-1035199, 0939370-CCF and CCF-1217506,\linebreak the AFOSR under Contract No.\ AFOSR Award number FA9550-13-1-0042, the Swiss National Science Foundation (SNSF), the Swiss Society of Friends of the Weizmann Institute of Science, and the German Research Foundation (DFG, reference number Le 3107/1-1). \bibliographystyle{abbrv} \section{Local Computations and Memory Requirements}\label{sec:computations} Examining Algorithms~\ref{algo:high-level} and~\ref{algo:step_2} and how we implemented their various steps, it is not hard to see that all computations that do not use the technique of constructing some bipartite multigraph and coloring its edges merely require $\mathcal{O}(n)$ computational steps (and thus, as all values are of size $\mathcal{O}(\log n)$, also $\mathcal{O}(n \log n)$ memory). Leaving the work and memory requirements of local sorting operations aside, the same applies to Algorithms~\ref{algo:sub_sort} and~\ref{algo:sort}. Assuming that an appropriate sorting algorithm is employed, the remaining question is how efficiently we can implement the steps that do involve coloring. The best known algorithm to color a bipartite multigraph $H=(V,E)$ of maximum degree $\Delta$ with $\Delta$ colors requires $\mathcal{O}(|E|\log \Delta)$ computational steps~\cite{cole01}. Ensuring that $|E|\in \mathcal{O}(n)$ in all cases where we appeal to the procedure will thus result in a complexity of $\mathcal{O}(n\log n)$. Unfortunately, this bound does not hold for the presented algorithms. More precisely, \stepref{line:high_3} of \algref{algo:high-level} and Steps~\ref{line:step_2_2} and~\ref{line:step_2_4} of \algref{algo:step_2} violate this condition. 
Let us demonstrate first how this issue can be resolved for \stepref{line:high_3} of \algref{algo:high-level}. \begin{lemma}\label{lemma:computation_high_3} Steps~\ref{line:high_3} and~\ref{line:high_4} of \algref{algo:high-level} can be executed in $3$ rounds such that each node performs $\mathcal{O}(n)$ steps of local computation. \end{lemma} \begin{proof} Each node locally orders the messages it holds according to their destination sets $W'$; using bucketsort, this can be done using $\mathcal{O}(n)$ computational steps. According to this order, it moves its messages to the nodes in $W$ following a round-robin pattern. In order to achieve this in $2$ rounds, it first sends to each other node in the system one of the messages; in the second round, these nodes forward these messages to nodes in $W$. Since an appropriate communication pattern can be fixed independently of the specific distribution of messages, no extra computations are required. Observe that in the resulting distribution of messages, no node in $W$ holds more than $2\sqrt{n}$ messages for each set $W'$: For every full $\sqrt{n}$ messages some node in $W$ holds for set $W'$, every node in $W$ gets exactly one message destined for $W'$, plus possibly one residual message for each node in $W$ that does not hold an integer multiple of $\sqrt{n}$ messages for $W'$. Hence, moving at most two messages across each edge in a single round, \stepref{line:high_4} can be completed in one round. \end{proof} Note that we save two rounds for \stepref{line:high_3} in comparison to \corollaryref{coro:computation_step_2_4}, but at the expense of doubling the message size in \stepref{line:high_4}. The same argument applies to \stepref{line:step_2_4} of \algref{algo:step_2}. \begin{corollary}\label{coro:computation_step_2_4} Steps~\ref{line:step_2_3} to~\ref{line:step_2_5} of \algref{algo:step_2} can be executed in $2$ rounds, where each node performs $\mathcal{O}(n)$ steps of local computation.
\end{corollary} \stepref{line:step_2_2} of \algref{algo:step_2} requires a different approach still relying on our coloring construction. \begin{lemma}\label{lemma:computation_step_2} A variant of \algref{algo:step_2} can execute \stepref{line:high_2} of \algref{algo:high-level} in $5$ rounds using $\mathcal{O}(n\log n)$ steps of local computation and memory bits at each node. \end{lemma} \begin{proof} As mentioned before, the critical issue is that Steps~\ref{line:step_2_2} and~\ref{line:step_2_4} of \algref{algo:step_2} rely on bipartite graphs with too many edges. \corollaryref{coro:computation_step_2_4} applies to \stepref{line:step_2_4}, so we need to deal with \stepref{line:step_2_2} only. To reduce the number of edges in the graph, we group messages from $W$ to $W'$ into sets of size $n$. Note that not all respective numbers are integer multiples of $n$, and we need to avoid ``incomplete'' sets of smaller size as otherwise the number of edges still might be too large. This is easily resolved by dealing with such ``residual'' messages by directly sending them to their destinations: Each set will hold fewer than $n$ such messages for each destination set $W'$ and therefore can deliver these messages using its $n$ edges to set $W'$.\footnote{The nodes account for such messages as well when performing the redistribution of messages within $W$ in Steps~\ref{line:step_2_3} to~\ref{line:step_2_5}.} It follows that the considered bipartite multigraph will have $\mathcal{O}(n)$ edges and maximum degree $\sqrt{n}$. It remains to argue why all steps can be performed with $\mathcal{O}(n\log n)$ steps and memory at each node. This is obvious for \stepref{line:step_2_1} and \stepref{line:step_2_6} and follows from \corollaryref{coro:computation_step_2_4} for Steps~\ref{line:step_2_3} to~\ref{line:step_2_5}.
Regarding \stepref{line:step_2_2}, observe that the bipartite graph considered can be constructed in $\mathcal{O}(n)$ steps since this requires adding $\sqrt{n}$ integers for each of the $\sqrt{n}$ destination sets (and determining the integer parts of dividing the results by $n$). Applying the algorithm from~\cite{cole01} then colors the edges within $\mathcal{O}(n\log n)$ steps. Regarding memory, observe that all other steps require $\mathcal{O}(n)$ computational steps and thus trivially satisfy the memory bound. The algorithm from~\cite{cole01} computes the coloring by a recursive divide and conquer strategy; clearly, an appropriate implementation thus will not require more than $\mathcal{O}(n\log n)$ memory either. \end{proof} We conclude that there is an implementation of our scheme that is simultaneously efficient with respect to running time, message size, local computations, and memory consumption. \begin{theorem} \probref{prob:idt} can be solved deterministically within $12$ rounds, where each node performs $\mathcal{O}(n\log n)$ steps of computation using $\mathcal{O}(n\log n)$ memory bits. \end{theorem} This result immediately transfers to \probref{prob:sort}. \begin{corollary} \probref{prob:sort} and its variant discussed in \corollaryref{coro:sort} can be solved in a constant number of rounds, where each node performs $\mathcal{O}(n\log n)$ steps of computation using $\mathcal{O}(n\log n)$ memory bits. \end{corollary} \section{Introduction \& Related Work}\label{sec:intro} Arguably, one of the most fundamental questions in distributed computing is what amount of communication is required to solve a given task. For systems where communication is dominating the ``cost''---be it the time to communicate information, the money to purchase or rent the required infrastructure, or any other measure derived from a notion of communication complexity---exploring the imposed limitations may lead to more efficient solutions. 
Clearly, in such systems it does not make sense to make the complete input available to all nodes, as this would be too expensive; typically, the same is true for the output. For this reason, one assumes that each node is given a part of the input, and each node needs to compute a corresponding part of the output. For graph theoretic questions, the local input comprises the neighborhood of the node in the respective graph, potentially augmented by weights for its incident edges or similar information that is part of the problem specification. The local output then consists, e.g., of an indication of membership in a set forming the global solution (a dominating set, independent set, vertex cover, etc.), a value between $0$ and $1$ (for the fractional versions), a color, etc. For verification problems, one is satisfied if all nodes output ``yes'' for a valid solution and at least one node outputs ``no'' for an invalid solution. Since the advent of distributed computing, a main research focus has been the \emph{locality} of such computational problems. Obviously, one cannot compute, or even verify, a spanning tree in less than $D$ synchronous communication rounds, where $D$ is the diameter of the graph, as it is impossible to ensure that a subgraph is acyclic without knowing it completely. Formally, the respective lower bound argues that there are instances for which no node can reliably distinguish between a tree and a non-tree since only the local graph topology (and the parts of the prospective solution) up to distance $R$ can affect the information available to a node after $R$ rounds. More subtle such \emph{indistinguishability} results apply to problems that \emph{can} be solved in $o(D)$ time (see e.g.~\cite{kuhn10,linial92,naor91}). This type of argument breaks down in systems where all nodes can communicate directly or within a few rounds.
However, this does not necessitate the existence of efficient solutions, as due to limited bandwidth usually one has to be selective in what information to actually communicate. This renders otherwise trivial tasks much harder, giving rise to strong lower bounds. For instance, there are $n$-node graphs of constant diameter on which finding or verifying a spanning tree and many related problems require $\tilde{\Omega}(\sqrt{n})$ rounds if messages contain a number of bits that is polylogarithmic in $n$~\cite{elkin06,peleg00near,dasSarma2011}; approximating the diameter up to factor $3/2-\varepsilon$ or determining it exactly cannot be done in $\tilde{o}(\sqrt{n})$ and $\tilde{o}(n)$ rounds, respectively~\cite{frischknecht12}. These and similar lower bounds consider specific graphs whose topology prohibits to communicate efficiently. While the diameters of these graphs are low, necessitating a certain connectivity, the edges ensuring this property are few. Hence, it is impossible to transmit a linear amount of bits between some nodes of the graph quickly, which forms the basis of the above impossibility results. This poses the question whether non-trivial lower bounds also hold in the case where the communication graph is well-connected. After all, there are many networks that do not feature small cuts, some due to natural expansion properties, others by design. Also, e.g.\ in overlay networks, the underlying network structure might be hidden entirely and algorithms may effectively operate in a fully connected system, albeit facing bandwidth limitations. Furthermore, while for scalability reasons full connectivity may not be applicable on a system-wide level, it could prove useful to connect multiple cliques that are not too large by a sparser high-level topology. 
These considerations motivate to study distributed algorithms for a fully connected system of $n$ nodes subject to a bandwidth limitation of $\mathcal{O}(\log n)$ bits per round and edge, which is the topic of the present paper. Note that such a system is very powerful in terms of communication, as each node can send and receive $\Theta(n \log n)$ bits in each round, summing up to a total of $\Theta(n^2\log n)$ bits per round. Consequently, it is not too surprising that, to the best of our knowledge, so far no negative results for this model have been published. On the positive side, a minimum spanning tree can be constructed in $\mathcal{O}(\log \log n)$ rounds~\cite{lotker06}, and, given to each node the neighbors of a corresponding node in some graph as input, it can be decided within $\mathcal{O}(n^{1/3}/\log n)$ rounds whether the input graph contains a triangle~\cite{lenzen12}. These bounds are deterministic; constant-round randomized algorithms have been devised for the routing~\cite{lenzen11} and sorting~\cite{patt-shamir11} tasks that we solve deterministically in this work. The randomized solutions are about $2$ times as fast, but there is no indication that the best deterministic algorithms are slower than the best randomized algorithms. \subsection*{Contribution} We show that the following closely related problems can be deterministically solved, within a constant number of communication rounds in a fully connected system where messages are of size $\mathcal{O}(\log n)$. \begin{compactenum} \item [\textbf{Routing:}] Each node is source and destination of (up to) $n$ messages of size $\mathcal{O}(\log n)$. Initially only the sources know destinations and contents of their messages. Each node needs to learn all messages it is the destination of. (\sectionref{sec:routing}) \item [\textbf{Sorting:}] Each node is given (up to) $n$ comparable keys of size $\mathcal{O}(\log n)$. 
Node $i$ needs to learn about the keys with indices $(i-1)n+1,\ldots,in$ in a global enumeration of the keys that respects their order. Alternatively, we can require that nodes need to learn the indices of their keys in the total order of the union of all keys (i.e., all duplicate keys get the same index). Note that this implies constant-round solutions for related problems like selection or determining modes. (\sectionref{sec:sort}) \end{compactenum} We note that the randomized algorithms from previous work are structurally very different from the presented deterministic solutions. They rely on near-uniformity of load distributions obtained by choosing intermediate destinations uniformly and independently at random, in order to achieve bandwidth-efficient communication. In contrast, the presented approach achieves this in a style that has the flavor of a recursive sorting algorithm (with a single level of recursion). While our results do not constitute lower bounds for well-connected systems under the CONGEST model, they shed some light on why it is hard to prove impossibilities in this setting: Even without randomization, the overhead required for coordinating the efforts of the nodes is constant. In particular, any potential lower bound for the considered model must, up to constant factors, also apply in a system where each node can in each round send and receive $\Theta(n\log n)$ bits to and from arbitrary nodes in the system, with no further constraints on communication. We note that due to this observation, our results on sorting can equally well be obtained as corollaries of our routing result and Goodrich's sorting algorithm for a bulk-synchronous model~\cite{goodrich99}. However, the derived algorithm is more involved and requires at least an order of magnitude more rounds.
Since for such fundamental tasks as routing and sorting the amount of local computations and memory may be of concern, we show in \sectionref{sec:computations} how our algorithms can be adapted to require $\mathcal{O}(n\log n)$ computational steps and memory bits per node. Trivially, these bounds are near-optimal with respect to computations and optimal with respect to memory (if the size of the messages that are to be exchanged between the nodes is $\Theta(\log n)$). To complete the picture, in \sectionref{sec:size} we vary the parameters of bandwidth, message/key size, and number of messages/keys per node. Our techniques are sufficient to obtain asymptotically optimal results for almost the entire range of parameters. For keys of size $o(\log n)$, we show that in fact a huge number of keys can be sorted quickly; this is the special case for which our bounds might not be asymptotically tight. \section{Model}\label{sec:model} In brief, we assume a fully connected system of $n$ nodes under the congestion model. The nodes have unique identifiers $1$ to $n$ that are known to all other nodes. Computation proceeds in synchronous rounds, where in each round, each node performs arbitrary, finite computations,\footnote{Our algorithms will perform polynomial computations with small exponent only.} sends a message to each other node, and receives the messages sent by other nodes. Messages are of size $\mathcal{O}(\log n)$, i.e., in each message nodes may encode a constant number of integer numbers that are polynomially bounded in $n$.\footnote{We will not discuss this constraint when presenting our algorithms and only reason in a few places why messages are not too large; mostly, this should be obvious from the context.} To simplify the presentation, nodes will treat also themselves as receivers, i.e., node $i\in \{1,\ldots,n\}$ will send messages to itself like to any other node $j\neq i$. 
These model assumptions correspond to the congestion model on the complete graph $K_n=(V,\binom{V}{2})$ on the node set $V=\{1,\ldots,n\}$ (cf.~\cite{peleg00}). We stress that in a given round, a node may send different messages along each of its edges and thus can convey a total of $\Theta(n \log n)$ bits of information. As our results demonstrate, this makes the considered model much stronger than one where in any given round a node must broadcast the same $\Theta(\log n)$ bits to all other nodes. When measuring the complexity of computations performed by the nodes, we assume that basic arithmetic operations on $\mathcal{O}(\log n)$-sized values are a single computational step. \section{Routing}\label{sec:routing} In this section, we derive a deterministic solution to the following task introduced in~\cite{lenzen11}. \begin{problem}[Information Distribution Task]\label{prob:idt}\ \\ Each node $i\in V$ is given a set of $n$ messages of size $\mathcal{O}(\log n)$ \begin{equation*} {\cal S}_i=\{m_i^1,\ldots,m_i^n\} \end{equation*} with destinations $d(m_i^j)\in V$, $j\in \{1,\ldots,n\}$. Messages are globally lexicographically ordered by their source $i$, their destination $d(m_i^j)$, and $j$. For simplicity, each such message explicitly contains these values, in particular making them distinguishable. The goal is to deliver all messages to their destinations, minimizing the total number of rounds. By \begin{equation*} {\cal R}_k:=\left\{m_i^j\in \bigcup_{i\in V}{\cal S}_i\,\Bigg|\,d(m_i^j)=k\right\} \end{equation*} we denote the set of messages a node $k\in V$ shall receive. We require that $|{\cal R}_k|= n$ for all $k\in V$, i.e., also the number of messages a single node needs to receive is $n$. \end{problem} We remark that it is trivial to relax the requirement that each node needs to send and receive \emph{exactly} $n$ messages; this assumption is made to simplify the presentation. 
If each node sends/receives at most $n$ messages, our techniques can be applied without change, and instances with more than $n$ sent/received messages per node can be split up into smaller ones. \subsection{Basic Communication Primitives} Let us first establish some basic communication patterns our algorithms will employ. We will utilize the following classical result. \begin{theorem}[Koenig's Line Coloring Theorem]\label{theorem:koenig}\ \\ Every $d$-regular bipartite multigraph is a disjoint union of $d$ perfect matchings. \end{theorem} \begin{proof} See e.g.\ Theorem 1.4.18 in \cite{lovasz09}. \end{proof} We remark that such an optimal coloring can be computed efficiently~\cite{cole01}.\footnote{Also, a simple greedy coloring of the line graph results in at most $2d-1$ (imperfect) matchings, which is sufficient for our purposes. This will be used in \sectionref{sec:computations} to reduce the amount of computations performed by the algorithm.} Using this theorem, we can solve \probref{prob:idt} efficiently provided that it is known a priori to all nodes what the sources and destinations of messages are, an observation already made in~\cite{lenzen12}. We will however need a more general statement applying to subsets of nodes that want to communicate among themselves. To this end, we first formulate a generalization of the result from~\cite{lenzen12}. \begin{corollary}\label{coro:2_round} We are given a subset $W\subseteq V$ and a bulk of messages such that the following holds. \begin{compactenum} \item The source and destination of each message is in $W$. \item The source and destination of each message is known in advance to all nodes in $W$, and each source knows the contents of the messages to send. \item Each node is the source of $f|W|$ messages, where $f:=\lfloor n/|W|\rfloor$. \item Each node is the destination of $f|W|$ messages. \end{compactenum} Then a routing scheme to deliver all messages in $2$ rounds can be found efficiently. 
The routing scheme makes use of edges with at least one endpoint in $W$ only. \end{corollary} \begin{proof} Consider the bipartite multigraph $G=(S\dot{\cup}R,E)$ with $|S|=|R|=|W|$, where $S=\{1_s,\ldots,|W|_s\}$ and $R=\{1_r,\ldots,|W|_r\}$ represent the nodes in their roles as senders and receivers, respectively, and each input message at some node $i$ that is destined for some node $j$ induces an edge from $i_s$ to~$j_r$. By \theoremref{theorem:koenig}, we can color the edge set of $G$ with $m:=f|W|\leq n$ colors such that no two edges with the same color have a node in common. Moreover, as all nodes are aware of the source and destination of each message, they can deterministically and locally compute the same such coloring, without the need to communicate. Now, in the first communication round, each node sends its (unique) message of color $c\in \{1,\ldots,m\}$ to node $c$. As each node holds exactly one message of each color, at most one message is sent over each edge, i.e., by the assumptions of the corollary this step can indeed be performed in one round. Observe that this rule ensures that node $c$ receives exactly the $|W|$ messages of color $c$, one from each node in $W$. Because the coloring guarantees that each node in $W$ is the destination of exactly one message of each color, the messages node $c$ holds after the first round have pairwise distinct destinations. Therefore all messages can be delivered by directly sending them to their destinations in the second round, using each edge at most once; each node $j\in W$ then receives its $f|W|$ messages, one from each node $c\in\{1,\ldots,m\}$. \end{proof} We stress that we can apply this result concurrently to multiple disjoint sets $W$, provided that each of them satisfies the prerequisites of the corollary: since in each routing step, each edge has at least one endpoint in $W$, there will never be an edge which needs to convey more than one message in each direction. This is vital for the success of our algorithms.
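The proof of \theoremref{theorem:koenig} is constructive: repeatedly extracting a perfect matching from the (still regular) remaining multigraph yields the $d$ color classes, and the resulting coloring drives the two routing rounds of \corollaryref{coro:2_round}. The following Python sketch is a logical simulation on a toy $3$-regular instance with $W=\{0,1,2\}$ and $f=1$ (all names are ours); it checks that no directed edge carries more than one message per round:

```python
from collections import Counter

# (source, destination) per message: a 3-regular bipartite multigraph on W
W = [0, 1, 2]
msgs = [(0, 1), (0, 1), (0, 2), (1, 2), (1, 2), (1, 0),
        (2, 0), (2, 0), (2, 1)]

def extract_matching(edge_ids):
    """Kuhn-style augmenting paths: a perfect matching among the given
    edge ids (one exists since the remaining multigraph stays regular)."""
    match_r = {}                                  # right node -> edge id
    adj = {u: [e for e in edge_ids if msgs[e][0] == u] for u in W}
    def augment(u, seen):
        for e in adj[u]:
            v = msgs[e][1]
            if v not in seen:
                seen.add(v)
                if v not in match_r or augment(msgs[match_r[v]][0], seen):
                    match_r[v] = e
                    return True
        return False
    for u in W:
        assert augment(u, set())
    return set(match_r.values())

# Koenig decomposition: peel off one perfect matching (= color class) at a time
remaining, colour, num_colours = set(range(len(msgs))), {}, 0
while remaining:
    matched = extract_matching(remaining)
    for e in matched:
        colour[e] = num_colours
    remaining -= matched
    num_colours += 1

# Round 1: the colour-c message of each sender goes to relay node W[c];
# Round 2: each relay forwards its messages to their final destinations.
load1, load2 = Counter(), Counter()
relay_inbox = {w: [] for w in W}
for e, (src, dst) in enumerate(msgs):
    load1[(src, W[colour[e]])] += 1
    relay_inbox[W[colour[e]]].append(e)
inbox = {w: [] for w in W}
for relay, held in relay_inbox.items():
    for e in held:
        load2[(relay, msgs[e][1])] += 1
        inbox[msgs[e][1]].append(e)
```

Since each sender has exactly one edge of each colour and each receiver is the destination of exactly one edge of each colour, every relay ends round one holding messages with pairwise distinct destinations, mirroring the proof above.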
An observation that will prove crucial for our further reasoning is that for subsets of size at most $\sqrt{n}$, the amount of information that needs to be exchanged in order to establish common knowledge on the sources and destinations of messages becomes sufficiently small to be handled. Since this information itself consists, for each node, of $|W|$ numbers that need to be communicated to $|W|\leq n/|W|$ nodes---with sources and destination known a priori!---we can solve the problem for \emph{unknown} sources and destinations by applying the previous corollary twice. \begin{corollary}\label{coro:4_round} We are given a subset $W\subseteq V$, where $|W|\leq \sqrt{n}$, and a bulk of messages such that the following holds. \begin{compactenum} \item The source and destination of each message is in $W$. \item Each source knows the contents of the messages to send. \item Each node is the source of $f|W|$ messages, where $f:=\lfloor n/|W|\rfloor$. \item Each node is the destination of $f|W|$ messages. \end{compactenum} Then a routing scheme to deliver all messages in $4$ rounds can be found efficiently. The routing scheme makes use of edges with at least one endpoint in $W$ only. \end{corollary} \begin{proof} Each node in $W$ announces the number of messages it holds for each node in $W$ to all nodes in $W$. This requires each node in $W$ to send and receive $|W|^2\leq f|W|$ messages. As sources and destinations of these helper messages are known in advance, by \corollaryref{coro:2_round} we can perform this preprocessing in $2$ rounds. The information received establishes the preconditions of \corollaryref{coro:2_round} for the original set of messages, therefore the nodes now can deliver all messages in another two rounds. \end{proof} \subsection{Solving the Information Distribution Task} Equipped with the results from the previous section, we are ready to tackle \probref{prob:idt}. 
In the pseudocode of our algorithms, we will use a number of conventions to allow for a straightforward presentation. When we state that a message is \emph{moved} to another node, this means that the receiving node will store a copy and serve as the source of the message in subsequent rounds of the algorithm, whereas the original source may ``forget'' about the message. A step where messages are moved is thus an actual routing step of the algorithm; all other steps serve to prepare the routing steps. The current source of a message \emph{holds} it. Moreover, we will partition the node set into subsets of size $\sqrt{n}$, where for simplicity we assume that $\sqrt{n}$ is an integer. We will discuss the general case in the main theorem. We will frequently refer to these subsets, where $W$ will invariably denote any of the sets in its role as source, while $W'$ will denote any of the sets in its role as receiver (both with respect to the current step of the algorithm). Finally, we stress that statements about moving and sending of messages in the pseudocode do not imply that the algorithm does so by direct communication between sending and receiving nodes. Instead, we will discuss fast solutions to the respective (much simpler) routing problems in our proofs establishing that the described strategies can be implemented with small running times. This being said, let us turn our attention to \probref{prob:idt}. The high-level strategy of our solution is given in \algref{algo:high-level}.
\begin{algorithm}[ht] \caption{High-level strategy for solving \probref{prob:idt}.}\label{algo:high-level} Partition the nodes into the disjoint subsets $\{(i-1)\sqrt{n}+1,\ldots,i\sqrt{n}\}$ for $i\in \{1,\ldots,\sqrt{n}\}$.\\ Move the messages such that each such subset $W$ holds exactly $|W||W'|=n$ messages for each subset $W'$.\nllabel{line:high_2}\\ For each pair of subsets $W$, $W'$, move all messages destined to nodes in $W'$ within $W$ such that each node in $W$ holds exactly $|W'|=\sqrt{n}$ messages with destinations in $W'$.\nllabel{line:high_3}\\ For each pair of subsets $W$, $W'$, move all messages destined to nodes in $W'$ from $W$ to $W'$.\nllabel{line:high_4}\\ For each $W$, move all messages within $W$ to their destinations.\nllabel{line:high_5}\\ \end{algorithm} Clearly, following this strategy will deliver all messages to their destinations. In order to prove that it can be deterministically executed in a constant number of rounds, we now show that all individual steps can be performed in a constant number of rounds. Obviously, the first step requires no communication. We leave aside \stepref{line:high_2} for now and turn to \stepref{line:high_3}. \begin{corollary}\label{coro:step_3} \stepref{line:high_3} of \algref{algo:high-level} can be implemented in $4$ rounds. \end{corollary} \begin{proof} The proof is analogous to \corollaryref{coro:4_round}. First, each node in $W$ announces to each other node in $W$ the number of messages it holds for each set $W'$. By \corollaryref{coro:2_round}, this step can be completed in $2$ rounds, for all sets $W$ in parallel. With this information, the nodes in $W$ can deterministically compute (intermediate) destinations for each message in $W$ such that the resulting distribution of messages meets the condition imposed by \stepref{line:high_3}. Applying \corollaryref{coro:2_round} once more, this redistribution can be performed in another $2$ rounds, again for all sets $W$ concurrently. 
\end{proof} Trivially, \stepref{line:high_4} can be executed in a single round by having each node in $W$ send to each node in $W'$ exactly one of the messages it holds with destination in $W'$. According to \corollaryref{coro:4_round}, \stepref{line:high_5} can be performed in $4$ rounds. Regarding \stepref{line:high_2}, we follow similar ideas. \algref{algo:step_2} breaks our approach to this step down into smaller pieces. \begin{algorithm}[ht] \caption{\stepref{line:high_2} of \algref{algo:high-level} in more detail.}\label{algo:step_2} Each subset $W$ computes, for each set $W'$, the number of messages its constituents hold in total for nodes in $W'$. The results are announced to all nodes.\nllabel{line:step_2_1}\\ All nodes locally compute a pattern according to which the messages are to be moved between the sets. It satisfies that from each set $W$ to each set $W'$, $n$ messages need to be sent, and that in the resulting configuration, each subset $W$ holds exactly $|W||W'|=n$ messages for each subset $W'$.\nllabel{line:step_2_2}\\ All nodes in subset $W$ announce to all other nodes in $W$ the number of messages they need to move to each set $W'$ according to the previous step.\nllabel{line:step_2_3}\\ All nodes in $W$ compute a pattern for moving messages within $W$ so that the resulting distribution makes it possible to realize the exchange computed in Step~2 in a single round (i.e., each node in $W$ must hold exactly $|W'|=\sqrt{n}$ messages with (intermediate) destinations in $W'$).\nllabel{line:step_2_4}\\ The redistribution within the sets according to Step~4 is executed.\nllabel{line:step_2_5}\\ The redistribution among the sets computed in Step~2 is executed.\nllabel{line:step_2_6} \end{algorithm} We now show that following the sequence given in \algref{algo:step_2}, \stepref{line:high_2} of \algref{algo:high-level} requires only a constant number of communication rounds. 
\begin{lemma}\label{lemma:step_2} \stepref{line:high_2} of \algref{algo:high-level} can be implemented in $7$ rounds. \end{lemma} \begin{proof} We will show for each of the six steps of \algref{algo:step_2} that it can be performed in a constant number of rounds and that the information available to the nodes is sufficient to deterministically compute message exchange patterns the involved nodes agree upon. Clearly, \stepref{line:step_2_1} can be executed in two rounds. Each node in $W$ simply sends the number of messages with destinations in the $i^{th}$ set $W'$ it holds, where $i\in \{1,\ldots,\sqrt{n}\}$, to the $i^{th}$ node in $W$. The $i^{th}$ node in $W$ sums up the received values and announces the result to all nodes. Regarding \stepref{line:step_2_2}, consider the following bipartite multigraph $G=(S\dot{\cup} R,E)$. The sets $S$ and $R$ are of size $\sqrt{n}$ and represent the subsets $W$ in their role as senders and receivers, respectively. For each message held by a node in the $i^{th}$ set $W$ with destination in the $j^{th}$ set $W'$, we add an edge from $i\in S$ to $j\in R$. Note that after \stepref{line:step_2_1}, each node can locally construct this graph. As each of the $\sqrt{n}$ nodes in a set needs to send and receive $n$ messages, $G$ is of uniform degree $n^{3/2}$. By \theoremref{theorem:koenig}, we can color the edge set of $G$ with $n^{3/2}$ colors so that no two edges of the same color share a node. We require that a message of color $c\in \{1,\ldots,n^{3/2}\}$ is sent to the $(c \operatorname{mod} \sqrt{n})^{th}$ set. Hence, the requirement that exactly $n$ messages need to be sent from any set $W$ to any set $W'$ is met. By requiring that each node uses the same deterministic algorithm to color the edge set of $G$, we make sure that the exchange patterns computed by the nodes agree. 
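The constructive content of \theoremref{theorem:koenig} for a regular bipartite multigraph is its decomposition into perfect matchings, one per color. The following sketch (the helper names are ours; a simple augmenting-path matching suffices for a demonstration on small instances) performs this decomposition on a multiplicity matrix.

```python
def find_matching(mult):
    """Find a perfect matching in a regular bipartite multigraph given
    by a multiplicity matrix, via simple augmenting paths (Kuhn's
    algorithm); match[j] = row matched to column j."""
    k = len(mult)
    match = [-1] * k

    def augment(i, seen):
        for j in range(k):
            if mult[i][j] > 0 and j not in seen:
                seen.add(j)
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    for i in range(k):
        assert augment(i, set())  # exists by Hall's theorem (regularity)
    return match


def edge_color(mult, degree):
    """Decompose a degree-regular bipartite multigraph into `degree`
    perfect matchings, i.e., proper edge-color classes."""
    mult = [row[:] for row in mult]
    colors = []
    for _ in range(degree):
        match = find_matching(mult)
        for j, i in enumerate(match):
            mult[i][j] -= 1  # removing a perfect matching keeps regularity
        colors.append(match)
    return colors
```

In the proof, every node runs the same deterministic decomposition, so all nodes agree on the color assigned to every edge.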
Note that a subtlety here is that nodes cannot yet determine the precise color of the messages they hold, as they do not know the numbers of messages to sets $W'$ held by other nodes in $W$, and therefore do not know the index of their messages in the global order of the messages. However, each node has sufficient knowledge to compute the number of messages it holds with destination in set $W'$ (for each $W'$), as this number is determined by the total numbers of messages that need to be exchanged between each pair $W$ and $W'$ and the node index only. This makes it possible to perform \stepref{line:step_2_3} and then complete \stepref{line:step_2_2} based on the received information.\footnote{Formally, this can be seen as a deferred completion of \stepref{line:step_2_2}.} As observed before, \stepref{line:step_2_3} can be executed quickly: Each node in $W$ needs to announce $\sqrt{n}$ numbers to all other nodes in $W$, which by \corollaryref{coro:2_round} can be done in $2$ rounds. Now the nodes are capable of computing the color of each of their messages according to the assignment from \stepref{line:step_2_2}. With the information gathered in \stepref{line:step_2_3}, it is now feasible to perform \stepref{line:step_2_4}. This can be seen by applying \theoremref{theorem:koenig} again, for each set $W$ to the bipartite multigraph $G=(W\dot{\cup}R,E)$, where $R$ represents the $\sqrt{n}$ subsets $W'$ in their receiving role with respect to the pattern computed in \stepref{line:step_2_2}, and each edge corresponds to a message held by a node in $W$ with destination in some $W'$. The nodes can locally compute this graph due to the information they received in Steps~\ref{line:step_2_2} and~\ref{line:step_2_3}. As $G$ has degree $n$, we obtain an edge-coloring with $n$ colors. 
Each node in $W$ will move a message of color $i\in \{1,\ldots,n\}$ to the $(i \operatorname{mod} \sqrt{n})^{th}$ node in $W$, implying that each node will receive for each $W'$ exactly $\sqrt{n}$ messages with destination in $W'$. Since the exchange pattern computed in \stepref{line:step_2_4} is, for each $W$, known to all nodes in $W$, by \corollaryref{coro:2_round} we can perform \stepref{line:step_2_5} for all sets in parallel in $2$ rounds. Finally, \stepref{line:step_2_6} requires only a single round, since we have ensured that each node holds for each $W'$ exactly $\sqrt{n}$ messages with destination in $W'$ (according to the pattern computed in \stepref{line:step_2_2}), and thus can send exactly one of them to each of the nodes in $W'$ directly. Summing up the number of rounds required for each of the steps, we see that $2+0+2+0+2+1=7$ rounds are required in total, completing the proof. \end{proof} Overall, we have shown that each step of \algref{algo:high-level} can be executed in a constant number of rounds if $\sqrt{n}$ is an integer. It is not hard to generalize this result to arbitrary values of $n$ without incurring larger running times. \begin{theorem}\label{theorem:idt} \probref{prob:idt} can be solved deterministically within $16$ rounds. \end{theorem} \begin{proof} If $\sqrt{n}$ is an integer, the result immediately follows from \lemmaref{lemma:step_2}, \corollaryref{coro:step_3}, and \corollaryref{coro:4_round}, taking into account that the fourth step of the high-level strategy requires one round. If $\sqrt{n}$ is not an integer, consider the following three sets of nodes: \begin{eqnarray*} V_1&:=&\{1,\ldots,\lfloor \sqrt{n}\rfloor^2\},\\ V_2&:=&\{n-\lfloor \sqrt{n}\rfloor^2+1,\ldots,n\}, \text{ and}\\ V_3&:=&\{1,\ldots,n-\lfloor \sqrt{n}\rfloor^2\}\cup \{\lfloor \sqrt{n}\rfloor^2+1,\ldots,n\}. \end{eqnarray*} $V_1$ and $V_2$ satisfy $|V_1|=|V_2|=\lfloor \sqrt{n}\rfloor^2$. 
Hence, we can apply the result for an integer root to the subsets of messages for which either both sender and receiver are in $V_1$ or, symmetrically, in $V_2$. Doing so in parallel will increase the message size by a factor of at most $2$. Messages whose sender and receiver are both in $V_1\cap V_2$ can simply be deleted from the input of one of the two concurrently running instances of the algorithm; by adding empty ``dummy'' messages, we see that it is irrelevant that nodes may send or receive fewer than $n$ messages in the individual instances. Regarding $V_3$, denote for each node $i\in V_3$ by $S_i\subseteq {\cal S}_i$ the subset of messages for which $i$ and the respective receiver are neither both in $V_1$ nor both in $V_2$. In other words, for each message in $S_i$ either $i\in V_1\cap V_3$ and the receiver is in $V_2\cap V_3$ or vice versa. Each node $i\in V_3$ moves the $j^{th}$ message in $S_i$ to node $j$ (one round). No node will receive more than $|V_2\cap V_3|=|V_1\cap V_3|$ messages with destinations in $V_1\cap V_3$, as there are no more than this number of nodes sending such messages. Likewise, at most $|V_2\cap V_3|$ messages for nodes in $V_2\cap V_3$ are received. Hence, in the subsequent round, all nodes can move the messages they received for nodes in $V_1\cap V_3$ to nodes in $V_1\cap V_3$, and the ones received for nodes in $V_2\cap V_3$ to nodes in $V_2\cap V_3$ (one round). Finally, we apply \corollaryref{coro:4_round} to each of the two sets to see that the messages $\bigcup_{i\in V_3}S_i$ can be delivered within $4$ rounds. Overall, this procedure requires $6$ rounds, and running it in parallel with the two instances dealing with other messages will not increase message size beyond $\mathcal{O}(\log n)$. The statement of the theorem follows. 
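The three node sets used in the proof can be computed locally by every node. A minimal sketch (the function name is ours):

```python
import math

def split_node_set(n):
    """Return the sets V1, V2, V3 from the proof for nodes 1..n,
    where s is the largest integer square not exceeding n."""
    s = math.isqrt(n) ** 2
    V = set(range(1, n + 1))
    V1 = set(range(1, s + 1))
    V2 = set(range(n - s + 1, n + 1))
    V3 = (V - V1) | (V - V2)  # complement of the intersection of V1 and V2
    return V1, V2, V3
```

One can check directly that $|V_1|=|V_2|=\lfloor\sqrt{n}\rfloor^2$, that $V_1\cup V_2$ covers all nodes, and that $|V_1\cap V_3|=|V_2\cap V_3|=n-\lfloor\sqrt{n}\rfloor^2$, the facts used in the proof.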
\end{proof} \section{Varying Message and Key Size}\label{sec:size} In this section, we discuss scenarios where the number and size of messages and keys for Problems~\ref{prob:idt} and~\ref{prob:sort} vary. This also motivates reconsidering the bound on the number of bits that nodes can exchange in each round: For message/key size of $\Theta(\log n)$, communicating $B\in \mathcal{O}(\log n)$ bits over each edge in each round was shown to be sufficient, and for smaller $B$ the number of rounds clearly must increase accordingly.\footnote{Formally proving a lower bound is trivial in both cases, as nodes need to communicate their $n$ messages to deliver all messages or their $n$ keys to enable determining the correct indices of all keys, respectively.} We will see that most ranges for these parameters can be handled asymptotically optimally by the presented techniques. For the remaining cases, we will give solutions in this section. We remark that one can easily verify that the techniques we propose in the sequel are also efficient with respect to local computations and memory requirements. \subsection{Large Messages or Keys} If messages or keys contain $\omega(\log n)$ bits and $B$ is not sufficiently large to communicate a single value in one message, splitting these values into multiple messages is a viable option. For instance, with bandwidth $B\in \Theta(\log n)$, a key of size $\Theta(\log^2 n)$ would be split into $\Theta(\log n)$ separate messages, permitting the receiver to reconstruct the key from the individual messages. This simple argument shows that it is in fact not the total number of messages (or keys) that is decisive for the more general versions of Problems~\ref{prob:idt} and~\ref{prob:sort}, but the number of bits that need to be sent and received by each node. If this number is in $\Omega(n \log n)$, the presented techniques are asymptotically optimal. 
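The splitting of large values is elementary; as a sketch (function names are ours), a \texttt{bits}-bit key is cut into $\lceil \mathit{bits}/B\rceil$ chunks of $B$ bits each, which the receiver reassembles:

```python
def split_key(value, bits, B):
    """Cut a `bits`-bit key into ceil(bits/B) messages of B bits each,
    least significant chunk first."""
    return [(value >> i) & ((1 << B) - 1) for i in range(0, bits, B)]

def reassemble(chunks, B):
    """Inverse of split_key: recombine the B-bit chunks into the key."""
    return sum(c << (i * B) for i, c in enumerate(chunks))
```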
\subsection{Small Messages} If we assume that in \probref{prob:idt} the size of messages is bounded by $M\in o(\log n)$, we may hope that we can solve the problem in a constant number of rounds even if we merely transmit $B\in \mathcal{O}(M)$ bits along each edge. With the additional assumption that nodes can identify the sender of a message even if the identifier is not included, this can be achieved if sources and destinations of messages are known in advance: We apply \corollaryref{coro:2_round} and observe that because the communication pattern is known to all nodes, knowing the sender of a message is sufficient to perform the communication and infer the original source of each message at the destination. On the other hand, if sources/destinations are unknown, consider inputs where $\Omega(n^2)$ messages cannot be sent directly from their sources to their destinations (i.e., using the respective source-receiver edge) within a constant number of rounds. Each of these messages needs to be forwarded in a way that preserves its destination, i.e., at least one of the forwarding nodes must learn about the destination of the message (otherwise correct delivery cannot be guaranteed). Explicitly encoding these values for $\Omega(n^2)$ messages requires $\Omega(n^2\log n)$ bits. Implicit encoding can be done by means of the round number or relations between the communication partners' identifiers. However, encoding bits by introducing constraints reduces (at least for worst-case inputs) the number of messages that can be sent by a node accordingly. These considerations show that in the case of \probref{prob:idt}, small messages do not simplify the task. \subsection{Small Keys} The situation is different for \probref{prob:sort}. Note that we need to drop the assumption that all keys can be distinguished, as this would necessitate key size $\Omega(\log n)$. In contrast, if keys can be encoded with $o(\log n)$ bits, there are merely $n^{o(1)}$ different keys. 
Hence, we can statically assign disjoint sets of $\log^2 n$ nodes to each key $\kappa$ (for simplicity we assume that $\log n$ is an integer). In the first round, each node encodes in binary the number of copies of $\kappa$ it holds and sends the $i^{th}$ bit to $\log n$ of these nodes. The $j^{th}$ of the $\log n$ receiving nodes of bit $i$ counts the number of nodes which sent it a $1$, encodes this number in binary, and transmits the $j^{th}$ bit to all nodes. With this information, all nodes are capable of computing the total number of copies of $\kappa$ in the system. In order to assign an order to the different copies of $\kappa$ in the system (if desired), in the second round we can require that in addition the $j^{th}$ node dealing with bit $i$ sends to node $k\in \{1,\ldots,n\}$ the $j^{th}$ bit of an encoding of the number of nodes $k'\in \{1,\ldots,k-1\}$ that sent a $1$ in the first round. This way, node $k$ can also compute the number of copies of $\kappa$ held by nodes $k'<k$, which is sufficient to order the keys as intended. It is noteworthy that this technique can actually be used to order a much larger total number of keys, since we ``used'' very few of the nodes. If we have $K\leq n/\log^2 n$ different keys, we can assign $m:=\lfloor n/K\rfloor$ nodes to each key. This permits handling any binary encoding of up to $\lfloor \sqrt{m}\rfloor$ many bits in the above manner, potentially allowing for huge numbers of keys. At the same time, messages contain merely $2$ bits (or a single bit, if we accept $3$ rounds of communication). More generally, each node can be concurrently responsible for $B$ bits, improving the power of the approach further for non-constant values of $B$. \section{Sorting}\label{sec:sort} In this section, we present a deterministic sorting algorithm. The problem formulation is essentially equivalent to the one in~\cite{patt-shamir11}. 
\begin{problem}[Sorting]\label{prob:sort} Each node is given $n$ keys of size $\mathcal{O}(\log n)$ (i.e., a key fits into a message). We assume w.l.o.g.\ that all keys are different.\footnote{Otherwise we order the keys lexicographically by key, node whose input contains the key, and a local enumeration of identical keys at each node.} Node $i$ needs to learn the keys with indices $(i-1)n+1,\ldots,in$ according to the total order of all keys. \end{problem} \subsection{Sorting Fewer Keys with Fewer Nodes} Again, we assume for simplicity that $\sqrt{n}$ is an integer and deal with the general case later on. Our algorithm will utilize a subroutine that can sort up to $2n^{3/2}$ keys within a subset $W\subset V$ of $\sqrt{n}$ nodes, communicating along edges with at least one endpoint in the respective subset of nodes. The latter condition ensures that we can run the routine in parallel for disjoint subsets $W$. We assume that each of the nodes in $W$ initially holds $2n$ keys. The pseudocode of our approach is given in \algref{algo:sub_sort}. \begin{algorithm}[ht] \caption{Sorting $2n^{3/2}$ keys with $|W|=\sqrt{n}$ nodes. Each node in $W$ has $2n$ input keys and learns their indices in the total order of all $2n^{3/2}$ keys.}\label{algo:sub_sort} Each node in $W$ locally sorts its keys and selects every $(2\sqrt{n})^{th}$ key according to this order (i.e., a key of local index $i$ is selected if $i \operatorname{mod} 2\sqrt{n} = 0$).\nllabel{line:sub_1}\\ Each node in $W$ announces the selected keys to all other nodes in $W$. \nllabel{line:sub_2}\\ Each node in $W$ locally sorts the union of the received keys and selects every $\sqrt{n}^{th}$ key according to this order. 
We call such a key a \emph{delimiter}.\nllabel{line:sub_3}\\ Each node $i\in W$ splits its original input into $\sqrt{n}$ subsets, where the $j^{th}$ subset $K_{i,j}$ contains all keys that are larger than the $(j-1)^{th}$ delimiter (for $j=1$ this condition does not apply) and smaller than or equal to the $j^{th}$ delimiter.\nllabel{line:sub_4}\\ Each node $i\in W$ announces $|K_{i,j}|$, for each $j$, to all nodes in $W$.\nllabel{line:sub_5}\\ Each node $i\in W$ sends $K_{i,j}$ to the $j^{th}$ node in $W$.\nllabel{line:sub_6}\\ Each node in $W$ locally sorts the received keys. The sorted sequence now consists of the concatenation of the sorted sequences in the order of the node identifiers.\nllabel{line:sub_7}\\ Keys are redistributed such that each node receives $2n$ keys and the order is maintained.\nllabel{line:sub_8} \end{algorithm} Let us start out with the correctness of the proposed scheme. \begin{lemma}\label{lemma:few_keys_correct} When executing \algref{algo:sub_sort}, the nodes in $W$ are indeed capable of computing their input keys' indices in the order on the union of the input keys of the nodes in $W$. \end{lemma} \begin{proof} Observe that because all nodes use the same input in \stepref{line:sub_3}, they compute the same set of delimiters. The set of all keys is the union $\bigcup_{j=1}^{\sqrt{n}}\bigcup_{i\in W}K_{i,j}$, and the sets $K_{i,j}$ are disjoint. As the $K_{i,j}$ are defined by comparison with the delimiters, we know that all keys in $K_{i,j}$ are larger than all keys in $K_{i',j'}$ for all $i'\in W$ and $j'<j$, and smaller than all keys in $K_{i',j'}$ for all $i'\in W$ and $j'>j$. Since in \stepref{line:sub_7} the received keys are locally sorted and \stepref{line:sub_8} maintains the resulting order, correctness follows. \end{proof} Before turning to the running time of the algorithm, we show that the partitioning of the keys by the delimiters is well-balanced. 
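This balance can be checked on concrete inputs with a centralized simulation of Steps~1--4 of \algref{algo:sub_sort} (the helper names are ours; we deviate slightly in that the last bucket also absorbs the few keys above the largest delimiter):

```python
import bisect

def partition_by_delimiters(inputs):
    """inputs: w lists of 2*w*w distinct keys each (w = sqrt(n)).
    Each node samples every (2w)-th of its sorted keys; every w-th of
    the sorted samples becomes a delimiter; bucket j then collects the
    keys in (delimiter[j-1], delimiter[j]]."""
    w = len(inputs)
    samples = []
    for keys in inputs:
        keys.sort()
        samples.extend(keys[2 * w - 1 :: 2 * w])  # every (2w)-th key
    samples.sort()
    delimiters = samples[w - 1 :: w]              # every w-th sample
    buckets = [[] for _ in range(w)]
    for keys in inputs:
        for key in keys:
            # index of the first delimiter >= key; the last bucket also
            # catches keys above the largest delimiter
            j = min(bisect.bisect_left(delimiters, key), w - 1)
            buckets[j].append(key)
    return buckets
```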
\begin{lemma}\label{lemma:balanced} When executing \algref{algo:sub_sort}, for each $j\in \{1,\ldots,\sqrt{n}\}$ it holds that \begin{equation*} \left|\bigcup_{i\in W}K_{i,j}\right|< 4n. \end{equation*} \end{lemma} \begin{proof} Due to the choice of the delimiters, $\bigcup_{i\in W}K_{i,j}$ contains exactly $\sqrt{n}$ of the keys selected in \stepref{line:sub_1} of the algorithm. Denote by $d_i$ the number of such selected keys in $K_{i,j}$. As in \stepref{line:sub_1} each node selects every $(2\sqrt{n})^{th}$ of its keys and the set $K_{i,j}$ is a contiguous subset of the ordered sequence of input keys at node $i$, we have that $|K_{i,j}|<2\sqrt{n}(d_i+1)$. It follows that \begin{equation*} \left|\bigcup_{i\in W}K_{i,j}\right|=\sum_{i\in W}|K_{i,j}| <2\sqrt{n}\sum_{i\in W}(d_i+1)=2\sqrt{n}(\sqrt{n}+|W|)=4n.\qedhere \end{equation*} \end{proof} We are now in a position to complete our analysis of the subroutine. \begin{lemma}\label{lemma:few_keys} Given a subset $W\subseteq V$ of size $\sqrt{n}$ such that each $w\in W$ holds $2n$ keys, each node in $W$ can learn about the indices of its keys in the total order of all keys held by nodes in $W$ within $10$ rounds. Furthermore, only edges with at least one endpoint in $W$ are used for this purpose. \end{lemma} \begin{proof} By \lemmaref{lemma:few_keys_correct}, \algref{algo:sub_sort} is correct. Hence, it remains to show that it can be implemented with $10$ rounds of communication, using no edges with both endpoints outside $W$. Steps~\ref{line:sub_1}, \ref{line:sub_3}, \ref{line:sub_4}, and~\ref{line:sub_7} involve local computations only. Since $|W|=\sqrt{n}$ and each node selects exactly $\sqrt{n}$ keys it needs to announce to all other nodes, according to \corollaryref{coro:2_round} \stepref{line:sub_2} can be performed in $2$ rounds. The same holds true for \stepref{line:sub_5}, as again each node needs to announce $|W|=\sqrt{n}$ values to each other node in $W$. 
In \stepref{line:sub_6}, each node sends its $2n$ input keys and, by \lemmaref{lemma:balanced}, receives at most $4n$ keys. By bundling a constant number of keys in each message, nodes need to send and receive at most $n=|W|\cdot n/|W|$ messages. Hence, \corollaryref{coro:4_round} states that this step can be completed in $4$ rounds. Regarding \stepref{line:sub_8}, observe that due to \stepref{line:sub_5} each node knows how many keys each other node holds at the beginning of the step. Again bundling a constant number of keys into each message, we thus can apply \corollaryref{coro:2_round} to complete \stepref{line:sub_8} in $2$ rounds. In total, we thus require $0+2+0+0+2+4+2=10$ communication rounds. As we invoked Corollaries~\ref{coro:2_round} and~\ref{coro:4_round} in order to define the communication pattern, it immediately follows from the corollaries that all communication is on edges with at least one endpoint in~$W$. \end{proof} \subsection{Sorting All Keys} With this subroutine at hand, we can move on to \probref{prob:sort}. Our solution follows the same pattern as \algref{algo:sub_sort}, where the subroutine in combination with \theoremref{theorem:idt} enables sets of size $\sqrt{n}$ to take over the role that individual nodes had in \algref{algo:sub_sort}. This increases the processing power by a factor of $\sqrt{n}$, which is sufficient to deal with all $n^2$ keys. \algref{algo:sort} shows the high-level structure of our solution. 
\begin{algorithm}[ht] \caption{Solving \probref{prob:sort}.}\label{algo:sort} Each node locally sorts its input and selects every $\sqrt{n}^{th}$ key (i.e., the index in the local order modulo $\sqrt{n}$ equals $0$).\nllabel{line:sort_1}\\ Each node transmits its $i^{th}$ selected key to node $i$.\nllabel{line:sort_2}\\ Using \algref{algo:sub_sort}, nodes $1,\ldots,\sqrt{n}$ sort the $n^{3/2}$ keys they received in total (i.e., determine the respective indices in the induced order).\nllabel{line:sort_3}\\ Out of the sorted subsequence, every $n^{th}$ key is selected as \emph{delimiter} and announced to all nodes (i.e., there is a total of $\sqrt{n}$ delimiters).\nllabel{line:sort_4}\\ Each node $i\in V$ splits its original input into $\sqrt{n}$ subsets, where the $j^{th}$ subset $K_{i,j}$ contains all keys that are larger than the $(j-1)^{th}$ delimiter (for $j=1$ this condition does not apply) and smaller than or equal to the $j^{th}$ delimiter.\nllabel{line:sort_5}\\ The nodes are partitioned into $\sqrt{n}$ disjoint sets $W$ of size $\sqrt{n}$. Each node $i\in V$ sends $K_{i,j}$ to the $j^{th}$ set $W$ (i.e., each node in $W$ receives either $\lfloor|K_{i,j}|/|W|\rfloor$ or $\lceil|K_{i,j}|/|W|\rceil$ keys, and each key is sent to exactly one node).\nllabel{line:sort_6}\\ Using \algref{algo:sub_sort}, the sets $W$ sort the received keys.\nllabel{line:sort_7}\\ Keys are redistributed such that each node receives $n$ keys and the order is maintained.\nllabel{line:sort_8} \end{algorithm} The techniques and results from the previous sections are sufficient to derive our second main theorem without further delay. \begin{theorem}\label{theorem:sort} \probref{prob:sort} can be solved in $37$ rounds. \end{theorem} \begin{proof} We discuss the special case of $\sqrt{n}\in \mathbb{N}$ first, to which we can apply \algref{algo:sort}. Correctness of the algorithm follows analogously to \lemmaref{lemma:few_keys_correct}. 
Steps~\ref{line:sort_1} and \ref{line:sort_5} require local computations only. \stepref{line:sort_2} involves one round of communication. \stepref{line:sort_3} calls \algref{algo:sub_sort}, which by \lemmaref{lemma:few_keys} consumes $10$ rounds. However, we can skip the last step of the algorithm and instead directly execute \stepref{line:sort_4}. This takes merely $2$ rounds, since there are $\sqrt{n}$ nodes each of which needs to announce at most $2\sqrt{n}$ values to all nodes and we can bundle two values in one message. Regarding \stepref{line:sort_6}, observe that, analogously to \lemmaref{lemma:balanced}, we have for each $j\in \{1,\ldots,\sqrt{n}\}$ that \begin{equation*} \left|\bigcup_{i\in V}K_{i,j}\right|=\sum_{i\in V}|K_{i,j}|< \sqrt{n}(n+|V|)=2n^{3/2}. \end{equation*} Hence, each node needs to send at most $n$ keys and receive at most $2n$ keys. Bundling up to two keys in each message, nodes need to send and receive at most $n$ messages. Therefore, by \theoremref{theorem:idt}, \stepref{line:sort_6} can be completed within $16$ rounds. \stepref{line:sort_7} again calls \algref{algo:sub_sort}, this time in parallel for all sets $W$. Nonetheless, by \lemmaref{lemma:few_keys} this requires only $10$ rounds, since the edges used for communication are disjoint. Also here, we can skip the last step of the subroutine and directly move on to \stepref{line:sort_8}. Again, \corollaryref{coro:2_round} implies that this step can be completed in $2$ rounds. Overall, the algorithm runs for $0+1+8+2+0+16+8+2=37$ rounds. With respect to non-integer values of $\sqrt{n}$, observe that we can increase message size by any constant factor to accommodate more keys in each message. This way we can work with subsets of size $\lfloor \sqrt{n}\rfloor$ and similarly select keys and delimiters in Steps~\ref{line:sort_1} and~\ref{line:sort_4} such that the adapted algorithm can be completed in $37$ rounds as well. 
\end{proof} We conclude this section with a corollary stating that the slightly modified task of determining each input key's position in a global enumeration of the \emph{different} keys that are present in the system can also be solved efficiently. Note that this implies constant-round solutions for determining modes and selection as well. \begin{corollary}\label{coro:sort} Consider the variant of \probref{prob:sort} in which each node is required to determine the index of its input keys in the total order of the union of all input keys. This task can be solved deterministically in a constant number of rounds. \end{corollary} \begin{proof} After applying the sorting algorithm, each node announces (i) its smallest and largest key, (ii) how many copies of each of these two keys it holds, and (iii) the number of distinct keys it holds to all other nodes. This takes one round, and from this information all nodes can compute the indices in the non-repetitive sorted sequence for their keys. Applying \theoremref{theorem:idt}, we can inform the nodes from whose input the keys originated of these values in a constant number of rounds. \end{proof}
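The computation in this proof can be simulated centrally as follows (a sketch with our own helper names; we only use each node's minimum key, maximum key, and number of distinct keys, which is the part of the announced information needed to offset local distinct ranks into the global enumeration, and we omit the ordering of identical copies):

```python
def distinct_indices(holdings):
    """holdings[v]: sorted list of keys node v holds after the sorting
    algorithm (globally sorted across nodes).  Returns, per node, the
    index of each of its keys in the enumeration of distinct keys."""
    # Each node announces (min key, max key, number of distinct keys).
    summaries = [(h[0], h[-1], len(set(h))) for h in holdings]
    # offsets[v] = distinct keys held by nodes < v, counting a key that
    # crosses the boundary between nodes v-1 and v only once.
    offsets = [0]
    for v in range(1, len(holdings)):
        shared = 1 if summaries[v - 1][1] == summaries[v][0] else 0
        offsets.append(offsets[-1] + summaries[v - 1][2] - shared)
    result = []
    for v, h in enumerate(holdings):
        local, idx = {}, offsets[v]
        for key in h:
            if key not in local:  # all copies of a key share one index
                local[key] = idx
                idx += 1
        result.append([local[key] for key in h])
    return result
```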
1,314,259,996,453
arxiv
\section{Introduction} In this paper we show the following \begin{thm}\label{thm1.1} Let $C$ be an $n$-dimensional complex torus embedded in a complex manifold $M$ of dimensional $n+d$. Assume that $T_CM$, the restriction of $TM$ on $C$, splits as $TC\oplus N_C$. Suppose that the normal bundle of $C$ in $M$ admits transition functions that are Hermitian matrices and satisfy a {\it non-resonant Diophantine} condition $($see Definition~$\ref{dioph})$. Then a neighborhood of $C$ in $M$ is biholomorphic to a neighborhood of the zero section in the normal bundle. \end{thm} We first describe the organization of the proof of our main theorem. A complex torus $C$ can be identified with the quotient of ${\mathbb C}^n$ by a lattice $\Lambda$ spanned by the standard unit vectors $e_1,\dots, e_n$ in ${\mathbb C}^n$ and $n$ additional vectors $e'_1,\dots, e'_n$ in ${\mathbb C}^n$, where $\operatorname{Im} e'_1,\dots, \operatorname{Im} e'_n$ are linearly independent vectors in ${\mathbb R}^n$. Let $\Lambda'$ be the lattice in the cylinder ${\mathcal C}:={\mathbb R}^n/{\mathbb Z}^n+i{\mathbb R}^n$ spanned by $ e_1',\dots, e'_n\mod {\mathbb Z}^n$. There are two coverings for the torus $C={\mathbb C}^n/\Lambda={\mathcal C}/{\Lambda'}$: the universal covering $\pi\colon{\mathbb C}^n\to C$ and the covering by cylinder, $\pi_{\mathcal C}\colon{\mathcal C}\to C$ that extends to a covering $\mathcal M$ over $M$. In section two we recall some facts about factors of automorphy for vector bundles on $C$ via the covering by ${\mathbb C}^n$. In section three, we study the flat vector bundles on $C$. The pull back of the flat vector bundle $N_C$ to the cylinder ${\mathcal C}$ is the normal bundle $N_{{\mathcal C}}$ of ${\mathcal C}$ in $\mathcal M$. We show that $N_{{\mathcal C}}$ is always the holomorphically trivial vector bundle ${\mathcal C}\times{\mathbb C}^d$. 
By ``vertical coordinates", we mean ``coordinates on ${\mathbb C}^d$", the normal component of the normal bundle $N_C$, while ``horizontal coordinates" mean the tangential components of $N_C$. Since ${\mathcal C}$ is a Stein manifold, a theorem of Siu \cite{siu-stein} says that a neighborhood of ${\mathcal C}$ in $\mathcal M$ is biholomorphic to a neighborhood of the zero section in its normal bundle, which is trivial as mentioned above. We show that the holomorphic classification of neighborhoods $M$ of $C$ with flat $N_C$ is equivalent to the holomorphic classification of the family of the deck transformations of coverings $\mathcal M$ of $M$ in a neighborhood of $\mathcal C$. These deck transformations are ``higher-order" (in the vertical coordinates) perturbations $\tau_1,\dots, \tau_n$ of $\hat\tau_1,\dots, \hat\tau_n$, where the latter are the deck transformations of the covering of $\widetilde N_C$ over $N_C$. In order to find a biholomorphism between a neighborhood of $C$ in $M$ and a neighborhood of its zero section in $N_C$, it is sufficient to find a biholomorphism that conjugates $\{\tau_1,\dots, \tau_n\}$ to $\{\hat\tau_1,\dots, \hat\tau_n\}$. There are two useful features. First, since the fundamental group of $C$ is abelian, the deck transformations $\tau_1,\dots,\tau_n$ commute pairwise. Second, we can also introduce suitable coordinates on ${\mathcal C}$ so that the "horizontal" components of deck transformations have diagonal linear parts. In such a way the classification of neighborhoods of $C$ is reduced to a more attainable classification of deck transformations. While the full theory for this classification is out the scope of this paper, we study the case when $N_C$ admits Hermitian transition functions. Since a Hermitian transition matrix must be locally constant, we call such an $N_C$ {\it Hermitian flat}. The convergence proof for \rt{thm1.1} is given in section four. 
It relies on a Newton rapid convergence scheme adapted to our situation based on an appropriate Diophantine condition among the lattice and the normal bundle. At step $k$ of the iteration scheme, let $\delta_k$ be the error of the deck transformations $\{\tau_1^{(k)},\dots, \tau_n^{(k)}$\} defined on domain $D^{(k)}$ to $\hat\tau_1,\dots, \hat\tau_n$ in suitable norms. By an appropriate transformation $\Phi^{(k)}$, we conjugate to a new set of deck transformations $\{\tau_1^{(k+1)},\dots, \tau_n^{(k+1)}\}$ of which the error to the linear ones is now $\delta_{k+1}$ on a slightly smaller domain $D^{(k+1)}$. Using our Diophantine conditions, related to the lattice $\Lambda$ and the normal bundle, we show that the sequence $\Phi^{(k)}\circ \cdots \circ \Phi^{(1)}$ converges to a holomorphic transformation $\Phi$ on an open domain $D^{(\infty)}$ where we linearize $\{\tau_1,\dots,\tau_n\}$. We now describe closely related previous results. Our work is motivated by work of Arnol'd and Ilyashnko-Pyartli. Our main theorem was proved in \cite{arnold-embed} when $C$ is an elliptic curve ($n=1$) and $N_C$ has rank one ($d=1$). Il'yashenko-Pyartli~\cite{ilyashenko-pyartly-embed} extended Arnol'd's result to the case when the torus is the product of elliptic curves together with a normal bundle which is a direct sum of line bundles, while \rt{thm1.1} deals with general complex tori. We also assume that $N_C$ is {\it non-resonant}, a condition that is weaker than the non-resonant condition used by Il'yashenko-Pyartli. Our small-divisor condition is also weaker. Of course, the study of neighborhood of embedded compact complex manifolds has a long history. Also, see some recent work \cite{hwang-annals,koike-fourier,loray-moscou}. We refer to \cite{MR4392029} for some references and a different approach to this range of questions. {\bf Acknowledgments.} This work benefits from helpful discussions with Jean-Pierre Demailly. Part of work was finished when X.~G. 
was supported by CNRS and UCA for a visiting position at UCA. \setcounter{thm}{0}\setcounter{equation}{0} \section{Vector bundles on tori and factors of automorphy} In this section, we identify vector bundles on a complex torus with factors of automorphy. The latter give a useful alternative description of vector bundles on a higher-dimensional complex torus $C$ and of isomorphisms between two vector bundles. General references for line bundles on complex tori are \cite{birkenhake-lange,debarre-book,iena-05} and \cite[p.~307]{GH-book}. Let $\Lambda$ be a $2n$-dimensional lattice in $\mathbb{C}^n$. We may assume that $\Lambda$ is defined by $2n$ vectors $e_1,\dots, e_n, e'_1,\dots,e'_n$ of $\mathbb{C}^n$, where $e_i=(0,\ldots, 0,1,0,\ldots, 0)$ with $1$ at the $i$-th place, $e'_i=(e'_{i,1},\ldots,e'_{i,n})$, and the matrix $$\operatorname{Im} \tau:=(\operatorname{Im}{e'_{i,j}})_{1\leq i,j\leq n}=:({e''_{i,j}})_{1\leq i,j\leq n}$$ is invertible \cite[exerc. 2, p.~21]{birkenhake-lange}. The compact complex manifold $C:=\mathbb{C}^n/\Lambda$ is called an ($n$-dimensional) complex torus. Unless the lattice is equivalent to one defined by a diagonal matrix $e'$, $C$ is not biholomorphic to a product of one-dimensional tori. Let $\pi: {\mathbb C}^n\rightarrow C$ be the universal cover of $C$. Its group $\Gamma$ of deck transformations consists of the translations $$ T_\lambda\colon z\to z+\lambda, \quad \lambda\in\Lambda. $$ Note that $\Gamma$ is abelian and is isomorphic to ${\mathbb Z}^{2n}$. $\Gamma$ is also isomorphic to $\pi_1(C,0)$ since $\mathbb C^n$ is the universal covering of $C$. Next, we consider equivalence relations for holomorphic vector bundles on $C$ and $\mathbb C^n$, following the realization proof of Theorem 3.2 in \cite{iena-05}. Let $E$ be a vector bundle of rank $d$ over $C$. The pull-back bundle $\pi^*E$ on $\mathbb C^n$ is trivial and has global coordinates $\hat\xi$.
Let $\{U_j\}$ be an open covering of $C$ so that coordinates $\xi_j=(\xi_{j,1},\dots, \xi_{j,d})^t$ of $E$ are well-defined (injective) on $U_j$. Then we have \eq{hxixi} \hat\xi=h_j\xi_j(\pi), \end{equation} where $h_j$ is a non-singular holomorphic matrix on $\pi^{-1}(U_j)$. The transition functions $g_{kj}$ satisfy \eq{gkjh} g_{kj}(\pi)=h_k^{-1}h_j, \quad \text{on $\pi^{-1}(U_k)\cap\pi^{-1}(U_j)$}. \end{equation} For any $z,\lambda$, we know that both $\pi(z+\lambda)$ and $\pi(z)$ are in the same $U_j$ for some $j$. Then we have $$ \hat\xi(z+\lambda)=h_j(z+\lambda)\xi_j(\pi(z))=h_j(z+\lambda)h_j(z)^{-1}\hat\xi(z). $$ We can define \eq{rholaz} \rho(\lambda,z)=h_j(z+\lambda)h_j(z)^{-1}, \quad z\in\pi^{-1}(U_j) \end{equation} as the latter is independent of the choice of $j$ by \re{gkjh}. Therefore, \begin{gather}\label{global-coord} \hat\xi(\lambda+z)=\rho(\lambda,z)\hat\xi(z), \quad \rho(\lambda,z)\in GL(d,\mathbb C),\\ \rho\colon\Lambda\times \mathbb C^n\to GL(d,\mathbb C). \end{gather} Here $\rho$ is called a {\it factor of automorphy}. We can verify that \eq{abelP} \rho(\lambda+\mu,z)=\rho(\lambda,\mu+z)\rho(\mu,z). \end{equation} In particular, if all $\rho(\lambda,z)=\rho(\lambda)$ are independent of $z$, then $\rho(\Lambda)$ is an abelian group. The above construction from a vector bundle $E$ on $C$ to a factor of automorphy can be reversed. Namely, given \re{global-coord}-\re{abelP}, define the vector bundle $E$ on $C$ as the quotient vector space of ${\mathbb C}^n\times{\mathbb C}^d$ via the equivalence relation \eq{eqrel} (z,\xi)\sim (z+\lambda,\rho(\lambda,z)\xi), \quad z\in{\mathbb C}^n,\ \xi\in{\mathbb C}^d,\ \lambda\in\Lambda. \end{equation} We denote the projection from the cylinder ${\mathcal C}:={\mathbb R}^n/{\mathbb Z}^n+i{\mathbb R}^n={\mathbb C}^n/{\mathbb Z}^n$ onto $C$ by $\pi_{{\mathcal C}}$. 
Therefore, we can define $\pi_{\mathcal C}^*E$ on the cylinder ${\mathcal C}$ by the equivalence relation \eq{eqrelS} (z,\xi)\sim (z+\lambda,\rho(\lambda,z)\xi), \quad z\in{\mathbb C}^n,\ \xi\in{\mathbb C}^d,\ \lambda\in{\mathbb Z}^n. \end{equation} Of course, the global coordinates $\hat\xi$, $\rho$, and $g_{jk}$ are not uniquely determined by $E$. However, their equivalence classes are determined. Two vector bundles $E,\tilde E$ are isomorphic if their corresponding transition functions $g_{jk},\tilde g_{jk}$ satisfy $\tilde g_{jk}=h_j^{-1}g_{jk}h_k$, where $h_j$ are non-singular holomorphic matrices. Replacing the global coordinates $\hat\xi$ by $\nu\hat\xi$, where $\nu\colon\mathbb C^n\to GL(d,\mathbb C)$ is holomorphic, we can verify that \eq{nuLaz} \nu(\lambda+z)\rho(\lambda, z)\nu(z)^{-1}=:\tilde\rho(\lambda,z) \end{equation} is also a factor of automorphy. Define two factors of automorphy $\rho,\tilde\rho$ to be {\it equivalent} if \re{nuLaz} holds. Therefore, the classification of holomorphic vector bundles is identified with the classification of factors of automorphy. \setcounter{thm}{0}\setcounter{equation}{0} \section{Flat vector bundles} In this section, we will show that the pull-back of a flat vector bundle $E$ on $C$ to the cylinder ${\mathcal C}={\mathbb C}^n/{\mathbb Z}^n$ is always trivial. When $E$ is flat, we can choose global coordinates as follows. We know that $\pi^*E$ is also flat, and we can choose a global flat basis, or global flat coordinates $\hat\xi$, by analytic continuation on ${\mathbb C}^n$ and pulling back flat local coordinates of $E$. In other words, in \re{hxixi} the $h_j$ are locally constant while the $\xi_j$ are locally flat coordinates. Then $\rho(\lambda,z)$ depends only on $\lambda$, in which case we write $\rho(\lambda)$ for $\rho(\lambda,z)$. As remarked above, $\rho(\Lambda)$ is abelian. When $E$ is unitary (flat), by the same reasoning $\pi^*E$ is unitary and we can choose $h_j$ and $\rho(\lambda)$ to be unitary.
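For rank one, this description is particularly transparent; the following standard special case may help fix ideas. When $d=1$ and $E$ is flat, the factor of automorphy is a map $\rho\colon\Lambda\to{\mathbb C}^*$, and \re{abelP} reduces to
$$
\rho(\lambda+\mu)=\rho(\lambda)\rho(\mu),\quad \lambda,\mu\in\Lambda,
$$
so a flat line bundle on $C$ is determined by a character of $\Lambda$; the bundle is unitary flat precisely when $|\rho(\lambda)|=1$ for all $\lambda\in\Lambda$.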
A $d\times d$ Jordan block $J_d(\lambda)$ is a matrix of the form $\lambda I_d+N_d$, where $I_d$ is the $d\times d$ identity matrix and $N_d$ is the $d\times d$ matrix with all entries being $0$, except the $(i,i+1)$-th entries, which are $1$. A matrix $T$ commutes with $J:=J_d(\lambda)$ if and only if $$ T=T_d(a):=a_0I_d+\sum_{i>0} a_i N_d^i. $$ Note that $N_d^i=0$ for $i\geq d$. Following \cite[p.~218]{gantmacher1}, we call the above $T$, as well as the following two types of matrices, {\it regular upper triangular} matrices w.r.t. $J$: $$ A=(0,T_d(a)), \quad\text{or} \quad B=\binom{T_{d'}(a')}{0}, $$ where $0$ denotes in $A$ (resp. $B$) a $0$ matrix of $d$ rows (resp. $d'$ columns). Given a Jordan matrix $$ \tilde J=\operatorname{diag}(J_{d_1}(\lambda_1),\dots, J_{d_k}(\lambda_k)), $$ the matrices that commute with $\tilde J$ are precisely the block matrices $$ X= (X_{\alpha\beta})_{1\leq\alpha,\beta\leq k} $$ where $X_{\alpha\beta}=0$ if $\lambda_\alpha\neq\lambda_\beta$, while $X_{\alpha\beta}$ is a regular upper triangular $(d_\alpha\times d_\beta)$ matrix if $\lambda_\alpha=\lambda_\beta$. Such a matrix $X$ is said to be a {\it regular upper triangular matrix} w.r.t. $\tilde J$. From the structure of matrices commuting with a Jordan matrix, we can verify the following two results. \begin{prop}Let $A_1,\dots, A_m$ be $2\times 2$ matrices commuting pairwise. Then there is a non-singular matrix $S$ such that all $S^{-1}A_jS$ are Jordan matrices. \end{prop} \begin{exmp} The $3\times 3$ matrices $\lambda I_3+N_3, \mu I_3+N_3^2$ commute, but they cannot be transformed into Jordan normal form simultaneously. \end{exmp} The following results on logarithms are likely classical. However, we could not find a reference. Therefore, we give proofs emphasizing the commutativity of logarithms of matrices. We start with the following. \begin{lemma}\label{up-t} Let $A_1,\dots, A_m$ be pairwise commuting matrices.
Then there is a non-singular matrix $S$ such that $$ S^{-1}A_jS=(\hat A_{\alpha\beta})_{1\leq\alpha,\beta\leq s}=:\hat A_j, \quad 1\leq j\leq m $$ where $\hat A_1$ is a Jordan matrix and all $\hat A_j$ are upper triangular matrices. \end{lemma} \begin{proof} We may assume that $A_1$ is a Jordan matrix $J=\operatorname{diag}(J_{d_1}(\lambda_1),\dots, J_{d_s}(\lambda_s))$. Then $$ A_j=(X^j_{\alpha\beta})_{1\leq\alpha,\beta\leq s} $$ are regular w.r.t. $J$. Note that pairwise commuting non-singular matrices have a non-trivial common eigenspace. The eigenspace of $J$ is spanned by $e_{d'_1}, \dots, e_{d'_s}$, where $d'_1=1$ and $d'_j=d_1+\cdots+d_{j-1}+1$. To simplify the indices, we may assume that $e_1$ is an eigenvector of all $A_j$. Then the first column of $A_j$ is $a_je_1$ with $a_j\neq0$. The new matrices $\tilde A_j$, obtained by removing the first row and the first column, still commute pairwise. In particular, all $\tilde A_j$ are regular w.r.t. the new Jordan matrix $\tilde J=\tilde A_1$. By induction on $d$, we can find a non-singular matrix $\tilde S$, regular w.r.t. $\tilde J$, such that all $\tilde S^{-1}\tilde A_j\tilde S$ are upper triangular. Let $S=\operatorname{diag}(1,\tilde S)$. Now $S^{-1}=\operatorname{diag}(1,\tilde S^{-1})$. We can check that all $\hat A_j:=S^{-1}A_jS$ are upper triangular. Then $\hat A_1$ and $J$ have the same entries, with only one possible exception: $$\hat A_{1;12}=s_{11}J_{12}. $$ If $\hat A_{1;12}\neq 0$, dilating the first coordinate transforms $\hat A_{1}$ into the original $J$, while the $\hat A_j$ remain upper triangular. If $s_{11}J_{12}=0$, then $J_{12}$ must be $0$, i.e. $\hat A_1=J$, because one cannot transform a Jordan matrix, $A_1=J$, into a new Jordan matrix, $\hat A_1$, by reducing an entry $1$ to $0$ while keeping the other entries unchanged. \end{proof} The above simultaneous normalization of upper triangular matrices allows us to define the logarithms.
The construction of the logarithm of a non-singular matrix can be found in \cite[p.~239]{gantmacher1}. Here we need a definition suitable for establishing the commutativity of the logarithms of pairwise commuting non-singular matrices. Recall that for a $d\times d$ matrix $A$, the generalized eigenspace $E_\lambda (A)$ with eigenvalue $\lambda$ is the kernel of $(A-\lambda I)^d$, while ${\mathbb C}^d$ is the direct sum of all $E_\lambda (A)$. A matrix $B$ that commutes with $A$ leaves each $E_\lambda (A)$ invariant, i.e. $B(E_{\lambda}(A))\subset E_{\lambda}(A)$. Thus, if $A_1,\dots, A_m$ commute pairwise, we can decompose ${\mathbb C}^d$ as a direct sum of linear subspaces $V_j$ such that each $V_j$ is invariant under each $A_i$, and each $A_i$ has exactly one eigenvalue on $V_j$. Thus, to define $\ln A_j$, we may assume when convenient that each $A_j$ has a single eigenvalue on ${\mathbb C}^d$. Given a non-singular matrix $A$, a logarithm of $A$ is a matrix $\ln A$ satisfying $$ e^{\ln A}=A, $$ where the exponential matrix $e^B=\sum\frac{B^n}{n!}$ is always well-defined. However, $\ln A$ is not unique. For a non-singular upper triangular matrix \begin{equation}\label{defLn0} A=\lambda I_d+ a, \quad a:=(a_{ij})_{1\leq i,j\leq d},\quad a_{ij}=0\quad \forall i\geq j, \end{equation} we have $a^k=0$ for $k\geq d$. Using the identity $e^{B+C}=e^{B}e^C$ for two commuting matrices $B,C$, we see that $e^{\ln A}=A$ for \begin{equation}\label{defLn0+} \ln A:=(\ln \lambda)I_d-\sum_{k>0}\f{(-\lambda^{-1} a)^k}{k} \end{equation} with $0\leq \operatorname{Im} \ln(.) <2\pi$. For a non-singular Jordan matrix, we can define $$ \ln\operatorname{diag}(J_{d_1}(\lambda_1),\dots, J_{d_m}(\lambda_m))=\operatorname{diag}(\ln J_{d_1}(\lambda_1),\dots, \ln J_{d_m}(\lambda_m)).$$ Note that $\ln\lambda_\alpha=\ln\lambda_\beta$ if and only if $\lambda_\alpha=\lambda_\beta$. The set of matrices that commute with a fixed matrix is closed under multiplication by scalars, addition, and multiplication.
It is thus clear that if $A$ is an upper triangular matrix that is regular w.r.t. a non-singular Jordan matrix $J=\operatorname{diag}(J_{d_1}(\lambda_1),\dots, J_{d_s}(\lambda_s))$, then $\ln A$ remains regular w.r.t. $J$. Equivalently and more importantly, $\ln A$ is regular w.r.t. the Jordan normal form of $\ln J$, which is $$ \operatorname{diag}(J_{d_1}(\ln\lambda_1),\dots, J_{d_s}(\ln\lambda_s)). $$ \begin{prop}\label{defLnA} Let $A_1,\dots, A_m$ be pairwise commuting $d\times d$ matrices. Then there is a non-singular matrix $S$ such that $\hat A_j:=S^{-1}A_jS$ are block diagonal matrices of the form \begin{equation}\label{hataj} \hat A_j=\operatorname{diag}(\hat A_{j,d_1},\dots, \hat A_{j,d_k}) \end{equation} where all $\hat A_{j,d_i}$ are upper triangular $d_i\times d_i$ matrices, and each $\hat A_{j,d_i}$ has only one eigenvalue $\lambda_{j,i}$. Assume further that all $A_j$ are non-singular. Then \begin{equation}\label{defLn1} \ln A_j:=S\operatorname{diag}(\ln \hat A_{j,d_1}, \dots, \ln \hat A_{j,d_k})S^{-1}, \quad 1\leq j\leq m \end{equation} commute pairwise and $e^{\ln A_j}=A_j$, where $\ln\hat A_{j,d_i}$ are defined by \rea{defLn0}-\rea{defLn0+}. \end{prop} \begin{proof}Note that for pairwise commuting matrices $A_1,\dots, A_m$, we have a decomposition $ {\mathbb C}^d=\bigoplus_{i=1}^kV_i $ where each $A_j$ preserves $V_i$ and has only one eigenvalue $\lambda_{j,i}$ on it. The restrictions of $A_1,\dots, A_m$ to $V_i$ still commute pairwise. Let $\dim V_i=d_i$, $d_0'=0$, and $d_{i+1}'-d_i'=d_i$. By \rl{up-t}, we can find a basis $$e^*_{d'_{i-1}+1}, \dots, e^*_{d_{i-1}'+d_{i}}$$ for $V_i$ such that $A_1|_{V_i}, \dots, A_m|_{V_i}$ are upper triangular matrices. Using the new basis $e_1^*,\dots, e_d^*$, we can find the matrix $S$ for the decomposition \re{hataj}. Therefore, $ \hat A_{1, d_i},\dots, \hat A_{m,d_i}$ commute pairwise. Assume now that all $\lambda_{j,i}$ are non-zero. It remains to show that $\ln \hat A_{1, d_i},\dots, \ln \hat A_{m,d_i}$ commute pairwise.
Write $$ \hat A_{j,i}=\lambda_{j,i}I_{d_i}+W_{j,i} $$ where $W_{j,i}$ are upper triangular matrices and $W_{j,i}^{d_i}=0$. By a straightforward computation, we have $$ [W_{j,i}, W_{j',i}]=[\hat A_{j,i},\hat A_{j',i}]=0. $$ Therefore, $W_{j,i}^k$ commutes with $W_{j',i}^\ell$ for any $k,\ell$. Consequently, the finite sum $$ \ln \hat A_{j,i}=(\ln\lambda_{j,i})I_{d_i}-\sum_{k>0}\frac{(-\lambda_{j,i}^{-1}W_{j,i})^k}{k} $$ commutes with $\ln\hat A_{j',i}$. Therefore, $\ln A_1,\dots, \ln A_m$ in \re{defLn1} commute pairwise. \end{proof} \begin{lemma}\label{muit-flows} Let $A_1,\dots, A_n$ be non-singular upper triangular $d\times d$ matrices. Suppose that $A_1,\dots, A_n$ commute pairwise. There exists a linear mapping $w\to\tilde v^z(w):=v(z)w$ in ${\mathbb C}^d$, entire in $z\in{\mathbb C}^n$, such that $v(z)\in GL_d({\mathbb C})$, $v(0)=Id$ and $v(e_j)=A_j$ for $j=1,\ldots,n$. Furthermore, $v({z+z'})=v(z) v({z'})$ for all $z,z'\in{\mathbb C}^n$. \end{lemma} \proof{By \rp{defLnA}, we define pairwise commuting matrices $\ln A_1,\dots, \ln A_n$ such that $e^{\ln A_j}=A_j$. Then the Lie brackets of the linear vector fields $\dot w=\ln A_jw$, $j=1,\dots,n$, vanish and their flows $\varphi_j^t(w)$ commute pairwise. Note that $$ \varphi_j^0(w)=w, \quad \varphi_j^1(w)=e^{\ln A_j}w=A_jw. $$ Define $$ \tilde v^z(w)=\varphi_1^{z_1}\cdots\varphi_n^{z_n}(w). $$ We conclude $\tilde v^{z}\tilde v^{z'}(w)=\tilde v^{z+z'}(w)$, that is, $v(z+z')=v(z)v({z'})$. \qedhere } By \rp{defLnA}, we define $\ln\rho(e_1), \dots, \ln\rho(e_{2n})$, and they commute pairwise. We now define $\ln\rho(\lambda)$ for all $\lambda=\sum_{j=1}^{2n}m_je_j\in \Lambda$ as follows: $$ \ln\rho\left(\sum_{j=1}^{2n}m_je_j\right):=\sum_{j=1}^{2n} m_j\ln\rho(e_j). $$ Thus the matrices $\ln\rho(\lambda)$ for $\lambda\in\Lambda$ commute pairwise. \begin{prop}\label{pi_SEtrivial} Let $E$ be a flat vector bundle on $C$.
Then $\pi_{{\mathcal C}}^*E$ admits a factor of automorphy $\rho$ satisfying $\rho(e_j)=Id$ for $j=1,\dots, n$; in particular, $\pi_{\mathcal C}^*E$ is holomorphically trivial. \end{prop} \proof{ Let $d$ be the rank of $E$. Let $A_j=\rho(e_j)^{-1}$ for $j=1,\dots, n$. With $v(e_j)=A_j$ and $v(0)=Id_d$, we first see that $$ \tilde\rho(\lambda,z):=v(z+\lambda)\rho(\lambda)v(z)^{-1} $$ satisfies $\tilde\rho(e_j,0)=Id_d$. We want to show that $\tilde\rho(\lambda,z)$ depends only on $\lambda$. Fix $\lambda=\sum_{j=1}^{2n}m_je_j\in \Lambda$. By definition, the matrix $\ln\rho(\lambda)$ commutes with each $\ln\rho(e_j)$, $j=1,\ldots, 2n$. Thus the flow $\varphi_\lambda^t$ of $\dot w= \ln\rho(\lambda)w$ commutes with the flows of $\dot w= \ln\rho(e_j)w$, $j=1,\ldots, 2n$. As in the proof of the previous lemma, we know that $\varphi_\lambda^t(w)$ is linear in $w$ and entire in $t\in{\mathbb C}$. For $z\in {\mathbb C}^n$ and $w\in {\mathbb C}^d$, let $\tilde v^z(w)$ be as defined in the previous lemma. Thus we have $$ \varphi_\lambda^t \tilde v^z(w)=\tilde v^z\varphi_\lambda^t(w). $$ Taking derivatives in $w$ and plugging in $t=1$, we get $$ \exp(\ln\rho(\lambda))v(z)=v(z)\exp(\ln\rho(\lambda)). $$ Since $\exp(\ln\rho(\lambda))=\rho(\lambda)$, we have $\rho(\lambda)v(z)=v(z)\rho(\lambda)$ for all $\lambda\in\Lambda$, $z\in {\mathbb C}^n$. Writing $z=z_1e_1+\cdots +z_ne_n\in {\mathbb C}^n$, we have $v(z+\lambda)=v(z)v(\lambda)$ by \rl{muit-flows}. Hence, $\tilde\rho(\lambda, z)$ is independent of $z$. We have achieved $\tilde\rho(\lambda)=v(z+\lambda)\rho(\lambda)v(z)^{-1}$ and $\tilde\rho(e_j)=Id_d$ for $j=1,\dots, n$. Therefore, $\pi_{\mathcal C}^*E$ is trivial, by the equivalence relation \re{eqrelS}. \qedhere} It is known that there are Stein manifolds with non-trivial vector bundles \cite{forster-rammspott}. We conclude the section by emphasizing that the triviality of $\pi_{\mathcal C}^*E$ relies on the extra assumption that it is a pull-back bundle.
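To illustrate the proof of \rp{pi_SEtrivial} in the simplest situation, consider the rank-one case $d=1$, where each $\rho(e_j)=c_j\in{\mathbb C}^*$ for $j=1,\dots,n$. Then one may take $v(z)=\exp(-\sum_{j=1}^n z_j\ln c_j)$, which gives
$$
\tilde\rho(e_j,z)=v(z+e_j)\,c_j\,v(z)^{-1}=e^{-\ln c_j}c_j=1, \quad j=1,\dots, n,
$$
so the factor of automorphy of $\pi_{\mathcal C}^*E$ is trivialized on the generators $e_1,\dots,e_n$ of ${\mathbb Z}^n$, as claimed.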
The following result is likely known, but we include a short proof for completeness. \begin{prop}The set of holomorphic equivalence classes of flat holomorphic line bundles on ${\mathcal C}$ can be identified with $H^1({\mathcal C},\mathbb C^*)$. The latter is non-trivial. \end{prop} \begin{proof} Each element $\{c_{jk}\}$ in $H^1({\mathcal C},\mathbb C^*)$ is clearly an element in $H^1({\mathcal C},\mathcal O^*)$. We want to show that if $\{c_{jk}\},\{\tilde c_{jk}\}\in H^1({\mathcal C},\mathbb C^*)$ represent the same element in $H^1({\mathcal C}, \mathcal O^*)$, then they are also the same in $H^1({\mathcal C},\mathbb C^*)$. Indeed, we can cover ${\mathcal C}$ by convex open sets $U_1,U_2,U_3,U_4$ such that $U_1\cap U_2\cap U_3\cap U_4$ is non-empty. Thus $\{U_i\}$ is a Leray covering. Suppose that $\tilde c_{jk}=h_jc_{jk}h_k^{-1}$, where each $h_j$ is a non-vanishing holomorphic function on $U_j$. Take a point $p$ in $U_1\cap U_2\cap U_3\cap U_4$. Since $h_jh_k^{-1}=\tilde c_{jk}c_{jk}^{-1}$ is constant on the connected intersection $U_j\cap U_k$, evaluating at $p$ we get $\tilde c_{jk}=c_jc_{jk}c_k^{-1}$ for $c_j=h_j(p)$. Note that $H^1({\mathcal C},\mathbb C^*)$ is non-trivial. Otherwise, the exact sequences $0\to\mathbb Z\to\mathbb C\to\mathbb C^*\to0$ and $0\to H^0({\mathcal C},\mathbb Z)\to H^0({\mathcal C},\mathbb C)\to H^0({\mathcal C},\mathbb C^*)\to 0$ would imply that $H^1({\mathcal C}, {\mathbb Z}) \cong H^1({\mathcal C},{\mathbb C})$, a contradiction. \end{proof} \setcounter{thm}{0}\setcounter{equation}{0} \section{Equivalence of neighborhoods and commuting deck transformations} In this section, we will discuss how the classification of neighborhoods $U$ of a compact complex manifold $C$ is related to the classification of deck transformations of a holomorphic covering $\tilde U\to U$, where $U,\tilde U$ are chosen carefully and $\tilde U$ contains $C^*$, which covers $C$. When $C^*$ is additionally Stein, we can choose $\tilde U$ to be a neighborhood of $C^*$ in its normal bundle $N_{C^*}(\tilde U)$ by applying a result of Siu.
After preliminary results in \rl{deck-tran} and \rl{CinN}, we will return to our previous setting, where $C$ is a complex torus, the covering $C^*=\tilde C$ of $C$ is a Stein manifold, and $N_C(M)$ is Hermitian flat. We then prove the main result of this paper by using a KAM rapid iteration scheme. Let us start with $\iota: C\hookrightarrow M$, a holomorphic embedding of a compact complex manifold $C$. We shall still denote $\iota(C)$ by $C$. Let $U$ be a neighborhood of $C$ in $M$ such that $U$ admits a smooth, possibly non-holomorphic, \emph{strong deformation retraction} onto $C$~\cite[p.~361]{MR3728284}; namely, there is a smooth mapping $R\colon U\times [0,1]\to U$ such that $R(\cdot,0)=Id$ on $U$, $R(\cdot,t)=Id$ on $C$, and $R(\cdot,1)(U)=C$. Thus, $\pi_1(U,x_0)=\pi_1(C,x_0)$ for $x_0\in C$ (see~\cite[p.~361]{MR3728284}). When $M$ is $N_C$, we can find a {\it holomorphic} deformation retraction from a suitable neighborhood of its zero section onto $C$, by using a Hermitian metric on $N_C$. Let $X$ be a complex manifold and $\mathcal X$ be a universal covering of $X$. Then the group of deck transformations of the covering is identified with $\pi_1(X, x_0)$. The set of equivalence classes of coverings of $X$ is identified with the set of conjugacy classes of subgroups of $\pi_1(X, x_0)$; see~\cite[Thm.~79.4, p.~492]{MR3728284}. Furthermore, $\pi_1(X, x_0)$ acts transitively and freely on each fiber of the covering and $X$ is the quotient of $\mathcal X$ by $\pi_1(X, x_0)$. \begin{lemma}\label{deck-tran}Let $C$ be a compact complex manifold. Let $\pi\colon C^*\to C$ be a holomorphic covering and $\pi(x_0^*)=x_0$. Suppose that $(M,C)$ (resp. $(M', C)$) is a holomorphic neighborhood of $C$. There is a neighborhood $U$ in $M$ (resp. $U'$ in $M'$) of $C$ and a holomorphic neighborhood $\tilde U$ $($resp.
$\widetilde{U'})$ of $C^*$ such that $p\colon\tilde U\to U$ $($resp. $p\colon\widetilde{U'}\to U')$ is an extended covering of the covering $\pi\colon C^*\to C$ and $C$ $($resp. $C^*)$ is a smooth strong retract of $U,U'$ $($resp. $\tilde U,\widetilde{U'})$. Consequently, $$ \pi_1(\tilde U,x_0^*)=\pi_1(C^*,x_0^*), \quad \pi_1(U,x_0)=\pi_1(C,x_0). $$ Suppose that $(M,C)$ is biholomorphic to $(M',C)$. Then $U,U', \tilde U,\widetilde{U'}$ can be so chosen that there is a covering transformation sending $\tilde U$ onto $\widetilde{U'}$ and fixing $C^*$ pointwise. Conversely, if there is a covering transformation sending $\tilde U$ onto $\widetilde{U'}$ fixing $C^*$ pointwise, then $(U,C),(U',C)$ are holomorphically equivalent. \end{lemma} \begin{proof} Since $\pi\colon C^*\to C$ is a covering map, according to \cite[Thm.~4.9]{vick-book}, it extends to a covering map $p:\tilde U \to U$ such that $\tilde U$ contains $C^*$ and $p|_{C^*}=\pi$. Suppose that $R$ is a strong retraction of $U$ onto $C$. We can lift $R(z,\cdot)\colon[0,1]\to U$ to a continuous mapping $\tilde R(\tilde z,\cdot)\colon[0,1] \to \tilde U$ such that $\tilde R(\tilde z,0)=\tilde z$ and $p\tilde R(\tilde z,\cdot)=R(p(\tilde z),\cdot)$ for all $\tilde z\in\tilde U$. One can verify that $\tilde R$ is a strong retraction of $\tilde U$ onto $C^*$. Suppose that a biholomorphic map $f$ sends $(M,C)$ onto $(M',C)$ fixing $C$ pointwise. We may assume that $f$ is a biholomorphic mapping from $U$ onto $U'$. Then we can lift the mapping $fp\colon \tilde U\to { U'}$ to obtain a desired covering biholomorphism $F$, since $(fp)_*\pi_1(\tilde U,x_0^*)=\pi_*\pi_1(C^*,x_0^*)$. Conversely, a covering biholomorphism from $\tilde{U}$ onto $\widetilde{U'}$ fixing $C^*$ pointwise clearly induces a biholomorphism from $U$ onto $U'$ fixing $C$ pointwise.
\end{proof} With the covering, we can identify $(M,C)$ with $(\tilde M, C^*)/{\,\sim}$, where $\tilde p\sim p$ if and only if $p,\tilde p$ are in the same fiber of the covering. Applying the above to $(N_C,C)$ and a covering $\pi|_{C^*}\colon C^*\to C$, we have a covering $\hat\pi\colon \widetilde{N_C}\to N_C$ such that $$ C^*\subset\widetilde{N_C}, \quad \pi_1(\widetilde{N_C},x_0^*)=\pi_1(C^*,x_0^*), \quad \pi_1(N_C,x_0)=\pi_1(C,x_0). $$ To simplify the notation, we denote $U,\tilde U$ by $M,\tilde M$ respectively. Thus we have the commuting diagram for the coverings: $$ \begin{matrix} \widetilde{N_C} & \hookleftarrow& C^* & \hookrightarrow & {\tilde M}\\ \hat \pi\downarrow & &\pi_{C^*}\downarrow& & p\downarrow \\ N_C& \hookleftarrow & C & \hookrightarrow & M \end{matrix}. $$ The set of deck transformations of $p$ (resp. $\hat\pi$) will be denoted by $\{\tau_1,\dots, \tau_n\}$ (resp. $\{\hat\tau_1,\dots, \hat\tau_n\}$). If $\pi\colon \tilde U\to U$ is a covering map, $Deck(\tilde U)$ denotes the group of deck transformations. \begin{lemma}\label{CinN} Let $C, C^*, M$ be as in \rla{deck-tran}. Suppose that $C^*$ is a Stein manifold. Let $\omega_0^*$ be an open set of $\widetilde{N_C}$ such that $\hat\pi(\omega_0^*)$ contains $C$. Then there is an open subset $\omega^*$ of $\omega^*_0$ such that $\hat\pi(\omega^*)$ contains $C$ and $(M,C)$ is holomorphically equivalent to the quotient space of $\omega^*$ by $Deck( \widetilde{N_C})$. \end{lemma} \begin{proof} By a result of Siu \cite[Cor.~1]{siu-stein}, we find a biholomorphism $L$ from a holomorphic strong retraction neighborhood of $C^*$ in $\tilde M$, still denoted by $\tilde M$, into $N_{C^*}(\tilde M)$, and a biholomorphism $L'$ from a strong retraction neighborhood of $C^*$ in $\widetilde{N_C}$, still denoted by $\widetilde{ N_C}$, into $N_{C^*}(\widetilde{N_C})$. Furthermore, $L,L'$ fix $C^*$ pointwise.
We have \begin{align*} p_*\pi_1(N_{C^*}(\tilde M),x_0^*)&=p_*L^{-1}_*\pi_1(\tilde M,x_0^*)=\pi_1(C,x_0)=\pi_1(L^{-1}\tilde M,x_0^*)\\ &=\hat\pi_*(L')^{-1}_*(\pi_1(\widetilde{N_C},x_0^*)). \end{align*} Both $\hat\pi\circ L'^{-1}\colon N_{C^*}(\widetilde{N_C})\to M$ and $p\circ L^{-1}\colon N_{C^*}( \tilde M)\to M$ are coverings, and the above identifications show that the lifts of the two coverings yield a biholomorphism between neighborhoods of $C^*$ in $\tilde M$ and $N_{C^*}(\widetilde{N_C})$ fixing $C^*$ pointwise. \end{proof} Here, $z=(z_1,\ldots,z_n)$ belongs to the fundamental domain $$ \omega_0=\left\{\sum_{j=1}^{2n} t_je_j\in{\mathbb C}^n\colon t\in[0,1)^{2n}\right\},\quad e_{n+i}:=\tau_i,\quad i=1,\ldots,n. $$ Thus $h$ belongs to the fundamental domain $\Omega_0$ defined by $$ {\Omega }_0:=\{(e^{2\pi i\zeta_1}, \dots, e^{2\pi i\zeta_n})\colon\zeta\in \omega_0\},\quad { \Omega }_0^+=\{(|z_1|,\dots, |z_n|)\colon z\in {\Omega }_0\}. $$ Thus ${\Omega }_0$ is a Reinhardt domain, being $\{(\nu_1R_1,\dots, \nu_nR_n)\colon |\nu_j|=1,\ R\in \Omega _0^+\}$. We have $$ {\Omega }_0^+=\left\{(e^{-2\pi R_1},\dots, e^{-2\pi R_n})\colon R=\sum_{i=1}^n t_i\operatorname{Im}\tau_i,\ t\in[0,1)^n\right\}. $$ For $\epsilon>0$, define a (Reinhardt) neighborhood ${\Omega }_\epsilon$ of $\overline{{\Omega }_0}$ by \begin{gather*} \omega_\epsilon:=\left\{\sum_{j=1}^{2n} t_je_j\colon t\in[0,1)^n\times (-\epsilon,1+\epsilon)^{n}\right\}, \quad {\Omega }_\epsilon:=\{(e^{2\pi i\zeta_1}, \dots, e^{2\pi i\zeta_n})\colon\zeta\in \omega_\epsilon\}.
\end{gather*} For any $\ell$-tuple of indices $j_1,\dots, j_\ell$ in $\{1,\ldots,n\}$, we set \begin{gather} \omega_\epsilon^{j_1\dots j_\ell}:= \left\{\sum_{j=1}^{2n} t_je_j\in\omega_\epsilon\colon t_{k}\in(-\epsilon,\epsilon), k-n\in \{j_1,\dots, j_\ell\}; t_{k}\in(-\epsilon,1+\epsilon), k>n, k-n\not\in \{j_1,\dots, j_\ell\} \right\}, \\ \tilde \omega_\epsilon^{j_1\dots j_\ell}:= \left\{\sum_{j=1}^{2n} t_je_j\in\omega_\epsilon\colon t_{k}-1\in(-\epsilon,\epsilon), k-n\in \{j_1,\dots, j_\ell\}; t_{k}\in(-\epsilon,\epsilon), k>n, k-n\not\in \{j_1,\dots, j_\ell\} \right\}. \end{gather} Note that $\tilde \omega_\epsilon^{j_1\dots j_\ell}$ and $ \omega_\epsilon^{j_1\dots j_\ell}$ are subsets of $\omega_\epsilon$, and $\omega_\epsilon^{1\dots n}= \{\sum_{j=1}^{2n} t_je_j\in\omega_\epsilon\colon t\in[0,1)^n\times(-\epsilon,\epsilon)^n\}$. Then \begin{gather} \Omega_\epsilon^{j_1\dots j_\ell}:=\Omega_\epsilon\cap\bigcap_{k=1}^\ell T_{j_k}^{-1}\Omega_\epsilon=\{(e^{2\pi i\zeta_1}, \dots, e^{2\pi i\zeta_n})\colon\zeta\in \omega^{j_1\dots j_\ell} _\epsilon\}, \\ \tilde\Omega_\epsilon^{j_1\dots j_\ell}:=\Omega_\epsilon\cap\bigcap_{k=1}^\ell T_{j_k}\Omega_\epsilon=\{(e^{2\pi i\zeta_1}, \dots, e^{2\pi i\zeta_n})\colon\zeta\in \tilde\omega^{j_1\dots j_\ell}_\epsilon\} \end{gather} are connected non-empty Reinhardt domains. Moreover, $\Omega_0^{1\cdots n}:=\cap_{\epsilon>0} \Omega_\epsilon^{1\cdots n}$ and $ \tilde\Omega_0^{1\cdots n}:=\cap_{\epsilon>0}\tilde\Omega_\epsilon^{1\cdots n} $ are diffeomorphic to the real torus $(S^1)^n$. We remark that $ T_iT_j$ maps $\Omega^{ij}_\epsilon$ into $\Omega_\epsilon$ for $i\neq j$, while $T_i\circ T_i$ does not map $\Omega_{\epsilon}^{ii}$ into $\Omega_\epsilon$. With $\Delta_r=\{z\in{\mathbb C}\colon|z|<r\}$, we also define \begin{equation}\label{domains} \omega_{\epsilon,r}:= \omega_\epsilon\times \Delta^d_r, \quad \Omega_{\epsilon,r}:=\Omega _\epsilon\times \Delta^d_r.
\end{equation} Throughout the paper, a mapping $(z',v')=\psi^0(z,v)$ from $\omega_{\epsilon,r}$ into ${\mathbb C}^{n+d}$ that commutes with $z_j\to z_j+1$ for $j=1,\dots, n$ will be identified with a well-defined mapping $(h',v')=\psi(h,v)$ from $\Omega_{\epsilon,r}$ into ${\mathbb C}^{n+d}$, where $z,h$ and $z',h'$ are related as in \re{hTj}. A function on $\omega_{\epsilon,r}$ that has period $1$ in all $z_j$ is identified with a function on $\Omega_{\epsilon,r}$. We shall use these identifications freely. \begin{prop}Let $C$ be the complex torus and $\pi_{{\mathcal C}}\colon{\mathcal C}={\mathbb C}^n/{{\mathbb Z}^n}\to C$ be the covering. Let $(M,C)$ be a neighborhood of $C$. Assume that $N_C$ is flat. \begin{list}[align=left]{$(\roman{ppp})$}{\usecounter{ppp}} \item Then one can take $\omega_{\epsilon_0,r_0}=\omega_{\epsilon_0}\times \Delta_{r_0}^d$ such that $(M,C)$ is biholomorphic to the quotient of $\omega_{\epsilon_0,r_0}$ by $\tau^0_1,\dots,\tau^0_n$. Let $\tau_j$ be the mapping defined on $\Omega_{\epsilon_0,r_0}$ corresponding to $\tau_j^0$. Then $\tau_1,\dots, \tau_n$ commute pairwise wherever they are defined, i.e. $$ \tau_i\tau_j(h,v)=\tau_j\tau_i(h,v)\quad \forall i\neq j $$ for $(h,v)\in \Omega_{\epsilon_0, r_0}\cap \tau_i^{-1}\Omega_{\epsilon_0,r_0}\cap \tau_j^{-1}\Omega_{\epsilon_0,r_0}$. \item Let $(\tilde M,C)$ be another such neighborhood having the corresponding generators $\tilde\tau_1,\dots,\tilde\tau_n$ of deck transformations defined on $\Omega_{\tilde\epsilon_0,\tilde r_0}$. Then $(M,C)$ and $(\tilde M,C)$ are holomorphically equivalent if and only if there is a biholomorphic mapping $F$ from $\Omega_{\epsilon,r}$ into $\Omega_{\tilde \epsilon,\tilde r}$ for some positive $\epsilon,r,\tilde \epsilon,\tilde r$ such that $$ F \tilde\tau_j(h,v)=\tau_jF(h,v),\ j=1,\ldots, n, $$ wherever both sides are defined, i.e.
$(h,v)\in \Omega_{\tilde\epsilon,\tilde r}\cap \tilde\tau_j^{-1}\Omega_{\epsilon,r}\cap \Omega_{\epsilon,r}\cap F^{-1}\Omega_{\epsilon,r}.$ \end{list} \end{prop} \begin{proof} We now apply \rl{CinN}, in which $C^*$ is replaced by ${\mathcal C}={\mathbb R}^n/{\mathbb Z}^n+i{\mathbb R}^n$, which is a Stein manifold. Assume that $N_C$ is flat. Then according to \rp{pi_SEtrivial}, $N_{{\mathcal C}}(\widetilde{N_C})=N_{{\mathcal C}}(\tilde M)=\pi_{{\mathcal C}}^*(N_C)$ is the trivial vector bundle ${\mathcal C}\times {\mathbb C}^d$ with coordinates $(h,v)$, while ${\mathcal C}\times\{0\}$ is defined by $v=0$. \end{proof} Set \begin{gather*} {\mathcal P}_\epsilon^+:=\left\{ \sum_{i=1}^n t_i\operatorname{Im}\tau_i\colon t\in (-\epsilon,1+\epsilon)^{n}\right\},\\ {\Omega }_\epsilon^+:=\left\{(e^{-2\pi R_1},\dots, e^{-2\pi R_n})\colon R \in {\mathcal P}_\epsilon^+ \right\}. \end{gather*} Note that ${\mathcal P}_\epsilon^+, {\mathcal P}_0^+$ are $n$-dimensional parallelotopes, and $\Omega_\epsilon^+$ contains $(1,\ldots,1)$, the image of $0\in{\mathcal P}_\epsilon^+$, corresponding to the real torus $(S^1)^n$. Since $\Omega _\epsilon$ is Reinhardt, we have \eq{boundary-of-R} \Omega _\epsilon\supset\Omega _\epsilon^+,\quad (\partial \Omega _\epsilon)^+=\partial(\Omega _\epsilon^+), \quad (\partial \Omega _\epsilon)^+:=\{(|h_1|,\dots,|h_n|)\colon h\in \partial \Omega _\epsilon\}. \end{equation} We now apply the above general results to the case where $C$ is a complex torus, $C^*=\tilde C$ and $N_C(M)$ is Hermitian flat.
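Before proceeding, we illustrate the domains just introduced in the special case $n=1$ of an elliptic curve. With lattice generators $e_1=1$ and $e_2=\tau_1$, $\operatorname{Im}\tau_1>0$, the fundamental domain $\Omega_0$ is the annulus
$$
\Omega_0=\left\{h\in{\mathbb C}\colon e^{-2\pi \operatorname{Im}\tau_1}<|h|\leq 1\right\},
$$
since $|e^{2\pi i(t_1+t_2\tau_1)}|=e^{-2\pi t_2\operatorname{Im}\tau_1}$ for $t\in[0,1)^2$, while $\Omega_\epsilon$ is the slightly enlarged annulus $\{e^{-2\pi(1+\epsilon)\operatorname{Im}\tau_1}<|h|<e^{2\pi\epsilon\operatorname{Im}\tau_1}\}$. This is the classical annulus picture for neighborhoods of elliptic curves.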
As in \cite{ilyashenko-pyartly-embed}, the deck transformations of $(\tilde N_C,{\mathcal C})$ are generated by $n$ biholomorphisms $\hat\tau_1,\dots, \hat\tau_n$ that preserve ${\mathcal C}$: \begin{gather} \hat \tau_j(h,v) = (T_jh, M_jv),\quad M_j:=\operatorname{diag}(\mu_{j,1},\dots, \mu_{j,d})\label{tauhat} \end{gather} with $h,T_j$ being defined by \eq{hTj} h=(e^{2\pi i z_1}, \cdots, e^{2\pi iz_n}), \quad T_j:= \operatorname{diag}(\lambda_{j,1},\dots, \lambda_{j,n}),\quad \lambda_{j,k}:=e^{2\pi i\tau_{jk}}. \end{equation} Here the invertible $(d\times d)$-matrix $M_j$ is the factor of automorphy $\rho(e_{n+j})$ of $\pi_{{\mathcal C}}^*N_C$. Recall that each deck transformation $\tau^0_j(z,v)$, $j=1,\ldots, n$, is a holomorphic map defined on $\overline \omega_{\epsilon,r}$. In the coordinates $(h,v)$, $\tau_j^0$ becomes $\tau_j$ defined on $\overline\Omega_{\epsilon,r}$. Since $T_CM$ splits, $\tau_j$ is a higher-order perturbation of $\hat\tau_j$ with $$ \tau_j(h,0)=( T_jh,0). $$ The above computation is based on the assumption that $N_C$ is flat. We now assume that $N_C$ is {\it Hermitian and flat}, i.e. $N_C$ admits locally constant Hermitian transition matrices. Then all $\rho(e_{n+j})$ are Hermitian and constant matrices. Since they pairwise commute, they are simultaneously diagonalizable. Hence, we can assume that $M_j=\operatorname{diag}(\mu_{j,1},\ldots, \mu_{j,d})$. We recall from \re{hTj} that $T_j=\operatorname{diag}(\lambda_{j,1},\dots, \lambda_{j,n})$ are already diagonal. \begin{defn}\label{def-nr} The normal bundle $N_C$ is said to be non-resonant if, for each $(Q,P)\in \mathbb{N}^d\times {\mathbb Z}^n$ with $|Q|>1$, each $i=1,\ldots, n$, and each $j=1,\ldots ,d$, there exist $i_h:=i_h(Q,P,i)$ and $i_v:=i_v(Q,P,j)$ in $ \{1,\ldots, n\}$ such that $$ \lambda_{i_h}^P \mu_{i_h}^Q-\lambda_{i_h,i}\neq 0\quad\text{and} \quad \lambda_{i_v}^P \mu_{i_v}^Q-\mu_{{i_v},j}\neq 0.
$$ \end{defn} \begin{defn}\label{dioph} The pullback bundle $\pi_{{\mathcal C}}^*N_C$, or $N_C$ for short, is said to be {\it Diophantine} if there exist constants $D>0$ and $\tau>0$ such that for all $(Q,P)\in \mathbb{N}^d\times {\mathbb Z}^n$, $|Q|>1$, and all $i=1,\ldots, n$, $j=1,\ldots ,d$, \begin{gather} \max_{\ell\in \{1,\ldots, n\}}\left | \lambda_{\ell}^P \mu_{\ell}^Q -\lambda_{\ell,i} \right| > \frac{D}{(|P|+|Q|)^{\tau}},\label{dh}\\ \max_{\ell\in \{1,\ldots, n\}} \left |\lambda_\ell^P \mu_{\ell}^Q-\mu_{{\ell},j}\right | > \frac{D}{(|P|+|Q|)^{\tau}}.\label{dv} \end{gather} We shall choose $i_h$ (resp. $i_v$) to be the index that realizes the maximum of \rea{dh} (resp. \rea{dv}). \end{defn} \begin{remark} If the right-hand sides are replaced by $0$, then $N_C$ is non-resonant. \end{remark} \begin{prop} The properties of being non-resonant and Diophantine are properties of the (abelian) group and not of the choice of generators. \end{prop} \begin{proof} Recall that $$ \lambda_{\ell}=(\lambda_{\ell,1},\ldots,\lambda_{\ell,n})=(e^{2\pi i\tau_{\ell,1}},\ldots, e^{2\pi i\tau_{\ell,n}}). $$ Let $G$ be the group generated by the $\hat \tau_{\ell}$'s. Then, $\{\tilde \tau_{\ell}\}_{\ell}$ defines another set of generators of $G$ if $\tilde \tau_{\ell}=\hat\tau_{1}^{a_{\ell,1}}\cdots \hat\tau_{n}^{a_{\ell,n}}$, $\ell=1,\ldots, n$, where $A=(a_{i,j})_{1\leq i,j\leq n}\in GL_n({\mathbb Z})$ with $\det A=\pm1$. Then, the eigenvalues of $\tilde \tau_{\ell}$ are $$ \tilde \lambda_{\ell,i}=\prod_{k=1}^n \lambda_{k,i}^{a_{\ell,k}},\quad \tilde \mu_{\ell,j}=\prod_{k=1}^{n} \mu_{k,j}^{a_{\ell,k}}. $$ Hence, we have $$ \tilde \lambda_{\ell}^P\tilde\mu_{\ell}^Q\tilde \lambda_{\ell,i}^{-1}= (\lambda_{1}^P\mu_1^Q\lambda_{1,i}^{-1})^{a_{\ell,1}}\cdots (\lambda_{n}^P\mu_n^Q\lambda_{n,i}^{-1})^{a_{\ell,n}}. $$ Fix $P,Q$ and $i$.
Taking the logarithm, we have as $n$-vectors \begin{equation}\label{log} \left(\ln \tilde \lambda_{\ell}^P\tilde\mu_{\ell}^Q\tilde \lambda_{\ell,i}^{-1}\right)_{\ell=1,\dots, n}= A \left(\ln \lambda_{\ell}^P\mu_{\ell}^Q\lambda_{\ell,i}^{-1}\right)_{\ell=1,\dots, n}\mod 2\pi i. \end{equation} Since $A,A^{-1} \in GL_n({\mathbb Z})$, given $P, Q$ and $i$, $$\left(\ln \tilde \lambda_{\ell}^P\tilde\mu_{\ell}^Q\tilde \lambda_{\ell,i}^{-1}\right)_{\ell=1,\dots, n}=0\mod 2\pi i$$ iff $\left(\ln \lambda_{\ell}^P\mu_{\ell}^Q\lambda_{\ell,i}^{-1}\right)_{\ell=1,\dots, n}=0\mod 2\pi i$. Similarly, by considering $\ln \tilde\lambda_{\ell}^P\tilde\mu_{\ell}^Q\tilde \mu_{\ell,i}^{-1}$, we obtain that the non-resonance condition does not depend on the choice of generators. Given $P,Q,i$, if one of the $\lambda_{\ell}^P\mu_{\ell}^Q\lambda_{\ell,i}^{-1}$'s is not close to $1$, then $\left\|(\ln \tilde \lambda_{\ell}^P\tilde \mu_{\ell}^Q\tilde\lambda_{\ell,i}^{-1})_{\ell}\right\|$ is bounded away from $0$. On the other hand, if all $\lambda_{\ell}^P\mu_{\ell}^Q\lambda_{\ell,i}^{-1}$'s are close to $1$, then $|\ln \lambda_{\ell}^P\mu_{\ell}^Q\lambda_{\ell,i}^{-1}|$ (with $\operatorname{Im}\ln$ in $(-\pi,\pi]$) is comparable to $|\lambda_{\ell}^P\mu_{\ell}^Q\lambda_{\ell,i}^{-1}-1|$. Furthermore, taking the modulus in \re{log}, we obtain $$ \|A^{-1}\|^{-1}\left\|(\ln \lambda_{\ell}^P\mu_{\ell}^Q\lambda_{\ell,i}^{-1})_{\ell}\right\|\leq \left\|(\ln \tilde \lambda_{\ell}^P\tilde \mu_{\ell}^Q\tilde\lambda_{\ell,i}^{-1})_{\ell}\right\|\leq \|A\|\left\|(\ln \lambda_{\ell}^P\mu_{\ell}^Q\lambda_{\ell,i}^{-1})_{\ell}\right\|, $$ where $\|(a_{\ell})_{\ell}\|=\max_{\ell}|a_{\ell}|$. If the latter is bounded below by $\frac{C}{(|P|+|Q|)^{\tau}}$, so is $\left\|(\ln \tilde \lambda_{\ell}^P\tilde \mu_{\ell}^Q\tilde\lambda_{\ell,i}^{-1})_{\ell}\right\|$. \end{proof} \begin{thm} Let $C_n$ be an $n$-dimensional complex torus, holomorphically embedded into a complex manifold $M_{n+d}$.
Assume that $T_CM$ splits. Assume the normal bundle $N_C$ has (locally constant) Hermitian transition functions. Assume that $N_C$ is Diophantine. Then some neighborhood of $C$ is biholomorphic to a neighborhood of the zero section in the normal bundle. \end{thm} \begin{remark} When $C$ is a product of $1$-dimensional tori whose normal bundle is a direct sum of line bundles, the above result is due to Il'yashenko-Pyartli~\cite{ilyashenko-pyartly-embed}. \end{remark} We have $\tau_j(h,v) =\hat\tau_j(h,v)+(\tau^h_j(h,v),\tau^v_j(h,v) )$. Here, the functions $$ \tau^{\bullet}_j(h,v)=\sum_{Q\in \mathbb{N}^d, |Q|\geq 2}\tau^{\bullet}_{j,Q}(h)v^Q $$ are holomorphic in $(h,v)$ in a neighborhood of $\Omega_{\epsilon,r}$ with values in ${\mathbb C}^{n+d}$. \begin{defn} Set $\Omega_{\epsilon,r}:=\Omega_\epsilon\times\Delta_r^d$, $ \tilde \Omega_{\epsilon,r}:=\overline\Omega_{\epsilon,r}\cup\bigcup_{i=1}^n \hat\tau_i(\overline\Omega_{\epsilon,r})$. Denote by $\mathcal A_{\epsilon,r}$ $($resp. $\tilde{\mathcal A}_{\epsilon, r})$ the set of holomorphic functions on $\overline{\Omega_{\epsilon,r}}$ $($resp. $\overline{\tilde \Omega_{\epsilon,r}})$. If $f\in \mathcal{A}_{\epsilon, r}$ $($resp. $\tilde f\in\tilde{\mathcal A}_{\epsilon, r})$, we set $$ \|f\|_{\epsilon,r}:=\sup_{(h,v)\in\Omega_{\epsilon,r}}| f(h,v)|, \quad |||\tilde f|||_{\epsilon,r}:=\sup_{(h,v)\in\tilde\Omega_{\epsilon,r}}| \tilde f(h,v)|. $$ \end{defn} As such, each $f\in \mathcal{A}_{\epsilon, r}$ can be expressed as a convergent Taylor-Laurent series $$ f(h,v)= \sum_{Q\in \mathbb{N}^d}\sum_{P\in {\mathbb Z}^n}f_{Q,P}h^Pv^Q $$ for $(h,v)\in \Omega_{\epsilon,r}=\Omega_\epsilon\times\Delta_r^d$. Recall that each holomorphic function on $E_j$ in \rl{connected} admits a unique Taylor-Laurent series expansion on $\tilde\Omega_{\epsilon'}^{1\cdots n}\times \Delta_{\epsilon'}^d$ when $\epsilon'>0$ is sufficiently small.
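In the simplest case $n=d=1$ (an assumption made only for illustration), $\Omega_{\epsilon,r}$ is an annulus times a disc and the Taylor-Laurent expansion takes the familiar form:

```latex
% Sketch for n=d=1: Laurent expansion in h, Taylor expansion in v.
\[
f(h,v)=\sum_{q\geq 0}\sum_{p\in{\mathbb Z}} f_{q,p}\,h^p v^q,
\qquad
f_{q,p}=\frac{1}{(2\pi i)^2}\int_{|w|=r'}\int_{|\zeta|=s}
\frac{f(\zeta,w)}{\zeta^{p+1}\,w^{q+1}}\,d\zeta\,dw,
\]
```

with $s\in\Omega_\epsilon^+$ and $0<r'<r$, combining a Laurent expansion in $h$ with a Taylor expansion in $v$.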
We can state the following, for later use~: \begin{lemma}\label{connected} For $i\neq j$, the set $ \hat\tau_j(\overline\Omega_{\epsilon,r})\cap\hat\tau_i (\overline\Omega_{\epsilon,r})$ is a connected Reinhardt domain containing $\tilde \Omega_{\epsilon}^{1\dots n}\times \Delta_{r'}^d$ in ${\mathbb C}^{n+d}$ when $r'>0$ is sufficiently small. \end{lemma} \subsection{Holomorphic functions on $\Omega_{\epsilon,r}$} In this section, we study elementary properties of, and estimates for, holomorphic functions $f$ on $\overline\Omega_{\epsilon,r}$. \begin{lemma}\label{laurent} An element $f(h)$ of ${\mathcal A}_{\epsilon}$, that is, a holomorphic function in a neighborhood of $\overline \Omega_{\epsilon}$, admits a Laurent series expansion in $h$ \eq{L-exp} f(h)=\sum_{P\in{\mathbb Z}^n} c_Ph^P. \end{equation} The series converges normally on $\Omega_\epsilon$. Moreover, the Laurent coefficients \eq{L-coef} c_P=\f{1}{(2\pi i)^n}\int_{|\zeta_1|=s_1,\dots, |\zeta_n|=s_n} f(\zeta)\zeta^{-P-(1,\dots, 1)}\, d\zeta_1\wedge\cdots \wedge d\zeta_n \end{equation} are independent of $s\in \Omega_\epsilon^+$ and \eq{L-est} |c_P|\leq\sup_{\Omega_\epsilon}| f|\inf_{s\in \Omega_\epsilon^+} s^{-P}. \end{equation} \end{lemma} \begin{proof}Obviously, estimate \re{L-est} follows from \re{L-coef}. Define $A(a,b)=\{w\in {\mathbb C}:a<|w|<b\}$. Fix $h\in \Omega_\epsilon$. Then $|h|:=(|h_1|,\dots, |h_n|)\in \Omega_\epsilon^+$. The latter is an open set and we have, for a small positive number $\eta=\eta(h)$, $$ f(h)=\f{1}{(2\pi i)^n} \int_{\partial A(|h_1|-\eta,|h_1|+\eta)} \cdots \int_{\partial A(|h_n|-\eta,|h_n|+\eta)}\f{ f(\zeta)\, d\zeta_1\dots d\zeta_n}{(\zeta_1-h_1)\cdots(\zeta_n-h_n)}. $$ By the Laurent expansion in one variable, we get the expansion \re{L-exp} in which the $c_P$ are given by \re{L-coef} if we take $s_j=|h_j|\pm\eta$ according to the sign of $p_j\in {\mathbb Z}^{\mp}$. We want to show that $c_P$ is independent of $s\in\Omega_\epsilon^+$.
Note that $\Omega_\epsilon^+$ is a connected open set. Any two points $s,\tilde s$ can be connected by a union of line segments in $\Omega_\epsilon^+$ which are parallel to the coordinate axes in ${\mathbb R}^n$. Using such line segments, say a line segment $[a,b]\times (s_2,\dots, s_n)$ in $\Omega_\epsilon^+$, we know that $ f(\zeta)\zeta^{-P-(1,\dots, 1)}$ is holomorphic in $\zeta_1$ in the closure of $A(a,b)$ when $|\zeta_2|=s_2,\dots, |\zeta_n|=s_n$. By Cauchy's theorem, the integrals are independent of $s_1\in[a,b]$ for these $s_2,\dots, s_n$. This shows that \re{L-coef} is independent of $s$. Finally, the series converges uniformly on each compact subset of $\Omega_\epsilon$. Indeed, for a small perturbation of $h$, the radii in the above integral can be chosen independently of $h$. Then we can see easily that the series converges locally uniformly in $h$. \end{proof} Recall that \begin{gather*} {\mathcal P}_\epsilon^+=\left\{\sum_{i=1}^n t_i\operatorname{Im}\tau_i\in{\mathbb R}^n\colon t\in (-\epsilon,1+\epsilon)^{n}\right\},\\ {\Omega}_\epsilon^+=\left\{(e^{-2\pi R_1},\dots, e^{-2\pi R_n})\colon R \in {\mathcal P}_\epsilon^+ \right\}. \end{gather*} \begin{lemma}\label{dist-lemma} There is a constant $\kappa_0>0$ that depends only on $\operatorname{Im}\tau_1,\dots,\operatorname{Im}\tau_n$ such that if $P\in{\mathbb R}^n$ and $\epsilon>\epsilon'$, there exists $R\in {\mathcal P}_{\epsilon}^+$ such that for all $R' \in {\mathcal P}_{\epsilon'}^+$ we have \eq{RR'} (R'-R)\cdot P\leq -\kappa_0(\epsilon-\epsilon')|P|. \end{equation} \end{lemma} \begin{proof}Without loss of generality, we may assume that $P$ is a unit vector. Let $\pi(x)P$ be the orthogonal projection of $x\in{\mathbb R}^n$ onto the line spanned by $P$. Choose $R\in\overline {\mathcal P}_\epsilon^+$ at which $\pi$ attains its largest value on $\overline {\mathcal P}_\epsilon^+$.
Note that $R$ must be on the boundary of ${\mathcal P}_\epsilon^+$ and the latter is contained in the half-space $H$ defined by $\pi(y)\leq\pi(R)$ for $y\in{\mathbb R}^n$. Hence, $\partial H$ is orthogonal to $P$. Then for any $R'\in \mathcal {P}_{\epsilon'}^+$, $$ \pi(R)-\pi(R')=\operatorname{dist}(R',\partial H)\geq\operatorname{dist} ( \partial {\mathcal P}_{\epsilon}^+,{\mathcal P}_{\epsilon'}^+)\geq (\epsilon-\epsilon')/C, $$ for some constant $C>0$ that depends only on $\operatorname{Im}\tau_1,\dots,\operatorname{Im}\tau_n$. Therefore, we obtain $(R'-R)\cdot P=-(\pi(R)-\pi(R'))\leq -(\epsilon-\epsilon')/C$, which is \re{RR'} with $\kappa_0:=1/C$. \end{proof} \begin{remark}Since $ {\mathcal P}_{\epsilon}^+$ is a parallelotope, we can choose $R$ to be a vertex of ${\mathcal P}_{\epsilon}^+$. \end{remark} In what follows, we denote by $\kappa$ the fixed constant~: \begin{equation}\label{kappa} \kappa:=2\pi\kappa_0. \end{equation} \begin{lemma}[Cauchy estimates]\label{cauchy} If $f\in\mathcal A_{\epsilon, r}$, then for all $(P,Q)\in {\mathbb Z}^n\times \mathbb {N}^d$, \begin{equation}\label{Reps} |f_{Q,P}|\leq \frac{\sup_{\Omega_{\epsilon,r}}|f|}{r^{|Q|}\max_{s\in \Omega_\epsilon^+} s^{P}}. \end{equation} Furthermore, if $f=\sum_{Q\in \mathbb{N}^d, P\in {\mathbb Z}^n}f_{QP}h^Pv^Q\in \mathcal A_{\epsilon , r}$ and $0<\delta<{\kappa}\epsilon$, then $$ \|f\|_{\epsilon {-{\delta}/{\kappa}}, re^{-\delta}}\leq \frac{C\sup_{\Omega_{\epsilon,r}}|f|}{\delta^{\nu}}, $$ where $C$ and $\nu$ depend only on $n$ and $d$.
\end{lemma} \begin{proof} According to \rl{laurent} and Cauchy estimates on polydiscs, we have for any $s\in \Omega_\epsilon^+$, $$ |f_{Q,P}|\leq \frac{\sup_{\Omega_\epsilon}|f_Q(h)|}{s^{P}}\leq \frac{\sup_{\Omega_{\epsilon,r}}|f|}{s^{P}r^{|Q|}}. $$ According to \rl{laurent} and Cauchy estimates for polydiscs, we have, if $(h,v)\in \Omega_{\epsilon-\delta',re^{-\delta}}$, then for all $s\in \Omega_{\epsilon}^+$, \begin{align*} |f_Q(h)|&\leq \frac{\sup_{v\in \Delta^d_r }|f(h,v)|}{r^{|Q|}},\\ |f_{Q,P}h^P|&\leq \left|\f{1}{(2\pi i)^n}\int_{|\zeta_1|=s_1,\dots, |\zeta_n|=s_n} f_Q(\zeta)\frac{h^P}{\zeta^{P}}\, \frac{d\zeta_1\wedge\cdots \wedge d\zeta_n}{\zeta_1\cdots\zeta_n}\right|. \end{align*} Set $s_j=e^{-2\pi R_j}$, $|h_j|=e^{-2\pi R_j'}$, $R=(R_1,\dots, R_n)$ and $R'=(R_1',\dots, R_n')$. By \rl{dist-lemma}, \begin{equation}\label{fourier-type} \inf_{(|\zeta_1|,\dots,|\zeta_n|)=s\in\Omega_{\epsilon}^+}\sup_{h\in \Omega_{\epsilon {-\delta'}}} \left|\frac{h^P}{\zeta^P}\right|=\inf_{R\in \mathcal P_{\epsilon}^+}\sup_{R'\in\mathcal P_{\epsilon {-\delta'}}^+} e^{-2\pi \langle R-R',P\rangle}\leq e^{-\kappa\delta' |P|}, \end{equation} where the positive constant $\kappa$, defined by \rl{dist-lemma} and \re{kappa}, is independent of $P$ and $\zeta$. Thus \begin{equation}\label{est-small} |f_{Q,P}h^Pv^Q|\leq \frac{\sup_{\Omega_{\epsilon,r}}|f| e^{-\kappa\delta'|P|}r^{|Q|}e^{-\delta|Q|}}{r^{|Q|}}\leq \sup_{\Omega_{\epsilon,r}}|f| e^{-\delta'\kappa |P|}e^{-\delta|Q|}. \end{equation} Hence, setting $\delta':=\delta/\kappa$, we have $$ \|f\|_{\epsilon {-{\delta}/{\kappa}}, re^{-\delta}}\leq \sum_{Q\in \mathbb{N}^d, P\in {\mathbb Z}^n}|f_{Q,P}h^Pv^Q|\leq \frac{C\sup_{\Omega_{\epsilon,r}}|f|}{\delta^{\nu}}, $$ where $C$ and $\nu$ depend only on $n$ and $d$. \end{proof} \subsection{Conjugacy of the deck transformations} Let us show that there is a biholomorphism $\Phi(h,v)=(h,v)+\phi(h,v)$ of some neighborhood $\Omega_{\tilde\epsilon,\tilde r}$, fixing ${\mathcal C}$ pointwise (i.e.
$\Phi(h,0)=(h,0)$) such that $$ \Phi\circ \hat \tau_i = \tau_i\circ \Phi,\quad i=1,\ldots, n. $$ This reads $\hat\tau_i+\phi(\hat\tau_i)=\hat \tau_i(Id+\phi)+\tau^{\bullet}_i(Id+\phi)$, that is, for all $i=1,\ldots, n$, \begin{equation} \mathcal L_i(\phi)=\tau^{\bullet}_i(Id+\phi)+\left(\hat \tau_i(Id+\phi)- \hat \tau_i-D\hat \tau_i.\phi\right)=\tau^{\bullet}_i(Id+\phi). \end{equation} Here we define $\mathcal L_i(\phi):=\phi(\hat\tau_i)-D\hat \tau_i.\phi$ and $ (\mathcal L^h_i(\phi^h),\mathcal L^v_i(\phi^v)):=\mathcal L_i(\phi)$. We have $$ (\mathcal L^h_i(\phi^h),\mathcal L^v_i(\phi^v))=\left(\phi^h(T_ih, M_iv)-T_i\phi^h(h,v), \phi^v(T_ih, M_iv)-M_i\phi^v(h,v)\right). $$ Since the $\hat\tau_j$ are linear, $D\hat\tau_j=\hat\tau_j$. Expand the latter in Taylor-Laurent expansions as \begin{eqnarray*} \mathcal L_i^h(\phi^h)&=&\sum_{Q\in{\mathbb N}^d_2}\left(\sum_{P\in {\mathbb Z}^n}\left(\lambda^P_i\mu_i^Q\times Id_n-T_i\right)\phi^h_{Q,P}h^P\right)v^Q, \quad \phi^h_{Q,P}\in {\mathbb C}^n,\\ \mathcal L_i^v(\phi^v)&=&\sum_{Q\in{\mathbb N}^d_2}\left(\sum_{P\in {\mathbb Z}^n}\left(\lambda^P_i\mu_i^Q\times Id_d-M_i\right)\phi^v_{Q,P}h^P\right)v^Q, \quad \phi^v_{Q,P}\in {\mathbb C}^d.\\ \end{eqnarray*} Recall the notations $ \lambda_{\ell}=(\lambda_{\ell,1},\ldots,\lambda_{\ell,n})$ and $ \mu_\ell=(\mu_{\ell,1},\ldots,\mu_{\ell,d}). $ With $P=(p_1,\ldots, p_n)\in{\mathbb Z}^n$ and $Q=(q_1,\ldots, q_d)\in \mathbb{N}^d$, we have $$ \lambda_{\ell}^P\mu_{\ell}^Q:=\prod_{i=1}^n\lambda_{\ell,i}^{p_i}\prod_{j=1}^d\mu_{\ell,j}^{q_j}. $$ \begin{lemma}\label{computeFij}Let $\tau_j\in\mathcal A_{\epsilon,r}^{n+d}$ with $ \tau_j=\hat\tau_j-F_j$ and $ F_j(h,v)=O(|v|^{q+1}) $ with $q\geq1$. Suppose that $\tau_i\tau_j=\tau_j\tau_i$ in a neighborhood of $\Omega_{0}^{1\cdots n}\times\{0\}$ in ${\mathbb C}^{n+d}$. Then $ \mathcal L_iF_j-\mathcal L_jF_i=O(|v|^{2q+1}). $\end{lemma} \begin{proof} Recall that the $\hat\tau_i(h,v)$ are linear maps in $h,v$.
Also, $\hat\tau_i\hat\tau_j$ sends $\Omega_{\epsilon}^j\times\Delta_\epsilon^d$ into ${\mathbb C}^{n+d}$. Since $\tau_j\in\mathcal A_{\epsilon,r}^{n+d}$ and $\tau_j=\hat\tau_j+O(|v|^2)$, continuity implies that $\tau_i\tau_j$ is well-defined on the product domain $\Omega^{j}_{\epsilon'}\times\Delta_{r'}^d$ when $\epsilon', r'$ are sufficiently small. Fix $h\in\Omega^{j}_{\epsilon'}$. By Taylor expansions in $v$, we obtain $$ \tau_i\tau_j(h,v)=\hat\tau_i\hat\tau_j(h,v)- \hat\tau_iF_j (h,v)-F_i\circ\hat\tau_j(h,v)+O(|v|^{2q+1}). $$ Since $\hat\tau_i\hat\tau_j=\hat\tau_j\hat\tau_i$ and $\tau_i\tau_j=\tau_j\tau_i$ in a neighborhood of $\Omega_{0}^{1\cdots n}\times\{0\}$, we get $\mathcal L_iF_j-\mathcal L_jF_i=O(|v|^{2q+1})$ in a possibly smaller neighborhood of $\Omega_{0}^{1\cdots n}\times\{0\}$. \end{proof} We will apply the following result to $F_j=\hat\tau_j-J^{2q}(\tau_j)$, where $J^{2q}(\tau_j)$ denotes the $2q$-jet at $0$ in the variable $v$. These are holomorphic on $\Omega_{\epsilon,r}$. Recall that $$ \Omega^{ij}_{\epsilon',r'}:= \Omega_{\epsilon',r'}\cap\hat\tau_i^{-1}(\Omega_{\epsilon',r'})\cap\hat\tau_j^{-1}(\Omega_{\epsilon',r'}). $$ \begin{prop}\label{cohomo} Assume $N_C$ is Diophantine. Fix $\epsilon_0,r_0,\delta_0$ in $(0,1)$. Let $0<\epsilon'<\epsilon<\epsilon_0$, $0<r'<r<r_0$, $0<\delta<\delta_0$, and $\frac{\delta}{\kappa}<\epsilon$. Suppose that $F_i\in {\mathcal A}_{\epsilon, r}$, $i=1,\ldots, n$, satisfy \begin{equation} \label{almost-com} \mathcal L_i(F_j)-\mathcal L_j(F_i)=0\quad \text{on } \Omega^{ij}_{\epsilon',r'}. \end{equation} There exists $G \in \tilde{\mathcal A}_{\epsilon {-{\delta}/{\kappa}},re^{-\delta}}$ such that \begin{align}\label{formal2q+1} \mathcal L_i(G)&=F_i \ \text{on } \Omega_{\epsilon {-{\delta}/{\kappa}},re^{-\delta}}.
\end{align} Furthermore, $G$ satisfies \begin{align}\label{estim-sol} \|G\|_{\epsilon {-{\delta}/{\kappa}},re^{-\delta}}&\leq \max_i\|F_{i}\|_{\epsilon,r}\frac{C'}{\delta^{\tau+\nu}},\\ \label{estim-solcompo} \|G\circ\hat\tau_i\|_{\epsilon {-{\delta}/{\kappa}},re^{-\delta}}&\leq \max_i\|F_{i}\|_{\epsilon,r}\frac{ C'}{\delta^{\tau+\nu}}, \end{align} for some constant $C'$ that is independent of $F,q,\delta, r,\epsilon$, and some $\nu$ that depends only on $n$ and $d$. Furthermore, if $F_j(h,v)=O(|v|^{q+1})$ for all $j$, then \eq{} G(h,v)=O(|v|^{q+1}), \quad G(h,v)=J^{2q}G(h,v). \end{equation} \end{prop} \begin{proof} Since $F_i\in {\mathcal A}_{\epsilon, r}$, we can write $$ F_i(h,v)=\sum_{Q\in \mathbb{N}^d, |Q|\geq2}\sum_{P\in{\mathbb Z}^n}F_{i,Q,P}h^Pv^Q, $$ which converges normally for $(h,v)\in\Omega_{\epsilon,r}$. We emphasize that the $F_{i,Q,P}$ are vectors, whose $k$th component is denoted by $F_{i,k,Q,P}$. For each $(Q,P)\in \mathbb{N}^d\times {\mathbb Z}^n$, each $i=1,\ldots, n$, and each $j=1,\ldots ,d$, let $i_h:=i_h(Q,P,i), i_v:=i_v(Q,P,j)$ be in $ \{1,\ldots, n\}$ as in \rd{def-nr}. Let us set \begin{align} G^h_i&:=\sum_{Q\in \mathbb{N}^d, 2\leq|Q|\leq 2q}\sum_{P\in{\mathbb Z}^n}\frac{F^h_{i_h,i,Q,P}}{\lambda_{i_h}^P\mu_{i_h}^Q-\lambda_{i_h,i}}h^Pv^Q,\quad i=1,\ldots, n\label{sol-coh-h}\\ G^v_j&:=\sum_{Q\in \mathbb{N}^d, 2\leq |Q|\leq 2q}\sum_{P\in{\mathbb Z}^n}\frac{F^v_{i_v,j,Q,P}}{\lambda_{i_v}^P\mu_{i_v}^Q-\mu_{i_v,j}}h^Pv^Q,\quad j=1,\ldots, d\label{sol-coh-v}. \end{align} According to \re{almost-com}, we have \begin{equation}\label{compat-coeff} (\lambda_{i_h}^P\mu_{i_h}^Q-\lambda_{i_h,i})F_{m,i,Q,P}^h=(\lambda_{m}^P\mu_{m}^Q-\lambda_{m,i}) F_{i_h,i,Q,P}^h, \quad 2\leq|Q|\leq 2q.
\end{equation} Therefore, using \re{compat-coeff}, the $i$th-component of $\mathcal L_m(G)$ reads \begin{align*} \mathcal L_m(G^h_i) = &\sum_{Q\in \mathbb{N}^d, 2\leq |Q|\leq 2q }\sum_{P\in {\mathbb Z}^n}(\lambda_{m}^P\mu_{m}^Q-\lambda_{m,i})\frac{F^h_{i_h,i,Q,P}}{(\lambda_{i_h}^P\mu_{i_h}^Q-\lambda_{i_h,i})}h^Pv^Q\\ = &\sum_{Q\in \mathbb{N}^d,2\leq|Q|\leq 2q}\sum_{P\in {\mathbb Z}^n}F_{m,i,Q,P}^hh^Pv^Q. \end{align*} Proceeding similarly for the vertical component, we obtain the formal equality~: \begin{equation}\label{formal-lin} \mathcal L_m(G) = F_m ,\quad m=1,\ldots, n. \end{equation} Let us estimate these solutions. According to \rd{dioph} and formulas \re{sol-coh-h}-\re{sol-coh-v}, we have \eq{Ghv} \max_{i,j}\left (|G^h_{i,Q,P}|,|G^v_{j,Q,P}|\right )\leq \max_i|F_{i,Q,P}|\frac{(|P|+|Q|)^{\tau}}{D}. \end{equation} Let $(h,v)\in \Omega_{\epsilon {-{\delta}/{\kappa}},re^{-\delta}}$. According to \re{est-small}, we have \begin{align*} |G_{Q,P}h^Pv^Q|&\leq \max_i\|F_{i}\|_{\epsilon,r}e^{-\delta(|P|+|Q|)}\frac{(|P|+|Q|)^{\tau}}{D}\\ &\leq \max_i\|F_{i}\|_{\epsilon,r}e^{-\frac{\delta}{2}(|P|+|Q|)}\frac{(2\tau e)^{\tau}}{D\delta^{\tau}}. \end{align*} Summing over $P$ and $Q$, we obtain $$ \|G\|_{\epsilon {-{\delta}/{\kappa}},re^{-\delta}}\leq \max_i\|F_{i}\|_{\epsilon,r}\frac{C'}{\delta^{\tau+\nu}}, $$ for some constants $C',\nu$ that are independent of $F,\epsilon,\delta$. Hence, $G\in \mathcal A_{\epsilon {-{\delta}/{\kappa}},re^{-\delta}}$. Let us prove \re{estim-solcompo}. Let $B:=2\max_{k,i,j}(|\lambda_{k,i}|, |\mu_{k,j}|)$. Then, there is a constant $D'$ such that \begin{gather}\label{sd-ehanced} \max_{\ell\in \{1,\ldots, n\}}\left |\lambda_{\ell}^P\mu_{\ell}^Q-\lambda_{\ell,i}\right| \geq \frac{D'\max_k|\lambda_{k}^P\mu_{k}^Q|}{(|P|+|Q|)^{\tau}},\\ \label{sd-ehanced+} \max_{\ell\in \{1,\ldots, n\}} \left |\lambda_{\ell}^P\mu_{\ell}^Q-\mu_{{\ell},j}\right | \geq \frac{D'\max_k|\lambda_{k}^P\mu_{k}^Q|}{(|P|+|Q|)^{\tau}}.
\end{gather} Indeed, if $\max_k|\lambda_k^P\mu_k^Q|<B$, then \rd{dioph} gives \re{sd-ehanced} and \re{sd-ehanced+} with $D':= \frac{D}{B}$. Otherwise, if $$|\lambda_{k_0}^P\mu_{k_0}^Q|:=\max_k|\lambda_{k}^P\mu_{k}^Q|\geq B,$$ then $|\lambda_{k_0,i}|\leq \frac{B}{2}\leq \frac{|\lambda_{k_0}^P\mu_{k_0}^Q|}{2}$. Hence, we have $$ \left |\lambda_{k_0}^P\mu_{k_0}^Q-\lambda_{k_0,i}\right|\geq \left||\lambda_{k_0}^P\mu_{k_0}^Q|- |\lambda_{k_0,i}|\right|\geq \frac{|\lambda_{k_0}^P\mu_{k_0}^Q|}{2}. $$ We have verified \re{sd-ehanced}. Similarly, we can verify \re{sd-ehanced+}. Finally, combining all cases gives us, for $m=1,\ldots, n$, \begin{align*} |[G\circ\hat \tau_m]_{QP}| &= \left| G_{Q,P}\lambda_{m}^P\mu_{m}^Q \right|\leq\max_{\ell}|F_{\ell,Q,P}| \frac{|\lambda_{m}^P\mu_{m}^Q|}{|\lambda_{i_h}^P\mu_{i_h}^Q-\lambda_{i_h,i}|}\\ &\leq \max_{\ell}|F_{\ell,Q,P}|\frac{|\lambda_{m}^P\mu_{m}^Q|(|P|+|Q|)^{\tau}}{D'\max_k|\lambda_{k}^P\mu_{k}^Q|}\\ &\leq \max_{\ell}|F_{\ell,Q,P}|\frac{(|P|+|Q|)^{\tau}}{D'}.\end{align*} Hence, $\tilde G_m:=G\circ \hat \tau_m\in \mathcal A_{\epsilon {-{\delta}/{\kappa}},re^{-\delta}}$. We can define $\tilde G\in \tilde{\mathcal A}_{\epsilon {-{\delta}/{\kappa}},re^{-\delta}}$ such that $\tilde G=\tilde G_m\circ\hat\tau_m^{-1}$ on $\hat\tau_m\Omega_{\epsilon,r}$. We verify that $\tilde G$ extends to a single-valued holomorphic function of class $\tilde {\mathcal A}_{\epsilon,r}$. Indeed, $\tilde G_i\circ\hat\tau_i^{-1}=\tilde G_j\circ\hat\tau_j^{-1}$ on $\hat\tau_i\Omega_{\epsilon,r}\cap \hat\tau_j\Omega_{\epsilon,r}$, since the latter is connected by \rl{connected} and the two functions agree with $G$ on $\hat\tau_i\Omega_{\epsilon,r}\cap \hat\tau_j\Omega_{\epsilon,r}\cap \Omega_{\epsilon,r}$, which contains a neighborhood of $\tilde\Omega_\epsilon\times\{0\}$ in ${\mathbb C}^{n+d}$.
\end{proof} In what follows, we shall set $\hat\tau_0=Id$ and for any $f \in (\tilde{\mathcal A}_{\epsilon,r})^{n+d}$ and $F\in (\tilde{\mathcal A}_{\epsilon,r})^{n+d}$, we set \begin{align*} |||f|||_{\epsilon,r}&:=\|f\|_{\epsilon,r}+\sum_{i=1}^n\| \hat\tau_i^{-1} f\circ \hat\tau_i\|_{\epsilon,r}= \sum_{i=0}^n\| \hat\tau_i^{-1} f\circ \hat\tau_i\|_{\epsilon,r},\\ F_{(i)}&:=\hat\tau_i^{-1}F\circ\hat\tau_i\in \mathcal A_{\epsilon,r},\quad F_{(0)}:=F,\quad i=1,\ldots, n. \end{align*} \subsection{Iteration scheme} We shall prove the main result through a Newton scheme. Let us define sequences of positive real numbers $$ \delta_{k}:= \frac{\delta_0}{(k+1)^2}, \quad r_{k+1}:=r_ke^{-5\delta_k}, \quad \epsilon_{k+1}:= \epsilon_k { {-\frac{5\delta_k}{\kappa}}} $$ such that \begin{equation}\label{sigma} \sigma:=\sum_{k\geq 0}\delta_k <2\delta_0. \end{equation} We assume the following conditions hold~: \begin{gather} \label{cond1}\delta_0<\frac{\kappa}{20}\epsilon_0,\\ \label{cond2}\delta_0<\frac{\ln 2}{10}. \end{gather} Condition \re{cond1} ensures that $$ \frac{5}{\kappa}\sigma<\frac{10}{\kappa}\delta_0<\frac{\epsilon_0}{2},\quad \text{so that}\quad \epsilon_{k}>\frac{\epsilon_0}{2},\; k\geq 0. $$ Condition \re{cond2} ensures that $$ e^{-5\sigma}>e^{-10\delta_0}>\frac{1}{2},\quad \text{so that}\quad r_k>\frac{r_0}{2},\; k\geq 0. $$ Let $m=5$ be fixed. We define $\epsilon_{k+1}<\epsilon_{k}^{(\ell)}<\epsilon_{k}$ and $r_{k+1}<r_{k}^{(\ell)}<r_{k}$, $\ell=1,\ldots, m$ as follows~: \begin{align*} \epsilon_{k}^{(\ell)}=\epsilon_{k} {-\frac{\ell\delta_k}{\kappa}},&\quad\epsilon_{k+1}:=\epsilon_{k}^{(m)},\\ r_k^{(\ell)}=r_ke^{-\ell\delta_k},&\quad r_{k+1}:=r_k^{(m)} . \end{align*} We emphasize that condition \re{cond1} ensures $\epsilon_{k}^{(\ell)}>0$.
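The choice $\delta_k=\delta_0/(k+1)^2$ makes the successive losses summable; explicitly,

```latex
\[
\sigma=\sum_{k\geq 0}\delta_k=\delta_0\sum_{k\geq 0}\frac{1}{(k+1)^2}
=\frac{\pi^2}{6}\,\delta_0<2\delta_0,
\]
\[
r_k=r_0\,e^{-5(\delta_0+\cdots+\delta_{k-1})}>r_0\,e^{-5\sigma},
\qquad
\epsilon_k=\epsilon_0-\frac{5}{\kappa}\,(\delta_0+\cdots+\delta_{k-1})
>\epsilon_0-\frac{10\,\delta_0}{\kappa},
\]
```

which is how conditions \re{cond1} and \re{cond2} yield $\epsilon_k>\epsilon_0/2$ and $r_k>r_0/2$ for all $k$.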
Let us assume that, for each $i=1,\ldots, n$, $\tau_i^{(k)}=\hat \tau_i+ \tau^{\bullet (k)}_i$ is holomorphic on $\Omega_{\epsilon_{k},r_k}$ and satisfies \eq{4.32-15f} ||\tau^{\bullet(k)}||_{\epsilon_{k},r_k}<\delta_k^{\mu}, \quad \tau^{\bullet(k)}(h,v)=O(|v|^{q_k+1}). \end{equation} We further assume that $\tau_i^{(k)},\tau_j^{(k)}$ commute on $\Omega^{ij}_{\epsilon',r'} \subset \Omega_{\epsilon_{k},r_k}$ for some positive $\epsilon'<\epsilon_k,r'<r_k$. We take $$ q_{k+1}=2q_k+1\geq q_02^{k} $$ for $q_0\geq 1$ to be determined. We will define a sequence $\Phi^{(k)}$ with $\Phi^{(k)}(h,0)=(h,0)$. Let us write on appropriate domains \begin{gather*} \Phi^{(k)}=Id+\phi^{(k)},\quad (\Phi^{(k)})^{-1}:=Id-\psi^{(k)},\\ \quad \tau_i^{(k+1)} = \Phi^{(k)}\circ \tau_i^{(k)}\circ (\Phi^{(k)})^{-1},\quad i=1,\ldots, n. \end{gather*} Note that there is a constant $C>0$ (depending only on the $\mu_{i,j}$'s) such that if $\epsilon''<\epsilon'$, $Cr''<r'$ then $$ \Omega^{ij}_{\epsilon'',r''}\subset\Omega_{\epsilon'}\times\Delta_{r'}^d,\quad \Omega_{\epsilon''}\times\Delta_{r''}^d\subset \Omega^{ij}_{\epsilon',r'}. $$ Since the $\tau_i^{(k)}, \tau_j^{(k)}$ commute on $\Omega_{\epsilon'}\times\Delta_{\epsilon'}^d$ for some $\epsilon'>0$ and $\Phi^{(k)}(h,v)=(h,v)+O(|v|^2)$, the maps $\tau_i^{(k+1)}, \tau_j^{(k+1)}$ still commute on the same kind of domains for a possibly smaller $\epsilon'$. By \rl{computeFij}, we obtain~: \begin{align}\label{compatible} \mathcal L_j(\tau^{\bullet (k)}_i)-\mathcal L_i(\tau^{\bullet (k)}_j)= O(|v|^{2q_k+1}). \end{align} We want to find $\phi^{(k)}$ so that \begin{align} \label{tik+1} \tau_i^{(k+1)}:=\hat\tau_i+\tau^{\bullet (k+1)}_i&=\Phi^{(k)}\circ\tau_i^{(k)}\circ (\Phi^{(k)})^{-1}\\ &=\hat \tau_i(Id-\psi^{(k)})+\tau^{\bullet(k)}_i(Id-\psi^{(k)})\nonumber\\ &\quad +\phi^{(k)}\left(\hat \tau_i(Id-\psi^{(k)})+\tau^{\bullet(k)}_i(Id-\psi^{(k)})\right) \nonumber\end{align} is defined and bounded on $ \Omega_{\epsilon_{k+1},r_{k+1}}$.
We now define $\Phi^{(k)}$ by applying \rp{cohomo} with these $F_i:=-J^{2q_k}(\tau^{\bullet (k)}_i)$. Let $\phi^{(k)}$ stand for $G$. Therefore, given $0<\delta<\kappa \epsilon_k^{(1)}$, $\phi^{(k)}$ is holomorphic and bounded on $\tilde\Omega_{\epsilon_{k} {-\frac{\delta}{\kappa}},r_ke^{-\delta}}$ and it satisfies on $\Omega_{\epsilon_{k}^{(1)} {-\frac{\delta}{\kappa}},r_k^{(1)}e^{-\delta}}$, \begin{equation}\label{sol-cohom-approx} \mathcal L_i(\phi^{(k)}):=\phi^{(k)}(\hat\tau_i)-D\hat \tau_i.\phi^{(k)}= -J^{2q_k}(\tau^{\bullet (k)}_i),\quad i=1,\ldots,n. \end{equation} Writing formally $\Phi^{(k)}\circ \Psi^{(k)}=Id$ and using linearity of $\hat\tau_i$, we obtain \begin{align} \psi^{(k)}_{(i)} &= \phi^{(k)}_{(i)}(Id-\psi^{(k)}_{(i)}), \quad i=0,\ldots, n,\label{contr-i} \end{align} recalling the notation $\phi^{(k)}_{(i)}=\hat\tau_i^{-1}\phi^{(k)}\circ \hat\tau_i$. According to \rp{cohomo}, we have \begin{equation}\label{phik} |||\phi^{(k)}|||_{\epsilon_k {-\frac{\delta}{\kappa}},r_ke^{-\delta}}\leq ||\tau^{\bullet(k)}||_{\epsilon_k,r_k}\frac{C'}{\delta^{\tau+\nu}}\leq C'\delta_k^{\mu}\delta^{-\tau-\nu}. \end{equation} We recall that the constant $C'$ does not depend on $k$, nor on $\tau^{\bullet(k)}$. \begin{lemma}\label{l-inclusion} There is a constant $\tilde D>0$ (independent of $k$) such that the following holds. Given a positive number $\mu$, assume that $\delta< \min\{\tilde D,\ln 2,\frac{r_0}{2}\}$ satisfies \eq{2C'} 2^{m+2}C'\delta_k^{\mu}\delta^{-\tau-\nu-3}<1, \end{equation} where $m=5$.
Then, for all $0\leq \ell \leq m$ and $i=0,\ldots,n$, the maps $\Phi^{(k)}_{(i)}:=Id+\phi^{(k)}_{(i)}$ are biholomorphisms $$ \Phi^{(k)}_{(i)}\colon \Omega_{\epsilon_k {-\frac{(\ell+1)\delta}{\kappa}},r_ke^{-(\ell+1)\delta}}\to \Omega_{\epsilon_k {-\frac{\ell\delta}{2\kappa}},r_ke^{-\frac{\ell\delta}{2}}} $$ with inverse $\Psi^{(k)}_{(i)}:=Id-\psi^{(k)}_{(i)}~:\Omega_{\epsilon_k {-\frac{(\ell+2)\delta}{\kappa}},r_ke^{- (\ell+2)\delta}}\to \Omega_{\epsilon_k {-\frac{(\ell+1)\delta}{\kappa}},r_ke^{-(\ell+1)\delta}}$ satisfying $$\Phi^{(k)}_{(i)}\circ \Psi^{(k)}_{(i)}=\hat\tau_i^{-1}(\Phi^{(k)}\circ\Psi^{(k)})\circ\hat\tau_i=Id$$ on $\Omega_{\epsilon_k {-\frac{(\ell+2)\delta}{\kappa}},r_ke^{-(\ell+2)\delta}}$, provided $q_0>C(\delta_0,\mu)$. \end{lemma} \begin{proof} We have $1-e^{-\delta}> \delta/2$ and $\frac{1}{2}<e^{-\delta}$ for $0<\delta<\ln 2$. Assuming $\delta<\frac{r_0}{2}$, we have $\delta<r_k$ so that $$\operatorname{dist}(\Delta^d _{r_{k}e^{-( 1 +\ell)\delta}},\partial \Delta^d _{r_ke^{-\ell\delta}})=r_ke^{-\ell\delta}(1-e^{- \delta})>\frac{ \delta^2}{2^{\ell+1}}.$$ On the other hand, by \re{boundary-of-R}, we have $$\operatorname{dist}(\Omega_{\epsilon_{k} {-\frac{(\ell+ 1 )\delta}{\kappa}}},\partial \Omega_{\epsilon_{k} {-\frac{\ell\delta}{\kappa}}})=\operatorname{dist}(\Omega_{\epsilon_{k} {-\frac{(\ell+ 1 )\delta}{\kappa}}}^+,\partial \Omega_{\epsilon_{k} {-\frac{\ell\delta}{\kappa}}}^+).$$ Let $(e^{-2\pi R},e^{-2\pi R'})\in \Omega_{\epsilon_{k} {-\frac{(\ell+1)\delta}{\kappa}}}^+\times \partial\Omega_{\epsilon_{k} {-\frac{\ell\delta}{\kappa}}}^+$. Since the matrix $(\operatorname{Im}\tau_{i,j})_{1\leq i,j\leq n}$ is invertible, there is a constant $\tilde C$ (independent of $k$) such that $$ |e^{-2\pi R}-e^{-2\pi R'}|\geq \tilde C^{-1}\left| 1-e^{-2\pi (R- R')}\right|>2\pi\tilde C^{-1}|R-R'|\geq 2\pi\tilde C^{-1}\frac{\delta}{\kappa}. $$ Let us set $\tilde D:= \frac{4\pi}{\tilde C\kappa}$.
Assuming $\delta<\tilde D$, we have $\delta<\tilde D<2^{\ell+2}\tilde D\leq 2^{m+2}\tilde D$. Assume $2^{m+2}C'\delta_k^{\mu}\delta^{-\tau-\nu-3}<1$. Then according to \re{phik}, we have, for $(h,v)\in \Omega_{\epsilon_k {-\frac{\ell\delta}{\kappa}},r_ke^{-\ell\delta}}$, \begin{align}{}\label{dist-min} |\phi^{(k)}_{(i)}(h,v)|&\leq C'\delta_k^{\mu}\delta^{-\tau-\nu}< \frac{\delta^3}{2^{m+2}}<\frac{\delta}{2}\frac{\delta^2}{2^{\ell+1}} \\ &< \frac{\delta}{2}\operatorname{dist}(\Omega_{\epsilon_k {-\frac{(\ell+ 1 )\delta}{\kappa}},r_ke^{-(\ell+ 1 )\delta}},\partial \Omega_{\epsilon_k {-\frac{\ell\delta}{\kappa}},r_ke^{-\ell\delta}}).\nonumber \end{align} By the Cauchy inequality, we have, for $(h,v)\in \Omega_{\epsilon_k {- \frac{(\ell+ 2 )\delta}{\kappa}},r_ke^{-(\ell+ 2 )\delta}}$, \eq{Dphi} |D\phi^{(k)}_{(i)}(h,v)|\leq \frac{C'\delta_k^{\mu}\delta^{-\tau-\nu}}{\operatorname{dist}(\Omega_{\epsilon_k {-\frac{(\ell+ 2 )\delta}{\kappa}},r_ke^{-(\ell+ 2 )\delta}},\partial \Omega_{\epsilon_k {-\frac{ (\ell+1)\delta}{\kappa}},r_ke^{- (\ell+1)\delta}})}\leq \frac{\delta}{2}<\frac{1}{2}. \end{equation} We can apply the contraction mapping theorem to \re{contr-i} together with the last inequality of \re{dist-min}.
We find a holomorphic solution $\psi^{(k)}_{(i)}$ such that for $(h,v)\in \Omega_{\epsilon_k {- \frac{(\ell+ 3 )\delta}{\kappa}},r_ke^{-(\ell+3)\delta}}$ \begin{align} |\psi^{(k)}_{(i)}(h,v)|&\leq ||\phi^{(k)}_{(i)}||_{\epsilon_k {-\frac{(\ell+ 2)\delta}{\kappa}},r_ke^{- (\ell+2) \delta}} \leq C' \delta_k^{\mu}\delta^{-\tau-\nu} \leq \frac{\delta^3}{2^{m+2}}\label{estimpsi}\\ &<\frac{\delta}{2^{m-\ell-1}}\operatorname{dist}(\Omega_{\epsilon_k {-\frac{(\ell+3)\delta}{\kappa}},r_ke^{-(\ell+3)\delta}},\partial \Omega_{\epsilon_k {-\frac{(\ell+2)\delta}{\kappa}},r_ke^{-(\ell+2)\delta}}).\nonumber \end{align} Hence, we have found a mapping $\Psi^{(k)}_{(i)}:=Id-\psi^{(k)}_{(i)}$ such that $$ \hat\Omega_{\epsilon_k {-\frac{(\ell+3)\delta}{\kappa}},r_ke^{-(\ell+3)\delta}}:=\Psi^{(k)}_{(i)}(\Omega_{\epsilon_k {-\frac{(\ell+3)\delta}{\kappa}},r_ke^{-(\ell+3)\delta}})\subset \Omega_{\epsilon_k {-\frac{(\ell+2)\delta}{\kappa}},r_ke^{-(\ell+2)\delta}}. $$ Also, $\Phi^{(k)}_{(i)}\circ\Psi^{(k)}_{(i)}=Id$ on $\Omega_{\epsilon_k {-\frac{(\ell+3)\delta}{\kappa}},r_ke^{-(\ell+3)\delta}}$. Therefore, $$ \Phi^{(k)}_{(i)}\colon\hat \Omega_{\epsilon_k {-\frac{(\ell+3)\delta}{\kappa}},r_ke^{- (\ell+3)\delta}}\to \Omega_{\epsilon_k {-\frac{(\ell +1)\delta}{\kappa}},r_ke^{-(\ell +1)\delta}} $$ is an (onto) biholomorphism such that $(\Phi^{(k)}_{(i)})^{-1}=\Psi^{(k)}_{(i)}$. \end{proof} \begin{prop}\label{prop4.20} Keep the conditions on $\delta,\mu$ from \rla{l-inclusion}, as well as conditions \rea{cond1} and \rea{cond2}.
If $\delta_0$ is small enough, there is a possibly larger $\mu>0$ such that if, for all $i=1,\ldots, n$, $\tau_i^{(0)}\in ({\mathcal A}_{\epsilon_{0},r_{0}})^{n+d}$ with $|||\tau^{\bullet(0)}_i|||_{\epsilon_{0},r_{0}}\leq \delta_{0}^{\mu}$, then for all $k\geq 0$ we have the following~: \begin{align} \tau_i^{\bullet(k+1)}\in ({\mathcal A}_{\epsilon_{k+1},r_{k+1}})^{n+d}, \quad &\tau_i^{\bullet(k+1)}=O(|v|^{2q_k+1}),\nonumber\\ ||\tau^{\bullet(k+1)}_i||_{\epsilon_{k+1},r_{k+1}}&\leq \delta_{k+1}^{\mu}.\label{tauDotk} \end{align} \end{prop} \begin{proof} Let us first show that $\tau^{(k+1)}_{i}:=\Phi^{(k)}\circ \tau^{(k)}_{i}\circ(\Phi^{(k)})^{-1}$ is well defined on $\Omega_{\epsilon_k {- \frac{\ell\delta}{\kappa}},r_ke^{-\ell\delta}}$ for $4\leq\ell\leq m$ and all $i=1,\ldots, n$. Here $m$ is fixed from \rl{l-inclusion}. Indeed, we have $$ \tau^{(k+1)}_{i}= \hat\tau_i(I+\phi^{(k)}_{(i)})\circ (I+\hat\tau_i^{-1}\tau_{i}^{\bullet(k)})\circ (\Phi^{(k)})^{-1}. $$ Since $\tau_i^{\bullet(k)}$ is of order $\geq 2q_{k-1}+1$, the Schwarz inequality gives $$ \|\hat\tau_i^{-1}\tau^{\bullet(k)}_i\|_{\epsilon_k {- \frac{(\ell-1)\delta}{\kappa}},r_ke^{-(\ell-1)\delta}}\leq \max_i \|\hat\tau_i^{-1}\|_{\epsilon_0,r_0}C e^{-(2q_{k-1}+1)\delta}\|\tau^{\bullet(k)}_i\|_{\epsilon_k {- \frac{(\ell-2)\delta}{\kappa}},r_ke^{-(\ell-2)\delta}}. $$ We recall that $\delta_0$ satisfies \re{cond1} and \re{cond2}. Setting $\delta:=\delta_k$, if $q_0$ is large enough we have $$ \max_i \|\hat\tau_i^{-1}\|_{\epsilon_0,r_0}C e^{-(2q_{k-1}+1)\delta}\leq \max_i \|\hat\tau_i^{-1}\|_{\epsilon_0,r_0}C e^{-\frac{2^{k-1}}{(k+1)^2}q_0\delta_0}<1.
$$ According to \rl{l-inclusion}, \re{2C'} and the distance estimate in \re{dist-min}, we have $$ (I+\hat\tau_i^{-1}\tau_i^{\bullet(k)})\circ (\Phi^{(k)})^{-1}(\Omega_{\epsilon_k {- \frac{\ell\delta}{\kappa}},r_ke^{-\ell\delta}})\subset \Omega_{\epsilon_k {- \frac{(\ell-3)\delta}{\kappa}},r_ke^{-(\ell-3)\delta}} $$ so that $\tau^{(k+1)}_i$ is defined on $\Omega_{\epsilon_k {- \frac{\ell\delta}{\kappa}},r_ke^{-\ell\delta}}$ for $\ell =4\leq m$ since $\phi^{(k)}_{(i)}$ is defined on $\Omega_{\epsilon_k-\delta/\kappa,r_ke^{-\delta/\kappa}}$. For the rest of the proof, we fix $\ell=4$. From the argument above, we have $$ \tau_i^{(k+1)}(\Omega_{\epsilon_k-4\delta/\kappa,r_ke^{-4\delta/\kappa}})\subset \hat\tau_i(\Phi_{(i)}^{(k)}(\Omega_{\epsilon_k-\delta/\kappa,r_ke^{-\delta/\kappa}}))\subset \hat\tau_i(\Omega_{\epsilon_k,r_k})\subset \hat\tau_i(\Omega_{\epsilon_0,r_0}). $$ Hence, $\tau_i^{\bullet(k+1)}$ is uniformly bounded on $\Omega_{\epsilon_k-4\delta/\kappa,r_ke^{-4\delta/\kappa}}$ w.r.t.\ $k$~: $$ \|\tau^{\bullet (k+1)}_i\|_{\epsilon_k-4\delta/\kappa,r_ke^{-4\delta/\kappa}} \leq C. $$ On the other hand, on $\Omega_{\epsilon_k-4\delta/\kappa,r_ke^{-4\delta/\kappa}}$, we have \begin{align} \tau^{\bullet (k+1)}_i&=\hat \tau_i(\phi^{(k)}-\psi^{(k)})+\left(\tau^{\bullet(k)}_i(Id-\psi^{(k)})-\tau^{\bullet(k)}_i\right)\label{remainder1}\\ &\quad+\left(\phi^{(k)}\left(\hat \tau_i-\hat \tau_i\psi^{(k)}+\tau^{\bullet(k)}_i(Id-\psi^{(k)})\right)-\phi^{(k)}(\hat \tau_i)\right) \nonumber \\ &\quad+(\phi^{(k)}(\hat \tau_i)-\hat \tau_i\phi^{(k)}+\tau^{\bullet(k)}_i).\nonumber \end{align} The last term is equal to $\tau^{\bullet(k)}_i-J^{2q_k}(\tau^{\bullet(k)}_i)=O(|v|^{2q_k+1})$. We also have $\phi^{(k)}-\psi^{(k)}=O(|v|^{2q_k+1})$ and $\phi^{(k)}=O(|v|^{q_k+1})$. Thus \eq{tauk+1=2q+1} \tau^{\bullet (k+1)}_i=O(|v|^{2q_k+1}). 
\end{equation} Improving the estimate by the Schwarz inequality, we obtain $$ \|\tau^{\bullet(k+1)}_i\|_{\epsilon_k {- \frac{5\delta}{\kappa}},r_ke^{-5\delta}}\leq C e^{-q_{k+1}\delta}. $$ We have $e^{-x}<1/x$ for $x>0$. For $\delta:=\delta_k< \min\{\tilde D,\ln 2,\frac{r_0}{2}\}$, let $\mu$ satisfy $2^{m+2}C'\delta_k^{\mu-\tau-\nu-3}<1$, in which case the assumptions of \rl{l-inclusion} are satisfied. We then obtain $$ C\delta_{k+1}^{-\mu}e^{-q_{k+1}\delta} \leq C \left(\f{(k+2)^2}{\delta_0}\right) ^{\mu +1}\f{1}{ q_02^{k}}<1 $$ provided $q_0>C(\delta_0,\mu)$. \end{proof} Finally, quite classically, by \re{Dphi} and \re{sigma}, the sequence of diffeomorphisms $\{\Phi_k\circ\Phi_{k-1}\circ\cdots\circ \Phi_1\}_k$ converges uniformly on the open set $\Omega_{\frac{\epsilon_{0}}{2},r_{0}e^{-\sigma}}$ to a diffeomorphism $\Phi$ which satisfies $$ \Phi\circ \hat \tau_i = \tau_i\circ \Phi,\quad i=1,\ldots, n $$ provided $q_0\geq C(\delta_0,\mu)$ and $ ||\tau^{\bullet(0)}_i||_{\epsilon_{0},r_{0}}\leq \delta_{0}^{\mu}$ for some fixed $\delta_0,\mu$. The condition $q_0>C(\delta_0,\mu,\tau,\nu)$ can be achieved by using finitely many $\Phi_0,\dots, \Phi_{m}$. The initial condition $ ||\tau^{\bullet(0)}_i||_{\epsilon_{0},r_{0}}\leq \delta_{0}^{\mu}$ can then easily be achieved by a dilation in the $v$ variable. Indeed, we apply the dilation in $v$ to the original $\tau_1,\dots,\tau_n$ with $q_0=1$. This allows us to construct $\Phi_0$ in \rp{prop4.20} with $k=0$ and define $\Phi_0\tau_j\Phi_0^{-1}$ to achieve $q_1\geq 2$. Then $\Phi_0\tau_j\Phi_0^{-1}$, $j=1,\dots, n$ still commute pairwise on $\Omega_{\epsilon'}\times\Delta_{\epsilon'}^d$ for some $\epsilon'>0$. Applying the procedure again allows us to find $\Phi_1,\dots, \Phi_{k-1}$ and achieve $q_k\geq 2^{k}> C(\delta_0,\mu)$. 
Finally, using a dilation we can apply the full version of \rp{prop4.20} for all $k$ to construct a new sequence of desired mappings $\Phi_0,\Phi_1,\dots$ Hence, the torus $C$ has a neighborhood in $M$ biholomorphic to a neighborhood of the zero section in its normal bundle $N_C$, since there is a biholomorphism fixing ${\mathcal C}$ that conjugates the deck transformations of the covering of the latter to those of the former. \begin{remark} The assumption ``$T_CM$ splits'' can be replaced by the condition that for all $P\in {\mathbb Z}^n$, for all $Q\in \mathbb{N}^d$ with $|Q|=1$, and for all $i=1,\ldots, n$, $$ \max_{\ell\in \{1,\ldots, n\}}\left |\lambda_\ell^P \mu_{\ell}^Q-\lambda_{\ell,i} \right| > \frac{D}{(|P|+1)^{\tau}}. $$ \end{remark} \bibliographystyle{alpha}
\newcommand{\smallheading}[1]{\subsubsection*{#1}} \newenvironment{univ}{\begin{quote}}{\end{quote}} \renewcommand{\theenumi}{\roman{enumi}} \newenvironment{prooflike}[1]{\begin{trivlist}\item\textbf{#1}\ } {\end{trivlist}} \newenvironment{proof}{\begin{prooflike}{Proof}}{\end{prooflike}} \newcommand{\goby}[1]{\rTo^{#1}\linebreak[0]} \newcommand{\oppair}[2]{\pile{\rTo^{\scriptstyle #1}\\ \lTo_{\scriptstyle #2}}} \newarrow{Goesto}|---{->} \newarrow{Incl}C---> \newarrowmiddle{iso}\sim\sim\sim\sim \newcommand{\ternarytree}{% \setlength{\unitlength}{1ex} \begin{picture}(2.1,2.7)(0,0) \cell{1.3}{0}{b}{\bf\sf Y} \cell{0.63}{1.5}{b}{\bf\sf v} \end{picture}} \newcommand{\cell}[4]{\put(#1,#2){\makebox(0,0)[#3]{\ensuremath{#4}}}} \newcommand{\tusual}[1]{% \begin{picture}(4,4)(-2,-2) \cell{-0.2}{0}{c}{#1}% \put(-2,-2){\line(0,1){4}}% \put(-2,2){\line(2,-1){4}}% \put(2,0){\line(-2,-1){4}}% \end{picture}} \newcommand{\toutputrgt}[1]{% \begin{picture}(1,0)(-1,0) \cell{0.4}{0}{l}{#1}% \put(0,0){\line(-1,0){1}}% \end{picture}} \newcommand{\tinputlft}[1]{% \begin{picture}(1,0)(0,0) \cell{-0.4}{0}{r}{#1}% \put(0,0){\line(1,0){1}} \end{picture}} \newcommand{\tinputslft}[2]{% \begin{picture}(1.4,3.8)(-0.4,-1.9) \cell{0}{1.5}{l}{\tinputlft{#1}}% \cell{0}{-1.5}{l}{\tinputlft{#2}}% \cell{0.2}{0.3}{c}{\vdots}% \end{picture}} \title{An abstract characterization of\\ Thompson's group $F$} \author{Marcelo Fiore% \thanks{Computer Laboratory, University of Cambridge, UK; Marcelo.Fiore@cl.cam.ac.uk. 
Partially supported by an EPSRC Advanced Research Fellowship.} \qquad Tom Leinster% \thanks{Department of Mathematics, University of Glasgow, UK; T.Leinster@maths.gla.ac.uk. Partially supported by a Nuffield Foundation award NUF-NAL 04 and an EPSRC Advanced Research Fellowship.} } \date{} \begin{document} \sloppy \maketitle \begin{abstract} We show that Thompson's group $F$ is the symmetry group of the `generic idempotent'. That is, take the monoidal category freely generated by an object $A$ and an isomorphism $A \otimes A \rTo\linebreak[0] A$; then $F$ is the group of automorphisms of $A$. \end{abstract} \section{Introduction} \label{sec:intro} Our purpose in this paper is to clarify an idea concerning Richard Thompson's group $F$: that it is, in a suitable sense, the automorphism group of some object known only to be isomorphic to a combination of two copies of itself. This general idea has been known for some years, but it does not seem to have been observed until now that it can be formalized very succinctly. We prove that $F$ can be defined as follows. Take the monoidal category freely generated by an object $A$ and an isomorphism $A \otimes A \rTo\linebreak[0] A$; then $F$ is the group of automorphisms of $A$. This result first appeared in our 2005 preprint~\cite{FL}. Our characterization is distinct from some superficially similar older characterizations. In particular, it is distinct from Higman's characterization of Thompson's group $V$ as the automorphism group of a certain free algebra, and of $F$ as the subgroup consisting of the `order-preserving' automorphisms~\cite{Hig,Bro,CFP}. It is also distinct from Freyd and Heller's characterization of $F$ via conjugacy idempotents~\cite{FH}. We do not know of any direct way to deduce our characterization from these older ones, or \latin{vice versa}. Intuitively, our result means the following. 
Suppose that we are handed a mathematical object and told only that it is isomorphic to two copies of itself glued together. We do not know what kind of object it is, nor do we know what `gluing' means except that it is some kind of associative operation. On the basis of this information, what automorphisms does our object have? Our result gives the answer: the elements of $F$. Our description of $F$ is not only conceptually simple, but is also a member of a well-established family: many entities of interest can be described via free categories with structure. For example, the braided monoidal category freely generated by one object is the sequence $(B_n)_{n\geq 0}$ of Artin braid groups~\cite{JS,Mac}. The monoidal category freely generated by a monoid consists of the finite ordinals; in other words, it is the augmented simplex category~\cite{Lawv,Mac}. The symmetric monoidal category freely generated by a commutative monoid consists of the finite cardinals. The symmetric monoidal category freely generated by a commutative Frobenius algebra consists of $1$-dimensional smooth oriented manifolds and diffeomorphism classes of $2$-dimensional cobordisms. (This last example is a strong form of the equivalence between commutative Frobenius algebras and $2$-dimensional topological quantum field theories~\cite{Dij}; see~\cite{Kock}, for instance.) In this vein, our result can be expressed as follows: the monoidal category freely generated by an object $A$ and an isomorphism $A \otimes A \rTo\linebreak[0] A$ is equivalent to the groupoid $1 \amalg F$, where $1$ is the trivial group and $\amalg$ is coproduct of groupoids. Our result is this: \begin{thm} \label{thm:main} Let $\cat{A}$ be the monoidal category freely generated by an idempotent object $(A, \alpha)$. Then $\mathrm{Aut}_{\cat{A}}(A)$ is isomorphic to Thompson's group $F$. 
\end{thm} To make this paper accessible to as wide a readership as possible, we give the definition of Thompson's group and explain the categorical language used in the statement of this theorem~(\S\ref{sec:term}). (The only new piece of terminology is `idempotent object', which means an object $A$ together with an isomorphism $\alpha: A \otimes A \rTo\linebreak[0] A$.) But first we discuss earlier characterizations of Thompson's group. \smallheading{Related work} Almost as soon as Thompson introduced the group now called $F$, it began to be understood that $F$ was in some sense the automorphism group of an object known only to be isomorphic to a combination of two copies of itself. This intuition is so crucial that it has been formalized in several ways, of which ours is one. An early such formalization, due to Thompson and Higman, was as follows. A \demph{J\'onsson--Tarski algebra}~\cite{JT}, or \demph{Cantor algebra}, is a set $A$ equipped with a bijection $A \times A \rTo\linebreak[0] A$. Thompson's group $V$ is the automorphism group of the free J\'onsson--Tarski algebra on one generator~\cite{Hig,CFP}. Thompson's group $F$ is the subgroup consisting of those automorphisms that are `order-preserving' in a suitable sense \cite{Bro,CFP}. There is a clear resemblance between these descriptions and ours. However, we know of no direct or simple way to deduce our description of $F$ from the earlier one (or indeed the converse). There is a sense in which our description of $F$ is more direct. Whenever one works with sets and their cartesian products, one automatically introduces a symmetry in the form of the natural isomorphism $X \times Y \rTo\linebreak[0] Y \times X$ for sets $X$ and $Y$. In particular, for sets $X$, there is a nontrivial natural automorphism of $X \times X$. In Thompson and Higman's description, symmetry is first created (by working with sets) and then destroyed (by restricting to the order-preserving automorphisms). 
In our approach symmetry is avoided entirely, by working from the start not with sets, but with objects of a (non-symmetric) monoidal category. This is also what makes it possible to characterize $F$ as the \emph{full} automorphism group of some algebraic structure, rather than just a subgroup. As far as we know, this is the first such characterization. Among all the results related to ours, the closest is probably a theorem of Guba and Sapir~\cite{GS}. Given any presentation of a monoid, they define what they call its Squier complex, a 2-dimensional complex whose connected-components are the elements of the monoid. Every element of the monoid therefore gives rise to a `diagram group', the fundamental group of the corresponding component. They show that the diagram group of the presentation $\langle x \mid x^2 = x \rangle$ at the element $x$ is $F$. The connection between their result and ours can be summarized as follows. First, the Squier complex of this presentation is (up to homotopy) the $2$-skeleton of the classifying space of the monoidal category freely generated by an idempotent object $(A, \alpha)$. (For explanation of the latter phrase, see~\S\ref{sec:term}; for classifying spaces of categories, see~\cite{Seg}, for instance.) Then, the generator $x$ determines a point of the Squier complex, the object $A$ determines a point of the classifying space, and these two points correspond to one another under the homotopy equivalence. Hence the fundamental group at $x$ is the automorphism group of $A$. In this way, their result can be deduced from ours and \latin{vice versa}. Some more distant relatives are the results of Brin~\cite{Brin2}, Dehornoy \cite{De1,De2}, and, ultimately, McKenzie and Thompson~\cite{MT}. In the context of semigroup theory, our work has connections with recent work of Lawson~\cite{Laws,LawsCST}. All of these results express how $F$ arises naturally from two very primitive notions: binary operation and associativity. 
An advantage of our approach is that it makes this idea precise using only standard categorical language, where other approaches have used language invented more or less specifically for the occasion. A further advantage is that Thompson's group $V$, and even higher-dimensional versions of it, have similar characterizations: for $V$, just replace `monoidal category' by `symmetric monoidal category', or equally `finite-product category'. We do not know whether there is such a characterization of Thompson's group $T$; using braided monoidal categories gives not $T$, but the braided version of $V$ defined in~\cite{Brin1}. Also, given any $n \geq 2$, if we take the monoidal category freely generated by an object $A$ and an isomorphism $A^{\otimes n} \rTo\linebreak[0] A$ then the automorphism group of $A^{\otimes r}$ is canonically isomorphic to the generalized Thompson group $F_{n, r}$ of Brown~\cite{Bro}. Freyd and Heller also gave a short categorical definition of $F$, different from ours: it is the initial object in the category of groups equipped with a conjugacy-idempotent endomorphism~\cite{FH}. Again, there is a striking resemblance between this description and ours; but again, no one (to our knowledge) has been able to find a direct deduction of one from the other. The category of forests and the free groupoid on it, which appear in~\S\ref{sec:proof} below, have been considered independently by Belk~\cite{Belk}. We work throughout with \emph{strict} monoidal categories. (See below for definitions.) However, the non-strict monoidal category freely generated by an idempotent object $(A', \alpha')$ is monoidally equivalent to the strict one, and in particular, the automorphism group of $A'$ is $F$. So, for instance, there is an induced homomorphism from $F$ to the automorphism group of the free J\'onsson--Tarski algebra on one generator. We conjecture that this homomorphism is injective and that its image consists of the order-preserving automorphisms. 
\smallheading{} In~\S\ref{sec:term} we explain all of the terminology used in the statement of Theorem~\ref{thm:main}. The theorem is proved in~\S\ref{sec:proof}. Our proof involves almost no calculation, but does use some further concepts from category theory, reviewed in the Appendix. (`\emph{Il faut triompher par la pens\'ee et non par le calcul'}% \footnote{One must prevail by thought, not by calculation.}% ---Poincar\'e.) Some readers may feel that the language used in the statement of the theorem represents quite enough category theory for their taste, even without the further categorical concepts used in the proof. For them we sketch, at the end of~\S\ref{sec:term}, an alternative proof, favouring explicit calculation over conceptual argument. The novelty of this work lies almost entirely in~\S\ref{sec:proof} and in the way in which the categorical and algebraic structures are brought together. In particular, the categorical language explained in~\S\ref{sec:term} is absolutely standard; and while not everything in the Appendix is quite so well known, none of it is by any means new. \section{Terminology} \label{sec:term} Here we explain the terminology in the statement of Theorem~\ref{thm:main}. Further information on Thompson's group can be found in~\cite{CFP}; for more on the categorical language, see~\cite{Mac}. We then sketch a calculational proof of Theorem~\ref{thm:main}, requiring no further categorical concepts. \smallheading{Thompson's group $F$} In the 1960s Richard Thompson discovered three groups, now called $F$, $T$ and $V$, with remarkable properties. The group $F$, in particular, is one of those mathematical objects that appears in many diverse contexts and has been rediscovered repeatedly. 
One definition of $F$ is that it consists of all bijections $f: [0, 1] \rTo\linebreak[0] [0, 1]$ satisfying \begin{enumerate} \item \label{item:F-defn-one} $f$ is piecewise linear (with only finitely many pieces) \item the slope (gradient) of each piece is an integer power of $2$ \item \label{item:F-defn-three} the coordinates of the endpoints of each piece are dyadic rationals. \end{enumerate} For example, the $3$-piece linear function $f$ satisfying $f(0) = 0$, $f(1/4) = 1/2$, $f(1/2) = 3/4$ and $f(1) = 1$ is an element of $F$. In a sense that will be made precise, every element of $F$ can be built from copies of the halving isomorphism $\alpha: [0, 2] \rTo\linebreak[0] [0, 1]$ and its inverse; this is shown for our example $f$ in Figure~\ref{fig:decomp}. \begin{figure} \centering \setlength{\unitlength}{4ex} \begin{picture}(8,3)(-4,-1.5) \thinlines \put(-6,-0.5){\line(6,-1){6}} \put(-6,0.5){\line(6,1){6}} \put(-3,0){\line(6,-1){3}} \put(6,-0.5){\line(-6,-1){6}} \put(6,0.5){\line(-6,1){6}} \put(3,0){\line(-6,1){3}} \thicklines \put(-6,-0.5){\line(0,1){1}} \put(-3,-1){\line(0,1){2}} \put(0,-1.5){\line(0,1){3}} \put(3,-1){\line(0,1){2}} \put(6,-0.5){\line(0,1){1}} \cell{-6.1}{0.5}{r}{0} \cell{-6.1}{-0.5}{r}{1} \cell{-3.1}{1}{r}{0} \cell{-3.1}{0}{r}{1} \cell{-3.1}{-1}{r}{2} \cell{-0.1}{1.5}{r}{0} \cell{-0.1}{0.5}{r}{1} \cell{-0.1}{-0.5}{r}{2} \cell{-0.1}{-1.5}{r}{3} \cell{3.1}{1}{l}{0} \cell{3.1}{0}{l}{1} \cell{3.1}{-1}{l}{2} \cell{6.1}{0.5}{l}{0} \cell{6.1}{-0.5}{l}{1} \cell{-4.5}{0}{c}{\alpha^{-1}} \cell{-1.5}{0.5}{c}{\alpha^{-1}} \cell{-1.5}{-0.75}{c}{\mathrm{id}} \cell{1.5}{0.75}{c}{\mathrm{id}} \cell{1.5}{-0.5}{c}{\alpha} \cell{4.5}{0}{c}{\alpha} \end{picture} \caption{Decomposition of an element of $F$} \label{fig:decomp} \end{figure} So if all we knew about $[0, 1]$ was that it was isomorphic to two copies of itself glued together, $F$ would be the group of \emph{all} automorphisms of $[0, 1]$. This is the spirit of our result. 
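As a concrete sanity check on conditions (i)--(iii), one can verify the example element $f$ with exact rational arithmetic. The following sketch (our illustration, not part of the paper) encodes $f$ by its breakpoints; all names are ours.

```python
# Illustrative check (not from the paper): the 3-piece map f with
# f(0)=0, f(1/4)=1/2, f(1/2)=3/4, f(1)=1 satisfies the defining
# conditions of Thompson's group F.
from fractions import Fraction as Fr

breakpoints = [(Fr(0), Fr(0)), (Fr(1, 4), Fr(1, 2)),
               (Fr(1, 2), Fr(3, 4)), (Fr(1), Fr(1))]

def is_dyadic(q):
    # a reduced rational is dyadic iff its denominator is a power of 2
    d = q.denominator
    return d & (d - 1) == 0

def is_power_of_two(q):
    # slopes must be 2^k for some integer k: numerator or denominator
    # equals 1, and both are powers of 2
    n, d = q.numerator, q.denominator
    return q > 0 and (n == 1 or d == 1) and \
        n & (n - 1) == 0 and d & (d - 1) == 0

slopes = [(y1 - y0) / (x1 - x0)
          for (x0, y0), (x1, y1) in zip(breakpoints, breakpoints[1:])]
```

The slopes come out as $2$, $1$, $1/2$, all integer powers of $2$, and every breakpoint coordinate is dyadic, so $f \in F$.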
For the proof we will need an alternative, more combinatorial definition of $F$. In what follows, \demph{tree} will mean finite, rooted, planar tree, and a tree is \demph{binary} if precisely two branches grow out of each vertex. Figure~\ref{fig:twotrees} shows a pair of binary trees. Except where mentioned, `tree' will mean `binary tree'. For $n \in \nat = \{0, 1, 2, \ldots\}$, write $\bintr{n}$ for the set of $n$-leafed trees. There are no $0$-leafed trees, and there is just one $1$-leafed tree: the trivial tree $\mbox{\bf\sf l}$ with no vertices at all. A non-trivial tree consists of two smaller trees joined at the root, so the sets $\bintr{n}$ can be defined inductively by \[ \bintr{0} = \emptyset, \quad \bintr{1} = \{ \mbox{\bf\sf l} \}, \quad \bintr{n} = \coprod_{k + m = n} \bintr{k} \times \bintr{m} \quad (n \geq 2). \] By a \demph{subtree} of a tree we mean a subtree sharing the same root. For example, the tree $\ternarytree$ has exactly three subtrees: itself, the unique two-leafed tree $\mbox{\bf\sf Y}$, and the one-leafed tree $\mbox{\bf\sf l}$. Given $n \geq i \geq 1$, we can join to the $i$th leaf of any $n$-leafed tree $\tau$ a copy of the two-leafed tree $\mbox{\bf\sf Y}$, thus forming an $(n + 1)$-leafed tree $\omega^n_i(\tau)$. This defines a map $\omega^n_i: \bintr{n} \rTo\linebreak[0] \bintr{n+1}$. Whenever $\tau$ is a subtree of a tree $\rho$, there is a finite sequence $\omega^{n_1}_{i_1}, \ldots, \omega^{n_r}_{i_r}$ of maps such that \[ \rho = \omega^{n_r}_{i_r} \cdots \omega^{n_1}_{i_1} (\tau). \] Moreover, for any two trees $\sigma$ and $\tau$, there is a smallest tree containing both as subtrees. This can be obtained by superimposing the pictures of $\sigma$ and $\tau$. The following alternative definition of $F$ is given in~\cite[\S 2]{CFP} and in~\cite[1.2]{Belk}. 
Elements of $F$ are equivalence classes of pairs $(\tau, \tau')$ of trees with the same number of leaves, where the equivalence relation is generated by identifying $(\tau, \tau')$ with $(\omega^n_i (\tau), \omega^n_i (\tau'))$ whenever $\tau, \tau' \in \bintr{n}$ and $1 \leq i \leq n$. Write $[\tau, \tau']$ for the equivalence class of a pair $(\tau, \tau')$. Under this definition, the element of $F$ shown in Figure~\ref{fig:decomp} is the same as the element $[\tau, \tau']$ shown in~Figure~\ref{fig:twotrees}. \begin{figure} \centering \setlength{\unitlength}{3ex} \begin{picture}(11,4.5)(0,-2.5) \thicklines \put(0,0){\line(1,0){1}} \put(1,0){\line(2,1){4}} \put(1,0){\line(2,-1){4}} \put(3,1){\line(2,-1){2}} \cell{1}{0}{c}{\scriptstyle{\bullet}} \cell{3}{1}{c}{\scriptstyle{\bullet}} \put(11,0){\line(-1,0){1}} \put(10,0){\line(-2,1){4}} \put(10,0){\line(-2,-1){4}} \put(8,-1){\line(-2,1){2}} \cell{10}{0}{c}{\scriptstyle{\bullet}} \cell{8}{-1}{c}{\scriptstyle{\bullet}} \cell{3}{-2.5}{b}{\tau'} \cell{8}{-2.5}{b}{\tau} \end{picture} \caption{A representative $(\tau, \tau')$ of an element of $F$} \label{fig:twotrees} \end{figure} In general, $[\tau, \tau']$ can be read as `expand according to $\tau'$ then contract according to $\tau$'. (The order is reversed to agree with the convention of writing maps on the left.) With this in mind, it is clear what the product (composite) $[\tau, \tau']\, [\sigma, \sigma']$ must be when $\tau' = \sigma$: simply $[\tau, \sigma']$. In general, there is a tree containing both $\tau'$ and $\sigma$ as subtrees, so there are maps $\omega^{n_q}_{i_q}, \omega^{m_q}_{j_q}$ for which \[ \omega^{n_r}_{i_r} \cdots \omega^{n_1}_{i_1} (\tau') = \omega^{m_s}_{j_s} \cdots \omega^{m_1}_{j_1} (\sigma), \] and then---inevitably--- \[ [\tau, \tau']\, [\sigma, \sigma'] = [ \omega^{n_r}_{i_r} \cdots \omega^{n_1}_{i_1} (\tau),\, \omega^{m_s}_{j_s} \cdots \omega^{m_1}_{j_1} (\sigma') ]. 
\] \smallheading{Monoidal categories} A monoid is a set $S$ equipped with a function $S \times S \rTo\linebreak[0] S$ and an element $1 \in S$ obeying associativity and unit laws. Similarly, a \demph{monoidal category} is a category $\cat{M}$ equipped with a functor $\cat{M} \times \cat{M} \rTo\linebreak[0] \cat{M}$ and an object $I \in \cat{M}$ obeying associativity and unit laws. Explicitly, this means that to each pair $(M, N)$ of objects of $\cat{M}$ there is assigned an object $M \otimes N$, and to each pair \[ \left( M \goby{\phi} M', \ N \goby{\psi} N' \right) \] of maps in $\cat{M}$ there is assigned a map $M \otimes N \goby{\phi \otimes \psi} M' \otimes N'$. Functoriality amounts to the equations \[ (\phi' \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, \phi) \otimes (\psi' \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, \psi) = (\phi' \otimes \psi') \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, (\phi \otimes \psi), \quad 1_M \otimes 1_N = 1_{M \otimes N}, \] and the associativity and unit laws apply to maps as well as objects: $(\phi \otimes \psi) \otimes \chi = \phi \otimes (\psi \otimes \chi)$, etc. A \demph{monoidal functor} is a functor $G$ between monoidal categories that preserves the tensor and unit: $G(M \otimes N) = G(M) \otimes G(N)$, etc. For example, a monoidal category in which the only maps are identities is simply a monoid. The monoidal category $\fcat{FinOrd}$ of finite ordinals has as objects the natural numbers; a map $m \rTo\linebreak[0] n$ is an order-preserving function $\{ 0, \ldots, m-1 \} \rTo\linebreak[0] \{ 0, \ldots, n-1\}$; the tensor product is given on objects by addition and on maps by juxtaposition; the unit object is $0$. The monoidal categories and functors considered in this paper are properly called \emph{strict} monoidal. 
The more general notion of monoidal category includes such examples as the category of abelian groups, in which the tensor product is only associative and unital up to (suitably coherent) isomorphism. \smallheading{Freely generated} We defined $\cat{A}$ as the `monoidal category freely generated by an idempotent object $(A, \alpha)$'. Such use of language is standard in category theory, and extends the familiar notion of free structure in algebra. We now explain what it means. Informally, it means that $\cat{A}$ is constructed by starting with an object $A$ and an isomorphism $\alpha: A \otimes A \rTo\linebreak[0] A$, then adjoining whatever other objects and maps must be present in order for $\cat{A}$ to be a monoidal category. The only equations that hold are those that are forced to hold by the axioms for a monoidal category. Thus, $\cat{A}$ has an object $A$, so it also has an object $A^{\otimes n} = A \otimes \cdots \otimes A$ for each $n \geq 0$ (with $A^{\otimes 0} = I$). The maps are built up from $\alpha$ by taking composites, identities, inverses and tensor products: for instance, there is a map $A \rTo\linebreak[0] A$ given as the composite \[ A \goby{\alpha^{-1}} A \otimes A \goby{\alpha^{-1} \otimes 1_A} A \otimes A \otimes A \goby{1_A \otimes \alpha} A \otimes A \goby{\alpha} A. \] (Compare Figures~\ref{fig:decomp} and~\ref{fig:twotrees}.) Precisely, an \demph{idempotent object} in a monoidal category $\cat{M}$ is an object $M \in \cat{M}$ together with an isomorphism $\mu: M \otimes M \rTo\linebreak[0] M$. (For example, an idempotent object in the monoidal category of sets, where $\otimes$ is cartesian product, is a J\'onsson--Tarski algebra.) 
A \demph{monoidal category freely generated by an idempotent object} is a monoidal category $\cat{A}$ together with an idempotent object $(A, \alpha)$ in $\cat{A}$, satisfying the following universal property: \begin{univ} \label{p:univ-prop} for any monoidal category $\cat{M}$ and idempotent object $(M, \mu)$ in $\cat{M}$, there is a unique monoidal functor $G: \cat{A} \rTo\linebreak[0] \cat{M}$ such that $G(A) = M$ and $G(\alpha) = \mu$. \end{univ} The universal property determines $(\cat{A}, A, \alpha)$ uniquely, up to isomorphism. That such an $(\cat{A}, A, \alpha)$ exists at all is true for quite general categorical reasons, although in fact we will construct it explicitly. We call $(A, \alpha)$ the \demph{generic idempotent object}. Specifying a monoidal category in this fashion is closely analogous to what one does in algebra when specifying a group, monoid, etc.\ by a presentation. Suppose, say, that we define a monoid $E$ by the presentation $E = \langle e \mid e^2 = e \rangle$. Informally, this means that $E$ is constructed by starting with an element $e$, then adjoining whatever other elements must be present in order for $E$ to be a monoid, then imposing only those equations that are forced to hold by $e^2 = e$ and the axioms for a monoid. (Of course, for this particular presentation it is very easy to describe $E$ explicitly, but for other presentations it is not.) Precisely, it means that $E$ is a monoid equipped with an idempotent element $e$ and satisfying the following universal property: \begin{univ} for any monoid $X$ and idempotent element $x \in X$, there is a unique monoid homomorphism $g: E \rTo\linebreak[0] X$ such that $g(e) = x$. \end{univ} We might call $E$ the `monoid freely generated by an idempotent element', and $e$ the `generic idempotent element', since it is idempotent and satisfies no further equations. Our definition of $\cat{A}$ can be regarded as a categorification of the definition of $E$. 
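To spell out the explicit description alluded to above: for this presentation, $E$ has just two elements,
\[
E = \{1, e\}, \qquad e \cdot e = e,
\]
since every word $e^n$ with $n \geq 1$ reduces to $e$. Given a monoid $X$ and an idempotent element $x \in X$, the unique homomorphism $g: E \rTo\linebreak[0] X$ with $g(e) = x$ is given by $g(1) = 1$ and $g(e) = x$; it respects multiplication precisely because $x$ is idempotent.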
Monoids have become monoidal categories, elements have become objects, monoid homomorphisms have become monoidal functors, and equations (such as $e^2 = e$) have become isomorphisms (such as $\alpha: A \otimes A \rTo\linebreak[0] A$). For any monoid $X$, there is a natural one-to-one correspondence between idempotent elements of $X$ and homomorphisms $E \rTo\linebreak[0] X$. Similarly, for any monoidal category $\cat{M}$, there is a natural one-to-one correspondence between idempotent objects in $\cat{M}$ and monoidal functors $\cat{A} \rTo\linebreak[0] \cat{M}$. \smallheading{Automorphism group} Any object $X$ of any category $\cat{X}$ has an \demph{automorphism group} $\mathrm{Aut}_{\cat{X}}(X)$. Its elements are the automorphisms of $X$, that is, the isomorphisms $X \rTo\linebreak[0] X$ in $\cat{X}$. The group structure is given by composition. \smallheading{} This completes the explanation of the language used in Theorem~\ref{thm:main}. We are now in a position to sketch a proof of the theorem based on explicit calculation, which we do for the reasons stated at the end of the Introduction. Let $\cat{A}$ be the category whose objects are the natural numbers and whose maps $m \rTo\linebreak[0] n$ are the bijections $f: [0, m] \rTo\linebreak[0] [0, n]$ satisfying conditions~(\ref{item:F-defn-one})--(\ref{item:F-defn-three}) in the definition of $F$. Then $\cat{A}$ has a monoidal structure given on objects by addition and on maps by juxtaposition, and there is an isomorphism $\alpha: 1 \otimes 1 = 2 \rTo\linebreak[0] 1$ given by division by $2$. We have $F = \mathrm{Aut}_{\cat{A}}(1)$ by definition, so our task is to show that $(\cat{A}, 1, \alpha)$ has the universal property stated above. To do this, first consider trees (binary, as usual). Take a monoidal category $\cat{M}$ and an idempotent object $(M, \mu)$ in $\cat{M}$. 
Then any $n$-leafed tree $\tau$ gives rise to an isomorphism $\mu_\tau: M^{\otimes n} \rTo\linebreak[0] M$; for instance, if $\tau = \mbox{\bf\sf Y}$ then $\mu_\tau = \mu$, and if $\tau = \ternarytree$ then $\mu_\tau$ is the composite \[ M \otimes M \otimes M \goby{\mu \otimes 1} M \otimes M \goby{\mu} M. \] More generally, define a \demph{forest} to be a finite sequence $(\tau_1, \ldots, \tau_k)$ of trees ($k \geq 0$), and let us say that this forest has $n$ \demph{leaves} and $k$ \demph{roots}, where $n$ is the sum of the numbers of leaves of $\tau_1, \ldots, \tau_k$. Any forest $T = (\tau_1, \ldots, \tau_k)$ with $n$ leaves and $k$ roots induces an isomorphism \[ \mu_T = \mu_{\tau_1} \otimes\cdots\otimes \mu_{\tau_k}: M^{\otimes n} \rTo\linebreak[0] M^{\otimes k}. \] Now, it can be shown that any map $\phi: m \rTo\linebreak[0] n$ in $\cat{A}$ factorizes as \begin{equation} \label{eq:factorization} \phi = \left( m \goby{\alpha_S^{-1}} p \goby{\alpha_T} n \right) \end{equation} for some $p \in \nat$ and forests $S$ and $T$. (The method is given in~\cite[\S 2]{CFP}.) It can also be shown that any monoidal functor $G: \cat{A} \rTo\linebreak[0] \cat{M}$ satisfying $G(1) = M$ and $G(\alpha) = \mu$ must also satisfy \begin{equation} \label{eq:image-map} G(\phi) = \left( M^{\otimes m} \goby{\mu_S^{-1}} M^{\otimes p} \goby{\mu_T} M^{\otimes n} \right). \end{equation} Although $\phi$ may have many factorizations of the form~(\ref{eq:factorization}), further calculations show that the right-hand side of~(\ref{eq:image-map}) is independent of the factorization chosen. Further calculations still show that the $G$ thus defined is a functor, and monoidal. The result follows. \section{Proof of the Theorem} \label{sec:proof} In this section we give a conceptual proof of Theorem~\ref{thm:main}. To do this, we construct the monoidal category $\cat{A}$ freely generated by an idempotent object $(A, \alpha)$. 
The strategy is to start with a very simple object $\cat{B}$ and apply several left adjoints in succession (Figure~\ref{fig:steps}). \begin{figure} \[ \begin{diagram} & &\fcat{MonGpd} \\ & &\uTo<{L_3} \dashv \dTo~{R_3} \dashv \uTo>{S_3} \\ & &\fcat{MonCat} \\ & &\uTo<{L_2} \dashv \dTo>{R_2} \\ \fcat{Operad} &\rIncl &\fcat{Multicat} \\ \uTo<{L_1} \dashv \dTo>{R_1}& & \\ \fcat{Set}^\nat & & \\ \end{diagram} \hspace*{5em} \begin{diagram} & &\ &\makebox[0em]{\hspace*{-2em}\ensuremath{\cat{E}}} &\ \\ & &\uGoesto &\dGoesto \\ & &\cat{D} &\cat{A} \\ & &\uGoesto & \\ \cat{C} &\rGoesto&\cat{C} & \\ \uGoesto & & & \\ \cat{B} & & & \\ \end{diagram} \] \caption{Steps in the proof} \label{fig:steps} \end{figure} On the one hand, this abstract construction makes the universal property of $\cat{A}$ automatic. On the other, each step of the construction can be described explicitly, so it will be transparent that $\mathrm{Aut}_{\cat{A}}(A) \cong F$. On the left of Figure~\ref{fig:steps}, we have the category $\fcat{Set}^\nat$ of `signatures' and the categories of operads, multicategories, monoidal categories and monoidal groupoids, all non-symmetric. The functors $R_i$ are the evident forgetful functors; they have adjoints $L_i$ and $S_3$ as shown. Definitions, and descriptions of these adjoint functors, are given in the Appendix. On the right of Figure~\ref{fig:steps}, the signature $\cat{B}$ consists of a single binary operation: $\card{\cat{B}_2} = 1$ and $\card{\cat{B}_n} = 0$ for $n \neq 2$. Then $\cat{C} = L_1(\cat{B})$, etc.; thus, the monoidal category $\cat{A}$ is defined by \[ \cat{A} = R_3 L_3 L_2 L_1 (\cat{B}). \] The main insight of the proof is that a pair of trees as in Figure~\ref{fig:twotrees} can be regarded as a span in the category of forests, and multiplication of such pairs in the Thompson group is nothing more than the usual composition of spans (by pullback). The only significant work in the proof is to establish the latter fact. 
The universal property of $\cat{A}$ is immediate: \begin{propn} $\cat{A}$ is the monoidal category freely generated by an idempotent object. \end{propn} \begin{proof} To lighten the notation, write $R_i (X)$ as $X$. Then for any monoidal category $\cat{M}$, \begin{eqnarray} \lefteqn{\fcat{MonCat} (\cat{A}, \cat{M})} \label{eq:beginning} \\ &\cong & \fcat{MonGpd} (L_3 L_2 L_1 (\cat{B}), S_3 (\cat{M})) \\ &\cong & \fcat{MonCat} (L_2 L_1 (\cat{B}), S_3 (\cat{M})) \\ &\cong & \fcat{Multicat} (L_1 (\cat{B}), S_3 (\cat{M})) \\ &\cong & \{ (M, G) \mid M \in \cat{M}, \ G \in \fcat{Operad} (L_1 (\cat{B}), \mathrm{End}_{S_3 (\cat{M})} (M)) \} \label{eq:mo} \\ &\cong & \{ (M, \mu) \mid M \in \cat{M}, \ \mu \in \fcat{Set}^\nat (\cat{B}, \mathrm{End}_{S_3 (\cat{M})} (M)) \} \\ &\cong & \{ \textrm{idempotent objects in } \cat{M} \} \label{eq:Brep} \end{eqnarray} naturally in $\cat{M}$. Most of these isomorphisms are by adjointness; \bref{eq:mo}~is from the final observation in the section on multicategories in the Appendix; \bref{eq:Brep}~is the fact that a map from $\cat{B}$ to another signature $\cat{B'}$ just picks out an element of $\cat{B'}_2$, which in this case is the set of maps $M^{\otimes 2} \rTo\linebreak[0] M$ in the groupoid $S_3(\cat{M})$. Hence $\cat{A}$ represents the functor $J: \fcat{MonCat} \rTo\linebreak[0] \fcat{Set}$ mapping a monoidal category to the set of idempotent objects in it. The generic idempotent object $(A, \alpha) \in J(\cat{A})$ is obtained by tracing the element $1_{\cat{A}}$ through the isomorphisms \bref{eq:beginning}--\bref{eq:Brep}; then $(\cat{A}, A, \alpha)$ has the universal property required. \hfill\ensuremath{\Box} \end{proof} To obtain an explicit description of $(\cat{A}, A, \alpha)$, and in particular of the automorphism group of $A$, we go through each step of the construction using the descriptions of the adjoint functors given in the Appendix.
First step: the free operad $\cat{C} = L_1(\cat{B})$ is the operad of (unlabelled, binary) trees; thus, $\cat{C}_n = \bintr{n}$ and composition in $\cat{C}$ is by gluing roots to leaves. Second step: $\cat{D} = L_2 (\cat{C})$ is the monoidal category in which objects are natural numbers and maps $n \rTo\linebreak[0] k$ are forests with $n$ leaves and $k$ roots (as defined in~\S\ref{sec:term}). Composition is by gluing; tensor of objects is addition; tensor of maps is juxtaposition. \begin{lemma} The forest category $\cat{D}$ has pullbacks. \end{lemma} \begin{proof} Any map $T: n \rTo\linebreak[0] k$ in $\cat{D}$ decomposes uniquely as a tensor product $T = T_1 \otimes \cdots \otimes T_k$ with $T_i: n_i \rTo\linebreak[0] 1$, so it suffices to prove that every diagram of the form \[ \begin{diagram} m & & & &m' \\ &\rdTo<{(\tau)} & &\ldTo>{(\tau')}& \\ & &1 & & \\ \end{diagram} \] has a pullback (where $\tau$ and $\tau'$ are trees with $m$ and $m'$ leaves respectively). Indeed, let $\rho$ be the smallest tree containing both $\tau$ and $\tau'$ as subtrees. Then \[ (\tau) \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, (\sigma_1, \ldots, \sigma_m) = \rho = (\tau') \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, (\sigma'_1, \ldots, \sigma'_{m'}) \] for unique $\sigma_i$ and $\sigma'_{i'}$. Writing $p$ for the number of leaves of $\rho$, the square \[ \begin{diagram} & &p & & \\ &\ldTo<{(\sigma_1, \ldots, \sigma_m)} & &\rdTo>{(\sigma'_1, \ldots, \sigma'_{m'})} & \\ m & & & &m' \\ &\rdTo<{(\tau)} & &\ldTo>{(\tau')}& \\ & &1 & & \\ \end{diagram} \] is a pullback. \hfill\ensuremath{\Box} \end{proof} Third step: $\cat{E} = L_3 (\cat{D})$ is the monoidal groupoid in which objects are natural numbers and maps $k' \rTo\linebreak[0] k$ are equivalence classes of spans \[ \begin{diagram} & &n & & \\ &\ldTo<{(\tau'_1, \ldots, \tau'_{k'})} & &\rdTo>{(\tau_1, \ldots, \tau_k)} & \\ k' & & & &k \\ \end{diagram} \] in $\cat{D}$. 
Equivalence is generated by declaring this span to be equivalent to \[ \begin{diagram} & &p & & \\ & \ldTo<{(\tau'_1, \ldots, \tau'_{k'}) \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, (\rho_1, \ldots, \rho_n)} & & \rdTo>{(\tau_1, \ldots, \tau_k) \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, (\rho_1, \ldots, \rho_n)} & \\ k' & & & &k \\ \end{diagram} \] for any forest $(\rho_1, \ldots, \rho_n)$ with $n$ roots (writing $p$ for its number of leaves), and it makes no difference if we insist that all but one of the $\rho_i$s is trivial and the remaining one is the 2-leafed tree $\mbox{\bf\sf Y}$. Final step: $\cat{A}$ is the underlying monoidal category of $\cat{E}$. Under the isomorphisms \bref{eq:beginning}--\bref{eq:Brep}, the identity $1_\cat{A}$ corresponds to the idempotent object $(1, \alpha)$ in $\cat{A}$, where $\alpha$ is the equivalence class of the span \[ \begin{diagram} & &2 & & \\ &\ldTo<\mathrm{id} & &\rdTo>{(\mbox{\bf\sf Y})} & \\ 2 & & & &1. \\ \end{diagram} \] So to prove Theorem~\ref{thm:main}, we have to show that $\mathrm{Aut}_{\cat{A}}(1) \cong F$. Since $\cat{A}$ is a groupoid, $\mathrm{Aut}_{\cat{A}}(1)$ consists of all maps $1 \rTo\linebreak[0] 1$ in $\cat{A}$. We have just seen that such a map is an equivalence class of pairs $(\tau, \tau')$ of trees with the same number of leaves, where equivalence is generated by $[\tau, \tau'] = [\omega^n_i (\tau), \omega^n_i (\tau')]$ whenever $\tau, \tau' \in \bintr{n}$ and $1 \leq i \leq n$. 
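To illustrate this description, note that any pair with equal components represents the identity. Assuming (as in \S\ref{sec:term}) that $\omega^n_i(\tau)$ denotes the tree obtained by attaching the $2$-leafed tree $\mbox{\bf\sf Y}$ to the $i$-th leaf of $\tau \in \bintr{n}$, repeated application of the generating relation, starting from the class of the pair of $1$-leafed trees (which is the identity span on $1$), gives
\[
\mathrm{id}_1 = [\mbox{\bf\sf Y}, \mbox{\bf\sf Y}] = \cdots = [\tau, \tau]
\]
for every tree $\tau$, since every tree is built up from the $1$-leafed tree by successive applications of the maps $\omega^n_i$.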
To compose maps \[ 1 \goby{[\sigma, \sigma']} 1 \goby{[\tau, \tau']} 1, \] form the diagram \[ \begin{diagram} & & & &p & & & & \\ & & &\ldTo<{(\chi_1, \ldots, \chi_m)} & &\rdTo>{(\zeta_1, \ldots, \zeta_n)} & & & \\ & &m & & & &n & & \\ &\ldTo<{(\sigma')} & &\rdTo>{(\sigma)} & &\ldTo<{(\tau')} & &\rdTo>{(\tau)}&\\ 1 & & & &1 & & & &1 \\ \end{diagram} \] in which the square is a pullback; then \[ [\tau, \tau'] \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, [\sigma, \sigma'] = [\tau \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, (\zeta_1, \ldots, \zeta_n),\, \sigma' \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, (\chi_1, \ldots, \chi_m)]. \] There exist $i_1, \ldots, i_r, n_1, \ldots, n_r$ with the property that for all $\pi \in \bintr{n}$, \[ \pi \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, (\zeta_1, \ldots, \zeta_n) = \omega_{i_r}^{n_r} \cdots \omega_{i_1}^{n_1} (\pi), \] and similarly $j_1, \ldots, j_s, m_1, \ldots, m_s$ for $(\chi_1, \ldots, \chi_m)$. Hence \[ \omega^{n_r}_{i_r} \cdots \omega^{n_1}_{i_1} (\tau') = \omega^{m_s}_{j_s} \cdots \omega^{m_1}_{j_1} (\sigma) \] and \[ [\tau, \tau'] \,\raisebox{0.08ex}{\ensuremath{\scriptstyle\circ}}\, [\sigma, \sigma'] = [\omega^{n_r}_{i_r} \cdots \omega^{n_1}_{i_1} (\tau),\, \omega^{m_s}_{j_s} \cdots \omega^{m_1}_{j_1} (\sigma')]. \] But this description of $\mathrm{Aut}_{\cat{A}}(1)$ is exactly the description of $F$ in~\S\ref{sec:term}. Hence $\mathrm{Aut}_{\cat{A}}(1) \cong F$, proving Theorem~\ref{thm:main}. Finally, we remark that the proof can be recast slightly so that the diagram in Figure~\ref{fig:steps} becomes a chain of adjunctions \[ \fcat{Set}^\nat \pile{\rTo\\ \lTo} \fcat{Operad} \pile{\rTo\\ \lTo} \fcat{Multicat}_* \pile{\rTo\\ \lTo} \fcat{MonCat}_* \pile{\rTo\\ \lTo\\ \rTo} \fcat{MonGpd}_*. \] Here $\fcat{Multicat}_*$ denotes the category of multicategories equipped with a distinguished object, and similarly $\fcat{MonCat}_*$ and $\fcat{MonGpd}_*$. 
The content of the argument is the same.
\section{Introduction} A study of the field--strength distribution between a static quark and antiquark yields a detailed picture of the string formation and helps to understand the mechanism of confinement. In particular, comparing the field distributions with those of the dual Meissner model one can clarify the viability of this popular confinement mechanism. In a non-abelian theory there are two ways of measuring the field--strength distribution: using connected $\rho^c$ or disconnected $\rho^{disc}$ plaquette averages around the Wilson loop [1]. While both reduce to the same quantity in the Abelian case, in the non-abelian case the two measurements yield independent information. Recently [1] $\rho^c$ and $\rho^{disc}$ have been measured for some components of the field $F_{\mu\nu}$ using the cooling method [2]. While the signal for $\rho^{disc}$ was too small, the distribution of $E_{11}$ off the string axis (``the string profile'') and along the string from $\rho^c$ was found with good accuracy. Another important quantity measured in [1] was a double plaquette correlator $\gamma^c(x,x')$, yielding additional information on the correlation of the field strength at two different points $x,x'$ of the string. The purpose of the present paper is: i) to extend the Monte--Carlo (MC) measurements to all orientations of the plaquettes in $\rho^c$ and $\gamma^c(x,x')$, thus yielding the most complete information on the field distribution in the $q\bar{q}$ string; ii) to give an interpretation of the results in terms of simpler quantities -- field strength correlators like $\langle F_{\mu\nu}(x)F_{\rho\sigma}(y) \rangle$ (cumulants). Using the cluster expansion and the non-abelian Stokes theorem [3], one can express $\rho^c$ and $\gamma^c$ in terms of cumulants and keep the lowest ones (physical arguments in favour of the dominance of the lowest cumulants are given in the review paper [4]).
The latter have been recently measured on the lattice [5], yielding a rather small correlation length $T_g=0.2~\mbox{fm}$. Now, using the cumulants, one unambiguously predicts $\rho^c$ and can compare it with MC measurements. This comparison with the data from [1] was done in [6] and a good agreement between the measured and calculated string profiles was found. In particular it was clarified how a small correlation length $T_g=0.2~\mbox{fm}$ yields a larger string radius of $0.5~\mbox{fm}$. After presenting the new measurements in sections 2 and 3, we make in section 4 a more precise and extended calculation of $\rho^c$ in terms of the bilocal cumulant and compare it with the MC data. In particular we predict the vanishing of some $\rho^c(F_{\mu\nu})$ on symmetry grounds and observe it explicitly in the data. We perform the same analysis for $\gamma^c$ and predict the vanishing of most combinations of plaquette orientations. The dominating structure -- $\gamma^c(E_{\Vert}, E_{\Vert})$ -- is expressed in terms of the quartic cumulant, which has never been measured on the lattice; the data for $\gamma^c$ allow us to make some estimates of it. A short summary of the results is given in the conclusions. \section{MC study of the field strength tensor.} Using the MC technique, we study the spatial distribution of the components of the field strength tensor in the presence of a $q\bar{q}$ pair. This generalizes to all the components of the field the results already obtained in [1]. Following~[1], we define: \begin{equation} \rho_{\mu\nu}^c = \frac{\langle tr(WLP_{\mu\nu}(x_{\Vert},x_{\bot})L^+) \rangle}{\langle Tr W \rangle}-1 \end{equation} where $W$ is a Wilson loop, $L$ is a Schwinger line and $P_{\mu\nu}$ is the part of the plaquette proportional to the $\sigma$--matrices, oriented in order to give the desired component of the field. The coordinates $x_{\Vert}, x_{\bot}$ measure, respectively, the distance from the edge of the Wilson loop and from the plane defined by the loop, as shown in Fig.~1.
In the naive continuum limit $a \to 0$, \begin{equation} \rho^c_{\mu\nu}\simeq a^2 \langle F_{\mu\nu} \rangle_{q\bar{q}} \end{equation} We have used a $16^4$ lattice, taking an $8\times 8$ Wilson loop and $\beta = 2.50$, which is inside the scaling window for the fields. Moving the plaquette inside and outside the plane defined by the Wilson loop, we obtain a map of the spatial structure of the field as a function of $x_{\Vert}$ and $x_{\bot}$. Using a controlled cooling technique (see [1,2] and references therein), we eliminate the short-range fluctuations. The long--range non--perturbative effects survive longer under the cooling procedure, showing a plateau of 10--14 cooling steps, while the error becomes smaller. A similar behaviour has been observed for the string tension. The cooling technique allows us to disentangle the signal from the quantum noise with relatively small statistics. The general patterns of the field configurations are briefly summarized in the following figures. Figure~2 represents a detailed map of the spatial behaviour of the longitudinal component of the chromoelectric field. \begin{itemize} \item Varying $x_{\Vert}$ at fixed $x_{\bot}$, we investigate the structure of the fields in the direction of the axis joining the $q\bar{q}$ pair. In Fig.~3, we show $E_{\Vert}$ and $B_{\bot}$ as functions of $x_{\Vert}$ for $x_{\bot} = 0$, i.e. on the $q\bar{q}$ axis. We find that the electric field remains constant and the transverse magnetic field vanishes, as expected on symmetry grounds. \item Varying $x_{\bot}$ at fixed $x_{\Vert}$, we study the transverse shape of the fields. Fig.~4 illustrates the behaviour of the $E_{\Vert}$ component vs. $x_{\bot}$, for different values of $x_{\Vert}$: the field remains constant with respect to $x_{\Vert}$ also outside the plane defined by the Wilson loop, as long as we remain inside the string (i.e. for $x_{\Vert}=3,5$). A detailed study of the transverse shape will be given below.
\end{itemize} We find that the parallel electric field is squeezed into flux tubes, as already found in [1]. The results in [1] were consistent with a Gaussian behaviour of the flux tube profile inside the string. In order to estimate the size of our tubes and to check the consistency of the result with previous measurements, we have again performed a fit of the transverse shape inside the string (for $x_{\Vert}=3$) with the function \begin{equation} E_{\Vert}=\exp(\kappa - \mu^2x^2_{\bot}) \end{equation} finding $\mu=0.30\pm0.01$, with $\chi^2 / \mbox{d.o.f.} = 0.993$. This indicates that the flux tubes have a transverse size of the order of 3 lattice spacings at $\beta =2.50$, which corresponds to a physical value $\mu^{phys}/\Lambda_{latt}=85 \pm 4$, in agreement with~[1]. In what follows, motivated by the measured form of the field strength correlators~[5] and by the analysis in terms of cumulants~[3], we will find that the data are equally consistent with the form eq.~(18), which is also suggested by the mechanism of confinement via dual superconductivity. \section{Field strength correlators} In recent years, a systematic study of non--perturbative effects in QCD in terms of the gluon field strength correlators has been developed (see Ref. [3] and references therein) and the behaviour of these correlators in the vacuum has been investigated by lattice simulations [5]. As pointed out by the authors of [3], studying the field correlators in the presence of a $q\bar{q}$ pair could provide further information on color confinement.
Therefore we measure the operator: \begin{equation} \gamma^c =\frac{\langle Tr\{WSV_{PP'}S^+\}\rangle}{\langle Tr W \rangle}-\langle TrV_{PP'} \rangle \end{equation} where \begin{equation} V_{PP'}=P_{\mu\nu}LP'_{\rho\sigma}L^+-\frac{1}{2}P_{\mu\nu}TrP'_{\rho\sigma} \end{equation} Here $W$ is a Wilson loop, $S$ is a Schwinger line connecting the Wilson loop to the $V_{PP'}$ operator, $P$ and $P'$ are two plaquettes, located at $x$ and $x'$ respectively, and $L$ is a Schwinger line connecting them. In the naive continuum limit, we have \begin{equation} \gamma^c=a^2\langle F' \rangle +a^4[\langle FF' \rangle_{q\bar{q}}- \langle FF' \rangle_{0}] \end{equation} where $F$ and $F'$ are respectively the field components at $x$ and $x'$. Varying the orientations of the two plaquettes, we obtain the different components of the correlators. The measurements have been done on a $12^4$ lattice at $\beta = 2.50$ using a $6\times 6$ Wilson loop. Again, we used controlled cooling to reduce the fluctuations. We have measured $\gamma^c$ with the following two types of orientations of $F$ and $F'$. \noindent (i) In the first case: $\bullet~~P$ is held fixed on the $q\bar{q}$ axis at 1 lattice spacing from the border of the Wilson loop, while its orientation is varied in the 6 possible directions; $\bullet~~P'$ is moved inside and outside the plane of the Wilson loop, while its orientation is kept fixed in the $E_{\Vert}$ direction; $\bullet~~x_{\Vert}$ and $x_{\bot}$ identify the position of $P'$ with respect to $P$. \noindent (ii) In the second type of measurements: $\bullet$ both the position and the orientation of $P$ are kept fixed; the plaquette is in the same position as before and its orientation corresponds to the $E_{\Vert}$ component; $\bullet~~P'$ is moved as before and its orientation is changed.
We finally define the irreducible correlator $\bar{\gamma}^c$ as follows: \begin{equation} \bar{\gamma}^c \equiv\gamma^c(x,x') - \rho^c(x') \approx a^4[\langle FF'\rangle_{q\bar{q}}-\langle FF'\rangle_0] \end{equation} \noindent From (7) it is clear that $\bar{\gamma}^c$ contains only double plaquette correlations. \noindent Most of the data for $\gamma^c$ and $\bar{\gamma}^c$ are compatible with zero net effect, within two standard deviations. In Table~1, we report the data for $\bar{\gamma}^c$ in the case when both plaquettes $P$ and $P'$ are kept fixed in the $E_{\Vert}$ direction. In the next Section we compare the Monte--Carlo measurements with the predictions from the cumulant (cluster) expansion and will see that indeed all orientations except $E_{\Vert}, E_{\Vert}$ should give a zero result due to simple symmetry arguments. \section{Extracting bilocal and quartic field-strength correlators from the Monte--Carlo data} For the contour $C$ shown in Fig. 1 we denote the direction along the $q\bar{q}$ axis by $x_{\Vert} = x_1$, the transverse direction by $x_{\bot} = x_2$, and the Euclidean temporal axis by $x_4$. The whole construction in Fig. 1 is taken at a fixed value of $x_3$. Using the non-abelian Stokes theorem and the cluster expansion [3] for $\rho^c_{\mu\nu}$ in (1) one has (see [6] for details of the derivation) \begin{equation} \rho^c_{\mu\nu}(x_1,x_2,x_4)= a^2\int d\sigma_{14}(x'_1,x'_4) \Lambda_{\mu\nu} \end{equation} where \begin{equation} \Lambda_{\mu\nu} =\frac{1}{N_c}tr \langle E_1(x'_1,0,x'_4)\Phi F_{\mu\nu}(x_1,x_2,x_4) \Phi^+ \rangle +..., \end{equation} $\Phi$ is the parallel transporter (Schwinger line) from the point $(x'_1,0, x'_4)$ to $(x_1,x_2,x_4)$, and the dots denote contributions of higher-order cumulants, containing additional powers of $E_1$ [3]. Throughout this Section we shall keep only the lowest cumulants (containing the lowest power of $E_1$) and compare our prediction with the MC data of the previous sections.
The bilocal correlator $\Lambda_{\mu\nu}$ can be expressed in terms of two independent Lorentz scalar functions $D((x_{\mu}-x'_{\mu})^2)$, $D_1((x_{\mu}-x'_{\mu})^2)$ (see [3] and Appendix 1 of the last reference in [3]): \begin{equation} \Lambda_{14}=D+D_1+(h^2_1+h^2_4)\frac{dD_1}{dh^2} \end{equation} \begin{equation} \Lambda_{24}=(h_1h_2)\frac{dD_1}{dh^2}~~,~~~ \Lambda_{34}=(h_1h_3)\frac{dD_1}{dh^2} \end{equation} \begin{equation} \Lambda_{23}\equiv 0,~~ \Lambda_{13}=h_3h_4\frac{dD_1}{dh^2}~;~~ \Lambda_{12}=h_2h_4\frac{dD_1}{dh^2} \end{equation} Here $h_{\mu}=(x-x')_{\mu}$. \noindent Since the whole construction in Fig.~1 is at $x_3=x'_3=0$ we have $h_3\equiv 0$ and hence \begin{equation} \rho^c_{23}=\rho^c_{34}=\rho^c_{13} \equiv 0 \end{equation} The only nonzero components are $\Lambda_{14}, \Lambda_{24}$ and $\Lambda_{12}$. For the latter the contribution to $\rho^c$ can be written as \begin{equation} \rho^c_{12}(x_1,x_2,x_4)= a^2\int^R_0 dx'_1\int^{\frac{T}{2}}_{-\frac{T}{2}} dx'_4 (+x_2)(x_4-x'_4)\frac{dD_1(h^2)}{dh^2} \end{equation} When $x_4=0$ (and this is where the measurements of $\rho^c_{12}$ have been done), $\rho^c_{12}$ vanishes because of the antisymmetry of the integrand in (14). \noindent Hence only $\rho^c_{14}$ and $\rho^c_{24}$ are nonzero, and only those have been measured to be nonzero.
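Explicitly, the vanishing of $\rho^c_{12}$ at $x_4=0$ can be seen as follows: in (14) one has $h^2=(x_1-x'_1)^2+x'^2_4+x_2^2$, which is even under the reflection $x'_4 \to -x'_4$, while the prefactor $(x_4-x'_4)=-x'_4$ is odd, so that
\[
\int^{\frac{T}{2}}_{-\frac{T}{2}} dx'_4\, (-x'_4)\,\frac{dD_1(h^2)}{dh^2} = 0 .
\]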
To make the comparison with the data more quantitative, let us exploit the recent MC calculation of $D$ and $D_1$ [5], which implies that both $D$ and $D_1$ are of exponential form \begin{equation} D_1(h^2)= D_1(0)\exp(-\mu_1 h); D(h^2) = D(0) \exp (-\mu h) \end{equation} $$D_1(0) \approx \frac{1}{3} D(0);~~ \mu_1\approx \mu $$ Inserting this into (8), using (10) and (11), we have \begin{equation} \rho^c_{14}(x_1,x_2;0)= a^2\int^R_0 dx'_1\int^{\frac{T}{2}}_{-\frac{T}{2}} dx'_4 [D(0)+D_1(0)-\frac{\mu(h^2_4+h^2_1)}{2h}D_1(0)]e^{-\mu h} \end{equation} with $$h_4=-x'_4~,~~h_1=x_1-x_1'~~,~~ h^2=h^2_4+h^2_1+x^2_2;$$ \noindent For $\rho^c_{24}$ one similarly obtains \begin{equation} \rho^c_{24}(x_1,x_2;0)= -a^2\mu x_2 \int^R_0 dx'_1\int^{\frac{T}{2}}_{-\frac{T}{2}} dx'_4 \frac{(x_1-x_1')}{2h}D_1(0)e^{-\mu h} \end{equation} From (16) and (17) one can deduce that \\ (i) $\rho^c_{24}$ should vanish for $x_2=0$\\ (ii) $\rho^c_{24}$ changes sign for $x_1=\frac{R}{2}$, i.e. in the middle of the string length.\\ (iii) $\rho^c_{24}$ is about $1/3$ of $\rho^c_{14}$. \noindent All properties (i)--(iii) are supported by the data. \noindent Finally, we can make a detailed comparison of our prediction for $\rho^c_{14}$ in (16) with the data. One obtains a simple analytic result for $\rho^c_{14}(x_2\equiv x_{\bot})$ in the case of a very long string. The transverse shape measured at the middle is given by [6] \begin{equation} \rho^c_{14}=\frac{2\pi a^2}{\mu^2}[D(0)(1+\mu x_2) - D_1(0)\frac{1}{2} (\mu x_2)^2]e^{-\mu x_2} \end{equation} As shown in [6], this shape is in good agreement with the previous data obtained in [1]. Here we calculate $\rho^c_{14}$ as a function of $x_1,x_2$ from (16), keeping $D_1(0)=\frac{1}{3} D(0)$. We then fit the data for $x_{\Vert}=3$ to evaluate $\mu$ and $a^2 D(0)$. We find: \[ \mu \approx 0.19~\mbox{fm},~~~~ a^2 D(0) \approx 3.92 \times 10^7 \] with a $\chi^2 / \mbox{d.o.f.} = 0.17$.
\noindent The value of $\mu$ is in good agreement with [5], while we find that $a^2~D(0)$ is one order of magnitude smaller than in the previous measurements. We recall that our data here are obtained for $SU(2)$, while in~[5] the gauge group was $SU(3)$. This should account for the order-of-magnitude difference between the two results. \noindent These results allow us to predict the curves for other values of $x_{\Vert}$ and $x_{\bot}$: the agreement with the numerical results is very satisfactory, as can be seen from Fig.~4. We turn now to the double correlator $\bar{\gamma}^c_{\mu\nu}$, Eq. (7). We again use the non-abelian Stokes theorem and the cluster expansion to represent $\bar{\gamma}^c$ as \begin{equation} \bar{\gamma}^{c}_{\mu\nu,\mu'\nu'}(x,x')=\frac{a^4}{4!} \int dy_1 dy_4du_1du_4\{\ll E_1(y)\Phi E_1(u)\Phi F_{\mu\nu}(x)\times \end{equation} $$\Phi F_{\mu'\nu'}(x')\Phi\gg+ \: perm \}$$ $$+a^4\frac{(-i)}{3!}\int dy_1dy_4\{\ll E_1(y)\Phi F_{\mu\nu}(x)\Phi F_{\mu'\nu'}(x') \Phi\gg+ \: perm \} $$ where the sum is over permutations of the order in which $E_1$ and $F_{\mu\nu}$ appear under the sign of the cumulant; the latter is denoted by double angular brackets and implies that vacuum insertions are subtracted from the averages of the field strengths. \noindent In our MC calculations, partly reported in Table 2, the orientation of the plaquette $P$ or $P'$ was kept along the plane 14, and accordingly one should always consider $E_1(x)$ or $E_1(x')$ inside the cumulants in (19). \noindent Now the symmetry requirements impose severe conditions on the nonzero values of $\bar{\gamma}_{\mu\nu,\mu'\nu'}^{c}$. Since both $P$ and $P'$ are chosen in the middle of the $x_4$ interval for the Wilson loop $[-\frac{T}{2},\frac{T}{2}]$, one can use the symmetry with respect to the change $x_4\to -x_4$. In this way one can show that all odd--power averages of the type $\ll E_1(u)\Phi E_1(y)\Phi ...E_1(P')\Phi\gg$ should vanish, since they are odd with respect to $x_4\to -x_4$.
\noindent This property also holds if one additionally inserts several magnetic field operators $B_i, B_k,\ldots$ into this odd--power cumulant. \noindent Similarly, a correlator containing $B_i(P)$ must involve the antisymmetric tensor $e_{ikl}$, e.g. \begin{equation} \ll E_1(u)\Phi B_i(x)\Phi E_1(x')\Phi\gg \sim e_{ikl}h_kh'_l \end{equation} and therefore should vanish whenever $h_k$ or $h'_l$ are zero for both intervals, with $h_{\mu}=(u-x)_{\mu}, ~~h'_{\mu}= (u-x')_{\mu}$. In this way one proves that (20) vanishes identically for $i=1$ or $2$, since $h_3=h'_3\equiv 0$. For $i=3$ the combination (20) should vanish when $h_2=h'_2=0$. Using those criteria we keep in our Table 1 only the results for $\gamma_{14,14}$, where the signal is largest. The contribution to $\gamma_{14,14}$ comes only from the quartic cumulants in (19), since, as we discussed above, the triple correlator $\ll E_1\Phi E_1\Phi E_1\Phi\gg$ vanishes identically. One can conclude from Table 1 that the quartic cumulant sharply decreases for large $x_{\bot}$, and the transverse shape of the string, using $\bar{\gamma}^c$, is similar to that deduced from $\rho^c$, the thickness of the string being of the order of 3 lattice spacings. All the correlations $\gamma_{\mu\nu,\mu'\nu'}$ which should vanish on symmetry grounds have been found to be zero within two standard deviations. In these cases, higher statistics would be necessary for a clear answer. \section{Conclusions} One should stress three different aspects of the results reported above. First of all, we have presented the most detailed measurements of the connected correlators made so far. There is an agreement between the field distributions of this paper and those reported in [1], where only correlators of $E_{11}$ were measured to be nonzero. Here, with better statistics, all components of the magnetic field $\vec{B}$ and the electric field $\vec{E}$ are given for all possible configurations of one probing plaquette (for $\rho^c$) and two probing plaquettes (for $\bar{\gamma}^c$).
Symmetry requirements impose severe restrictions on $\rho^c$ and $\bar{\gamma}^c$ (independently of the cluster expansion) and predict zero results in most cases, except for $E_{11}(\rho^c_{14}), E_2(\rho^c_{24})$ and for $\bar{\gamma}^c(E_{11}, E_{11})$. The results are compatible with the predictions, and hence the statistical errors seem to be reliable. Secondly, our results serve to check the validity and usefulness of the cluster expansion for signals like $\rho^c$, $\bar{\gamma}^c$. This expansion allows one to express $\rho^c$ and $\bar{\gamma}^c$ in terms of a simpler (and more fundamental) bilocal correlator $G_2\equiv \langle F\Phi F\Phi \rangle$, which has been measured earlier [5] and thus yields a clear prediction for $\rho^c$ and $\bar{\gamma}^c$. The comparison with the measured results, made earlier in [6] and here in more detail in Figs.~3 and 4, shows good agreement and supports the fundamental role of the bilocal correlator, which is known to define the nonperturbative dynamics of confinement [3]. In particular the asymptotics of the string profile at large $x_{\bot}$ is shown to be exponential, see eq. (18), just as the asymptotics of $G_2$, measured in [5]. This is in contrast to the behaviour inside the string, where a Gaussian--like flattening was observed before [1] and also in this paper. This effect is connected to the flattening of $G_2(x)$ at small $x$, necessary for its regularity at $x=0$ (note that the latter property of $G_2$ was not taken into account in (15)), and also to the smearing effect due to the integration in (16), which yields the polynomial factors in (18). Of special importance is the first estimate of the quartic cumulant $G_4\equiv \langle (F\Phi)^4 \rangle$ through $\bar{\gamma}^c$ in (19). To obtain a more quantitative measure of $G_4$, we represent the coordinate dependence of $G_4$ as an exponential, similarly to $G_2$ in (15), with the same $\mu$.
In this case the dimensionless ratio is $$\frac{G_4(0)}{(G_2(0))^2}=\frac{\bar{\gamma}^c(0)}{(\rho^c(0))^2}\approx 2-3$$ \noindent This kind of estimate would result from an instanton--type vacuum with instantons of small size $\rho \leq 0.3~\mbox{fm}$ and a typical density of one instanton per $\mbox{fm}^4$~[7]. Finally, let us compare the transverse shape of the string (the string profile) measured in Table 1 and written analytically in (18) with that of the dual Meissner effect. As is known from the theory of type-II superconductors [8, Eq.(48.11)], the asymptotics of the magnetic field distribution off the vortex line is exponential, $B(r)\sim \exp(-r/\delta)$, where $\delta$ is the London penetration length. One can see that the dual Meissner effect is capable of reproducing such fine details of the confinement picture as the field distribution around the string. This conclusion agrees with a recent study made in a completely different approach in [9]. \vspace{2cm} \underline{\bf Acknowledgements} This work was started when one of the authors (Yu.S.) was visiting the Department of Physics of Pisa University. It is a pleasure for him to thank the Department for its kind hospitality, and to acknowledge financial support of the Russian Fund for Fundamental Research. LDD wants to thank the Italian ALEPH group at CERN for the warm hospitality during the final part of this work. \clearpage
\section{Introduction} In order to appreciate the important role of the idea of simplicity, it is worth reviewing one of the most challenging open questions concerning our understanding of science. Most scientists believe that the main goal of their work, namely that of finding {\em better theories} than those representing the state of the art, is well defined and that the criteria for success are reliable and do not depend on the particular culture dominating the scientific community to which they belong. Although scientists are not immune to disputes, even bitter ones, the latter occur on rather minor issues, compared to the common ground that unites the scientific community. In particular, it is certainly not true that {\em for any} two competing theories, all scientists agree on which one is better, but {\em there do exist many significant} pairs of theories where all scientists agree that one is unambiguously better than the other. Moreover, many issues that divided scientists in the past are now fully settled. This high level of convergence begs for an explanation. A challenge for philosophy of science is to understand whether the standards that scientists perceive as reliable are actually well-grounded on unambiguous cognitive values---and, if so, identify such values---or, alternatively, identify the cultural bias in the scientists' assessments, and show how different---but in principle equally admissible---cultural prejudices would lead to different assessments, {\em even in questions where the scientists unanimously agree}. In order to justify the first conclusion, one should identify {\em general} and {\em durable} criteria for comparing two scientific theories, which are based on {\em unambiguous cognitive values}. Moreover, the criteria should be {\em usable in practice} to select among real scientific theories.
Incommensurability \citep{Kuhn1, Bird-SEP} is sometimes believed to be a stumbling block undermining any {\em general} criterion for the comparison of scientific theories. The alternative is to acknowledge the necessity of irreducibly different criteria of theory appraisal for different scientific domains. This is a favored view among many philosophers, which is also not strongly opposed by scientists, who have limited authority to judge beyond their own disciplines (and might even be seduced by the shortsighted illusion that granting the full responsibility of the judgment to {\em experts} is good for them). But, it should be clear that the lack of a {\em general} criterion is ultimately equivalent to no reliable criterion at all, with the consequence that {\em anything goes} \citep{Feyerabend}. In fact, it is not uncommon that a dispute over the scientific value of a method, or of a theory, results in the foundation of a new discipline, with its own alleged scientific standards and experts. If we deny any general standard in science, we have to accept such practices as perfectly justified ways of doing science. A general criterion for the comparison of different scientific theories---which also has an obvious cognitive value---is empirical adequacy\footnote{In this paper, the precise definition of {\em empirical adequacy} does not play any important role. Only the concept of {\em empirical equivalence} matters, and it is defined later.}, but it cannot be the only one. In fact, empirical adequacy can be easily improved by introducing ad-hoc assumptions and building more and more complex theories that adapt themselves to the data, without producing any cognitive advantage. It has been argued \citep{Sokal1C} that there is often just one theory---at best---that is compatible with the data and is not {\em crazy} (such as theories that might be motivated by solipsism, radical skepticism and other implausible speculations). 
This suggests that empirical adequacy should be sufficient for theory appraisal, provided that one excludes crazy theories. But unfortunately, there is no sharp distinction between crazy and non-crazy theories. How many ad-hoc assumptions are we willing to accept before declaring a theory crazy? For example, a full class of gravitational theories within the parametrized post-Newtonian (ppN) formalism \citep{lrr-2006-3} is in agreement with the experimental data as precisely as general relativity (GR). But GR is still unanimously regarded as unambiguously better than most of those theories\footnote{This does {\em not} refer to those ppN theories that are in {\em better} agreement with some experimental data than GR, like those used to model Dark Matter. These do represent interesting alternatives, and are the reason why the ppN formalism is studied.}. These are not crazy theories at all, but we should nevertheless be able to tell precisely why GR is a better theory than the other empirically equivalent ppN ones, otherwise we might have no strong argument against also publishing, say, post-Ptolemaic terms in scientific journals... It is therefore necessary to define some other epistemologically relevant measure, besides agreement with the data. But, which one? The ability of a theory to {\em predict} nontrivial, yet unobserved, phenomena is rightly considered strong evidence of success (see \citealp{HDouglas}, which contains a recent review). Predictions are certainly invaluable tools of theory selection, in everyday practice of science. But, defining precisely what {\em predictions} are turns out to be subtler than one might expect. For instance, it is not too hard to hit a prediction by producing many possible extensions of an already successful theory. Are such shots in the dark also ``predictions''? 
Predictions are valuable only if their alternatives cannot be equally well justified, which, essentially, leads again to the necessity of characterizing ad-hoc assumptions, in the first place. Scientific theories are often evaluated for the opportunities for {\em technological applications} that they promise to open. But, either these advantages can be reformulated simply in terms of better empirical adequacy, or, if not, it is interesting to know {\em why} some theories seem to offer more opportunities than others {\em in spite of being empirically equivalent}. Hence, applications do not answer our question (they are rather one of the motivations for our question). One of the most popular tools for theory selection is {\em falsifiability} \citep{Popper}. But, because of the Quine-Duhem thesis \citep{Quine2Dogs}, almost no theory can be falsified, as long as any ad-hoc assumption may be freely added to it. Therefore, discriminating between the introduction of ad-hoc assumptions and truly new theories is also necessary to ensure the effectiveness of the criterion of falsifiability. The idea of {\em reduction} of a theory to a more fundamental one \citep{Nagel}---even if only partially \citep{KemenyOppenheim} or in some limit \citep{Nowak}---together with the related idea of {\em unification}, singles out essential aspects of true scientific progress. However, from a logical point of view, nothing prevents the reducing (or unifying) theory from being an artificial superposition of old theories, made of many and complex assumptions. Reductions and unifications represent true progress only if, at the end of the process, some old assumptions can be dropped. All this strongly suggests that defining some measure of the amount and/or complexity of the {\em assumptions} represents not only a cognitive value in itself, but also a prerequisite for a precise characterization of many other classic goals of science as well. The idea is not new. 
Many philosophers and scientists (e.g., \citealp{Mach,Poincare}, to mention only two of the most influential and modern authors) have stressed the importance of {\em simplicity, economy of thought} and related concepts\footnote{The previous discussion makes clear that what matters, in order to assess the cognitive value of a theory, is always the complexity of its {\em assumptions}. By contrast, the complexity of its {\em consequences} and {\em results} may very well be high, which is desirable in a theory that aims at describing the world and its manifest complexity.}. But, a precise and general definition is problematic (see e.g., \citealp{Sober_PoS}). The main obstacle lies in the fact that any conceivable characterization of simplicity inevitably depends either on the language in which the theory is formulated, or on some other choice which is equally hard to justify. A few prominent examples can better clarify this point. A theory is usually defined as more {\em parsimonious} \citep{Baker-SEP}\footnote{The review of \citet{Baker-SEP} distinguishes {\em syntactic} from {\em ontological} definitions of simplicity. However, any general definition of simplicity, once it is made precise, becomes syntactic, in some sense. This is the case also for parsimony. In this paper, simplicity is always to be understood as syntactic simplicity.} if it postulates fewer entities. But there is no natural and general way to count the number of entities, and any prescription in this sense inevitably introduces an arbitrary subdivision of the world into elementary kinds, without convincing justification. Alternatively, parsimony can be made precise by identifying the ontological commitment of a theory with the domain of its logical quantifiers \citep{Baker-SEP}. But this property is not invariant under reformulation of the theory \citep{Quine_OI}. 
Another famous definition of simplicity counts the {\em number of free parameters} that appear in the formulation of the theory \citep{Popper}. This is well defined within a fixed class of theories with a fixed common parameterization, but it becomes arbitrary beyond that. A further well-known example is the proposal of \citet{GoodmanSimple}, which stimulated much interest and further developments, especially in the 50s and the 60s. In this case, the complexity of the theory depends on the choice of the set of {\em primitive predicates}, which is effectively analogous to the choice of the language \citep{Schwartz_onGoodman}. Finally, the concept of simplicity derived from {\em Kolmogorov complexity} (KC) \citep{Solomonoff,Kolmogorov,Chaitin} has been used by many authors, in recent years, to determine the so-called universal prior probabilities in a Bayesian context (see \citealp{LiVitanyi1997,GruenwaldVitanyi2008} for reviews). It is well known that KC is defined only up to a constant that depends on the language. KC is well suited to study asymptotic properties of theories describing an increasing amount of empirical data, while keeping the language fixed. But, KC cannot be used to compare the simplicity of different theories (each expressed in its own preferred language) with fixed empirical data. In fact, for any scientific theory, it is always possible to find a suitable language in which the theory assumes a trivially simple form \citep{Kelly-razor}. It should be stressed that the language dependence that characterizes any precise definition of simplicity is not a problem in itself: an awkward language should obviously produce a complex formulation of the theory. But, if {\em any} theory can be made trivially simple by a suitable choice of the language, then the concept of simplicity loses any interest. The idea of simplicity is only meaningful if the {\em simplest} formulation of realistic theories is {\em not trivial}. 
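This language dependence can be made vivid with a toy sketch. The strings, names, and greedy dictionary encoding below are invented for illustration and are not an implementation of Kolmogorov complexity proper: a language whose dictionary happens to contain the whole theory as a single primitive symbol makes that theory trivially short to state.

```python
# Toy illustration of language-dependent description length.  All names and
# the encoding scheme are invented for this example; real Kolmogorov
# complexity is defined via Turing machines and is uncomputable.

THEORY = "P = c*T for every gas sample held at fixed volume"

def length_in_language(theory: str, dictionary: dict) -> int:
    """Encode `theory` by replacing each dictionary phrase with its
    single-symbol abbreviation, then return the encoded length."""
    encoded = theory
    for phrase, symbol in dictionary.items():
        encoded = encoded.replace(phrase, symbol)
    return len(encoded)

plain_language = {}                # no abbreviations: pay the full length
tailored_language = {THEORY: "@"}  # the whole theory is one primitive symbol

print(length_in_language(THEORY, plain_language))     # len(THEORY)
print(length_in_language(THEORY, tailored_language))  # 1
```

The tailored language shifts the entire descriptive burden into the dictionary itself, which is exactly why a complexity measure that ignores the cost of the language (and of connecting its symbols to measurable concepts) trivializes.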
Unfortunately, a common, general argument (hereafter called {\em trivialization} argument) shows that all previous examples suffer from this problem, unless the admissible languages are somehow limited. But, how should we justify such limitations? It is sometimes argued (see, e.g., \citealp{Psillos}, chap. 11) that the special language that can reduce a theory to a trivial form is artificial and not based on {\em natural kinds}. This shifts the problem to that of characterizing what natural kinds are, which has no convincing solution either \citep{NatKind}. But there is also a deeper reason to be skeptical about this approach: one of the main tasks of science is precisely to discover new kinds (and new languages), which may look weird now, but eventually enable a deeper understanding of the laws of nature. The revision of the concept of {\em time} introduced by Einstein and the formulation of particle physics in terms of {\em quarks} are obvious examples. In this paper it is stressed that {\em measurability}, rather than {\em naturalness}, is the key. In fact, scientific theories typically contain concepts that are {\em in principle not measurable}. Such unmeasurable concepts should obviously not be used to ground the empirical content of a scientific theory. Unmeasurable concepts can certainly be used to formulate the principles of a theory, but then, in order to compute the complexity of the theory, the cost of defining the measurable concepts from those used in the principles should also be taken into account. This idea can be applied to any of the characterizations of simplicity mentioned above. It should be stressed that this paper does {\em not} propose a new notion of complexity, but rather shows how the proper consideration of the empirical content of a scientific theory prevents a trivialization of essentially any notion of simplicity. 
The obstacles preventing trivialization are illustrated in detail with reference to the definition of simplicity given in Section \ref{sec:concise} ({\em conciseness}). But the same ideas can be applied to essentially any acceptable characterization of the simplicity of the assumptions, as discussed in Section \ref{sec:alt}. The requirement that the formulation of a theory should provide a connection to its measurable concepts may seem too weak and easy to fulfill. In fact, as shown in Section \ref{sec:trivial}, this requirement does not rule out such theories as {\em ``all emeralds are grue''} \citep{grue}, and it also does not offer a solution to the curve fitting problem (see e.g., \citealp{Sober_PoS}). But, such {\em toy-models} of scientific theories are only significant if they capture the relevant features of {\em realistic} theories. The arguments in Sections \ref{sec:simple-stab} and \ref{sec:gen-less} show that those models are indeed inadequate. It is only when the theory becomes sufficiently rich in consequences that qualitatively new features appear: the connection with measurable concepts becomes difficult to achieve for those languages that are designed to make most {\em realistic} scientific theories trivially concise. In particular, it can be proved that the simple (but not too simple) theory analyzed in Section \ref{sec:simple-stab} contains unmeasurable concepts. Moreover, such concepts appear naturally when one tries to reformulate the theory in a very concise form. This provides evidence that the general trivialization argument reviewed in Section \ref{sec:trivial} is not conclusive, and it also suggests that the obstacles to trivialization are unlikely to be evaded. Lacking evidence to the contrary, the fact that some theories can be formulated more concisely than others cannot be regarded as purely conventional. Achieving a concise formulation of a realistic scientific theory is far from easy and highly valuable. 
The discussion above makes clear that the notions of simplicity which are significant for science cannot be properties of the logical or syntactic structure of the theory alone. Instead, they must depend also on the connection between the theory and experience. For this reason, before examining any concept of simplicity, it is necessary to define precisely what the {\em empirical content} of a theory is, and what its {\em empirical (i.e. measurable) concepts} are. The traditional approach to these issues is represented by the syntactic {\em received} view of scientific theories, originally formulated by the logical empiricists \citep{CarnapAufbau,Carnap:1958a}. The main problem with that view is its reliance on a theory-independent observational language, in order to verify the empirical adequacy of a theory and compare different theories with one another. But no such language exists, as it has been convincingly shown by a vast literature (e.g., \citealp{Kuhn1,Quine2Dogs,PutnamWTAN,SuppeWW,FraassenImage}). Perception itself is theory-laden \citep{Quine_praise} and a self-sufficient phenomenal language is an illusion. The causal theory of reference for physical magnitude terms \citep{PutnamER} is often regarded as a way to achieve stability of reference---and hence enable the comparison of theories with experience and with one another---in spite of the theory-ladenness of observations. In this paper, the causal theory of reference is not regarded as a tool to {\em ensure} the stability of the reference, but rather as a framework to examine {\em under which assumptions} the reference is sufficiently reliable for the present purposes. 
These observations lead to the identification of those syntactic elements that are necessary to describe the interplay between the empirical content of a theory and its simplicity, without running into the pitfalls of the received view and while being consistent with the now widely accepted semantic view \citep{FraassenRep} of theories. The main message of this paper is that {\em a clear identification of the empirical content of a theory lies at the heart of the problem of simplicity}. The paper is organized as follows. Section \ref{sec:def-th} introduces those elements of scientific theories which are needed to provide a relevant characterization of simplicity. These are further analyzed in Appendix \ref{app:comm}, in order to show their consistency. Section \ref{sec:simple} introduces and examines the notion of conciseness. In particular, Sections \ref{sec:simple-stab} and \ref{sec:gen-less} show that most realistic theories cannot be made arbitrarily concise by any known procedure. Section \ref{sec:alt} extends the previous result to other definitions of simplicity. Finally, Section \ref{sec:DDS-TS} examines the possibility that different definitions of simplicity may converge to produce a consistent characterization of the goals of science. \section{Scientific Theories And Empirical Concepts} \label{sec:def-th} As stressed in the Introduction, in order to provide a characterization of simplicity which is significant for science, we need to identify precisely a few elements that are part of any scientific theory. In particular, we need to specify the role of the {\em principles} and that of the {\em empirical concepts} of a theory. Similar concepts occupied center stage in the traditional syntactic view of scientific theories \citep{CarnapAufbau,Carnap:1958a,Feigl}, but the latter included unacceptable assumptions that have been the object of detailed criticisms in the past 50 years, which are briefly reviewed later. 
On the other hand, modern semantic views \citep{SuppesWST,FraassenImage} concentrate on other aspects of scientific theories (e.g., models), which are not directly usable for our purposes. However, \citet{PutnamER} has shown that the empirical concepts ({\em physical magnitude terms}) of a scientific theory can be characterized without running into the inconsistencies of the traditional view. In this section, we introduce those elements in a way that mimics the received view, where the latter is unproblematic, but also introduces the crucial corrections dictated by the causal theory of reference for physical magnitude terms \citep{PutnamER}. Many comments are postponed to Appendix \ref{app:comm}. In particular, it is shown in Section \ref{app:sem} that this approach is not inconsistent with a semantic view. For our purposes, a {\bf scientific theory} may be viewed as the union of the following elements: a set of abstract {\em principles}, a set of {\em results}, a set of {\em empirical concepts} and the {\em language} that is used to express all the previous elements. The {\bf principles} are abstract, in the sense that they make use of concepts which are only defined implicitly through the principles themselves. They merely describe a network of symbols \citep{Feigl}, and can be seen as a set of mathematical axioms\footnote{In this paper, the words {\em principles, postulates, laws, axioms, assumptions and hypotheses} are regarded as equivalent. No restriction to first-order logic is assumed.}. Each theory is regarded as a multidisciplinary collection of principles that include {\em all} assumptions (from the logical rules of deduction to the modeling of the experimental devices and of the process of human perception) which are needed to derive the results of the theory and compare them with the experiments, including a complete estimate of the uncertainties. 
All such principles have the same epistemological status: even logic rules are to be considered working assumptions and there may be theories that adopt different ones. The {\bf results} comprise all theorems, formulae, rules, solutions of equations, models etc. that have been derived from the principles of the theory. The set of results is introduced as a distinct element of the theory, because its derivation from the principles is not automatic, but requires original intuitions. Moreover, when a new theorem is proved, the theory may acquire new empirical consequences and become richer. The principles and the results are necessarily formulated in some {\bf language}\footnote{Because some languages may complicate the comparison with the experiments, as shown in Section \ref{sec:simple-stab}, it is convenient, in general, to regard different formulations as different theories. Nevertheless, we may, for brevity, still refer to two different formulations of the same theory, if one is simply the translation of the other into a different language.}. Its terms may be conventionally divided \citep{Feigl} into {\bf derived concepts}, if they have an {\em explicit} definition in terms of other concepts of the theory, or {\bf primitive concepts}, if they are only implicitly defined through the principles. The {\bf empirical (or measurable) concepts} (ECs) have a double characterization: they are concepts of the theory (either primitive or derived), and they are also endowed with a set of {\em prototypes} \citep{Rosch}. The {\bf prototypes} of a concept are the subjective examples that a person bears in mind as typical instances of that concept. When we need to decide whether a particular phenomenon is an occurrence of a concept or not, we can compare what we observe to our personal set of prototypes and decide by analogy. In other words, a prototype for an EC is a typical member of the {\em extension} of that EC. 
Obviously, this does not yet explain how such prototypes could provide a solid basis for science. This is where the causal theory of reference \citep{PutnamER} plays a role, but the discussion is postponed to Appendix \ref{app:comm}. The ECs are further distinguished into {\bf basic empirical concepts} (BECs), which are empirically characterized (interpreted) {\em only} through a set of prototypes, and {\bf operationally defined empirical concepts} (ODECs), for which the theory allows the deduction of a precise operational definition in terms of the BECs\footnote{We are not interested in defining the concept of {\em directly} measurable: if---under the assumptions of the theory---measuring $A$ implies a definite value of $B$, both $A$ and $B$ are ECs. It is up to the theory to decide which one, if any, is also a BEC.}. All concepts for which we have neither prototypes nor rules to build them are {\bf not empirical} (NECs). The fact that we do not have prototypes or rules associated with a certain concept does not mean, in general, that it is impossible to build one. In fact, some NECs may turn out to be ECs after the discovery of some new experimental technique. There are, however, also NECs that could not possibly become ECs. This crucial observation is discussed in Section \ref{sec:crucial}. Note that there is no relation, in general, between primitive concepts and BECs. The former are related to the logical structure of the theory, while the latter to the availability of prototypes. In other words, there is no obstacle to the existence of primitive-NECs or derived-BECs, as shown in the example in Section \ref{sec:example}. The division into ECs and NECs evokes the traditional distinction between observational and theoretical terms \citep{Carnap:1958a}. However---contrary to Carnap's observational terms---the ECs are theory {\em dependent}. 
In the received view, the observational terms were supposed to represent theory-independent sense-data and provided the basis for radical reductionism and verification theory and also the basis for the comparison of different theories. This reconstruction cannot be defended anymore after \citet{Quine2Dogs}\footnote{Note that \citet{Quine_praise} himself defends the usefulness of observation sentences, once their theory-ladenness is made clear.}: no universal concept can be assumed to be translatable into a purely sense-data language and hence must be assumed to have a meaning only within some theory. For this reason, the ECs are here introduced as an additional label for some theoretical concepts\footnote{Note that the prototypes themselves, like any experiment, do not depend on any theory: they are historical events. But this does not make it possible to produce theory-independent BECs, because both the selection and the description of those prototypes that should be relevant to characterize a BEC can only be theory-dependent.}. It is of course not obvious how the BECs can enable the comparison of the empirical statements of different theories. This is discussed in Appendix \ref{app:compare}. A different objection \citep{PutnamWTAN,SuppeWW} against the observational-theoretical division deserves special attention, because it is independent of the theory-ladenness of the observational terms. \citet{PutnamWTAN} has observed that there are no terms in the English dictionary that may be regarded as univocally either observational or theoretical. For example, the property of being {\em red} can be empirically verified in ordinary objects of macroscopic size, but its observability is questionable, or certainly impossible, for sufficiently small objects. \citet{SuppeWW} has further recognized that the observational-theoretical division could be more complex than a simple bipartition of dictionary {\em terms}, and could involve the context in which the terms are used. 
But he has also argued that such a division, if it exists, would be extremely complex, in a way that is hopeless to characterize. These observations are correct: the ECs are not simple dictionary terms. They include the full specification of the experimental conditions that the theory considers relevant (and this reinforces their theoretical dependence). Moreover, understanding which setup may allow which measurement is the hard and ingenious work of experimental scientists. Drawing the complete distinction between the ECs and the NECs would require the classification of all realizable experimental arrangements where any quantity could be measured. This is clearly not feasible. Moreover, the boundary between ECs and NECs is populated by concepts associated with quantities that can be measured only with such poor precision that it is questionable whether they are ECs at all. However, from a philosophical point of view, a precise and comprehensive compilation of all the ECs is unnecessary: it is sufficient to recognize that for each scientific theory at least some ECs exist and they can all be constructed on the basis of both the theory and a small set of BECs. Only the full list of BECs must be made explicit, as discussed in Section \ref{sec:simple}. The BECs, too, may not be just dictionary terms: they are rather selected because of their assumed unambiguity. For example, most modern scientific theories tend to reduce all the BECs to the reading of the digital displays of some experimental devices, for which suitable models are assumed. The classes introduced in this section are summarized in the table below. 
\begin{center} \begin{tabular}{|c|c:c|} \hline \multicolumn{3}{|c|}{Concepts of the theory}\\ \hline \multicolumn{2}{|c:}{ECs}& \multirow{2}{*}{NECs} \\ \cline{1-2} BECs & ODECs & \\ \hline \end{tabular} \end{center} \subsection{An Example} \label{sec:example} Consider, for example, a theory that, besides standard mathematical and logical axioms, also assumes Gay-Lussac's law of gases at fixed volume: $P=c T$, where $P$ represents the pressure, $T$ the temperature and $c$ is a constant. Here, $P$, $T$ and $c$ are primitive concepts. Let us also assume a suitable model for the thermometer and the barometer, which can be used, however, only in limited ranges. As a result, $P$ and $T$ are ECs within those ranges and NECs outside them. These allow the definition of other ECs such as $c=P/T$, which is hence an ODEC. A typical prototype for the EC of $T$ at a reference temperature $T_{\rm ref}\pm\Delta T$ consists of a sample of real gas equipped with a thermometer that displays the value $T_{\rm ref}$ with a precision of at least $\Delta T$. The ECs corresponding to measurements of different temperatures can be characterized by similar prototypes, but they can also be {\em operationally} defined using the theory (in particular a model for the thermometer) and a single BEC at the reference temperature $T=T_{\rm ref}\pm\Delta T$. The choice of the temperature $T_{\rm ref}$ which is selected as a BEC is arbitrary. But it is important that the {\em necessary} prototypes can be reduced to those at a single temperature $T=T_{\rm ref}\pm\Delta T$, while all other (measurable) $T$ correspond to ODECs. \subsection{A Crucial Property Of The ECs} \label{sec:crucial} With no loss of generality, it can always be assumed that the ECs represent properties whose value is either yes or no. In fact, any measurement of a real-valued quantity is equivalent to assessing whether or not its value lies within some intervals $[x \pm \Delta x]$, for some $x$ and $\Delta x$. 
(Given the limited precision of all measurements, this is also closer to the experimental praxis.) In this case {\em a valid prototype should be associated with a single connected interval}. This requirement is necessary to comply with the intuitive idea of prototype: a single prototype must correspond to a single outcome of a measurement---as inaccurate as it might be---and not to many precise outcomes at the same time. If this is not the case for one prototype (e.g. because the outcome was poorly recorded), a clearer prototype should be provided. If this is also not possible, one can only conclude that the corresponding concept is not empirical. In the example of the previous section, a prototype was represented by an experimental setup where the temperature of a given sample of gas was measured. Typically, the thermometer would let us read a number somewhere between 30.1 \textcelsius\; and 30.2 \textcelsius. We can accept some uncertainty, which is, in this case, $\sim$0.1 \textcelsius. Now, imagine that we find a report of the previous day stating that the temperature was measured once and the result was ``either 29.31$\pm$0.01 \textcelsius\; or 32.05$\pm$0.01 \textcelsius''. We would conclude that there was a mistake in taking or recording that measurement and we would repeat it. Experimental results cannot be in macroscopic quantum mechanical superposition states! This remark plays a central role in this work. Section \ref{sec:simple-stab} shows that the requirement stated here---which is indispensable\footnote{Note that this is certainly not a {\em sufficient} condition for a concept to be an EC.}, in order for the ECs to have any chance of actually being empirical---cannot be fulfilled by those very concepts that would naturally make a theory trivially concise. 
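The connected-interval requirement can be stated as a small check. The helper below is a hypothetical illustration, not part of the paper's formalism: a recorded outcome qualifies as a prototype only if it pins the measured value to a single connected interval.

```python
# Hypothetical validity check for prototypes: a single outcome must be one
# connected interval [x - dx, x + dx], never a union of disjoint intervals.

def is_valid_prototype(recorded_intervals: list) -> bool:
    """Accept only outcomes recorded as exactly one well-formed interval."""
    if len(recorded_intervals) != 1:
        # e.g. "either 29.31 +/- 0.01 or 32.05 +/- 0.01": a mistake in taking
        # or recording the measurement, not a usable prototype.
        return False
    lo, hi = recorded_intervals[0]
    return lo <= hi

print(is_valid_prototype([(30.1, 30.2)]))                    # True
print(is_valid_prototype([(29.30, 29.32), (32.04, 32.06)]))  # False
```

The second call encodes the faulty report from the thermometer example: two disjoint intervals cannot stem from a single measurement outcome, so the record is rejected.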
\subsection{Empirically Equivalent Theories} \label{sec:EET} Consistent with the motivations given in the Introduction, we are only interested in considering the relative simplicity of {\em empirically equivalent} theories. Empirical equivalence is defined here. Each scientific theory is motivated by some questions. A {\bf question} for the theory $T$ consists in the specification of the values of some concepts of the theory (e.g., the initial conditions or other choices within the alternatives offered by the principles) and a list of concepts that the theory is expected to determine. For example, in astronomy a valid question is: determine the motions of the planets in the sky, knowing the positions and velocities at some initial time. It is convenient to distinguish two kinds of questions: {\bf empirical questions}, which contain {\em only} ECs, and {\bf technical questions}, which also contain non-empirical concepts of the theory. Examples of the latter are questions concerning what cannot be measured in principle, such as the quantum mechanical wave function, or in practice, because of technical limitations that may be overcome eventually. Two theories $T$ and $T'$ are said to be {\bf empirically comparable}, relative to the sets of ECs ${\cal E}$ of $T$ and ${\cal E}'$ of $T'$, if there is a one-to-one correspondence ${\cal I}$ between ${\cal E}$ and ${\cal E}'$ and---under this correspondence---the experimental outcomes are interpreted in the same way by the two theories, i.e. those concepts that are identified via ${\cal I}$ possess the same prototypes. Note that, if $T$ and $T'$ are comparable for some ECs, then all the empirical questions---limited to those ECs---of one theory are also empirical questions for the other. 
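The comparability condition can be sketched as follows (the data representation is invented for illustration): each EC is represented by its set of prototypes, and the proposed correspondence ${\cal I}$ must be one-to-one and prototype-preserving.

```python
# Sketch of empirical comparability.  ECs are represented as names mapped to
# frozensets of prototype identifiers; `corr` plays the role of the
# one-to-one correspondence I between the EC sets of T and T'.

def empirically_comparable(protos_t: dict, protos_tprime: dict, corr: dict) -> bool:
    """True when `corr` is a bijection between the EC sets and the concepts
    identified by it possess the same prototypes."""
    if set(corr) != set(protos_t):
        return False                       # corr must cover every EC of T
    if set(corr.values()) != set(protos_tprime):
        return False                       # ...and reach every EC of T'
    if len(set(corr.values())) != len(corr):
        return False                       # ...injectively (one-to-one)
    return all(protos_t[ec] == protos_tprime[ec2] for ec, ec2 in corr.items())

protos_t = {"T_ref": frozenset({"gas-sample-1"})}
protos_tp = {"theta_ref": frozenset({"gas-sample-1"})}
print(empirically_comparable(protos_t, protos_tp, {"T_ref": "theta_ref"}))  # True
```

The example identifies the reference-temperature EC of one theory with a differently named EC of another theory via a shared prototype; renaming alone does not spoil comparability, while a mismatch in prototypes does.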
Finally, two theories $T$ and $T'$ are said to be {\bf empirically equivalent}, relative to ${\cal E}$ and ${\cal E}'$, if they are comparable and all their results concerning the ECs in ${\cal E}$ and ${\cal E}'$ are equal (within errors) under the correspondence ${\cal I}$. \section{Simple But Not Trivial} \label{sec:simple} This central section shows that there is no reason to expect that realistic theories can be expressed in an arbitrarily simple form by a suitable choice of the language, while also preserving their empirical content. First, for the sake of definiteness, a particular definition of simplicity ({\em conciseness}) is introduced in Section \ref{sec:concise}. The {\em trivialization} argument, according to which a trivial formulation of any theory {\em always} exists, is reviewed in Section \ref{sec:trivial}. But, a gap in the argument is also pointed out, inasmuch as the measurability of the concepts used in the trivial formulation is not guaranteed. This is not a remote possibility: in Section \ref{sec:simple-stab} an elementary theory that involves chaotic phenomena is analyzed in detail. It is actually easy to identify a very concise formulation for it, but precisely those concepts that naturally enable such a trivial formulation can be {\em proved} to be non-measurable. This simple (but not too simple) theory underlines a serious difficulty in closing the gap of the trivialization argument. In Section \ref{sec:gen-less} it is stressed that the obstacles identified in Section \ref{sec:simple-stab} are not due to some very peculiar features of that theory, but they are rather general. In fact, they are expected to emerge whenever a theory possesses sufficiently complex consequences. In view of this, it seems very unlikely that the gap in the trivialization argument might be closed, for any relevant set of realistic scientific theories. 
Finally, Section \ref{sec:alt} considers other possible characterizations of the simplicity of the assumptions, besides conciseness. It is shown that any acceptable (as defined below) characterization of the complexity of the assumptions poses the same obstacles to trivialization as conciseness does. The fact that different characterizations of simplicity are nontrivial does not imply that they are equivalent when used for theory selection. This interesting issue is addressed in Section \ref{sec:DDS-TS}. \subsection{Definition Of Conciseness} \label{sec:concise} Let {\bf $\sigma(T^{(L)})$} denote the {\bf string encoding all the principles} of a theory $T^{(L)}$, where it is emphasized that the theory is formulated in the language $L$. As already stressed, it is crucial that the string $\sigma(T^{(L)})$ also include the definitions of all the BECs in terms of the primitive concepts of the theory. In this way, anybody able to recognize the BECs of $T^{(L)}$ would find in $\sigma(T^{(L)})$ all the ingredients needed to check\footnote{In order to {\em derive} the results of $T^{(L)}$, the string $\sigma(T^{(L)})$ is not sufficient, without further original ideas. However, $\sigma(T^{(L)})$ is sufficient to {\em check} the validity of any given derivation. \label{fn:derive}} which results are correctly deduced from $T^{(L)}$ and which questions they answer, and to compare them with the experiments. The {\bf complexity} ${\cal C}(T^{(L)})$ is defined as the length of $\sigma(T^{(L)})$\footnote{It is interesting to compare this definition with Kolmogorov complexity. The Kolmogorov complexity of a string $x$ is defined as the length of the shortest program, written in a {\em fixed} Turing-complete language, that outputs $x$. We could also have defined our complexity as the length of the shortest program that outputs the string $\sigma(T^{(L)})$. However, in the present context, the language depends on the theory.
It is therefore equivalent and simpler to define the complexity directly as the length of $\sigma(T^{(L)})$, because if we find a shorter program, we can choose that program as $\sigma(T^{(L)})$. Note that $\sigma(T^{(L)})$ is {\em not} expected to produce theorems or formulae automatically (see footnote \ref{fn:derive}). Finally, Kolmogorov theory does not distinguish ECs from NECs, although it would not be difficult to introduce an equivalent distinction between realizable and unrealizable Turing machines.}\;\footnote{Note that $\sigma(T^{(L)})$ includes all the principles, but not the questions, which are potentially unlimited. However, a theory $T$ cannot cheat by hiding the principles inside the questions, because the empirical questions translated from another theory $T'$ through the correspondence ${\cal I}$ (see Section \ref{sec:EET}) would miss this information and would have no answer in $T$.}. The length of the string is measured in the alphabet associated with the language $L$. Note that one cannot tell, in general, whether a given $\sigma(T^{(L)})$ represents the shortest possible formulation of the principles of $T$ in the language $L$. The string $\sigma(T^{(L)})$ is simply the shortest {\em known} formulation\footnote{This is analogous to the fact that the Kolmogorov complexity function is not computable in general \citep{LiVitanyi1997}, and most applications of Kolmogorov theory refer to the available compression methods.} in the language $L$. The discovery of a shorter encoding represents the discovery of a new result of the theory, enabling higher conciseness. Finally, the {\bf conciseness} of $T^{(L)}$ is defined as the inverse of the complexity ${\cal C}(T^{(L)})$. \subsection{Arguments For The Triviality Of Conciseness} \label{sec:trivial} The philosophical literature contains many examples of theories that can be expressed in a very simple form by a suitable choice of the language.
The classic example is the theory asserting: {\em all emeralds are green if they were first observed before January 1st 2020 and blue if first observed after that date} \citep{grue}. This statement can be shortened to {\em all emeralds are grue}, by a suitable definition of {\em grue}. Another example is provided by the curve fitting problem \citep{Sober_PoS}. Higher-degree polynomials may appear more complex than lower-degree ones, but the complexity disappears under a suitable change of variables. The concept of conciseness does {\em not} help in deciding which formulation is simpler in these cases. In fact, the concepts of green and grue are both perfectly measurable and hence acceptable as BECs. Similarly, high-degree polynomials may look unappealing, but they can be defined and computed precisely in terms of the original (measurable) variables. The problem with these toy models is that they miss some essential features of realistic scientific theories, insofar as they have very few consequences. As soon as a theory becomes sufficiently rich in consequences, qualitatively new obstacles appear, and the path toward a concise {\em and} measurable formulation is lost, as shown in the example of the next section. There is also a common {\em general} argument holding that the formulation of {\em any} theory can be made arbitrarily simple. In the case of conciseness, such a {\bf trivialization argument} goes as follows\footnote{In the context of Kolmogorov complexity, the corresponding argument has been presented in \citet{Kelly-razor,DelahayeZenil}.}. Imagine that, in the language $L$, the long string $\sigma(T^{(L)})$ cannot be compressed further with any known method. Then one can always define a new language $L'$, which is identical to $L$ except that it represents the long string $\sigma(T^{(L)})$ with the single character $\Sigma$. Obviously, it is impossible to deduce any nontrivial result from a theory whose principles are just `$\Sigma$'.
However, this might not be necessary, if all the results of $T$ could still be implicit in the {\em interpretation} of $\Sigma$. In general, one should expect the concept $\Sigma$ to be difficult to interpret in terms of the empirical data. But the fact that $\Sigma$ may be {\em difficult} to measure is not sufficient to exclude the formulation of the theory in the language $L'$: difficult measurements can be learned and are routinely conducted by experimental scientists. The key point is that there exist concepts that are {\em provably not measurable} (examples are given in the next section). In order to be conclusive, the trivialization argument should demonstrate that $\Sigma$ can always be chosen among the measurable concepts of the theory. This task has never been undertaken in the literature\footnote{Remarkably, simplicity and measurability---both classic topics in philosophy of science---have rarely been combined.}. The proof that $\Sigma$ can be chosen---in general---to be measurable is not only missing, it also looks quite unrealistic. In fact, the following section illustrates an example of a theory where the natural choices of $\Sigma$ can be {\em proved} to be unmeasurable. Alternative choices of $\Sigma$ cannot be excluded. But, on the basis of this example, assuming the general existence of a measurable $\Sigma$ is definitely not plausible. Even if the primitive concepts of the theory are not measurable, it is still possible to define other measurable concepts and select them as BECs. In fact, any sentence in the new language $L'$ can still be translated into the original language $L$ and vice versa. However, the definition of conciseness requires taking into account also the length of the string that defines all the BECs in terms of the primitive concepts of the theory. This approach is also considered in the following example, but it turns out to lead to lengthier expressions.
\subsection{A Not-too-simple Theory} \label{sec:simple-stab} The goal of this section is to show that there exist concepts that are {\em provably not measurable}, and that such concepts appear naturally when trying to reduce a theory to a trivial form. This demonstrates a serious gap in the trivialization argument, which does not ensure that unmeasurable concepts can be avoided. To this end, we consider the theory (called ${\cal B}$) which is defined by the laws of classical mechanics applied to a single small (approximately point-like) ball on a billiard table with a mushroom shape (see, e.g., \citealp{MushroomB} and Figure \ref{fig:M}). This is defined by a curved boundary on the top side (the cap) joined to a rectangular boundary with sharp corners on the bottom side (the stem). Such billiards exhibit chaotic behavior when the initial conditions are chosen within certain ranges, which is assumed in the following. The nice feature of such billiards is that the trajectory of the ball can be computed exactly at any time---in spite of its chaotic nature. This enables precise statements about the (non-)measurability of the quantities relevant to this discussion. \begin{figure}[htb] \centering \includegraphics[scale=0.30]{Mushroom.eps} \caption{The black/solid interval $\xi_1(t_0)$ represents a range of initial conditions in the coordinates $\xi_1$ at time $t_0$. This is also an interval in the coordinate $q_x$. After a few bounces, the interval $\xi_1(t_0)$ is transformed into at least three disjoint sets (contained in the three blue/dashed lines labeled by $\xi_1(t_1)$). The figure has been produced with the help of the program made available by \citet{billmat}. \label{fig:M}} \end{figure} The theory ${\cal B}$ can be naturally expressed in the language $L$ that makes use of the coordinates $z$, where $z:=(\vec{q},\vec{p})$ collectively denotes the position $\vec{q}:= (q_x,q_y)$ and momentum $\vec{p}:=(p_x,p_y)$ of the ball.
The only BEC that needs to be assumed corresponds to assessing, within some fixed precision, whether the ball at time $t_0$ lies at a reference point $\vec{q}_{\rm ref}$ in the table, and whether it has a reference momentum $\vec{p}_{\rm ref}$. Any other measurement of position or momentum (at any time) can be operationally defined from this single BEC and the principles of the theory. In fact, the measurement procedures are exactly the same at any time, since the theory is manifestly time invariant when expressed in the coordinates $z$ (this does not hold in the coordinates $\xi$, introduced below). Measurements of position and time necessarily have limited precision, which is assumed, for definiteness, to be at the level of a millimeter and a tenth of a second, respectively. It is also assumed, for simplicity, that the walls are perfectly elastic, that the ball does not spin, and that friction is negligible for a time sufficient for the ball to perform a large number of bounces. Assuming the standard Hamiltonian formalism, the dynamics of this system is completely defined by the function $H(z)=H(\vec{q},\vec{p}) = \frac{\vec{p}^2}{2m} + V(\vec{q})$, where $m$ is the mass of the ball, and $V(\vec{q})=0$ for all $\vec{q}$ inside the billiard and $V(\vec{q})=\infty$ outside. These formulae contribute to the length of $\sigma({\cal B}^{(L)})$ with about 35 characters, to which one should add a few more characters to describe the boundary conditions ($B_M(z)=0$) associated with the mushroom shape of the billiard. Since the BEC of this theory ($z_{\rm ref}$) already appears among the primitive concepts, no further definition is needed. Finally, the contribution to $\sigma({\cal B}^{(L)})$ due to all the standard psychological, physical, mathematical and logical assumptions is ignored, since it remains unaltered throughout this discussion.
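As an aside, the dynamics implied by the Hamiltonian above is easy to make concrete. The following minimal sketch (our own illustration, not part of the theory's formulation; all function names are invented) implements the two ingredients of the motion: free straight-line flight where $V=0$, and specular reflection at a hard wall, which models the perfectly elastic, frictionless, non-spinning ball assumed in the text.

```python
# Illustrative sketch of the billiard dynamics implied by H(q,p) = p^2/2m + V(q)
# with hard walls. Function names and values are invented for illustration.

def free_flight(q, p, m, dt):
    """Inside the billiard V = 0, so Hamilton's equations give straight motion:
    q -> q + (p/m) dt, with p unchanged."""
    return (q[0] + p[0] / m * dt, q[1] + p[1] / m * dt), p

def reflect(p, n):
    """Elastic bounce off a wall with unit normal n: p -> p - 2 (p.n) n.
    This flips the normal component of the momentum and conserves p^2,
    hence the Hamiltonian."""
    pn = p[0] * n[0] + p[1] * n[1]
    return (p[0] - 2 * pn * n[0], p[1] - 2 * pn * n[1])

q, p = (0.2, 0.0), (3.0, -4.0)
q, p = free_flight(q, p, m=1.0, dt=0.1)   # drift toward the wall
p = reflect(p, (0.0, 1.0))                # bounce off a horizontal wall
assert p == (3.0, 4.0)                    # p^2 = 25 before and after the bounce
```

The chaotic behavior discussed in the text comes entirely from the shape of the boundary, not from these two elementary rules, which are the same for any billiard.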
Following the idea of the trivialization argument, there is a special language ($L'$) that makes the principles of ${\cal B}$ very concise. The trivialization argument does not explain how to build such a language, nor how to connect it to measurable quantities. However, it is not difficult to find a suitable language for the theory ${\cal B}$. In fact, a natural choice for $L'$ is defined by those coordinates $\xi = (\xi_1, \xi_2, \xi_3, \xi_4)$ in which Newton's laws take the exceedingly concise form ``$\xi=$ constant''. Such a choice of coordinates can be defined (with respect to the language $L$) by setting $\xi = \xi(t_0) = z(t_0)$ at a reference time $t_0$ and then assigning the same value of $\xi$ to all future and past configurations that belong to the same trajectory $z(t)$. There are now two possibilities. Imagine, first, that we want to keep the original BEC $z_{\rm ref}=(\vec{q}_{\rm ref},\vec{p}_{\rm ref})$. In this case, the single BEC $z_{\rm ref}$ measured at $t_0$ does not suffice, because the principles of the theory do not provide the relation between the coordinates $\xi$ and the coordinates $z$ at any time different from $t_0$. Hence, we do not know how to perform measurements at times different from $t_0$. The BECs ($z$) at time $t \neq t_0$ can be related to the primitive concepts $\xi$ by using the Hamiltonian $H(z)$, the boundary conditions $B_M(z)$, and computing the evolution of the trajectories from $t_0$ to $t$. These are computable but very cumbersome expressions, which become more and more complex after each bounce. Since we do not want to include $H(z)$ and $B_M(z)$ among the principles, such expressions are the only link we have between the principles and the BECs, and hence we have to include them in $\sigma({\cal B}^{(L')})$, as required by the definition of Section \ref{sec:concise}. This implies that $\sigma({\cal B}^{(L')})$ grows indefinitely with the time separation from $t_0$, while $\sigma({\cal B}^{(L)})$ remains fixed.
The second possibility is to drop the coordinates $z$ altogether, and use the $\xi$ coordinates not only as primitive concepts in the formulation of the theory, but also directly as BECs. This leads to a theory that we denote $\overline{{\cal B}}^{(L')}$, which---apparently---could be much more concise than ${\cal B}^{(L)}$ and yet empirically equivalent to it. The problem is that the $\xi$ coordinates, which have a clear interpretation at the reference time $t_0$, cannot be empirically detected at time $t_1$, a few bounces after $t_0$, with the same precision as at $t_0$. This is not just {\em practically difficult} but {\em intrinsically impossible}, because the system ${\cal B}$ displays chaotic dynamics \citep{ChaoticDynamics}, which is characterized by a high sensitivity to the initial conditions. This means that two initially nearby trajectories diverge very fast in time. To illustrate the consequence of this in a simple way, let us restrict attention to the two coordinates $q_x$ and $\xi_1$ of the ball. By construction, they coincide at $t_0$ (i.e., for any interval at $t_0$, $[q_x\pm\Delta] = [\xi_1\pm\Delta]$, where $\Delta=1$mm), but at $t_1$ the trajectories that were close at $t_0$ have taken many different directions. Consequently, the interval $[q_x\pm\Delta]$ at $t_1$ corresponds to many disjoint and very small intervals\footnote{Because of the sharp (non-differentiable) corners in the boundary of the mushroom-shaped billiard, the Poincar\'{e} map---which associates the coordinates of the initial points to those of the evolved points---is not continuous. Hence, a single interval in the parameter set of the initial conditions is split, after each bounce, into disjoint intervals.} in the coordinate $\xi_1$. Conversely, any interval $[\xi_1\pm\Delta]$ at $t_1$ corresponds to many disjoint and very small intervals in the coordinate $q_x$ (see Figure \ref{fig:M}).
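The splitting mechanism can be exhibited quantitatively with a standard textbook stand-in (not the mushroom billiard itself, whose Poincar\'{e} map is cumbersome): the baker's map, which, like the billiard dynamics, is measure-preserving, chaotic, and discontinuous. The sketch below (our own illustration; the specific numbers are invented) tracks a single small square of initial conditions and shows it being cut into many disjoint, very thin pieces, while the total area is conserved.

```python
# Illustrative stand-in for the billiard's discontinuous Poincare map: the
# baker's map on the unit square stretches in x, squeezes in y, and cuts at
# x = 1/2, so a single small set of initial conditions splits into many
# disjoint, very thin pieces of conserved total area.

def bakers_step(rects):
    """Apply one baker's-map step to a list of rectangles (x0, x1, y0, y1);
    rectangles crossing the cut at x = 1/2 split into two pieces that land
    in different y-halves, hence become disjoint."""
    out = []
    for x0, x1, y0, y1 in rects:
        if x1 <= 0.5:
            parts = [(x0, x1, 0)]
        elif x0 >= 0.5:
            parts = [(x0, x1, 1)]
        else:
            parts = [(x0, 0.5, 0), (0.5, x1, 1)]
        for a, b, half in parts:
            if half == 0:   # left half -> lower band
                out.append((2 * a, 2 * b, y0 / 2, y1 / 2))
            else:           # right half -> upper band
                out.append((2 * a - 1, 2 * b - 1, (y0 + 1) / 2, (y1 + 1) / 2))
    return out

# One small square of initial conditions (side 0.01)...
rects = [(0.30, 0.31, 0.30, 0.31)]
for _ in range(10):
    rects = bakers_step(rects)

area = sum((x1 - x0) * (y1 - y0) for x0, x1, y0, y1 in rects)
assert len(rects) > 5                    # ...has split into many disjoint pieces,
assert abs(area - 0.01 * 0.01) < 1e-9    # while the total area is conserved.
```

As in the text, a detector of fixed resolution can locate one connected piece, but cannot resolve a set scattered into many pieces far thinner than its resolution.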
But there is an important difference between the intervals $[q_x\pm\Delta]$ and $[\xi_1\pm\Delta]$ at $t_1$: prototypes for the former are possible, while for the latter they are not, as a matter of {\em principle}, because we have no way to measure the many disjoint pieces that compose $[\xi_1\pm\Delta]$. Of course, the measurable $[q_x\pm\Delta]$ intervals could be expressed in the $\xi$ coordinates as the union of many extremely small disjoint intervals, but, as required in Section \ref{sec:def-th}, these cannot be associated with valid prototypes, and hence the $\xi$ cannot be ECs at $t_1$. In conclusion, the obvious requirement that ECs be associated with connected intervals is sufficient to formally exclude---in agreement with the intuition---the $\xi$ concepts as empirical. In order to use the $\xi$ coordinates to characterize the system at time $t_1$, it would be necessary to introduce a new coordinate system: besides the $\xi$ with reference at $t_0$, one would need the $\xi^{(t_1)}$, with reference at $t_1$, and the procedure should be repeated for a full sequence of times $t_i$. But the measurements of $\xi^{(t_i)}$ cannot be operationally defined from those of $\xi^{(t_0)}$, since, as shown in the previous paragraph, the size of the overlaps of the respective intervals is much below the experimental sensitivity. Hence, new BECs---and corresponding new prototypes---are needed for each different time $t_i$. In order to keep the same empirical adequacy as the original theory, the new theory should define essentially as many BECs as experimental data, which would again make $\sigma(\overline{{\cal B}}^{(L')})$ extremely large. \subsection{Other Scientific Theories} \label{sec:gen-less} In the previous section we examined a particular theory and showed that the tools at our disposal fail to make it more concise.
Hence, the theory ${\cal B}$ illustrates some obstacles that prevent closing the gap in the general trivialization argument of Section \ref{sec:trivial}. In this section we further note that similar obstacles appear quite generally for realistic theories. This should convince the reader that a recovery of some version of the trivialization argument, covering a relevant set of scientific theories, is very unlikely. One reason is that, as stressed in Section \ref{sec:def-th}, scientific theories are multidisciplinary collections of principles gathered from different domains of science. Because of this, it is sufficient that the mechanism described in the previous section apply in one corner of the theory to constrain the possible languages in all other sectors. Given that the vast majority of real physical systems admit chaotic phenomena, it is easy to appreciate the effectiveness of this constraint. Another reason, which is less compelling but more general, is the following. If the laws of a theory are expressed in a form so concise that no nontrivial result can be deduced, then all the consequences of the theory must be evident in the BECs of the theory. It follows that either the theory has very limited consequences, or it needs to introduce a large number of BECs, or---finally---the interpretation of the BECs is very rich. But in this last case, it should not be too difficult to identify not only practical but also {\em fundamental} obstacles to the measurability of those BECs. It is clear that this argument applies only to theories with sufficiently complex consequences. Even the idealized solar system, which played a glorious role in the history of science, is not rich enough---alone---to exhibit the idea above. In fact, it may not be impossible to reduce the Ptolemaic model to a very concise theory by using a small set of suitable BECs. After all, the orbital motion of a few idealized celestial bodies is an exactly integrable and periodic system.
But, as soon as one considers, for example, Newton's laws in more general contexts, the amount and the variety of phenomena that can be described becomes arbitrarily large, while the set of laws and BECs remains small. Also the curve fitting problem---which is often employed as a toy model to discuss simplicity in the philosophical literature---is not rich enough to show any {\em insuperable} conflict between conciseness and empirical adequacy, as we have already seen. Indeed, {\em it is only in a sufficiently rich system that the conciseness of the description may come into insurmountable conflict with the accuracy of the description}. This argument is expected to be relevant not only for highly mathematical sciences, but for all theories that entail many different empirical consequences. An exhaustive analysis of the implications of this idea for all scientific fields is obviously impossible here, but one general conclusion can be drawn: for any theory, no trivial formulation can be assumed to exist (in terms of ECs) unless it is explicitly found. Hence the most concise available formulation acquires an objective cognitive value. \subsection{Nontriviality Of Other Characterizations Of Simplicity} \label{sec:alt} In the previous sections we have seen that the trivialization argument fails---in general---to reduce the value of conciseness, as defined in Section \ref{sec:concise}. Here, we show that the same result holds for any {\em acceptable} definition of the complexity of the assumptions. In order for a notion of complexity/simplicity to be {\bf acceptable}, we require at least the following two properties. First, the complexity of a theory should take into account the cost of defining the BECs of the theory in terms of the concepts appearing in the principles (the primitive concepts).
Second, the complexity of an expression must be higher than the complexity of any of its proper sub-expressions\footnote{We also assume that the complexity function takes integer values, so that the increments cannot be infinitesimal.}. These properties are presumably not sufficient to characterize an acceptable notion of complexity/simplicity, but they are certainly necessary. These properties hold, in particular, for our notion of conciseness. They also hold for the notion of parsimony \citep{Baker-SEP}, which measures (somehow) the domain of the logical quantifiers that appear in the postulates, and for the notion of simplicity of \citet{GoodmanSimple}, which measures the amount and the complexity of the set of primitive predicates\footnote{Since the distinction between the BECs and the primitive concepts is usually not stressed when discussing simplicity, the first property is not apparent from \citet{Baker-SEP} and \citet{GoodmanSimple}. But it is obvious, once the definitions of the BECs in terms of the primitive concepts are included among the postulates of the theory.}. If we re-examine the theory of Section \ref{sec:simple-stab}, the same argument goes through unchanged, except at the points where the complexity of the theories ${\cal B}^{(L')}$ and $\overline{{\cal B}}^{(L')}$ needs to be computed. The result obviously depends on the definition of complexity, but both theories contain expressions that grow indefinitely with the number of empirical observations to which the theory can be compared. In fact, the expressions relating the BECs of ${\cal B}^{(L')}$ to the principle $\Sigma$ become more and more cumbersome with increased time separation of the measurement from the reference point, while, in the case of the theory $\overline{{\cal B}}^{(L')}$, it is the number of BECs that grows indefinitely with time. According to the second requirement stated above, the complexity of a growing expression must grow.
Therefore, we must conclude that neither of those two theories can be simpler than the original theory ${\cal B}^{(L)}$, independently of the particular definition of complexity which is used. \section{Different Notions Of Simplicity And The Goals Of Science} \label{sec:DDS-TS} In Section \ref{sec:simple} we saw that the general argument for triviality fails, once the empirical content of the theory is properly taken into account. Under these conditions, essentially any acceptable characterization of the simplicity of the assumptions becomes nontrivial. Thanks to this result, it becomes meaningful to ask whether different characterizations of simplicity also lead to approximately the same theory selection, when applied to a significant set of real scientific theories. Furthermore, do they also lead to the same theory selection that may be defined by other classic values in science? These questions are very important. The consistency of different criteria would strongly support the high cognitive value of any such criterion. Moreover, it would fully justify the scientists' belief that some theories are unambiguously better than other (empirically equivalent) ones. Such consistency can never be proved conclusively. It is only possible to accumulate evidence in its favor or to falsify it\footnote{In this sense, philosophical theories are not different from scientific theories.}. This can be done by examining different definitions of simplicity (or different virtues) and applying them to a significant set of real scientific theories. Each of these cases clearly requires a dedicated effort to be duly investigated. In the rest of this paper we only take a small step in this direction, in order to convince the reader that the consistency mentioned above is not at all unlikely. Section \ref{sec:diff} presents a general argument in support of the consistency of criteria based on different definitions of the simplicity of the assumptions.
As noted, this is far from conclusive, but it suggests an interesting challenge for philosophy of science. In the subsequent sections the concept of conciseness is examined in more detail, in order to show that it captures significant features of the goals of science. First, in Section \ref{sec:rel-int} it is shown how conciseness can be estimated in practice. In Section \ref{sec:ad-hoc} the efficacy of conciseness in penalizing theories with many ad-hoc assumptions is emphasized. Section \ref{sec:examples} offers a brief overview of other virtues. \subsection{Are Different Notions Of Simplicity Equivalent?} \label{sec:diff} We have seen that the formulation of a theory must include the definition of its BECs in terms of its primitive concepts. Under a different characterization of simplicity, the same theory could achieve its simplest formulation by using different BECs. However, the constraints that the BECs should be measurable (ECs) and rather unambiguous (in order to preserve empirical adequacy) make it {\em very difficult to find formulations that are radically different from the traditional one}, which is often already the result of strong efforts of simplification (according to some intuitive notion of simplicity). If the choice of the possible formulations is practically limited to small variations from the traditional one, then the different definitions of complexity must be applied to the same (or very similar) formulations. Moreover, we typically want to compare theories that differ only by a rather limited set of assumptions (see also Section \ref{sec:rel-int}). These observations together imply that we typically have to compare different definitions of complexity applied to {\em very similar} and {\em rather short} strings.
If so, one should expect that theories which are simple according to one criterion also be simple according to the others, since a short formulation necessarily also has few quantifiers and few predicates, and (except for very peculiar cases) the converse is also true. This suggests that all the definitions mentioned in Section \ref{sec:alt} may lead to essentially the same theory selection, when applied to real cases. This argument is certainly not conclusive. It is conceivable that some alternative notion of simplicity might exist, which is still legitimate and very different from the intuitive one, and for this reason might have been overlooked by scientists. It is also possible that scientists might be overlooking alternative formulations of their theories that would reveal the prejudices behind their assessments of simplicity. However, this can be determined only by providing explicit alternatives and not by general arguments. In the absence of valid alternatives, the simplest available formulation retains an objective cognitive value. \subsection{Practical Estimate Of Conciseness} \label{sec:rel-int} The rest of Section \ref{sec:DDS-TS} examines the notion of conciseness and compares it to other classic cognitive values. The first issue is its practical estimate. A first remark is that, in order to maximize the conciseness of a theory, it is very hard to use languages that are radically different from the traditional one. In fact, this would correspond to a major new discovery. If we are limited to small departures from the traditional language, then the conciseness can be estimated by simple inspection of the length of the principles expressed in their traditional form. A second remark is that a precise computation of ${\cal C}(T)$ is not realistic, even in a given language, and even for very simple theories such as the one analyzed in Section \ref{sec:simple-stab}. But we are never interested in the absolute value of ${\cal C}(T)$.
The interesting problem, in practice, is always to compare two theories that share most of the assumptions and are empirically equivalent. In these cases, the difference ${\cal C}(T)-{\cal C}(T')$ between two theories $T$ and $T'$ is typically easy to estimate---possibly using informal languages---and not impossible to compute exactly. As an example of how one can estimate the difference ${\cal C}(T)-{\cal C}(T')$ in an informal language, consider the two theories of special relativity (SR) and classical Galilean relativity (CR)\footnote{Since the two theories are not empirically equivalent, the comparison is interesting only from the technical point of view of computing their conciseness.}. In their most concise modern formulations, the two theories differ by a single postulate, which is, in the case of CR: {\em time and space intervals are constant in all inertial frames}, while for SR it reads: {\em the speed of light is constant in all inertial frames}. A suitable language can make these postulates considerably shorter, but both theories need at least one symbol for each of the concepts of {\em time}, {\em space}, {\em interval}, {\em velocity}, {\em light}, etc. This shows that CR cannot be made more concise than SR, without a (presently unknown) radical revision of the formulation of these theories. Consequently, if we had to correct the wrong predictions of CR by adding ad-hoc hypotheses, we would certainly arrive at a much more complex formulation than SR. \subsection{Conciseness, Ad-hoc Assumptions And Information} \label{sec:ad-hoc} This section examines the efficacy of conciseness in penalizing theories that include many ad-hoc assumptions. As stressed in the Introduction, defining a measure for the amount and complexity of the assumptions is a prerequisite for a precise characterization of many classic cognitive values in science. It is well known that the presence of ad-hoc assumptions is difficult to characterize from a strictly logical point of view.
For example, adding more assumptions makes a theory more restrictive. But the property of being restrictive is not a good characterization of having many ad-hoc assumptions, because the best theories are extremely restrictive and admit only what really happens. What is bad in ad-hoc assumptions is not that they introduce restrictions, but that we are unable to express them without adding {\em more words}, while a good theory manages to be very restrictive with few words. Consideration of the syntax, besides the logical structure, is clearly necessary to represent the intuitive idea of ad-hoc assumptions. If the shortest formulation of the theory $T'$ is not longer than that of $T$, then $T'$ cannot be seen as the addition of ad-hoc assumptions on top of $T$, even if $T'$ implies $T$. This means that a {\em nontrivial} measure of conciseness can---at least in some cases---exclude that a new theory is obtained by adding ad-hoc assumptions. For example, most theories of gravity within the ppN formalism \citep{lrr-2006-3} are built as modifications of Einstein's (or Newton's) theory of gravity. For many of those ppN theories, we cannot imagine a way to express them more concisely than Einstein's (Newton's) theory itself: we know how to formulate them only by formulating Einstein's (Newton's) theory first, and then adding further elements. Moreover, under the reasonable assumption that the Lagrangian formalism and differential geometry are standard tools which are needed anyway, it is hard to imagine a theory as concise as general relativity and empirically equivalent to it. These ppN theories are generally recognized as possessing more ad-hoc assumptions than general relativity, and they actually correspond to longer formulations. Another example is the following. Assuming that a given thermometer was not working properly on some specific occasions may explain a few strange results.
But, if we try to explain all strange measurements of temperature in this way, we have to add to the general theory a huge list of specific exceptions. Alternatively, assuming that all thermometers of some brand give a wrong answer 10\% of the times can contribute to providing a more consistent description of a large set of measurements around the world, with a limited increase of the complexity of the theory. The latter procedure is clearly less ad-hoc and more concise than the former. The sensitivity to ad-hoc assumptions is a consequence of a more basic virtue of concise theories: out of two theories with the same consequences, the more concise one provides evidence that {\em the needed information to obtain the same results is less} than would be expected from less concise theories. This is also confirmed by the following observation. If a scientist is confronted with two formulations of the same theory, she would never erase the shorter one, even if it misses other qualities that she might find desirable. In that case she would keep both. This highlights an important {\em cognitive advantage} of the most concise formulation, which is completely independent of any reference to reality, and hence fits well with an {\em empiricist} view of science. \subsection{Conciseness And The Goals Of Science} \label{sec:examples} This section sketches some connections between the concept of conciseness and other classical criteria for theory appraisal. Again, this is not meant to show any superiority of conciseness with respect to other characterizations of simplicity, but rather to exemplify how a well defined and nontrivial characterization of simplicity gives the chance to establish explicit connections with other cognitive values. As already mentioned in the introduction, the idea of conciseness may enable a more precise formulation of the idea of {\em unification}.
Two theories are unified when they are substituted by a single theory that answers at least all the questions previously answered by the original two theories. If the unification is not mere juxtaposition, some of the old assumptions should appear as duplicated and be combined in a single one, or be both dropped in favor of another more powerful assumption.\footnote{It may be the case that a unifying theory introduces more sophisticated mathematical tools. But, according to the definition in Section \ref{sec:def-th}, a scientific theory is necessarily a multidisciplinary collection of assumptions coming from different fields. Sophisticated mathematical tools---besides being generally very concise---usually have many fields of applicability, which considerably reduces their impact on the overall conciseness.} This suggests that most interesting cases of unification have also produced more concise theories, although a systematic historical analysis would certainly be needed to assess this point conclusively. A similar argument can be used to interpret many cases of {\em reduction} of scientific theories as cases of increased conciseness\footnote{Note that, if $T$ is more empirically adequate than $T'$, it is not very interesting to compare the conciseness of $T$ to the one of $T'$, but rather to the one of a theory $T''$, which is obtained by adding to $T'$ suitable assumptions able to correct the wrong predictions of $T'$. }. Classic examples are Newton's reduction of Kepler's laws to the laws of mechanics, and the reduction of thermodynamics to statistical mechanics\footnote{It is controversial whether the latter is an example of reduction, but it is anyway an example of increased conciseness.}. In the first case, the laws that describe mechanical phenomena are shown to be sufficient also to explain astronomical phenomena; in the second case, the laws of mechanics and probability are sufficient to explain also thermodynamical phenomena.
Both cases correspond to the realization that all the phenomena under consideration can be explained with fewer overall assumptions. Other examples are currently being provided by the computational sciences, which have achieved tremendous success in reducing various phenomenological laws to more fundamental ones. Among the recognized values that a scientific theory should have is also that of {\em coherence} with the other accepted theories. This does not seem to be related to conciseness. But, in our approach (see Section \ref{sec:def-th}), a scientific theory is necessarily a multidisciplinary collection of {\em all} the assumptions that are needed to derive the results that can be compared to real experiments. In this context, coherence between the different domains of science is not a virtue: it is a necessity that is assumed from the start. An original application is the explanation of the problem of {\em fine tuning} in the standard model of elementary particles. This problem lies in the fact that the fundamental parameters of the model need to be known with a very large number of digits, in order to reproduce (even with moderate precision) the experimental values. Since the fundamental parameters must be included in the principles of the theory, this is, effectively, a problem of conciseness. The idea of conciseness can also explain why {\em solipsism} is void of interest. Solipsism can be excluded neither on logical nor on empirical grounds. The problem with solipsism is rather the unnecessary amount of assumptions that need to be made in order to explain the experience. In fact, the experiences {\em reported} to the subject by other people require different explanations---and hence additional postulates---from those explaining the {\em direct} experiences of the subject. What the subject sees can be explained much more {\em concisely} by assuming an underlying reality, independent of the mind.
Finally, one should also mention that there exist research programs that aim at recognizing signatures of {\em irreducible complexity} in nature. In such programs, conciseness cannot be a value, by construction. But, this is consistent with the fact that those goals are not recognized by the vast majority of the scientific community, since no evidence can possibly exclude the existence of yet-to-be-discovered more concise rules. \section{Conclusions And Perspectives} \label{sec:conclusions} Scientists often regard simpler assumptions as unambiguously preferable to complex ones. Moreover, most classic standards of progress in science implicitly rely on a characterization of the simplicity of the assumptions, in order to acquire a precise meaning. Any precise definition of simplicity---which is relevant in this sense---necessarily requires the examination of the principles of the theory. Moreover, in order to evade general arguments for the triviality of any notion of simplicity, it is also necessary to establish a formal connection between the principles and the measurable concepts of the theory (ECs). This paper shows explicitly how the principles and the ECs can be included in a view of scientific theories, which is not in contradiction with modern views, and avoids the pitfalls of the traditional view. Although the ECs are, in general, theory dependent, each theory includes concepts that are empirical by construction of the theory itself. The ECs are important not only in order to compare the theory with the experiments and with other theories, but also to constrain its possible formulations. In fact, {\em a theory must be expressed in a language able to represent its empirical content}. The importance of this requirement cannot be appreciated when considering isolated toy-theories, which entail only a few consequences. But it becomes crucial for realistic theories, whose consequences are many and complex.
In fact, in those cases, improving the simplicity of the formulation may conflict with the need to preserve its accuracy. This is illustrated through the inspection of a specific example of a theory and by employing the precise notion of conciseness. As a result, {\em the fact that some theories are more concise than others is not purely conventional. It is as objective as the fact that some quantities are measurable and others are not}. The concept of conciseness introduced in this paper is just one of the many possible characterizations of simplicity. Here it is used mainly as an example, showing that a nontrivial characterization of simplicity is possible. Similar arguments can be applied to other definitions of simplicity, which also become nontrivial once the precise connection to measurable quantities is taken into account. These observations lead naturally to the important question whether different---nontrivial---definitions of simplicity induce approximately the same theory selection, when applied to a significant set of real cases. A further question is whether these criteria are also consistent with the other classic standards of progress in science. The availability of a class of nontrivial definitions of simplicity is a crucial prerequisite to address these questions precisely. {\em A positive answer to these questions would provide a solid philosophical justification in support of the scientists' belief that some theories are unambiguously better than other (empirically equivalent) ones}. This paper cannot support a positive answer conclusively, but it argues, through a few general considerations and some examples, that this possibility is not presently excluded. In order to prove the scientists wrong, it is necessary to identify a legitimate definition of simplicity that contradicts some of the assessments that are universally held by the scientists. General arguments about its existence are not sufficient.
\paragraph{\bf Acknowledgments} Email exchanges with L.D.~Beklemishev, G.~Chaitin, M.~Hutter and H.~Zenil, as well as the valuable comments of the anonymous referees are gratefully acknowledged. The author is a member of the Interdisciplinary Laboratory for Computational Science (LISC) and acknowledges support from the AuroraScience project.
\subsection{#1}} \newcommand{\sctOne}[1]{\subsubsection{#1}} \newcommand{\BOp}[1]{\widehat{B}_{#1}} \newcommand{\widehat{G}_0}{\widehat{G}_0} \newcommand{P_{\mathrm{can}}}{P_{\mathrm{can}}} \makeatletter \begin{document} \title{Emergent centrality in rank-based supplanting process} \author{Kenji\ Shimomura$^1$, Yasuhiro\ Ishitsuka$^2$, and Hiroki\ Ohta$^3$} \affiliation{ $^1$Center for Gravitational Physics and Quantum Information,\\ Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto, 606–8502, Japan \\ $^2$Institute of Mathematics for Industry, Kyushu University, Fukuoka, 819–0395, Japan \\ $^3$Department of Human Sciences, Obihiro University of Agriculture and Veterinary Medicine, Hokkaido, 080-8555, Japan} \date{\today} \begin{abstract} We propose a stochastic process of many interacting agents, which is inspired by rank-based supplanting dynamics commonly observed in a group of Japanese macaques. In order to characterize the breaking of permutation symmetry with respect to agents' rank in the stochastic process, we introduce a rank-dependent quantity, {\em overlap centrality}, which quantifies how often a given agent overlaps with the other agents. We give a sufficient condition in a wide class of models such that overlap centrality shows perfect correlation in terms of the agents' rank in the zero-supplanting limit. We also discuss a singularity of the correlation in the case of an interaction induced by a Potts energy. \end{abstract} \maketitle \section{Introduction} One of the promising candidates for going a step further in studying a many-body system is to construct a lattice model of interacting elements or agents that describes the system. This strategy is applied not only to equilibrium systems \cite{Baxter} but also to nonequilibrium systems \cite{Odor}.
Much of this attention has been devoted to nonequilibrium lattice models such as the driven lattice gas \cite{DLG}, the ASEP \cite{ASEP}, the ABC model \cite{ABC}, the zero range process \cite{ZRP1,ZRP2}, etc., where emergent macroscopic properties such as phase transitions are among the topics to be elucidated. Recent development on active matter has focused on experimentally realizable systems such as colloidal or biological systems showing various phase transitions such as the flocking transition, lane formation, or motility-induced phase separation \cite{Vicsek,Lane1,Bacteria}. In the framework of statistical physics, it is of interest to look for an analytically tractable and minimal model for such phenomena \cite{V-Solon,Lane2,Hydro}. Apart from such model-based studies, in the context of network theory, the concept of centrality plays an important role in studying a given network induced by a many-body system consisting of inhomogeneous agents. Centrality has been used particularly in the literature of social network analysis to characterize which element on a network is the most influential. Depending on the purpose of network analysis, various measures of centrality such as degree centrality, closeness centrality, PageRank, eigenvector centrality, etc., have been proposed and found to be useful to characterize network structures \cite{Freeman, Bonacich, Barrat,Newman}. As an example of inhomogeneous agents, primates often live in groups, interacting with the other members \cite{primatebook}. It has been reported for a primate species that individuals, which we call agents, with high rank in social dominance tend to have high rank also in the eigenvector centrality of the adjacency matrix of a graph constructed from the agents' positions \cite{pecentrality1,pecentrality2}.
In particular, Japanese macaques form groups living together, and each agent in a group has its rank in the linear social dominance hierarchy of the group, leading to rank-dependent repulsion between two agents \cite{supplanting,Nakagawa}. This is the so-called {\it supplanting} phenomenon, on which we mainly focus. In this paper, we propose a new type of nonequilibrium lattice model, which is inspired by the supplanting phenomenon occurring between two agents in a group of Japanese macaques. The main objective of this paper is to show that a new type of macroscopic correlation appears when a supplanting process, which is one class of interaction with broken detailed balance, is added to an equilibrium system. It turns out that this problem can be mapped to computing a type of centrality, which we call {\em overlap centrality}, for a complete graph derived naturally from the correlations of the agents' positions. This paper consists of five sections. In Section \ref{Model}, we introduce the class of models breaking permutation symmetry that we study in this paper. In Section \ref{Potts}, focusing on the case of the model where the interaction induced by the Potts energy is assumed, we provide a brief review of the equilibrium properties, introduce overlap centrality, and compute it by exact diagonalization of the transition matrix. In Section \ref{Rig}, we provide the proof of the main result that overlap centrality, characterizing how often a given agent overlaps with the other agents, shows perfect correlation with respect to the ranking of the given agent in the zero-supplanting limit. This result holds rather generally and is not limited to the case where the Potts energy is assumed as the source of the equilibrium interaction. Further, a conjecture about the existence of a singularity of the correlation is discussed for the case of the Potts energy. In Section \ref{CR}, as concluding remarks, we summarize the results and some subjects for future consideration.
\section{Model}\label{Model} Let $N \ge 2$ be the number of agents, and $L \ge 3$ be the length of the one-dimensional lattice $X \coloneqq {\mathbb Z}/L{\mathbb Z} = \bkb{0,1,\ldots,L-1}$. Let us denote by $i\in\{1,2,\cdots,N\}$ an agent, and by the integer $x_i\in X$ the position of agent $i$. We also regard the number $i$ identifying an agent as the \textit{rank} of that agent. We say that rank $i$ is \textit{higher} (\textit{lower}) than rank $j$ if $i<j$ ($i>j$); for example, rank $i$ is higher than rank $i+1$. Let us also write the collection of elements $a_i$ labelled by $1 \le i \le n$ as $(a_i)_{i=1}^{n}$ or the bold symbol $\bv{a}$. In particular, we write the set of positions of the agents as $\bvec{x} = (x_i)_{i=1}^{N}$. Hereafter, we call $\bv{x} = (x_i)_{i=1}^{N}$ a \textit{configuration}. We consider a hopping map $f_i^{\pm}$ such that $f_i^{\pm}\bvec{x}\coloneqq (x_j\pm\delta(i,j))_{j=1}^N$, where $\delta(i,j)$ is the Kronecker delta. Note that the periodic boundary condition in terms of positions is automatically assumed by the definition of $X$. \subsection{Equilibrium dynamics}\label{sct:Equilbrium_dynamics} We consider a general class of energy functions $E(\bm{x}) = E(x_1, x_2, \dots, x_N)$ that are permutation symmetric in the following sense: \begin{equation}\label{sym} E(x_1, x_2, \dots, x_N) = E(x_{\sigma(1)}, x_{\sigma(2)}, \dots, x_{\sigma(N)}), \end{equation} for any permutation $\sigma \in \mathfrak{S}_N$ of $N$ elements, where $\mathfrak{S}_N$ is the symmetric group of order $N$. In addition, let $\beta$ be a parameter determining the magnitude of the energy, including its sign. As an example of the models belonging to the above class, one can consider the following $L$-state Potts energy $E(\bv{x})$ on the complete graph, where each agent is connected with all the other agents \cite{Potts}: \begin{align}\label{eq:PottsEnergy} E(\bv{x}) = -\frac{2(L-1)\log (L-1)}{L-2}\dfrac{1}{2N}\sum_{i=1}^N\sum_{j=1}^N\delta(x_i,x_j).
\end{align} This case means that an agent interacts with the other agents only if they overlap. In this sense, this Potts model on the complete graph is equivalent to the agents with an on-site interaction in one dimension, which may be a simple model to describe interacting agents. The coefficient of \eqref{eq:PottsEnergy} is adjusted so that the phase transition point $\beta_c$ in the equilibrium state is equal to 1, which we will discuss in more detail in Section \ref{Computation of partition function at equilibrium}. Let us consider a Markov process with discrete time $t$, where during one time step between $t$ and $t+1$, only one of the following possible transitions may occur. The transition probability $T_0(\bv{x}\to f_i^{\pm}\bv{x})$ from each configuration $\bv{x}$ to the configuration $f_i^{\pm}\bv{x}$ for any agent $i$ is \begin{align} \label{eq0:T0coeff} T_0(\bv{x}\to f_i^{\pm}\bv{x}) = \dfrac{1}{2N}\dfrac{1}{1+\exp \left(\beta \mathcal{D}_i^{\pm}E(\bv{x})\right) }, \end{align} where \begin{align} &\mathcal{D}_i^{\pm}E(\bv{x}) := E(f_i^{\pm }\bv{x})-E(\bv{x}). \end{align} It follows that the joint probability $P_t(\bv{x})$ of configuration $\bv{x}$ at time $t$ satisfies the following master equation: \begin{align} P_{t+1}(\bv{x}) = &\sum_{\bv{x}'\neq\bv{x}}P_t(\bv{x}')T_0(\bv{x}'\to\bv{x})\nonumber\\ &+ P_t(\bv{x}) \Big( 1-\sum_{\bv{x}'\neq \bv{x}}T_0(\bv{x}\to\bv{x}') \Big), \label{mas} \end{align} where the summation over $\bv{x'}$ is taken over all possible configurations such that $T_0(\bv{x}\to\bv{x}')$ and $T_0(\bv{x}'\to\bv{x})$ are defined above.
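To make the discrete-time rule concrete, the equilibrium dynamics can be sampled directly. The following sketch (Python; the function names are ours, not from the paper) draws one step according to \eqref{eq0:T0coeff} for the Potts energy \eqref{eq:PottsEnergy}: a uniformly chosen agent and direction are proposed with probability $1/(2N)$ per pair, and the hop is accepted with the heat-bath probability $1/(1+\exp(\beta \mathcal{D}_i^{\pm}E))$.

```python
import math
import random

def potts_energy(x, L):
    """Potts energy on the complete graph:
    E(x) = -K/(2N) * sum_{i,j} delta(x_i, x_j), K = 2(L-1)log(L-1)/(L-2)."""
    N = len(x)
    K = 2 * (L - 1) * math.log(L - 1) / (L - 2)
    overlap = sum(xi == xj for xi in x for xj in x)
    return -K * overlap / (2 * N)

def equilibrium_step(x, beta, L, rng=random):
    """One discrete time step: propose a hop of a uniformly chosen agent in a
    uniformly chosen direction, and accept it with probability
    1/(1 + exp(beta * dE)); otherwise the configuration stays put."""
    N = len(x)
    i = rng.randrange(N)
    d = rng.choice((+1, -1))
    y = list(x)
    y[i] = (y[i] + d) % L                    # periodic boundary, X = Z/LZ
    dE = potts_energy(y, L) - potts_energy(x, L)
    if rng.random() < 1.0 / (1.0 + math.exp(beta * dE)):
        return y
    return x
```

Since the acceptance probability satisfies detailed balance with respect to the Gibbs weight, long runs of `equilibrium_step` sample the canonical distribution of the given energy.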
The Gibbs distribution \begin{align} P_{\mathrm{can}}(\bv{x}):=\dfrac{1}{Z_{N}(\beta)}\exp\left( -\beta E(\bv{x}) \right), \label{eq:GibbsDistr} \end{align} where $Z_{N}(\beta):=\sum_{\bv{x}}\exp(-\beta E(\bv{x}))$, is the stationary solution $P_{\mathrm{st}}(\bv{x})$ of the master equation (\ref{mas}), satisfying \begin{align} \sum_{\bv{x}'}P_{\mathrm{st}}(\bv{x}')T_0(\bv{x}'\to\bv{x}) =P_{\mathrm{st}}(\bv{x})\sum_{\bv{x}'\neq \bv{x}}T_0(\bv{x}\to\bv{x}'), \end{align} because the Gibbs distribution satisfies the detailed balance condition: \begin{align} P_{\mathrm{can}}(\bv{x})T_0(\bv{x}\to\bv{x'}) =P_{\mathrm{can}}(\bv{x'})T_0(\bv{x'}\to\bv{x}), \end{align} for any pair $\bv{x},\bv{x'}$ realized by the above dynamics. \begin{figure} \includegraphics[width=6cm,clip]{Images/Fig1.pdf} \caption{(Color online) Schematic illustration of a transition step from a configuration described by (a) to a configuration described by (c) in the model. From (a) to (b), agent $2$ hops to the right site, and from (b) to (c), agent $3$ among two possibly supplanted agents, which are $3$ and $5$, is supplanted by agent $2$ and hops to the left site. The four arrows in (b) mean that, in the above transition, the two agents $3$ and $5$ in $S(\bm{x}, 2, +)$ could be supplanted, and the direction of the supplanting process could be either to the left or to the right.} \label{pic} \end{figure} \subsection{Broken detailed balance by supplanting}\label{sct:BDB_by_supplanting} Next, we consider adding to the equilibrium dynamics introduced above a supplanting process, which breaks the detailed balance condition. Let us imagine a process in which an agent $i$ hops to a position $y_i=x_i\pm 1$ from position $x_i$ in accordance with the equilibrium transition probability $T_0$, and then an agent at $y_i$ with a rank $j$ such that $i<j$ is stochastically forced to hop to the position $y_i +1$ for $d=+$, or $y_i -1$ for $d=-$.
In this process, an agent with a higher rank $i$ supplants another agent with a lower rank $j$. This is why we call such a process the \textit{supplanting process}. Note that the configuration $\bvec{x}$ turns into the configuration $f_j^d f_i^\pm \bvec{x}$ through the whole supplanting process. See Fig.\ \ref{pic} for a graphical illustration of the process. For convenience, let us introduce the following set \begin{equation}\label{eq:DefOfS} S(\bm{x}, i, \dPM) \coloneqq \{ i < j \le N \mid x_j = x_i \dPM 1 \}, \end{equation} which is the set of all agents whose rank is lower than $i$ located at position $x_i \dPM 1$. That is, those agents could be supplanted by agent $i$ when it hops to position $x_i\pm 1$. Suppose that every agent in $S(\bvec{x},i,\pm)$ has the same chance to be chosen as the supplanted one, and that the direction $d$ of hopping by supplanting is determined with equal probability $1/2$. Explicitly, we define the transition probability from $\bv{x}$ to $f_j^{d}f_i^{\pm}\bv{x}$ for $d\in\{+,-\}$ and $j\in S(\bm{x}, i, \dPM)$: \begin{align} \label{eq0:Tcoeff} T(\bv{x}\to f_j^{d}f_i^{\pm }\bv{x})= \dfrac{p}{2}\frac{1}{1+p\# S(\bm{x}, i, \dPM)} T_0(\bv{x}\to f_i^{\pm}\bv{x}), \end{align} where $p\in \mathbb{R}_+\coloneqq[0,\infty)$ is the supplanting-rate parameter and $\#S$ is the number of elements of a set $S$. When $p\to 0$, supplanting rarely occurs, and when $p\to\infty$, supplanting almost always occurs. Note that at most one agent is supplanted in a single transition regardless of the value of $p$. On the other hand, for the case of $j\notin S(\bm{x}, i, \dPM)$, it holds that \begin{align} T(\bv{x}\to f_j^{d}f_i^{\pm}\bv{x})=0. \end{align} Further, the probability of transition $T(\bv{x}\to f_i^{\pm}\bv{x})$ is modified from $T_0$ as \begin{align} \label{eq:Tnonmove} T(\bv{x}\to f_i^{\pm}\bv{x}) = \frac{1}{1+p\# S(\bm{x}, i, \dPM)} T_0(\bv{x}\to f_i^{\pm}\bv{x}).
\end{align} In total, the following holds: \begin{align}\label{eq:T0andT} &T_0(\bv{x}\to f_i^{\pm}\bv{x}) \nonumber \\ &= T(\bv{x}\to f_i^{\pm}\bv{x}) +\sum_{\substack{j\in S(\bm{x}, i, \dPM) \\ d = \pm}}T(\bv{x}\to f_j^{d}f_i^{\pm}\bv{x}). \end{align} Then, the master equation for the joint probability $P_t(\bv{x})$ governing the above stochastic process is as follows: \begin{align}\label{eq:mas2} P_{t+1}(\bv{x})= &\sum_{\bv{x}'}P_t(\bv{x}')T(\bv{x}'\to\bv{x})\nonumber\\ &+P_t(\bv{x})(1-\sum_{\bv{x}'\neq \bv{x}}T(\bv{x}\to\bv{x}')), \end{align} where the summation over $\bv{x'}$ is taken over all possible configurations such that $T(\bv{x}'\to\bv{x})$ and $T(\bv{x}\to\bv{x}')$ are defined. Since $T(\bm{x} \to f_i^{\pm} \bm{x})$ is positive for any $\bm{x}$ and $i$ with finite $\beta, p, N, L$, and $E$, any state can be reached from any other state in this stochastic process; that is, the stochastic process defined above is an irreducible Markov process. For $p>0$, the supplanting process does not satisfy the detailed balance condition because of its asymmetry with respect to the agents' rank; when a supplanting occurs in one step, i.e., an agent supplants another agent, the reverse process never occurs in any single step. Thus, the stationary solution $P_{\mathrm{st}}(\bv{x})$ is obviously no longer the Gibbs distribution of a given energy function. Note that in the limit of $p\to 0$, the detailed balance condition in terms of a given energy function is recovered. Thus, we can also regard $p$ as the strength of the violation of the detailed balance condition. \section{The case of the Potts energy}\label{Potts} In this section, we focus on the case of the Potts energy defined by (\ref{eq:PottsEnergy}). We briefly review the known equilibrium properties and compute the nonequilibrium stationary distribution by exact diagonalization of the transition matrix corresponding to the master equation (\ref{eq:mas2}).
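For small systems, this route can be sketched in code. The fragment below (Python with NumPy; the function names are ours and serve only as an illustration) builds the full $L^N\times L^N$ transition matrix from the rules \eqref{eq0:T0coeff}, \eqref{eq0:Tcoeff}, and \eqref{eq:Tnonmove} for the Potts energy \eqref{eq:PottsEnergy}, and extracts the stationary distribution by power iteration.

```python
import itertools
import math
import numpy as np

def potts_energy(x, L):
    # Potts energy on the complete graph; the coefficient K is chosen
    # so that the equilibrium transition point is beta_c = 1
    K = 2 * (L - 1) * math.log(L - 1) / (L - 2)
    return -K * sum(xi == xj for xi in x for xj in x) / (2 * len(x))

def transition_matrix(N, L, beta, p):
    """Dense transition matrix over all L**N configurations, combining the
    heat-bath hop probabilities with the supplanting rule."""
    configs = list(itertools.product(range(L), repeat=N))
    index = {c: k for k, c in enumerate(configs)}
    M = np.zeros((L ** N, L ** N))
    for c in configs:
        row = index[c]
        for i in range(N):
            for d in (+1, -1):
                y = list(c)
                y[i] = (y[i] + d) % L
                dE = potts_energy(y, L) - potts_energy(c, L)
                T0 = 1.0 / (2 * N * (1.0 + math.exp(beta * dE)))
                M[row, row] += 1.0 / (2 * N) - T0      # rejected proposal
                # lower-ranked agents on the target site may be supplanted
                S = [j for j in range(i + 1, N) if c[j] == y[i]]
                base = T0 / (1.0 + p * len(S))
                M[row, index[tuple(y)]] += base        # hop without supplanting
                for j in S:
                    for d2 in (+1, -1):                # supplanted agent pushed left/right
                        z = list(y)
                        z[j] = (z[j] + d2) % L
                        M[row, index[tuple(z)]] += 0.5 * p * base
    return M, configs

# stationary distribution by power iteration (row vector P with P = P M)
N, L, beta, p = 2, 3, 0.5, 0.2
M, configs = transition_matrix(N, L, beta, p)
P = np.full(L ** N, 1.0 / L ** N)
for _ in range(5000):
    P = P @ M
```

For $p=0$ the stationary vector reproduces the Gibbs distribution exactly, while for $p>0$ it departs from it, reflecting the broken detailed balance.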
Further, we introduce overlap centrality and its correlation with agents' rank, which are calculated using the computed stationary distribution. \subsection{Computation of partition function at equilibrium}\label{Computation of partition function at equilibrium} As a preliminary, we consider the equilibrium case with $p=0$. The equilibrium ferromagnetic Potts model on the complete graph has two phases; one is the ordered phase for stronger interaction, and the other is the disordered phase for weaker interaction, which are separated by a first-order transition point if $L\ge 3$ \cite{Potts}. In the context of this paper, the ordered state can be regarded as the condensate state of agents, that is, the state where all agents are located at the same position. Let us look at the computation of the above results in more detail. When $p=0$ and the interaction strength $\beta$ is positive ($\beta>0$), corresponding to the case of attractive interaction, the partition function $Z_N(\beta):=\sum_{\bv{x}} \exp\big({-}\beta E(\bv{x}) \big)$ for $N\to\infty$ can be explicitly expressed, by which the equilibrium transition point is computed exactly. Concretely, by performing the Hubbard--Stratonovich transformation \cite{mean-field_Potts2013, mean-field_Potts2020} with \begin{align} &\exp\bkc{\frac{K\beta}{2N}\sum_{x\in X}\bka{\sum_{i=1}^N \delta(x_i,x)}^2}\notag\\ &= \prod_{x\in X}\sqrt{\frac{NK\beta}{2\pi}}\int_{\mathbb R}\mathrm{d} q\exp\bkc{-\frac{NK\beta}{2}q^2+K\beta q\sum_{i=1}^N\delta(x_i,x)}, \end{align} we obtain \begin{align} &Z_N(\beta)= \bka{\frac{NK\beta}{2\pi}}^{L/2}\int_{\mathbb{R}^{L}}\mathrm{d}^{L}q \exp \left(-N\beta f_\beta(\bv{q}) \right),\\ &f_\beta(\bv{q})=\frac{K}{2}\sum_{j\in X}q_{j}^{2}-\beta^{-1}\log\bka{\sum_{j\in X} \exp\bka{K\beta q_{j}}}, \end{align} where \begin{align} K = \frac{2(L-1)\log(L-1)}{L-2}.
\end{align} For $N\gg 1$, the minimal value of $f_\beta(\bvec{q})$ as a function of the order parameter $\bvec{q}$ behaves effectively as the free energy density of the Potts model as follows: \begin{align} \lim_{N\to\infty}\frac{-\beta^{-1}\log Z_N(\beta)}{N} = \min_{\bvec{q}\in{\mathbb R}^L}f_\beta(\bvec{q}). \end{align} Taking $\frac{\partial f_\beta}{\partial q_i}(\bvec{q})=0$ to minimize $f_\beta(\bvec{q})$, we obtain the stationary condition \begin{align}\label{eq:stationary_cond2} q_i\exp(-K\beta q_i) = \Big(\sum_{j\in X}\exp(K\beta q_j)\Big)^{-1} \end{align} for each $i\in X$. From \eqref{eq:stationary_cond2}, we see that \begin{align}\label{eq:ab_constraint} q_i\exp(-K\beta q_i) =q_j \exp(-K\beta q_j), \end{align} for any $i, j \in X$. Since the equation $q e^{-K\beta q} = c$ has at most two real solutions for a constant $c > 0$, there are at most two real numbers such that each $q_i$ is equal to one of them. From \eqref{eq:stationary_cond2}, we also have \begin{align}\label{eq:q_constraint} \sum_{j\in X}q_j=1. \end{align} Thus, a necessary condition for the order parameter $\bvec{q}$ to minimize $f_\beta(\bvec{q})$ is as follows: consistently with \eqref{eq:ab_constraint} and \eqref{eq:q_constraint}, one of the following conditions (i) and (ii) is satisfied: \begin{enumerate}[(i)] \item It holds that $\bm{q} = \tilde{\bvec{q}}^{(0)} \coloneqq \dfrac{1}{L} (1, 1, \dots, 1).$ \item There exist an integer $n\in\bkb{1,\dots,L-1}$ and two distinct real numbers $a_n = a_n(\beta), b_n = b_n(\beta)$ satisfying \begin{gather} \label{eq:eq_of_ab} a_n \exp(-K\beta a_n) = b_n \exp(-K\beta b_n), \\ na_n+(L-n)b_n = 1. \label{eq:eq_of_ab2} \end{gather} Moreover, $n$ components of $\bvec{q}$ are $a_n$ and the remaining $(L-n)$ components of $\bvec{q}$ are $b_n$.
\end{enumerate} For example, if $n=1$, the solutions are described by $\bvec{q} = \tilde{\bvec{q}}^{(i)}$ for $1 \le i \le L$, where \begin{align} \tilde{q}^{(i)}_j \coloneqq \begin{dcases} a_1(\beta) & (\text{if $j = i$}) \\ b_1(\beta) & (\text{if $j \neq i$}) \end{dcases}. \end{align} In Ref.\ \cite{Ellis}, it has been shown that the set of the global minimum points $\bvec{q}$ of $f_\beta(\bvec{q})$ corresponds to the case of $\bvec{q} = \tilde{\bvec{q}}^{(0)}$ or $n=1$, depending on $\beta$. Concretely, the set is described as \begin{align} \begin{dcases} \bkb{\tilde{\bvec{q}}^{(1)}(\beta),\tilde{\bvec{q}}^{(2)}(\beta),\ldots,\tilde{\bvec{q}}^{(L)}(\beta)} & (\text{if}\ 0<\beta<1) \\ \bkb{\tilde{\bvec{q}}^{(0)},\tilde{\bvec{q}}^{(1)}(1),\ldots,\tilde{\bvec{q}}^{(L)}(1)} & (\text{if}\ \beta=1) \\ \bkb{\tilde{\bvec{q}}^{(0)}} & (\text{if}\ \beta>1). \\ \end{dcases} \end{align} Note that the equations \eqref{eq:eq_of_ab} and \eqref{eq:eq_of_ab2} with $n=1$ determine the value $a_1 \neq b_1$ uniquely, and the resulting functions $a_1(\beta), b_1(\beta)$ are differentiable in the region $0 < \beta < 1$. The expectation value of the energy density is also expressed as \begin{align}\label{eq:energy_density} \lim_{N\to\infty}\frac{\braket{E}_{\text{can}}}{N} = \begin{dcases} \frac{\partial}{\partial\beta}\beta f_\beta(\tilde{\bvec{q}}^{(1)}(\beta)) & (\text{if}\ 0<\beta<1) \\ \frac{\partial}{\partial\beta}\beta f_\beta(\tilde{\bvec{q}}^{(0)}) & (\text{if}\ \beta>1). \end{dcases} \end{align} Thus, one can show that the energy density \eqref{eq:energy_density} exhibits a discontinuous jump at $\beta=1$, which is the phase transition point of the Potts model. \subsection{State vector description} Let us move on to the model with general $p \ge 0$. In this case, we need to explicitly consider the dynamics in order to compute the stationary distribution of the model. We would like to describe the stochastic process by transition matrices built from some basic linear operators.
For a more detailed description and derivation, see Appendix \ref{sct:TransMatr}. Let $H_X$ be the one-agent state space. It is considered as a complex vector space with inner product $\braket{\cdot | \cdot}$, and has an orthonormal basis $\{ |x \rangle \mid x \in X \}$ over the set of complex numbers $\mathbb{C}$. Then the $N$-times self-tensored space $H_X^{\otimes N}$ can be identified with the $N$-agent state space. For a configuration of agents $\bm{x} = (x_1, x_2, \dots, x_N) \in X^N$, the corresponding state vector is $|\bm{x} \rangle = |x_1\rangle \otimes |x_2 \rangle \otimes \dots \otimes |x_N\rangle$. The space $H_X^{\otimes N}$ has a natural inner product induced by $\langle \cdot | \cdot \rangle$, and the set of the state vectors $\{|\bvec{x} \rangle\}_{\bvec{x} \in X^N}$ is an orthonormal basis. We use the same symbol $\langle \cdot | \cdot \rangle$ to write the inner product on $H_X^{\otimes N}$. We associate the probability $P(\bvec{x})$ for agents' configuration $\bvec{x}$ with a state $|P\rangle\in H_X^{\otimes N}$ as follows: \alit{ \braket{\bvec{x}|P} = P(\bvec{x}), } or \alit{ |P\rangle = \sum_{\bvec{x}\in X^N}P(\bvec{x})|\bvec{x}\rangle. } For a given state $|P_t\rangle$ at time $t$, the time evolution of the state is described as follows: \begin{align} |P_{t+1}\rangle=\TransOp{}{}|P_t\rangle, \label{eq:Transition} \end{align} where $\TransOp{}{}$ is a transition matrix on $H_X^{\otimes N}$ such that \eqref{eq:Transition} is equivalent to the master equation \eqref{eq:mas2} for the joint probability. Similarly, $\TransOp{0}{}$ is the transition matrix $\TransOp{}{}$ when $p=0$. We introduce some basic operators.
A hopping map $f_i^\pm$ to the right (resp.\ the left) corresponds to the operator $\ShiftOp{i}{\pm}$ defined by: for $\bm{x} = (x_1, x_2, \dots, x_N) \in X^N$, \begin{align} \ShiftOp{i}{\dP} | \bm{x} \rangle &= |\Shift{i}{\dP} \bm{x} \rangle \notag \\ &= \ket{x_1} \otimes \ket{x_2} \otimes \dots \otimes \ket{x_{i} + 1} \otimes \dots \otimes \ket{x_N}, \\ \ShiftOp{i}{\dM} | \bm{x} \rangle &= |\Shift{i}{\dM} \bm{x} \rangle \notag \\ &= \ket{x_1} \otimes \ket{x_2} \otimes \dots \otimes \ket{x_{i} - 1} \otimes \dots \otimes \ket{x_N}. \end{align} Next we define projection operators. For a site $y \in X$ and an agent $1 \le i \le N$, we define $\CheckOp{i}{y}$ by \begin{equation} \CheckOp{i}{y} |\bm{x}\rangle = \begin{dcases} |\bm{x} \rangle & \text{(if $x_i = y$)} \\ 0 & \text{(if $x_i \neq y$)}, \end{dcases} \end{equation} for $\bm{x} = (x_1, x_2, \dots, x_N) \in X^N$. Then for a configuration $\bm{y} \in X^N$, we define the operator $\CheckOp{}{\bm{y}}$ as $\prod_{1 \le i \le N} \CheckOp{i}{y_i}$. Finally, we write $\mathop{\widehat{\mathrm{id}}}\nolimits_H$ (resp.\ $\mathop{\widehat{\mathrm{id}}}\nolimits_H^{\otimes N}$) for the identity operator on $H_X$ (resp.\ $H_X^{\otimes N}$). Using these notations, we can describe the transition matrices $\TransOp{0}{}$ and $\TransOp{}{}$. First, $\TransOp{0}{}$ is \begin{align} \TransOp{0}{} &= \sum_{\substack{1 \le i \le N \\ d = \pm}} \sum_{\bm{x} \in X^N} \Big( T_0(\bm{x} \to \Shift{i}{d} \bm{x}) \ShiftOp{i}{d} \nonumber \\ &\qquad + \left( \frac{1}{2N}-T_0(\bm{x} \to \Shift{i}{d} \bm{x}) \right) \mathop{\widehat{\mathrm{id}}}\nolimits_H^{\otimes N} \Big) \CheckOp{}{\bm{x}}.
\end{align} Then $\TransOp{}{}$ is written as \begin{widetext} \begin{align} \TransOp{}{} &= \sum_{\substack{1 \le i \le N \\ d = \pm}} \sum_{\bm{x} \in X^N} \Bigg( \sum_{\substack{j \in S(\bm{x}, i, d) \\ d' = \pm}} T(\bm{x} \to \Shift{j}{d'}\Shift{i}{d} \bm{x}) \ShiftOp{j}{d'}\ShiftOp{i}{d} + T(\bm{x} \to \Shift{i}{d} \bm{x}) \ShiftOp{i}{d} + \Big(\frac{1}{2N}-T_0(\bm{x} \to \Shift{i}{d} \bm{x})\Big) \mathop{\widehat{\mathrm{id}}}\nolimits_H^{\otimes N} \Bigg) \CheckOp{}{\bm{x}} \label{eq0:Trep} \\ &= \TransOp{0}{} + \sum_{\substack{1 \le i \le N \\ d = \pm}} \sum_{\bm{x} \in X^N} \sum_{\substack{j \in S(\bm{x}, i, d) \\ d' = \pm}} T(\bm{x} \to \Shift{j}{d'}\Shift{i}{d} \bm{x}) (\ShiftOp{j}{d'} - \mathop{\widehat{\mathrm{id}}}\nolimits_H^{\otimes N}) \ShiftOp{i}{d} \CheckOp{}{\bm{x}}, \label{eq1:Trep} \end{align} \end{widetext} where we used \eqref{eq:T0andT}. For the definition of the coefficients, see \eqref{eq0:T0coeff}, \eqref{eq0:Tcoeff}, and \eqref{eq:Tnonmove}. In this setting, the transition matrix $\TransOp{}{}=\TransOp{}{}(\beta,p)$ is naturally regarded as a linear operator on $H_X^{\otimes N}$. Let $\ket{P(\beta,p)}$ be the unique stationary state of $\TransOp{}{}(\beta,p)$ satisfying \alit{\label{eq:stationary state for T} \TransOp{}{}(\beta,p)\ket{P(\beta,p)} = \ket{P(\beta,p)}. } Since $\TransOp{}{}$ is irreducible, the stationary state $\ket{P(\beta,p)}$ exists and is uniquely determined by the Perron--Frobenius theorem. For later discussion, let us consider the symmetry of the transition matrices $\TransOp{0}{}$ and $\TransOp{}{}$. We introduce permutation operators $\PermOp{\sigma}$ on $H_X^{\otimes N}$.
For a given element $\sigma \in \mathfrak{S}_N$ of the symmetric group $\mathfrak{S}_N$ of agents, we define \begin{equation}\label{eq:PermOpDefinition} \PermOp{\sigma}\ket{\bvec{x}} \coloneqq \ket{\sigma^{-1}(\bvec{x})}, \end{equation} where $\sigma^{-1}(\bvec{x})\coloneqq\bka{x_{\sigma^{-1}(j)}}_{j=1}^N$. Then, we have \begin{gather}\label{eq:commuT0} \PermOp{\sigma}^\dag\TransOp{0}{}\PermOp{\sigma} = \TransOp{0}{},\\ \label{noncommuT} \PermOp{\sigma}^\dag\TransOp{}{}\PermOp{\sigma} \neq \TransOp{}{}, \end{gather} where $\PermOp{\sigma}^\dag$ is the Hermitian conjugate of $\PermOp{\sigma}$ (see \eqref{eq:TotalT0Permutation} and \eqref{eq:NotCommutesTransOp}). Note that $\PermOp{\sigma}$ is unitary: $\PermOp{\sigma}^\dag=\PermOp{\sigma}^{-1}=\PermOp{\sigma^{-1}}$. In the sense of relation \eqref{eq:commuT0}, the equilibrium dynamics described by $\TransOp{0}{}$ preserves permutation symmetry. In contrast, the full dynamics generated by $\TransOp{}{}$ breaks the permutation symmetry, as expressed by \eqref{noncommuT}. \subsection{Exact diagonalization of transition matrices} \begin{figure} \includegraphics[width=8cm,clip]{Images/Fig2.pdf} \caption{(Color online) Probabilities of the configurations determined by the stationary distribution for $\beta=2$ (circles), $\beta=0$ (rectangles), and $\beta=-1$ (diamonds); $p=0$ (blue or the left at each column) and $p=10$ (red or the right); $N=L=3$. The state $(0,0,0)$ means that all three agents are located at the same site. The state $(0,1,2)$ means that each of the three agents is located at a different site. The other three states mean that two agents are located at the same site and the remaining agent at a different site.} \label{ff} \end{figure} We perform exact diagonalization of the transition matrix $\TransOp{}{}$ to obtain the eigenvalues and the corresponding eigenvectors.
Thus, the stationary distribution corresponds to the eigenvector whose eigenvalue has the maximum real part, which is $1$. Note that the number of states is $L^N$, which grows exponentially with $N$. As shown in Fig.\ \ref{ff} with $p=0$, at $\beta=0$ the joint probability of each configuration takes the same value. As $\beta$ is increased from $0$, the joint probability of the condensate configuration $(0,0,0)$ becomes much higher than that of the other configurations. Conversely, as $\beta$ is decreased from $0$, the joint probability of the configuration $(0,1,2)$, where all the agents are separated, becomes much higher than that of the other configurations. On the other hand, the joint probability of a configuration in which two agents overlap at the same site and the third is located at a different site, such as $(1,0,0), (0,1,0), (0,0,1)$, does not depend on the choice of the pair for $p=0$. When $p=10$, the joint probabilities of the configurations $(1,0,0)$, $(0,1,0)$, $(0,0,1)$ are distinct. Concretely, those probabilities with $\beta=2$ and $\beta=-1$ increase and decrease, respectively, in the order of $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$. This means that higher-ranked agents (resp.\ lower-ranked agents) tend to overlap more frequently for $\beta=2$ (resp.\ for $\beta=-1$). This can be interpreted as a typical consequence of the supplanting process. In order to discuss how the configuration is condensed, let us introduce the normalized expectation value of the Potts energy in terms of a probability distribution $P(\bvec{x})$ as follows: \begin{align}\label{eq:expectation_value_of_Potts_energy} M:= \frac{1}{N^2} \sum_{\bv{x}}\sum_{i=1}^N\sum_{j=1}^N\delta(x_i,x_j)P(\bv{x}). \end{align} Note that, by definition, $M$ attains its maximum value $1$ when $P(\bvec{x})=\prod_{k=1}^N\delta(x,x_k)$ for some fixed site $x$. In Fig.\ \ref{order}, using the stationary distribution computed by the exact diagonalization, $M$ is shown as a function of $p$ and $\beta$.
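As a sanity check of this definition of $M$, the following minimal sketch (hypothetical toy sizes; distributions stored as dictionaries mapping configurations to probabilities) evaluates $M$ directly for the two extreme distributions: the fully condensed one with $M=1$, and the uniform one, for which an elementary computation with independent uniform agents gives $M=1/N+(N-1)/(NL)$:

```python
from itertools import product

L, N = 3, 3  # hypothetical toy sizes

def potts_overlap_M(P):
    """M = (1/N^2) sum_x sum_{i,j} delta(x_i, x_j) P(x)."""
    total = 0.0
    for x, px in P.items():
        total += px * sum(1 for i in range(N) for j in range(N) if x[i] == x[j])
    return total / N**2

# Two extreme distributions:
uniform = {x: 1.0 / L**N for x in product(range(L), repeat=N)}
condensed = {(0,) * N: 1.0}  # all agents on the same site
```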
In relation to Section \ref{Computation of partition function at equilibrium}, for the equilibrium distribution corresponding to $p=0$, $M$ is rewritten by using $\braket{E}_{\text{can}}$ as \begin{align} M = -\frac{2}{NK}\sum_{\bv{x}}E(\bvec{x})\frac{e^{-\beta E(\bv{x})}}{Z_N(\beta)} = -\frac{2}{K}\frac{\braket{E}_{\text{can}}}{N}, \end{align} which means that $M$ is also discontinuous at the equilibrium phase transition point $\beta=1$ in the thermodynamic limit $N\to\infty$. \begin{figure} \includegraphics[width=8cm,clip]{Images/Fig3.pdf} \caption{(Color online) The $\beta$-dependence of the normalized expectation value of the energy $M=\dfrac{1}{N^2}\sum_{\bvec{x}}\sum_{i,j}\delta(x_i,x_j)\braket{\bvec{x}|P(\beta,p)}$ for various values of $p$ with $L=6, N=6$.} \label{order} \end{figure} \subsection{Overlap centrality and its correlation coefficient} \begin{figure} \includegraphics[width=7cm,clip]{Images/Fig4-1.pdf} \includegraphics[width=7cm,clip]{Images/Fig4-2.pdf} \caption{(Color online) Heatmap of the neighbor matrix $\mathcal{R}$ determined by the stationary distribution with the diagonal components left out. The color corresponds to $r_{ij}$ for each pair of agents $(i,j)$. Parameters: (top) $\beta=-1, p=1, L=6, N=6$. (bottom) $\beta=1, p=1, L=6, N=6$.} \label{nei} \end{figure} \begin{figure*} \begin{tabular}{cc} \begin{minipage}[t]{0.5\linewidth} \includegraphics[width=7cm,clip]{Images/Fig5-1.pdf} \includegraphics[width=7cm,clip]{Images/Fig5-2.pdf} \includegraphics[width=7cm,clip]{Images/Fig5-3.pdf} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \includegraphics[width=7cm,clip]{Images/Fig5-4.pdf} \includegraphics[width=7cm,clip]{Images/Fig5-5.pdf} \includegraphics[width=7cm,clip]{Images/Fig5-6.pdf} \end{minipage} \end{tabular} \caption{(Color online) The overlap centrality $O_i$ determined by the stationary distribution as a function of agent $i$ for $L=N=6$. The solid lines are linear regressions of $O_i$ on $i$.
The left and right columns correspond to the cases of $p=0.1$ and $p=10.0$, respectively. The top, center, and bottom rows correspond to the cases of $\beta=1$, $\beta=0$, and $\beta=-1$, respectively. For negative $\beta$, $\phi$ is close to $1$ regardless of the value of $p$.} \label{neigen} \end{figure*} In order to characterize the correlation among agents, we consider the neighbor matrix $\mathcal{R}:= (r_{ij})_{i,j}$ defined as \begin{align}\label{eq:definition_of_neighbor_matrix} r_{ij}:= \sum_{\bv{x}}\delta(x_i,x_j)P(\bv{x}). \end{align} For example, if $P(\bvec{x})$ is the uniform distribution then $r_{ij}=1/L$ for $i \neq j$, and if $P(\bvec{x})=\prod_{k=1}^N\delta(x,x_k)$ for some fixed site $x$ then $r_{ij}=1$. The latter gives the maximum value of $r_{ij}$. Note that $r_{jj}=1$ for any $j$. The entry $r_{ij}$ measures how often agents $i$ and $j$ are located at the same site under the distribution $P(\bvec{x})$. In Fig.\ \ref{nei}, we show heatmaps of neighbor matrices computed from the stationary distribution. They demonstrate that at $\beta=1$ the agents with higher rank have more overlaps with the other agents, and conversely, at $\beta=-1$ the agents with lower rank have more overlaps with the other agents. In order to quantify how often a given agent overlaps with the other agents in total, we introduce the overlap centrality as a function of rank $i$ using the entries of the neighbor matrix: \begin{align}\label{eq:definition_of_overlap_centrality} O_i:=\sum_{\substack{1\le j\le N\\j\neq i}}r_{ij}. \end{align} That is, we regard an agent with a larger value of the overlap centrality as more influential than agents with smaller values. When the probability distribution $P(\bvec{x})$ is permutation symmetric, i.e., $P(\sigma(\bvec{x}))=P(\bvec{x})$ for any $\sigma\in\mathfrak{S}_N$, the overlap centrality does not depend on rank $i$.
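The definitions of $r_{ij}$ and $O_i$ can be evaluated directly for small systems. The minimal sketch below (hypothetical toy sizes, with the uniform distribution as the test case) checks the values quoted above and the rank-independence of $O_i$ for this permutation-symmetric distribution:

```python
from itertools import product

L, N = 3, 3  # hypothetical toy sizes

def neighbor_matrix(P):
    """r_ij = sum_x delta(x_i, x_j) P(x), for P a dict config -> probability."""
    r = [[0.0] * N for _ in range(N)]
    for x, px in P.items():
        for i in range(N):
            for j in range(N):
                if x[i] == x[j]:
                    r[i][j] += px
    return r

def overlap_centrality(r):
    """O_i = sum_{j != i} r_ij."""
    return [sum(r[i][j] for j in range(N) if j != i) for i in range(N)]

uniform = {x: 1.0 / L**N for x in product(range(L), repeat=N)}
r_uni = neighbor_matrix(uniform)
O_uni = overlap_centrality(r_uni)
```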
Note that \begin{align}\label{MOrel} M=\dfrac{1}{N^2}\sum_{i=1}^N O_i+\dfrac{1}{N} \end{align} holds by definition. As shown in Fig.\ \ref{neigen}, the overlap centrality $O_i$ computed from the stationary distribution has a positive slope for the attractive interaction $\beta=1$ and a negative slope for the repulsive interaction $\beta=-1$. In order to quantify this trend, we measure the correlation coefficient $\phi$ of the overlap centrality with respect to agents' rank. This is defined as \begin{align}\label{eq:correlation_coefficient} \phi:=\frac{1}{N}\sum_{i=1}^N\dfrac{(O_i-\frac{1}{N}\sum_{j=1}^NO_j)(i-\frac{1}{N}\sum_{j=1}^Nj)}{s_Os_I}, \end{align} where we define \begin{align} s_O^2 &:= \frac{1}{N}\sum_{i}(O_i-\frac{1}{N}\sum_{j=1}^NO_j)^2, \\ s_I^2 &:= \frac{1}{N}\sum_{i}(i-\frac{1}{N}\sum_{j=1}^Nj)^2 = \frac{(N-1)(N+1)}{12}. \end{align} By definition, when $|\phi|=1$, $O_i$ is a linear function of $i$. Let us identify when the quantity $\phi$ is not defined. Since we set $N \ge 2$, the denominator of $\phi$ is zero exactly when $s_O^2 = 0$, that is, when $O_i$ is constant as a function of $i$. In this case, we say that the quantity $\phi$ is {\em singular}. In Fig.\ \ref{cc}, we show the $\beta$-dependence of the correlation coefficient $\phi$. In the weak supplanting condition $p\ll 1$, such as $p=0.1$ or $p=0.01$, $\phi$ is close to $+1$ for negative $\beta$. As $\beta$ increases, $\phi$ sharply changes its sign around $\beta=0$ and becomes close to $-1$ for positive $\beta$. We will discuss this behavior in a more general setting in Section \ref{Rig}.
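The definition of $\phi$, its singular case, and the identity \eqref{MOrel} can all be checked numerically. The sketch below (hypothetical toy sizes, and a randomly generated distribution in place of the stationary one) returns None in the singular case $s_O=0$:

```python
import random
from itertools import product

L, N = 3, 4  # hypothetical toy sizes
random.seed(1)

def phi(O):
    """Correlation coefficient of O_i with rank i (ranks 1..N);
    returns None in the singular case s_O = 0."""
    n = len(O)
    Obar = sum(O) / n
    ibar = (n + 1) / 2
    s_O2 = sum((o - Obar) ** 2 for o in O) / n
    s_I2 = (n - 1) * (n + 1) / 12
    if s_O2 == 0:
        return None
    cov = sum((O[i] - Obar) * ((i + 1) - ibar) for i in range(n)) / n
    return cov / (s_O2 ** 0.5 * s_I2 ** 0.5)

# A random distribution P over X^N, to test M = (1/N^2) sum_i O_i + 1/N.
configs = list(product(range(L), repeat=N))
w = [random.random() for _ in configs]
total_w = sum(w)
P = {x: wi / total_w for x, wi in zip(configs, w)}
r = [[sum(px for x, px in P.items() if x[i] == x[j]) for j in range(N)]
     for i in range(N)]
O = [sum(r[i][j] for j in range(N) if j != i) for i in range(N)]
M = sum(px * sum(1 for i in range(N) for j in range(N) if x[i] == x[j])
        for x, px in P.items()) / N**2
```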
\begin{figure} \includegraphics[width=7cm,clip]{Images/Fig6.pdf} \caption{(Color online) Correlation coefficient $\phi$ as a function of $\beta$ for $L=N=6$.} \label{cc} \end{figure} \section{Analytic results for overlap centrality}\label{Rig} Let us discuss the weak-supplanting regime $p\ll 1$, where general results on the overlap centrality are available. Although we have focused only on the Potts energy as $E(\bvec{x})$ in Section \ref{Potts}, hereafter we consider all models belonging to the general class of energy functions satisfying the permutation symmetry condition \eqref{sym}. The neighbor matrix $\mathcal{R}$ \eqref{eq:definition_of_neighbor_matrix}, the overlap centrality $O_i \; (1 \le i \le N)$ \eqref{eq:definition_of_overlap_centrality}, and the correlation coefficient $\phi$ of the overlap centrality with agents' rank \eqref{eq:correlation_coefficient} can be defined for the general cases in the same manner. These quantities are the main subject of this section and, indeed, of this paper. Hereafter, we use the state vector description, fix the parameters $\beta, L, N$ at arbitrary values, and consider the $p$-dependence of the dynamics unless otherwise specified. The main goal of this section is to derive perfect correlation, by which we mean that the correlation coefficient \eqref{eq:correlation_coefficient} satisfies $\phi=\pm 1$, in the weak-supplanting limit $p\to 0$, as long as $\phi$ is not singular in the sense mentioned after the definition of $\phi$. As a first step, we introduce auxiliary stochastic processes \eqref{tn0matrix}, from which the transition matrix of the model can be completely reconstructed. Then, using this decomposition property \eqref{eq:supplanting_decomposition}, we construct another decomposition \eqref{eq:BetaDecomp0} of the transition matrix, which we call {\it beta decomposition}, whose asymptotic behavior as $p\to 0$ can be rigorously estimated.
Note that the assumption \eqref{sym} is essential in the derivation of the key properties \eqref{eq:PermCanVec} and \eqref{eq:CompBOp}. \subsection{Decompositions of transition matrix}\label{sct:Decomp} In this subsection, we introduce the \textit{beta decomposition} \eqref{eq:BetaDecomp0} of the operator $\TransOp{}{}$. This decomposition enables us to investigate the asymptotic behavior of $\TransOp{}{}$ for $p \ll 1$ thanks to the asymptotic property \eqref{eq:BetaEstimate1}. To describe it, we first introduce another decomposition, called the \textit{supplanting decomposition}. This one is relatively easy to describe and simplifies the description of the beta decomposition. For details, see Appendices \ref{sct:SDecomp} and \ref{sct:BetaDecomp}. Let us define partial sums of the second term in \eqref{eq1:Trep}. For an integer $1 \le n \le N-1$, we define \begin{equation}\label{tn0matrix} \begin{aligned} \TransOp{n}{} \coloneqq & \sum_{\substack{1 \le i \le N \\ d = \dPM}} \sum_{\substack{\bm{x} \in X^N}} \delta(\#S(\bm{x}, i, d) , n)\\ &\Bigg[\sum_{\substack{j \in S(\bm{x}, i, d) \\ d' = \dPM}} T(\bm{x} \to \Shift{j}{d'}\Shift{i}{d} \bm{x}) (\ShiftOp{j}{d'} - \mathop{\widehat{\mathrm{id}}}\nolimits_H^{\otimes N}) \Bigg] \ShiftOp{i}{d} \CheckOp{}{\bm{x}}. \end{aligned}\end{equation} This operator $\TransOp{n}{}$ is the second term in \eqref{eq1:Trep} with the indices $\bm{x}, i, d$ restricted to those with $\#S(\bm{x}, i, d) = n$. For any subset $\mathcal{S} \subseteq \{1, 2, \dots, N-1\}$, the matrix $\TransOp{0}{} + \sum_{i \in \mathcal{S}}\TransOp{i}{}$ is also a stochastic matrix. For example, the matrix $\TransOp{0}{} + \TransOp{n}{}$ represents the process in which supplanting occurs only when $\#S(\bm{x}, i, d) = n$. By definition, we have \begin{equation}\label{eq:supplanting_decomposition} \TransOp{}{} = \TransOp{0}{} + \TransOp{1}{} + \TransOp{2}{} + \dots + \TransOp{N-1}{}. \end{equation} This is a decomposition of the operator $\TransOp{}{}$, which we call the \textit{supplanting decomposition}.
Note that the coefficients $\langle \bm{y} | \TransOp{n}{} | \bm{x} \rangle$ of the $n$-th term $\TransOp{n}{}$ for $n \ge 1$ are estimated as \begin{equation}\label{eq:STermEstimate} | \langle \bm{y} | \TransOp{n}{} | \bm{x} \rangle | \le \frac{p}{2}. \end{equation} For details, see Appendix \ref{sct:nthCoeff}. In particular, the coefficients of $\TransOp{n}{}$ are $\mathcal{O}(p)$ as $p \to +0$. Moreover, since at most one of $\langle \bm{y} | \TransOp{n}{} | \bm{x} \rangle \; (1 \le n \le N-1)$ is non-zero for fixed $\bm{x}$ and $\bm{y}$, we also have \begin{equation} \label{eq:DifferenceOpEstimate} | \langle \bm{y} | (\TransOp{}{} - \TransOp{0}{} )| \bm{x} \rangle | \le \frac{p}{2}. \end{equation} Though we explicitly estimate the coefficients of $\TransOp{n}{}$ in (\ref{eq:STermEstimate}), the supplanting decomposition does not give an effective truncation of $\TransOp{}{}$ for small $p$. Thus, we look for another decomposition of $\TransOp{}{}$, \begin{equation} \label{eq:BetaDecomp0} \TransOp{}{} = \TransOp{0}{} + \BetaTerm{1} + \dots + \BetaTerm{N-1}, \end{equation} satisfying \begin{equation}\label{eq:BetaEstimate0} \langle \bm{y} | \BetaTerm{m} | \bm{x} \rangle = \mathcal{O}(p^{m}) \text{ as } p \to +0. \end{equation} If we find such an expansion, we have \begin{equation}\label{eq:BetaEstimate1} \TransOp{}{} = \TransOp{0}{} + \BetaTerm{1} + \dots + \BetaTerm{m} + \mathcal{O}(p^{m+1}) \end{equation} for $1 \le m \le N-1$. Finding such an expansion in practice is not straightforward. Nevertheless, through a rather tricky procedure shown in Appendix \ref{sct:PrfBetaDecomp}, one can prove that \eqref{eq:BetaDecomp0} and \eqref{eq:BetaEstimate0} are satisfied by the following definition of $\BetaTerm{m}$: \begin{align}\label{ummatrix} \BetaTerm{m} \coloneqq &\frac{(-1)^{m+1} B(m, 1+1/p)}{p} \nonumber \\ &\quad \sum_{m \le n \le N} \binom{n-1}{m-1}(1+np) \TransOp{n}{}, \end{align} for $1 \le m \le N-1$.
Here, $B(a,b)$ is the beta function and $\binom{n-1}{m-1}$ is the binomial coefficient. See \eqref{eq:BetaDecomp} and \eqref{eq:BetaDecompEstimate} of Appendix \ref{MATRIX} for the details of the derivation; see also Remark \ref{rem:Addendum} for the motivation of this decomposition. We call this expansion the \textit{beta decomposition}. Note that, for an integer $m \ge 1$, \begin{equation}\label{betafunction} B \left( m, 1+\frac{1}{p} \right) = \frac{(m-1)!p^{m}}{(1+p)(1+2p)\dots(1+mp)}. \end{equation} By substituting (\ref{tn0matrix}) and (\ref{betafunction}) into (\ref{ummatrix}), together with a series of transformations in Appendices \ref{sct:AnotherDescr} and \ref{sct:AnotherBetaDecomp}, we reach another representation of $\BetaTerm{m}$ as in \eqref{eq:SubOp} and \eqref{eq:UmDescr}: \begin{equation}\label{u1} \begin{aligned} \BetaTerm{m} = &\frac{(-1)^{m-1}(m-1)!p^m}{2(1+p)(1+2p)\dots(1+mp)} \\ &\times \sum_{\substack{1 \le i < i_1 < \dots < i_{m} \le N}} \left[ \sum_{1 \le k \le m} (\ShiftOp{i_k}{+} + \ShiftOp{i_k}{-} - 2\mathop{\widehat{\mathrm{id}}}\nolimits^{\otimes N}_H) \right] \\ &\qquad \times\CheckOp{i, i_1, \dots, i_m}{} \TransOp{0,\text{move}}{i}, \end{aligned} \end{equation} where we define \begin{align}\label{tmove} \TransOp{0,\text{move}}{i} &\coloneqq \sum_{d = \pm} \sum_{\bm{x} \in X^N} T_0(\bm{x} \to \Shift{i}{d} \bm{x}) \ShiftOp{i}{d} \CheckOp{}{\bm{x}}, \\ \CheckOp{i, i_1, \dots, i_m}{} &\coloneqq \sum_{x \in X} \CheckOp{i}{x} \CheckOp{i_1}{x} \dots \CheckOp{i_m}{x}. \end{align} This representation of $\BetaTerm{m}$ is suitable for the further calculations related to permutation symmetry in Section \ref{sct:Perfect_correlation}. In the following, $\BetaTerm{1}$ is the term we mainly consider in the weak-supplanting limit $p \to +0$. Recall that $\PermOp{\sigma}$ defined in \eqref{eq:PermOpDefinition} denotes the permutation operator corresponding to a permutation $\sigma \in \mathfrak{S}_N$.
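Both the closed form \eqref{betafunction} and the consistency of the beta decomposition with the supplanting decomposition reduce to scalar identities for the coefficients: treating the $\TransOp{n}{}$ as independent symbols, the total coefficient of each $\TransOp{n}{}$ in $\BetaTerm{1}+\dots+\BetaTerm{N-1}$ must equal $1$. A quick numerical check of both identities (a sketch, not part of the derivation):

```python
from math import comb, gamma

def beta_fn(a, b):
    """Beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def beta_product_form(m, p):
    """(m-1)! p^m / ((1+p)(1+2p)...(1+mp)), claimed equal to B(m, 1+1/p)."""
    num = gamma(m) * p**m  # gamma(m) = (m-1)!
    den = 1.0
    for k in range(1, m + 1):
        den *= 1 + k * p
    return num / den

def reconstruction_coeff(n, p):
    """sum_{m=1}^{n} (-1)^{m+1} B(m, 1+1/p)/p * C(n-1, m-1) * (1+n p):
    this is the total coefficient of T_n after resumming the U_m terms,
    and it must equal 1 for the beta decomposition to reproduce T - T0."""
    return sum((-1) ** (m + 1) * beta_fn(m, 1 + 1 / p) / p
               * comb(n - 1, m - 1) * (1 + n * p)
               for m in range(1, n + 1))
```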
In Appendices \ref{sct:PermOperators} and \ref{sct:TransFirst}, we obtain commutation relations between the permutation operators and the other operators: \begin{align} \PermOp{\sigma}\ShiftOp{i_1}{\dPM} &= \ShiftOp{\sigma(i_1)}{\dPM}\PermOp{\sigma}, \label{eq:CommPerm1} \\ \PermOp{\sigma}\CheckOp{i_0,i_1}{} &= \CheckOp{\sigma(i_0),\sigma(i_1)}{}\PermOp{\sigma}, \label{eq:CommPerm2} \\ \PermOp{\sigma}\TransOp{0,\text{move}}{i_0} &= \TransOp{0,\text{move}}{\sigma(i_0)}\PermOp{\sigma}. \label{eq:CommPerm3} \end{align} These relations will be used in \eqref{eq:CompBOp}. \subsection{Existence of perfect correlation}\label{sct:Perfect_correlation} By using the beta decomposition of the transition matrix obtained above, we are going to show that the correlation coefficient $\phi$ exhibits perfect correlation $|\phi|=1$ in the limit $p \to 0$. One can use Brillouin--Wigner-type perturbation theory \cite{Brilloin-Wigner-type perturbation theory} to rewrite the stationary state $\ket{P(\beta,p)}$ of $\TransOp{}{}(\beta, p)$ as a perturbation expansion around the stationary state of $\TransOp{0}{}$. For that purpose, we introduce some notation. Let $\ket{P_{\mathrm{can}}(\beta)}$ be the stationary state of $\TransOp{0}{}(\beta)$, i.e., the state corresponding to the Gibbs distribution \eqref{eq:GibbsDistr}: \alit{ \ket{P_{\mathrm{can}}(\beta)} \coloneqq \sum_{\bvec{x}}\frac{\exp(-\beta E(\bvec{x}))}{Z(\beta)}\ket{\bvec{x}}, } which satisfies \alit{ \TransOp{0}{}(\beta)\ket{P_{\mathrm{can}}(\beta)} = \ket{P_{\mathrm{can}}(\beta)}, } and \begin{align} \sum_{\bvec{x}}\braket{\bvec{x}|P_{\mathrm{can}}(\beta)} = 1.
\end{align} Note that $\ket{P_{\mathrm{can}}(\beta)}$ is invariant under permutations, that is, \begin{align}\label{eq:Pcan_sym} \PermOp{\sigma}\ket{P_{\mathrm{can}}(\beta)}=\ket{P_{\mathrm{can}}(\beta)} \end{align} holds for any $\sigma\in\mathfrak{S}_N$, because of the permutation symmetry condition \eqref{sym} for the energy function. Let $\text{\rm pr}(\beta)$ be the projection operator on $H_X^{\otimes N}$ onto the orthogonal complement of the subspace ${\mathbb C}\ket{P_{\mathrm{can}}(\beta)}$: \alit{\label{eq:projection op pi} \text{\rm pr}(\beta) \coloneqq \mathop{\widehat{\mathrm{id}}}\nolimits_H^{\otimes N}-\frac{\ket{P_{\mathrm{can}}(\beta)}\bra{P_{\mathrm{can}}(\beta)}}{\braket{P_{\mathrm{can}}(\beta)|P_{\mathrm{can}}(\beta)}}, } and let $\widehat{G}_0(\beta)$ be the linear operator from $H_X^{\otimes N}$ to itself: \alit{ \widehat{G}_0(\beta) \coloneqq \bka{\mathop{\widehat{\mathrm{id}}}\nolimits_H^{\otimes N}-\TransOp{0}{}(\beta)}^{-1}\text{\rm pr}(\beta). } Here, the coefficient of the term $\mathop{\widehat{\mathrm{id}}}\nolimits_H^{\otimes N}$ in $\widehat{G}_0(\beta)$ is set to the eigenvalue $1$ corresponding to the eigenvector $\ket{P(\beta, p)}$ of $\TransOp{}{}(\beta, p)$ (see \eqref{eq:stationary state for T}). With the above notations, $\ket{P(\beta,p)}$ can be written as follows: \alit{\label{ptheory} &\ket{P(\beta,p)}\\ &= C(\beta,p)\bkc{ \ket{P_{\mathrm{can}}(\beta)}+\sum_{n=1}^\infty \bka{\widehat{G}_0 (\TransOp{}{}-\TransOp{0}{})}^n\ket{P_{\mathrm{can}}(\beta)} }, } where $C(\beta,p)$ is the positive normalization factor of $\ket{P(\beta,p)}$ such that $\sum_{\bvec{x}\in X^N}\braket{\bvec{x}|P(\beta,p)}=1$. Since $\TransOp{}{}-\TransOp{0}{}=\mathcal{O}(p)$, one can estimate that $C(\beta,p)=1+\mathcal{O}(p)$.
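The expansion \eqref{ptheory} can be tested numerically in a toy setting. The sketch below uses a hypothetical symmetric (doubly stochastic) $3\times 3$ matrix in place of $\TransOp{0}{}$, so that the stationary state is uniform and $\text{\rm pr}$ is the projector off the uniform vector, and a hypothetical perturbation with zero column sums in place of $\TransOp{}{}-\TransOp{0}{}$; the truncated series is compared with the stationary state obtained by power iteration:

```python
n = 3
# Hypothetical toy data: a symmetric doubly stochastic T0 (stationary state
# uniform) and a perturbation D with zero column sums, so that T = T0 + p*D
# remains column-stochastic.
T0 = [[0.50, 0.25, 0.25],
      [0.25, 0.50, 0.25],
      [0.25, 0.25, 0.50]]
D = [[-1.0, 0.5, 0.0],
     [0.5, -1.0, 0.5],
     [0.5, 0.5, -0.5]]
p = 0.01
T = [[T0[i][j] + p * D[i][j] for j in range(n)] for i in range(n)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

def pr(v):
    """Projector off the uniform direction (= off |P_can> in this toy case)."""
    m = sum(v) / n
    return [x - m for x in v]

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][k] - f * M[c][k] for k in range(4)]
    return [M[r][3] / M[r][r] for r in range(3)]

# (I - T0) regularized by a rank-one uniform term: invertible, and on
# zero-sum right-hand sides it reproduces G0 = (I - T0)^{-1} pr.
A_reg = [[(1.0 if i == j else 0.0) - T0[i][j] + 1.0 / n for j in range(n)]
         for i in range(n)]

def G0(v):
    return solve3(A_reg, pr(v))

P_can = [1.0 / n] * n
DT = [[T[i][j] - T0[i][j] for j in range(n)] for i in range(n)]

# Series |P> = C [ |P_can> + sum_k (G0 (T - T0))^k |P_can> ], truncated.
S = P_can[:]
term = P_can[:]
for _ in range(30):
    term = G0(matvec(DT, term))
    S = [S[i] + term[i] for i in range(n)]
C = 1.0 / sum(S)  # normalization factor C(beta, p)
S = [C * s for s in S]

# Reference stationary state by power iteration.
P_ref = [1.0, 0.0, 0.0]
for _ in range(5000):
    P_ref = matvec(T, P_ref)
```

For this symmetric toy $T_0$ the fixed point of the series is exactly proportional to the stationary state of $T$; for the model itself the analogous statement is the content of \eqref{ptheory}.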
As obtained in (\ref{eq:BetaDecomp0}) and (\ref{u1}), in the asymptotic regime of small $p$, $\TransOp{}{}-\TransOp{0}{}$ can be expanded in powers of $p$, and we have \alit{ \TransOp{}{}-\TransOp{0}{} = \BetaTerm{1} + \mathcal{O}(p^2), } where \alit{\label{u10} \BetaTerm{1} = \frac{p/2}{1+p}\sum_{1\le i_0<i_1\le N}\bka{\ShiftOp{i_1}{\dP}+\ShiftOp{i_1}{\dM}-2\mathop{\widehat{\mathrm{id}}}\nolimits_{H}^{\otimes N}}\CheckOp{i_0,i_1}{}\TransOp{0,\text{move}}{i_0}. } By substituting (\ref{u10}) into (\ref{ptheory}), we have \alit{\label{eq:perturbative_decomposition} &\ket{P(\beta,p)}\\ &= C(\beta,p)\bkc{ \ket{P_{\mathrm{can}}(\beta)} + \widehat{G}_0\BetaTerm{1}\ket{P_{\mathrm{can}}(\beta)} + \mathcal{O}(p^2) }\\ &= C(\beta,p)\bkc{ \ket{P_{\mathrm{can}}(\beta)} + \frac{p}{1+p}\sum_{1\le i_0<i_1\le N}\BOp{i_0,i_1}\ket{P_{\mathrm{can}}(\beta)}}\\ &\hspace{70mm} + \mathcal{O}(p^2), } where we define \alit{\label{eq:B_i0i1} \BOp{i_0,i_1} \coloneqq \frac{1}{2}\widehat{G}_0\bka{\ShiftOp{i_1}{\dP}+\ShiftOp{i_1}{\dM}-2\mathop{\widehat{\mathrm{id}}}\nolimits_{H}^{\otimes N}}\CheckOp{i_0,i_1}{}\TransOp{0,\text{move}}{i_0}. } This operator $\BOp{i_0,i_1}$ depends on $\beta$ but is independent of $p$. Thus, using $r_{ii}=1$ for any $i$, the overlap centrality can be written as follows: \alit{\label{eq:expression_overlap_centrality_by_A} &O_i = \sum_{1\le j\le N}\sum_{\bvec{x}\in X^N}\delta\bka{x_i,x_j}\braket{\bvec{x}|P(\beta,p)} -1 \\ &= C(\beta,p)\bkc{ \mathcal{A}_0(i) +\frac{p}{1+p} \sum_{1\le i_0<i_1\le N}\mathcal{A}_1(i,i_0,i_1)} -1\\ &\hspace{70mm} +\mathcal{O}(p^2), } where \begin{align} &\mathcal{A}_0(i)=\sum_{1\le j\le N}\sum_{\bvec{x}\in X^N}\delta\bka{x_i,x_j} \braket{\bvec{x}|P_{\mathrm{can}}(\beta)},\\ \label{eq:definition_of_A1} &\mathcal{A}_1(i,i_0,i_1)= \sum_{1\le j\le N}\sum_{\bvec{x}\in X^N}\delta\bka{x_i,x_j}\bra{\bvec{x}}\BOp{i_0,i_1}\ket{P_{\mathrm{can}}(\beta)}.
\end{align} Here $\mathcal{A}_0(i)$ and $\mathcal{A}_1(i, i_0, i_1)$ are independent of $p$. Recall that $\PermOp{\sigma}$ is the permutation operator corresponding to a permutation $\sigma \in \mathfrak{S}_N$ (see \eqref{eq:PermOpDefinition} for the definition). By using \eqref{eq:Pcan_sym}, we have \begin{align} \label{eq:PermCanVec} &\mathcal{A}_0(i)\notag\\ &= \sum_{1\le j\le N}\sum_{\bvec{x}\in X^N}\delta\bka{x_i,x_j}\braket{\bvec{x}|\PermOp{\sigma}|P_{\mathrm{can}}(\beta)}\notag\\ &= \sum_{1\le j\le N}\sum_{\bvec{x}\in X^N}\delta\bka{x_{\sigma(i)},x_{\sigma(j)}}\braket{\bvec{x}|P_{\mathrm{can}}(\beta)}\notag\\ &= \sum_{1\le j\le N}\sum_{\bvec{x}\in X^N}\delta\bka{x_{\sigma(i)},x_{j}}\braket{\bvec{x}|P_{\mathrm{can}}(\beta)} \notag\\ &= \mathcal{A}_0(\sigma(i)). \end{align} This means that $\mathcal{A}_0(i)$ does not depend on $i$: \begin{equation} \mathcal{A}_0(i) = \mathcal{A}_0(1) \eqqcolon \mathcal{B}_0. \end{equation} Moreover, by using \eqref{eq:CommPerm1}, \eqref{eq:CommPerm2}, \eqref{eq:CommPerm3}, and \eqref{eq:Pcan_sym}, we have $\PermOp{\sigma} \BOp{i_0,i_1} = \BOp{\sigma(i_0),\sigma(i_1)}\PermOp{\sigma}$. Therefore, it follows that \begin{align} \label{eq:CompBOp} &\mathcal{A}_1(i,i_0,i_1)\notag\\ &= \sum_{1\le j\le N}\sum_{\bvec{x}\in X^N}\delta\bka{x_i,x_j}\bra{\bvec{x}}\PermOp{\sigma}^{-1}\BOp{\sigma(i_0),\sigma(i_1)}\PermOp{\sigma}\ket{P_{\mathrm{can}}(\beta)}\notag\\ &= \sum_{1\le j\le N}\sum_{\bvec{x}\in X^N}\delta\bka{x_{\sigma(i)},x_{\sigma(j)}}\bra{\bvec{x}}\BOp{\sigma(i_0),\sigma(i_1)}\ket{P_{\mathrm{can}}(\beta)}\notag\\ &= \sum_{1\le j\le N}\sum_{\bvec{x}\in X^N}\delta\bka{x_{\sigma(i)},x_j}\bra{\bvec{x}}\BOp{\sigma(i_0),\sigma(i_1)}\ket{P_{\mathrm{can}}(\beta)}\notag\\ &= \mathcal{A}_1(\sigma(i),\sigma(i_0),\sigma(i_1)) \end{align} for any $\sigma\in\mathfrak{S}_N$.
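The reduction of $\mathcal{A}_1$ that follows from this relation is purely combinatorial: a function of $(i, i_0, i_1)$ invariant under a simultaneous relabeling by $\sigma \in \mathfrak{S}_N$ can depend only on the coincidence pattern among its arguments. A small exhaustive check (hypothetical toy $N$, and a hypothetical invariant function with arbitrary values per pattern):

```python
from itertools import combinations, permutations

N = 4  # hypothetical small number of agents

def pattern(i, i0, i1):
    """Coincidence pattern of the arguments: i == i0, i == i1, or neither
    (i0 < i1, so both cannot hold at once)."""
    return (i == i0, i == i1)

def invariant_f(i, i0, i1):
    # Any function of the pattern alone is S_N-invariant; values arbitrary.
    return {(True, False): 1.1, (False, True): 2.2, (False, False): 3.3}[
        pattern(i, i0, i1)]

triples = [(i, i0, i1) for i0, i1 in combinations(range(1, N + 1), 2)
           for i in range(1, N + 1)]
values = {invariant_f(i, i0, i1) for (i, i0, i1) in triples}
```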
In this sense, $\mathcal{A}_{1}$ preserves permutation symmetry in spite of the permutation-symmetry breaking of $\TransOp{}{}$. By this equation, we obtain \begin{align}\label{eq:definition_of_B123} \mathcal{A}_1(i,i_0,i_1) = \begin{dcases} \mathcal{A}_1(1, 1, 2) \eqqcolon \mathcal{B}_1 & (\text{if}\ i=i_0)\\ \mathcal{A}_1(2, 1, 2) \eqqcolon \mathcal{B}_2 & (\text{if}\ i=i_1)\\ \mathcal{A}_1(3, 1, 2) \eqqcolon \mathcal{B}_3 & (\text{if}\ i \neq i_0, i_1) \end{dcases} \end{align} for any $i$ and any pair $i_0<i_1$. These quantities $\mathcal{B}_1, \mathcal{B}_2$, and $\mathcal{B}_3$ are independent of $p$. Note that, for a given agent $i$, the numbers of pairs $(i_0,i_1)$ which satisfy the above conditions corresponding to $\mathcal{B}_1$, $\mathcal{B}_2$, and $\mathcal{B}_3$ are $N-i$, $i-1$, and $(N-1)(N-2)/2$, respectively. Therefore, we obtain \alit{\label{eq:sum_of_A1} &\sum_{1\le i_0<i_1\le N}\mathcal{A}_1(i,i_0,i_1)\\ &= (N-i)\mathcal{B}_1+(i-1)\mathcal{B}_2+\frac{(N-1)(N-2)}{2}\mathcal{B}_3. } Substituting \eqref{eq:sum_of_A1} into \eqref{eq:expression_overlap_centrality_by_A}, we find that \begin{align} \label{eq:overlap_centrality_estimate} O_i = \frac{p}{1+p}C(\beta,p)(\mathcal{B}_2-\mathcal{B}_1)i+c_0+\mathcal{O}(p^2), \end{align} where $c_0$ is a real number independent of rank $i$: \begin{align} c_0 &= \frac{p}{1+p}C(\beta,p)\bka{N\mathcal{B}_1-\mathcal{B}_2+\frac{(N-1)(N-2)}{2}\mathcal{B}_3} \notag\\ &\qquad +C(\beta,p)\mathcal{B}_0-1. \label{eq:Compute_c0} \end{align} Using the estimate $C(\beta,p)=1+\mathcal{O}(p)$, we can rewrite \eqref{eq:overlap_centrality_estimate} as \begin{align} \label{eq:overlap_centrality_estimate2} O_i = \frac{p}{1+p}(\mathcal{B}_2-\mathcal{B}_1)i+c_0+\mathcal{O}(p^2). \end{align} Note that, in \eqref{eq:expression_overlap_centrality_by_A}, \eqref{eq:overlap_centrality_estimate}, and \eqref{eq:overlap_centrality_estimate2}, the terms in $\mathcal{O}(p^2)$ could depend on $i$.
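The pair counts used in \eqref{eq:sum_of_A1} are a small combinatorial fact that is easy to verify exhaustively (a sketch with a hypothetical $N$):

```python
from itertools import combinations

N = 6  # hypothetical number of agents; ranks are 1..N

def pair_counts(i):
    """Classify pairs (i0, i1) with i0 < i1 for a given agent i:
    i == i0 (weight B1), i == i1 (weight B2), i not in the pair (weight B3)."""
    c1 = c2 = c3 = 0
    for i0, i1 in combinations(range(1, N + 1), 2):
        if i == i0:
            c1 += 1
        elif i == i1:
            c2 += 1
        else:
            c3 += 1
    return c1, c2, c3
```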
We write the remainder term in \eqref{eq:overlap_centrality_estimate2} as \begin{equation} \varepsilon_i(\beta,p) \coloneqq O_i - \frac{p}{1+p}(\mathcal{B}_2-\mathcal{B}_1)i - c_0 = \mathcal{O}(p^2). \end{equation} Let us suppose $\mathcal{B}_2 \neq \mathcal{B}_1$. For $p \ll 1$, the remainder term $\varepsilon_i$ can be neglected compared to the term $\dfrac{p}{1+p}(\mathcal{B}_2-\mathcal{B}_1)i$. Neglecting $\varepsilon_i$, the overlap centrality $O_i$ is a linear function of rank $i$, and hence perfect correlation holds: \begin{align}\label{eq:B_1-B_2_and_phi} \phi \to \begin{dcases} +1 & (\text{if}\ \mathcal{B}_1<\mathcal{B}_2) \\ -1 & (\text{if}\ \mathcal{B}_1>\mathcal{B}_2) \end{dcases} \quad \text{as}\ p\to +0. \end{align} This is consistent with the observation in Fig.\ \ref{cc}: as long as $p$ is small, the value of $\phi$ is very close to $+1$ or $-1$. The above discussion gives another insight: neglecting $\varepsilon_i$, the linear dependence of the overlap centrality $O_i$ on rank $i$ comes from the permutation symmetry \eqref{eq:CompBOp} of $\mathcal{A}_1$. Note that, as expressed by \eqref{noncommuT}, the permutation symmetry is broken in the transition matrix if $p \neq 0$, but this symmetry is partially recovered in the quantity $\mathcal{A}_1$. Note that one can also derive \begin{align} M &= \frac{p}{1+p}\bka{\dfrac{N-1}{2N}(\mathcal{B}_1+\mathcal{B}_2) + \dfrac{(N-1)(N-2)}{2N}\mathcal{B}_3} \notag \\ &\qquad+C(\beta, p)\dfrac{\mathcal{B}_0}{N}+\mathcal{O} (p^2) \end{align} by using (\ref{MOrel}). Evaluating the sign of $\phi$ requires a concrete calculation of both $\mathcal{B}_1$ and $\mathcal{B}_2$, but obtaining their analytic expressions for a given energy function such as the Potts energy \eqref{eq:PottsEnergy} is complicated. Nevertheless, it is still feasible to perform such a calculation in the case of $\beta=0$, as discussed in the next subsection.
This is because, at $\beta=0$, the transition matrix $\TransOp{0}{}(\beta)$ becomes independent of the form of the energy function and reduces to that of a free random walk. \subsection{Singularity in $\phi$ at $\beta=0$}\label{sec:Singularity} In the asymptotic regime of small $p$, the overlap centrality at $\beta=0$ can be calculated concretely. As a result, we show that $\mathcal{B}_1=\mathcal{B}_2$ holds at $\beta=0$, which means that the correlation coefficient $\phi$ is singular at $\beta=0$. From the definitions of $\mathcal{B}_1$, $\mathcal{B}_2$ in \eqref{eq:definition_of_B123} and $\mathcal{A}_1$ in \eqref{eq:definition_of_A1}, we have \begin{gather}\label{eq:B1def} \mathcal{B}_1 = \sum_{1\le j\le N}\sum_{\bvec{x}\in X^N}\delta(x_1,x_j)\braket{\bvec{x}|\BOp{1,2}|P_{\mathrm{can}}(\beta)},\\ \label{eq:B2def} \mathcal{B}_2 = \sum_{1\le j\le N}\sum_{\bvec{x}\in X^N}\delta(x_2,x_j)\braket{\bvec{x}|\BOp{1,2}|P_{\mathrm{can}}(\beta)}. \end{gather} In order to calculate $\mathcal{B}_1$ and $\mathcal{B}_2$ at $\beta=0$, we need to derive an explicit expression of $\braket{\bvec{x}|\BOp{1,2}|P_{\mathrm{can}}(0)}$. First, let us define \begin{align} \kket{k} \coloneqq \sum_{x\in X}\frac{e^{ikx}}{\sqrt{L}}\ket{x} \in H_X, \end{align} for $k=2\pi n/L$ and $n\in\mathbb{Z}/L\mathbb{Z}$. The orthonormal system $\bkb{\kket{k}}_k$ spans the whole space $H_X$ over $\mathbb{C}$. In particular, \begin{align} \ket{P_{\mathrm{can}}(0)}=\bka{\dfrac{\kket{0}}{\sqrt{L}}}^{\otimes N} \end{align} holds. Note that, for $\bm{k} = (k_1, k_2, \dots, k_N)$ with $k_i=2\pi n_i/L$ and $n_i\in\mathbb{Z}/L\mathbb{Z}$ for $1 \le i \le N$, \begin{align} \kket{\bvec{k}}\coloneqq\kket{k_1}\otimes\cdots\otimes\kket{k_N}\in H_X^{\otimes N} \end{align} is an eigenvector of $\TransOp{0}{}(\beta = 0)$ with an eigenvalue \begin{align} \dfrac{1}{N}\bka{\cos^2\dfrac{k_1}{2}+\cdots+\cos^2\dfrac{k_N}{2}}.
\end{align} The vector $\kket{\bvec{k}}$ is also an eigenvector of $\widehat{G}_0(\beta=0)$, and the eigenvalue is \begin{align}\label{eq:eigenvalue_of_GOp} N\bka{\sin^2\dfrac{k_1}{2}+\cdots+\sin^2\dfrac{k_N}{2}}^{-1} \end{align} for any $\bvec{k}\neq \bvec{0} = (0, 0, \dots, 0)$. Next, substituting the definition \eqref{eq:B_i0i1} of $\BOp{1,2}$ into the term $\braket{\bvec{x}|\BOp{1,2}|P_{\mathrm{can}}(0)}$, we obtain \begin{align}\label{eq:xB12Pcan0} &\braket{\bvec{x}|\BOp{1,2}|P_{\mathrm{can}}(0)}\notag\\ &= \frac{1}{2}\bra{\bvec{x}}\widehat{G}_0(0)(\ShiftOp{2}{\dP}+\ShiftOp{2}{\dM}-2\mathop{\widehat{\mathrm{id}}}\nolimits_H^{\otimes N})\notag\\ &\quad\quad\quad \times\CheckOp{1,2}{}\TransOp{0,\mathrm{move}}{1}(\beta=0)\ket{P_{\mathrm{can}}(0)}. \end{align} In order to obtain a more explicit expression, let us multiply $\ket{P_{\mathrm{can}}(0)}$ by $\TransOp{0,\mathrm{move}}{1}$, $\CheckOp{1,2}{}$, $(\ShiftOp{2}{\dP}+\ShiftOp{2}{\dM}-2\mathop{\widehat{\mathrm{id}}}\nolimits_H^{\otimes N})$, and $\widehat{G}_0(0)$ from the left, successively. Recalling the definition \eqref{tmove} of $\TransOp{0,\mathrm{move}}{1}$, we have \begin{align}\label{eq:T0move_acting_on_P0} \TransOp{0,\mathrm{move}}{1}(0)\ket{P_{\mathrm{can}}(0)} = \frac{1}{2N}\ket{P_{\mathrm{can}}(0)}, \end{align} and one can show that \begin{align}\label{eq:CheckOp_acting_on_P0} \CheckOp{1,2}{}\ket{P_{\mathrm{can}}(0)} &= \sum_{x\in X}\frac{\ket{x}}{L}\otimes\frac{\ket{x}}{L}\otimes\bka{\frac{\kket{0}}{\sqrt{L}}}^{\otimes(N-2)}\notag\\ &= \frac{1}{L^2}\sum_{k_1}\kket{k_1}\otimes\kket{{-}k_1}\otimes\bka{\frac{\kket{0}}{\sqrt{L}}}^{\otimes(N-2)}.
\end{align} By using \eqref{eq:T0move_acting_on_P0} and \eqref{eq:CheckOp_acting_on_P0}, we obtain \begin{align}\label{eq:DeltaXiTP} &(\ShiftOp{2}{\dP}+\ShiftOp{2}{\dM}-2\mathop{\widehat{\mathrm{id}}}\nolimits_H^{\otimes N})\CheckOp{1,2}{}\TransOp{0,\mathrm{move}}{1}(0)\ket{P_{\mathrm{can}}(0)}\notag\\ &= -\frac{1}{2N}\frac{1}{L^2}\sum_{k_1}4\sin^2\frac{k_1}{2}\kket{k_1}\otimes\kket{{-}k_1}\otimes\bka{\frac{\kket{0}}{\sqrt{L}}}^{\otimes(N-2)}. \end{align} Substituting \eqref{eq:DeltaXiTP} into \eqref{eq:xB12Pcan0}, and using \eqref{eq:eigenvalue_of_GOp}, we find \begin{align}\label{eq:xB12Pcan} & \braket{\bvec{x}|\BOp{1,2}|P_{\mathrm{can}}(0)} \notag \\ &= -\frac{1}{2L^2}\sum_{k_1 \neq 0}\frac{e^{ik_1 x_1}}{\sqrt{L}}\frac{e^{-ik_1 x_2}}{\sqrt{L}}\bka{\frac{1}{L}}^{N-2}\notag\\ &= -\frac{1}{2L^{N+1}}\bka{L\delta(x_1,x_2)-1}. \end{align} Recalling \eqref{eq:B1def} and \eqref{eq:B2def} with \begin{align} \label{eq:CombinatorialSum} &\sum_{\bm{x} \in X^N} \delta(x_i,x_j) (L\delta(x_1, x_2) - 1) \notag \\ &= \begin{dcases} L^{N-1}(L-1) & (\text{if}\ (i,j) = (1,2) \text{ or } (2,1)) \\ 0 & (\text{otherwise}), \end{dcases} \end{align} one can calculate \begin{align} \mathcal{B}_1 = \mathcal{B}_2 = -\frac{L-1}{2L^2}. \end{align} Thus, it turns out that $\phi$ is singular at $\beta=0$. One can also calculate $\mathcal{B}_0=N/L$ and, using \eqref{eq:CombinatorialSum}, $\mathcal{B}_3 = 0$. Substituting the values of $\mathcal{B}_\ell$ ($\ell=0,1,2,3$) into \eqref{eq:Compute_c0} and \eqref{eq:overlap_centrality_estimate2}, we find that the overlap centrality $O_i$ is $\mathcal{O}(N/L)$. This is consistent with the uniform distribution corresponding to the case of $\beta=0$. Therefore, it is reasonable that $\mathcal{B}_0=\mathcal{O}(N/L)$ and $\mathcal{B}_1 =\mathcal{B}_2 =\mathcal{O}(L^{-1})$ as functions of $N$ and $L$. Note that one can also evaluate \begin{equation} M=\dfrac{1}{L}\bka{C(0,p) - \dfrac{p}{2(1+p)}\dfrac{N-1}{N}\dfrac{L-1}{L}} +\mathcal{O}(p^2).
\end{equation} \subsection{Comparison between exact diagonalization and analytic result} In this subsection, we illustrate the behavior of the correlation coefficient $\phi$ computed by exact diagonalization in comparison with the analytic discussion in Section \ref{Rig}. In order to consider the $\beta$-dependence of $\phi$, let us fix $p$ at a small but non-zero value and vary $\beta$. When the parameter $\beta$ varies while satisfying the condition \begin{equation} \label{eq:no_longer_dominant} \mathcal{B}_2(\beta,p)-\mathcal{B}_1(\beta,p)= \mathcal{O}(\varepsilon_i(\beta, p)), \end{equation} the linear term $(\mathcal{B}_2 - \mathcal{B}_1)i$ in $O_i$ is no longer dominant in the rank-dependence. As a result, $\phi$ could change continuously from $\phi=-1$ to $\phi=1$. In this case, \eqref{eq:B_1-B_2_and_phi} does not necessarily hold. Exact diagonalization indicates that the range of $\beta$ in which $\phi$ takes a value close to $\pm 1$ becomes wider as $p$ becomes smaller. Recall the observation from exact diagonalization in the case of the Potts energy that the value of $\phi$ changes sharply around $\beta=0$ for small $p$, as shown in Fig.\ \ref{cc}. Assuming that this observation is universal for sufficiently small $p$, and combining the existence of perfect correlation with the singularity of $\phi$ at $\beta=0$, it is reasonable to conjecture that, at least in the case of the Potts energy, $\phi$ becomes discontinuous at $\beta=0$ as a function of $\beta$ in the limit $p\to +0$. \subsection{Correspondence between overlap centrality and eigenvector centrality}\label{relation_between_centralities} Let us discuss the relation between the overlap centrality $\bv{O}=(O_i)_{i=1}^N$ and other existing definitions of centrality. First, the overlap centrality has a connection to an existing centrality in the following sense.
Let us consider a weighted complete graph, where each agent is regarded as a vertex, and an element $r_{ij}$ of the neighbor matrix $\mathcal{R}$ for the pair of agents $i,j$ is regarded as the weight of the edge $(i,j)$. Then, the overlap centrality defined above is equivalent to the strength centrality of this complete graph, which has been introduced in the field of network science \cite{Freeman, Barrat}. Second, one can also define the eigenvector centrality of the complete graph mentioned above as the eigenvector $\bv{V}=(V_i)_{1\le i\le N}$ of $\mathcal{R}$ with the maximum eigenvalue. Indeed, one may show that the eigenvector centrality is directly related to the overlap centrality when $p \ll 1$ in the following manner: \begin{align}\label{ov-centrality} \bv{V} \propto \frac{1}{N^{3/2}c}\bv{O}-\gamma\times(1,1,\ldots,1)^T + \mathcal{O}(p^2), \end{align} where $c$ and $\gamma$ are constants of $\mathcal{O}(1)$. In particular, combining with \eqref{eq:overlap_centrality_estimate2}, we have \begin{align} V_i &\propto \frac{p}{1+p}\frac{\mathcal{B}_2 - \mathcal{B}_1}{N^{3/2}c} i+ \left(\frac{c_0}{N^{3/2}c} -\gamma\right) + \mathcal{O}(p^2) \end{align} where the coefficient of proportionality is independent of $i$. Thus, the eigenvector centrality, like the overlap centrality, depends linearly on the rank $i$ if the $\mathcal{O}(p^2)$ term in \eqref{ov-centrality} is ignored. Remarkably, relation \eqref{ov-centrality} holds for a general probability distribution that weakly breaks the permutation symmetry of agents. See Appendix \ref{ECENTER} for the two explicit conditions under which relation \eqref{ov-centrality} holds in a more general form. \section{Concluding remarks}\label{CR} In this paper, we have proposed a stochastic process without detailed balance or permutation symmetry, inspired by the supplanting phenomenon of Japanese macaques.
We have derived a sufficient condition $\mathcal{B}_1\neq\mathcal{B}_2$ under which perfect correlation appears between the overlap centrality and the rank of agents in the weak-supplanting limit $p\to+0$. Even for small but non-zero $p$, concrete analysis by exact diagonalization shows that $\phi$ is very close to perfect correlation in the case of the Potts energy if $\beta$ is far from $0$. Another problem concerning the singular behavior of $\phi$ around $\beta = 0$ is to identify the effects which essentially cause it. Compared to the equilibrium Potts model, the model with the supplanting process has neither permutation symmetry of agents nor detailed balance. In our derivation of perfect correlation, broken permutation symmetry is one of the essential ingredients, but the role of broken detailed balance remains unclear to us. Concerning this point, one can consider an equilibrium model that keeps detailed balance but lacks permutation symmetry by taking, for example, an energy function $\sum_{i,j}J_{i,j}\delta(x_i,x_j)$, where $J_{i,j}$ is a function of agents $i,j$ such as $i\times j$. If one could prove that the singularity of $\phi$ does not exist for such general equilibrium models, one could presumably expect that both permutation symmetry breaking and broken detailed balance are essential for causing the singularity. A similar motivation has been raised for broken rotational symmetry observed in active matter \cite{V-Tasaki}. Indeed, it has been proven for a general class of systems having potentials dependent on position and velocity that rotational symmetry cannot be broken in equilibrium. This implies that the observed phase transitions associated with broken rotational symmetry in active matter are caused purely by nonequilibrium effects such as broken detailed balance. We will need somewhat similar ideas.
We remark that the term \textit{permutation symmetry} has been used in this paper in various manners depending on the quantity to which the term is applied. See Table \ref{tab:polymorphism} for the list of usages. \begin{table*} \centering \begin{tabular}{c|c|cc} & Class of Quantity & \multicolumn{2}{c}{ \begin{tabular}{c} Definition for the quantity \\ to be \textit{permutation symmetric} \end{tabular} } \\ \hline (a) & function $f(i_1,i_2,\ldots,i_n)$ depending on ranks $i_1,i_2,\ldots,i_n$ & $f(\sigma(i_1),\sigma(i_2),\ldots,\sigma(i_n))=f(i_1,i_2,\ldots,i_n)$ & \multirow{5}{*}{for any $\sigma\in\mathfrak{S}_N$} \\ (b) & function $f(\bvec{x})$ depending on agents' configuration $\bvec{x}\in X^N$ & $f(\sigma(\bvec{x}))=f(\bvec{x})$ & \\ (c) & state vector $\ket{\psi}\in H_X^{\otimes N}$ & $\PermOp{\sigma}\ket{\psi}=\ket{\psi}$ & \\ (d) & linear operator $\widehat{A}$ from $H_X^{\otimes N}$ to itself & $\PermOp{\sigma}\widehat{A}\PermOp{\sigma}^{-1}=\widehat{A}$ & \\ \end{tabular} \caption{ Usage of the term \textit{permutation symmetry} depending on the class of quantity. As an example of each class, case (a) applies to \eqref{eq:CompBOp}, (b) to \eqref{sym}, (c) to \eqref{eq:Pcan_sym}, and (d) to \eqref{eq:commuT0}.} \label{tab:polymorphism} \end{table*} Let us briefly discuss the obtained results in the context of linear response theory. If one defines the susceptibility of $M$ with respect to $p$ as $\chi(p):={\mathrm{d} M}/{\mathrm{d} p}$, we obtain \begin{align} \chi(0)&= \frac{N-1}{2N}(\mathcal{B}_1+\mathcal{B}_2) + \frac{(N-1)(N-2)}{2N}\mathcal{B}_3 \notag \\ &\qquad + \partial_p C(\beta, 0) \mathcal{B}_0. \end{align} Note that the coefficient $\partial_p C(\beta, 0)$ of $\mathcal{B}_0$ is given by \begin{align}\label{bfinal} -\binom{N}{2} \sum_{\bm{x} \in X^N} \braket{\bm{x}|\BOp{i_0,i_1}|P_{\mathrm{can}}(\beta)} \end{align} for some $1 \le i_0 < i_1 \le N$.
By a similar computation to \eqref{eq:CompBOp}, we can find that \eqref{bfinal} does not depend on $i_0$, $i_1$, or $p$. It is not yet obvious to us how to connect these quantities with other known quantities; this remains an open problem. Whereas the above discussion focuses on the case of small $p \ll 1$, we now move on to the case of general $p$. The result of exact diagonalization implies that perfect correlation with $\phi=1$ would hold for negative $\beta$ sufficiently far from $0$. This behavior is of interest in that the strong correlation effect originating from hard-core repulsion may stabilize perfect correlation with $\phi=1$. However, our perturbative strategy with respect to $p$ is unavailable when $p$ is not small. For this reason, it is not clear whether perfect correlation can be derived by use of permutation symmetry in a similar way to the case of $p\ll 1$. In order to tackle this problem, one may consider situations with restricted values of $L$ and $N$. As an example, let us take the case $N=L+1$ with the Potts energy. In this case, if the repulsion is sufficiently strong, each site is occupied by at least one agent, and there is a single site occupied by two agents. Then, one of the two agents occupying that site can hop in accordance with the equilibrium dynamics, while all of the other agents effectively cannot hop. This reduces the number of possible state transitions and could help us analyze the overlap centrality of this model. Note that the above discussion is based on the Potts energy and does not necessarily apply to other energy functions. It remains for future work to perform further numerical calculations for energy functions other than the Potts energy as well as to explore analytic methods for general $p$. Let us briefly discuss the possibility of phase transition lines in the parameter space $(\beta,p)$ for non-zero $p$.
First, the equilibrium phase transition point $\beta=\beta_c$ with $p=0$ for the case of the Potts energy might extend toward non-zero $p$ as a nonequilibrium phase transition line $\beta=\beta_c(p)$ where $M$ shows a singular jump. Second, the point $\beta=0$ with $p\to + 0$, where the correlation coefficient shows a singularity, might also extend toward non-zero $p$ as another nonequilibrium phase transition line $\beta=\beta_0(p)$ where $\phi$ shows a singular jump. However, since our analysis is limited to the parameter region close to $p=0$, it is necessary to perform large-scale numerical simulations or develop other analytical methods in order to capture the true large-system-size limiting behavior for non-zero $p$. This remains an intriguing future study. From a mathematical viewpoint, the {\it beta decomposition} of the transition matrix could be useful in a more general context. This expression gives the exact lowest order of the transition matrix in terms of $p$. Nevertheless, its concrete expression looks complicated, and it would not be straightforward to use it for a given purpose. Further, it should be noted that even a minor modification of the supplanting process prevents us from obtaining the beta decomposition. In this sense, the present version of the supplanting process can be regarded as a specially tractable case. It remains an open question how the beta decomposition can be applied to the calculation of other quantities and whether one can find other tractable cases in this direction. Let us remark on the direct relation between eigenvector centrality and strength centrality. Unifying the two relations among centralities discussed in Section \ref{relation_between_centralities}, one can draw another conclusion in the context of network theory. Consider a complete graph whose edge weights are all equal up to small fluctuations.
Then the neighbor matrix $\mathcal{R}$, whose entries are the weights of the edges, has a decomposition $\tilde{\mathcal{R}}^{(1)} + \tilde{\mathcal{R}}^{(2)}$ similar to \eqref{eq:decomp_with_fluctuation}, and the graph satisfies a linear relation similar to \eqref{ov-centrality} between the strength centrality and the eigenvector centrality. Lastly, we mention the relevance of the results in this paper to the behavior of members in groups of primate species. Ranking has been widely known to be a characteristic structure of primate species living in groups \cite{primatebook}. Indeed, it has been found that ranking affects the spatial location of the members in a group \cite{rank_location1,rank_location2,pecentrality1,pecentrality2} and the distance between members \cite{rank_distance1,rank_distance2}. The overlap centrality observed in the model proposed in this paper might shed light on how one could estimate the ranking structures of such a group and its environmental condition. \\ \begin{acknowledgements} The authors thank T.\ Yamaguchi for discussing his field study about Japanese macaques and telling us the relevant references to our work. \end{acknowledgements} \onecolumngrid
\section{Introduction} \label{1} Traditionally, a large proportion of distribution network load is supplied by centralized fossil-fired power plants, resulting in considerable emissions of the greenhouse gas carbon dioxide \cite{RE2}. Ever-worsening climate change has escalated the urgent need for distributed energy resources (DERs) in the distribution network, including micro-turbines (MT), rooftop photovoltaic (PV) panels and small wind turbines. However, current policies such as feed-in tariffs fail to promote the integration of DERs \cite{energyshare} and are insufficient to fulfill the goal of carbon neutrality \cite{RE5}. Recently, technological advances in energy system management have enabled a novel electricity market design, the peer-to-peer (P2P) energy market \cite{RE1,RE3}, which facilitates the consumption of renewable energy. Decentralized platforms for P2P energy trading transactions with the aid of blockchain technology are developed in \cite{blockchain1,blockchain2}. In addition, generalized Nash game formulations are widely adopted to model the energy sharing mechanism \cite{GNE,GNE2}. As for decentralized optimization algorithms to clear the P2P market, the market is often designed as a social welfare maximization problem and the alternating direction method of multipliers (ADMM) is employed to achieve consensus among market players \cite{ocadmm,admm2,coordinated}. Other approaches include the primal-dual gradient method \cite{primaldual}, Relaxed Consensus + Innovation \cite{consensus}, etc. Within the aforementioned P2P trading frameworks, individual participants are more inclined to trade directly with their counterparts in the energy community \cite{RE4} than with the upstream grid. Therefore, the energy community can reduce the energy losses due to long-distance transmission and is expected to scale down carbon emissions. Regarding carbon neutrality, researchers have endeavored to shed light on low-carbon operations in the power industry.
A straightforward approach is to consider low-carbon factors by means of objective functions or price signals. In \cite{objectivecarbon,objectivecarbon2}, the goal of minimizing energy cost is combined with the minimization of ${\rm{C}}{{\rm{O}}_{\rm{2}}}$ emissions and the problem is formulated as a multi-objective optimization program. In contrast, an integrated energy-carbon price is coined in \cite{pricesignal} based on carbon emission flow, which implicitly incentivizes multiple energy systems to operate in a low-carbon mode, but it ignores energy sharing among different entities. As a supplement, reference \cite{objectivecarbon3} considers multi-energy sharing among energy communities and incorporates a carbon tax policy to curb carbon emissions. Another alternative is to introduce a transactive carbon market similar to the practice in the energy sector. A carbon market usually refers to a cap-and-trade market \cite{zhang2020emission} where all market participants can trade carbon emission allowances and must surrender allowances corresponding to their ${\rm{C}}{{\rm{O}}_{\rm{2}}}$ emissions. Conventionally, the production responsibility principle \cite{pro_principle} is adopted, which means energy producers are accountable for carbon emissions. Recent works have combined the P2P energy market with the carbon market based on this accounting method. In \cite{carbon_blockchain}, all microgrids are motivated to form a grand coalition to transact energy and carbon allowances. Nevertheless, market clearing is performed by the distribution system operator (DSO), which raises individual privacy concerns. A three-layer framework to trade energy and carbon allowances is established in \cite{carbon_blockchain1}. Notwithstanding the decentralized settling procedure, the exchange of carbon allowances is launched in every time slot, which is rare in practice.
In recent years, personal carbon trading (PCT) has been viewed as a promising scheme targeted at reducing carbon emissions at the individual and household level \cite{PCT}. The difference is that PCT applies the consumption responsibility principle, i.e., consumers are responsible for carbon emissions precipitated by energy usage \cite{pro_principle,objectivecarbon3}. In a PCT scheme, each consumer is assigned an initial allocation of carbon allowances and can trade with other consumers. A coupled electricity and emission trading market considering end-users' carbon responsibility is introduced in \cite{wang2020carbon}, but the electricity market is centralized and consumers are penalized for excessive carbon emissions instead of exchanging allowances. Carbon allowance trading is proposed in \cite{PCT1}, but transactive energy trading is omitted and the identities of allowance sellers/buyers are assigned beforehand. None of the aforementioned references tackles the threat of uncertainty imposed by the increasing penetration of renewable energy sources. Existing works have looked into different approaches to compensate for these uncertainties \cite{gonzalez2021electricity,9618645,8789684,9178314}. In \cite{gonzalez2021electricity}, node-to-node balancing participation factors are leveraged to procure reserve from controllable generators to keep the bulk power system balanced. The flexibility of users is exploited to accommodate deviations of renewable energy outputs in the real-time market via a P2P energy sharing mechanism \cite{9618645}. As for the day-ahead P2P market, the uncertainty is traded with conventional generators or end-users in \cite{8789684,9178314}. Nevertheless, the process of balancing uncertainty may induce additional carbon emissions (i.e., emissions resulting from upward reserve supplied by conventional generators), which should be addressed in the carbon market.
Summing up the above, the following issues still need to be addressed: 1) how to establish a day-ahead decentralized market that can trade energy, uncertainty and carbon allowances jointly in the energy community; 2) how to take into account the additional carbon emissions incurred by conventional generators' upward reserve. To this end, this paper proposes a novel community-level P2P market which can trade day-ahead energy, uncertainty and carbon allowances simultaneously. The market participants fall into three categories, i.e., renewable agents, conventional generators and users. Renewable agents are supposed to compensate for their forecast errors by procuring reserve from conventional generators and flexibility from users. The definition of flexibility in this paper is the same as that in \cite{9618645}, namely the adjustable capacity that demand can provide in the demand response program. Besides, the carbon market is established under the PCT scheme, and the need to predetermine the participants' identities (sellers or buyers) is obviated through a carbon allowance sharing mechanism. The main contributions of this paper are summarized as follows. 1) A joint energy, uncertainty and carbon allowance trading market is developed for the energy community. The proposed framework not only enables energy clearing and carbon allowance sharing simultaneously, but also hedges against uncertainty. 2) We leverage the consumption responsibility principle and propose that renewable agents are responsible for acquiring sufficient allowances, which effectively covers potential carbon emissions precipitated by uncertainty balancing and ensures that the total emissions stay within the prescribed limit. 3) A fully decentralized optimization method is developed based on a combination of a modified tightening McCormick method and ADMM, ensuring accuracy while avoiding privacy concerns. The remainder of this paper is organized as follows.
Section \ref{2} presents the proposed trading framework and market formulations. Section \ref{3} provides the distributed solution techniques. Case studies are conducted in Section \ref{4}. Finally, the conclusions of this paper follow in Section \ref{5}. \section{Trading Framework and Market Formulation} \label{2} \subsection{Trading Framework} In this paper, a set \(\Omega\) of participants is considered in the joint market, which can be split into three categories, i.e., \(\Omega_{u}\) for users, \(\Omega_{r}\) for renewable agents (RES), such as photovoltaics, and \(\Omega_{g}\) for conventional generators (CG), such as MTs and combined heat and power units (CHP). The joint market operates day-ahead with a time interval of 1 h. The trading framework is depicted in Fig. \ref{fig1}. In the energy market, users can buy either clean energy from RESs or fossil-fuel energy from CGs. Renewable generation is subject to uncertainty; thus, RESs need to procure regulating capacity from CGs or users to balance potential forecast errors in the real-time stage, otherwise they are penalized for not fulfilling the contracts made in the day-ahead market. Regarding carbon allowance transactions, users and RESs trade allowances to cover the incurred carbon emissions. Under the PCT scheme, based on individual consumption profiles, users who fail to cover their emissions need to purchase allowances in the market, while others with excess allowances can choose to sell them. The flexibility of users and the reserve of CGs contracted in the day-ahead market are dispatched at the real-time stage by RESs that deviate from their predictions. The dispatched reserve then becomes another source of carbon emissions. To deal with emissions induced during uncertainty balancing, we propose that RESs are accountable and should purchase adequate carbon allowances, which is consistent with the consumption responsibility principle.
Moreover, users with a surfeit of allowances can sell them to the community manager, which incentivizes users to lead a low-carbon life. The renewable generation not consumed inside the community can be accommodated by the manager as well. All market participants communicate with the community manager to clear the market. \begin{figure}[!t] \centering \includegraphics[width=4in]{frame.pdf} \caption{Proposed market framework in the energy community} \label{fig1} \vspace{-6mm} \end{figure} \subsection{Market Formulation} \subsubsection{Modelling Uncertainty} First, we model the uncertainty in order to quantify forecast errors. Only the energy deficiency case is considered in this paper since surplus generation can be curtailed or accommodated by the system operator in the real-time stage \cite{9178314}. Instead of assuming Gaussian distributed forecast errors, here we only use the mean and standard deviation of the error to capture uncertainty. Let \(\boldsymbol{\omega_i^{t^0}}\) denote the random forecast error of RES \textit{i}, which can be divided into two parts: a negative component denoted as \(\boldsymbol{\omega_i^{t-}}\) and a positive component denoted as \(\boldsymbol{\omega_i^{t+}}\), and it is assumed that \(\mathbb{P} (\boldsymbol{\omega_i^{t^0}}\leq0)=0.5\), \(\mathbb{P} (\boldsymbol{\omega_i^{t^0}}\geq0)=0.5\). Next, to model the case where only generation deficiency is considered, a mixed random variable is defined as follows: \begin{equation} {\boldsymbol{\omega_i^t}} = \begin{cases} 0,&{\text{if}}\ {\boldsymbol{{\omega}_i^{t^0}}\geq0} ,\\ {\boldsymbol{\omega_i^{t-}},}&{\text{if}}\ {\boldsymbol{\omega_i^{t^0}}\leq0}. \end{cases} \end{equation} Thus, it can be easily deduced that \(\mathbb{E}(\boldsymbol{\omega_i^t})=\frac{1}{2}\mu_i^t\), \(\text{Var}(\boldsymbol{\omega_i^t})=\frac{1}{2}(\delta_i^t)^2+\frac{1}{4}(\mu_i^t)^2\).
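The moment formulas above can be verified by exact enumeration on a discrete distribution. In the sketch below (our own check, not part of the formulation; the two-point distribution is hypothetical), \(\mu\) and \(\delta^2\) are taken as the mean and variance of the conditional error \(\boldsymbol{\omega_i^{t-}}\):

```python
# Hypothetical two-point distribution for the conditional error omega^{t-}:
vals = [(-1.0, 0.3), (-3.0, 0.7)]           # (value, probability), probs sum to 1
mu = sum(v * p for v, p in vals)            # conditional mean
var = sum((v - mu) ** 2 * p for v, p in vals)  # conditional variance

# Mixed variable: 0 with probability 1/2, a draw from vals with probability 1/2.
mixed = [(0.0, 0.5)] + [(v, 0.5 * p) for v, p in vals]
E = sum(v * p for v, p in mixed)
V = sum(v * v * p for v, p in mixed) - E ** 2

# The formulas E = mu/2 and Var = delta^2/2 + mu^2/4 hold exactly.
assert abs(E - mu / 2) < 1e-12
assert abs(V - (var / 2 + mu ** 2 / 4)) < 1e-12
```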
\subsubsection{Energy Trading} The proposed energy market is a bilateral trading market where each participant decides its trading quantities with its neighbors. The market equilibrium is represented by the following balancing constraints: \begin{equation} Es_{ij}^t+Eb_{ij}^t=0, \quad \forall i\in\Omega_u,j\in\Omega_g\cup\Omega_r \end{equation} Each user determines the row vector \(\textbf{Eb}_{i[\cdot]}^t\), while each RES/CG determines the column vector \(\textbf{Es}_{[\cdot] i}^t\). In addition, the trade quantities of sellers are restricted to be non-negative: \begin{equation} \textbf{Es}^t\succeq \textbf{0} \end{equation} \subsubsection{Uncertainty Trading} In this paper, participation factors are adopted to model the bilateral uncertainty trading: $\alpha_{ij}^t$ denotes the participation factor based on which CG \textit{i} produces energy to compensate for the uncertainty $\boldsymbol{\omega_j^t}$, and $\beta_{ij}^t$ denotes the participation factor based on which user \textit{i} is willing to curtail its flexible load to compensate for $\boldsymbol{\omega_j^t}$. RESs and CGs, as well as users, should achieve consensus on these uncertainty transactions when reaching the equilibrium: \begin{equation} \alpha_{ij}^{r,t}+\alpha_{ij}^t=0, \quad \forall i\in\Omega_g,j\in\Omega_r \end{equation} \begin{equation} \beta_{ij}^{r,t}+\beta_{ij}^t=0,\quad \forall i\in\Omega_u,j\in\Omega_r \end{equation} Each RES decides the column vectors \(\textbf{A}_{[\cdot]i}^{r,t}\) and \(\textbf{B}_{[\cdot]i}^{r,t}\), while each CG/user decides the row vector \(\textbf{A}_{i[\cdot]}^{t}\)/\(\textbf{B}_{i[\cdot]}^{t}\).
Similarly, the participation factors cannot be negative: \begin{equation} \textbf{A}^t\succeq\textbf{0} \end{equation} \begin{equation} \textbf{B}^t\succeq\textbf{0} \end{equation} RES \textit{j} needs to match its forecast error with the participation factors through uncertainty trading, which means the sum of the participation factors must equal minus one (since $\alpha_{ij}^{r,t}$/$\beta_{ij}^{r,t}$ and $\alpha_{ij}^{t}$/$\beta_{ij}^{t}$ are opposite in sign): \begin{equation} \sum\limits_{i \in \Omega_g} {\alpha_{ij}^{r,t}}+ \sum\limits_{i \in \Omega_u} {\beta_{ij}^{r,t}}=-1,\quad\forall j \in \Omega_r \end{equation} \subsubsection{Carbon Market} As stated before, the players in the carbon market are users and RESs. The initial daily carbon allowances $\Psi_i^0$ are allocated to users, who then purchase or sell allowances to satisfy their individual constraints. Meanwhile, RESs, which own no initial allocation, have to purchase allowances to counterbalance emissions resulting from upward reserve provided by CGs. Via the sharing mechanism, the carbon allowance trading process can therefore be represented by the balancing constraint below: \begin{equation} \sum\limits_{i\in\Omega_u\cup\Omega_r}c_i=0 \end{equation} A user who sells allowances in the market can alternatively sell them to the community manager: \begin{equation} 0\leq c_i^s\leq M \cdot id_i \end{equation} \begin{equation} -M \cdot id_i\leq c_i\leq M \cdot (1-id_i) \end{equation} where $id_i$ is a binary variable denoting the identity of the user, i.e., 1 for a seller and 0 for a buyer.
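As a toy numerical illustration of the uncertainty-trading constraints (4), (5) and (8) (with hypothetical cleared factors, not data from this paper): since $\alpha_{ij}^{r,t}=-\alpha_{ij}^t$ and $\beta_{ij}^{r,t}=-\beta_{ij}^t$, the CG/user-side factors for each RES \textit{j} sum to one, so any realized shortfall $\boldsymbol{\omega_j^t}$ is exactly absorbed by the contracted reserve and flexibility:

```python
# Hypothetical cleared participation factors for one RES j.
alphas = {"CG1": 0.5, "CG2": 0.2}   # CG-side factors alpha_ij
betas  = {"U1": 0.1, "U2": 0.2}     # user-side factors beta_ij

# RES-side factors are the negatives, so constraint (8) reads sum = -1.
lhs = sum(-a for a in alphas.values()) + sum(-b for b in betas.values())
assert abs(lhs + 1.0) < 1e-12

# A realized shortfall omega_j is fully covered by reserve plus flexibility.
omega_j = 0.8
covered = sum(a * omega_j for a in alphas.values()) + \
          sum(b * omega_j for b in betas.values())
assert abs(covered - omega_j) < 1e-12
```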
After the clearance of the carbon market, each participant possesses a certain amount of carbon allowances $\Psi_i$: \begin{equation} {\Psi_i} = \begin{cases} c_i,&{\text{if}}\ \textit{i}\in\Omega_r,\\ \Psi_i^0+c_i-c_i^{s},&{\text{if}}\ \textit{i}\in\Omega_u \end{cases} \end{equation} \noindent \textbf{\textit{Remark}:} Note that the participants are not permitted to purchase additional allowances from the manager, since the total initial allocation is set as a cap for the whole community. \subsubsection{Individual Constraints} At the equilibrium of the energy market, the power set-point of each participant equals the sum of its trade quantities: \begin{equation} p_{u,i}^t=-(\textbf{1})^{\intercal}\cdot\textbf{Eb}_{i[\cdot]}^t,\quad \forall i \in \Omega_u \end{equation} \begin{equation} p_{r/g,i}^t=\textbf{1}\cdot\textbf{Es}_{[\cdot] i}^t, \quad \forall i \in \Omega_r\cup\Omega_g \end{equation} which is also bounded by the following constraints: \begin{equation} \underline{p}_{g,i}^t\leq p_{g,i}^t\leq\overline{p}_{g,i}^t,\quad \forall i \in \Omega_g \end{equation} \begin{equation} \underline{p}_{u,i}^t\leq p_{u,i}^t\leq\overline{p}_{u,i}^t,\quad \forall i \in \Omega_u \end{equation} \begin{equation} p_{r,i}^t+\hat{p}_{r,i}^t= P_{r,i}^t,\quad \forall i \in \Omega_r \end{equation} Following (17), we assume that the ``green energy'' not consumed in the community, $\hat{p}_{r,i}^t$, can be accommodated by the community manager. 
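The bilateral bookkeeping of constraints (2), (13) and (14) can be illustrated with a small numerical sketch; the 2-user-by-2-seller trade matrices below are hypothetical values, not data from the case study.

```python
import numpy as np

# Hypothetical trades for one time slot: rows are users, columns are sellers.
# Es[i, j]: energy sold by seller j to user i (non-negative, constraint (3)).
Es = np.array([[30.0, 10.0],
               [20.0, 40.0]])
# Market balance (2): purchases mirror sales with opposite sign.
Eb = -Es

# Constraint (13): user set-point is minus the sum of its purchase row.
p_u = -Eb @ np.ones(Es.shape[1])
# Constraint (14): seller set-point is the sum of its sales column.
p_s = np.ones(Es.shape[0]) @ Es

# At equilibrium, total consumption equals total supply.
print(p_u, p_s)   # [40. 60.] [50. 50.]
```

The column/row split mirrors the paper's convention that buyers control rows of $\textbf{Eb}$ and sellers control columns of $\textbf{Es}$.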
However, participating in uncertainty balancing induces deviations in the output of CGs and the energy consumption of users, which are given by: \begin{equation} \widetilde{p}_{g,i}^t=p_{g,i}^t-\boldsymbol{\omega^t}\cdot \textbf{A}_{i[\cdot]}^t, \quad \forall \textit{i} \in \Omega_g \end{equation} \begin{equation} \widetilde{p}_{u,i}^t=p_{u,i}^t+\boldsymbol{\omega^t}\cdot \textbf{B}_{i[\cdot]}^t,\quad\forall i \in \Omega_u \end{equation} where \(\boldsymbol{\omega^t}=\begin{pmatrix}\boldsymbol{\omega_1^t }& \boldsymbol{\omega_2^t} & \cdots & \boldsymbol{\omega_{|\Omega_r|}^t} \end{pmatrix}\) is a random row vector containing all RESs' uncertainties at time \textit{t}, and \(\widetilde{p}_{g,i}^t\)/\(\widetilde{p}_{u,i}^t\) denotes the actual energy set-point of CGs/users, which is a random variable. Therefore, to ensure the limits are respected, chance constraints are introduced: \begin{equation} \mathbb{P}(\widetilde{p}_{g,i}^t\leq\overline{p}_{g,i}^t)\geq 1-\varepsilon_{g,i},\quad \forall i \in \Omega_g \end{equation} \begin{equation} \mathbb{P}(\widetilde{p}_{u,i}^t\geq\underline{p}_{u,i}^t)\geq 1-\varepsilon_{u,i},\quad \forall i \in \Omega_u \end{equation} Constraints (20) and (21) enforce that the power limits are respected with a predefined probability \(1-\varepsilon_{g/u,i}\); they can be further converted into second-order cone formulations with the aid of the Chebyshev approximation \cite{summers2015stochastic}: \begin{equation} p_{g,i}^t-\textbf{M}^t\cdot\textbf{A}_{i[\cdot]}^t+z_{g,i}S_{g,i}^t\leq\overline{p}_{g,i}^t,\quad \forall i \in \Omega_g \end{equation} \begin{equation} -p_{u,i}^t-\textbf{M}^t\cdot\textbf{B}_{i[\cdot]}^t+z_{u,i}S_{u,i}^t\leq-\underline{p}_{u,i}^t,\quad\forall i \in \Omega_u \end{equation} where \(\textbf{M}^t=\mathbb{E}(\boldsymbol{\omega^t})\) is the mean value of \(\boldsymbol{\omega^t}\), and \(z_{g/u,i}=\sqrt{(1-\varepsilon_{g/u,i})/{\varepsilon_{g/u,i}}}\). 
The covariance matrix of \(\boldsymbol{\omega^t}\) is denoted as \(\boldsymbol{\Sigma}^t\) and the formulations for \(S_{g,i}^t/S_{u,i}^t\) are: \(S_{g,i}^t=\left\lVert (\boldsymbol{\Sigma}^t)^{1/2}(\textbf{A}_{i[\cdot]}^t)^\intercal\right\rVert_2\), \(S_{u,i}^t=\left\lVert (\boldsymbol{\Sigma}^t)^{1/2}(\textbf{B}_{i[\cdot]}^t)^\intercal\right\rVert_2\). Users' carbon allowances should cover their corresponding emissions: \begin{equation} CE_i=-\sum\limits_t\sum\limits_{j\in\Omega_g}\sigma_jEb_{ij}^t\leq \Psi_i,\quad \forall i\in \Omega_u \end{equation} While for RESs, potential carbon emissions incurred by dispatching upward reserve can be calculated as follows: \begin{equation} \widetilde{CE}_i=\sum\limits_t\sum\limits_{j\in\Omega_g}\sigma_j\alpha_{ji}^{r,t}\cdot \boldsymbol{\omega_i^t},\quad\forall i \in \Omega_r \end{equation} RES \textit{i} must procure sufficient allowances to cover the above emissions: \begin{equation} \mathbb{P}(\widetilde{CE}_i\leq \Psi_i)\geq 1-\varepsilon_{r,i},\quad \forall i \in \Omega_r \end{equation} Similarly, the above constraint can be transformed into a second-order cone constraint: \begin{equation} -\mathbb{E}(\boldsymbol{\omega_i})\cdot\textbf{m}_i+\sqrt{(1-\varepsilon_{r,i})/\varepsilon_{r,i}}\left\lVert\boldsymbol{\Xi_i}^{1/2}(\textbf{m}_i)^\intercal\right\rVert_2\leq \Psi_i \end{equation} \begin{equation} m_i^t=-\sum\limits_{j\in\Omega_g}\sigma_j\alpha_{ji}^{r,t},\quad \forall i \in \Omega_r \end{equation} where \(\boldsymbol{\omega_i}=\begin{pmatrix} \boldsymbol{\omega_i^1} & \boldsymbol{\omega_i^2} & \cdots &\boldsymbol{\omega_i^T} \end{pmatrix}\) is a row vector containing RES \textit{i}'s uncertainties throughout the scheduling horizon and \(\textbf{m}_i=\begin{pmatrix} m_i^1 & m_i^2 & \cdots &m_i^T \end{pmatrix}\). \(\boldsymbol{\Xi_i}\) is the covariance matrix of \(\boldsymbol{\omega_i}\). 
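To make the Chebyshev-based reformulation concrete, the sketch below evaluates the deterministic left-hand side of (22) for hypothetical data and checks by Monte Carlo that the implied violation probability stays below $\varepsilon$; all numerical values are illustrative, not taken from the case study.

```python
import numpy as np

rng = np.random.default_rng(0)

def soc_lhs(p, a, M, Sigma, eps):
    """Deterministic surrogate for the chance constraint (20):
    P(p - w @ a <= ub) >= 1 - eps holds whenever soc_lhs(...) <= ub,
    with z = sqrt((1 - eps)/eps) and S = ||Sigma^{1/2} a||_2."""
    z = np.sqrt((1.0 - eps) / eps)
    S = np.sqrt(a @ Sigma @ a)          # equals ||Sigma^{1/2} a||_2
    return p - M @ a + z * S

# Hypothetical CG hedging two RES uncertainties.
p = 200.0
a = np.array([0.4, 0.3])                # participation-factor row A_{i[.]}
M = np.array([-5.0, -3.0])              # E[w]: negative forecast errors
Sigma = np.array([[4.0, 0.5],
                  [0.5, 2.0]])
eps = 0.05
ub = soc_lhs(p, a, M, Sigma, eps)       # tightest limit still certified

# Monte Carlo check: empirical violation frequency must not exceed eps.
w = rng.multivariate_normal(M, Sigma, size=200_000)
viol = np.mean(p - w @ a > ub)
print(viol <= eps)
```

Since the Chebyshev bound is distribution-free, the empirical violation rate under a Gaussian sample is typically far below $\varepsilon$, which is exactly the conservatism the approximation trades for tractability.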
\subsubsection{Expected Social Welfare Maximization Problem} It is assumed that all market participants collaboratively minimize the overall cost of the group. Therefore, the objective function can be formulated as follows: \begin{equation} \begin{split} obj=\sum\limits_t[\mathbb{E}(\sum\limits_{i\in\Omega_g}C_i(\widetilde{p}_{g,i}^t))-\mathbb{E}(\sum\limits_{i\in\Omega_u}U_i(\widetilde{p}_{u,i}^t))-\sum\limits_{i\in\Omega_r}r_e^{t}\hat{p}_{r,i}^t]-\sum\limits_{i\in\Omega_u}r_c^sc_i^s \end{split} \end{equation} where \(C_i(p)=c_{2,i}p^2+c_{1,i}p+c_{0,i}\) and \(U_i(p)=d_{2,i}p^2+d_{1,i}p\); \(r_e^{t}\) is the selling price for renewable generation at time \textit{t}, and $r_c^s$ is the selling price for carbon allowances. Substituting (18) and (19) into (29), the objective can be further converted into the following expression: \begin{equation} \begin{split} obj & = \sum\limits_t {[\sum\limits_{i \in {\Omega _g}} {({c_{2,i}}{{(p_{g,i}^t)}^2} + {c_{1,i}}p_{g,i}^t + {c_{0,i}}} } - (2{c_{2,i}}p_{g,i}^t+ {c_{1,i}})(\textbf{M}^t\cdot\textbf{A}_{i[\cdot]}^t) + {c_{2,i}}({(\textbf{M}^t\cdot\textbf{A}_{i[\cdot]}^t)^2} \\ & + {(S_{g,i}^t)^2})) - \sum\limits_{i \in {\Omega _u}} {({d_{2,i}}{{(p_{u,i}^t)}^2}}+ {d_{1,i}}p_{u,i}^t + (2{d_{2,i}}p_{u,i}^t + {d_{1,i}})(\textbf{M}^t\cdot\textbf{B}_{i[\cdot]}^t)\\ &+ {d_{2,i}}({(\textbf{M}^t\cdot\textbf{B}_{i[\cdot]}^t)^2}+ {(S_{u,i}^t)^2})) - \sum\limits_{i\in\Omega_r}r_e^{t}\hat{p}_{r,i}^t] -\sum\limits_{i\in\Omega_u}r_{c}^sc_i^s\\ \end{split} \end{equation} Summing up, the problem can be formulated as: \begin{equation} \begin{aligned} &\min &&obj\\ &\textrm{s.t.} \quad(2)-(17),&& (22)-(24), \quad(27)-(28) \label{eq:LagrangeRelax} \end{aligned} \end{equation} \section{Distributed Solution Techniques} \label{3} In order to solve (31) in a privacy-preserving manner, two obstacles need to be addressed: 1) The uncertainties bring bilinear terms into the objective function (30), which makes it a nonconvex function. 
2) The constraints (2), (4)-(5) and (9) are coupled among different participants. In this section, we provide a two-stage iterative method built around a Relax-ADMM-Contraction loop, as described below. \subsection{Convexification of the Objective Function--- Relax} The bilinear terms can be eliminated through McCormick envelopes \cite{mccormick1976computability}. First, the following auxiliary variables are introduced for simplicity: \begin{equation} \pi_{g,i}^t=\textbf{M}^t\cdot\textbf{A}_{i[\cdot]}^t,\quad\forall i \in \Omega_g \end{equation} \begin{equation} \pi_{u,i}^t=\textbf{M}^t\cdot\textbf{B}_{i[\cdot]}^t,\quad\forall i \in \Omega_u \end{equation} \begin{equation} \chi_i^t=p_{g,i}^t\pi_{g,i}^t,\quad\forall i \in \Omega_g \end{equation} \begin{equation} \varphi_i^t=p_{u,i}^t\pi_{u,i}^t,\quad\forall i \in \Omega_u \end{equation} The lower and upper bounds of $\pi_{g,i}^t$ and $\pi_{u,i}^t$ can be easily deduced as follows: \begin{equation} \underline{\pi}_{g,i}^t=\underline{\pi}_{u,i}^t=-\left\lVert\textbf{M}^t\right\rVert_\infty \end{equation} \begin{equation} \overline{\pi}_{g,i}^t=\overline{\pi}_{u,i}^t=0 \end{equation} Then the McCormick envelope is employed to reformulate the objective as a convex function: \begin{equation} \begin{split} obj & = \sum\limits_t {[\sum\limits_{i \in {\Omega _g}} {({c_{2,i}}{{(p_{g,i}^t)}^2} + {c_{1,i}}p_{g,i}^t + {c_{0,i}} - 2{c_{2,i}}\chi _i^t} }- {c_{1,i}}\pi _{g,i}^t \\ & + {c_{2,i}}({(\pi _{g,i}^t)^2} + {(S_{g,i}^t)^2})) - \sum\limits_{i \in {\Omega _u}} {({d_{2,i}}{{(p_{u,i}^t)}^2}}+ {d_{1,i}}p_{u,i}^t + 2{d_{2,i}}\varphi _i^t \\ & + {d_{1,i}}\pi _{u,i}^t + {d_{2,i}}({(\pi _{u,i}^t)^2}+ {(S_{u,i}^t)^2})) - \sum\limits_{i \in {\Omega _r}} {{r_e^{t}}\hat{p}_{r,i}^t}] -\sum\limits_{i\in\Omega_u}r_{c}^sc_i^s \end{split} \end{equation} Additional constraints need to be incorporated: \begin{subequations}\label{eq:2} \begin{align} \chi_i^t&\geq 
\underline{p}_{g,i}^t\pi_{g,i}^t+\underline{\pi}_{g,i}^tp_{g,i}^t-\underline{p}_{g,i}^t \underline{\pi}_{g,i}^t\\ \chi_i^t&\geq \overline{p}_{g,i}^t\pi_{g,i}^t+\overline{\pi}_{g,i}^tp_{g,i}^t-\overline{p}_{g,i}^t \overline{\pi}_{g,i}^t\\ \chi_i^t&\leq \overline{p}_{g,i}^t\pi_{g,i}^t+\underline{\pi}_{g,i}^tp_{g,i}^t-\overline{p}_{g,i}^t \underline{\pi}_{g,i}^t\\ \chi_i^t&\leq \underline{p}_{g,i}^t\pi_{g,i}^t+\overline{\pi}_{g,i}^tp_{g,i}^t-\underline{p}_{g,i}^t \overline{\pi}_{g,i}^t\\ \varphi_i^t&\geq \underline{p}_{u,i}^t\pi_{u,i}^t+\underline{\pi}_{u,i}^tp_{u,i}^t-\underline{p}_{u,i}^t \underline{\pi}_{u,i}^t\\ \varphi_i^t&\geq \overline{p}_{u,i}^t\pi_{u,i}^t+\overline{\pi}_{u,i}^tp_{u,i}^t-\overline{p}_{u,i}^t \overline{\pi}_{u,i}^t\\ \varphi_i^t&\leq \overline{p}_{u,i}^t\pi_{u,i}^t+\underline{\pi}_{u,i}^tp_{u,i}^t-\overline{p}_{u,i}^t \underline{\pi}_{u,i}^t\\ \varphi_i^t&\leq \underline{p}_{u,i}^t\pi_{u,i}^t+\overline{\pi}_{u,i}^tp_{u,i}^t-\underline{p}_{u,i}^t \overline{\pi}_{u,i}^t \end{align} \end{subequations} Following the above procedure, the objective function is transformed into a convex function. \subsection{Distributed Negotiation Mechanism--- ADMM} A decentralized market mechanism is essential for preserving the transparency and privacy of the joint market and is expected to motivate players in the community to participate. In this paper, a distributed optimization method based on ADMM is adopted to split the global optimization problem into smaller, individual optimization problems. These local problems are solved by the market players with limited information exchange with the community manager. Based on the exchange form of ADMM \cite{boyd2011distributed}, the whole procedure for solving (31) is presented as follows. \subsubsection{Local Optimization of Each Player} In the remainder, the cost/utility of each CG/user in (38) will be denoted as $\hat{C}_i^t/\hat{U}_i^t$ for simplicity. 
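Each local problem below shares the same proximal structure: a private cost, a linear dual-price term, and a quadratic penalty around the last global estimate. A minimal single-variable sketch, with a hypothetical quadratic cost $(c/2)x^2$ standing in for the actual $\hat{C}_i^t/\hat{U}_i^t$:

```python
def local_update(c, lam, rho, x_k, x_hat):
    """Closed-form minimiser of (c/2)*x**2 + lam*x + (rho/2)*(x - x_k + x_hat)**2,
    the scalar analogue of the local updates (40)-(42): set the derivative
    c*x + lam + rho*(x - x_k + x_hat) to zero and solve for x."""
    return (rho * (x_k - x_hat) - lam) / (c + rho)

# Hypothetical player: cost coefficient 2, dual price 1, penalty weight 10.
x = local_update(c=2.0, lam=1.0, rho=10.0, x_k=0.5, x_hat=0.1)
print(x)  # -> 0.25; stationarity: 2*x + 1 + 10*(x - 0.5 + 0.1) == 0
```

In the actual market each player solves this kind of proximal problem over its full decision-variable set with a convex solver rather than in closed form; the sketch only shows why the dual price and the penalty steer each player toward consensus.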
For each user \textit{i}, its decision variable set is $\xi_i^u=\{\textbf{p}_{u,i}, \textbf{Eb}_{i[\cdot]}, \textbf{B}_{i[\cdot]}, \boldsymbol{\pi}_{u,i}, \boldsymbol{\varphi}_i, c_i, c_i^s, id_i\}$. The local optimization problem of user \textit{i} at a given iteration \textit{k} is: \begin{equation} \setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt} \begin{alignedat}{2} &\xi_i^{u(k+1)} &&=\arg\min\sum\limits_{t} \big[-\hat{U}_i^t+\sum\limits_{j\in\Omega_r}\lambda_{ij}^{t(k)}\beta_{ij}^t+\sum\limits_{j\in\Omega_r}\frac{\rho}{2}(\beta_{ij}^t-\beta_{ij}^{t(k)}+\hat{\beta}_{ij}^{t(k)})^2\\ & &&+\sum\limits_{j\in\Omega_r\cup\Omega_g}\upsilon_{ij}^{t(k)}Eb_{ij}^{t}+\sum\limits_{j\in\Omega_r\cup\Omega_g}\frac{\gamma}{2}(Eb_{ij}^{t}-Eb_{ij}^{t(k)}+\hat{E}_{ij}^{t(k)})^2 \big]\\ & &&+\theta^{(k)}c_i+\frac{\phi}{2}(c_i-c_i^{(k)}+\overline{c}^{(k)})^2-r_{c}^sc_i^s\\ & \textrm{s.t.} &&(7), (10)-(13), (16), (23)-(24), (33), (39) \label{eq:Relax} \end{alignedat} \end{equation} For each RES \textit{i}, its decision variable set is $\xi_i^r=\{\textbf{p}_{r,i}, \textbf{Es}_{[\cdot] i}, \textbf{A}_{[\cdot] i}^r,\textbf{B}_{[\cdot] i}^r, c_i\}$. 
The local optimization problem of RES \textit{i} at a given iteration \textit{k} is: \begin{equation} \begin{alignedat}{2} &\xi_i^{r(k+1)} &&=\arg\min\sum\limits_{t}\big[-r_e^{t}\hat{p}_{r,i}^t+\sum\limits_{j\in\Omega_u}\lambda_{ji}^{t(k)}\beta_{ji}^{r,t}+\sum\limits_{j\in\Omega_u}\frac{\rho}{2}(\beta_{ji}^{r,t}-\beta_{ji}^{r,t(k)}+\hat{\beta}_{ji}^{t(k)})^2\\ & &&+\sum\limits_{j\in\Omega_g}\eta_{ji}^{t(k)}\alpha_{ji}^{r,t}+\sum\limits_{j\in\Omega_g}\frac{\tau}{2}(\alpha_{ji}^{r,t}-\alpha_{ji}^{r,t(k)}+\hat{\alpha}_{ji}^{t(k)})^2+\sum\limits_{j\in\Omega_u}\upsilon_{ji}^{t(k)}Es_{ji}^t\\ & &&+\sum\limits_{j\in\Omega_u}\frac{\gamma}{2}(Es_{ji}^t-Es_{ji}^{t(k)}+\hat{E}_{ji}^{t(k)})^2\big]+\theta^{(k)}c_i+\frac{\phi}{2}(c_i-c_i^{(k)}+\overline{c}^{(k)})^2\\ & \textrm{s.t.} &&(3), (8), (12), (14), (17), (27)-(28) \end{alignedat} \end{equation} For each CG \textit{i}, its decision variable set is $\xi_i^g=\{\textbf{p}_{g,i},\textbf{Es}_{[\cdot] i},\textbf{A}_{i[\cdot]},\boldsymbol{\pi}_{g,i},\boldsymbol{\chi}_i\}$. The local optimization problem of CG \textit{i} at a given iteration \textit{k} is: \begin{equation} \begin{alignedat}{2} &\xi_i^{g(k+1)} &&=\arg\min\sum\limits_t\big[\hat{C}_i^t+\sum\limits_{j\in\Omega_r}\eta_{ij}^{t(k)}\alpha_{ij}^t+\sum\limits_{j\in\Omega_r}\frac{\tau}{2}(\alpha_{ij}^t-\alpha_{ij}^{t(k)}+\hat{\alpha}_{ij}^{t(k)})^2\\ & &&+\sum\limits_{j\in\Omega_u}\upsilon_{ji}^{t(k)}Es_{ji}^t+\sum\limits_{j\in\Omega_u}\frac{\gamma}{2}(Es_{ji}^t-Es_{ji}^{t(k)}+\hat{E}_{ji}^{t(k)})^2\big]\\ & \textrm{s.t.} &&(3), (6), (14)-(15), (22), (32), (39) \end{alignedat} \end{equation} \subsubsection{Global Variable Update} After gathering all the local information from the market players, the community manager is in charge of updating the global variables and then broadcasting the results to all the players. 
To be specific, the update procedure at a given iteration \textit{k} is as follows: \begin{subequations}\label{eq:4} \begin{align} \hat{\alpha}_{ij}^{t(k+1)}&=\frac{1}{2}(\alpha_{ij}^{t(k+1)}+\alpha_{ij}^{r,t(k+1)})\label{eq:4a}\\ \hat{\beta}_{ij}^{t(k+1)}&=\frac{1}{2}(\beta_{ij}^{t(k+1)}+\beta_{ij}^{r,t(k+1)})\label{eq:4b}\\ \hat{E}_{ij}^{t(k+1)}&=\frac{1}{2}(Eb_{ij}^{t(k+1)}+Es_{ij}^{t(k+1)})\label{eq:4c}\\ \overline{c}^{(k+1)}&=\frac{1}{\vert\Omega_u\cup\Omega_r\vert}\sum\limits_ic_i^{(k+1)} \end{align} \end{subequations} \subsubsection{Dual Price Update} At the end of each iteration, the dual prices need to be updated following the steps below: \begin{subequations}\label{eq:5} \begin{align} \theta^{(k+1)} &=\theta^{(k)}+\phi\overline{c}^{(k+1)}\label{eq:5A}\\ \lambda_{ij}^{t(k+1)} &=\lambda_{ij}^{t(k)}+\rho\hat{\beta}_{ij}^{t(k+1)} \label{eq:5B}\\ \eta_{ij}^{t(k+1)} &=\eta_{ij}^{t(k)}+\tau\hat{\alpha}_{ij}^{t(k+1)} \label{eq:5D}\\ \upsilon_{ij}^{t(k+1)} &=\upsilon_{ij}^{t(k)}+\gamma\hat{E}_{ij}^{t(k+1)}\label{eq:5G} \end{align} \end{subequations} \subsubsection{Stopping Criteria} The above problem is a convex one except for the nonconvex constraints (10) and (11). Nevertheless, since the non-convexity arises from Boolean constraints and only exists in each user's local problem, the ADMM procedure can still be carried out \cite{9142270,boyd2011distributed}. 
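Assuming the manager holds the iterates as NumPy arrays, one coordination step for the reserve trades, averaging as in (43a) followed by the dual-price update as in (44c), can be sketched as follows; the matrices are hypothetical.

```python
import numpy as np

def manager_step(alpha, alpha_r, eta, tau):
    """Average the two sides of the consensus constraint alpha + alpha^r = 0
    as in (43a), then raise the dual price on the remaining mismatch as in (44c).
    The price stops moving exactly when the consensus residual reaches zero."""
    alpha_hat = 0.5 * (alpha + alpha_r)   # consensus residual
    eta_new = eta + tau * alpha_hat       # dual price update
    return alpha_hat, eta_new

# Hypothetical 2x2 iterates: the two sides are nearly opposite in sign.
alpha   = np.array([[0.30, 0.20], [0.10, 0.40]])
alpha_r = np.array([[-0.28, -0.22], [-0.12, -0.38]])
eta     = np.zeros((2, 2))

alpha_hat, eta_next = manager_step(alpha, alpha_r, eta, tau=1.0)
print(alpha_hat)   # small residuals now penalised through eta_next
```

The updates for $\hat{\beta}$, $\hat{E}$ and $\overline{c}$ follow the same averaging pattern with their own step sizes $\rho$, $\gamma$ and $\phi$.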
The proposed distributed mechanism is considered converged once the local residuals fall below the global stopping criteria: \begin{subequations}\label{6} \begin{align} se^{(k)}&=\sum\limits_{t}\Vert \textbf{Es}^{t(k)}+\textbf{Eb}^{t(k)}\Vert_F^2\leq \epsilon^{pri}_{e}\\ sr^{(k)}&=\sum\limits_{t}\Vert \textbf{A}^{t(k)}+\textbf{A}^{r,t(k)}\Vert_F^2\leq \epsilon^{pri}_{r}\\ sd^{(k)}&=\sum\limits_{t}\Vert \textbf{B}^{t(k)}+\textbf{B}^{r,t(k)}\Vert_F^2\leq \epsilon^{pri}_{d}\\ sc^{(k)}&=(\sum\limits_ic_i^{(k)})^2\leq \epsilon^{pri}_{c}\\ te^{(k)}&=\sum\limits_{t}\Vert\hat{\textbf{E}}^{t(k)}-\hat{\textbf{E}}^{t(k-1)}\Vert_F^2\leq\epsilon^{dual}_{e}\\ tr^{(k)}&=\sum\limits_{t}\Vert\hat{\textbf{A}}^{t(k)}-\hat{\textbf{A}}^{t(k-1)}\Vert_F^2\leq\epsilon^{dual}_{r}\\ td^{(k)}&=\sum\limits_{t}\Vert\hat{\textbf{B}}^{t(k)}-\hat{\textbf{B}}^{t(k-1)}\Vert_F^2\leq\epsilon^{dual}_{d}\\ tc^{(k)}&=(\overline{c}^{(k)}-\overline{c}^{(k-1)})^2\leq\epsilon^{dual}_{c} \end{align} \end{subequations} where $\Vert \cdot\Vert_F$ denotes the Frobenius norm, and $\epsilon_e^{pri},\ldots,\epsilon_c^{dual}$ are the corresponding thresholds. \subsection{Tightening Bound--- Contraction} The traditional McCormick envelope relaxes the bilinear terms at the expense of accuracy and feasibility: the relaxed version of the market model above yields a lower-bound solution without guaranteeing the feasibility of the original model. Hence, a heuristic bound contraction algorithm modified from \cite{deng2021optimal} is adopted in this paper to improve the precision of the McCormick envelopes by iteratively strengthening the bounds of $p_{g,i}^t,p_{u,i}^t,\pi_{g,i}^t$ and $\pi_{u,i}^t$. This is achieved by using a decreasing scalar to tighten the bounds around the solutions from the last iteration. Besides, as stated in \cite{deng2021optimal}, the updated bounds should be intersected with the initial bounds to ensure the feasibility of the original model. 
Therefore, at a given iteration $n$, the bounds should be updated based on the following rules: \begin{subequations} \begin{align} \underline{p}_{g,i}^{t}&=\max\{(1-\epsilon^n)p_{g,i}^{t*},\underline{p}_{g,i}^{t,ini}\} \\ \overline{p}_{g,i}^{t}&=\min\{(1+\epsilon^n)p_{g,i}^{t*},\overline{p}_{g,i}^{t,ini}\} \\ \underline{p}_{u,i}^{t}&=\max\{(1-\epsilon^n)p_{u,i}^{t*},\underline{p}_{u,i}^{t,ini}\} \\ \overline{p}_{u,i}^{t}&=\min\{(1+\epsilon^n)p_{u,i}^{t*},\overline{p}_{u,i}^{t,ini}\} \\ \underline{\pi}_{g,i}^{t}&=\max\{(1+\epsilon^n)\pi_{g,i}^{t*},\underline{\pi}_{g,i}^{t,ini}\} \\ \overline{\pi}_{g,i}^{t}&=\min\{(1-\epsilon^n)\pi_{g,i}^{t*},\overline{\pi}_{g,i}^{t,ini}\} \\ \underline{\pi}_{u,i}^{t}&=\max\{(1+\epsilon^n)\pi_{u,i}^{t*},\underline{\pi}_{u,i}^{t,ini}\} \\ \overline{\pi}_{u,i}^{t}&=\min\{(1-\epsilon^n)\pi_{u,i}^{t*},\overline{\pi}_{u,i}^{t,ini}\} \end{align} \end{subequations} where $\epsilon^n=\epsilon^{n-1}-\kappa$ is a decreasing scalar, $(\cdot)^*$ denotes the solution from the last iteration and $(\cdot)^{ini}$ denotes the bound used in the first iteration. The signs in the update rules for $p_{g/u,i}^t$ and $\pi_{g/u,i}^t$ differ because $\pi_{u,i}^t$ and $\pi_{g,i}^t$ are always non-positive while $p_{u,i}^t$ and $p_{g,i}^t$ are always non-negative. Once the bounds of the decision variables are updated, the McCormick envelopes (39) are updated accordingly. The update procedure terminates once the maximal relative error of the bilinear constraints (34) and (35) falls below a reasonable level: \begin{subequations} \begin{align} err^g&=\max\limits_{t,i}\vert(\chi_i^t-p_{g,i}^t\pi_{g,i}^t)/\chi_i^t\vert \leq\delta^g\\ err^u&=\max\limits_{t,i}\vert(\varphi_i^t-p_{u,i}^t\pi_{u,i}^t)/\varphi_i^t\vert\leq\delta^u \end{align} \end{subequations} To sum up, the whole procedure of the Relax-ADMM-Contraction loop is presented in Algorithm 1. 
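A numeric sketch of the envelope (39a)-(39d) combined with one contraction step, using hypothetical bounds and solution values; it shows the true bilinear value staying inside the envelope while the relaxation gap shrinks around the last solution.

```python
def mccormick_bounds(p, pi, pl, pu, gl, gu):
    """Envelope bounds for chi = p * pi on the box [pl, pu] x [gl, gu],
    mirroring (39a)-(39d): lo <= p * pi <= hi for any point in the box."""
    lo = max(pl * pi + gl * p - pl * gl, pu * pi + gu * p - pu * gu)
    hi = min(pu * pi + gl * p - pu * gl, pl * pi + gu * p - pl * gu)
    return lo, hi

# Hypothetical last solution; p >= 0 and pi <= 0 as in the model.
p, pi = 120.0, -1.5
pl, pu, gl, gu = 0.0, 260.0, -4.0, 0.0
lo, hi = mccormick_bounds(p, pi, pl, pu, gl, gu)

# One contraction step following the rules above: shrink the box toward the
# solution, intersected with the initial bounds (signs flip since pi <= 0).
eps_n = 0.2
pl2, pu2 = max((1 - eps_n) * p, pl), min((1 + eps_n) * p, pu)
gl2, gu2 = max((1 + eps_n) * pi, gl), min((1 - eps_n) * pi, gu)
lo2, hi2 = mccormick_bounds(p, pi, pl2, pu2, gl2, gu2)

print(lo, hi)      # roughly -390, 0      : initial envelope around p*pi = -180
print(lo2, hi2)    # roughly -187.2, -172.8: much tighter after one contraction
```

As the box collapses onto the incumbent solution, the relative error in (47) is driven toward zero, which is exactly the termination test of the Contraction stage.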
\begin{figure}[!t] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \let\@latex@error\@gobble \begin{algorithm}[H] \caption{Relax-ADMM-Contraction} \begin{algorithmic}[1] \Ensure $\hat{\textbf{E}}^t,\hat{\textbf{A}}^t,\hat{\textbf{B}}^t,c_i,p_{g,i}^t,p_{u,i}^t,p_{r,i}^t$ \Repeat \Comment{Tightening McCormick Envelope} \State Derive the relaxed original problem (38)-(39). 
\State \textbf{Initialization:} $k\gets0$, dual prices, global variables \Repeat \Comment{ADMM Procedure} \State \textbf{Local optimization of each player:} \ForAll{$i\in\Omega_u$} \State Solve (40) and obtain $\xi_i^{u(k+1)}$ \EndFor \ForAll{$i\in\Omega_r$} \State Solve (41) and obtain $\xi_i^{r(k+1)}$ \EndFor \ForAll{$i\in\Omega_g$} \State Solve (42) and obtain $\xi_i^{g(k+1)}$ \EndFor \State \textbf{Community manager coordination:} \State Update and broadcast global variables: (43) \State Update and broadcast dual prices: (44) \State $k\gets k+1$ \Until{convergence conditions (45) are satisfied} \State \textbf{Bound Contraction:} \ForAll{$i\in\Omega_u\cup\Omega_g$} \State Update variable bounds: (46) \State Update local constraints: (39) \EndFor \State $\epsilon^{n+1}=\epsilon^n-\kappa$ \State $n\gets n+1$ \Until{convergence conditions (47) are satisfied} \end{algorithmic} \end{algorithm} \vspace{-6mm} \end{figure} \noindent \textbf{\textit{Remark}:} To accelerate the convergence, the dual prices and global variables obtained at the outer iteration $n-1$ are adopted as initial values at iteration $n$. The effectiveness of this warm-start method will be illustrated in Section \ref{4}. We now show that the dual prices constitute the competitive market equilibrium prices. \noindent \textbf{\textit{Proposition 1.}} Taking $\pi^{e,t}_{ij}=-\upsilon_{ij}^t, \pi^{u,t}_{ij}=-\eta_{ij}^t, \pi^{d,t}_{ij}=-\lambda_{ij}^t, \pi^c=\theta$ as the bilateral energy prices, upward reserve prices, flexibility prices and carbon allowance price, respectively, yields a competitive market equilibrium. 
\noindent \textbf{\textit{Proof.}} We first define the individual profit-maximizing problem for RES \textit{i} as follows: \begin{equation} \begin{alignedat}{2} &\min \quad&&\sum\limits_t\big[-r_e^{t}\hat{p}_{r,i}^t-\sum\limits_{j\in\Omega_u}\pi_{ji}^{d,t}\beta_{ji}^{r,t}-\sum\limits_{j\in\Omega_g}\pi_{ji}^{u,t}\alpha_{ji}^{r,t}\\ & &&-\sum\limits_{j\in\Omega_u}\pi_{ji}^{e,t}Es_{ji}^t\big]+\pi^cc_i\\ &\rm{s.t.}\quad&&(3), (8), (12), (14), (17), (27)-(28) \end{alignedat} \end{equation} It can be inferred from (41) that, after the market converges, its outcome solves the following local optimization problem for RES \textit{i}: \begin{equation} \begin{alignedat}{2} &\min \quad&&\sum\limits_t\big[-r_e^{t}\hat{p}_{r,i}^t+\sum\limits_{j\in\Omega_u}\lambda_{ji}^{t}\beta_{ji}^{r,t}+\sum\limits_{j\in\Omega_g}\eta_{ji}^{t}\alpha_{ji}^{r,t}+\sum\limits_{j\in\Omega_u}\upsilon_{ji}^{t}Es_{ji}^t\big]+\theta c_i\\ &\rm{s.t.}\quad&&(3), (8), (12), (14), (17), (27)-(28) \end{alignedat} \end{equation} Applying $\pi^{e,t}_{ij}=-\upsilon_{ij}^t, \pi^{u,t}_{ij}=-\eta_{ij}^t, \pi^{d,t}_{ij}=-\lambda_{ij}^t, \pi^c=\theta$, the market outcome likewise solves the individual profit-maximizing problem defined above. The same procedure can be applied to users and CGs as well. Based on \cite[Definition 1]{o2005efficient}, the set of prices $\{\pi^{e,t}_{ij},\pi^{u,t}_{ij},\pi^{d,t}_{ij},\pi^c\}$ constitutes a competitive equilibrium. \qed \section{Case Study}\label{4} \subsection{Case Parameters} In this study, the energy community consists of eight market participants: three MTs, three users and two PV generators. The parameters of the MTs and users are listed in Tables \ref{t1} and \ref{t2}. The forecast output curves of the PVs are depicted in Fig. 2 and the standard deviations $\sigma_i^t$ of the PV prediction errors are set to 10\% of the predicted outputs. 
Then, the standard deviation and mean value of the negative error component are generated based on the following rules\cite{gonzalez2021electricity}: \begin{equation} \delta_i^t=\sigma_i^t\sqrt{\frac{\pi-2}{\pi}},\quad\mu_i^t=\sigma_i^t\sqrt{\frac{2}{\pi}} \end{equation} \begin{table}[htp] \begin{center} \caption{Parameters of micro-turbines} \label{t1} \renewcommand{\arraystretch}{1.2} \begin{tabular}{| c | c | c | c |} \hline Parameters & MT1 & MT2 & MT3\\ \hline $c_0\;\big[\$\big]$ & 2.01 & 2.01 & 2.03\\ \hline $c_1\;\big[\$/\rm{kW}\big]$ & 0.045 & 0.050 & 0.052\\ \hline $c_2\;\big[\$/\rm{kW}^2\big]$ & 0.00021 & 0.00021 & 0.00019\\ \hline $\overline{p}_g\;\big[\rm{kW}\big]$ & 260 & 270 & 220\\ \hline $\underline{p}_g\;\big[\rm{kW}\big]$ & 0 & 0 & 0\\ \hline $\sigma_i\;\big[\rm{kg}\cdot \rm{kW}^{-1}\big]$ & 0.870 & 0.935 & 0.910\\ \hline \end{tabular} \end{center} \vspace{-6mm} \end{table} \begin{table}[htp] \begin{center} \caption{Parameters of users} \label{t2} \renewcommand{\arraystretch}{1.2} \begin{tabular}{| c | c | c | c |} \hline Parameters & U1 & U2 & U3\\ \hline $d_1\;\big[\$/\rm{kW}\big]$ & 0.0870 & 0.0765 & 0.0600\\ \hline $d_2\;\big[\$/\rm{kW}^2\big]$ & -0.00014 & -0.00014 & -0.000125\\ \hline \end{tabular} \end{center} \vspace{-6mm} \end{table} \begin{figure}[htp] \centering \includegraphics[width=0.8\linewidth]{loadPV} \caption{Day-ahead PV forecast output and upper bound of loads} \label{fig2} \vspace{-3mm} \end{figure} The upper bound of the load profile of each user is shown in Fig. 2 and the lower bound is set to 40\% of the upper bound. The selling price for carbon allowances $r_c^s$ is set as \$0.003/kg and the day-ahead selling price for electricity $r_e^t$ is set as \$0.06/kWh. The initial carbon allowance allocation is based on an equal per capita allocation, which is 1800 kg per user in this study. All the confidence levels for chance constraints are set as 0.95. 
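The rule above is exactly the mean and standard deviation of a half-normal distribution; a standalone Monte Carlo check (the $\sigma$ value is arbitrary, not tied to the case-study data):

```python
import numpy as np

rng = np.random.default_rng(1)

# |N(0, sigma)| is half-normal: mean sigma*sqrt(2/pi), std sigma*sqrt((pi-2)/pi),
# matching the mu_i^t / delta_i^t rule used for the negative error component.
sigma = 10.0
samples = np.abs(rng.normal(0.0, sigma, size=1_000_000))

mu_theory = sigma * np.sqrt(2.0 / np.pi)               # about 7.98
delta_theory = sigma * np.sqrt((np.pi - 2.0) / np.pi)  # about 6.03
print(abs(samples.mean() - mu_theory) < 0.05)
print(abs(samples.std() - delta_theory) < 0.05)
```

With a million samples both empirical moments land well within the tolerance, confirming the closed-form rule used to generate $\delta_i^t$ and $\mu_i^t$.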
In the ADMM procedure, the tolerance levels for the primal residuals are set to $10^{-6}$ for $sr$ and $sd$, and $10^{-4}$ for $se$ and $sc$. The tolerance levels for the dual residuals are set equal to the corresponding primal ones. The stopping thresholds for the bound contraction are set to $10^{-2}$. All optimization problems are solved on the MATLAB 2021b platform using the Gurobi solver \cite{gurobi} together with YALMIP \cite{yalmip}. The simulations run on a computer with an AMD Ryzen 7 5800H @3.20GHz and 16 GB of RAM. \subsection{Market Outcome} In this part, we first discuss the outcomes of the proposed joint market. Fig. 3 and Fig. 4 present the outcomes of electricity trading and uncertainty balancing, respectively. Fig. 3 illustrates the consumption profile of each user, including the load, the lower bound of the load (Lb) and the components of the load. It can be seen that for each user, electricity purchases and demand balance in each time slot. In addition, since MT3 is more expensive and has a relatively high carbon intensity, users procure the least energy from MT3, as revealed in Fig. 3. Conversely, MT1, which has the lowest cost coefficients and carbon intensity, accounts for the largest share among the three MTs. This also explains the absence of MT3 from the uncertainty balance depicted in Fig. 4, which illustrates the participation factors of users and MTs in compensating for the uncertainty. For each PV, the sum of the participation factors of users and MTs equals 1, which implies that the forecast error can be fully offset. Meanwhile, User 3 does not participate in the uncertainty balance in any time period because it has reached its lower bound in the energy market, as shown in Fig. 3, which excludes it from providing flexibility. 
\begin{figure}[htp] \centering \subfloat[User 1]{\includegraphics[width=0.45\textwidth]{ELectricity_outcome_U1.pdf}}\hfill \subfloat[User 2]{\includegraphics[width=0.45\textwidth]{ELectricity_outcome_U2.pdf}}\par \subfloat[User 3]{\includegraphics[width=0.45\textwidth]{ELectricity_outcome_U3.pdf}} \caption{Optimal energy consumption of each user in the day-ahead market} \label{fig3} \vspace{-6mm} \end{figure} \begin{figure}[htp] \centering \subfloat[PV 1]{\includegraphics[width=0.5\textwidth]{Reserve_R1.pdf}} \subfloat[PV 2]{\includegraphics[width=0.5\textwidth]{Reserve_R2.pdf}} \caption{Uncertainty balance of each PV in the day-ahead market} \label{fig4} \vspace{-4mm} \end{figure} To achieve the carbon allowance balance, PVs must purchase allowances from the market, while users can decide, based on their consumption profiles, whether to procure allowances to consume more energy or to sell surplus allowances for profit. The outcome of the carbon market is provided in Table \ref{tab3}. It is revealed that in this case, all users can sell allowances to both PVs and the community manager by adjusting their consumption behaviors. Moreover, it follows from the KKT conditions that the price in the allowance sharing equals the selling price to the community manager. 
\begin{table}[tp] \begin{center} \caption{The outcome of the carbon market} \label{tab3} \renewcommand{\arraystretch}{1.2} \begin{tabular}{| c | c | c | c | c | c |} \hline Participants& U1 & U2 & U3 & PV1 & PV2\\ \hline \multicolumn{6}{|c|}{Allowance Sharing}\\ \hline Price \big[\$/\rm{kg}\big] & \multicolumn{5}{c|}{0.003}\\ \hline Quantity \big[\rm{kg}\big] & 204.06 & 204.06 & 204.06 & -307.05 & -305.13\\ \hline \multicolumn{6}{|c|}{Sold to the Community Manager}\\ \hline Price \big[\$/\rm{kg}\big] & \multicolumn{5}{c|}{0.003}\\ \hline Quantity \big[\rm{kg}\big] & 584.20 & 987.95 & 716.55 & / & /\\ \hline \end{tabular} \end{center} \vspace{-6mm} \end{table} Next, the influences of the carbon allowance price and the electricity price on the market outcomes are discussed. It is assumed that the carbon allowance price ranges from \$0.001/kg to \$0.006/kg and the electricity selling price is increased from \$0.04/kWh to \$0.08/kWh. The variations in social welfare, total allowances sold to the manager and total PV generations sold to the manager are displayed in Fig. 5. \begin{figure}[!t] \centering \subfloat[Social Welfare]{\includegraphics[width=0.45\textwidth]{Variance_socialwelfare.pdf}}\hfill \subfloat[Total allowances sold to the manager]{\includegraphics[width=0.45\textwidth]{Variance_allowance.pdf}}\par \subfloat[Total PV generations sold to the manager]{\includegraphics[width=0.45\textwidth]{Variance_electricity.pdf}} \caption{Social welfare, total allowances and PV generations sold to the manager as a function of electricity and carbon allowance price} \label{fig5} \vspace{-3mm} \end{figure} Fig. 5 (a) shows that the social welfare improves as the electricity and allowance prices increase, since the whole community can sell allowances and PV generations at more favorable prices. Fig. 
5 (b)\&(c) reveal that the total allowances sold to the community manager decrease as the electricity price rises and the allowance price falls, whereas the total PV generations sold to the manager soar. The decline in the allowance price undermines the economic incentives for users to cut down their daily consumption and to prefer green energy. Meanwhile, the increase in the electricity price further incentivizes PVs to sell energy to the community manager rather than to users. \subsection{The Effect of Carbon Allowance Sharing and Load Flexibility} In this part, the effects of the carbon allowance sharing mechanism and load flexibility are demonstrated. To make a comparison, three different cases are considered here. \textit{Case 1:} The joint market devised in this study is discussed, where carbon allowance sharing and load flexibility are both included. \textit{Case 2:} The proposed joint market is considered, but without load flexibility. \textit{Case 3:} The proposed joint market is adopted, while the carbon market model is modified according to \cite{carbon_blockchain}. In this case, the participants can only purchase/sell allowances to the operator at fixed prices and the allowance balancing constraint (9) is omitted. The selling price herein is still \$0.003/kg and the purchase price is set as \$0.009/kg. The discrepancies in social welfare and total allowances held inside the community among the above three cases are presented in Table \ref{tab4}. 
\begin{table}[!t] \centering \caption{Results in different cases} \label{tab4} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c|c|c|} \hline Cases & Social Welfare [\$] & \begin{tabular}[c]{@{}c@{}}Total Allowances Held Inside \\the Community [kg]\end{tabular} \\ \hline 1 & 13.9858 & 3111.3 \\ \hline 2 & 13.5043 & 3295.7 \\ \hline 3 & 10.4972 & 3064.5 \\ \hline \end{tabular} \end{table} As can be seen from Table \ref{tab4}, the whole community demands more carbon allowances when load flexibility is not considered. This increase results from the need of PVs to balance uncertainty. In Case 2, the only way for PVs to balance uncertainty is to purchase upward reserve from MTs, which inevitably leads to more carbon emissions and a higher demand for allowances. The decrease in the allowances sold to the community manager also lowers the social welfare. Hence, exploiting the flexibility of users plays a crucial role in promoting social welfare and reducing carbon emissions. In Case 3, the pronounced drop in social welfare is mainly due to the high cost of purchasing carbon allowances, which suppresses the need of PVs for allowances and thereby weakens the incentive of PVs to procure upward reserve from MTs. Compared with Case 1, it can be inferred that the carbon allowance sharing mechanism helps facilitate allowance trading among participants, generates a more affordable purchase price, and thus improves total social welfare.
\subsection{Convergence Analysis} \begin{figure} \centering \subfloat[Primal residuals]{\includegraphics[width=0.8\linewidth]{primal_residuals.pdf}} \subfloat[Dual residuals]{\includegraphics[width=0.8\linewidth]{dual_residuals.pdf}} \caption{Primal and dual residuals during different rounds} \label{fig6} \vspace{-4mm} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{objective.pdf} \caption{Social welfare of the proposed algorithm and the centralized algorithm} \label{fig7} \vspace{-4mm} \end{figure} The evolution of the primal and dual residuals of the proposed algorithm is plotted in Fig. 6. To accelerate convergence, the adaptive penalty factor method put forward in \cite[Chapter 3]{boyd2011distributed} is employed. As shown in Fig. 6, the proposed algorithm reaches the stopping criteria within 3 bound contraction iterations (denoted as `rounds') and a total of 488 ADMM iterations, verifying its convergence performance. In addition, the evolution of the social welfare is presented in Fig. 7. The optimal social welfare is \$13.9858 for the proposed algorithm and \$13.9869 for the centralized algorithm, corresponding to a negligible optimality gap of less than 0.01\%. This indicates that the proposed solution technique can effectively cope with decentralized trading in the energy community. Next, we illustrate the efficiency of the warm-start method introduced in Section 3.3. Although the convergence of ADMM is guaranteed for any initialization point, a good starting point can dramatically reduce the number of iterations and the computational burden. In this study, it is reasonable to adopt the results obtained in the previous round as an approximation of the accurate ones, since these results provide a lower bound to the original problem.
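For readers unfamiliar with the residual-based stopping test used above, a minimal consensus-ADMM sketch illustrates how primal and dual residuals are computed and checked at every iteration. The toy least-squares objective, the penalty factor, and the tolerance below are our own illustrative assumptions, not the market-clearing model of this paper:

```python
import numpy as np

def consensus_admm(a, rho=1.0, eps=1e-6, max_iter=1000):
    """Consensus ADMM for min sum_i 0.5*(x_i - a_i)^2  s.t.  x_i = z.
    Stops when both the primal and the dual residual fall below eps;
    the optimum is simply the mean of a."""
    n = len(a)
    x = np.zeros(n); u = np.zeros(n); z = 0.0
    for k in range(1, max_iter + 1):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local x-updates
        z_old = z
        z = np.mean(x + u)                       # consensus z-update
        u = u + x - z                            # scaled dual updates
        r = np.linalg.norm(x - z)                # primal residual
        s = rho * np.sqrt(n) * abs(z - z_old)    # dual residual
        if r < eps and s < eps:
            break
    return z, k

z, iters = consensus_admm(np.array([1.0, 2.0, 6.0]))
```

A warm start amounts to seeding `x`, `u`, and `z` with the previous round's solution instead of zeros, which is why the second-round iteration count collapses in Table \ref{tab5}.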
Based on the fixed-penalty-factor version of ADMM, the total numbers of iterations required to converge in two situations are provided in Table \ref{tab5}: one without a warm start and the other with a warm start. For the first round, both methods involve 1969 ADMM iterations since they start from the same initialization points. For the second round, however, a clear reduction in the number of iterations appears when the warm start is employed, demonstrating that the warm-start method substantially improves convergence. Comparison with Fig. 6 also reveals a marked increase in total iterations when the fixed-penalty-factor method is adopted, which confirms the superiority of the adaptive penalty factor method. \begin{table}[!t] \centering \caption{Iteration numbers with/without warm start} \label{tab5} \begin{tabular}{|c|c|c|c|} \hline \textbf{Iterations} & Round 1 & Round 2 & Total \\ \hline No Warm start & 1969 & 1518 & 3487 \\ \hline With Warm start & 1969 & 33 & 2002 \\ \hline \end{tabular} \end{table} \section{Conclusion} \label{5} This paper proposes a joint day-ahead market paradigm in which participants simultaneously trade energy, uncertainty, and carbon allowances. Moreover, this paper accounts for the possible excess carbon emissions caused during uncertainty balancing, which are non-negligible for the management of carbon emissions.
Simulations have revealed several merits of the devised market: 1) the introduction of carbon allowance trading guarantees that the total carbon emissions of the energy community do not exceed the prescribed limit; 2) proper electricity and allowance prices are conducive to the local consumption of renewable energy and the reduction of carbon emissions; 3) the renewable agents can fully offset uncertainty by procuring reserve from conventional generators and flexibility from users; 4) the proposed Relax-ADMM-Contraction loop is privacy-friendly and can simultaneously yield trading quantities and trading prices. It should be noted, however, that the Chebyshev approximation is generally far too conservative and may thereby reduce market efficiency. Further study should therefore concentrate on methods to mitigate this conservatism. \section*{Acknowledgements} This work was supported in part by the National Key R \& D Program of China (No. 2020YFE0200400), and in part by the National Natural Science Foundation of China (No. 52177077).
\section{Introduction} Single-layer cuprate oxides $RE_2$CuO$_4$ (where $RE$ means rare-earth) possess three structural isomers. The T'-type isomer has four oxygens coordinated around the Cu ion, the T*-type has five, and the T-type has six. Historically, all forms of $RE_2$CuO$_4$, regardless of their oxygen-coordination numbers, have been assumed to be charge-transfer (CT) Mott insulators and the parent compounds of high-transition-temperature superconductors. However, the reported observation of superconductivity in T'-type $RE_2$CuO$_4$ has raised the possibility that the ground states of the $RE_2$CuO$_4$ structural isomers are not unique~\cite{Matsumoto2009, Asai2011, Takamatsu2012}; this inaugurated the study of how different oxygen coordinations affect the physical properties of the isomers~\cite{Das2009, Weber2010a, Weber2010}. So far, superconductivity in Ce-free (undoped) T'-type $RE_2$CuO$_4$ has been observed only in limited samples, such as thin films after adequate oxygen reduction annealing\cite{Matsumoto2009}. In the case of single-crystal and high-temperature synthesized powder samples of $RE_{2-x}$Ce$_x$CuO$_4$ grown by the conventional method, superconductivity has not been realized at $x$ = 0 even for annealed samples\cite{Takagi1989,Brinkmann1995,Fujita2003,Horio2015a}. Given that the ground state of the T'-type $RE_2$CuO$_4$ is metallic, the removal of partially existing excess oxygens at the apical site is the key to achieving superconductivity; this can be done homogeneously for thin films because of their large surface-to-volume ratio. Recent photoemission and soft x-ray absorption spectroscopy studies have shown evidence of anneal-induced electron doping\cite{Horio2015a, Wei2016, Song2017, Horio2018e, Horio2018b, Horio2018f}. On the basis of angle-resolved photoemission spectroscopy experiments on thin-film Pr$_2$CuO$_4$, Horio et al.
have concluded that oxygen non-stoichiometry induced by annealing in the CuO$_2$ plane and/or the $RE_2$O$_2$ layer is the origin of such doping\cite{Horio2018e}. Importantly, this suggests that structural disorder is present even in the superconducting sample and that the superconductivity in T'-type $RE_2$CuO$_4$ is the result of electron doping. Thus, it is essential to understand the relationship between the anneal-induced variation of the electronic state and the occupancy (or lack of occupancy) of each site by oxygens. There are two possible scenarios for electron doping through annealing: (i) Electron doping into the Cu 3$d_ {x^2-y^2} $ upper Hubbard band (UHB) by induced oxygen deficiency and/or by removal of the apical oxygen. (See Fig. \ref{CT_Cu-Obond_v5}(c).) In this case, the increased electron density after annealing ($n_{\rm AN}$) is given by $n_{\rm AN} = 2\delta$, where $\delta$ is the amount of reduced oxygen in the formula unit due to annealing. (ii) Self-doping associated with the collapse of the CT gap\cite{Yokoyama2006, Adachi2013, Yamazaki2017}. (See Fig. \ref{CT_Cu-Obond_v5}(d).) It has been proposed that the removal of the apical oxygen lowers the Madelung energy of the Cu 3$d_{x^2-y^2}$ UHB\cite{Adachi2013}, resulting in the hybridization with the O 2$p$ band on the Fermi level. In this situation, both electron and hole carriers could be generated simultaneously. A recent X-ray absorption near-edge structure (XANES) study on high-temperature synthesized Pr$_{2-x}$Ce$_x$CuO$_4$ (PCCO)\cite{Asano2018} found that, although $n_{\rm AN} = 2\delta$ in the smaller $\delta$ region, it tends to be higher in the larger $\delta$ region. This result suggests that the sequential electron-doping process (i) gives way to the self-doping process (ii) with increasing $\delta$. 
To understand the mechanism of undoped superconductivity in T'-type $RE_2$CuO$_4$, it is necessary to clarify how annealing induces electron transfers not seen in the non-superconducting compound. For this purpose, a low-temperature synthesized powder sample of T'-type La$_{1.8}$Eu$_{0.2}$CuO$_4$ (LECO) is an excellent candidate system\cite{Takamatsu2012}. However, superconducting LECO can be obtained only in powder form, and therefore experimental probes of its electronic state are limited. In this study, we performed Cu {\it K}-edge X-ray absorption fine structure (XAFS) measurements on LECO and on high-temperature synthesized Nd$_2$CuO$_4$ (NCO). The latter is non-superconducting even after annealing. By investigating the XANES spectra of LECO, we found a significant evolution of the electronic state at the Cu sites due to annealing, with a large $n_{\rm AN}$ value of 0.40 electrons per Cu. Extended X-ray absorption fine structure (EXAFS) analysis furthermore indicated that the Cu-O bond in the CuO$_2$ plane softened due to annealing, which is consistent with the screening effect on phonons in the metallic state. In NCO, on the other hand, $n_{\rm AN}$ and $\delta$ were found to be 0.05 electrons per Cu and 0.035 per formula unit, respectively, approximately following the $n_{\rm AN}=2\delta$ relation; no evidence of softening of the Cu-O bond due to annealing was observed. Since the $\delta$ values of LECO and NCO are comparable, these results suggest that the variation of the electronic state due to annealing proceeds in a different manner in each compound.
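The stoichiometric bookkeeping behind the $n_{\rm AN}=2\delta$ comparison is elementary and can be made explicit. The following sketch uses only the values quoted above (one Cu per formula unit of $RE_2$CuO$_4$, so electrons per Cu and per formula unit coincide); it shows why the LECO result cannot be explained by oxygen loss alone:

```python
# Each removed O atom leaves two electrons behind, so purely stoichiometric
# doping predicts n_AN = 2*delta (electrons per Cu; one Cu per formula unit).
def predicted_n(delta):
    return 2.0 * delta

# NCO: delta = 0.035 -> predicted 0.07, close to the measured 0.05 e/Cu.
nco_pred, nco_meas = predicted_n(0.035), 0.05
# LECO: delta <= 0.05 -> predicted at most 0.10, far below the measured 0.40 e/Cu.
leco_max_pred, leco_meas = predicted_n(0.05), 0.40
excess = leco_meas - leco_max_pred  # at least 0.30 e/Cu beyond oxygen loss alone
```

The excess of at least 0.30 electrons per Cu in LECO is what motivates the self-doping scenario discussed later.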
\section{Sample preparation and XAFS experiment} \begin{table}[tb] \begin{center} \caption{Lattice constants for as-sintered (AS) and annealed (AN) compounds of La$_{1.8}$Eu$_{0.2}$CuO$_4$ (LECO) and Nd$_{2}$CuO$_4$ (NCO).} \begin{tabular}{cccc} \hline ~~~~~~~~~~ & ~~~~~~~~ & ~$a$ (\AA)~ & ~$c$ (\AA)~ \\ \hline LECO & AS & 3.9994(7) & 12.485(4) \\ LECO & AN & 4.0037(2) & 12.459(1)\\ NCO & AS & 3.9452(5) & 12.176(1) \\ NCO & AN & 3.9463(5) & 12.172(1) \\ \hline \end{tabular} \label{lattice_const} \end{center} \end{table} As-sintered (AS) polycrystalline samples of LECO were prepared by the low-temperature synthesis method, described previously \cite{Takamatsu2012}. Superconducting LECO with a transition temperature ($T_{\rm c}$) of 20 K was obtained by annealing the AS samples in vacuum at 700 $^{\circ}$C for 24 h. The oxygen loss $\delta$ in LECO due to annealing was not determined precisely but a maximum value of 0.05 per unit formula was obtained from neutron diffraction measurements~\cite{Sato}. AS polycrystalline samples of NCO were synthesized by the solid-state reaction method. Annealed (AN) NCO was prepared by annealing the AS NCO in flowing Ar gas at 750 $^{\circ}$C for 12 h. The value of $\delta$ in NCO was determined (by the weight loss of the sample through annealing) to be 0.035 per unit formula. AN NCO is an insulator that shows magnetic order below $\sim$260 K \cite{Suzuki2019}. The phase purity of the samples was checked by X-ray powder diffraction. Lattice constants evaluated by Rietveld analysis on the X-ray diffraction pattern for LECO and NCO are shown in Table \ref{lattice_const}. In both LECO and NCO, the in-plane (out-of-plane) lattice slightly elongates (shrinks) due to annealing. \begin{figure}[tb] \begin{center} \includegraphics[width=82mm]{LECO_XANES_v3.pdf} \caption{(Color online) (a) Cu {\it K}-edge XANES spectra for as-sintered (AS) and annealed (AN) La$_{1.8}$Eu$_{0.2}$CuO$_4$ (LECO). 
The XANES spectrum for as-sintered Nd$_2$CuO$_4$ (NCO) is plotted as a reference. The inset is the enlarged spectra for the energy between 8975~eV and 8986~eV. (b) The difference spectra for LECO and NCO, which are obtained by subtracting the spectra of the AS compounds from those of AN ones. } \label{LECO_XANES_v3} \end{center} \end{figure} Cu $K$-edge XAFS measurements were performed with the transmission mode at the BL01B1 and BL14B1 beamlines at the SPring-8 synchrotron radiation facility. Using an Si(111) double-crystal monochromator, we measured XAFS spectra at 300~K on small pellets (7 mm in diameter and 0.5 mm in thickness), which were mixed with boron nitride for self-support. XAFS spectra consist of XANES and EXAFS spectra, reflecting the unoccupied electronic state and the local structure around the Cu site, respectively. The temperature dependence of the EXAFS spectra was measured from 10 K to 300 K to analyze the atomic displacement factor $C{_2}(=\sigma_{\rm s}+\sigma_{\rm d})$ of the Cu-O$_{\rm p}$ bond. Here, $\sigma_{\rm s}$ ($\sigma_{\rm d}$) is the static (dynamical) displacement component and is attributed to the random displacement of atomic positions (thermal vibration of atoms). O$_{\rm p}$ represents oxygen in the CuO$_2$ plane. \section{Results} \begin{figure}[tb] \begin{center} \includegraphics[width=\linewidth]{LECO_ChiR_v4.pdf} \caption{(Color online) The absolute value of the Fourier transform of the EXAFS oscillations for (a) La$_{1.8}$Eu$_{0.2}$CuO$_4$ and (b) Nd$_{2}$CuO$_4$ measured at 20, 110, 200, and 300 K. The solid bars denote the corresponding positions for Cu--O$_{\rm p}$, Cu--$RE$ ($RE$ = La, Eu, Nd), Cu-O$_{\rm RE}$, and Cu-Cu paths. O$_{\rm p}$ and O$_{\rm RE}$ represent oxygen sites in the CuO$_2$ plane and in the {\it RE}O layer, respectively. 
The results for AS samples are shifted along the vertical direction by 0.4 ${\rm \AA}^{-2}$.} \label{LECO_ChiR} \end{center} \end{figure} Figure \ref{LECO_XANES_v3}(a) shows XANES spectra for AS and AN LECO. The spectrum for AS NCO is also plotted in Fig. \ref{LECO_XANES_v3}(b) as a reference. The AS samples of LECO and NCO exhibit quite similar spectra, indicating the same electronic state in both AS compounds. This similarity is consistent with the published results for AS Pr$_2$CuO$_4$ \cite{Asano2018}. The small difference in the structure around 8994 eV and 9000 eV, energies corresponding to 1$s$-4$p\sigma$ transitions, can be understood by the variation of the size of rare-earth atoms\cite{Liang1995}. Therefore, the ground state of AS T'-type $RE_2$CuO$_4$ does not depend on the synthesis method. The identification of the same ground state in AS LECO and NCO is an important starting point for discussing the variation of the electronic state in the two compounds due to annealing. As seen in Fig. \ref{LECO_XANES_v3}(a), annealing induces a drastic change in the XANES spectrum of LECO. The intensity around energies corresponding to the 1{\it s}--4{\it p}$\pi$ transitions (8983 eV and 8991 eV) increases, while that around the 1{\it s}--4{\it p}$\sigma$ transitions (8994 eV and 9000 eV) decreases. A similar spectral change has been observed for PCCO with Ce-substitution; it suggests electron doping into the sample\citep{Oyanagi1990, Kosugi1990, Liang1995, Asano2018}. \begin{figure}[tb] \begin{center} \includegraphics[width=\linewidth]{DebyeWaller_v6.pdf} \caption{(Color online) Temperature dependence of the atomic displacement factor for the Cu-O$_{\rm p}$ bond $C_2$ in (a) as-sintered and annealed La$_{1.8}$Eu$_{0.2}$CuO$_4$ and (b) Nd$_2$CuO$_4$. Solid lines are the fitting results using Eq. 
(\ref{eq:debye_waller}).} \label{DebyeWaller_v6} \end{center} \end{figure} To see the spectral change due to annealing more clearly, we subtract the XANES spectrum of AS LECO from that of AN LECO. The difference spectrum of LECO is shown in Fig. \ref{LECO_XANES_v3}(b) together with the analogous result for NCO. The amplitude of the difference spectrum is much larger in LECO, demonstrating that annealing has a larger effect on its electronic state. The peak at 8981 eV in the difference spectrum reflects the 1$s$-4$p\pi$ dipole transition of Cu$^+$; the existence of a positive intensity therefore indicates the formation of Cu$^+$ (3$d^{10}$) sites in the sample due to annealing. By analyzing this intensity, we can evaluate $n_{\rm AN}$, taking the previous XAFS results of AS PCCO into account\cite{Asano2018a, Asano2018}. The value of $n_{\rm AN}$ obtained for LECO was 0.40 electrons per Cu, which is much larger than the electron density of 0.21 electrons per Cu in superconducting PCCO with $x$ = 0.16\cite{Asano2018}. Thus, a large number of electron carriers exist in superconducting LECO, while the number of electrons doped into non-superconducting NCO by annealing is rather small ($n_{\rm AN}$ $\sim$ 0.05 electrons per Cu). Because the $\delta$ values of LECO and NCO are comparable, the variation of the electronic state caused by annealing differs between the two compounds. We will discuss the doping process due to annealing in LECO and NCO later. Next, we analyzed the EXAFS spectra. Since the amount of removed oxygen is tiny, one might think that the change in the XANES spectra reflects only local information around the removed oxygens. We therefore examined, from a structural point of view, whether the variation of the electronic state due to annealing occurs throughout the LECO sample.
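The Fourier-transform analysis of the EXAFS oscillations described next can be illustrated on a synthetic single-shell signal. The model form and every parameter value below are illustrative assumptions, not fits to the measured spectra:

```python
import numpy as np

# Toy single-shell EXAFS signal: chi(k) = sin(2 k R) exp(-2 C2 k^2) / k,
# Fourier-transformed over the same k-window as in the paper (3-10 A^-1).
R, C2 = 1.9, 0.004                  # bond length (A) and displacement factor (A^2)
k = np.linspace(3.0, 10.0, 701)     # photoelectron wavenumber grid (A^-1)
chi = np.sin(2 * k * R) * np.exp(-2 * C2 * k**2) / k

r = np.linspace(0.5, 4.0, 351)      # conjugate distance grid (A)
dk = k[1] - k[0]
# |chi(r)| = |integral of k*chi(k)*exp(2ikr) dk|, via a simple Riemann sum:
ft = np.abs((k * chi * np.exp(2j * np.outer(r, k))).sum(axis=1) * dk)
r_peak = r[np.argmax(ft)]           # main peak sits near R (no phase correction)
```

The main peak of $|\chi(r)|$ tracks the shell distance (up to the phase shift neglected here), which is why the peak near 1.5~\AA\ in the measured transforms can be assigned to the Cu-O$_{\rm p}$ bond.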
Figure \ref{LECO_ChiR} shows $|\chi(r)|$, the absolute value of the Fourier transform of the EXAFS oscillations $k|\chi(k)|$ in the region $3 \leq k \leq 10~{\rm \AA}^{-1}$ for LECO and NCO. (The results for AS samples are shifted along the vertical direction by 0.4 ${\rm \AA}^{-2}$ for visual clarity.) In the figure, the Fourier-transform peaks/shoulders corresponding to Cu-O$_{\rm p}$, Cu-$RE$ ($RE$ = La, Eu, Nd), Cu-O$_{\rm RE}$, and Cu-Cu bonds can be seen. For both LECO and NCO, the overall shape of $|\chi(r)|$ is the same for AS and AN samples. Thus, the structural change due to annealing is negligible or small. However, the amplitude at each temperature is smaller for LECO than for NCO, meaning that there is considerably more static disorder in superconducting LECO than in non-superconducting NCO. The amplitude of each peak/shoulder decreases upon warming due to the thermal vibration of the atoms. The intensity of the peak at $\sim$1.5~\AA\ was analyzed to obtain the atomic displacement factor $C_2$ for the Cu-O$_{\rm p}$ bond. Figures \ref{DebyeWaller_v6}(a) and \ref{DebyeWaller_v6}(b) show the temperature dependence of $C_2$ for LECO and NCO, respectively. (To make the different annealing effects on LECO and NCO more visible, $C_2$ for the AS samples has been shifted along the vertical direction.) Upon warming, the $C_2$ of AN LECO increases more rapidly than that of AS LECO. This rapid increase indicates the softening of the Cu-O$_{\rm p}$ bond due to annealing and is consistent with the present XANES result: the large number of mobile carriers induced by annealing can soften phonons via the screening effect. The softening of longitudinal optical phonons (corresponding to the Cu-O bond-stretching mode) by electron doping has indeed been observed around the $\Gamma$ point in T'-type cuprates\cite{DAstuto2002, Braden2005}.
Therefore, we conclude that the variation of the electronic state due to annealing is a bulk phenomenon taking place throughout the entire LECO sample. On the other hand, no clear annealing effect on the thermal evolution of $C_2$ was observed in NCO, for which $n_{\rm AN}$ is rather small. \begin{table}[tb] \begin{center} \caption{Energy of the Einstein oscillator $\hbar\omega_{\rm E}$ for as-sintered (AS) and annealed (AN) compounds of La$_{1.8}$Eu$_{0.2}$CuO$_{4}$ (LECO) and Nd$_{2}$CuO$_4$ (NCO), the amplitude of softening due to annealing $\Delta\hbar\omega_{\rm E}$, and the static atomic displacement factor $\sigma_{\rm s}$.} \begin{tabular}{ccccc} \hline ~~~~~~~~~~ & ~~~~~~~~ & ~$\hbar\omega_{\rm E}$ (meV)~ & ~$\Delta\hbar\omega_{\rm E}$ (meV)~ & $\sigma_{\rm s}$ ($10^{-3}~{\rm \AA}^{2}$) \\ \hline LECO & AS & 44.3(4) & - & 2.0(1) \\ LECO & AN & 40.6(4) & 3.7(4) & 4.1(1)\\ NCO & AS & 44.8(5) & - & 0(1)\\ NCO & AN & 44.2(4) & 0.6(5) & 0.5(1)\\ \hline \end{tabular} \label{Einstein_energy} \end{center} \end{table} \begin{figure}[tb] \begin{center} \includegraphics[width=\linewidth]{CT_Cu-Obond_v5.pdf} \caption{(Color online) (a) Energy of the charge transfer gap $\Delta_{\rm CT}$ as a function of the Cu-O$_{\rm p}$ bond-length $d_{\rm Cu-O_{\rm p}}$ for T-type La$_2$CuO$_4$ (T-La), T*-type cuprates (T*), T'-type Pr$_2$CuO$_4$ (T'-Pr), T'-type Nd$_2$CuO$_4$ (T'-Nd), T'-type Sm$_2$CuO$_4$ (T'-Sm), T'-type Eu$_2$CuO$_4$ (T'-Eu), T'-type Gd$_2$CuO$_4$ (T'-Gd), and T'-type La$_{1.8}$Eu$_{0.2}$CuO$_4$ (T'-La$_{1.8}$Eu$_{0.2}$)\cite{Arima1991, Cooper1990, TokuraPRB1990, Tajima1990}. All of the T'-type $RE_2$CuO$_4$ are as-sintered (AS) compounds. $\Delta_{\rm CT}$ calculated by the ionic and cluster model is also plotted \cite{Ohta1991}. The gray solid line is a guide to the eye. Schematic picture of the density of states for (b) AS compounds, (c) annealed (AN) Nd$_{2}$CuO$_4$ (NCO) and (d) AN La$_{1.8}$Eu$_{0.2}$CuO$_4$ (LECO). See the text.
} \label{CT_Cu-Obond_v5} \end{center} \end{figure} Based on the Einstein model, the temperature dependence of $C_{2}$ was analyzed using the equation \begin{eqnarray} C_{2} &=& \sigma_{\rm s} + (\hbar/2\mu\omega_E)\,{\rm coth}(\hbar{\omega_E}/2k_{\rm B}T), \label{eq:debye_waller} \end{eqnarray} where $\omega_E$ is the Einstein frequency of the oscillator and $\mu$ is the reduced mass of the copper and oxygen atoms. Here, $\hbar$ and $k_{\rm B}$ represent the reduced Planck constant and the Boltzmann constant, respectively. For all samples, we assumed a temperature-independent $\sigma_{\rm s}$, since no evidence of structural transitions was observed in the measured temperature range. The values of $\hbar\omega_E$ evaluated for each sample are summarized in Table \ref{Einstein_energy}. In the AS compounds, the $\hbar\omega_E$ values for LECO and NCO are comparable with each other. However, $\omega_E$ in LECO decreases by about 8\% due to annealing, while the annealing effect on $\omega_E$ in NCO is negligible. The quantitative analysis further revealed a larger $\sigma_{\rm s}$ in AN LECO than in AS LECO, indicating an enhancement of static disorder due to annealing. \section{Discussion} The present study has demonstrated for the first time a drastic variation of the electronic state in LECO associated with the appearance of superconductivity due to annealing. Here, we discuss the possible origin of this variation, as well as the reason that annealing has different effects on the electronic states of LECO and NCO. The focus of our discussion will be on the size of the CT gap. Mott insulators such as T-type La$_2$CuO$_4$ are characterized by the CT gap (with gap energy $\Delta_{\rm CT}$) between the Cu $3d_{x^{2}-y^{2}}$ UHB and the O 2$p$ band\cite{Zaanen1985} (Fig. \ref{CT_Cu-Obond_v5}(b)).
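A least-squares fit of Eq. (\ref{eq:debye_waller}) can be sketched as follows. The synthetic data, the noise level, and the numerical constants (here for a Cu-O pair, with $E=\hbar\omega_E$ in meV and $C_2$ in ${\rm \AA}^2$) are illustrative assumptions, not the fits reported in Table \ref{Einstein_energy}:

```python
import numpy as np
from scipy.optimize import curve_fit

HBAR2_OVER_2MU = 0.1636   # hbar^2/(2*mu) for a Cu-O pair, in A^2 * meV
KB = 0.08617              # Boltzmann constant in meV/K

def einstein_c2(T, sigma_s, E):
    """C2(T) = sigma_s + (hbar/2 mu omega_E) coth(hbar omega_E / 2 kB T),
    with E = hbar*omega_E in meV, T in K, and C2 in A^2."""
    return sigma_s + (HBAR2_OVER_2MU / E) / np.tanh(E / (2.0 * KB * T))

# Synthetic data generated with illustrative parameters and small noise:
T = np.array([10., 50., 100., 150., 200., 250., 300.])
rng = np.random.default_rng(0)
y = einstein_c2(T, 0.002, 44.0) + rng.normal(0.0, 5e-5, T.size)

popt, _ = curve_fit(einstein_c2, T, y, p0=(0.001, 40.0))
sigma_s_fit, E_fit = popt  # recovers sigma_s ~ 0.002 A^2 and E ~ 44 meV
```

The low-temperature plateau pins down $\sigma_{\rm s}$ plus the zero-point term, while the high-temperature slope ($\propto T/E^2$) fixes the Einstein energy, which is why a temperature scan from 10 to 300 K suffices for the fit.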
Since a clear gap structure has been observed in the optical conductivity spectra of AS NCO\cite{Arima1991}, the similarity of the XANES spectra of AS LECO and AS NCO demonstrates that AS LECO is a Mott insulator. However, the anneal-induced electronic states of the two compounds are quite different. NCO's $n_{\rm AN}$ (0.05 electrons per Cu) and $\delta$ (0.035) approximately follow the $n_{\rm AN}=2\delta$ relation. This means that the removal of one oxygen atom predominantly generates two electrons in the system, which is consistent with electron doping into the UHB without the disappearance of the CT gap, as shown in Fig. \ref{CT_Cu-Obond_v5}(c). By contrast, LECO's $n_{\rm AN}$ of 0.40 electrons per Cu is far beyond the value expected from the $n_{\rm AN}=2\delta$ relation, given that $\delta \leq 0.05$. Thus, a large number of electrons exist in the superconducting sample. As was discussed in connection with the previous XANES results for PCCO, the emergence of so many electrons suggests the simultaneous generation of holes to maintain charge neutrality. This situation is difficult to explain within a picture of electron doping into the UHB with a finite $\Delta_{\rm CT}$. In the case of LECO, the removal of oxygen would cause the collapse of the CT gap, and as a result both electrons and holes emerge at the Fermi level in the hybridized UHB and O 2$p$ bands. (See Fig. \ref{CT_Cu-Obond_v5}(d).) The collapse of the CT gap may occur through a lowering of the energy of the UHB and/or a broadening of its bandwidth until it touches the O 2$p$ band. We speculate that such a distinct evolution is attributable to the different sizes of $\Delta_{\rm CT}$ in the AS compounds. Figure \ref{CT_Cu-Obond_v5}(a) shows the $\Delta_{\rm CT}$ reported by optical studies \cite{Arima1991, Cooper1990, TokuraPRB1990, Tajima1990} as a function of the Cu-O bond-length $d_{\rm Cu-O}$ for T, T*, and T'-type $RE_2$CuO$_4$.
$\Delta_{\rm CT}$, which is correlated with the Madelung energy at the Cu sites, is smaller in the compounds with lower coordination. Furthermore, in the T'-type $RE_2$CuO$_4$ with fourfold coordination, $\Delta_{\rm CT}$ decreases with increasing $d_{\rm Cu-O}$. Since $d_{\rm Cu-O}$ ($= a/2$) for AS LECO is $2.00~{\rm \AA}$, LECO is expected to have the smallest $\Delta_{\rm CT}$ among the $RE_2$CuO$_4$ compounds. A CT gap with a small $\Delta_{\rm CT}$ easily collapses when the in-gap state is filled by a small amount of electron doping and/or when the UHB is lowered by the removal of apical oxygen\cite{Adachi2013}. Therefore, even though the ground states of LECO and NCO are the same, reduction annealing induces a marked variation of the electronic state in LECO. Although the true ground state of T'-type $RE_2$CuO$_4$ cannot be elucidated because of the lack of information on the oxygen stoichiometry of the samples, the present study provides an important clue for understanding undoped superconductivity in T'-type cuprates. According to the results shown in Fig. \ref{CT_Cu-Obond_v5}(a), the CT gap potentially closes even in an AS compound with a sufficiently large value of $d_{\rm Cu-O}$, in which case a metallic state would appear. Combined with the experimental fact that superconductivity is induced at a smaller Ce concentration in T'-type $RE_{2-x}$Ce$_x$CuO$_4$ with a larger in-plane lattice constant\cite{Naito2002, Fujita2003, Krockenberger2008, Krockenberger2014}, this implies that undoped superconductivity could be realized in T'-type $RE_2$CuO$_4$. Thus, the ground state of T'-type $RE_{2-x}$Ce$_x$CuO$_4$ can be controlled by the Cu-O bond-length. This opens the way for a unified understanding of the physical properties of T'-type cuprates from the viewpoint of the size of $\Delta_{\rm CT}$. Finally, we briefly discuss static disorder in T'-type $RE_2$CuO$_4$.
In the $RE_2$CuO$_4$ cuprate oxides, the T'-type structure tends to transform into the T-type one as the average radius $R_{\rm AV}$ of the rare-earth ion increases \cite{Xiao1989, Chaudhri1997, Ikeda2002, Zhang2002, Imai2007}. This alleviates the size mismatch between the Cu-O-Cu and $RE$-$RE$ bonds. Our EXAFS measurements revealed a larger value of $\sigma_{\rm s}$ in LECO ($R_{\rm AV}$ = 1.164~\AA) than in NCO ($R_{\rm AV}$ = 1.123~\AA). This is probably due to LECO's greater structural instability, since LECO lies near the phase boundary between the T' and T structures with respect to $R_{\rm AV}$. In the vicinity of the phase boundary, the elongation of the in-plane lattice constant due to annealing easily enhances the structural instability and increases $\sigma_{\rm s}$, as seen in Table \ref{Einstein_energy}. Muon spin relaxation measurements suggest the coexistence of short-range magnetic order with superconductivity in the ground state of AN LECO\cite{Adachi2016}. From the structural point of view, the existence of magnetic order in superconducting LECO is attributable to a partial localization of carriers due to the large structural disorder. That is, the structural disorder, accompanied by disorder of the electrostatic potential, causes the localization of carriers, leading to the appearance of static magnetism that competes with the superconducting state. Thus, $T_{\rm c}$ in LECO could be increased by suppressing the magnetic order through the relaxation of structural disorder. In summary, the effects of reduction annealing on the electronic states at the Cu sites in LECO and NCO have been investigated by Cu $K$-edge XAFS measurements. The analysis of the XANES spectra revealed a significant evolution of the electronic state in LECO due to annealing, with a large induced electron density $n_{\rm AN}$ of 0.40 electrons per Cu. For NCO, the variation is much smaller, with $n_{\rm AN}$ of only 0.05 electrons per Cu.
The EXAFS analysis showed evidence of phonon softening of the Cu-O bond in the CuO$_2$ plane due to annealing for LECO but not for NCO. Therefore, the electronic states of the two compounds evolve through distinct processes, although $\delta$, the amount of oxygen loss due to annealing, is comparable for the two systems. The distinct evolution of the electronic states of LECO and NCO can be attributed to the difference in the size of the charge-transfer gap in the AS compounds; the electronic state of LECO, which has a smaller $\Delta_{\rm CT}$, is more sensitive to oxygen removal, which can introduce electron doping and/or modify the band structure. \section*{Acknowledgements} We thank Y. Kimura for support in the analysis of the XAFS data. The synchrotron radiation experiments were performed at the BL01B1 and BL14B1 beamlines of SPring-8 with the approval of the Japan Synchrotron Radiation Research Institute (JASRI) (Proposal No. 2016A1603 and No. 2017B3611). M. F. is supported by a Grant-in-Aid for Scientific Research (A) (16H02125), K. I. is supported by a Grant-in-Aid for Scientific Research (B) (16H04004), and Y. K. is supported by a Grant-in-Aid for Scientific Research (B) (17H02915). \bibliographystyle{aps}
\section{Introduction} The Early Anthropogenic Hypothesis considers that the natural evolution of climate consisted in a decline in greenhouse gas concentrations throughout the Holocene, leading today to conditions favourable to the accumulation of ice in the Northern Hemisphere (\citet{Ruddiman11aa} and references therein). This hypothesis rests on an important premise: that it is possible to predict the slow evolution of climate several millennia ahead. Indeed, suppose that climatologists had lived 8,000 years ago. What the Early Anthropogenic Hypothesis says is that a forecast for the 8,000 years to come made by these early climatologists would have been a decline in greenhouse gas concentrations and, ultimately, glacial inception. Throughout this paper we will consider that this premise should not be taken for granted. The general problem of predicting the evolution of the climate system several millennia ahead may be tackled from different perspectives. One method of prediction consists in seeking episodes in the past during which the climatic conditions and forcings were analogous to those of the Early Holocene, and observing the subsequent climate evolution. The other method consists in using models, generally numerical models, which account for known physical constraints on the dynamics of the climate system. The two methods are briefly reviewed and discussed in the next section of this article. Most investigators generally appreciate that the two methods need, in practice, to be combined. The question, addressed in section 3, is how observations and models may be combined in a way that is transparent, optimal, and avoids the dangers of circular reasoning. The methodology proposed here consists in (a) formulating stochastic dynamical systems capturing palaeoclimate dynamics.
These may be viewed as generators of palaeoclimatic histories, designed to encapsulate information inferred from experiments with large numerical models and/or other hypotheses about climate system dynamics; and (b) calibrating the parameters and initial conditions of these dynamical systems on palaeoclimate observations. We explain why the Bayesian statistical framework is suited to this programme. \section{Traditional approaches to predict the slow evolution of climate} \subsection{The analogues method and its limitations} Among the solutions envisioned by Edward Lorenz \citeyearpar{lorenz69aa} to forecast weather was an empirical approach also known as the `analogues method'. Considering that similar causes should lead to similar effects\textit{ at least within a finite time horizon}, weather predictions may be attempted by seeking \textit{analogous} synoptic situations to today's in the archives of meteorological services. Likewise, predicting the natural climate evolution during the Holocene may be addressed by identifying in palaeoclimate archives situations for which the state of climate and the forcing conditions were similar to those experienced during the Holocene. At these time scales climate is influenced by the slow changes in incoming solar radiation, and the latter is a function of the Earth's orbital elements (characterised by the eccentricity $e$ and the true solar longitude of the perihelion $\varpi$) and the tilt---or obliquity---of the equator on the ecliptic, denoted $\varepsilon$. These three elements all have specific and well-known effects on the seasonal and spatial distributions of incoming solar radiation (insolation). They also have their own periodicities: one never encounters exactly the same orbital configuration twice (see, on this topic, the early works of \citet{Berger77ber2} and \cite{berger79cim}).
Palaeoclimatologists therefore tend to consider specific measures of insolation, supposed to summarise in one quantity the effect of insolation on climate, and look at past times where that specific measure of insolation was the same as during the Holocene. However, the choice of an `analogue' depends on how these measures are constructed \citep{Loutre08aa}. To illustrate this point consider three measures of insolation classically referred to in the tradition of Milankovitch's theory (\figref{fig:insol}): (a) daily mean insolation at the summer solstice (21st June), calculated according to \cite{berger78}; (b) insolation integrated over the caloric summer (the 180 days of highest insolation, assuming a 360-day year) \citep{Berger1978139}---this measure is favoured by \citet{ruddiman07rge}---and (c) insolation integrated over all days for which insolation exceeds 300 W/m$^2$ (after a proposal by \cite{Huybers08aa}). All are calculated here for a latitude of 65$^\circ$ N. It is easily shown that these measures of insolation are very well approximated by a linear combination of climatic precession ($e\sin(\varpi)$) and obliquity ($\varepsilon$); they are here ordered by increasing influence of obliquity vs climatic precession. \begin{figure}[ht] \begin{center} \includegraphics{Fig_Insol_Analogs/insol_analogs-o.pdf} \end{center} \caption{ Three measures of insolation covering the period $-800$\,ka to $+100$\,ka (departure to the present) calculated using the \cite{berger78} solution, aligned with a stack of 57 benthic foraminifera $\delta^{18}O$ records \citep{lisiecki05lr04} and a reconstruction of atmospheric CO$_2$ concentration \citep{Luethi08aa,petit99}. Circles mark insolations equal to the present. Times corresponding to CO$_2$ concentrations above 250~ppm and benthic $\delta^{18}O$ below 3.8 per mil are highlighted to subjectively mark interglacial conditions.
\label{fig:insol}} \end{figure} Circles on \figref{fig:insol} indicate times at which the different insolation measures equal their present-day values. Call these `insolation matches'. Those occurring during a climate environment analogous to the present Holocene (arbitrarily defined as a CO$_2$ concentration larger than 250~ppm and benthic $\delta^{18}O$ less than 3.8~per mil) are highlighted by vertical bars. The timing of insolation matches differs by about 1~ka (thousand years) depending on which insolation is considered: for example the time of the insolation match at marine isotopic stage (MIS) 11 ranges between $-402.9$~ka (daily mean at solstice) and $-404.5$~ka (Huybers' thresholded sum). In general, though, insolation levels similar to the present day slightly precede or coincide with times during which ice volume markedly increased. Bearing in mind the caveats introduced by uncertainties on the chronology of the palaeoclimate records, this observation suggests that today's insolation levels are close to those that prevailed at the end of previous interglacial periods. However, observe also that present-day insolations are at a minimum of their secular course (daily mean solstice) or close to it (season-integrated insolations). This remark invites us to be prudent about predicting a natural end to the Holocene around the present, again assuming no anthropogenic perturbation. More crucial differences among insolation measures lie in their dynamics. Climate does not respond instantaneously to the astronomical forcing. Ice build-up and ocean alkalinity adjustments involve time scales of the order of 10,000 years at least. Predicting the evolution of climate therefore requires one to consider the history of astronomical forcing over a long enough time-window, corresponding to the time needed by the different climate system components to adjust dynamically to that specific astronomical history.
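Locating such insolation matches is a simple root-finding exercise: one detects sign changes of the series minus its present-day value and refines the crossing times by linear interpolation. The sketch below applies this to a synthetic stand-in series; the periods and amplitudes are illustrative assumptions, not the astronomical solution actually used in \figref{fig:insol}.

```python
import numpy as np

# Synthetic stand-in for an insolation curve: a mix of precession-like
# (23 ka) and obliquity-like (41 ka) cycles. Amplitudes are arbitrary.
t = np.linspace(-800.0, 0.0, 8001)          # time in ka, present = 0
series = 0.6 * np.sin(2 * np.pi * t / 23) + 0.5 * np.sin(2 * np.pi * t / 41)

present = series[-1]                        # present-day value (t = 0)
d = series - present
# a strict sign change between consecutive samples indicates a crossing
idx = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0]
# linear interpolation refines the crossing times ("insolation matches")
matches = t[idx] - d[idx] * (t[idx + 1] - t[idx]) / (d[idx + 1] - d[idx])
```

Each entry of `matches` is a past time at which the series crossed its present-day level; comparing the lists obtained for different insolation measures reproduces the spread in match timings discussed above.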
\cite{ruddiman07rge} and \cite{Loutre08aa} noted that the astronomical history preceding $-325$~ka (marine isotopic stage 9) admittedly looks like a reasonable analogue candidate for the last 20~ka by reference to season-integrated insolations (consider the `shapes' of the caloric summer insolation before $-325$~ka); though, it is not straightforward to demonstrate that this really is the `best possible analogue' quantitatively, in a way that is convincingly robust and objective (Loutre, pers. comm. and further tests made for this article). There are further caveats to a straightforward use of insolation analogues to predict climate change. First, which insolation (daily mean solstice or seasonal integration) is the best predictor of climate change may depend on the level of glaciation \citep{Rohling201097}. Second, the analogues method supposes as a premise that one single possible climate history may correspond to a given history of astronomical forcing. In the formal language of dynamical system theory this property is called \textit{general synchronisation} of climate on the astronomical forcing (see \cite{PhysRevE.51.980} for a formal definition). It is shown in the next section that this might not be the case. Third, climate cycles evidently changed in shape and variance over the last million years due to slow changes in the environment. For example, the mid-Pleistocene revolution, when glacial cycles shifted from a period of 40~ka to a period of the order of 100~ka, is explained by \citet{saltzman90sm} as a bifurcation related to a slow, tectonically-driven decline in background levels of CO$_2$. \citet{Clark06aa} review a number of alternative explanations. The mid-Br\"unhes transition about 400~ka ago, after which interglacial periods became characterised by higher greenhouse gas concentrations than before, is still unexplained (see the review of \cite{Tzedakis09aa}).
Consequently, one cannot be assured that the levels of insolation leading to glacial inception, determined on the basis of an event 400~ka ago, still hold today. \subsection{Using climate simulators to predict the glacial inception} The other approach to the problem of predicting an ice age proposed by Edward Lorenz relies on a mathematical model of the Earth's climate. In a visionary paper entitled ``climate modelling as a mathematical problem'' \citep{Lorenz70aa}, he expresses the hope that \textit{``in the 21st century''}, large mathematical models accounting for \textit{``every feature of the atmosphere and its environment''} as well as ocean circulation and vegetation could be run to predict ice ages. The `mathematical' models envisioned by Lorenz are known today as `general circulation models' or `global climate models'. They reproduce many features of the climate system visible at the global scale (the ocean circulation, modes of variability such as El-Ni\~no) based on equations that are expressed at the level of a grid cell. We prefer to call them `simulators' in order to avoid confusion with other meanings of the word `model'. Accounting for physical constraints is naturally considered to strengthen the reliability of a prediction, especially if this prediction involves a situation that is unprecedented. Lorenz's proposal is therefore a generally well accepted way of dealing with the problems of climate predictability at all time scales. However, it has a number of drawbacks that need to be emphasised in the present context. An often-heard objection to Lorenz's proposal is computing time. State-of-the-art simulators resolving the dynamics of the ocean and the atmosphere are designed to be run on time scales of the order of a century. A single experiment of a thousand years with such a simulator may take months of computing time, when it is feasible at all. Experiments over several millennia therefore require changes in the simulator design.
Modellers have generally chosen to degrade, sometimes very drastically, the resolution of the atmosphere and ocean and to simplify or suppress many of the parametrisations in order to gain computing time. Cloud cover, for example, may simply be considered as constant. In the meantime, they attempted to account for the dynamics of other components of the climate system, such as the ice sheets and the carbon cycle, which react on longer time scales. The simulators satisfying these criteria are known as Earth Models of Intermediate Complexity, or simply EMICS \citep{claussen02emic}. EMIC experiments designed to test the early anthropogenic hypothesis were presented soon after the original publication of \citet{ruddiman03}. An EMIC called CLIMBER-SICOPOLIS \citep{petoukhov00,Calov05aa} was used to estimate the effects of declining CO$_2$ concentrations during the Holocene from about 265~ppm to about 235~ppm (a similar decline was imposed on methane concentrations). With that scenario, CLIMBER does not accumulate ice at any time during the Holocene, while it correctly reproduces the increase in ice area during the Eemian/Weichselian transition (115~ka ago) \citep{claussen05}. Similar experiments were carried out with the LLN-2D simulator of \cite{gallee92}. Consistent with the experiments with CLIMBER, reasonable no-anthropogenic scenarios with CO$_2$ declining to about 240~ppm cause almost no accumulation before several tens of thousands of years in the future. Only very drastic declines in CO$_2$ concentrations, such as to reach a CO$_2$ concentration of 225~ppm at present and 206~ppm 4~ka in the future, yield some ice accumulation (about 27~m of equivalent eustatic sea-level in 22~ka).
Results of experiments with EMICS may however be questioned because they misrepresent (if at all) features of the climate system that may be important to understand and predict its slow evolution, such as monsoons, western boundary currents, modes of variability like ENSO and the details of the ocean circulation in the Southern Ocean. For these reasons a number of experiments were published, based on simulators with more state-of-the-art representations of ocean and atmosphere dynamics. These simulators are not designed to compute the accumulation of ice but they are useful to indicate spots where snow is likely to accumulate from one year to the next, and/or to analyse the conditions propitious for accumulation. They also provide information about the sensitivity of the ocean surface and circulation to changes in the astronomical forcing and greenhouse gas concentrations. In the present special issue, three articles discuss experiments with the Community Climate Model in different configurations: \citet{Vavrus11aa} examine the influence of topography representation on the snow accumulation process and find that accounting for higher-resolution topography increases the sensitivity of snow accumulation to the external forcing; \cite{Kutzbach11aa} examine the influence of decreasing greenhouse gas concentrations on sea-surface temperatures and attempt to quantify the effects on the carbon cycle; they suggest that the feedback of the surface cooling on the carbon cycle is substantial enough to accommodate Ruddiman's suggestion of a natural amplification of the anthropogenic perturbation \citep{ruddiman07rge}. \cite{Vettoretti11aa} use the ocean-atmosphere version of the Community Climate Model and compare the effects of decreasing CO$_2$ concentrations with those of the orbital forcing on snow accumulation and the abyssal circulation in the Atlantic.
They come to a somewhat challenging conclusion for the Early Anthropogenic Hypothesis, namely that astronomical forcing is a more important driver of ice accumulation than CO$_2$. Earlier, \citet{Vettoretti04aa} presented a series of eight experiments with the Canadian Climate Centre Atmospheric Model GCM2 to identify the conditions propitious to year-to-year snow accumulation on ice nucleation sites (see also references to closely related work in that article). They found that obliquity greatly influences snow accumulation in their model. Here, I would like to point to perhaps a more fundamental challenge to Lorenz's vision. Even the most sophisticated simulators are an incomplete (truncated) representation of reality. Some of the truncated processes are represented with semi-empirical formulae, called parameterisations (see \cite{Palmer05aa} for a very accessible overview). A fraction of this uncertainty---called the parametric uncertainty---can be quantified by considering the sensitivity of predictions to the parameters involved in the representation of the truncated processes. The pioneering works of \cite{murphy04qump} on this subject led to intense research activity. However, it will never be possible to guarantee that all the physical and biogeochemical processes are correctly and accurately represented in a numerical model of the climate system. There is structural uncertainty. In particular, remember that CO$_2$ is active not only as a greenhouse gas, but also as a fertiliser and an ocean acidifier; changes in CO$_2$ concentrations at glacial-interglacial time scales have innumerable consequences on life, and hence on climate. Saltzman already noted \citep{saltzman02book} (sect.
5.1) that expecting a climate simulator to reproduce ice ages without any prior information about the timing of glacial-interglacial cycles is illusory, so huge would be the accuracy needed on the snow accumulation imbalance and carbon dioxide fluxes to capture the rate and timing of glacial cycles. Given that glacial cycles cannot be simulated \textit{ab initio}, past observations have to be somehow incorporated into the simulator. Very often, the procedure is quite implicit: the uncertain parameters of the climate simulator are varied until the simulated climate (or climate changes) is reasonably realistic. This is called tuning. One must be clear about the fact that published predictions of the next glacial inception are all obtained with climate simulators `tuned' on past climate observations. For example, the LLN-2D simulator, used in \citet{Berger2002An-exceptionall}, \cite{loutre03gpc} and \cite{loutre04epsl}, was tuned to capture the timing and rate of the latest glacial inception, about 115~ka ago (Andr\'e Berger, pers. comm.). Confidence in the simulator was gained by the fact that, once tuned, it reproduces the last glacial-interglacial cycle \citep{gallee92} reasonably well, as well as the dynamics of previous interglacial periods \citep{loutre03gpc} and, admittedly with more difficulties, the succession of marine isotopic stages 12-11-10 (400,000 years ago) \citep{loutre04epsl}. The problem is that tuning a simulator is a fairly non-transparent and non-optimal way of using past climate information for the prediction. It is non-transparent in the sense that it is not always clear which information has been incorporated and which was effectively predicted (or `hindcasted'); it is non-optimal in the sense that only a fraction of the palaeoclimate information has been used in the inference process, and that the simulator can never be perfectly calibrated.
Furthermore, the `satisfaction' criteria which preside over the tuning process are not explicitly defined, and the consequences of the deviations between simulator results and observations on predictions are not explicitly quantified. \section{Dynamical systems as models of glacial cycles \label{sec:dyn.sys}} \subsection{What do we learn from simple dynamical systems?} The complexity and multitude of mechanisms involved in glacial-interglacial cycles may leave the modeller hopeless. Fortunately, it is a fact of nature that very complex systems may exhibit fairly regular and structured dynamics. This phenomenon, called `emergence', is related to the fact that there are constraints on the system taken as a whole. The dynamics of the system may therefore be inferred and understood by identifying the constraints that are relevant at the scale of interest, without having to consider all the interactions between the individual components of the system. This argument justifies scientists' interest in `conceptual' models of glacial cycles. These models do have educational interest and they also have predictive power if they are correctly formulated (see examples in \citet{paillard01rge}; for more general background about complex systems consider the monographs of \cite{Haken06aa} and \cite{Nicolis07aa}). For illustration consider the following minimal conceptual setting. Suppose that the state of the climate system may be summarised by two variables. Call them $V$ and $D$. Variable $V$ represents the total continental ice volume. It accumulates variations in time due to the astronomical forcing $F(t)$ plus a drift term associated with the variable $D$, which is defined later. Mathematically this reads, if $\delta V$ is a variation in $V$ over a fixed time interval $\delta t$: \begin{subequations} \begin{equation} \tau \frac{\delta V}{\delta t} = - (D + F(t) + \beta) \label{v1} \end{equation} The role of parameter $\beta$ is discussed later.
$\tau$ is a characteristic time that controls the rate at which ice volume grows or decays, and which is tuned on observations. Variable $D$ represents the state of a climatic component that responds with a hysteresis behaviour to changes in ice volume. The variations in surface pCO$_2$ associated with ocean circulation changes may present such a behaviour \citep[e.g.][]{Gildor01aa, paillard04eps}. This is mathematically modelled as follows: \begin{equation} \tau \frac{\delta D}{\delta t} = -\alpha (9D^3 - 3D - V + 1), \label{v2} \end{equation} \label{FitzHughNagumo} \end{subequations} where $\alpha$ is another parameter that controls the rapidity of the transition between the two branches of the hysteresis (to understand the principle of this second equation, consider the situation $V=1$. There are then two stable values of $D$ that would be in equilibrium with $V$ (i.e.: $\frac{\delta D}{\delta t}=0$): $D=0.57$ or $D=-0.57$. Whichever is approached depends on the system history, i.e.: there is hysteresis). Equations (\ref{FitzHughNagumo}a,b) constitute the dynamical core of several conceptual models of ice ages available in the literature, including models by \cite{Saltzman91sm, Gildor01aa} and \cite{paillard04eps}. Mathematicians will recognise a particular formulation of the celebrated FitzHugh-Nagumo model (e.g. \citet{Izhikevich:2006}), originally developed to study neuronal responses to electrical stimuli (the term `+1' at the end of equation \eqref{v2} plays no other role than constraining $V$ to be almost always positive, which is more physical for an ice volume). In the absence of astronomical forcing the behaviour of $V$ is mainly dictated by $\beta$. It may converge to a fixed point close to zero (if $\beta\leq-1/3$), much above zero (if $\beta \geq 1/3$), or oscillate between the two (if $-1/3<\beta<1/3$).
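These regimes are easy to check numerically. The sketch below integrates equations (\ref{FitzHughNagumo}a,b) with a simple Euler scheme; the values of $\tau$, the synthetic forcing mix and the initial conditions are illustrative assumptions, not values fitted to any record (the sign of the cubic term is taken such that the outer branches of the $D$-nullcline are stable, as required for the hysteresis described above).

```python
import numpy as np

# Euler integration of the two-variable (V, D) model. alpha = 3 and
# beta = 0.2 are the values quoted in the text; tau (in ka), the synthetic
# forcing mix and the initial conditions are illustrative assumptions.

def simulate(t_end=800.0, dt=0.05, tau=25.0, alpha=3.0, beta=0.2):
    n = int(t_end / dt) + 1
    t = np.linspace(0.0, t_end, n)
    # synthetic mix of precession-like (23 ka) and obliquity-like (41 ka) cycles
    F = 0.75 * np.sin(2 * np.pi * t / 23) + 0.4 * np.sin(2 * np.pi * t / 41)
    V = np.zeros(n)
    D = np.zeros(n)
    V[0], D[0] = 1.0, 0.5
    for i in range(n - 1):
        V[i + 1] = V[i] - dt / tau * (D[i] + F[i] + beta)
        # restoring cubic: the outer branches of the D-nullcline are stable,
        # which produces the hysteresis between glaciation and deglaciation
        D[i + 1] = D[i] - dt / tau * alpha * (9 * D[i]**3 - 3 * D[i] - V[i] + 1)
    return t, V, D

t, V, D = simulate()
```

With these settings $V$ describes slow glacial build-up punctuated by faster deglaciations, paced but not enslaved by the forcing; setting the forcing amplitude to zero recovers the free relaxation oscillation.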
Within the oscillation regime $\beta$ controls the asymmetry of the spontaneous oscillations and $\alpha$, the sharpness of the shifts between the glaciation and deglaciation regimes. Here we chose $\beta=0.2$ to capture the notion of slow glacial build-up and fast deglaciation, and $\alpha=3$ for rapid shifts in $D$ compared to variations in $V$. The oscillations are also known as relaxation oscillations, first documented by Van der Pol in 1921 \citep{Kanamaru:2007}. \begin{figure}[ht] \begin{center} \includegraphics[width=\textwidth]{Fig_VDModel/vdmodel.pdf} \end{center} \caption{Time-series generated with the two-variable model, equations \eqref{v1} and \eqref{v2}. The system is forced by a synthetic mix of obliquity and precession. The variable $V$ integrates the forcing with a correction $D$. $D$ shifts between two states according to a hysteresis behaviour controlled by $V$. In this conceptual model $V$ may be interpreted as the total volume of continental ice, and $D$ is often associated with the state of the ocean circulation. $V$ is here superimposed on the benthic stack of \cite{lisiecki05lr04}, appropriately scaled to highlight the similarity between the simulation and the palaeoclimate record.} \label{fig:vdmodel} \end{figure} For forcing the system we use a linear combination of climatic precession and obliquity in order to capture the notion that summer insolation controls ice ages ($F(t) = 30\,[e\sin\varpi + \varepsilon']$, where the prime denotes the departure from the mean). With this scaling the forcing amplitude is approximately of the order of $1$ and thus of the same order as $D$. Simulations with this model are shown in Figure \ref{fig:vdmodel}. There are already two lessons to be drawn from this simple experiment.
First, observe that the simulated future climate history is an ice volume decline up to 20~ka into the future, in spite of the fact that the model correctly reproduces the increases in benthic trends at the end of marine isotopic stages 11 (400~ka ago), 9 (325~ka ago, although the model spuriously produces a subsequent drop in ice volume), 7 (200~ka ago) and 5e (115~ka ago, but as for stage 9 the subsequent drop towards stage 5c is too strong). Hence, the argument that present ice volume should be growing (assuming no anthropogenic perturbation) because it did so during analogous situations in past interglacials is not sufficient. It would be necessary to demonstrate, in addition, that this simple model is inappropriate. Second, observe that $D$ shifts to negative values at the end of the deglaciations and, in this model, this shift preconditions glacial inception. This may justify why, rather than considering insolation, it has sometimes been decided to seek an analogue to the present by considering the timing of terminations, because terminations are a proxy for the state of system variables that cannot be immediately observed. Namely, \citet{EPICA04} chose to align the previous deglaciation on the deglaciation that led to stage 11. Based on the observed trends in the $\delta$D, they estimated that glacial inception should not occur before more than ten thousand years from now. This alignment was debated on the grounds that it violates the insolation alignment \citep{crucifix06eos, ruddiman07rge} (further informal debate and correspondence followed). However, there may be a more fundamental reason why any `alignment', i.e., any blind application of the analogues method, may yield a wrong or at least overconfident prediction.
To see this, slightly modify \eqref{v2} to add a random increment of standard deviation 0.05 every 1~ka (assume $\sigma$ is a standard normal deviate): \begin{equation} \tau \frac{\delta D}{\delta t} = -\alpha (9D^3 - 3D - V + 1) + 0.05 \sigma \sqrt{\delta t} \tag{\ref{v2}'} \label{v2p} \end{equation} The random (stochastic) process parameterises unpredictable ocean-atmosphere disturbances such as Dansgaard-Oeschger events (cf. \cite{Ditlevsen09ab}), or volcanic eruptions, which impact on climate dynamics. The theory that allows us to represent non-resolved dynamical processes with stochastic processes was introduced in climatology by \citet{hasselmann76}. \citet{Nicolis05aa} discuss to what extent such parameterisations may also be used to estimate the effect of accumulating model errors, and a full volume entirely devoted to stochastic parameterisations was recently published \citep{Palmer10aa}. Together, equations \eqref{v1} and \eqref{v2p} form a \textit{stochastic dynamical system}. Two trajectories simulated with two different realisations of the random numbers are shown in \figref{fig:vdstoch}. In principle the standard deviation of the noise (0.05) is an uncertain quantity that should be inferred from realistic computer experiments and/or calibrated on observations. Given that the present purpose is merely illustrative, we are satisfied with the fact that the noise added to the evolution of $V$ and $D$ looks reasonable compared to typical benthic and ice core records. Indeed, it is seen that the stochastic disturbances are here too small to compromise the shape of glacial cycles and the action of astronomical forcing as a pace-maker, but they are large enough to alter the timing of glacial events and shift the succession of glacial cycles. Observe that the climatic history shown in black is similar to the deterministic simulation shown in \figref{fig:vdmodel}. The red one undergoes a shift around $-400$\,ka.
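Such realisations are straightforward to generate with an Euler--Maruyama scheme: the $D$ equation receives Gaussian increments whose variance accumulates to $(0.05)^2$ per 1~ka, as specified above. As before, $\tau$ and the synthetic forcing are illustrative assumptions.

```python
import numpy as np

# Euler-Maruyama integration of the stochastically perturbed model.
# The noise increments have standard deviation 0.05 per 1 ka, as in the
# text; tau (in ka) and the synthetic forcing are illustrative choices.

def simulate(seed, t_end=800.0, dt=0.05, tau=25.0, alpha=3.0, beta=0.2):
    rng = np.random.default_rng(seed)
    n = int(t_end / dt) + 1
    t = np.linspace(0.0, t_end, n)
    F = 0.75 * np.sin(2 * np.pi * t / 23) + 0.4 * np.sin(2 * np.pi * t / 41)
    V = np.zeros(n)
    D = np.zeros(n)
    V[0], D[0] = 1.0, 0.5
    for i in range(n - 1):
        V[i + 1] = V[i] - dt / tau * (D[i] + F[i] + beta)
        D[i + 1] = (D[i]
                    - dt / tau * alpha * (9 * D[i]**3 - 3 * D[i] - V[i] + 1)
                    + 0.05 * np.sqrt(dt) * rng.standard_normal())
    return t, V

t, V1 = simulate(seed=1)   # two realisations of the noise: identical
t, V2 = simulate(seed=2)   # forcing, possibly shifted glacial cycles
```

The two trajectories share the deterministic forcing but may lock onto different successions of deglaciation timings, which is the phase-shift phenomenon visible in \figref{fig:vdstoch}.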
The lengths of the simulated interglacials around $-400$\,ka differ in the two experiments, in spite of the fact that the preceding deglaciations are in phase and of similar amplitude. \begin{figure}[ht] \begin{center} \includegraphics[width=\textwidth]{Fig_VDModel/vdstoch.pdf} \end{center} \caption{ Same model as \figref{fig:vdmodel} but with a small stochastic forcing added to $D$ (equation \ref{v2p}). The two time-series shown correspond to two realisations of the stochastic forcing. Observe the phase shift around $-400$\,ka. It corresponds to a temporary loss of synchronisation of the system on its (astronomical) forcing.} \label{fig:vdstoch} \end{figure} A similar phenomenon was in fact visible in the experiments with the conceptual model of \citet{paillard98}, who showed that two future climate scenarios, one with immediate glacial inception and one with a long-delayed glacial inception, may be obtained with only tiny changes in model parameters. It is possible to explain these shifts with arguments of dynamical system theory (article in preparation). They are related to the fact that the non-perturbed dynamical system may accommodate different plausible climate histories, characterised by different series of deglaciation timings. At certain times, random perturbations (or small parameter variations) may cause a shift from one possible history to another one. The (unsolved) mathematical problem is to identify and characterise the moments at which such phase shifts are the most likely to occur. The possibility of such shifts contrasts with the earlier conclusions of \citet{tziperman06pacing}. Whether phase shifts effectively occurred in the true climatic history is still to be elucidated. However, if this phenomenon is confirmed, it implies that the predictive horizon of glacial cycles is intrinsically limited and that the analogues method may lead us to be overconfident about what can actually be predicted. \subsection{Developing dynamical systems of glacial cycles.
} The above example suggests that the gross dynamics of glacial-interglacial cycles may be characterised by a simple dynamical system. However, \cite{tziperman06pacing} previously warned us about a form of ambiguity related to the interpretation of such simple dynamical systems of ice ages. Indeed, we have not been specific at all about the meaning of the variable $D$. In the model of \citet{paillard04eps}, the role of $D$ is played by a variable quantifying the overturning of the southern ocean circulation. \citet{saltzman90sm} chose to introduce a third variable in order to distinguish the ocean circulation from the carbon cycle. In their model, the non-linear terms appear in the carbon-dioxide equations. More interpretations of the meaning of $V$ and $D$ are possible. Ambiguity \textit{per se} is not necessarily a bad thing. It allows one to explore and understand the system dynamics without having to enquire about the details of the mechanisms involved. However, it is desirable to attach a more concrete meaning to climatic variables in order to explore mechanisms and provide credible predictions outside the range of observations. There are two complementary ways of doing this. The first one is to verify that the relationships between the different variables are \textit{compatible} with the information inferred from experiments with numerical simulators. For example, eq. \eqref{v1} encodes a linear relationship between the ice mass balance and insolation. How does it compare with climate simulators? The second one is to connect the variables with actual palaeoclimate observations. If $V$ and $D$ represent ice volume and, say, the North Atlantic overturning cell, their simulated variations should be consistent with the current interpretations of planktonic and benthic foraminifera records. In this section we focus on how to include information inferred from numerical simulators in simple dynamical systems (\figref{fig:csim} supports the expos\'e).
Our ambition is to reach a level of understanding of the simulator response that is more general than mere predictions of the future ice volume evolution obtained with this simulator. To this end we propose to examine the \emph{bifurcation structure} of the simulator in response to varying input parameters. \begin{figure}[ht] \begin{center} \large\textsf{Strategy to design the `climate stochastic generator' } \includegraphics[width=\textwidth]{Fig_Clim_Simulator/cs.pdf} \end{center} \caption{ Route map for the design of a stochastic generator of climate histories. Statistical approximations of large numerical simulators (surrogates or emulators) are developed on the basis of appropriate plans of experiments. These mathematical objects are much smaller and more tractable than the original simulators, and may be coupled with each other and/or with more conceptual models. Expert elicitation is required for assessing the uncertainties associated with numerical simulators and formulating prior probability distributions of parameters. The resulting stochastic dynamical system may then be used to generate ensembles of palaeoclimate histories compatible with this model. } \label{fig:csim} \end{figure} As a concrete example, consider again LLN-2D. This model is an implementation of equations of quasi-geostrophic motion in the atmosphere and rheological equations for ice-sheet dynamics, assembled with the purpose of studying glacial cycles. As noted above, the level of sophistication of LLN-2D is largely superseded by modern simulators. Nevertheless it is a valid example because LLN-2D was used in the debate over the early anthropogenic hypothesis \citep{crucifix05anthropocene,loutre07interglacials}. In this example, we attempt to determine the number and stability of the steady states obtained with LLN-2D, as a function of the precession parameter.
Mathematicians have devised ingenious and sophisticated ways of exploring the dynamical structure of large mathematical systems, but here we follow a very simple and intuitive method, previously used to analyse simulators of the ocean-atmosphere \citep{Rahmstorf05aa} and ocean-atmosphere-ice-sheet \citep{Calov05aa} systems. It is known as the `hysteresis experiment'. This is achieved as follows: consider an eccentricity of 0.025, an obliquity of $23.5^\circ$ and a CO$_2$ concentration of 210~ppm, and run the simulator until equilibrium with $\varpi=90^\circ$ (perihelion on the 21st June). Then, slowly vary $\varpi$ along the trigonometric circle (the experiment takes the model equivalent of 400,000 years) and record the total continental ice volume as $\varpi$ varies. The resulting curve (\figref{fig:llnhys}) depicts an ensemble of quasi-equilibrium points. The first observation is that there is at least one stable equilibrium response for all the forcings in the range explored. Secondly, there are two stable states for $e\sin(\varpi)$ within about $-0.02$ and $+0.01$. A similar curve was calculated for a CO$_2$ concentration of 260~ppm. It is similar in shape but the two-stable-state regime now ranges from $e\sin(\varpi)=-0.024$ to $0$ and the maximum ice volume is $16\times10^6~\mathrm{km}^3$. Similar results were previously obtained with the CLIMBER-SICOPOLIS simulator \citep{Calov05aa}. \begin{figure}[ht] \begin{center} \includegraphics{Fig_Hys_LLN2D/HY7-crop.pdf} \end{center} \caption{Simulated volume of ice over the northern hemisphere near steady-state as a function of precession using the LLN-2D simulator of \cite{gallee92}, given fixed eccentricity, obliquity and CO$_2$ concentration. The experiment consists in varying the precession parameter slowly along the trigonometric circle over 400~ka of model time. \label{fig:llnhys}} \end{figure} The bifurcation structure that emerges from LLN-2D was not apparent until we did the simulations.
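The logic of the hysteresis experiment can be reproduced on any bistable system: sweep a control parameter slowly up and then down, letting the state relax at each step, and record the quasi-equilibria. The toy sketch below does this for the normal form $\dot{x} = x - x^3 + \lambda$; it illustrates the method only, not LLN-2D itself.

```python
import numpy as np

# Toy version of the `hysteresis experiment': a bistable system
# dx/dt = x - x**3 + lam is relaxed quasi-statically while the control
# parameter lam is swept up and then back down. The two sweeps trace
# different stable branches, revealing the folds (tipping points).

def sweep(lams, x0, dt=0.1, relax_steps=50):
    x, branch = x0, []
    for lam in lams:
        for _ in range(relax_steps):       # let x settle at this lam
            x += dt * (x - x**3 + lam)
        branch.append(x)
    return np.array(branch)

lams = np.linspace(-1.0, 1.0, 201)
up = sweep(lams, x0=-1.3)                  # forward sweep, lower branch
down = sweep(lams[::-1], x0=1.3)[::-1]     # backward sweep, upper branch

i0 = int(np.argmin(np.abs(lams)))          # index of lam = 0
# inside the bistable window the two sweeps disagree:
# up[i0] sits on the lower branch, down[i0] on the upper branch
```

Where the forward and backward sweeps disagree, two stable states coexist; the parameter values at which they rejoin locate the folds, exactly as in \figref{fig:llnhys}.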
We have therefore learned a new bit of information that we are ready to trust, or at least to test against the observations. This information may be encoded as a simple equation, which is known to have a bifurcation structure similar to the one found in LLN-2D: \begin{equation} \delta V = -\phi_V(V,\mathrm{CO}_2, \varepsilon, e,\varpi,\Psi)\, \delta t \label{eq:llnemu} \end{equation} where $\delta V$ is a variation in ice volume over a time-step $\delta t$, and $\phi_V$ is a function of ice volume, astronomical forcing, CO$_2$ and other parameters (summarised by $\Psi$); the full expression is given in Appendix 1. Equation \eqref{eq:llnemu} is a \textit{surrogate} of the LLN-2D model: it is a simple, fast process that behaves dynamically in a way similar to LLN-2D. Here, we calibrated the parameter $\Psi$ such that the surrogate approximately reproduces the behaviour of LLN-2D over the last 800~ka. The calibration did not yield a single \textit{best} value of $\Psi$, but rather a distribution of plausible values of the parameter $\Psi$ that reproduce the model output reasonably well. The resulting fit is shown on \figref{fig:lln-emulator}. \begin{figure}[ht] \begin{center} \includegraphics[width=\textwidth]{Fig_LLN_Surrogate/lln-emualator.pdf} \end{center} \caption{Simulation of ice volume with the LLN-2D model (red) forced by astronomical forcing and variations in CO$_2$ \citep{petit99,Luethi08aa}, compared with 50 realisations of the surrogate \eqref{eq:llnemu}. The blue trajectory is obtained with the surrogate using the central estimates of the regression coefficients. All experiments are deterministic. } \label{fig:lln-emulator} \end{figure} By finding a surrogate for LLN-2D we have progressed on two fronts. First, we have distilled the vast network of partial differential equations implemented in LLN-2D into a much simpler structure.
This structure can now be compared with canonical dynamical systems in order to comprehend its important dynamical characteristics (\citet{Ditlevsen09aa} studied precisely such an equation, similar to \eqref{eq:llnemu}, in the context of glacial cycles). Secondly, it is possible to efficiently express our beliefs and uncertainties about this simulator. Namely, uncertainty about the position of tipping points may be expressed as uncertainty on the coefficients $\beta_i$ (cf. Appendix 1). Unfortunately, LLN-2D is a very rough simulator by today's standards. One should therefore attempt to correct or augment eq. \eqref{eq:llnemu} based on the knowledge gained from simulators more specifically designed to study the response of the atmosphere and oceans to changes in forcing and boundary conditions. This is where well-designed experiments with general circulation simulators of the ocean and the atmosphere, such as those reviewed in the previous section, may be useful. Such plans of experiments may help us to locate points in the parameter space from which snow may accumulate from one year to the next. More generally, they should allow us to explore the (possibly non-linear) response structure of the atmosphere-ocean system to changes in forcing and boundary conditions. To this end, statisticians have developed two concepts that should be useful: \begin{description} \item[Experiment design theory.] This theory provides practical guidance to efficiently sample the space of possible inputs (forcing and parameters) of a numerical model, in order to learn as much as possible about the simulator with as few experiments as possible \citep{santner03}. \item[The emulator.] This is an interpolator that predicts the response of the climate simulator for parameters other than those of the experiments actually made.
It also provides an estimate of the uncertainty associated with the interpolation process \citep{Kennedy01bayes,Rougier09aa}. \end{description} Just like the LLN-2D surrogate, the emulator provides us with a fast, efficient, high-level description of the general circulation simulator response. It may constitute the basis for non-linear parameterisations of, for example, the response of the northern hemisphere snow accumulation balance to obliquity, precession, CO$_2$ and ice volume. In turn, these parameterisations can be included in a dynamical model of ice ages. Finally, it happens that certain processes supposed to be relevant at glacial time scales are difficult to simulate reliably with numerical simulators. For example, calculating the response of the ocean circulation near the Antarctic ice shelves to changes in sea level is particularly challenging because it involves connections between processes at very different spatial scales. Yet, \cite{paillard04eps} speculated that these circulation changes may play a role in carbon cycle dynamics in glacial cycles, and they supported their claim with a number of qualitative arguments. Likewise, \cite{Ruddiman06ab} suggests a direct effect of precession on the carbon cycle through tropical dynamics and, possibly, southern ocean temperatures. In such cases, where numerical simulations are difficult or considered to be too unreliable, the framework of the climate stochastic dynamical model gives us the freedom to formulate hypotheses about the role of such processes in the form of simple equations, and then to test them against observations. \subsection{A statistical framework to accommodate observations} Many of the ideas expressed above were previously formulated by Barry Saltzman \citep{saltzman02book}. The programme for investigation and prediction of ice ages proposed here goes further.
It follows a \textit{Bayesian} methodology, because uncertainties are expressed by means of probability distribution functions and they are updated using observations. \citet{Haslett06aa} frames the problem quite effectively: the system defined by eq. \eqref{v1}, \eqref{v2p} may be interpreted as a generator of \textit{a priori} plausible climate histories. They are \textit{a priori} plausible in the sense that they are generated from a system of equations and probability distributions of parameters that express, transparently, our beliefs about the dynamics of the climate system. In practice, the calibration of this `generator' consists in rating these climate histories against palaeoclimate observations. This operation consists in attributing more or less weight to the different possible model parameters, depending on the likelihood of the climate histories that they generate. This weighting process constitutes the mechanism by which the \textit{a priori} uncertainty on model parameters is \textit{updated} by accounting for observations. The more general principle of \textit{model selection} consists, in the presence of two different models, in deciding which one produces the climatic histories that are the most likely given the observations. The concept of Bayesian model calibration was first applied to the problem of glacial cycle dynamics by \citet{hargreaves02}. The \textit{likelihood} is, in this framework, an important quantity. It is formally defined as \textit{the probability of observing what was observed, if the simulated palaeoclimate history were correct}. Estimating the likelihood therefore requires one to express carefully the connections between modelled climate histories (the `model'), the real world, and observations. These connections may be visualised by means of a Bayesian graphical network, an illustrative example of which is given in \figref{fig:modelproxy}.
The graphic is largely inspired by that published by \citet{Haslett06aa}, but it is here adapted to the glacial cycles problem. Arrows indicate causal dependencies. Namely, our knowledge of benthic $\delta^{18}O$ sampled at a certain depth depends on the $\delta^{18}O$ that was actually preserved at that depth (after, e.g., bioturbation), which itself depends on the age corresponding to that depth, as well as on the $\delta^{18}O$ trapped by the foraminifera at the time corresponding to that age. Further up, the generated climate histories depend on the parameters of the climate stochastic model and on the external forcing. The climate stochastic dynamical system used to generate `palaeoclimate histories' is thus part of a larger framework generating potential `palaeoclimate records', chronologies and sedimentation histories. \cite{Haslett08aa} provide an algorithm to generate sediment chronologies compatible with time markers (radiocarbon dates or, in our case, tephra layers and/or magnetic inversions). \figref{fig:modelproxy} is a simplified diagram; the full network should also include parameters controlling the response of proxies to climate. Nonetheless, it is good enough to clarify some of our ideas about how palaeoclimate inferences are drawn. For example, palaeoclimate scientists often take advantage of astronomical forcing to constrain the chronology of climate records \citep{imbrie84}. The Bayesian graphical network shown here makes it clear that this operation \textit{requires} a climate model (the white ellipse representing climate histories generated by the climate model lies between the forcing and observations).
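The updating mechanism described above, in which prior parameter samples are re-weighted by the likelihood of the histories they generate, can be sketched as importance weighting. The one-parameter `model', the synthetic observations and all numerical values below are made up for illustration; they stand in for the climate stochastic generator and the palaeoclimate record:

```python
import math
import random

random.seed(0)

def simulate(theta, n=50):
    """Toy 'climate history' generator: a damped recursion with rate theta."""
    x, history = 1.0, []
    for _ in range(n):
        x = (1.0 - theta) * x + 0.05 * random.gauss(0.0, 1.0)
        history.append(x)
    return history

def log_likelihood(history, obs, sigma=0.1):
    """Log-probability of observing obs, if the simulated history were correct."""
    return sum(-0.5 * ((h - o) / sigma) ** 2 for h, o in zip(history, obs))

obs = simulate(0.3)                                  # synthetic 'observations'
prior_draws = [random.uniform(0.05, 0.8) for _ in range(500)]
log_w = [log_likelihood(simulate(t), obs) for t in prior_draws]

# Re-weight the prior samples by their likelihood: the Bayesian update.
m = max(log_w)
weights = [math.exp(l - m) for l in log_w]
total = sum(weights)
posterior_mean = sum(t * w for t, w in zip(prior_draws, weights)) / total
```

Samples whose histories resemble the observations receive large weights, so the weighted (posterior) distribution concentrates near the value used to generate the pseudo-data; model selection proceeds analogously, by comparing the total evidence accumulated by competing models.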
Bayesian methodologies have become increasingly popular in climate science because they provide transparent ways of expressing our uncertainties and modelling choices \citep{annan06multiple,Rougier2007Probabilistic-i, Rougier07aa, Sanso08aa}, as well as the estimated distance between the model and reality (on this specific point see \citet{Goldstein09aa}). Calibrating dynamical systems is, however, a very difficult problem. In general, the number of possible climate histories compatible with a model is so large that fairly complex algorithms are needed to reduce as much as possible the number of experiments required to efficiently update prior distributions. Palaeoclimates present additional specific challenges related to dating uncertainties and the complexity of the environmental factors affecting palaeoclimate archives. \citet{hargreaves02} proposed a straightforward application of an algorithm well known to statisticians, Metropolis--Hastings Markov chain Monte Carlo. The algorithm works well as long as the calibration period is relatively short and the climate trajectories all cluster around the same common history. Unfortunately, this may not be the case if local instabilities occur, as in the example shown on \figref{fig:vdstoch}. Furthermore, the algorithm demands large computing resources. \cite{Crucifix09aa} then proposed a solution based on a statistical algorithm called the `particle filter for parameter and state estimation' \citep{lw01}. This is a sequential filter: it `sweeps' observations forward in time from the past to the present in order to generate a posteriori distributions of parameters and model states at the end of the run. The filter seemed to be a powerful solution to the problem posed by the existence of local instabilities. Unfortunately (again), further sensitivity studies performed after publication led us to tone down our enthusiasm.
The filter performs poorly on the model selection problem because it fails to discriminate models on the basis of their long-term dynamics. For example, some of the models selected by the filter no longer exhibit glacial-interglacial cycles \citetext{unpublished results; available on request to the author}. Hope resides in more advanced Bayesian methods, which combine the Metropolis--Hastings strategy with the particle filter \citep{Andrieu10aa}. An alternative solution is to calibrate the parameters on the basis of invariant summary statistics \citep{Wood10aa}, using a method known as `Approximate Bayesian Computation' \citep{Sisson07aa, Wilkinson08aa}. Such statistics allow one to characterise a climatic trajectory in a way that is not sensitive to its initial conditions, nor to the exact timing of climatic events. For example, one may attempt to calibrate the dynamical system based on the average duration of a glacial cycle, or on the `skewness' or `asymmetry' of these cycles \citep{King1996247}. More complex metrics may be envisaged to capture complex features associated with the non-linear dynamics of the system \citep[and refs. therein]{Marwan20094246, citeulike:753023}. The use of summary statistics, however, should still respect the inference process outlined on \figref{fig:modelproxy}. An easy mistake is to apply summary statistics to climate model outputs and compare them straightforwardly with palaeoclimate data, without regard to the data preservation, sampling and possibly interpolation processes. Good summary statistics should allow us to efficiently discriminate between climate models and, at the same time, they should be robust to hypotheses about the data preservation and sampling processes. The works by \cite{Mudelsee94aa, Mudelsee01aa,npg-16-43-2009} and refs. therein constitute important references in this respect.
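A minimal sketch of rejection-sampling Approximate Bayesian Computation with a timing-insensitive summary statistic follows. The sawtooth toy model, the choice of statistic and the tolerance are illustrative assumptions, not the metrics discussed above:

```python
import random

random.seed(1)

def sawtooth(period, n=400):
    """Toy glacial-cycle generator: slow build-up, fast collapse."""
    return [(t % period) / period for t in range(n)]

def summary(series):
    """A timing-insensitive summary statistic: mean absolute increment.
    It reflects cycle length without depending on phase alignment."""
    incr = [abs(b - a) for a, b in zip(series, series[1:])]
    return sum(incr) / len(incr)

def abc_rejection(obs_stat, n_draws=2000, tol=0.002):
    """ABC by rejection: keep prior draws whose simulated summary
    statistic falls within tol of the observed one."""
    accepted = []
    for _ in range(n_draws):
        period = random.uniform(20.0, 150.0)   # prior on cycle length (ka)
        if abs(summary(sawtooth(period)) - obs_stat) < tol:
            accepted.append(period)
    return accepted

obs = summary(sawtooth(100.0))                 # pseudo-data: 100 ka cycles
posterior = abc_rejection(obs)
```

The accepted draws concentrate around the cycle length used to generate the pseudo-data, even though no point-by-point comparison of trajectories, and hence no exact chronology, is ever required.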
\begin{figure}[ht] \begin{center} \large\textsf{Bayesian inference network} \includegraphics[width=\textwidth]{Fig_Bayesian_Model/model-proxy.pdf} \end{center} \caption{Simplified Bayesian inference network for selection and calibration of stochastic dynamical systems of palaeoclimates (cf. \citet{Haslett06aa} for more theoretical background). Rectangles correspond to observations or certain quantities. Ellipses are uncertain quantities. Continuous functions of time are symbolically marked with $(t)$; functions of depth by $(d)$; and discrete series are marked with $[i]$. The sequence from left to right corresponds to forcing, climate model, natural archiving of the climate information (blue), sampling and measurement (yellow). Modelling consists in defining the relationships symbolised by the arrows and attributing prior probability distributions to uncertain quantities at the top of the chain (e.g.: model parameters). Model validation and selection are judged based on whether actual observations are compatible with the model (measured by the likelihood); statistical calibration consists in propagating the information backwards, from the observations to the parameters, in order to make predictions (green). `Summary statistics' are variables such as spectral power or other integrated measures which summarise several observations. } \label{fig:modelproxy} \end{figure} \section{Conclusion} The Pleistocene record suggests that modern insolation is not much above that needed for glacial inception. However, the complexity of the climate response to the complex astronomical forcing implies that further theoretical elaborations are needed to transform this statement into a reliable prediction about the slow evolution of climate, such as: `greenhouse gases should have declined during the Holocene, leading to glacial inception'.
In particular, there is no perfect insolation analogue to the Holocene in the past and, were there one, it could not be guaranteed that climate would behave exactly the same way as during that hypothetical analogue. We mentioned several reasons for this. One is the possibility that small disturbances may, at strategic times, delay glacial events by several thousands of years. Locating such `high-sensitivity' times is an attractive challenge. If the Holocene was such a time, any small disturbance, including pre-industrial anthropogenic activity, may have durably inflected the course of climate. The dynamical system approach proposed here offers opportunities to tackle this problem in the tradition of hypothetico-deductive scientific investigation, subject to the usual criteria of model parsimony, prediction skill and accommodation of available observations. We have seen that very simple dynamical systems may already roughly reproduce the benthic curve; on the other hand, the useful output of more sophisticated numerical simulators may also be captured by fairly simply structured surrogates or emulators. These considerations encourage our quest for fairly compact and efficient predictive models of glacial-interglacial cycles. They are our main hope to credibly reconcile the fairly small amount of information present in palaeoclimate records with the flow of information generated by the complex simulators of the climate system. Indeed, reduced dynamical models designed to emulate complex numerical simulators contain a much smaller number of adjustable parameters and they are far more computationally efficient. They are therefore easier to calibrate on palaeoclimate observations. Furthermore, they make it possible to infer very general statements about climate predictability based on the large body of literature available on dynamical system theory. Nevertheless, one should not be deceived by the apparent success of our simple model and its close parents.
Finding a dynamical system that convincingly accommodates more than one palaeoclimate data series is not trivial. A convincing model of glacial cycles should arguably accommodate observations about the oxygen isotopic composition of foraminifera, greenhouse gases, and carbon isotopic data, both in the oceans and in the atmosphere. It should explain spectral peaks, stochastic aspects, the asymmetry of cycles and slow changes in the system dynamics like those that seemingly occurred over the last million years. So far, this target ``is still elusive'' \citep{Raymo08aa}. Bayesian inference constitutes a powerful framework to handle knowledge and uncertainties associated with palaeoclimate dynamics. This framework will lead us to formulate probabilistic statements about the next glacial inception and, more generally, assist our investigations about the climate system mechanics and dynamics. Such a programme for statistical-dynamical modelling of (palaeo-)climates is in its beginnings and serious challenges are faced. The more general problem of statistical calibration of dynamical systems is already quite difficult. Palaeoclimate data bear further difficulties, both at the climate system level (intertwining of dynamical structures at different time scales, like Dansgaard-Oeschger events and glacial cycles) and at the observation level (time uncertainties, sediment disturbances, uncertain relationships between climate state and proxy\ldots). It is therefore my opinion that we cannot yet express our predictions about natural trends in ice volume and greenhouse gases by probabilistic statements, and hence, that natural archives of previous interglacials do not falsify natural explanations for increasing greenhouse gas trends during the Holocene.
\section*{Appendix 1: the LLN-2D surrogate} The form adopted for the surrogate is: \begin{equation} \delta V = -[ \beta_0 + \beta_1 (V)(V-\beta_2) + \beta_\Pi(e\sin\varpi) + \beta_\varepsilon(\varepsilon) + \beta_C\ln C]\delta t \label{LLN_emulator} \end{equation} where $V$ is ice volume, $\delta V$ the ice volume increment over $\delta t = 1$~ka; $C$ is the CO$_2$ concentration, $\varepsilon$ is obliquity, and the $\beta_i$ are regression coefficients. $V$ is further forced to be positive by imposing a lower bound of $-V\delta t$ on $\delta V$. Priors on regression coefficients are normal as follows (volumes are in $10^6~\mathrm{km}^3$, angles in radians, carbon dioxide concentrations in ppm and times in ka):~$\beta_0\sim\mathcal{N}(19,3^2)$, $\beta_1\sim\mathcal{N}(1.4,0.4^2)/10^3$, $\beta_2\sim\mathcal{N}(27,3^2)$, $\beta_\Pi\sim\mathcal{N}(60,5^2)$, $\beta_\varepsilon\sim\mathcal{N}(72,5^2)$, $\beta_C\sim\mathcal{N}(3,1^2)$ (conventionally, $\mathcal{N}(\mu,\sigma^2)$ is a normal distribution of centre $\mu$ and standard deviation $\sigma$). Posteriors are estimated by running a particle filter for state and parameter estimation \citep{lw01} on an experiment spanning the interval $[-800~\mathrm{ka};0]$, forced by astronomical forcing \citep{berger78} and CO$_2$ data by EPICA \citep{Luethi08aa} and Vostok \citep{petit99}. The likelihood function used to run the filter assumes that the errors of the surrogate at every 1-ka time step are independent and normally distributed with a standard deviation equal to $1$. The red curve on \figref{fig:lln-emulator} is a climate simulation with the LLN-2D model, while the blue curve is computed using the surrogate with the central estimates for the $\beta_i$. Grey curves are trajectories obtained with parameters sampled from the posterior distribution. Note that the timing of the deglaciations is here constrained by the history of CO$_2$ imposed in these experiments.
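One deterministic time step of the surrogate may be sketched as follows, using the central estimates of the priors above. The reading of the $\beta_1 (V)(V-\beta_2)$ term as the product $\beta_1 V (V-\beta_2)$, and the forcing values used in the usage loop, are assumptions made here; the sketch mainly illustrates the lower bound $-V\delta t$ that keeps the ice volume non-negative, and the absolute numbers should not be over-interpreted:

```python
import math

# Central estimates of the regression coefficients (priors of Appendix 1).
BETA0, BETA1, BETA2 = 19.0, 1.4e-3, 27.0
BETA_PI, BETA_EPS, BETA_C = 60.0, 72.0, 3.0

def surrogate_step(V, e, varpi, obliquity, co2, dt=1.0):
    """One 1-ka step of the deterministic surrogate, reading the
    beta_1(V)(V - beta_2) term as beta_1 * V * (V - beta_2).
    The increment is bounded below by -V*dt so ice volume stays >= 0."""
    dV = -(BETA0
           + BETA1 * V * (V - BETA2)
           + BETA_PI * e * math.sin(varpi)
           + BETA_EPS * obliquity
           + BETA_C * math.log(co2)) * dt
    return V + max(dV, -V * dt)

# Illustrative constant forcing: eccentricity, precession (rad),
# obliquity (rad), CO2 (ppm).  Whatever the drift, V never goes negative.
V = 10.0
for _ in range(200):
    V = surrogate_step(V, e=0.025, varpi=math.pi / 2, obliquity=0.41, co2=210.0)
```

The clipping reproduces the constraint stated above: whenever the melting tendency would drive the ice volume below zero, the increment is truncated at $-V\delta t$ and the volume settles at zero instead.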
\section*{Acknowledgements} The material presented in this paper was greatly influenced by discussions at the SUPRAnet workshops organised by Caitlin Buck (University of Sheffield), in particular with Jonty Rougier, John Haslett and Andrew Parnell, as well as by further discussions with Jonty Rougier made possible by the exchange programme between the French Community of Belgium and the British Council. The editor (Bill Ruddiman) and two anonymous reviewers provided particularly constructive comments. The author is funded by the Belgian Fund for Scientific Research. This is a contribution to the European Research Council starting grant `Integrating Theory and Observations over the Pleistocene' nr ERC-2009-Stg 239604-ITOP.
\section{Introduction: Constraints on the Gauss-Bonnet Parameter}\label{1} The Einstein-Gauss-Bonnet (EGB) gravity in $D$ dimensions in the presence of a negative cosmological constant $\Lambda$ is given by the action \cite{BD} (in the units $c=G=1$) \begin{equation} I_\text{EGB}=\frac{1}{16\pi} \int \text{d}^D x \sqrt{-g} \left(R-2\Lambda+\frac{\alpha}{D-4}\mathcal{G}\right), \end{equation} where $\mathcal{G}:=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^2$ is the Gauss-Bonnet term, which is a topological invariant in $D=4$. Therefore, we only consider EGB gravity in $D \geqslant 5$. The AdS curvature length scale $L$ is related to the cosmological constant via $\Lambda=-(D-1)(D-2)/L^2$. We will refer to $\alpha$ as the Gauss-Bonnet parameter\footnote{Different authors have defined the Gauss-Bonnet parameter with different normalizations. For example, our $\alpha/(D-4)$ as a whole is very often referred to as the Gauss-Bonnet parameter/coupling constant \cite{BD}. Evidently it has dimension of length squared. In another convention, the coupling constant is defined as $\lambda_\text{GB}L^2/[(D-3)(D-4)]$, see, e.g., \cite{0911.3160}. The $(D-4)$ factor in the denominator allows Glavan and Lin to construct the so-called ``novel'' $D=4$ EGB theory \cite{1905.03601}, whose status is controversial due to a number of subtleties and issues \cite{2003.11552,2004.02858,2004.03390,2004.12998,2005.03859,2005.12292,2009.10715}. We will not specifically discuss this theory in this work, though our analysis would apply to this case just as well.}. This theory can be interpreted as a higher curvature correction to general relativity (GR), and in fact can be derived as a low energy limit of heterotic string theory \cite{BD,Z,GW,GS}, in which the Gauss-Bonnet parameter is proportional to the inverse string tension, or related to the coupling constant $\alpha'$ in the string worldsheet action \cite{Z}. From this ``top-down'' view, $\alpha$ is clearly positive.
Black holes in the EGB theory have been studied in detail; see, for example, \cite{cai, 0504127, 0504141}. In gauge/gravity correspondence (hereinafter, ``holography''), it has become a common practice to employ modified gravity theories in the anti-de Sitter (AdS) bulk to model field theories with certain desired properties. Strictly speaking, the bulk physics is described by string theory. However, under certain well-defined circumstances (the string coupling, and the string length to AdS length scale ratio, are small), one hopes that stringy effects can be neglected. If so, then it suffices to deal with (mostly) semi-classical gravity. In such a ``bottom-up'' approach, one can often find analytic black hole solutions to work with. The hope is that such effective theories can in fact be embedded into string theory. For example, attempts have been made to embed massive gravity \cite{1807.00591}, bimetric gravity \cite{2106.04614}, Ho\v{r}ava gravity \cite{1211.0010} and Lifshitz gravity \cite{1512.03554} in string theory, which provide some justification for their applications in the literature. However, not all effective field theories can be embedded into string theory, as expressed by the ``Swampland Conjecture'' \cite{0605264,1903.06239}. On the other hand, from the bottom-up perspective, the Gauss-Bonnet parameter is just a coupling constant. It is not obvious that it \emph{has} to be positive. This is not to say that there is no constraint at all on the range of $\alpha$.
For example, assuming that $\alpha>0$, various causality considerations (including: scattering amplitudes, the requirement that gravitons undergo Shapiro time delay as opposed to time \emph{advance}, shock wave characteristic analysis\footnote{In EGB theory, there exists superluminal propagation of gravitons along the characteristic surfaces of the PDEs, which determine the causal cone of the theory, distinct from the null cone \cite{1406.0677,1406.3379}.}, hyperbolicity, quantum entanglement constraints) \cite{1407.5597,1508.05303, 2101.02461,1401.5089} require $\alpha$ to be small. By small, we mean $\alpha \ll \ell_P^2$, the square of the Planck length. This is not automatically satisfied in string theory: in the weakly coupled case we can have $\alpha \gg \ell_P^2$. However, the existence of higher spin massive particles in the full string theory protects causality \cite{1407.5597}. From these constraints, $\alpha<0$ is not ruled out, as shown by the hyperbolicity and boundary causality analysis in \cite{1610.06078}. (See also \cite{0802.3318,0906.2922,0911.4257,0911.3160} for related discussions and constraints.) Boundary causality, as stated in the Gao-Wald theorem \cite{0007021}, essentially forbids signals to take a short-cut through the AdS bulk, so in particular, points which are not causally related on the boundary cannot be causally related through the bulk\footnote{In the ``novel'' $D=4$ case, Ge and Sin \cite{2004.12191} argued on causality grounds that $\alpha < 0$. In the actual Universe, observations prefer $\alpha>0$ \cite{2006.15017}.} (see also \cite{1604.03944} and a recent lecture note by Witten \cite{1901.03928}). The Gauss-Bonnet parameter $\alpha$ has effects on the thermodynamics of black holes and thus on the dual field theory. In particular, the Kovtun-Son-Starinets (KSS) viscosity bound \cite{0104066,0309213} is modified \cite{0712.0805,0712.0743,0812.2521}.
Interestingly, such a viscosity bound is closely related to the aforementioned causality issue \cite{0808.1919, 0904.4805}. Furthermore, an eikonal instability develops if $\alpha$ is not small enough \cite{1701.01652,1705.07732}. Various applications of EGB black holes in holography have been studied; see, e.g., \cite{1103.3982,1305.4841, 1343004, 1311.3053, 1508.03364,1608.04208,1712.02772,Lu,2101.00882,0407083,1610.08987,2106.13942,2102.12171,1512.05666}. In fact, the Gauss-Bonnet term is closely related to the central charges of the conformal field theory \cite{0906.2922,0712.0743}. Pathological behavior in the evolution of entanglement entropy and mutual information in the case of $\alpha<0$ (the mutual information becomes discontinuous during thermalization) led Sircar et al. \cite{1602.07307} to conjecture that EGB theory with $\alpha<0$ may be inconsistent\footnote{\label{ft5}Another reason to disfavor $\alpha <0$ is that under Hawking evaporation the end state would be a naked (null) singularity with a finite mass, thus violating weak cosmic censorship. One can of course argue that at such a scale new higher order terms in the action would be important and the physical picture might change. Nevertheless, the $\alpha>0$ case has no such problem -- the black hole tends to a zero mass and zero temperature state under Hawking evaporation \cite{2103.00257}, at least in 5 dimensions.}. In view of these considerations, we raise the following question: \emph{is EGB gravity with $\alpha<0$ a consistent effective theory, with some possible embedding into string theory, or does it belong to the Swampland?} We would argue in favor of the latter. Note that in the asymptotically flat case, working in the effective theory regime, Cheung and Remmen have shown that $\alpha$ is nonnegative for any unitary tree-level ultraviolet completion of the Gauss-Bonnet term that is free of ghosts or tachyons \cite{1608.02942}.
Our work, which utilizes a completely distinct method, provides strong evidence for $\alpha \geqslant 0$ in the AdS case as well. \section{Holographic Consistency and Brane Nucleation}\label{II} Holography relates gravitational theories in the AdS bulk to non-gravitational field theories on the boundary (see \cite{1501.00007,1409.3575} for a review). Such a surprising relation between two radically different types of theory is highly nontrivial and requires consistency conditions to be valid. A particularly deep condition of this kind arises in the Euclidean formulation, relating the minimum of a probe brane action to a gravitational bulk action \cite{1310.6788,1311.4520,1411.1887,1602.07177}: \begin{equation} S_g^*=\frac{N}{\gamma}S_b^*, \end{equation} where $S_g^*$ is the on-shell gravitational action in the bulk, $N$ and $\gamma$ are the number of colors and the scaling exponent for the free energy of the boundary field theory, respectively, and $S_b^*$ is the probe brane action. Surprisingly, this consistency condition holds if a certain ``isoperimetric \emph{inequality}'' \cite{1411.1887,1610.07313} (the superscript ``E'' denotes a Euclidean quantity; likewise, superscript ``L'' denotes a Lorentzian one -- not to be confused with the AdS curvature length scale $L$) \begin{equation}\label{brane} \mathfrak{S}^\text{E}:=A(\Sigma)-\frac{D-1}{L}V(M_\Sigma) \geqslant 0, \end{equation} holds for any co-dimension 1 hypersurface $\Sigma$ in the bulk homologous to the boundary. Here $A(\Sigma)$ and $V(M_\Sigma)$ denote the area of $\Sigma$ and the volume enclosed by $\Sigma$, respectively. We see that $\mathfrak{S}^\text{E}$ measures the competition between the area and the volume. Up to a constant brane-tension coefficient, $\mathfrak{S}^\text{E}$ is the action of a probe BPS (Bogomol'nyi--Prasad--Sommerfield) brane that wraps around the black hole at a constant coordinate radius $r$.
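As a point of reference, inequality (\ref{brane}) can be checked numerically in pure GR ($\alpha=0$) for Euclidean planar AdS-Schwarzschild, where the brane action takes a simple closed form; the normalisation per unit boundary coordinate volume used below is a choice made here for illustration:

```python
import math

def frak_S(r, rh, L=1.0, D=5):
    """Brane action S = A - (D-1)/L * V for a constant-r brane in
    Euclidean planar AdS-Schwarzschild, per unit boundary coordinate
    volume, with
      f(r) = 1 - (rh/r)**(D-1)
      A(r) = (r/L)**(D-1) * sqrt(f(r))   # Euclidean time circle shrinks by sqrt(f)
      V(r) = (r**(D-1) - rh**(D-1)) / ((D-1) * L**(D-2))
    """
    f = 1.0 - (rh / r) ** (D - 1)
    A = (r / L) ** (D - 1) * math.sqrt(f)
    V = (r ** (D - 1) - rh ** (D - 1)) / ((D - 1) * L ** (D - 2))
    return A - (D - 1) / L * V

# Pure AdS (rh = 0): the area and volume terms cancel exactly, S = 0.
# Planar AdS-Schwarzschild: S > 0 outside the horizon, tending to
# (rh/L)**(D-1)/2 at large r, so branes do not nucleate in GR.
vals = [frak_S(r, rh=1.0) for r in (1.5, 3.0, 10.0, 100.0)]
```

The competition described above is thus exactly balanced in pure AdS and tips in favour of the area term once a planar horizon is present; the question addressed in this work is whether this survives the Gauss-Bonnet correction.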
If the Wick-rotated spacetime is an Einstein manifold, this inequality is related to the topology at infinity and the Yamabe invariant, as discussed in the theorem of Wang \cite{Wang1,Wang2}. In holographic settings, many manifolds are not Einstein, and so inequality (\ref{brane}) must be checked on a case by case basis. As mentioned in \cite{1504.07344}, generalizations of Wang's theorem do exist -- for example, the works of Witten-Yau \cite{WY} and Cai-Galloway \cite{CG} -- but they are in general not useful in deciding whether inequality (\ref{brane}) holds. The reason inequality (\ref{brane}) is referred to as an ``isoperimetric inequality'' can be appreciated as follows \cite{1610.07313}: in planar geometry, the area bounded by a closed curve satisfies $A\leqslant {\ell^2}/{4\pi}$, where $\ell$ is the length of the curve, with equality if and only if the closed curve is a circle. This is the usual isoperimetric inequality. We can call this a ``consistency condition'' for a geometry to be embedded in $\textbf{R}^2$, i.e., a closed curve cannot bound too large an area, otherwise it ceases to be planar. Likewise, inequality (\ref{brane}) is a geometric consistency condition for asymptotically hyperbolic manifolds, i.e., the Euclidean signature version of AdS spacetime employed in holography (we refer the readers to \cite{1610.07313} and \cite{1504.07344} for the details and subtleties). The Lorentzian version of inequality (\ref{brane}) has a clear physical picture: if $\mathfrak{S}^\text{L}$ becomes negative at large $r$, the spacetime will nucleate a copious amount of branes via a Schwinger-like process in the bulk (see, for example, \cite{1005.4439} and the Appendix of \cite{2005.12075}). This would back-react on the metric, which in turn means that the original spacetime cannot be static.
Any static black hole solution in classical gravity theories that suffers from this ``Seiberg-Witten instability'' \cite{9903224,0409242} is therefore \emph{not} a consistent object in the bulk\footnote{This kind of brane nucleation instability was also employed to investigate ``Euclidean wormholes'' (solutions with multiple boundaries) in the classic work of Maldacena-Maoz \cite{0401024}, and the recent follow-up by Marolf-Santos \cite{2101.08875}. In the context of \cite{0912.1061} such instability was dubbed ``Fermi Seasickness''. See also \cite{1910.06348,2104.00022}.}. Note that, as with Schwinger pair production, this effect is non-perturbative \cite{0409242}. We remark that it is possible for $\mathfrak{S}^\text{E}$ to be everywhere non-negative while $\mathfrak{S}^\text{L}<0$ at some locations. Holography should be regarded as fully consistent when both inequalities are valid (although $\mathfrak{S}^\text{L}<0$ is acceptable if the boundary system only exists for a sufficiently short period, so that any Lorentzian pathology would not have the time to influence the black hole geometry). See \cite{1504.07344} for detailed discussions. A classic example is that of toral Reissner-Nordstr\"om black holes \cite{0905.1180,0910.4456,1012.4056}. Although they are valid solutions of Einstein-Maxwell gravity for all values of the charge $Q$, they are not consistent solutions in the bulk when the electric charge becomes near-extremal (for $\text{AdS}_5$, this occurs at $0.958 \times Q_{\text{extremal}}$ \cite{1012.4056}). This makes sense from the dual field theory (quark-gluon plasma) perspective, since no plasma can be arbitrarily cold. Another example is provided in \cite{1008.0231}, in which the brane action is used to explain why ``squashed'' AdS$_5$-Schwarzschild black holes are not consistent in the aforementioned sense, and thus why they do not have a sensible field theory dual, a phenomenon which was observed in \cite{0901.2574}.
Interestingly, the same method predicts a bound on the magnetic field strength relative to the temperature squared of the surrounding plasma (independent of the AdS length scale $L$): $B \leqslant 2\pi^{3/2}T^2$; cosmic magnetic fields during the plasma epoch of the early Universe may well come close to reaching that bound \cite{1409.3663}. Note that one should not be too alarmed by the word ``inconsistency'' used in this context. Often what happens -- as illustrated in the examples above -- is that there is no sensible gravitational dual of a field theory precisely because the field theory description is also problematic (e.g. there is no such thing as an arbitrarily cold plasma). Thus the gauge/gravity duality holds and is consistent in the sense that both sides of the description fail together. What is meant by ``inconsistent'' in the sense of \cite{1411.1887,1504.07344} is that the relevant geometry does not satisfy the isoperimetric inequality. Thus one must be careful to distinguish inconsistency in this very precise sense from the broad consistency of the gauge/gravity correspondence as a matter of principle. In this work, holographic consistency always refers to the former. \section{Brane Nucleation of Einstein-Gauss-Bonnet Black Holes} We will show that asymptotically locally AdS static black holes in EGB theory are not consistent in the aforementioned sense, even in the neutral case (for which $\mathfrak{S}^\text{E}$ is equivalent to $\mathfrak{S}^\text{L}$).
The black hole solution of EGB gravity in $D$ dimensions is \cite{cai, 0504127, 2004.12191} \begin{equation} \d s^2 = -f(r) \d t^2 + f(r)^{-1} \d r^2 + r^2 \d \Omega_{k,D-2}^2, \end{equation} where $\d \Omega_{k,D-2}^2$ is a metric on a $(D-2)$-dimensional manifold $\Sigma_{k,D-2}$ with constant sectional curvature $k=\{-1,0,+1\}$, and \begin{equation}\label{metricfn} f(r)=k-\frac{r^2}{2\alpha (D-3)}\left[-1+\sqrt{1-4\alpha(D-3)g(r)}\right], \end{equation} with \begin{equation} g(r)=\frac{1}{L^2}-\frac{16\pi M}{(D-2)\Omega_{D-2}r^{D-1}}, \end{equation} where $M$ is proportional to the black hole mass. Here $\Omega_{D-2}$ denotes the unit area of $\Sigma_{k,D-2}$. For example, for the 3-sphere $\Sigma_{1,3}$ we have $\Omega_{3}=2\pi^2$, whereas for a cubic torus with periodicity $2\pi K$ on each copy of its $S^1$ we have $\Omega_{3}=8\pi^3K^3$. In the presence of a Maxwell field, $g(r)$ would contain an additional charge term $2Q^2/[(D-3)(D-2)r^{2(D-2)}]$; its inclusion does not change the qualitative results discussed below. The brane action $\mathfrak{S}^\text{E}$ is evaluated as follows: \begin{flalign} \mathfrak{S}^\text{E}&:=r^{D-2} \int \d \Omega_{k,D-2} \int \d \tau \sqrt{g_{\tau\tau}}\\ \notag &- \frac{D-1}{L} \int \d\tau \int \d \Omega_{k,D-2} \int^r_{r_h} \d r' r'^{D-2} \sqrt{g_{\tau\tau}g_{r'r'}}, \end{flalign} where $g_{\tau\tau}$ is the Wick-rotated metric coefficient $g_{tt}$. For the uncharged case $g_{\tau\tau}=f(r)$. Here $r_h$ denotes the (Euclidean) event horizon. (In fact, the area and volume terms correspond to the Dirac-Born-Infeld (DBI) term and the Chern-Simons term, respectively; this observation can potentially generalize the consistency conditions to more general backgrounds \cite{1411.1887}.) The normalized time coordinate $t/L$ is periodically identified with periodicity $2\pi P$, so \begin{equation} \mathfrak{S}^\text{E}= 2\pi P L \Omega_{D-2} \left[\underbrace{r^{D-2}\sqrt{g_{\tau\tau}}-\frac{r^{D-1}-r_h^{D-1}}{L}}_{=:\mathfrak{F}(r)}\right].
\end{equation} The sign of $\mathfrak{S}^\text{E}$ depends on the competition between the terms in $\mathfrak{F}(r)$. For sufficiently large $r$ (so that the constant term involving $r_h$ and the sectional curvature $k$ can be neglected), it is straightforward to show that for $\alpha < 0$ the bracketed expression grows like \begin{flalign} \mathfrak{F}(r)\sim&\frac{r^{D-1}}{\sqrt{2|\alpha| (D-3)}}\left[\left(\sqrt{1+\frac{4|\alpha|(D-3)}{L^2}}-1\right)^{\frac{1}{2}}\right.\\ \notag &\left.-\frac{\sqrt{2|\alpha|(D-3)}}{L}\right], \end{flalign} which is negative by elementary algebra (similarly, the corresponding quantity for the $\alpha>0$ case is positive). It is helpful to visualize the behavior of the function $\mathfrak{S}^\text{E}(r)$ as we vary $\alpha$, so let us consider the case $k=0$ in 5 dimensions and set $L=1$. Suppose the topology of the horizon is a cubic torus such that each copy of $S^1$ has periodicity $2\pi$. To ensure that the metric function Eq.(\ref{metricfn}) is real for all values of $M$, and in particular for the ground state without a black hole (i.e. $M=0$), we must have $\alpha < L^2/[4(D-3)]$. In our example, this means $\alpha < 1/8$. We show in Fig.(\ref{fig}) the behavior of $\mathfrak{S}^\text{E}$: it is negative for most values of $\alpha <0$, and positive for $\alpha \in (0, 1/8)$. (Actually, eikonal instability would rule out $|\alpha| > 1/8$, at least in the planar limit \cite{1705.07732}.) \begin{figure}[h!] \centering \includegraphics[scale=0.40]{GBaction0} \caption{$\mathfrak{S}^\text{E}/(2\pi PL\Omega_{0,3})$ for a cubic toral EGB neutral black hole in 5 dimensions, with $L=1$ and $M=0.5$. The horizontal plane is the zero plane.} \label{fig} \end{figure} It turns out that, for negative values of $\alpha$, the brane action is still positive sufficiently close to the horizon. In Fig.(\ref{k0alpha}) we plot the curves along which $\mathfrak{S}^\text{E}(r)$ vanishes for various values of the mass.
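The two claims above -- that $f(r)$ reduces to the Schwarzschild-AdS metric function $k+r^2 g(r)$ as $\alpha \rightarrow 0$, and that the large-$r$ coefficient of $\mathfrak{F}(r)$ is negative precisely for $\alpha < 0$ -- can be checked numerically. The sketch below (function names and sample values are ours) uses Eq.(\ref{metricfn}) with $k=0$, $D=5$, $L=1$ and the cubic torus with $K=1$, i.e. $\Omega_3 = 8\pi^3$:

```python
import math

def g(r, M, D=5, L=1.0, Omega=8 * math.pi ** 3):
    return 1.0 / L ** 2 - 16 * math.pi * M / ((D - 2) * Omega * r ** (D - 1))

def f(r, M, alpha, k=0, D=5, L=1.0):
    a = alpha * (D - 3)
    return k - r ** 2 / (2 * a) * (-1.0 + math.sqrt(1.0 - 4.0 * a * g(r, M, D, L)))

# 1) GR limit: as alpha -> 0, f must reduce to k + r^2 g(r) (Schwarzschild-AdS)
r, M = 2.0, 0.5
assert abs(f(r, M, 1e-8) - r ** 2 * g(r, M)) < 1e-6

# 2) Large-r sign of F(r) ~ r^{D-1} [sqrt(c) - 1/L], with
#    c = (-1 + sqrt(1 - 4 alpha (D-3)/L^2)) / (-2 alpha (D-3)) for either sign of alpha
def large_r_coefficient(alpha, D=5, L=1.0):
    a = alpha * (D - 3)
    c = (-1.0 + math.sqrt(1.0 - 4.0 * a / L ** 2)) / (-2.0 * a)
    return math.sqrt(c) - 1.0 / L

assert all(large_r_coefficient(al) < 0 for al in (-0.01, -0.1, -1.0))  # alpha < 0: negative
assert all(large_r_coefficient(al) > 0 for al in (0.01, 0.1))          # alpha in (0, 1/8): positive
```

The sign follows from $\sqrt{1+u}-1 < u/2$ for $u>0$ (and the reverse for the $\alpha>0$ branch), which the numerical coefficients confirm.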
The region above a given curve corresponds to positive brane action, and likewise the region below it corresponds to negative brane action. We note that as $M$ increases, the curve is shifted towards the right, but so is the event horizon. If we denote the position where the action starts to become negative by $r_\text{neg}$, then $(r_\text{neg}-r_h)/r_h$ is constant (independent of the mass) for fixed $\alpha$. For $\alpha=-0.1$, this ratio is about $0.6617$. In fact, these curves exhibit a scaling symmetry under $M \mapsto M/r_h^4$ and $r \mapsto r/r_h$, which yields Fig.(\ref{renormalized}). \begin{figure}[h!] \centering \includegraphics[scale=0.40]{k0alpha-eps-converted-to} \caption{$\mathfrak{S}^\text{E}=0$ curves for a cubic toral EGB neutral black hole in 5 dimensions for $M=1,0.5,0.1,0.01,0.00001$ (right to left), where $L=1$.} \label{k0alpha} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.40]{renormalized-eps-converted-to} \caption{Brane action $\mathfrak{S}^\text{E}_\text{normalized}=0$ curve under the scaling symmetry $M \mapsto M/r_h^4$ and $r \mapsto r/r_h$. The horizon is now fixed at $r_h=3\pi^2/2$.} \label{renormalized} \end{figure} The fact that the Euclidean brane action is negative already violates the consistency condition discussed in Sec.(\ref{II}) \cite{1504.07344}. But even if we only consider the Lorentzian picture, in which branes are nucleated in the region where the action is negative, we still have a problem. Causality dictates that the branes will back-react on the black hole geometry only after some time has passed. This is roughly the time it takes for a brane to free-fall towards the black hole (see, e.g., \cite{1201.6443}).
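The mass-independence of $(r_\text{neg}-r_h)/r_h$ follows from the exact $k=0$ scaling symmetry, and the quoted value $0.6617$ for $\alpha=-0.1$ can be reproduced by a simple bisection (a sketch under the same conventions as above: $D=5$, $L=1$, $\Omega_3=8\pi^3$; for $k=0$ the horizon sits at $g(r_h)=0$):

```python
import math

D, L, alpha, Omega = 5, 1.0, -0.1, 8 * math.pi ** 3   # k = 0, cubic torus with K = 1

def horizon(M):
    # for k = 0:  f(r_h) = 0  <=>  g(r_h) = 0  <=>  r_h^{D-1} = 16 pi M L^2 / ((D-2) Omega)
    return (16 * math.pi * M * L ** 2 / ((D - 2) * Omega)) ** (1.0 / (D - 1))

def frak_F(r, M):
    a = alpha * (D - 3)
    g = 1.0 / L ** 2 - 16 * math.pi * M / ((D - 2) * Omega * r ** (D - 1))
    f = -r ** 2 / (2 * a) * (-1.0 + math.sqrt(1.0 - 4.0 * a * g))
    return r ** (D - 2) * math.sqrt(f) - (r ** (D - 1) - horizon(M) ** (D - 1)) / L

def r_neg(M):
    # bisect for the zero crossing of F(r) beyond the horizon
    rh = horizon(M)
    lo, hi = 1.01 * rh, 10.0 * rh        # F > 0 near the horizon, F < 0 far away
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if frak_F(mid, M) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

ratios = [(r_neg(M) - horizon(M)) / horizon(M) for M in (0.5, 0.1)]
assert abs(ratios[0] - ratios[1]) < 1e-6   # independent of the mass, as claimed
assert abs(ratios[0] - 0.6617) < 1e-3      # matches the quoted value for alpha = -0.1
```

In the rescaled variable $x=r/r_h$, the zero crossing reduces to solving $s + 0.2\,s^3 = 1$ with $s = 1 - x^{-4}$, whose root gives $x \approx 1.6617$ for any mass.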
The reason is that, as branes are copiously produced in the bulk, if they are far away they do not immediately affect the black hole, whereas if they fall towards the black hole, then the event horizon, being teleological in nature, would start to expand outward in anticipation of the infalling branes -- and thus the spacetime is no longer static. Holography would still be consistent if the field theory dual had a typical time scale shorter than the instability time scale. This does not help us here, since the branes are nucleated close to the horizon, i.e., the instability time scale is short. The situation is actually worse: in the case \emph{without} a black hole ($M=0$), the brane action is negative for \emph{all} $\alpha < 0$. Thus \emph{the ground state is itself unstable}. Much of the above discussion also holds for the $k=1$ case, although the brane action remains positive near the origin $r=0$ when $M=0$. However, the ground state has no distinguished center, so $r=0$ is arbitrary. This implies that branes can be nucleated everywhere. The $k=-1$ case already suffers from brane production instability when the bulk gravity is GR (corresponding to the well-known tachyonic instability of the boundary scalar field\footnote{``In this way, string theory reproduces the instability that is evident in the field theory.'' -- Seiberg and Witten \cite{9903224}} \cite{9903224}), so we do not analyze it here. In any case, $k$ cannot affect the leading term of the EGB brane action at sufficiently large $r$, so the Euclidean brane action is still negative. \section{Discussion: Gauss-Bonnet Coupling Constant Is Positive} Boulware and Deser already commented in 1985 \cite{BD} that a negative Gauss-Bonnet parameter $\alpha$ is pathological, as it leads to more singular behavior of the spacetime: a new curvature singularity arises at some nonzero value of $r$.
However, since this singularity is behind the horizon (see also \cite{9706066}), one may still argue that this does not immediately rule out the possibility of $\alpha < 0$ EGB black holes (except perhaps for the reason raised in Footnote \ref{ft5}). In the bottom-up approach to holography, the $\alpha < 0$ case has also been considered in the literature. Often one hopes that in the low energy limit of string theory, stringy phenomena such as the effects of branes can be neglected. However, this is not always the case. This is reminiscent of the fact that some properties of the full theory can survive down to the infrared and imply constraints on the interactions that can be consistently added to the low energy EFT Lagrangian; see \cite{2106.08344} for an application in scalar-EGB theory. Holographic consistency conditions impose constraints that render otherwise valid solutions (such as the near-extremal toral Reissner-Nordstr\"om black hole) in the anti-de Sitter bulk inconsistent \emph{even as effective low energy solutions}. This rules out the $\alpha < 0$ case, which was conjectured by Sircar et al. \cite{1602.07307} to be inconsistent on phenomenological grounds in holography. Since $\alpha > 0$ is a consequence of the inverse string tension being positive in the top-down approach, it is also a remarkable consistency check that holography requires \emph{precisely} this, even if in the bottom-up approach one only takes EGB gravity as a modified theory of gravity and knows nothing about the relationship between $\alpha$ and the string tension in certain heterotic string theory embeddings. As discussed in \cite{0911.3160}, there are many different possibilities for realizing curvature squared corrections to the Einstein-Hilbert action from string theory and M-theory \cite{9903210, 0904.4466}, so it is not obvious whether $\alpha < 0$ is consistent with \emph{some} version of string theory.
However, if the holographic consistency condition in the form of the ``isoperimetric inequality'' (\ref{brane}) -- a ``law of physics'' which has been checked in a myriad of highly nontrivial contexts and shown to hold in \cite{1504.07344} -- is to be satisfied, then $\alpha < 0$ is indeed ruled out. Strictly speaking, our argument only shows that the usual static black hole solutions (along with their ground states) in EGB theory with $\alpha < 0$ are inconsistent, not the theory itself. However, taken together with the asymptotically flat case analysis of \cite{1608.02942}, the fact that the ground state itself is inconsistent strongly suggests that EGB gravity with $\alpha<0$ does indeed lie in the Swampland. Nevertheless, we also need to be careful not to overstate the results. Just as the causality argument \cite{1407.5597} can only conclude that $\alpha$ has to be small in an effective EGB theory, but not when the full string theory (with its infinite tower of higher spin particles) is considered, our analysis only concerns the validity of EGB gravity as an effective theory in the bulk. The results could change if one likewise considers other effects and other spin fields in the bulk, which in turn modify the black hole solutions and the ground states. Indeed, one might object that, even as a low energy solution of string theory, the Regge slope $\alpha'$-correction (not to be confused with the GB parameter $\alpha$) needs to be taken into account, especially since we are dealing with a higher curvature gravity theory \cite{9903210}. However, the point is that an $\alpha'$-corrected solution is no longer the original plain vanilla EGB solution discussed in this work. (Interestingly, the probe brane action used to calculate the holographic consistency condition for the $\alpha'$-corrected Schwarzschild-AdS black hole in GR is unaffected by said correction \cite{1602.07177}, so it remains to be seen if and how the EGB case is different.)
Such possible loopholes would in principle allow a negative $\alpha$ in a bottom-up construction. Our work merely points out that the simplest EGB theory with $\alpha < 0$ is very likely to be problematic. \begin{acknowledgments} The author thanks the National Natural Science Foundation of China (Grant No.11922508) for funding support. He also thanks Brett McInnes for useful discussions, and colleagues (notably Ruihong Yue and Xiao-Mei Kuang) at the Center for Gravitation and Cosmology, Yangzhou University, for related discussions that partially inspired this work. \end{acknowledgments} \newpage
\section{Introduction} It is a well-established fact that galaxy properties such as morphology and colour depend on environment -- red, early-type galaxies are usually found in denser regions of the universe, whereas the field is dominated by blue, late-type galaxies. The dependence of the galaxies' appearance on the local density is a clue to their formation and evolution, so it is important to investigate it in some detail. A powerful means to do that is the type-dependent luminosity function (LF in the following), calculated in different overdensity regimes. \begin{figure}[b] \centerline{\psfig{figure=deltaCOMBO_J2003c_natlog.ps,angle=270,clip=t,width=5.cm}\psfig{figure=deltaCOMBOredblue_J2003c_natlog.ps,angle=270,clip=t,width=5.cm}} \caption[ ]{{\bf Left:} Overdensities in three COMBO-17 fields. {\bf Right:} Overdensities for red and blue galaxies in the CDFS. The vertical lines indicate the range in which the LF is calculated. \label{densities}} \end{figure} With spectroscopic data it is possible to measure the local galaxy density contrast $\delta_g=(\rho/\bar{\rho_0})-1$ in small spheres of e.g. $8 h^{-1}$\,Mpc. However, for an analysis at higher redshifts, deep multicolor surveys have to be used, and those have redshift uncertainties too large to permit a determination of $\delta_g$ in small spheres. Instead, $\delta_g$ can be calculated in small redshift bins (with binsize $\Delta z=0.02$ and in steps of $\delta_z=0.005$). We tested the influence of redshift errors on the determination of $\delta_g$ using a COMBO-17 mock survey \citep{vanKampen05}, and found that the structures are smoothed according to the size of the errors. Red galaxies, which are statistically brighter, have smaller errors than the (statistically) fainter blue galaxies, so in order to directly compare the two samples, we have to calculate a blurring function which renders their redshifts as inaccurate as those of the blue galaxies.
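For Gaussian redshift errors -- an assumption of this sketch, not necessarily of the actual analysis -- the convolution theorem gives a particularly simple blurring kernel: a Gaussian of variance $\sigma_\text{blue}^2-\sigma_\text{red}^2$, applied in a Monte-Carlo fashion (all names and error widths below are illustrative):

```python
import math
import random
import statistics

def blur_sigma(sigma_red, sigma_blue):
    """Width of the Gaussian blurring kernel: convolving N(0, s_red) with
    N(0, s_blur) must reproduce N(0, s_blue), so s_blur^2 = s_blue^2 - s_red^2.
    (For non-Gaussian error distributions the kernel would instead be obtained
    by dividing the Fourier transforms, per the convolution theorem.)"""
    return math.sqrt(sigma_blue ** 2 - sigma_red ** 2)

def blur_redshifts(z_red, sigma_red, sigma_blue, rng):
    # Monte-Carlo application: perturb each red-galaxy redshift by the kernel
    s = blur_sigma(sigma_red, sigma_blue)
    return [z + rng.gauss(0.0, s) for z in z_red]

rng = random.Random(42)
sigma_red, sigma_blue = 0.01, 0.02          # illustrative error widths, not survey values
true_z = [0.3] * 100000
observed = [z + rng.gauss(0.0, sigma_red) for z in true_z]
blurred = blur_redshifts(observed, sigma_red, sigma_blue, rng)

# after blurring, the red redshifts carry the same total scatter as the blue ones
assert abs(statistics.pstdev(blurred) - sigma_blue) < 5e-4
```

The key point is that variances of independent errors add, so the blurred red sample mimics the blue sample's redshift accuracy.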
The blurring function can be found via the convolution theorem and then applied to the data in a Monte-Carlo fashion. We used COMBO-17 data (see \citealp{Wolf03}) and calculated the overdensities for all galaxies with $R\leq 23.65$ and rest-frame $B$-band magnitudes $M_B\leq-18.0$, see left panel of Fig. \ref{densities}. In the redshift range $0.25 \leq z \leq 0.4$ the CDFS (Chandra Deep Field South) is underdense with respect to the other two fields, where the density fluctuates about the mean. We split the sample into red sequence and blue cloud galaxies (see \citealp{Bell04}) and repeated the calculation after blurring the red galaxies' redshifts. In the calculation of the mean density $\bar\rho$ the increasing number density of blue galaxies with redshift has to be taken into account, which we estimate from a fit to the measured mean in the three fields. The right panel of Fig. \ref{densities} shows the resulting underdensities in the CDFS. The underdensity at $0.25 \leq z \leq 0.4$ is clearly deeper in the red sample. We calculated the type-dependent LF in this redshift range, for each field separately, see Fig. \ref{lumifunc}. \begin{figure}[h] \centerline{\psfig{figure=GLF_s06bj_1z.ps,angle=270,clip=t,width=10.cm}} \caption[ ]{The luminosity function of the red sequence (upper panel) and blue cloud (lower panel) galaxies in the redshift range $0.25 \leq z \leq 0.4$, for the three COMBO-17 fields. The STY fit is overplotted in each panel.\label{lumifunc}} \end{figure} While the LF of the blue galaxies remains unaffected by environment, the LF of the red galaxies does depend on the local density: in the CDFS the faint-end slope is clearly more positive than in the other fields. Hence, the `hole in the sky' is mainly due to a deficiency of faint red galaxies.
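The binned density contrast described above can be sketched as follows (a simplified illustration with hypothetical inputs; the real calculation also weights by the redshift-dependent mean density):

```python
def overdensities(redshifts, zmin=0.2, zmax=0.5, width=0.02, step=0.005):
    """delta_g = n/n_bar - 1 in overlapping redshift bins of size Delta z = 0.02,
    stepped by 0.005, with n_bar the mean count per bin over the whole range."""
    centers, counts = [], []
    z = zmin
    while z + width <= zmax + 1e-12:
        counts.append(sum(1 for zi in redshifts if z <= zi < z + width))
        centers.append(z + width / 2)
        z += step
    n_bar = sum(counts) / len(counts)
    return centers, [c / n_bar - 1.0 for c in counts]

# a uniform redshift distribution should give delta_g ~ 0 in every bin
uniform = [0.2 + 0.3 * i / 3000 for i in range(3000)]
_, deltas = overdensities(uniform)
assert all(abs(d) < 0.05 for d in deltas)
```

A real underdense region like the CDFS at $0.25 \leq z \leq 0.4$ would show up as a run of consecutive bins with $\delta_g < 0$.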
This result is in general agreement with the result of \citet{Croton05}, who found the LF of blue 2dF galaxies to be essentially constant with varying overdensity, and the faint-end slope of the LF of the red galaxies to be steeper in higher density regions.